Wednesday, June 18, 2014

FIX: ORA-00911: invalid character in c# but seemingly nowhere else

I am not normally a developer who targets Oracle, but something recently required me to go down this rabbit hole. I'm the first to admit that I hate Oracle, primarily because I lack experience with their dialect of SQL and ... well, Java (Ask toolbars and security issues, grrr). But I digress.

The world's most useless error message

I write this at least partially as a joke: the specific solution I'm going to propose fixed my specific problem, but because of the ambiguity of this error message, your problem could be something entirely different. Basically, what the Oracle query parser is trying to tell us is that, out of the hundreds of characters in your command text, at least one (maybe more) is not a character it should be. It is, instead, an invalid one. Most likely, you've smashed two keys at the same time and put an errant underscore after a parenthesis. Maybe it's an obvious problem. Or maybe, like me, your unfamiliarity with SQL*Plus combined with this obtuse error leads you down the wrong path a few times.

Bad semicolon

There's something about the Oracle provider in C# that causes it to choke on your statement if the last character in the command text is a semicolon. Note that I've only tested this in the unmanaged ODP.NET (Oracle.DataAccess.dll, not Oracle.ManagedDataAccess.dll) 12.x client. In my case, I assumed the problem was that strange-looking colon in front of the bind variable names. Nope. Colon: good. Semicolon: not so much.
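If eyeballing the statement doesn't turn anything up, one quick way to hunt for the offending byte is to dump the command text to a file and scan it for anything outside printable ASCII, plus the trailing semicolon that bit me. This is just a sketch (the helper name is mine; it assumes GNU grep for -P support):

```shell
# Hypothetical helper: flag lines containing anything outside printable
# ASCII (smart quotes, non-breaking spaces, etc.) or ending in a
# semicolon -- the culprit in my case. Requires GNU grep (-P).
find_bad_chars() {
  grep -nP '[^\x20-\x7e\t]|;[[:space:]]*$' "$1"
}
```

Run it as `find_bad_chars query.sql`; any line it prints is a candidate for ORA-00911.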

Wednesday, December 25, 2013

Part VI - The Ultimate Linux Home Router - SquidGuard - Internet Filtering

Introduction

It's worth warning that perfect internet filtering is impossible. It's for this reason that parents who have older children often don't bother with it since it tends to filter out benign things. That said, I have young children with tablets that Santa bought them for Christmas. I've locked the tablets down rather thoroughly on their own--prohibiting the use of the browser without a password and using a simple tool to restrict YouTube access. I know, however, that there will be times that they'll need the browser and when they do, they'll be using a browser with most controversial subjects filtered out. Don't get me wrong, I'm not interested in Nerf(tm)ing up the internet and preventing them from seeing some of the realities of life. But when they need unrestricted access, I will be there supervising directly. When I can't, most of the internet will be blocked.

Run an update

Since it's been a few days since this was posted, it's worth making sure we're running the latest versions of everything. Run the following and follow any onscreen instructions.
$ sudo zypper up

Installing SquidGuard

$ sudo zypper in squidguard

Restricting Tablets - DHCP Reserved Addresses

One option (and certainly not the most bulletproof) is to set the tablets up with static IP addresses. This is really simple to get around if your children have any basic understanding of IP networking (or know how to wield Google in their favor). In a later post, I'll be adding authentication to the proxy server, but until then, I wanted to at least force DHCP to assign the same address. The settings panel on my tablets is password protected, so one of the kids changing the settings is not a huge concern of mine. In addition, assigning the IP via DHCP ensures that I won't need to make other configuration changes when the tablets move to different networks.
Reserved addresses are assigned when the DHCP server sees a client with a specific MAC address. Since it's easier to retrieve the current IP address on my tablets, I've written down the two addresses: 192.168.0.121 and 192.168.0.127, and we'll look up the host details on the server.
The following command lists the current leases and their MAC addresses:
$ egrep "(^lease)|(hardware)" /var/lib/dhcp/db/dhcpd.leases
Find the lease that matches the address assigned to each tablet and copy its hardware address.
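If you'd rather not scan the lease file by eye, the lookup is easy to script. A sketch (the function name is mine; the lease file path shown is the openSUSE default):

```shell
# Hypothetical helper: print the MAC address recorded for a given lease IP.
# Usage: mac_for_ip LEASEFILE IP
mac_for_ip() {
  awk -v ip="$2" '
    $1 == "lease" && $2 == ip { found = 1 }            # start of the matching lease block
    found && $1 == "hardware" {                        # first hardware line inside it
      sub(/;$/, "", $3); print $3; exit
    }
  ' "$1"
}
# e.g. mac_for_ip /var/lib/dhcp/db/dhcpd.leases 192.168.0.121
```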
$ sudo nano /etc/dhcpd.conf
Find the subnet that corresponds to your network. It'll look something like this:
subnet 192.168.0.0 netmask 255.255.255.0 {
  range 192.168.0.100 192.168.0.200;
  default-lease-time 14400;
  max-lease-time 172800;
}
Modify it:
subnet 192.168.0.0 netmask 255.255.255.0 {
  range 192.168.0.100 192.168.0.200;
  default-lease-time 14400;
  max-lease-time 172800;
  host girlstablet {
    fixed-address 192.168.0.30;
    hardware ethernet xx:xx:xx:xx:xx:xx;
  }
}
I've added a host 'girlstablet' with the MAC address "xx:xx:xx:xx:xx:xx" and assigned it the address 192.168.0.30. I'll be putting all of the restricted devices on the 192.168.0 subnet in the range 192.168.0.30-39. Do the same with any other devices, assigning each an IP address outside of the normal lease range.
Save and Exit, then run:
$ sudo rcdhcpd restart
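If you're reserving more than a couple of devices, it can help to generate the host blocks rather than hand-typing them into dhcpd.conf. A throwaway sketch (the function name and example values are mine):

```shell
# Hypothetical helper: emit a dhcpd host reservation block.
# Usage: host_block NAME IP MAC
host_block() {
  printf 'host %s {\n  fixed-address %s;\n  hardware ethernet %s;\n}\n' "$1" "$2" "$3"
}
# host_block girlstablet 192.168.0.30 xx:xx:xx:xx:xx:xx
```

Pipe the output into your editor or append it inside the subnet block by hand; either way, every reservation comes out with the same shape.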

Installing Blacklists

I'm going to use Shalla's Blacklists for this example, but there are several others available. Shalla's is for private, non-commercial use only; if you're using it for a business, consult their web site to ensure you are compliant. Create a script to download the filter list:
$ sudo nano /usr/sbin/update_squidguard_blacklist.sh
Paste in the following and Save/Exit.
#!/bin/sh

# Refresh squidGuard's blacklist database from Shalla's list.
cd /var/lib/squidGuard/db || exit 1   # never run the rm below anywhere else
rm -r ./*
wget 'http://www.shallalist.de/Downloads/shallalist.tar.gz' || exit 1
tar -xvf shallalist.tar.gz
# The archive unpacks into BL/; flatten it into the db directory.
cd BL
mv ./* ..
cd ..
rmdir BL
# Rebuild the .db files and fix ownership so squid can read them.
/usr/sbin/squidGuard -C all
chown -R squid ./*
chown -R squid /var/log/squidGuard
rm shallalist.tar.gz
squid -k reconfigure
Run the following:
$ sudo chmod 750 /usr/sbin/update_squidguard_blacklist.sh
$ sudo ln -s /usr/sbin/update_squidguard_blacklist.sh /etc/cron.daily/update_squidguard_blacklist.sh
$ sudo nano -w /etc/squidguard.conf
Remove all lines from the file below "logdir /var/log/squidGuard" and paste in the following:
dest adv {
     domainlist adv/domains
     urllist    adv/urls
}

dest aggressive {
        domainlist      aggressive/domains
        urllist         aggressive/urls
        log             aggressiveaccess
}

dest alcohol {
        domainlist      alcohol/domains
        urllist         alcohol/urls
        log             alcoholaccess
}

dest anonvpn {
        domainlist      anonvpn/domains
        urllist         anonvpn/urls
        log             anonvpnaccess
}

dest costtraps {
        domainlist      costtraps/domains
        urllist         costtraps/urls
        log             costtrapsaccess
}

dest dating {
        domainlist      dating/domains
        urllist         dating/urls
        log             datingaccess
}

dest drugs {
        domainlist      drugs/domains
        urllist         drugs/urls
        log             drugsaccess
}

dest gamble {
        domainlist      gamble/domains
        urllist         gamble/urls
        log             gambleaccess
}

dest hacking {
        domainlist      hacking/domains
        urllist         hacking/urls
        log             hackingaccess
}

dest porn {
        domainlist      porn/domains
        urllist         porn/urls
        log             pornaccess
}

dest redirector {
        domainlist      redirector/domains
        urllist         redirector/urls
        log             redirectoraccess
}

dest sexeducation {
        domainlist      sex/education/domains
        urllist         sex/education/urls
        log             sexeducationaccess
}

dest sexlingerie {
        domainlist      sex/lingerie/domains
        urllist         sex/lingerie/urls
        log             sexlingerieaccess
}

dest spyware {
        domainlist      spyware/domains
        urllist         spyware/urls
        log             spywareaccess
}

dest violence {
        domainlist      violence/domains
        urllist         violence/urls
        log             violenceaccess
}

dest webmail {
        domainlist      webmail/domains
        urllist         webmail/urls
        log             webmailaccess
}

dest webtv {
        domainlist      webtv/domains
        urllist         webtv/urls
        log             webtvaccess
}

dest warez {
        domainlist      warez/domains
        urllist         warez/urls
        log             warezaccess
}
dest weapons {
        domainlist      weapons/domains
        urllist         weapons/urls
        log             weaponsaccess
}

acl {
        admins {
                pass all
        }
        restricted {
                pass !weapons !warez !webtv !webmail !sexlingerie !sexeducation !redirector !porn !hacking !violence !aggressive !alcohol !anonvpn !costtraps !dating !drugs !gamble all
        }
        default {
                pass !spyware all
        }
}
Run the following commands:
$ echo 'redirect_program /usr/sbin/squidGuard' | sudo tee -a /etc/squid/squid.conf
$ sudo /usr/sbin/update_squidguard_blacklist.sh
(Note: a plain "sudo echo ... >> file" wouldn't work for the first command; the redirection is performed by your unprivileged shell before sudo runs, which is why we pipe through a root-run tee.)

Saturday, December 21, 2013

Part V - The Ultimate Linux Home Router - Services: SSH, Squid, Privoxy - Advertisement Blocking Network Wide

Secure Shell Server (SSH)

A Secure Shell server is a great utility to have accessible externally. At home, I use port forwarding through my SSH server for a variety of purposes. On public Wi-Fi networks, I fire up the SSH client on my phone and set up my browser to use my home proxy so that I can surf encrypted on a public network. When I'm visiting my children's school, I use the Secure Shell server to bypass the exceptionally restrictive firewalls installed at US public schools (and not for evil things ... simple things like Facebook don't work!). By opening one port to SSH, I am able to access my entire home network and all of its services. I can remote into my desktop or laptop via RDP. I can issue commands to my Roku. I can copy files (using WinSCP) between the public or family PC I'm working on and any PC on my home network.

Securing Secure Shell

Running a Secure Shell server doesn't come without risks. It's a high value target for hackers. Brute force attacks are extremely common, and are even done in ways that reduce the probability of detection (such as very slowly trying multiple common passwords). I allow password authentication for Secure Shell and am not providing steps on eliminating that (because I simply don't care enough about it). I strongly recommend using a very complex passphrase (multiple words, punctuation, with numbers mixed in) to ensure a brute force attack is unsuccessful.

Reducing attack surface of SSH

We're going to do the following: Pick a non-standard port for SSH. You have two options, each with pros and cons:
  • Option 1 - Pick a common port, such as 80 or 443. This won't reduce your attack surface much, since these ports are also commonly attacked, but since SSH will be running there instead of the service a hacking tool expects, it may render the tool ineffective (probably not). The pro of a common port like 80 or 443 is that it's unlikely to be blocked egress on a firewall. Pick a common port if the most important thing to you is being able to reach your SSH server from anywhere.
  • Option 2 - Pick an uncommon, random port like 3154 (well, don't use that now that I put it in this post ... actually, it probably won't matter, the traffic to this site is pretty miniscule).
I chose option 2 because, lately, I've rarely run into a scenario where I couldn't reach a random port egress, even on public school Wi-Fi.
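If you go with option 2 and want the machine to pick the port for you, shuf from coreutils does nicely (the range below is my arbitrary choice; check the result against /etc/services before committing to it):

```shell
# Pick a random high port for SSH, staying clear of the well-known range.
random_port() {
  shuf -i 10240-65000 -n 1
}
```

Run `random_port` once, write the number down, and use it in the sshd_config edit below.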
We're also going to eliminate root login. Allowing root login is a very bad idea; disabling it ensures that an attacker needs to guess not only the password but also the user ID in order to log into your server.
Run the following:
$ sudo nano /etc/ssh/sshd_config
Replace:
#Port 22
#PermitRootLogin yes
#MaxAuthTries 6
With:
Port xxxx
PermitRootLogin no
MaxAuthTries 1
Where xxxx is the number of the port you selected above. Save and Exit.
Run the following:
$ sudo nano -w /etc/sysconfig/SuSEfirewall2.d/services/sshd
Change TCP="22" to TCP="XXXX" where XXXX is the port you selected above, then run:
$ sudo rcsshd restart
$ sudo rcSuSEfirewall2 restart

Reducing attack surface with DenyHosts

DenyHosts is a script that trawls the logs looking for authentication failures. When it finds a certain threshold of them from a particular host, it adds that host to the hosts.deny file, which prevents the server from accepting connections from that host in the future. We're going to set the threshold pretty low, so you may end up locking yourself out by accident. If that happens, simply log in from a different host and remove the offending host from hosts.deny.
$ sudo zypper ar --refresh http://download.opensuse.org/repositories/network:/utilities/openSUSE_13.1/ 'openSUSE 13.1 Network Utilities'
$ sudo zypper in denyhosts
(be sure to answer "a" when prompted)
$ sudo chkconfig --level 3 denyhosts on
Worthy Note: Because DenyHosts is running on your router, having too many login failures on a machine within your network will not only block you from logging in with SSH, it'll block you from connecting to anything (including the internet). So login with caution!
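If you do lock yourself out, removing the offending entry looks roughly like this. A sketch only (the function name is mine; note that DenyHosts also records offenders in its own work files under /var/lib/denyhosts, so a purged address can reappear unless you clean those up too):

```shell
# Hypothetical helper: drop every hosts.deny line containing an address.
# Usage: unban_host ADDRESS [FILE]   (FILE defaults to /etc/hosts.deny)
unban_host() {
  file="${2:-/etc/hosts.deny}"
  # grep -F treats the address as a literal string (no regex-dot surprises)
  grep -F -v "$1" "$file" > "$file.tmp"
  mv "$file.tmp" "$file"
}
```

You'll need root (sudo) when operating on the real /etc/hosts.deny, and remember to do this from a host that isn't banned.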

Proxy Servers

I'm still debating precisely which proxy to install. I've used Squid for years, but there are alternatives out there and, in fact, we're going to be using one of them -- Privoxy. There's no reason they can't all coexist on the same machine; however, it may not be necessary to run more than one. We'll chain Squid and Privoxy together and later, when we want to child-proof certain things on our network, we'll add squidGuard to the mix.
Run the following:
$ su
(enter your password)
# zypper in squid privoxy privoxy-doc
# systemctl enable squid
# systemctl enable privoxy
# nano /etc/squid/squid.conf
Replace
http_port 3128
With
http_port 3128 transparent
# nano /etc/sysconfig/SuSEfirewall2
Hit CTRL+W and type "FW_REDIRECT" to get to the line.
Replace
FW_REDIRECT=""
With
FW_REDIRECT="192.168.0.0/24,0/0,tcp,80,3128"
# rcsquid start
# rcSuSEfirewall2 restart
Browse to something on the internet on Port 80 from within your local network, then "cat /var/log/squid/access.log" to see that the proxy is working properly. Once you're sure that's up and solid, we'll chain Privoxy.

Configuring Privoxy for Advertisement blocking for your whole network

I won't get into the ethical arguments around blocking ads. I would love to trust publishers to give me advertisements that aren't filled with malware, but at present, I can't. It pains me even more that the two or three affiliate links I may use on this site won't work after you follow these instructions, but I won't be a hypocrite and complain about that. Here's a longer discussion on the subject: This is why I adblock and you should too. If you feel evil doing this, you can simply skip this section.

This is one of a few methods we're going to use to eliminate ads on our network. It will also reduce the ability of web sites to track your behavior. Occasionally, Privoxy gets it wrong and blocks something useful, but I've been running it for months now and it works brilliantly.

As of this writing, some things are broken about the privoxy package provided in the 13.1/Tumbleweed repositories. We're going to tweak the service definition and reload the service to correct this. First we'll modify the configuration:
# nano /etc/privoxy/config
Locate the following lines (hint: CTRL+W)
logdir /log
confdir /etc
Replace them with:
logdir /var/lib/privoxy/log
confdir /etc/privoxy
Locate the following lines:
#debug 4096
#debug 8192
Replace it with:
debug 4096
debug 8192
# nano /usr/lib/systemd/system/privoxy.service
Remove all lines in that file and replace them with:
[Unit]
Description=Privoxy Web Proxy With Advanced Filtering Capabilities
After=network.target

[Service]
Type=forking
PIDFile=/var/run/privoxy.pid
WorkingDirectory=/var/lib/privoxy
ExecStart=/usr/sbin/privoxy --user privoxy --pidfile /var/run/privoxy.pid /etc/privoxy/config
ExecReload=/bin/kill -USR1 $MAINPID

[Install]
WantedBy=multi-user.target
Run the following commands
# systemctl daemon-reload
# systemctl start privoxy

Chaining Squid and Privoxy

Run the following commands:
# echo 'cache_peer 127.0.0.1 parent 8118 7 no-query' >> /etc/squid/squid.conf
(configures privoxy as a parent to squid)
# echo 'acl ftp proto FTP' >> /etc/squid/squid.conf
# echo 'always_direct allow ftp' >> /etc/squid/squid.conf
(skips privoxy for FTP requests)
# echo 'never_direct allow all' >> /etc/squid/squid.conf
(uses privoxy for all other requests)
# squid -k reconfigure
To test, simply fire up a browser that doesn't have Adblock installed and visit a (non-SSL) page that has lots of ads. You'll notice some of the ads are gone from your phone as well.

Blocking Ads via BIND DNS

As I mentioned earlier, Privoxy is not foolproof; it produces false positives from time to time. While putting this together, I discovered one of those cases. We'll need to get some files from a site for our BIND DNS blocking, but in order to do that, we first have to configure Privoxy to whitelist the site.
# echo "{ -block }" >> /etc/privoxy/default.action
# echo "pgl.yoyo.org" >> /etc/privoxy/default.action
In a browser somewhere on your network, visit pgl.yoyo.org and make sure it's not blocked. Then run the following commands:
# nano /usr/sbin/addownloader.sh
Paste the following:
#!/bin/sh
wget 'http://pgl.yoyo.org/as/serverlist.php?hostformat=bindconfig;showintro=0' -O /etc/named.ads.downloaded.conf
egrep '^zone' /etc/named.ads.downloaded.conf > /etc/named.ads.conf
rcnamed restart
Save and exit, then run:
# chmod 750 /usr/sbin/addownloader.sh
# ln -s /usr/sbin/addownloader.sh /etc/cron.daily/addownloader.sh
# addownloader.sh
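For reference, the downloaded file is a list of BIND zone statements, one per ad domain; each one declares your server authoritative for that domain, so lookups resolve to nothing useful. The entries look roughly like this (the exact zone file referenced depends on the format options in the URL above):

```
zone "doubleclick.net" { type master; notify no; file "null.zone.file"; };
```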

Updates

Corrected DNS ad blocking section. Added cron job for downloading DNS list.

Part IV - The Ultimate Linux Home Router - DHCP and Routing

Here's where we put it all together and finally replace our existing router. You might want to print this out because some of these steps will be done without internet connectivity.

Other things you may need to purchase

If your existing router also provided wireless access and you want that wireless access to remain behind the firewall (strongly recommended!), you'll need to buy a wireless access point. I love the Buffalo line of wireless routers because they run DD-WRT, an open-source, Linux-based router firmware. It's inexpensive -- $39 on Amazon -- and with the routing capabilities disabled, you'll have a 4-port switch with wireless access. If your existing router allowed you to plug more than one device into it (and you're currently doing just that), you'll need to purchase a switch (unless you elected to buy the wireless access point with a switch). Here's the one I purchased.

Preparing to replace your existing router - Figuring out the external side of the network

Option 1: Get rid of your existing router. Obviously you can't do this if your existing router is also your cable or DSL modem.
Option 2: Disable your existing cable or DSL modem's routing capability.
Option 3: Leave your existing router in place, disable all firewalling and setup your new router as a DMZ host.
You'll want to pick one of these. The steps for doing this vary depending on your existing hardware. I (currently) use Comcast Business internet and short of purchasing a static IP, I'm stuck with option #3. I've been running this way for some time now and it's not caused any problems other than having to remember I have one added bit of network complexity.

Setting up the External network

If you are using a USB external network adapter and have not yet plugged it in, plug it in (but leave the network cable disconnected).
$ su
(enter your password)
# yast
We could have sudo'd that, but we're going to be doing a lot as root to get this ready, so let's live dangerously.
Select Network Devices, then Network Settings. Select your external network adapter and choose Edit. On my setup, the external USB adapter is eth1. I'll be referring to eth1 a bit in the future as we issue commands from the shell; if yours has a different name (as it does on my other device), remember it and substitute it whenever you see eth1 in a command. If you picked option 3 above, you'll follow the directions for assigning a static, non-routable IP address. If you picked option 1 or 2, you probably want to leave the external side as DHCP unless you were assigned or pay for a static IP from your ISP.

Getting the existing network settings

Determine the network from which your existing router assigns addresses. If you have a Windows box behind the firewall, you can drop to a command prompt and type "ipconfig /all". Locate the IP address assigned and the subnet mask to determine the network. In my case, the IP address is 192.168.0.70 and the subnet mask is 255.255.255.0 (/24). This means that my existing router's network is 192.168.0.0/24. Write down your DNS settings, then go to your router's configuration and locate the DHCP and LAN settings.
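If you want to double-check that math, ANDing each octet of the IP with the corresponding octet of the netmask gives the network address, and it's easy to reproduce in the shell (the function name is mine):

```shell
# Compute an IPv4 network address from an IP and a netmask by ANDing
# the octets pairwise.
network_of() {
  local IFS=.
  set -- $1 $2      # with IFS=. this splits both quads into 8 octets
  echo "$(( $1 & $5 )).$(( $2 & $6 )).$(( $3 & $7 )).$(( $4 & $8 ))"
}
# network_of 192.168.0.70 255.255.255.0  ->  192.168.0.0
```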

For Option 3 folks, only

Since we're already using an IP address on your router's existing internal network, we need to change the network that the existing router uses. I've switched mine to use 192.168.1.0/24 (the 192.168.1.x network with a subnet mask of 255.255.255.0) as the network and 192.168.1.1 as the router's address. I've also disabled DHCP since I'll be assigning the external network address statically. Consult your router's manual on how to make these changes; it'll be different for everyone. Oh, and when you make these changes, you're going to disconnect yourself from the internet. Your current PC will have an address on a different network and won't know how to get to the gateway, so before you save your settings, make sure you're positive they're right. If you screw it up, consult your router's manual on how to hard reset the router and try again.

Assigning a Static IP

If you have paid for a routable, static IP address from your ISP, find the documentation they gave you to identify the correct DNS, Gateway and IP address and Subnet Mask to assign here. If not, use the values you discovered from the previous section.
Select Statically assigned IP address, put in the address from your ISP or from what you selected above.
Select Subnet Mask, enter the value from your ISP or from what you discovered above.
Select Hostname. Use the same hostname you used during installation, but add a "-ext" to signify that this IP points to the external side of the router.
Select General and tab over to Firewall Zone. Select External Zone. This will cause the firewall to filter all unsolicited traffic that you haven't explicitly allowed. We'll be doing a lot more customization later.
Hit F10 to save these values.
Select Routing.
Under Default IPv4 Gateway, enter the value from your ISP or from what you discovered above. This is the dumb "if there is no other explicitly assigned route, use this one" route.
Put an [X] in the box next to Enable IP Forwarding.
Hit F10 to save these values.
Select Security and Users and then Firewall. Select Masquerading.
Put an [X] in the box next to Masquerade Networks.

DHCP

Run the following:
# zypper in dhcp yast2-dhcp-server dhcp-server
# yast
Select Network Services then DHCP Server.
Select your internal network adapter, choose Select.
Make sure there's an X only next to the internal network adapter.
Select Open Firewall for Selected Interfaces. Choose Next.
On the next screen, choose Domain Name and enter the Active Directory domain name (yourdomain.net).
For Primary Name Server, Secondary Name Server, Default Gateway, and NTP Time Server, enter the IP address of the internal network adapter on your new router. Leave the rest blank.
For IP Address Range, select a range you'd like to assign to the pool. I used 192.168.0.100 - 192.168.0.200.
Under Service Start, select When Booting. Select "Next" and Quit.

Plug it all in

At this point, you need to unplug all of the ethernet cables from your router (If your router has an internet side that is Ethernet, unplug all ethernet cables except that one). Plug your external network adapter into the existing router. Plug the cables that were going to your existing router into a switch and plug that switch into the new router. If you have a relatively simple network and your ISP provided you with a router, your existing network probably looked something like this:
               /--------\
[INTERNET]=====| Router |=======[Computer]
               |  from  |=======[Computer]
               |  your  |=======[Xbox]
               |  ISP   |=======[Wireless Access Point]
               \--------/
It's now going to look something like this.
               /--------\    /---------\   /--------\
[INTERNET]=====| Router |====| Awesome |===| Switch |===[Computer]
               |  from  |    |   new   |   | or WAP |===[Computer]
               |  your  |    |  Linux  |   | with   |===[Xbox]
               |  ISP   |    | Router  |   | switch |===[Roku]
               \--------/    \---------/   \--------/
It's a little more complex. If you purchased a pure wireless access point that isn't a router and doesn't include a switch, you'll plug that into the switch. If you picked a combined unit that is switch and wireless, you'll have something similar to the above.

Refreshing your internal devices

The easiest way is to simply reboot. However, for Windows boxes, you can drop to an Administrator command prompt and type "ipconfig /release" followed by "ipconfig /renew". You should see that you've now been assigned an address from your Linux router.

On the off chance you're using an AX88179 USB 3.0 Ethernet Adapter

The kernel module that supports this device is a little flaky, so we're going to install the reference driver. It can be found by searching here. Run the following:
$ su
(enter your password)
# zypper in kernel-default-devel kernel-devel kernel-source
# cd ~
# wget http://www.asix.com.tw/FrootAttach/driver/AX88179_178A_LINUX_DRIVER_v1.8.0_SOURCE.tar.bz2
# tar -xvf AX88179_178A_LINUX_DRIVER_v1.8.0_SOURCE.tar.bz2
# cd AX88179_178A_LINUX_DRIVER_v1.8.0_SOURCE
# make
# make install

Tuning the Router

By default, Linux isn't tuned for routing large amounts of traffic. Most of the default settings are fine, however, if you use BitTorrent or any P2P, you'll discover right away the limitations of the default settings. Here are some settings I have applied to my router to improve streaming video and applications that create large numbers of connections. Most of these settings were adapted from sources that I googled. Some may be wildly wrong and may show my ignorance of the Linux networking stack.
$ su
(enter your password)
# echo 'ifconfig eth0 txqueuelen 10000' >> /etc/rc.d/after.local
(If your adapter isn't called eth0 and eth1, replace these with what the adapters are actually called)
# echo 'ifconfig eth1 txqueuelen 10000' >> /etc/rc.d/after.local
# nano /etc/sysctl.conf
Add the following to the bottom of the file (adapted primarily from Linux Network Tuning for 2013):
# Increase system file descriptor limit
fs.file-max = 100000

# Discourage Linux from swapping idle processes to disk (default = 60)
vm.swappiness = 10

# Increase ephemeral IP ports
net.ipv4.ip_local_port_range = 10000 65000

# Increase Linux autotuning TCP buffer limits
# Set max to 16MB for 1GE and 32M (33554432) or 54M (56623104) for 10GE
# Don't set tcp_mem itself! Let the kernel scale it based on RAM.
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.core.rmem_default = 16777216
net.core.wmem_default = 16777216
net.core.optmem_max = 40960
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216

# Make room for more TIME_WAIT sockets due to more clients,
# and allow them to be reused if we run out of sockets
# Also increase the max packet backlog
net.core.netdev_max_backlog = 50000
net.ipv4.tcp_max_syn_backlog = 30000
net.ipv4.tcp_max_tw_buckets = 2000000
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_fin_timeout = 10

# Disable TCP slow start on idle connections
net.ipv4.tcp_slow_start_after_idle = 0

# If your servers talk UDP, also up these limits
net.ipv4.udp_rmem_min = 8192
net.ipv4.udp_wmem_min = 8192

# Log packets with impossible addresses for security
net.ipv4.conf.all.log_martians = 1
Save and Exit, then apply the new settings immediately (otherwise they'll wait for the next boot):
# sysctl -p /etc/sysctl.conf
# nano /etc/security/limits.conf
Add the following to the bottom of the file:
* soft nofile 100000
* hard nofile 100000

Very Optional - Map IRQ of Network Adapter to Specific Core

I was having some network problems related to UDP performance over a double NAT (though I didn't discover that until I did a myriad of packet captures). This was one of the steps I attempted while troubleshooting. It was a shot in the dark, and ultimately I left it out of my configuration. In case you're in the same spot, I thought I'd leave the instructions. After completing this section, you'll add a line to /etc/rc.d/after.local to run the script with the network adapter name and the core of the CPU you wish to tie it to. More comprehensive instructions are available here.
# nano /usr/local/bin/myri-irq-bind.sh
Paste in the following:
#!/bin/sh
#set -x

if [ $# -eq 0 ]; then
   echo "usage: myri-irq-bind.sh INTERFACE CPUMASK"
   exit 1;
fi

eth=$1
mask=$2

echo "Binding interface $eth"
pid=`pgrep irqbalance`
   if [ $? -eq 0 ];
   then
       echo "irqbalance is running! Pid = $pid"
       echo "it will undo anything done by this script"
       echo "Please kill it and re-run this script"
       exit
   fi

done=0
i=0
slice=0
start=0
num_slices=`grep "${eth}" /proc/interrupts | wc -l`
while [ $done != 1 ]
do
# one of the following, depending on which version of the driver is installed
   irq_data=`grep "${eth}:slice-${slice}" /proc/interrupts`

   if [ $? != 0 ];
   then
       if [ $i != 0 ];
       then
           exit
       fi
       irq_data=`grep "${eth}" /proc/interrupts`
       if [ $? != 0 ];
       then
           exit
       fi
   fi
   irq=`echo $irq_data |  awk '{print $1 ; }' | sed -e 's/://g'`
   file="/proc/irq/${irq}/smp_affinity"
   printf "Binding slice %2d: writing mask 0x%08x to $file\n" $slice $mask
   printf "%x" $mask > $file
   i=`expr $i + 1`
   slice=`expr $slice + 1`
   if [ $slice -eq $num_slices ];
   then
       exit
   fi
done
Save and exit.
Run the following (the after.local line is an example; substitute your adapter name and the CPU mask you want):
# chmod 750 /usr/local/bin/myri-irq-bind.sh
# echo '/usr/local/bin/myri-irq-bind.sh eth1 2' >> /etc/rc.d/after.local

Friday, December 20, 2013

Part III - The Ultimate Linux Home Router - Active Directory on openSUSE using Samba 4.1

Compiling the Latest Version of Samba 4

Samba 4 is the first version of Samba to adequately host Active Directory without the need for an expensive Windows Server. Though it's been in the wild for a while now, it's still got some rough points. I've been using it for a few months now and the only issues I've run into have been centered purely around Microsoft applications that require schema updates to function. I'm running it with a single domain controller, so I've not tried to get replication functional (this is, after all, just a home network).
There are Samba packages in the Tumbleweed repository, but at this point in time they are not at the version I needed, so we'll be compiling from source. Parts of this entry were adapted from Conrad Jones' blog; much thanks to him for publishing that. Run the following commands (make, gcc and binutils will already be present if you opted to install fish):
# zypper install make gcc binutils autogen krb5-devel krb5-client nano libacl-devel acl attr python python-devel
You may get a message indicating that python conflicts with patterns-openSUSE-minimal_base-conflicts, select to deinstall if this pops up.
# reboot
We need to reboot to ensure that the filesystem is mounted with ACL support. After reboot, login:
$ su
(Enter your password)
# cd ~
# wget http://www.samba.org/samba/ftp/stable/samba-4.1.3.tar.gz
# tar -xvf samba-4.1.3.tar.gz
# cd samba-4.1.3/
# ./configure; and make; and make install
You may want to visit Samba's Website to find the latest version rather than using 4.1.3. They move pretty quickly over there (a samba server I just installed two weeks ago was 4.1.1).
After about ten minutes, you should be compiled and ready to start configuring Samba and creating your new Active Directory domain.
Note: If you're wondering why the compile command was issued as ./configure; and... instead of ./configure &&..., it's because && is one of a handful of things fish shell doesn't support. If you're going to switch your shell to fish long-term, keep that in mind when viewing articles that have Linux commands embedded. The ; and form works only in fish; in bash, use ./configure && make && make install instead.
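If you're following along in plain bash rather than fish, the equivalent chaining operator is &&. A trivial demo of the operator (the /tmp path is just a throwaway example, not part of the build):

```shell
# In bash/POSIX sh, && runs the next command only if the previous one succeeded,
# so a failed ./configure would stop make from running.
mkdir -p /tmp/samba-build-demo && cd /tmp/samba-build-demo && echo "previous steps succeeded"
```

This is why build instructions chain configure/make/install with && rather than plain semicolons: a semicolon runs the next step unconditionally, even after a failure.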

Creating Samba Service Definition

# nano /usr/lib/systemd/system/samba.service
Copy and paste the following into the editor:
[Unit]
Description=Samba AD Daemon
After=syslog.target network.target
 
[Service]
Type=forking
PIDFile=/usr/local/samba/var/run/samba.pid
LimitNOFILE=16384
EnvironmentFile=-/etc/sysconfig/samba
ExecStart=/usr/local/samba/sbin/samba $SAMBAOPTIONS
ExecReload=/usr/bin/kill -HUP $MAINPID
 
[Install]
WantedBy=multi-user.target
Hit Ctrl+O (Write Out) and Ctrl+X (Exit).
Now we'll create a symbolic link to the service and configure it to run at startup:
# ln -s /usr/lib/systemd/system/samba.service /etc/systemd/system/samba.service
# systemctl enable samba
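Before moving on, you can sanity-check that systemd picked up the unit (a quick verification sketch; nothing should be running yet, since we only start samba after provisioning the domain):

```shell
systemctl is-enabled samba    # should print "enabled"
systemctl status samba        # expect "inactive (dead)" until we start it later
```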

Installing BIND 9 (Optional, use BIND 9 backend)

Personally, I prefer to stick with BIND over the default SAMBA DNS. It's rock solid, configurable, and there's a ton of information on how to tweak it because it's so widely used. You can stick with the internal DNS implementation if you want (and it's certainly a lot fewer steps), but I elected to use BIND.
# zypper in bind

Edit the configuration files

Now we need to add some parameters to the existing named.conf.
# nano /etc/named.conf
Directly beneath options {, put the following lines:
        # BEGIN ---- Configuration Modifications
        auth-nxdomain yes;
        # NOTE: These are the opendns.org DNS servers. I choose to use them instead
        #       of the ones provided by my ISP. They're available globally so they
        #       should work fine, however, if your ISP or router adds their own split-brain
        #       DNS (U-Verse typically does this to make it easy to connect to your
        #       router information page), you will not be able to connect to their
        #       custom DNS entries.
        forwarders { 208.67.222.222; 208.67.220.220; };
        allow-transfer { none; };

        # IMPORTANT: Change the below to point to values appropriate for YOUR home network.
        #            I used 192.168.0.0 for my wired internal network and 192.168.2.0 for
        #            my guest wireless restricted internal network.
        allow-query {
                127.0.0.0/24;
                192.168.0.0/24;
                192.168.2.0/24;
        };

        # IMPORTANT: You should only EVER let hosts sitting on the non-internet side
        #            of your router allow recursion even if you allow queries from
        #            external networks!
        allow-recursion {
                127.0.0.0/24;
                192.168.0.0/24;
                192.168.2.0/24;
        };

        pid-file "/var/run/named/named.pid";
        empty-zones-enable no;
        # END   ---- Configuration Modifications
A note about some ISP networks: my ISP provides me with an actual router. Ideally, we want to set up that router to pass all traffic to this router and do nothing else with it (disabling all firewall features). Sometimes this is done in a way that results in your external network card receiving your actual publicly routable IP; sometimes it's done by setting the external interface as a DMZ host. The difference between them is negligible. With my ISP, if I want this box to actually get the public IP address, I have to pay for a static IP, so I opted for the DMZ method. This matters for only one reason: NEVER include the network that is internal to your ISP-provided router in the allow-query or allow-recursion sections. If you do, you're actually allowing the entire internet to query and recurse on your DNS server. If you want to allow querying, simply add the external network zone to the allow-query section, but definitely do not add it to the allow-recursion section.
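The allow-query and allow-recursion blocks above are simple source-address ACLs: BIND checks which network the client's address falls in. A toy illustration of that logic (BIND does the real matching; the 192.168.x networks are just the example values used above):

```shell
# Toy model of the allow-query ACL: permit loopback and the two LAN /24s.
acl_check() {
  case "$1" in
    127.0.0.*|192.168.0.*|192.168.2.*) echo "allowed" ;;
    *) echo "denied" ;;
  esac
}
acl_check 192.168.0.42   # a wired LAN client
acl_check 203.0.113.9    # a random internet host
```

Anything outside the listed networks gets refused, which is exactly why the ISP-internal network must never appear in those lists.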

Modify the localhost Zone Files

# rm /var/lib/named/localhost.zone
# rm /var/lib/named/127.0.0.zone
# nano /var/lib/named/localhost.zone
Paste the following into localhost.zone:
$TTL 1W

$ORIGIN localhost.

@       1D      IN      SOA     @   root (
                        42              ; serial (d. adams)
                        8H              ; refresh
                        2H              ; retry
                        4W              ; expiry
                        1D              ; minimum
                        )

@       IN      NS      @
        IN      A       127.0.0.1
        IN      AAAA    ::1
# nano /var/lib/named/127.0.0.zone
Paste the following into 127.0.0.zone:
$TTL 1W

@       IN      SOA             localhost.   root.localhost. (
                                42              ; serial (d. adams)
                                8H              ; refresh
                                2H              ; retry
                                4W              ; expiry
                                1D              ; minimum
                                )

                IN NS           localhost.

1               IN PTR          localhost.
Fix permissions:
# chown named:named /var/lib/named/*.zone
# chmod 640 /var/lib/named/*.zone
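Before launching named, the configuration and zone files can be sanity-checked with BIND's own tools (these ship with the bind package; silence from named-checkconf means success, and the reverse zone name here assumes openSUSE's stock named.conf mapping of 127.0.0.zone):

```shell
named-checkconf /etc/named.conf
named-checkzone localhost. /var/lib/named/localhost.zone
named-checkzone 0.0.127.in-addr.arpa /var/lib/named/127.0.0.zone
```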
Let's test the configuration:
# named -u named
Did you see nothing? Good. If you got an error, run back through the configuration parts of this section and make sure you set everything up correctly.

Configuring for hosting services on our Internal network

Now that we're actually going to HOST something and we have DNS set up, we need to stop requesting DHCP addresses. We're going to ignore wireless for now. I currently have only one network adapter plugged in since my other is USB based and I don't need it quite yet, so I'll be referring to the only adapter that I have. If you're following along from Part I, you should only have one cable plugged in, and it should be connected to your internal network.
# yast
Select Network Devices and then Network Settings.

You're on the Overview screen. Select the network adapter that represents your plugged-in Ethernet adapter and choose Edit.
Put an X next to Statically assigned IP Address and hit TAB
Set your IP address to the desired value and hit TAB. I'm using 192.168.0.1 since .1 is commonly the router IP address.
Set your Subnet Mask and hit TAB. I'm using 255.255.255.0.
Set your fully qualified host name. This will be the computer hostname and the domain name that corresponds with your DNS and Active Directory domain. Example: host.somedomain.com
Go to the General tab.
Make sure Assigned Interface to Firewall Zone is set to Internal Zone (unprotected)
Hit F10
Go to Hostname/DNS and select Name server 1
Set it to the IP address you assigned above.
Go to Routing and select Default IPv4 Gateway and set it to your current, functioning router's IP. Do the same for IPv6 if applicable.
Hit F10 once more.
... and ... it hangs (or disconnects)! You just changed your IP address. Reconnect, but use the IP address you just set.
$ su
(enter your password)
# chkconfig -s named on
# rcnamed start
# rcnamed status
You should see the status as active (running). Now we'll test localhost lookup:
# host localhost. 127.0.0.1
Using domain server:
Name: 127.0.0.1
Address: 127.0.0.1#53
Aliases:

localhost has address 127.0.0.1
And reverse lookup:
# host 127.0.0.1 127.0.0.1
You should see:
Using domain server:
Name: 127.0.0.1
Address: 127.0.0.1#53
Aliases: 

1.0.0.127.in-addr.arpa domain name pointer localhost.
And let's nslookup something.
# nslookup google.com
You should get back results. If you do not, "nano /etc/resolv.conf" and make sure the line that says "nameserver" reads "nameserver 192.168.0.1", or whatever IP address you used for your server. YaST failed to update it when I changed the network settings. If it's different, change it, save it, and try nslookup again.

Provisioning our Active Directory Domain

We'll use samba-tool to provision the domain.
# /usr/local/samba/bin/samba-tool domain provision --use-rfc2307 --interactive
You'll be prompted for values. Most should be the defaults if you've been following the configuration since Part 1.
Realm [YOURDOMAIN.NET]:
 Domain [YOURDOMAIN]:
 Server Role (dc, member, standalone) [dc]:
 DNS backend (SAMBA_INTERNAL, BIND9_FLATFILE, BIND9_DLZ, NONE) [SAMBA_INTERNAL]: BIND9_DLZ
Administrator password:
Retype password:
Make sure to set the DNS backend to BIND9_DLZ
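If you'd rather script the provisioning (or just avoid typos at the prompts), the same answers can be passed as flags to samba-tool; the values below are the placeholders from above and the password is obviously just an example, so substitute your own:

```shell
/usr/local/samba/bin/samba-tool domain provision \
    --use-rfc2307 \
    --realm=YOURDOMAIN.NET \
    --domain=YOURDOMAIN \
    --server-role=dc \
    --dns-backend=BIND9_DLZ \
    --adminpass='YourStr0ngPassword!'
```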

Updating BIND configuration to include samba entries

# nano /usr/local/samba/private/named.conf
Put a "#" in front of the 'database "dlopen /usr/local/samba/lib/bind9/dlz_bind9.so"' line and remove the "#" from the 'database "dlopen /usr/local/samba/lib/bind9/dlz_bind9_9.so"' line.
Save and exit.
# nano /etc/named.conf
Add the following to the top of the file:
include "/usr/local/samba/private/named.conf";
Save and Exit.
And finally, we have to prevent BIND from running chrooted so that it can find the dlz_bind9_9.so file.
# nano /etc/sysconfig/named
Change the line that reads NAMED_RUN_CHROOTED="yes" to NAMED_RUN_CHROOTED="no".
# systemctl start samba
# rcnamed restart
# rcnamed status
You should see that the server is running. Now let's check that the samba entries have been created:
# host -t SRV _ldap._tcp.yourdomain.net
You should see:
_ldap._tcp.yourdomain.net has SRV record 0 100 389 yourdomaincontroller.yourdomain.net.

Configuring Kerberos and testing your new domain

We need to be able to get kerberos tickets, so we're going to configure the kerberos client and give it a test.
# cp /usr/local/samba/private/krb5.conf /etc/krb5.conf
# kinit administrator@YOURDOMAIN.NET
(enter the password you provided in the samba-tool step above)
The capital letters are not optional. You should see a message about your password expiring in several days. Now, let's see if you have a ticket.
# klist
You should see a ticket in the list with a start date, expiration date and a renew until date that corresponds with the user you entered in the kinit statement above.

Configuring your server to backup your Active Directory environment

Especially since Active Directory support in Samba is relatively new, with bugs being fixed constantly, it's a good idea to have a backup because you'll probably be applying updates. And since an AD domain is usually in a state of flux anyway, it's a good idea to get backups started now.
# cp ~/samba-4.1.3/source4/scripting/bin/samba_backup /usr/sbin
# chown root:root /usr/sbin/samba_backup
# chmod 750 /usr/sbin/samba_backup
# mkdir /usr/local/backups
# chmod 750 /usr/local/backups
# ln -s /usr/sbin/samba_backup /etc/cron.daily/samba_backup
# nano /usr/sbin/samba_backup
Find the part of the script that looks like this:
                for ldb in `find $relativedirname -name "*.ldb"`; do
                        tdbbackup $ldb
                        if [ $? -ne 0 ]; then
                                echo "Error while backuping $ldb"
                                exit 1
                        fi
                done
Change the "tdbbackup $ldb" line to this (you can also add samba to your path, however, this guarantees the script will execute properly without the path variable set).
/usr/local/samba/bin/tdbbackup $ldb
Test the backup:
# /usr/sbin/samba_backup
# ls /usr/local/backups
samba_backup should return silently after running for a second and you should see files "etc..tar.bz2", "samba4_private..tar.bz2", and "sysvol..tar.bz2". If you ever need to restore, follow the instructions on the Samba Wiki.
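One thing the stock script doesn't do is prune old archives, so /usr/local/backups will grow forever. A minimal retention sketch (the 30-day cutoff is my arbitrary choice, and it's demonstrated against a scratch directory so you can try it safely before pointing it at the real backup path from cron):

```shell
# Delete backup archives older than 30 days; demoed in a throwaway directory.
backup_dir=$(mktemp -d)
touch -d '40 days ago' "$backup_dir/samba4_private.old.tar.bz2"
touch "$backup_dir/samba4_private.new.tar.bz2"
find "$backup_dir" -name '*.tar.bz2' -mtime +30 -delete
ls "$backup_dir"
```

To use it for real, replace the scratch directory with /usr/local/backups and drop the find line into a cron job alongside the daily backup.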

Part II - The Ultimate Linux Home Router - Tumbleweed and Customizing the Environment

Part I - Tumbleweed (optional)

This is optional and you'll have to decide for yourself if switching your build to Tumbleweed is the way to go. Basically, this step involves "rebranding" your installation away from openSUSE 13.1 and over to Tumbleweed. Tumbleweed does not have a version; when a new package of any kind is available for your installation, it will show up as an update. If you have automatic updates enabled, this ensures you're running the latest version of everything, including the kernel, all the time.
There are a few reasons you may not want to do this:
  • You have some kind of kernel module or modification that needs to be compiled for each kernel. In my case, the external network adapter falls under this category: every new kernel version will require me to recompile the kernel module and install it. If the kernel is updated automatically and the system is rebooted, it will revert to the module that's built into the kernel. If that module is the same version that exists today (and I have no reason to believe it would not be), my router is going to fail. It's a risk I'm willing to accept, since I will be routinely checking up on the system and installing the kernel module is pretty simple. I also hope, one day, to script this part out somehow.
  • Rolling updates can be unreliable. And, indeed, Tumbleweed isn't as widely used and tested as the default distribution.
Ultimately, I'm using tumbleweed because I'm choosing to expose some services to the wide open internet and I want everything to be as up-to-date as possible for security purposes.
That said, I don't believe any of the remaining steps explicitly requires you to use Tumbleweed.
Here's some more information so that you can make the decision yourself.

Switching up the repositories

Connect to your box via PuTTY and login.
We're going to be firing off a lot of commands that require root access, so step one is to:
$ su
Enter your password
# zypper ar --refresh http://download.opensuse.org/repositories/openSUSE:/Tumbleweed/standard/ Tumbleweed
# zypper ar --refresh http://download.opensuse.org/distribution/openSUSE-current/repo/oss/ 'openSUSE Current OSS'
# zypper ar --refresh http://download.opensuse.org/distribution/openSUSE-current/repo/non-oss/ 'openSUSE Current non-OSS'
# zypper ar --refresh http://download.opensuse.org/update/openSUSE-current/ 'openSUSE Current OSS updates'
# zypper ar --refresh http://download.opensuse.org/update/openSUSE-non-oss-current/ 'openSUSE Current non-OSS updates'
# zypper rr openSUSE-13.1-1.10 repo-debug repo-debug-update repo-debug-update-non-oss repo-non-oss repo-oss repo-source repo-update repo-update-non-oss
# echo '[main]' > /etc/zypp/vendors.d/Tumbleweed.conf
# echo 'vendors = suse,opensuse,obs://build.opensuse.org/openSUSE:Tumbleweed' >> /etc/zypp/vendors.d/Tumbleweed.conf
# zypper lr
That last command will list out the repositories. You should see this.
# | Alias                            | Name                             | Enabled | Refresh
--+----------------------------------+----------------------------------+---------+--------
1 | Tumbleweed                       | Tumbleweed                       | Yes     | Yes
2 | openSUSE Current OSS             | openSUSE Current OSS             | Yes     | Yes
3 | openSUSE Current OSS updates     | openSUSE Current OSS updates     | Yes     | Yes
4 | openSUSE Current non-OSS         | openSUSE Current non-OSS         | Yes     | Yes
5 | openSUSE Current non-OSS updates | openSUSE Current non-OSS updates | Yes     | Yes
If you're missing any of those, run the appropriate "zypper ar --refresh" command above to add the missing repository. If you've got more than what is above, run zypper rr <alias>, where <alias> is the name of the alias in your list that isn't in the one above. This is very important: you won't be able to reliably upgrade the distribution if you have more than these repositories enabled.
Now run:
# cat /etc/zypp/vendors.d/Tumbleweed.conf
You should see:
[main]
vendors = suse,opensuse,obs://build.opensuse.org/openSUSE:Tumbleweed
If you see something different, run "rm /etc/zypp/vendors.d/Tumbleweed.conf" and issue the "echo" commands above again (along with the cat command to ensure it's correct).
We're ready to refresh the repositories and convert this install to a Tumbleweed install.
# zypper refresh
You're going to get prompted several times with a message similar to this:
Retrieving repository 'Tumbleweed' metadata ---------------

New repository or package signing key received:
Key ID: xxxx
Key Name: xxxx 
Key Fingerprint: xxxx
Key Created: Mon 18 Feb 2013 12:09:00 PM EST
Key Expires: Wed 29 Apr 2015 01:09:00 PM EDT
Repository: (repository name)

Do you want to reject the key, trust temporarily, or trust always? [r/t/a/? shows all options] (r):
Type "a" and hit enter each time.
This could take a few moments depending on your internet connection or server performance. You've already had a cup of coffee, either switch to wine or surf Hacker News or something while you wait. When it's finished:
# zypper dup
# reboot
You've probably gotten a new kernel at this point, so it's a good idea to reboot the bugger. Hopefully, you get the same IP address from DHCP. If not, log in locally and:
$ sudo /sbin/ifconfig
To find your IP address.


Part II - Fish Shell (optional)

I have been using Fish Shell for a while. It's a great productivity enhancer for the infrequent sysadmin. It's by no means required, but I strongly recommend it. More about that here.
If you visited the site you probably discovered that there's an easy way to install via zypper. I haven't been able to get this to work in quite a while on openSUSE, so we're going to build it.
# zypper in git ncurses-devel autoconf gcc-c++ cmake lynx
# cd ~
# git clone git://github.com/fish-shell/fish-shell.git
# cd fish-shell
# autoconf
# ./configure && make && make install
# echo '/usr/local/bin/fish' >> /etc/shells
Note: We've installed a C++ compiler, autoconf and cmake, which are development tools that allow for compiling of applications. Generally speaking, it's a bad idea to leave these installed on a production machine. Later you might want to remove these packages if you don't need them (and you can take that advice for any package that you don't need).

Set Fish Shell as the Default Shell (Optional)

Fish is not precisely compatible with bash. Normally this isn't much of a problem provided each script you need to run identifies itself as a bash script. I have never had an issue setting fish as the default shell, but if you're concerned you can skip this and invoke fish by simply typing "fish" at a $ or # prompt. To set it as default:
# chsh -s /usr/local/bin/fish
# exit
$ chsh -s /usr/local/bin/fish
(Enter your password)
$ su
(Enter it once again)

Syntax Highlight in Nano

I prefer the GNU nano text editor over the more common vi and others. It's a simple 'notepad'-in-text-mode-like editor. openSUSE includes a number of syntax highlighting options; however, they're not turned on by default, and some are missing. We'll download new syntax highlighting rules, configure the environment to use them, and add a command to your profile to prevent nano from wrapping lines (line wrapping is fatal when editing configuration files).
# cd /usr/local/src
# git clone https://github.com/nanorc/nanorc.git
# cd nanorc
# make
# echo 'set nowrap' >> ~/.nanorc

Other Useful Packages

# zypper in man

Part I - The Ultimate Linux Home (and possibly Small Business) Router based on openSUSE

Introduction


This is a multi-part post that walks through the creation of what I consider to be "The Ultimate Linux Based Home Router". Of course, your feelings on what is truly The Ultimate will likely differ from mine; however, since this is my blog, I'm doing the defining.
My apologies for typos or bad grammar along the way. My secret real reason for putting this together is to ensure that I have a log of what I did. I have very little time to devote to blogging, so proof-reading has only been partially done.

Distribution

I picked openSUSE Tumbleweed. There are many Linux router distributions, like ClearOS and such, that offer turn-key solutions. I tried them and didn't like that a lot of the features I was interested in were paid add-ons. Those distributions do offer a clear advantage--they're really dead simple to set up and maintain. If you want easy, go that route. If you want free, and want to learn a bit along the way, follow along. My choice of openSUSE is purely because I'm comfortable with it; I've been working with it for about a decade.

Features - Work in Progress

  • High performance routing and all of the features you'd expect from a home router, DHCP, DNS, etc.
  • Active Directory without Windows Server
  • NTP Server
  • Certificate Server
  • Authenticated transparent proxy with filtering for the children
  • Advertisement filtering and privacy enhancing capabilities for every device on network
  • Secure Shell with port forwarding
  • Traffic shaping
  • Intrusion Prevention
  • Guest wireless access for children's wireless devices and other untrusted devices (these will be filtered the most aggressively, including an SSL interception/replay with a local certificate from our new CA)
  • Home web server
At the time this was published, advertisement filtering, secure shelling, NTP and Active Directory are completed. I ran into some issues with being double NATed that have to wait until Comcast is open so I can have a static IP provisioned. I'll publish the remaining steps as I get them completed.

But Why?

Partly because I can. Mostly because this year I remarried and added two (amazing) children to my life. One of them is in second grade and is just starting to regularly use the internet. He also has an older friend across the street who has no internet access, so his phone is hooked up to my wireless. We're nuts in this house about adult supervision on the Internet (yes, we're those kinds of parents). At the same time, I know we won't be around all the time, so I want my children's (and their friend's) devices to be highly restricted. Filters suck. And I know there will be times when I will need to open up access to some things, so by adding authentication to the proxy, I or my lovely wife will be able to login with an account that is less restricted.
As far as wireless is concerned, I hate the idea of giving out my wireless password to devices I don't own. Google's Android, by default, uploads wireless keys and stores them in a retrievable format, meaning Google essentially has the keys to every WiFi network any Android device has connected to (there's an opt-out for this feature, but most people don't use it). We opt out on our main network with our devices. Any other devices that want access will have to go through our guest network, which includes an SSL-intercepting firewall (don't worry, I'll make sure to document how to skip that feature).

Hardware

I repurposed an old MiniITX media center PC. It's got a lot of hardware that is unnecessary, but it's powerful enough to fit the bill for a solid home router. The specs are:
  • 2.4 GHz Intel Core(TM) i3 CPU M370
  • 4GB onboard memory
  • Built-in Ethernet and Wifi
  • USB 3.0 External Ethernet

BIOS / Hardware Preparation

This is going to be different for everyone, so you'll have to hunt around, but here is what you want to configure. Depending on your hardware, some of these options may not be available. Don't worry, they are all optional.
  • SATA should be in AHCI (not IDE) mode.
  • Power on when power is lost.
  • Decide what to do about HyperThreading
  • Disable any hardware that you won't be using (Parallel ports, sound cards)
If you can't switch SATA to AHCI, later in this document when I'm referring to the main hard drive, I'll be calling it 'sda'. This may show up as 'hda' for you. Just switch it around. With regards to HyperThreading, most of the recommendations for Linux Routers advise to disable it. On my previous home-grown router (on much less powerful hardware), I ended up enabling it because it routinely fell over with too much traffic when it was disabled. I am disabling it on this hardware.
At this point, I do not have the USB Ethernet adapter even plugged into this machine. I also have an existing router in place that I do not want to disturb, yet (that way my other computer can continue to surf the internet while I'm building this new router). If you have more than one Ethernet adapter on board, make sure you only plug one cable in and make sure the cable is plugged into the ethernet adapter that you want to be your Internal/Non-internet facing adapter.

The GUI Installation

Download the correct version of openSUSE 13.1 for the processor that your new router has installed. Burn it to a DVD or follow the instructions to make a thumb drive out of it. Boot!

Welcome Screen

Set your language and accept the license agreement (don't worry, you're not signing your life away!)

Installation Mode Screen

Select New Installation. Deselect Use Automatic Configuration

Clock and Time Zone

Select Hardware Clock Set to UTC. I'm not sure that this matters, but being a guy who does a lot of work with data, I take a hard line on the idea that UTC is the only way a date/time should ever be stored.
Select Change.... Hidden behind this menu are the NTP settings. Since we're going to be configuring this server with Samba 4.1 and Active Directory, time synchronization is very important.

Change Date and Time

Select Synchronize with NTP Server. Select an NTP server and select Save NTP Configuration. I've always used us.pool.ntp.org.
Select Run NTP as daemon and finally Synchronize now for good measure.
Select Accept

Desktop Selection

I wanted the server to have as many resources available for actually serving things, so I went minimal here. You may decide you want the GUI, and if so, KDE is the default and I'd recommend sticking with that based on my past experiences with openSuSE.
Select Other.
Select Minimal Server Selection (Text Mode).

Suggested Partitioning - Reloading an existing Windows or Linux machine from scratch

First things first, if you're reloading a machine that has a different operating system on it, you might miss the fact that the openSuSE installer is likely trying to preserve your existing partition. You also, likely, have different hardware than me so the partition strategy I used might not fit. Use common sense, or do what I did and prefer shiny new things. Since I'm writing this as much for me -- to track what I did -- as for you, here's what I did.
There are some important things to consider here. I'm running on one high-performance SSD and have no option to have a second disk installed internally due to the form factor of the case I am stuck with. If you have an SSD and an HDD, you may want to partition out areas of the drive that experience frequent changes (/var comes to mind). This ensures logging and such isn't constantly writing to your SSD and reducing its lifespan. Select Create Partition Setup....
Select 1: (your disk details here)
Select Use Entire Hard Disk
I select Propose Separate Home Partition primarily because that's, minimally, how I've always done it. I also selected Use Btrfs as Default File System because... shiny. You probably should stick with ext4 for everything.

Create New User

Fill this out with values that are meaningful to you. If you're going to enable SSH and open it up to the world -- and allow password authentication via SSH, you'll want to select a miserably difficult password.
Select Use this password for system administrator.
Select Receive System Mail.
Deselect Automatic Login.

Installation Settings

We're going to customize the software later, but a few things are worth setting up here.
Under Firewall and SSH, select enable and open next to SSH service will be disabled, SSH port will be blocked. The rest of our customization we're going to do remotely via SSH, so we want this turned on.

Installation

Get some coffee...


Text Mode

If you've followed this exactly, you're rebooted into a text-mode YaST2 installation @ linux screen. If you installed KDE, you have something different. The options will probably be similar, but since I've elected to skip the GUI, I can't confirm that. Set your hostname and domain name. I'm using the same domain name that my Active Directory domain name will use. It's also a domain I registered to myself on the internet. If you're using a domain name that can be valid on the Internet, make sure you register it because we're going to configure the DNS to be split-brain and if someone else registers that name, you'll be unable to access that domain. Believe it or not, this was a problem I had with my last installation. Alternatively, you can use something that ends in ".local" and hope that ICANN doesn't open that up for new registrations.

Network Configuration

The external network card I'm using is problematic. The default support for it in the Kernel uses a kernel module that is really flaky and will simply "forget" the card, requiring a reboot to find it again. The manufacturer provides a driver that I'll be compiling and installing later, so for now it's unplugged and I'll only be configuring the adapter that's on board.
I also have a WiFi card in this device and at a future point will be configuring this router to act as a very filtered guest network. The neighbor kids don't have internet service at home and since they visit frequently, they have my WiFi password on their phone. Google's Android default setting includes uploading that password to Google where it's stored in plain text (unless you explicitly disable that), I'd like to have a separate, isolated, WiFi network for my friends/family/neighbors and children's devices. This subnet will also have the most restricted settings as it pertains to proxy filtering. I'm happy to let my family/friends borrow my internet when they're here, but I'd prefer if you weren't using it to download porn.
That said, the default settings here are fine. We'll fuss with most of this later, right now we want a bootable box that we can SSH into.

Test Internet Connection

Hopefully this works for you! In 12.3, it didn't for me. 13.1 appears to be perfect.
The installer will download updated packages. Considering 13.1 was released only a few short weeks ago, I'm impressed at how much has already been superseded by new packages.

Online Update

Let the update run now and select Accept when the list of packages requiring updates is displayed.

Final Steps

Allow the system to reboot itself and you might also want to eject that CD at this point.

Release Notes

Does anybody really read these? Use your other computer and look the release notes up online if you really want to see them.
Click next and finish and say hello to your login prompt (it may take a second or two to come up).
Login with the user name and password you set during installation.

Setting up / Checking SSH

I couldn't connect to the box via SSH despite selecting to enable SSH on the firewall at installation. Here are the steps I took to enable it via YaST2:
$ su
(Enter your password)
# yast
Head over to Security and Users.
Select Firewall.
Select Allowed Services.
Select Service to Allow.
Choose "Secure Shell Server"
Select Add
Select Next
Hit F9 to quit. Run the following command:
# /etc/init.d/sshd start
# chkconfig sshd on
# ifconfig

That last command will show you the IP address that was assigned via DHCP. If you intend, like I do, to do the rest of this installation from a remote Windows box, it's time to download PuTTY. Open PuTTY, put the IP address from above in the Host Name field, and change the following settings (optional - this is how I like it). On the Logging tab, set Logging to "Printable Output".
On the Window tab, set Columns to 160, Rows to 96 and Lines of Scrollback to 99999 (Because 100000 is one line too many).
On the Appearance tab, Change the font to Consolas 9-point and Font Quality to Default.
Stay tuned for Part II

Sunday, August 26, 2012

FIX: Visual Studio 2012 takes dramatically longer to build than Visual Studio 2010

Symptoms

Aside from a very agitated developer, the symptom is this: the solution (or projects within it) used Code Contracts in Visual Studio 2010, and the Code Contracts library is not installed or not working with Visual Studio 2012.  You can verify this by simply pulling up a project's properties.  If you have no Code Contracts tab, you have no functioning add-on.  Go and install it. (As of this writing, you will have to run devenv /setup from a VS2012 Command Prompt to make it all work.)

The Fix

Install the latest Code Contracts.  Reopen your solution.  Check the Code Contracts page and make sure they are configured optimally (cache enabled and background processing enabled).  Even with settings identical to Visual Studio 2010, the compile took far longer on Visual Studio 2012 with any code contracts enabled.  Perhaps the Code Contracts library just isn't ready for VS 2012 RTM.
Though Code Contracts are a great feature, I simply didn't need them for this project so I disabled them entirely and my build times went back to normal.  I'll follow up when I discover what the real cause was and am able to turn everything back on.
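Before disabling contracts outright, it's possible to confirm they're the bottleneck by rebuilding from the command line with the contract tooling forced off and comparing wall-clock times. The property names below come from the Code Contracts MSBuild targets; treat them as assumptions and check them against your .csproj:

```shell
REM From a VS2012 Developer Command Prompt; run each and compare durations.
msbuild MySolution.sln /t:Rebuild
msbuild MySolution.sln /t:Rebuild /p:CodeContractsEnableRuntimeChecking=false /p:CodeContractsRunCodeAnalysis=false
```

If the second build is dramatically faster, the contract tooling (rewriter or static checker) is where the time is going.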

Wednesday, August 8, 2012

Getting Burned by System Center Configuration Manager (and some help to avoid it!)

A coworker sent me this great story about HP deploying a task sequence in Configuration Manager and destroying all (or at least a substantial number) of their PCs and workstations (see other write-ups for more details).

It's interesting in that it highlights a battle I had to fight a while back, and it goes back to a phrase I probably utter once a month: "SCCM is the most dangerous tool we own."  That phrase usually comes with a couple of succinct examples of what an administrator of SCCM could do in ten minutes to utterly destroy anything connected to it.

As the author indicated, business owners, users, and managers often simply see it as a "patching tool" like Windows Update with a handful of other features.  Microsoft has been doing a fantastic job with patch reliability; most people own Windows computers, understand what patching is, and simply expect it to work 99% of the time.  Any tool associated with patching is therefore assumed to be simple and elegant (two words most IT folks wouldn't immediately use to describe Microsoft Update, but I can't think of the last time my mom or dad called about a computer problem and the culprit was MU).

Also, as the author indicated, it's painfully easy to overlook something and accidentally deploy it with a far greater scope than intended.  In the story above it was a task sequence that included formatting the drive.  Someone not familiar with SCCM, or in a shop that doesn't use all of its features, may scratch their head wondering why something like this would exist. It's unlikely that, as the author states, it was just a simple reformatting; it was more likely a whole operating system deployment.

Don't use it, it's too dangerous!

This is often the knee-jerk reaction that organizations have after a minor or major catastrophe, and I'm willing to bet that's what Australia's CommBank is wrestling with right now.  I've even argued against the use of the previous version, SMS 2003, and "it's too dangerous" was one of several bullet points.

Out of the box, it is too dangerous.  The way people typically architect the entire solution (which is to say, they don't) is too dangerous.

I'm going to go into more detailed steps with screens in future blog posts, assuming these issues still exist in SCCM 2012 (and I have no doubt that some of them do), but I wanted to touch on a few things that I did to mitigate the risk so that the benefits could be enjoyed.

Dramatically restrict access to the All Systems collection and make sure all administrators repeat "Don't use All Systems for anything, ever" at least once a day for a year.

At a minimum, access to do anything to or with this collection should be restricted to one or two people, preferably two people who don't actually work with the system day-to-day.  SCCM has very granular access controls (so granular that few people bother to use them, and I'm told they've been dialed back in 2012 to strike a good balance).  The issue above was an administrator accidentally including All Systems as criteria for the advertisement of a task sequence.  This wouldn't happen where I work.  Aside from the collection being restricted at the ACL, everyone who administers it understands the mantra of "Don't use All Systems."

Rinse and repeat for All Workstations and All Servers.

Every new roll-out of anything should be phased to collections with well defined membership criteria.

At a minimum, you need three categories for deployment.  Since SCCM is targeted at larger organizations, you likely need more than six.
The first category is "hopeless victims".  These are the workstations of your experts and volunteers.  Include people that have regular backups and that understand, fully, the dangers.  These folks should also understand that it is their responsibility to report problems --- any problem --- immediately, if they suspect it was from SCCM.  Servers in this category would have to be pure development, with impacts to them being minimal if they went down.
The second category is "dev/test".  These are servers your organization would survive a couple of days without at moderate/low impact.  For larger organizations, this would be at least two groups.
The third category is "production".  For organizations with redundant systems, I'd insert at least one additional category "Node A" of redundant systems, followed by subsequent nodes before going to applications that have servers that are a single point of failure.

That's the simplest implementation.  On the workstation side, it's a good idea to create collection criteria that spread user impact evenly across functional areas.  Make this judgement based on the number of people who can be out sick for a day before a department fails.  Don't deploy anything to more than that many of those users' PCs in a 24-hour period.
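To make the phased collections concrete: SCCM collection membership is usually defined with a WQL query against the SMS provider. A sketch of a pilot ("hopeless victims") collection, where the machine names are placeholders for your volunteers' workstations:

```
select SMS_R_System.ResourceId, SMS_R_System.Name
from SMS_R_System
where SMS_R_System.Name in ("PILOT-PC-01", "PILOT-PC-02")
```

Subsequent phases can key off better-defined criteria (OU membership, AD security group, hardware inventory attributes) so nothing ever has to target All Systems directly.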

Make sure management knows the risks, understands, and is on-board.  Make it formal.

Thankfully, the management staff from my level up is fantastic.  They understood the dangers and were willing to sign off on a policy.
Here are our rules:
Nothing gets deployed company wide on the same day it's advertised, regardless of its scale.   Whether the roll-out is a screen saver for Marketing that goes to all customer-facing users, a security patch that isn't being actively exploited or cannot be mitigated through other means, or a general operating system deployment that goes to all workstations, the advertisement is at least a business day in the future.  The reason for this is to give a window to account for administrator error.  I've personally been saved by this rule: I advertised a full Microsoft Office 2007 Professional installation to nearly the entire company (at the time about a 1GB install, multiplied by ~5,000 workstations, many of which didn't meet the minimum requirements for that version).  That 24-hour buffer allowed me to review where the deployment was going and reverse it.
Each roll-out group is given one day's buffer.  To the above point: the first group (the "hopeless victims") is the only one to receive the rollout after one day, and they're given at least one day to provide feedback.  If you've picked the right people for your hopeless victims, you won't have to send an e-mail to let them know to "watch out"; they'll scream properly at the first hint of a problem.  The reason should be obvious: containment.  As you roll out, the risk for problems is highest initially.  As each group is added successfully, the risk is reduced while the surface area is increased.

Policies are meant to be broken

No exceptions, except.  Identify every exception you can.  Some of these are personnel issues: Marketing wants a new screen saver deployed company wide, they just finished testing it, and they want it there tomorrow.  For my own job protection, I wouldn't do something like this without C-level executive sign-off.  Decide what's enough accountability if things go horribly wrong.  Most deployments of this nature are not emergencies; they're eagerness by people who don't know and shouldn't have to care about the risks (that's your job!).
The "real" emergencies almost always have more than one option.  These are the "PATCH RIGHT NOW!" situations due to malware infection.  Patching the problem is the most obvious solution, but during an emergency it's important to remember the bold friendly letters of The Hitchhiker's Guide to the Galaxy (Don't Panic!).  The few minutes it takes to step away and analyse a problem are far more valuable than the hours or days it takes to undo your poorly planned solution.  What are you trying to prevent?  In the midst of an emergency, it's difficult to see beyond the "gut reaction" solution. (System 1 says "I'm trying to patch the vulnerability to prevent an infection"; System 2 says "I want to minimize the impact to my customers' personal information/my business transactions/my (specific) intellectual property".)  It might be better to pull the plug on the internet for a few hours than to deploy a poorly tested solution.  Understand the solution, then rank your options from least impacting/most effective to most impacting/least effective.  Pick a few and start there.  Much of this falls into having a good plan for emergency management that includes the "Who", "What", "Why" and "When" so you can figure out the "How" as the bovine excrement hits the rotating blades of the air circulation system.  It's worthy of another post and I'll do my best.

And another thing ...

I have specifically avoided mentioning my employer.  This is my experience and is not limited to my current employer.  This is also my personal blog.  It is not sanctioned by my employer.  It is not written by me as an agent of the company I work for.  It is my opinion.  If you choose to take my advice, imagine that I'm a crazy person who has never seen a computer and has no business writing on anything computing related.

Out of respect for my best friend and coworker, everywhere you see "I", I should have written "we".  My experience was a result of (at a minimum) one brilliant mind sharpening my own.  I don't have permission to use his name (I haven't asked but will correct this post when I do).

And finally, at least some of the information presented has been gathered from a great number of other sources (forums, blog posts and other heaping piles of awesomeness).  But they weren't gathered today.  They were gathered during crisis and combined with my experience, knowledge and sometimes just (Oh S*** Trial and Error).  If you pioneered the above lessons, let me know.  Send me a link and I'll update the post.

Thursday, December 22, 2011

AppliancesConnection.com (and GE Capital) ... Adventures (and failures in) User Experience (Updated 1x)

Background

After two service calls to fix an old dishwasher, I decided I'd had enough of my beautiful bride having to hand-wash 3/4 of what came out of our failing GE Profile dishwasher. I did some research and landed on a Bosch model that was both highly rated by its owners and recommended by Consumer Reports. The problem is that no local retailer carries this specific model. Being sensitive to the fact that I purchased the last dishwasher without enough research, I wanted this model. And heck, I buy everything else online, why not a semi-major appliance?

Solving Cart Abandonment at the Expense of an Angry Blogger

A lot has been written on preventing cart abandonment, and I won't say they got it all wrong. I clicked "Add to Cart" and did a quick retailmenot.com look. They actually have coupon codes named RETMENOT?? I saw this as funny and won't take issue with the whole "why don't they just offer that as a deal" element; clearly they know a lot of customers are going to use that service to find coupon codes. They also didn't require me to set up an account, and instead just e-mailed me a password (we'll skip the security implications -- that they're likely storing this password in plain text in a database -- for another post).

When presented with payment options, I was offered 12-month financing if I filled out a quick credit app. I had intended on doing the equivalent of paying cash (I pay my credit cards off every cycle), but when offered an option to simply pay it off in chunks over a few months with no interest, my weakness to loss aversion kicked in and I told myself that funny little lie that somehow I'll pocket a small discount due to the interest earned with that money remaining in my investment account for a few months.

After completing my order, I printed my authorization form as instructed (I felt dirty doing this, but I was on my bride's laptop, which didn't have PDF Creator installed but did have a USB laser printer attached). Then I headed out for a small trip with the family. Upon returning, I discovered the order was on hold and I was required to submit proof of identification and fax or e-mail my authorization letter from GE Capital to AppliancesConnection.com. This seemed bizarre. I've got three other online accounts that I signed up for and used same day, and I've never been asked for such a sensitive piece of documentation. Coupled with the fact that they e-mailed me my account password, I was not confident about how this sensitive information was going to be stored. The inconvenience of having to scan this all in (and redact most of my driver's license) on what was in my mind "a done deal" was enough to make me cancel the order. Or, that's what I should have done. This dishwasher is hard to find, and it's the one I wanted. They were the only retailer of three that carried it and the only one with a delivery timeframe that was acceptable (my bride's poor fingers!). I'll likely never do business with them again, but they got this one.

Moan and Complain, that's what the Internet is for. STFU, how would you solve this?

  1. This is a solved problem. Amazon.com, buy.com and newegg.com have figured it out. Amazon even uses GE! Granted, I don't know AppliancesConnection.com's balance sheet and negotiating position with their payment provider, but if this is GE saying "pay us more to eliminate hassling your customers" and they're doing so claiming that the fees are to offset additional fraud, they're lying. It's a revenue booster. I could have easily forged the parts of my license they required me to send. And in the end, they were delivering to my billing/home address, which GE verified during the credit check. At some point, a dude is going to be walking this product into my foyer and I'll be signing for it.
  2. Shop for credit providers and find one that isn't stuck with policies pre-2002.
  3. Negotiate a better or equal solution that isn't quite such an awful user experience. While still messy, AppliancesConnection.com could have requested a secondary credit account with matching shipping/billing information, and only require the added scrutiny if the item is not being shipped to a matching billing addresses. This seems like it would be more effective than asking for my ID with everything but my name/address redacted. Even that seems unnecessary, though.
  4. At a minimum, ... prepare your customer for this. What followed after submitting my order was a strange progression of e-mails, one of which claimed that I had opened a support ticket with the order (I was puzzled reading this on my phone). The credit authorization did have a section at the bottom informing the merchant to treat the transaction as they would if it were done face-to-face (laughable). I half wonder what would have happened if I had just ignored the e-mail. Would someone have called eventually?

So you jumped through the hoop anyway, STFU

You're right. At this point, I've attached the required information with the bits redacted. With how clumsy this was, I'm having second thoughts even as I write this. Will delivery scheduling be this messy? If one other thing ends up odd about this order, I'm cancelling it and probably going brick-and-mortar with my second choice dishwasher carried by a local retailer. I have a truck.

The difference: A delightful user experience

User Experience is the new customer service. If I complete a transaction and it's easy, or even delightful, it's the equivalent of being rushed to the front of the line and having a sales associate offer to help you load the product into your car. If, then, something goes wrong between the payment processing point and delivery that requires me to call customer service, I'm going to be far more forgiving and assume it's a one-off. Based on how that turns out, I'll probably do business with that merchant again. In fact, if the inconvenience is handled very well with discounts or other perks to offset the inconvenience, I may seek that retailer out first because they've now proven they know how to make things right when things go wrong. They'll be predictable if something like that inevitably happens again.

User Experience will probably be the only Customer Service I encounter when interacting with you. Do it like everyone else and my only incentive will be seeking out the best price. Do it right, and I'll start at your site and pay more for a product knowing the results will be predictably good.

Send weird, cryptic e-mail messages from do-not-reply addresses and make unusual requests for documentation, and you might get an ugly blog post on a blog nobody reads. Still, I've probably told at least 8-16 people about my only marginally bad customer experience.

UPDATE . . . 6:15 PM same day as post

The mystery solved

I kept thinking about this and it seemed so off that I had to review everything again.
After reviewing my approval documentation more closely, I discovered wording that implied I had actually applied for a more generic credit card (think Visa, MasterCard, American Express or Discover if nobody had ever heard of them). It's a GE Capital card (Ta Da!). So my card is accepted wherever GE Capital is accepted. Wait, what?! Where exactly? This is why I was asked for additional documentation during checkout. AppliancesConnection.com did what they'd be required to do if presented with a Visa/MasterCard that was in the just-approved-but-not-mailed-yet non-card card state, so they were instructed to use the rather traditional protocol of requiring additional documentation ... except that method doesn't work online, and it works even worse when the customer thinks they've just performed part of the check-out routine. Being a familiar, though infrequent, experience, I would have understood what was going on if the GE Capital card had been a Visa/MasterCard/American Express/Discover card. Perhaps there's a really good incentive (zero fees?) for landing on the negative side of both a generic and a retail store-branded credit card, but I can't find one. Feel free to convince me.

This post was proof-read by my dog. Unfortunately, she died several years ago.