Verizon: It's not all about the network anymore.

Saturday, October 6, 2007
There have been a lot of stories going around about Verizon Wireless's anti-consumer practice of locking out handset features, and several rants about their yet-to-be-released smartphones being a year behind the competition.

I am a Verizon Wireless customer. I've been using a horrible Motorola E815 phone for the last couple of years. My next phone will be a Windows Mobile or similar device.

I've been with Verizon since the Air Touch days. I had one of their top calling plans and as a result was given a "perk" of the 3-ring 611 call (this essentially means that when dialing *611, the phone was answered within three rings by a human being who ... most of the time ... could solve whatever question I had). Air Touch had customer service and good network coverage in my area, and after their purchase by Verizon, coverage only got better. And let's face it, if your coverage is good, you're not calling Customer Service often. Verizon put their effort into a digital network and gave several of its customers free digital phones (with a contractual commitment). They were on the bleeding edge both in terms of network quality and the devices they offered their customers.

I would laugh at my friends who chose Nextel, Ameritech (Cingular, now AT&T) or T-Mobile, but they'd rarely hear my laughs because their phones so rarely had a signal. Verizon was more costly, but if you wanted to actually use your phone, you went with Verizon in my area.

Late last year, my mother decided to switch to T-Mobile. I did everything in my power to stop her, as I remembered how poor T-Mobile signals were around here. Several months after purchasing her new phone I was surprised to find out that she has had no coverage issues. Another friend of mine decided on the iPhone (booming voice is heard as he says the words, as the expectation is that everyone around him is secretly envious). AT&T Wireless prior to Cingular had nearly the worst signal quality in my area (and I'm told much of the rest of the country). They were terrible, but since purchasing ... well ... everyone, AT&T's network is great here as well.

In fact, other than "Hello? ... can you hear me? I can hear you. You're breaking up. %$^#!" Sprint/Nextel, I don't know of anyone experiencing more or fewer problems than anyone else with their wireless service. And I'll assume that Sprint/Nextel figures their problems out in the short term as well.

So if you're not going to lose a customer because of call quality . . . why would you lose a customer?

Locked Down Devices and Lack of Selection

Back to my E815 for a moment. I picked this phone up because the Motorola web site indicated it supported Bluetooth OBEX (transfer of pictures/ringtones to/from your PC without using "The Network") and Bluetooth Dial-up Networking (allowing you to connect a PDA, laptop or other device to the internet without a USB wire). Of course, I knew that Verizon would kill these features in the production model, but a couple of well-documented hacks existed for the early firmware versions. Mind you, a firmware upgrade will break Dial-up, so I'm running the initial release revision.
The phone itself sucks. I bought it for those two features; since I have a PDA and a nice wireless headset, I don't have to touch the phone very often. This is good, since setting it down on a table sideways causes it to shut off.

While they attempt to lock out features on the PDA phones, it is far less common since the devices cost more and run an OS that they don't have direct control over. Of course, Apple changed the rules, or maybe Apple customers are simple folk. I never imagined I'd see the day when someone would be willing to pay $500 (or $300, for that matter) for a phone that they'd have to break into in order to install software ... and still have to sign a contract.

It's an unfortunate trend, and I really, sincerely hope Apple loses, because if they win, we all lose. The expectation that you can install software on your PDA/Smartphone, or that most of its features will be "left alone", is something we will no longer be able to take for granted. The good news is that it appears everyone but Verizon has discovered that the only way to differentiate themselves from the Executive Jewelry that is the iPhone is to offer devices that are bleeding edge and flexible.

Even the Exclusive iPhone Vendor of the United States of America AT&T (EiVUSAT&T for short) seems to have gotten it with the "Tilt". This product is about two generations ahead of the VX6800. And now T-Mobile is releasing some new WM Smartphones (though, admit it, their selection has been lacking). Sprint, with their "lack of a network" network has also been ahead of the curve on new devices compared with Verizon.

Verizon has the fastest data network (for now), but their phone coverage in my area is now merely equivalent to everyone else's. They've put the cart before the horse, though, because the phones they're releasing can't crunch web pages at the speed they're downloaded. And what's the point of all that speed if the phone is running a crippled OS, or one so outdated that it doesn't let me actually take advantage of "the network"?

Bottom line: In my area, the network doesn't matter. And my contract's up. TTYL Verizon Wireless.

A word about Seagate RMAs

After having been through quite an episode with some bad hardware causing other bad hardware, I had the pleasure of dealing with the Seagate RMA department for warranty repair.
My drives were about 2 years old and the process was incredibly simple. Put in a serial number, a model number, box it up and go.

The only cost was shipping it to them. They paid for the return shipping of my refurbished drive and shipped on the day my bad drive was received.

While filling out the online warranty repair forms, I received a notice asking if I'd prefer to upgrade my drive instead. The upgrade cost was $99.00 for a 500GB Refurbished SATA2 drive. The price was the same for both my 400GB and 300GB drive, both of which were also SATA2. Interestingly, at the time of the replacement, NewEgg was selling brand new OEM 500GB SATA2 drives for the same price, so this upgrade was less an upgrade and more of an upsell.
I had a dead 400GB drive from another PC, along with a 300GB and a 400GB drive from the HTPC; in both cases the 400GB drives were replaced with 500GB drives despite my not choosing to spend $99.00 on the upgrade. Your results may vary.

For those of you wondering if it's worth the $20 for a cross-ship, I can't tell you; I chose the standard replacement method. I shipped three drives USPS Priority, they arrived at the Texas facility within 3 business days, and replacements were sent back UPS Ground (a max of 5 days depending on where you live).
Not a bad turnaround. They're either really quick at identifying bad drives, or they don't bother checking. Considering they probably sell a lot of $99.00 upgrades, I'm guessing they don't put a lot of time into validating whether the drive they received is actually dead. Then again, there's really no point in returning a good drive, since the warranty on the refurbs is 90 days or the end of your warranty period on the drive you sent in, whichever is later.

Buy a Good Power Supply

Tuesday, September 25, 2007
If you'd like to read the story, feel free. It's not a real page turner, so I'll provide my advice first.

  1. Buy a good power supply. Cheap power supplies are not reliable and can be very tough to troubleshoot unless you are fortunate enough to own a PSU tester that actually works (mine didn't).
  2. Don't blow dust and heat directly at the bottom of your computer (i.e. make sure you don't have a heating vent in a place that directly affects the running temperature of your PC)
  3. When you have many components fail simultaneously, at least suspect the PSU. In my case, since I knew the Motherboard and Processor were dead I figured they could have been the reason that the drives failed.
  4. If you have a suitable, working spare PSU, try that and see if your problems don't go away.
  5. Buy Seagate or Maxtor hard drives. Their RMA process is incredible. They have my business for the rest of my computing days.
And here's the story . . .
I have an HTPC running Snapstream Media's Beyond TV. It's a fantastic application that is easy to get up and running on Windows. Though it's not free and there are certainly alternatives, it doesn't come with subscription fees and due to various historical reasons I won't get into in this post, I have been using it for a while and switching to MythTV would be painful as a result.

So, the rig is:
High-end Asus Motherboard running a Pentium D 850 over-clocked to 3.6GHz.
(2) 300GB Seagate Hard Drives
(1) 400GB Seagate Hard Drive
Creative Labs SoundBlaster Audigy 2ZS
Hauppauge PVR-150MCE
Plextor Mpeg-4 USB TV Capture Device
AMD/ATI X1800XT PCIe Video (I'm outputting 1080i to a Sony HDTV)
Tons of custom software I've written to do various tasks that are shortcomings of BeyondTV.
Antec 500W power supply (not great, but I need 500W to overclock this chip with what this box has installed)
1GB of DDR2 memory, able to run at the speed necessary for overclocking this chip.

About a month ago, the box started rebooting at random. As this machine is heavily overclocked, I assumed I was having heat issues. The first step was to put everything back to spec. Yes, I know: "overclocking bad." In this case, I disagree. The system was incredibly stable for over a year and the parts were hand-picked for the task (including a very nice Zalman CPU cooler).

Things seemed OK for a few days, until another sudden reboot. I had no time to troubleshoot, so I left it on its own for about a week. Unfortunately, when I did get to it ... after hours of troubleshooting ... I discovered the following had happened:
The processor is dead.
The motherboard has a black mark next to a place where a capacitor belonged and there are pieces of it throughout the case, so the board is toast too. The capacitor in question handled power to the processor, which explains my dead chip.
Two of my drives are clicking, data is unreadable. I don't back this machine up because in my short-sightedness I thought "gee, it's only TV shows, I can just record them again later". Unfortunately, one of the dead drives had all of that custom software on it and I hadn't backed up that folder in over a month. Oops!

So I put a few things on order, and send away for warranty repair on a few others. I decided to go AMD with an Athlon X2 4200+ and a suitable Asus Motherboard and for good measure, pick up a Maxtor 500GB hard drive. Still figuring this was a heat issue, I decide to go a little crazy on cable organization and end up with a very suitable, clean, airflow friendly rig.

...Except, my hard drives are clicking, I'm getting read/write failures and the one good drive that I had left is now reporting a S.M.A.R.T. failure (useless feature). I'm at a loss at this point. I've replaced almost everything in this box and it's still failing. I go down the path of troubleshooting drivers, even installing different operating systems (I always wanted a Myth box!). Nothing resolves the issue.

Suddenly it occurs to me that the drives only fail when all three are plugged in to power. They didn't even need to be plugged into the motherboard, just power. It dawns on me that I have a dead PSU. Well, not actually dead, but failing under heavy load conditions.
Something I had failed to notice is that while BeyondTV is recompressing video and I'm simultaneously watching video, my system becomes totally unstable. Similarly in Linux when I'm running the graphics test and load testing, the system becomes unstable.
Under idle or even minor load, the system is fine (mostly).

Back to Newegg, this time I purchased a very overpriced Pure80 600W power supply.

Now, there's a twist here that I completely missed when I installed the system the first time. The Antec power supply is located in the back/right side of the case (as is typical). When the case is positioned in my stand, the back right part of the case sits immediately above a heat register which during the summer blows cool air and a lot of dust, but in the winter blows a lot of piping hot air. It's a credit to this particular power supply that it didn't die any earlier. I have since blocked off this particular vent to prevent any airflow or dust.

Blank Screen after Setup is Inspecting your computer's hardware configuration in Windows XP

Monday, September 3, 2007
It's been a little while since I've been able to write, but this one hit me again this weekend, and again I spent hours trying to figure out what was going on.

The symptom is this:
Blank Screen occurs when "Setup is Inspecting your Computer's Hardware Configuration"
Blank Screen before Setup even starts.
Further, waiting it out does nothing. The drive eventually spins down and the PC is unresponsive. Of course, the monitor never goes to DPMS mode, so it looks like the video card is still receiving a signal.

This saga was part of my other adventure.

The cause of the black screen is simple:
Windows XP's setup utility cannot properly read the hard disk's partition tables. I'm not sure if it chokes on every Linux installation or just the specific LVM setup I had done with Fedora 7, but a hang immediately after setup starts is almost always unreadable partition tables.

A couple of fixes to try:
Unplug every hard drive and USB, Compact Flash, SD, or other "hard drive like" device except for the one you intend to boot from and install the operating system to. Rerun setup. If successful, plug drives back in one at a time after Windows is installed and updated.
If that doesn't work, you still have options, but the only ones I can present to you will cause your data to be destroyed, so here's hoping you have a backup.
  1. Download Knoppix (or a Linux based live CD that comes ... at least ... with fdisk).
  2. Burn the CD/DVD on another computer.
  3. Boot Knoppix ... wait.
  4. When the GUI comes up (or perhaps "if" the GUI comes up), hit CTRL+ALT+F2, this will get you to a "root shell" in text mode. (The reason I prefer this route is that it doesn't require a working mouse, which I didn't have since Knoppix couldn't initialize it)
  5. type "fdisk /dev/sda", if the hard disk you're dealing with is SCSI or SATA, otherwise type "fdisk /dev/hda" if the hard disk you're dealing with is IDE/EIDE/PATA.
  6. type "d" (for delete), hit enter.
  7. If you have one partition, type "w" (writes out the partition table), and shut down. If you have more than one, type a partition number and repeat until all partitions are gone. You can verify that all partitions are deleted by hitting "p", and seeing if any show up. Note that after you hit "w", you're going to lose all of the data on the partitions that you have deleted.
  8. Rerun Setup.
If you're comfortable with partitioning, skip step 6 and start by hitting "p". This will list out all partitions on that drive. You may find a particular partition that looks suspect. Try deleting the suspect partition and leaving the ones that look good, then rerun setup.
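If you'd like to rehearse the fdisk keystrokes from steps 5-7 without touching a real disk, modern util-linux fdisk will happily operate on an ordinary image file. This is my own practice-run sketch, not part of the original procedure:

```shell
# Create a 10 MB scratch image to stand in for a disk (safe to experiment on):
truncate -s 10M disk.img

# "o" = new DOS partition table, "n p 1" = add one primary partition
# (blank lines accept the default first/last sectors), "w" = write and quit:
printf 'o\nn\np\n1\n\n\nw\n' | fdisk disk.img

fdisk -l disk.img    # the "p"-style listing now shows a disk.img1 partition

# Steps 6-7: "d" deletes the (only) partition, "w" writes the empty table:
printf 'd\nw\n' | fdisk disk.img

fdisk -l disk.img    # no partition rows remain
```

Once the keystrokes feel familiar, substitute /dev/sda or /dev/hda for disk.img, remembering that "w" really does destroy the data.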

Of course, you've booted to a Knoppix CD, so you might try using some of the tools that are included with Knoppix to diagnose your problems, recover some of your data and copy it to another computer or drive, or do any other number of recovery/hardware tests. In my case, the data was already gone, so wiping out the partitions was an easy choice.

.Net Framework 1.1 Installation Fails (rollback at end of installation) on Vista

Sunday, July 15, 2007
I'm partly embarrassed for having to post this, because it identifies that I am guilty of a few sins:

1. Don't run Microsoft Windows Vista. Believe me, I'm not a fan. I wouldn't be doing it except to understand the operating system since I will be supporting it at my job, so please forgive me. I've installed two Ubuntu Linux boxes on the side as penance.
2. Don't run any Microsoft software before Service Pack 1, especially one that changed so much of the underlying OS and rules as Vista did.

That said, while testing software and setting up a dedicated development station, I came across the need to install the .Net Framework 1.1. This is, to date, not an uncommon need as many applications require it. This posed several problems.

Here are a few things to try if you run into this same issue. Note that each one of these instructions requires a good understanding of how your computer works. If you're a novice, call your nephew or local computer geek. Any one of these could do irreparable harm to your system, including rendering it useless and losing all of your data. Back things up before proceeding and use these instructions at your own risk!!

Troubleshooting methodology: try each step, then re-run the installation. If it works, run the SP1 upgrade. If that works, do a Windows Update and get the latest security patches.
Be patient and do one step at a time; if it fails, proceed to the next. Each step is a little riskier than the previous.
  1. Especially if you are running Kaspersky Anti-virus or Zone Labs Internet Security Suite, it's time to turn off your virus protection (don't uninstall unless you have to; typically you just have to right-click the icon and choose "Disable" or "Shut down ZoneAlarm Security Suite"). Most people skip this step because anti-virus in XP rarely interferes to the extent that it does in Vista (and every setup program warns you to turn off anti-virus, even though it is rarely necessary).
    For the .Net Framework 1.1 setup, Kaspersky and Zone Alarm will always interfere. The Framework installation tries to run regtlb.exe to register a number of libraries at the end of installation. Due to an untimely lock on the registry, regtlb will fail (though you won't see any indication that this happened unless you've enabled very aggressive logging and debugging in Windows Installer).
  2. Temporarily disable User Account Control (UAC). The easiest way I've found to do this is:
    (1) Click Start.
    (2) Type "msconfig" in the blank spot above and hit Enter.
    (3) Click "Tools", and scroll down the list.
    (4) Click "Disable UAC" and click "Launch". A command prompt window will appear and indicate, hopefully, that The Command Completed Successfully.
    (5) Reboot.
    Note that this will leave your computer unprotected by UAC, of which I am a big fan (most Linux distributions use similar technologies; I'm glad that Microsoft stole this idea).
    After you've run the installation, repeat steps 1-5 except choose "Enable UAC".
  3. Disable NX (No Execute) / DEP (Data Execution Prevention) if you're utilizing a modern processor (and if you're not, and you're trying to run Vista, downgrade the entire system to XP; you'll thank me later). Note that DEP is also there to protect you, but it often gets in the way. .Net Framework 1.1 came out before DEP, so there are some isolated issues centered around the registration of System.EnterpriseServices.dll. If you're "rolling back" at that point, this is probably your solution.
    (1) Perform the above steps for disabling UAC, it can only help you here.
    (2) Click "Start", type "cmd" and hit Enter.
    (3) Type: bcdedit.exe /set {current} nx AlwaysOff
    (4) Reboot.
    Rerun the installation. If it works, absolutely don't forget to install the service pack for .Net 1.1, it includes fixes for this problem.
    If you want to turn NX back on, repeat step 1-4, but for step (3), type: bcdedit.exe /set {current} nx OptIn
  4. Clean up a failed Framework Installation. Follow the steps linked here. (Please note that he refers to this as a Last Resort, which it is. It is highly invasive)
  5. Are you using an account other than Administrator to do this installation? Try enabling the Administrator account (be sure to assign it a password!) and log in as Administrator. If you have re-enabled DEP and UAC, disable them again. Try the installation.
    The reason this may work is that the Administrator account will likely launch fewer user applications at start-up. When in doubt, make sure everything is as thin as possible and reinstall.
Things to avoid (I tried them, and they caused me grief)
  1. Manual installation. There are a few tricks outlined whereby you disable rollback in the middle of the .Net Framework 1.1 installation and then attempt to install the remaining components by hand. Don't do this. It may appear as though it worked, but your .Net Framework apps will be unstable and some apps will still think the Framework is not installed.
  2. Renaming (or deleting) of the mscoree.dll in c:\Windows\System32. Doing so requires you to take ownership of the file, change the permissions on the file to values that are not safe, and ... of course ... rename the file. You'll find on Vista that the installation will run even shorter and that several components (including Event Viewer) will simply not work. This was a tip from Windows XP and should not be used in Vista.

I'd strongly advise anyone upgrading (if you can call it that) to Vista to install the .Net Framework 1.1 manually before doing any other application installs; make it step two, right after completing setup and installing the Application Compatibility updates. There are thousands of forum posts and articles on how to fix problems with Framework 1.1, which indicates that it's a little touchy to get working. Perhaps in my many re-loads of this development system, I'll outline my recommended baseline software install for Vista.

Many applications still require it and some will even try to install it manually. If this happens, you'll receive an install error for the application and it may not be terribly apparent that the installation error is related to .Net 1.1.

One last tip for those running Vista x64: you've got quite a road ahead of you. I'm running the 32-bit version at the moment, but in my reading I ran into many articles identifying that .Net 1.1 will not install on Vista x64 without the Service Pack. You'll be stuck in a catch-22: Vista x64 requires .Net 1.1 Service Pack 1, but the SP1 installer requires that the non-Service Pack version already be present.
Perhaps an application compatibility update will resolve this soon, but if not, there seems to be one sure-fire way to make it work:
If you're skilled with Windows Installer, give this solution a shot. It will allow you to install both the framework and SP1 in one shot. (The missing last step is to run "netfx.msi" from whatever folder you chose to build the administrative install point from).
Of course, it's not officially supported by Microsoft, so you're treading in tall weeds here.

Much thanks to Aaron Stebner's WebLog and the three thousand forum posts I read that led me to my final answer in this problem.
I'll get around to replying to each one when I have a few days :-)

10 Reasons to Abandon your ISP E-Mail Account

Thursday, June 7, 2007
I'm often asked why I exclusively use GMail and Yahoo! Mail for e-mail, rather than the account that my ISP gave me. My response is usually: "I don't know the ID and Password", but there are many other good reasons to permanently use these alternative services.

So let's start:

  1. Most ISPs don't provide "forwarding" service when you decide to leave them for the competition.
    If you're not planning on switching ISPs right now, you can give your friends and family a heads up. Start by sending a message to everyone you know who has your existing e-mail address. Then set up a mail client rule to forward all messages to your new account on GMail (my preference) or another service.
    For added convenience, set up another rule to reply to those messages with a notice that your e-mail address has changed. This gives your friends a much longer buffer period to update their address books (or ignore you). You can even take it one step further and change the message to say "I no longer check this account; if you want me to receive this message, send it to "".
  2. You're no longer tethered to your ISPs service.
    ISPs know that being hooked into their e-mail service may prevent you from shopping around for a better deal. If you're not tethered to their exclusive services, you can leave for that better deal (assuming you live in an area that actually has broadband competition!). I have close friends who kept their AOL dial-up subscription for years because the pain of giving up their @aol address was too much (they've all moved on now, thankfully ... that, or I've just stopped having friends who would use AOL).
  3. Your passwords aren't sent in plain text.
    Your e-mail account is the window into your world. Most people use their "ISP" e-mail account like they use their home phone: they provide it to those with whom they engage in transactions (such as the credit card company or the bank).
    If you make a habit of using open access points, or public access points, that silly mail client of yours is probably sending your password plain-text every 5 minutes. If someone gets that password, they simply have to collect some e-mail from you to determine who you do business with, visit the site, and use the "Forgot my Password" link, where most services will conveniently send your new password via e-mail.
    Make sure you pick a service that encrypts your ID and Password by default. GMail requires an encrypted connection on their web site, and if you use their free POP access, it'll be encrypted there too.
    I'm dumbfounded by the fact that so many ISPs do not even allow encrypted authentication on their e-mail servers.
  4. You can take your e-mail with you.
    Yes, it's web mail, so if you want to check it from a location other than home, you probably can (assuming the internet connection you're using isn't blocking web mail). In addition, with GMail you can access your e-mail in the same manner you're already likely used to ... via Thunderbird, Outlook, Outlook Express or your favorite mail client, at no additional charge.
    Yahoo! also offers POP3 mail, however, at the time of this writing, you'll have to subscribe to their premium service.
  5. You can send e-mail when you're not on the same network.
    This is similar in nature to the above, and may already apply to your ISP e-mail. Some ISPs only allow you to send mail if you're connected via a set of IP Addresses that they own. This means when you're using an open WiFi Network, you can receive mail just fine, but cannot send it.
    Most ISPs have addressed this limitation, but if yours hasn't, you'll now have the ability to send and receive via either your web based client, or your own local client.
  6. You may find the Web mail features to be better than your mail client.
    I have both a GMail and Yahoo! mail account that I regularly monitor. As far as GMail is concerned, I'd rather use GMail than Thunderbird. It filters spam well, it integrates with their instant messenger client (as does Yahoo!), it allows for lightning fast searching of my old mail, and it redefines the way I work with e-mail by grouping messages in conversations and allowing me to apply labels.
    Plus it's just dead simple to use. Not that I'm confounded by the "intricacies" of Thunderbird, but I find I'm more efficient using Web Mail.
  7. You will probably get more space for your mountains of mail.
    This was the initial reason many people switched to GMail. They offered you 2GB of server side storage for mail (Indeed, some people even use it as a backup service).
  8. You now have that backup you keep meaning to do.
    This comes with a caveat. If you aren't using both a local client and the web client, you're actually in worse shape than if you had only the local client and failed to backup. At least you'd still have the drive or computer that failed if you wanted to pay for costly recovery services. If Yahoo! or GMail loses your mail, you're stuck.
    That said, if you use a local client regularly (and you're leaving your mail on the remote server), you now have redundancy. A lost drive does not mean that all of your e-mail is gone.
  9. Mobile E-Mail Access may be easier and less costly.
    Of course, this always depends on who you choose as your e-mail service provider. For me, accessing my GMail account on my lousy Motorola E815's sorry excuse for a Web Browser is actually quite painless. They did mobile e-mail right, and it shows. It's easier for me to get into their service, check my mail, and get out than it is to use the kludgey POP3 client. And since my cell phone provider requires that I purchase the software to even check e-mail via my mobile phone, it costs less.
  10. SPAM and Malware are handled better.
    Maybe you'll be a little more careful with this new account and not put it on your MySpace profile, or give it to every company that asks for it. You're starting fresh, give it only to trusted sources. Set up more than one account or use services like Mailinator for disposable e-mail addresses.
    In addition, because Google or Yahoo! know about *all* of the spam they're receiving, you may find (as I have) that their spam filtering far exceeds what even the most sophisticated of mail clients can pull off. Sure, your ISP may have server-side spam filtering also, but they probably don't have the user base that Google or Yahoo! Mail does.
    In addition, most Web mail services scan all attachments for viruses (typically only when you use the Web interface). If you've been lax on your Anti-virus updating, this may save you.
There are many more, but do you need more than 10 reasons?
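Reason 3 above can be made concrete with a short sketch using Python's standard poplib module (my illustration; the host and credentials below are placeholders, not from this post). Choosing the encrypted variant is a one-line difference:

```python
import poplib

def pop3_endpoint(use_ssl: bool = True):
    """Pick a POP3 class and port; default to SSL so the password never travels in cleartext."""
    if use_ssl:
        return poplib.POP3_SSL, 995  # entire session is encrypted (what GMail's POP access uses)
    return poplib.POP3, 110          # ID and password are sent as plain text

# Usage sketch (placeholder host/credentials):
# cls, port = pop3_endpoint()
# conn = cls("pop.gmail.com", port)
# conn.user("you@gmail.com"); conn.pass_("secret")
```

If your mail client only offers the plain variant for your ISP's server, that's reason 3 in action.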

SQL Injection Penetration Testing Tools

Thursday, May 24, 2007
I've already walked through the process of mitigating SQL Injection vulnerabilities in code, but even in the most skilled hands with the best code reviewers, you're going to miss something. We're aiming for a system that is as difficult to crack as possible. You can never achieve total security (short of disconnecting the network card and melting all of the parts into a big giant blob, but then I'm sure there's someone who will argue that they can hack the data from it!)

The final rule missing from that post was to perform penetration testing. Doing so can be difficult. If you have a security team that is skilled in the art, it's probably best to leave the real pen testing to them. If you don't, or if you just want to preliminarily test before passing it off to your security team, there are several tools that automate pen testing.
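To see the class of flaw these scanners probe for, here's a minimal sketch of my own (Python with an in-memory SQLite table; the table, names, and payload are invented for illustration). SQL built by string formatting lets a crafted input rewrite the query, while a parameterized query treats the same input as inert data:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (name TEXT, secret TEXT)")
db.execute("INSERT INTO users VALUES ('alice', 's3cret')")

payload = "nobody' OR '1'='1"  # a classic injection probe

# Vulnerable: the payload becomes part of the SQL text, so the OR clause matches every row.
vulnerable = db.execute(
    "SELECT secret FROM users WHERE name = '%s'" % payload).fetchall()

# Safe: the payload is bound as a parameter and matches no user literally named that.
safe = db.execute(
    "SELECT secret FROM users WHERE name = ?", (payload,)).fetchall()

print(vulnerable)  # [('s3cret',)] -- data leaked
print(safe)        # [] -- nothing matched
```

The automated tools essentially throw hundreds of variations on that payload at every input your application exposes.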

Disclaimer: Most security tools can be used for good and evil. Running these tools on software you've written, installed on your own servers, or on servers that you are responsible for securing is "good" -- assuming you're allowed to even use these tools in your organization (obvious, right?). Running this on your online banking site to see if they're "up to spec" will hopefully get you a visit by the feds, especially if you decide to take advantage of the vulnerabilities it finds. In other words, don't do it. And if you do, you deserve the consequences of your actions.

For those who don't work in security, the question is often asked: "Why should these tools even be available? They're gold in the hands of a malicious user." To which the inevitable conclusion is "They should be banned!" This thinking is a form of Security through Obscurity. It simply doesn't work: you can't ban a tool, because secrets are not kept well on the internet. Beyond that, these tools are often more useful to a pen tester than a black hat.
Bear in mind that these tools all have limitations since they're based on pre-programmed attack methods. A truly creative security engineer or black hat might find another method that your tool does not check for.

That said, I ran across a link entitled the Top 15 Free SQL Injection Scanners. Some are designed for specific databases, some are general purpose. You'll find that more than one tool will prove useful, and since they're free, you're out nothing if you give them each a shot. Of course, use them at your own risk. Never test live production servers unless you're operating within whatever maintenance window you have set up and have full (and tested) backups. You have no idea what these tools might actually do.

In addition, as with all penetration testing software, make sure you do a lot of additional research on the tool itself. You wouldn't buy a gun from a "guy on the street" (maybe you would? I wouldn't). If it's open source, that's a good start. Check the community behind it and, if you're skilled enough, review the source code yourself. Always consult Google. If it's commercial, avoid it unless you know the brand very well. Look for any hints of added malware. Unfortunately, you may find that some security sites rank these tools as malware simply because they work as advertised, so research well. If its only fault is that it's designed to hack a web application, that's fine. Remember, we're trying to break into our own applications so we can plug the holes and prevent someone else from discovering them. Nobody wants to get that frantic cell phone call at 3:00 AM on a Saturday.

GET vs. POST - Save your users!

Friday, May 18, 2007
This is one thing I've seen gotten wrong so often that I felt compelled to put some of my thoughts on the subject down here.

First, a little explanation. GET and POST are commands issued to a web server. Your browser uses the commands to request a web page, and provide information to a web application.

In the case of GET, this is often done via a specially formatted query string. The text and information of this query string is visible to the user, assuming your application resides in a normal browser window and not one where the address bar has been hidden from view.
GET can be used as a method for sending form data, and it is often used by search engines. Search for "dog" on Google and you'll find yourself at a URL something like http://www.google.com/search?q=dog.

GET is convenient. As you can see from the above URL, I could just as easily encode a link like that here and you'd be sent away from my blog to a search about a dog. In fact, every link that goes to a web page is a GET operation, not just forms (like Google's search form) that use the GET method.

POST is a method that is primarily designed for sending form data (it can be used in many other ways, but we'll skip those for now). POST data does not appear in the address bar like GET data. It also cannot easily be encoded into a link.
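To make the difference concrete, here's a small sketch in Python (the URL is a made-up example) showing that GET form data is just text appended to the URL, readable by anyone or anything that sees that URL:

```python
from urllib.parse import urlencode, urlparse, parse_qs

# Form data submitted via GET is serialized into the query string,
# so it is visible in the address bar, server logs, proxy logs,
# and browser history.
form_data = {"q": "dog"}
get_url = "http://www.example.com/search?" + urlencode(form_data)
print(get_url)  # http://www.example.com/search?q=dog

# Anyone who sees the URL can read the data right back out:
parsed = parse_qs(urlparse(get_url).query)
print(parsed["q"][0])  # dog
```

POST data, by contrast, travels in the body of the HTTP request, so it never shows up in the address bar or in a casual glance at a log of requested URLs.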

So which should I use and when?

The answer is simple, but the solution is sometimes painful. GET should be used only when the operation it performs is considered safe. Safe means an operation that only looks up data; I've even heard it described as an operation that "asks a question" or "navigates somewhere". In the case of a search engine, you're asking a question. In the case of a link, you're navigating somewhere.

What's more important is knowing when GET should not be used.

Rule #1: Never use GET in a login form or to send sensitive data
I know this should be obvious, but I've seen it before. For those still scratching their heads: when you use GET in a login form, the user ID and password of the logged-in user will appear in the address bar. This is called exposure. :-)
I've seen plenty of older e-commerce sites that send credit card data this way. This is "bad" on more levels than I can get into here. Just don't do it.

Rule #2: Don't use GET for any operation that modifies something.
This is where most people fail. I've logged into my banking site, and I'm looking at my account. Next to my account is a link that says "Delete". It links to a URL something like this (a made-up example):

https://bank.example.com/account?action=delete
My, how convenient is that to code? No form necessary, no JavaScript, no buttons, no hidden values, just an anchor reference. That's also leaner HTML.

Technically it works. Since I'm logged in and have an authenticated, encrypted session, it's thought to be secure (and assuming your account number isn't in that query string, it might actually be).

Unfortunately, some browser extensions and some alternative browsers do something called "look-ahead", and it does exactly what it sounds like: it looks at the page you're on and pre-loads all of the links on it, figuring you're probably going to click one of them. The idea is that if the browser pre-loads all of the links and you click one, it'll already be in your cache and navigation will feel lightning fast. Unfortunately, the look-ahead knows nothing about what each link actually does.

If my bank encoded the delete link that way, and I was running Fasterfox, my bank account might be gone lightning fast, even though I never clicked the link.

Rule #3: Never use GET to send large amounts of data
This is less of a rule and more of a suggestion. Some older web servers and browsers limit the length of a GET request; most nowadays don't. I've even seen some clever hacks that use TinyURL to store entire files.

While writing this, I discovered a great resource over at the W3C. If you're still curious, check it out. Those guys are a lot smarter than I am and probably covered something I missed.

SQL Injection

Wednesday, May 16, 2007
Editor's Note: I wanted to start off with a bit about how I hate needles, but I don't actually have a fear of needles. It would have probably sounded as dumb as the sentence I've just written about why I left the pun out of this entry. I'll stop now . . .

The Injection you really should fear: The SQL Injection.
(sorry, couldn't resist)

To understand why, you must understand what SQL is. SQL stands for Structured Query Language. A SQL server is a database server that implements some form of this language. When people think of databases, they tend to think of them as simple storage and retrieval mechanisms. Non-programmers do not realize that the power of the database resides in the language used to query or manipulate the data.

The fact that interaction with the database is done via a separate language is where the problem resides. You have one set of code (your web language: PHP, C#, Java) writing code in another language (SQL).
This is how a simple web app might look:

User fills out form element, clicks submit which sends the data to a page to display results.
That page connects to the database, and inserts the search string into a query command, gets the data from the database and displays the results.

That second step is where injection can occur. In this case, developers often forget that the query command is interpreted code. A common (and wrong) way of doing this is to create a string containing the code and concatenate the data from the form field onto it, in this manner:

Command = "SELECT Name, ID, Description FROM Products_Table WHERE Name LIKE '%" + users_search_request + "%'";

That's the language mix. Command and users_search_request come from C#, the language the web page was written in, whereas the portion between the quotes is intended for interpretation by the SQL server. This works, but it is insecure.

If the user types Dog Collars, the resulting command is SELECT Name, ID, Description FROM Products_Table WHERE Name LIKE '%Dog Collars%', and everything is fine. If the user instead types '; DROP TABLE Products_Table -- into that same search box, the resulting command is SELECT Name, ID, Description FROM Products_Table WHERE Name LIKE '%'; DROP TABLE Products_Table --%' (the trailing -- comments out the leftover %'). On a server that accepts batched statements, "DROP TABLE Products_Table" has now been executed, and Products_Table is gone.
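To see this in action outside the C# example, here's a runnable sketch using Python and an in-memory SQLite database (the table and rows are invented for the demo). SQLite's execute() only runs one statement at a time, so instead of DROP TABLE this uses the classic "always true" payload to dump rows the search was never meant to return:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Products_Table (Name TEXT, ID INTEGER, Description TEXT)")
conn.execute("INSERT INTO Products_Table VALUES ('Dog Collars', 1, 'Sturdy'), "
             "('Secret Item', 2, 'Internal use only')")

def vulnerable_search(users_search_request):
    # String concatenation: the user's text becomes part of the SQL program.
    command = ("SELECT Name, ID, Description FROM Products_Table "
               "WHERE Name LIKE '%" + users_search_request + "%'")
    return conn.execute(command).fetchall()

print(vulnerable_search("Dog Collars"))   # one row, as intended
# The "always true" payload escapes the quotes and matches everything:
print(vulnerable_search("' OR 'x'='x"))   # every row in the table
```

The second call returns rows the user was never supposed to see, and that's exactly the shift mentioned below: modern attackers care less about destroying data and more about quietly reading it.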
Of course, SQL is very powerful, so far more than table deletion can occur. And today's malicious users are less concerned about defacing, or destroying data, and more concerned about stealing it for fraudulent uses.

Like all things in security, there isn't one magic solution to this problem. Any one of these will reduce your exposure. Using all of them will nearly eliminate it.

Rule #1: Use Database IDs with limited rights. Limited Rights = Limited Exposure
If every piece of newly written code I've ever looked at is any indication, this rule is almost always skipped. Developers often start the coding process using a generic ID, figuring it'll save time during development: they'll get fewer errors due to rights issues and can focus on debugging the newly written code. Just before release, they can switch to the limited-rights account. Of course, that last step is usually missed, and the service goes to production using that same near-administrator account.
In the above example, it is appropriate to use an ID that can only read the values from that table, and only those fields if necessary. I'm a big fan of using more than one ID for different operations: read operations should use a read-only ID limited to the tables it needs, and IDs that need update ability should only be able to update their table(s). Be as granular as your SQL server will allow, and ensure that each account can do only what it needs to do.
The reason this is Rule #1 is that it's one of the only things you can do at the database to protect yourself from bad code.
If your organization has a database management team that is separate from your web developers, they should apply this rule.

Rule #2: Use Built-In Libraries for Querying
The easiest way to prevent code injection is to eliminate the manual writing of SQL code. Most modern languages used in web development expose methods to let you query without writing a query string. Even if you have to write a query string, you can usually do so using parameters, rather than string concatenation (as we did in the above example). The values for the parameters can then be defined using the language, where they are properly sanitized (assuming the language itself isn't plagued with security holes).
This is often seen by developers as the silver bullet that is impervious to SQL Injection. Unfortunately, languages can have security holes too. So you can't simply rely on this being your savior every time.
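As a sketch of what parameterized querying looks like, here's the same product search written with placeholders in Python's sqlite3 module (any mainstream database driver offers an equivalent mechanism):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Products_Table (Name TEXT, ID INTEGER, Description TEXT)")
conn.execute("INSERT INTO Products_Table VALUES ('Dog Collars', 1, 'Sturdy')")

def safe_search(users_search_request):
    # The SQL text is fixed; the user's input is bound as a parameter,
    # so it can never be interpreted as SQL code.
    command = "SELECT Name, ID, Description FROM Products_Table WHERE Name LIKE ?"
    return conn.execute(command, ("%" + users_search_request + "%",)).fetchall()

print(safe_search("Dog Collars"))   # finds the row
print(safe_search("' OR 'x'='x"))   # finds nothing; the payload is just text
```

The query string never changes no matter what the user types, which is the whole point: the attacker's input stays data and never gets a chance to become code.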

Rule #3: Never trust that a user's input is "safe"
Rule #2 should sanitize any data sent to the database, so if you have the option of using built-in libraries, do it; you've already satisfied Rule #3. Don't re-sanitize, or you'll probably break your application. On the other hand, if you must build queries by concatenation, you must sanitize the information submitted by the user.
Most languages used for web development include built-in methods to sanitize input. At a minimum, the input in the above example should have been sanitized to "escape" the apostrophe, which would keep it from being used to close the quoted string and the command.
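If you're stuck concatenating, the minimum escaping described above can be sketched like this (standard SQL represents a literal apostrophe inside a string as two apostrophes; this is a fallback, not a substitute for Rule #2):

```python
def escape_quotes(user_input: str) -> str:
    # Doubling apostrophes keeps user input from closing the quoted
    # string early, which is what the injection above relied on.
    return user_input.replace("'", "''")

payload = "'; DROP TABLE Products_Table"
print(escape_quotes(payload))  # ''; DROP TABLE Products_Table
```

With the apostrophe doubled, the whole payload stays inside the quotes and is treated as an (unlucky) search term rather than a new command.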

Rule #4: Stop displaying error messages to your users.
Some clarification is in order here. It's OK to deliver an error to the user when one occurs; it's not OK to let your web server deliver its debug information along with that error. This is usually not the default setting, but we developers certainly like it for debugging.
A common method used by malicious users is to inspect your system by intentionally sending malformed input with the hopes of getting an error.
In the above example, I could have submitted invalid SQL, in which case I would likely have received the line of code it failed on and some or all of the text of the SQL command. I could use this to further attack the database: with that error message, I now know the table name and the column names that the receiving page expects. I could then probe for other common table names using something like '; SELECT Name, 1 AS ID, Password AS Description FROM users, and continue "guessing" table names until I found one that fits.
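One way to follow this rule, sketched in Python (the wrapper function and message are hypothetical): log the full failure server-side, and hand the user a generic message that reveals nothing about the schema:

```python
import logging
import sqlite3

logger = logging.getLogger("webapp")

def run_query(conn, command, params=()):
    try:
        return conn.execute(command, params).fetchall()
    except Exception as exc:
        # The full detail (failed statement, traceback) goes to the
        # server-side log, where only we can read it...
        logger.exception("query failed: %r", command)
        # ...while the user sees nothing that maps out our database.
        raise RuntimeError("Something went wrong. Please try again later.") from exc

conn = sqlite3.connect(":memory:")
try:
    run_query(conn, "SELECT * FROM no_such_table")
except RuntimeError as err:
    print(err)  # Something went wrong. Please try again later.
```

The attacker probing with malformed input now learns only that something failed, not which table, column, or line of SQL did the failing.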

Optional: Use Stored Procedures if the SQL server you're using supports it
There are differing opinions on this, but when coupled with Rule #1, it can be very powerful. A stored procedure will not by itself prevent the above problem, but if the ID only has rights to execute that stored procedure and nothing else, your malicious user is effectively stopped. Stored procedures can enforce business rules as well, but they can make managing an application more difficult, since the application's code is now effectively spread between the database and the application.
I'm personally a fan of stored procedures, as long as they're managed properly.

Optional: Automatically IP Ban suspect users
This is a weak protection method, but if the risk is high enough, it may be worth doing. Look for patterns in user input like '; DROP TABLE, and update your web server's security rules to ban the offending IP address. It won't stop attackers, but it will slow them down.

Best Practices for Helping Code Review
Documentation is boring, I know. As a wise developer said to me (today, actually), you can spend time writing code, or you can spend time writing about code. This is how most developers feel. Documentation is very important, but it always feels like time that could be better spent solving problems.
When documenting code, blocks that touch databases (or other areas where your code is "writing code") should be clearly commented. This allows you to do a detailed review and focus on the areas that are the most likely to be attacked.