Monday, September 04, 2006

Vista on VMware is Very Vivacious

I just installed the 32-bit version of Windows Vista Beta 2 on VMware Workstation 5.5.1, and so far things have worked great. The installation was actually very easy (at least compared with what I've read about Beta 1). I simply downloaded the Vista Beta 2 ISO file from Microsoft (3.13 GB), created a new virtual machine using the Windows Vista (experimental) selection in VMware, configured the virtual machine to use the ISO image as its CD-ROM drive, gave it a 30GB disk and 1GB of memory, and booted the machine. The Vista installer launched, installed everything, and that was it.

The installation process all occurred in 16-color mode, which is kind of ugly, but after the first reboot I installed VMware Tools. After rebooting the virtual machine again, the VMware SVGA driver kicked in and the interface switched to 32-bit color. It looks good. So far VMware has performed wonderfully. I'm just getting into Vista to see the differences... a lot of cosmetic ones so far, but it seems to run well.
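For reference, the setup above corresponds roughly to .vmx entries like these (a sketch only: the key names follow VMware's config-file format, but the exact guestOS value for the experimental Vista type and the ISO file name are assumptions, not copied from my actual machine):

```
# Hypothetical .vmx excerpt matching the setup described above
# 1GB of guest memory
memsize = "1024"
# experimental Windows Vista guest type (value assumed)
guestOS = "winvista"
# attach the downloaded ISO as the CD-ROM drive
ide1:0.present = "TRUE"
ide1:0.deviceType = "cdrom-image"
ide1:0.fileName = "vista_beta2.iso"
```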

Semi-Universal Binaries

I ran into an issue a few months back when trying to get an OS X app to run on the new MacTel boxes. Although Xcode now comes with the ability to build Universal Binaries from the same source code, there are still issues when you link in third-party libraries. That was my problem. My application used Berkeley DB, and although the source is available (I used it to build the static library I linked into the PowerPC version of my software), I wasn't sure how a Universal Binary version of this library would behave. I didn't want to take the chance of breaking my existing version of the software to accommodate the Universal Binary version.

So my solution was to build two separate versions and use a bootstrapper application to launch the appropriate one. Essentially, I have a PowerPC version of my app that links in the PowerPC build of Berkeley DB and an Intel version that links in the Intel build. Each version is loaded into the Resources directory of the bootstrapper application. When the bootstrapper (which itself is a Universal Binary) launches, it simply detects the architecture it's running on and then launches the appropriate version of the application. This approach is a little awkward, I know, but it has the advantage of letting you keep using the code you know works while at the same time supporting a new platform.
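The launch logic is simple enough to sketch in a few lines of shell. This is illustrative, not my actual bootstrapper, and the bundle names (MyApp-ppc.app, MyApp-intel.app) are made up for the example; the one real detail it relies on is that `uname -p` reports "powerpc" on PPC Macs and "i386" on Intel Macs:

```shell
#!/bin/sh
# Hypothetical bootstrapper sketch; app bundle names are illustrative.

# pick_app maps the host CPU architecture name to the bundle to launch.
pick_app() {
    case "$1" in
        powerpc*) echo "MyApp-ppc.app" ;;    # PPC Macs report "powerpc"
        *)        echo "MyApp-intel.app" ;;  # Intel Macs report "i386"
    esac
}

APP=$(pick_app "$(uname -p)")
echo "$APP"
# A real bootstrapper would then hand off to the chosen bundle, e.g.:
# exec open "$(dirname "$0")/../Resources/$APP"
```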

Not all servers are created equal

I recently constructed a server using the Intel SR2400 2U chassis and Intel SE7320VP2 motherboard. This was an interesting experience in that it deviated from my past experience in building servers. In the past I've built servers by mixing and matching parts to get the best price/performance ratio. In this case, I was looking explicitly for a 2U rackmountable case that would support six drives. The SR2400 was one of the few choices available to me (I'm not a professional server builder, so I don't have access to the suppliers that those folks do). In any case, by choosing the SR2400, I locked myself into a "mostly Intel" solution, since this chassis requires one of two Intel motherboards.

This was my first experience with what I'll call a "professional-grade" server chassis. What I quickly learned was that it was not enough to simply acquire the chassis and the motherboard; it was also necessary to separately acquire a riser module, a backplane, a control module, a rail kit, and a custom slimline CD/DVD drive. Although the modular nature of this design makes the initial acquisition of parts a little more difficult (at least for me; I had to go to two different vendors to get everything), it is great to have the flexibility to construct various kinds of servers from the same chassis. I know this is nothing new for people who build these things for a living, but for me it was an important lesson to learn.

My final setup, besides the motherboard and chassis, was a SATA backplane supporting five hot-swappable drives; five 250GB drives, with four in a RAID 5 configuration and one as a hot spare; dual single-core 3.0GHz Xeon CPUs; 4GB of system memory (expandable to 16GB); and a 3ware 9550SX SATA II RAID controller.
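The back-of-the-envelope capacity math for that array is worth spelling out, since RAID 5 gives up one drive's worth of space to parity and the hot spare contributes nothing until a drive fails:

```shell
#!/bin/sh
# Usable-capacity math for the array described above (illustrative).
DRIVE_GB=250
RAID5_DRIVES=4        # drives in the RAID 5 set; a fifth drive is the hot spare

# RAID 5 stores one drive's worth of parity, so usable = (N - 1) * drive size
USABLE_GB=$(( (RAID5_DRIVES - 1) * DRIVE_GB ))
echo "${USABLE_GB}GB usable"   # prints: 750GB usable
```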
Also interesting is the difference between constructing a 2U server versus a 4U server. I'm guessing that 2U chassis are probably more tailored to specific hardware due to the space constraints. For example, the SR2400 chassis came with baffling that directed airflow from a set of fans near the front of the chassis, over the CPUs and DIMM modules, and then out the back of the chassis. Because of the size of this baffling, it was necessary to remove the fans from the CPU heatsinks. But the airflow from the chassis fans was more than sufficient to compensate for the loss of the CPU fans; plus, the CPU fans vent air upwards, whereas the chassis fans vent air along the length of the chassis and out the back.
Anyway, I've come to better appreciate some of the engineering that goes into constructing these professional-grade server components. I've also learned that a server by any other name does not necessarily smell just as sweet...