Archive for category Operating Systems


I finally got the hardware working for the Linux file server. Yay! The next step is to start configuring software for moving all of the users' files to the server.

To do this I decided to set up a new LDAP directory server using Fedora Directory Server. The initial steps weren't too hard. Of course the Install Guide and PAM Configuration how-to helped. Within a couple of hours I had the server running and a user able to log in. The harder steps were customizing it for our systems and getting it to work with Samba so that users could mount their home directories from Windows.
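To give a feel for the kind of directory entry this involves, here is a rough sketch of a user that carries both the POSIX attributes (for Linux logins) and the Samba attributes (for Windows home-directory mounts). Every value below — the suffix, uid, numbers, and SID — is a made-up placeholder, not our real directory.

```shell
# Hypothetical LDIF for a combined POSIX + Samba user entry.
# All names and values are illustrative placeholders.
cat > testuser.ldif <<'EOF'
dn: uid=jdoe,ou=People,dc=example,dc=com
objectClass: top
objectClass: person
objectClass: posixAccount
objectClass: sambaSamAccount
cn: John Doe
sn: Doe
uid: jdoe
uidNumber: 10001
gidNumber: 10001
homeDirectory: /home/jdoe
loginShell: /bin/bash
sambaSID: S-1-5-21-1111111111-2222222222-3333333333-21002
EOF

# It would then be loaded with the standard LDAP client tools, e.g.:
# ldapadd -x -D "cn=Directory Manager" -W -f testuser.ldif
```

The `posixAccount` objectClass is what PAM/NSS uses for Linux logins, while `sambaSamAccount` (from the Samba 3 schema) is what lets Samba authenticate the same user for Windows shares.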

PCI, PCI-X, 3.3V, 5V. HUH?

In “What no Hardware RAID?” I got the external enclosure working on my home Linux computer. In this post I describe the problems I had getting it to work in a Dell PowerEdge 6650 server.

Drop the card in

I went down to the data center one afternoon thinking I had a 20 minute job: put a PCI card in a server, connect the enclosure, and reboot. Well, it turned out life is not that easy. No matter what I did, Linux would load the driver for the RAID controller card and attempt to find the disks connected to it, but every time it failed to find them.


My first thought was that I had screwed up the kernel build. So I rebuilt my vanilla kernel (2.6.25) from scratch. Still no success. My next thought was that maybe the server motherboard did not like the version of the firmware on my RAID controller card. That seemed likely, since my home computer would not boot with one version of the firmware. So for the next day I tried a myriad of kernel versions, RAID controller firmware revisions, and combinations of the two!
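For anyone curious what "rebuilt my vanilla kernel from scratch" looked like, the cycle I was repeating went roughly like this (a sketch, not a recipe — paths and the bootloader step vary by distro, and you'd run this as root):

```shell
# Rough sketch of the vanilla-kernel rebuild cycle (assumes the
# 2.6.25 source tarball has already been downloaded to /usr/src).
cd /usr/src
tar xjf linux-2.6.25.tar.bz2
cd linux-2.6.25
cp /boot/config-$(uname -r) .config    # start from the running distro's config
make oldconfig                         # answer prompts for any new options
make                                   # build the kernel and modules
make modules_install install           # install modules and kernel image
```

Starting from the distro's `/boot/config-*` file rather than a bare `make defconfig` keeps the driver set close to what CentOS shipped, which matters when the whole point is testing one driver.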

With several of those combinations exhausted, the department sysadmin recommended that I try moving the PCI card to PCI slot 1 on the server. That slot was PCI only, while the remaining 7 slots were PCI + PCI-X. I had originally placed the card in slot 8 because it was closest to the external enclosure, and the 1-meter cable wouldn't reach from slot 1. So for the purposes of the test I set the external enclosure on the floor. I tested, the system still did not work, so I moved the card back to slot 8 and the enclosure back to its shelf. And so went another day of troubleshooting.

Getting more frustrated, I began to accept that maybe this RAID controller was fighting with the PERC SCSI RAID controller in the server. Unfortunately, the test was to remove the PERC card, which would take down the internal disks holding the OS. I eventually got up the guts to do that… still no luck.

Then I read, in detail, every user review of these two products (the RAID controller card and the external enclosure) on newegg. In a couple of reviews people complained that newegg specified the card as PCI-X when it is actually PCI. I didn't see that as a problem, because I was using PCI + PCI-X slots. Then I reached a review that said the card will only work at 5V, despite fitting into a 3.3V PCI slot. Huh?

In comes Wikipedia. Now I see: PCI-X operates at 3.3V. Older revisions of PCI operate at 5V; a newer revision operates at 3.3V. The standard uses different physical ‘notches’ in the bottom edge of a PCI card to keep you from inserting it into an incompatible slot. As you can see in the figure below, the bottom two slots are longer and have differently placed vertical ‘bars’. Those bars indicate what voltage is on that slot. A bar near the left (closer to the edge of the motherboard) indicates 3.3V. A bar near the right (as in the top slot) indicates 5V. The bottom two slots in this picture are PCI-X slots, which are what I was using.

Top: PCI 5V slot, Bottom 2: PCI-X 3.3V slots

Now take a look at the PCI RAID controller again.

Syba SD-SATA2-2E2I

You can clearly see there are two ‘notches’ cut into the card edge. Two notches indicate that the card can be inserted into either a 3.3V or a 5V slot and will handle the difference itself. And indeed, the card physically fits either slot type in the picture above.

Back to the newegg user review. Is it true that the card only works at 5V even though it was made to fit into 3.3V slots? A quick email to the manufacturer revealed that the reviewer is correct. The Syba SD-SATA2-2E2I, a Silicon Image SIL3124-based card, will only work in 5V PCI slots. What a bust! I wasted 3 days trying to make it work in a 3.3V slot, all because the manufacturer keyed the card incorrectly.

The fix was easy: I went back and put the card into PCI slot 1. On the first boot my hard drives were detected by Linux (CentOS 5.1 with vanilla kernel 2.6.25)! Yay! Why didn't this work a few days ago when I tried that slot? I have no clue.

So now I have my SIL4726 enclosure working with my SIL3124 PCI card on Linux!

What no hardware RAID?

Disk Setup… at home

The first step in building my research lab's file server was to set up the external hard disks and get them running on a Linux computer (in the comfort of my own home) running the same OS as the Dell PowerEdge 6650 server.

Just a reminder, here is the hardware that I bought.

External Enclosure


A Silicon Image 4726 chipset based 5-bay eSATA II hard disk enclosure. The manufacturer doesn't really matter; it's the chipset that I care about, because that is what does the digital work. Anyway, the SIL4726 processor can handle RAID processing itself, offloading that burden from your PC or controller card. You could call this cheap hardware RAID. 🙂


Syba SD-SATA2-2E2I

A Silicon Image 3124 chipset based PCI RAID/eSATA II controller card, marketed by Syba as the SD-SATA2-2E2I. This product also has the ability to act as a RAID controller, but don't consider it hardware RAID: it requires a special driver that does the RAID work in your OS.

Home Build

As I said, I went to work setting this up at home. The computer I had available was an old self-built PC based on an ECS K7S5A motherboard. I installed CentOS 5.1, as that is what I plan to run on the server.

No Boot

After putting the card in, I couldn't get my computer to boot. It would hang at the SIL3124 RAID BIOS. This was easily fixed by downgrading the card's firmware to the previous release.

Linux Failure

The first thing I learned about this pair of devices is that the open-source Linux drivers in kernel 2.6.18 can't see them. No matter what I tried, the disks in the enclosure were never recognized by Linux. So I figured I would test the hardware in a Windows box before sending it back as DOA.
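"Never recognized" here means checks along these lines kept coming up empty (a sketch of the diagnostic loop; `dmesg` generally needs root, and the exact module name assumes the kernel's `sata_sil24` driver for this chipset):

```shell
# Is the driver module for the SIL3124 loaded?
lsmod | grep sata_sil

# What did the driver say while probing the card and links?
dmesg | grep -i sata

# Did the enclosure's disks ever show up as sd* block devices?
cat /proc/partitions
```

On the 2.6.18 kernel the driver would load and probe, but no `sd*` devices for the enclosure ever appeared in that last listing.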

Windows Success

I installed the hardware in a Windows computer (Server 2003), installed the driver, and the enclosure and disks were instantly available. Wow, that was easy! I guess the problem is with Linux. While the device was connected to Windows, I used the RAID manager for the external enclosure to configure a RAID 1 mirror on the disks in the enclosure.

More Linux Failures

With the OS identified as the problem, I looked for help. The manufacturer hasn't released a Linux driver since RHEL 4 / Fedora Core 3. OK, that's a waste. So I was confined to the open-source driver. I did some reading and found that there had been major enhancements to the open-source driver in recent kernels, so I updated to the latest kernel (2.6.25). This required me to download and compile a vanilla kernel. The new kernel did present some different symptoms, but still failure!

I was almost ready to give up when I found the linux-ide mailing list. The developers writing the open-source drivers are on it! As a last-ditch effort I emailed them. Thankfully, the next day I got a reply informing me that the open-source driver cannot handle the external enclosure operating in hardware RAID mode. Apparently, when it acts as hardware RAID it has to impersonate a single disk to the OS (simple enough). The problem is that the SIL4726 does a horrible job of the impersonation and does not abide by the defined communication standards, so the open-source developers could not realistically support the irregular device. He told me to disable hardware RAID in the enclosure and set it up as just a bunch of disks (JBOD).

Linux Success

I was disappointed to turn off the RAID function in my external RAID enclosure, but I was willing because I had read that Linux software RAID is pretty darn good. With RAID off and the latest Linux kernel (2.6.25), all of the disks in the enclosure were finally detected!
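With the enclosure in JBOD mode, rebuilding the mirror in Linux software RAID would look roughly like this (a sketch only — it assumes two of the enclosure's disks appeared as `/dev/sdb` and `/dev/sdc`, which will differ on your system, and the commands need root):

```shell
# Create a software RAID 1 mirror from two of the JBOD disks
# (device names are assumptions for illustration).
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc

# Watch the initial resync progress.
cat /proc/mdstat

# Put a filesystem on the mirror (ext3 was the CentOS 5 default).
mkfs.ext3 /dev/md0

# Persist the array definition so it assembles on boot.
mdadm --detail --scan >> /etc/mdadm.conf
```

Unlike the SIL4726's built-in RAID, `md` presents a standard block device, so the kernel driver never has to cope with the enclosure impersonating a single disk.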