Disk Setup… at home
The first step in building my research lab file server was to set up the external hard disks and get them running on a Linux computer (in the comfort of my own home) running the same OS as the Dell PowerEdge 6650 server.
Just a reminder, here is the hardware that I bought.
External Enclosure
Silicon Image 4726 chipset based 5-bay eSATA II hard disk enclosure. The manufacturer doesn't really matter; it's the chipset that I care about, because that is what does the digital work. Anyway, the SIL4726 processor can handle RAID processing itself, offloading the burden from your PC or controller card. You could call this cheap hardware RAID. 🙂
PCI eSATA Card
Silicon Image 3124 chipset based PCI RAID/eSATA II controller card. This product was marketed by Syba as the SD-SATA2-2E2I. It also has the ability to act as a RAID controller, but don't consider it hardware RAID: it requires a special driver that does the RAID work in your OS.
Home Build
As I said, I went to work setting this up at home. The computer I had available was an old self-built PC based on an ECS K7S5A motherboard. I installed CentOS 5.1, as that is what I plan to run on the server.
No Boot
After putting the card in, I couldn't get my computer to boot. It would hang at the SIL3124 RAID BIOS. This was easily fixed by downgrading the card's firmware to the previous release from siliconimage.com.
Linux Failure
The first thing I learned about this pair of devices is that the open-source Linux drivers in kernel 2.6.18 can't see them. No matter what I tried, the disks in the enclosure were never recognized by Linux. So I figured I would test the hardware in a Windows box before sending it back as DOA.
Windows Success
I installed the hardware in a Windows computer (Server 2003), installed the driver found at http://www.siliconimage.com/, and the enclosure and disks were instantly available. Wow, that was easy! I guess the problem is with Linux. While the device was connected to Windows, I used the RAID manager for the external enclosure to configure a RAID 1 mirror on the disks in the enclosure.
More Linux Failures
With the OS identified as the problem, I looked for help. The manufacturer hasn't released a driver since RHEL 4 or Fedora Core 3. OK, that's a dead end. So I was confined to the open-source driver. I did some reading and found that there had been major enhancements to the open-source driver in recent kernels, so I updated to the latest kernel (2.6.25). This required me to download and compile a vanilla kernel from kernel.org. The new kernel did present some different symptoms, but still failure!

I was almost ready to give up when I found the linux-ide mailing list. That has the developers writing the open-source drivers on it! As a last-ditch effort I emailed them. Thankfully, the next day I got a reply informing me that the open-source driver cannot handle the external enclosure operating in hardware RAID mode. Apparently when it acts as hardware RAID it has to impersonate a single disk to the OS (simple enough). The problem is that the SIL4726 does a horrible job of the impersonation and does not abide by the defined communication standards, so the open-source developers could not realistically support the irregular device. He told me to disable hardware RAID in the enclosure and set it up as just a bunch of disks (JBOD).
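One way to see whether the driver has accepted the enclosure is to check the kernel log. On a real box the check is just dmesg | grep -i 'port multiplier'; the sketch below runs the same grep against an illustrative log line, since in JBOD mode the sata_sil24 driver should attach the SIL4726 as a 5-port SATA port multiplier. The exact wording and the 0x1095:0x4726 IDs in the sample are assumptions and will vary by kernel version.

```shell
# In JBOD mode the kernel should log the enclosure as a SATA port multiplier.
# On real hardware:  dmesg | grep -i 'port multiplier'
# The sample log line below is illustrative only.
sample='ata2.15: Port Multiplier 1.1, 0x1095:0x4726 r23, 5 ports'

if echo "$sample" | grep -qi 'port multiplier'; then
    echo "enclosure attached as a port multiplier"
else
    echo "no port multiplier in the log"
fi
```

If the grep comes up empty on your machine, the driver never got past the enclosure's hardware-RAID impersonation, which is exactly the failure described above.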
Linux Success
I was disappointed to turn off the RAID function in my external RAID enclosure, but I was willing because I had read that Linux software RAID is pretty darn good. With hardware RAID off and the latest Linux kernel (2.6.25), all of the disks in the enclosure were finally detected!
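With the disks finally visible, the mirror I gave up in hardware can be rebuilt in software. A minimal sketch with mdadm, assuming the two enclosure disks came up as /dev/sdb and /dev/sdc (those names are an assumption; check dmesg or fdisk -l for the real ones, because the --create step wipes whatever is on the disks):

```shell
# Build a software RAID 1 mirror from two of the enclosure's disks.
# WARNING: destructive -- double-check the device names first.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc

# Watch the initial resync
cat /proc/mdstat

# Filesystem + mount (ext3 was the CentOS 5 default)
mkfs.ext3 /dev/md0
mkdir -p /mnt/raid
mount /dev/md0 /mnt/raid

# Record the array so it assembles at boot
mdadm --detail --scan >> /etc/mdadm.conf
```

The array is usable immediately; the background resync just means reduced performance until the mirror is in sync.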
#1 by Albert on July 12, 2008 - 2:55 am
Hi,
I want to install my Venus T4S on my Debian Etch box. How can I turn off the hardware RAID function? I want to use software RAID so I can run RAID 5 with the Venus box. Thanks.
#2 by Ryan on July 12, 2008 - 9:56 am
Hi Albert.
To turn off hardware RAID, I had to connect the box to a Windows computer and install the software from http://www.siliconimage.com: the drive manager for the SIL4726, plus the Windows driver for the controller card (a SIL3132 PCI-E card in that machine).
Then, using the drive manager GUI, I was able to configure the array as JBOD.
Just for reference, I had a lot less trouble setting the box up with Windows (10 minutes) than with Linux (2 weeks). But of course my main problem with Linux was the hardware RAID.
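Once the Venus box is in JBOD mode and Debian sees all the disks, the RAID 5 itself is one mdadm command. A rough sketch, assuming your four disks show up as /dev/sdb through /dev/sde (verify with fdisk -l first; creating the array wipes them). The 500 GB figure in the capacity math is just an example:

```shell
# Software RAID 5 across the four JBOD disks (destructive!):
#   mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde
#   cat /proc/mdstat   # watch the parity build
# Usable space is (n - 1) disks' worth; e.g. with four 500 GB disks:
n=4; size_gb=500
echo "usable: $(( (n - 1) * size_gb )) GB"
```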
-Ryan
#3 by Dan on July 15, 2009 - 5:44 pm
Ryan,
Great post. I'm looking at a 4 (or 5) bay enclosure with the SIL4726 chipset onboard. If I can find a Vista/Server 2008 x64 driver for the Syba SIL3124 PCI card, I will be running that as well. I noticed you had no issues with Windows running RAID, which is good news for me.
I was wondering two things though…
Does the SIL4726 support running RAID 1 on drives 1 and 2, while SPANNING drives 3 and 4? Or can you SPAN 1+2+3 and leave 4 independent? Or RAID 1 on 1+2 and leave 3 and 4 JBOD? Is there any limitation on the configuration of the hardware RAID?
If you have 4 drives in the enclosure on one eSATA port connected to the SYBA card, are the other 3 ports available? Can you have more than 4 drives attached to the device?
Thanks.
#4 by Ryan on July 15, 2009 - 7:27 pm
Just a note: when I was running RAID under Windows, it was Silicon Image's driver and their management tool doing the configuration. On to your questions.
1) As far as I know, the answer is yes. The tool seemed quite flexible in configuring the RAID setup. I think there was a wizard mode for the common simple stuff and an advanced mode that lets you pick individual disks out of the 5 in the enclosure and set their RAID config. I don't remember which RAID types are supported by the enclosure running in hardware mode, so you might want to check on that. And JBOD should not be a problem alongside a RAID, as long as the SATA card you are connected to supports port multipliers (PMP).
2) Although I didn't test it, I don't see why not. Performance will degrade with simultaneous access, though, because all that data has to be crammed through the PCI bus.