Gentoo Linux RAID Benchmark Timing Study

R. J. Brown, Elijah Laboratories Inc. <rj@elilabs.com>
08-Oct-2004
A study was conducted on a hardware platform intended to be replicated and used for certain disk-intensive applications (network monitoring and network filesystem backup) to determine the performance characteristics of various RAID configurations running under the Gentoo distribution of the Linux operating system. For the targeted applications, disk write performance is considered the limiting factor. The goal was to provide information to help decide which operating system version and which RAID configuration was best suited to supporting these tasks.

It is desirable for the targeted applications to have the redundancy provided by RAID. In addition, the mirroring capability provided by RAID-1 makes it possible to instantly take a complete disk drive containing the entire data of interest off-line without affecting the ability of the application to continue to operate. This is very desirable if a system or network is compromised by an intrusion exploit or otherwise exhibits unusual behavior. In particular, being able to instantly preserve the state of the system has applications to post-event forensics and to evidence gathering for legal prosecution.
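For illustration only, taking one mirror of a running RAID-1 array off-line and later returning a drive to the array might look like the following minimal sketch. It assumes the raidtools userspace (the same package that reads the /etc/raidtab file mentioned in the Addendum); the array and partition names are placeholders, not the devices actually used in this study.

# Mark one mirror faulty and remove it from the running array
# (placeholders: /dev/md0 is the array, /dev/hdg1 is the mirror partition).
raidsetfaulty /dev/md0 /dev/hdg1
raidhotremove /dev/md0 /dev/hdg1
# The removed drive now holds a complete, self-consistent copy of the data.
# Later, a fresh drive can be added back; the kernel rebuilds the mirror in the background.
raidhotadd /dev/md0 /dev/hdg1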
Hardware Environment

The hardware used was a mid-level PC motherboard, a K7 Triton GA-7S748, fitted with an AMD Athlon XP 2000+ CPU and 1 GB of DDR DRAM. The disk drives used in the RAID were three (3) WDC WD2500JB-00GVA0 ATA disk drives.

Test Scenario

After some initial trial runs, it was determined that the default reconstruction speed limit parameters in /proc/sys/dev/raid were unsuitable for the targeted applications. These parameters were modified by the following commands:

echo 20000 > /proc/sys/dev/raid/speed_limit_max   (only for 2.4.26-gentoo-r9 RAID-5, or else only the reconstruction daemon runs)
echo 5000 > /proc/sys/dev/raid/speed_limit_min
echo 10000 > /proc/sys/dev/raid/speed_limit_min

The actual read and write tests were performed by the following commands:

Write test: dd if=/dev/zero count=<sectors> of=/dev/<drive>
Read test:  dd of=/dev/null count=<sectors> if=/dev/<drive>

The execution of these commands was timed by the time command.
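Putting these pieces together, a single timed write-and-read pass against the array might look like the following minimal sketch; the device name and sector count are placeholders, not the exact values used in the study.

#!/bin/sh
# Timed raw write test: stream zeros onto the RAID device
# (dd's default block size is 512 bytes, so count is a sector count).
DEVICE=/dev/md0        # placeholder: the md device under test
SECTORS=2000000        # placeholder: number of sectors to transfer
time dd if=/dev/zero of=$DEVICE count=$SECTORS
# Timed raw read test: read the same number of sectors back and discard them.
time dd if=$DEVICE of=/dev/null count=$SECTORS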
Results

All tests were performed directly on /dev/md0 as a raw device, without any filesystem present on that device; these results therefore represent the performance of the RAID only, and not that of any particular filesystem. Any filesystem overhead will add to these times accordingly.

This study clearly shows the speed advantage of the 2.6.8 kernel over the 2.4.26 kernel, especially for disk write operations. In fact, the 2.6.8 kernel proved so superior that the 2.4.26 kernel was not even tested with a RAID-1 configuration. The only situation in which the 2.4.26 kernel was superior was disk read operations. Since the limiting factor in the targeted applications is disk write speed, the faster read speed of the 2.4.26 kernel was not considered a significant advantage. For certain other applications, this better read speed could be a consideration that might influence a decision to use the 2.4.26 kernel instead of the 2.6.8 kernel.

As mentioned above, the mirroring of RAID-1 is desirable in the targeted applications. It was expected that the write speed of RAID-1 might be significantly slower than that of RAID-5, but this proved not to be the case. Furthermore, the CPU idle time observed during the write tests showed lower CPU utilization for RAID-1 writes than for RAID-5 writes, since no parity calculation is required for RAID-1 writes. Even with 3 active mirror drives, the performance of RAID-1 was superior. Likewise, the performance of RAID-1 was superior when the array was operating in a degraded mode with only one drive, and during reconstruction after a degradation had been remediated by the addition of a fresh drive.

Tabulated Data
Addendum
The benchmark testing reported in this study was continued after the above report was written. As it turned out, it was necessary, for several reasons, to switch to the 2.6.7 kernel using the hardened-dev-sources: it was discovered that EVMS did not work properly under 2.6.8, and neither did cdrecord when burning DVDs. After the switch to 2.6.7, the benchmarks were re-run on that version of the kernel, and the various filesystems were then also benchmarked while running RAID-1 in partitions rather than on the whole disk.

The platform being developed needed to run RAID-1 on 3 separate disks, and needed to run it in partitions, to allow for the possibility that a future replacement drive might not be exactly the same size as the drives the array was originally constructed with. It also needed to run with the entire RAID-1 encrypted, so that when a drive was removed and kept as a backup copy, theft of that drive would not compromise the data contained on it. To this end, timing studies were conducted using encryption on RAID-1 running in partitions on 3 separate drives under reiserfs, as that seemed like the optimal filesystem for this project, given the results that had already been obtained.

Finally, studies were run to determine the effect of the chunk-size parameter in the /etc/raidtab file. As it turned out, that parameter seems to have little effect on the speed of the RAID-1 pseudo-device.
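For reference, an /etc/raidtab describing a three-way RAID-1 array built from partitions, with the chunk-size parameter that was varied in these tests, might look like the following sketch; the partition names and chunk-size value are placeholders, not the exact configuration used in the study.

# /etc/raidtab sketch: 3-way RAID-1 mirror over partitions (placeholder devices)
raiddev /dev/md0
    raid-level              1
    nr-raid-disks           3
    nr-spare-disks          0
    chunk-size              32
    persistent-superblock   1
    device                  /dev/hda5
    raid-disk               0
    device                  /dev/hdc5
    raid-disk               1
    device                  /dev/hde5
    raid-disk               2

The array would then be initialized with mkraid /dev/md0, with the encryption layer and filesystem built on top of the resulting md device.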
So here are the additional results: