In a previous post I described how I deployed an IBM-branded FusionIO drive on Red Hat Enterprise Linux 5.
I am now running that same card on CentOS 6, using the new version (2.2.3) of IBM's driver. There is now also a version 3 of the driver out for some cards, but I'm not sure whether IBM has released it for their High IOPS cards yet.
[root@srv ~]# cat /etc/redhat-release
CentOS release 6.2 (Final)
The kernel I am running is stock RHEL 6:
[root@srv ~]# uname -a
Linux example.com 2.6.32-220.17.1.el6.x86_64 #1 \
  SMP Wed May 16 00:01:37 BST 2012 x86_64 x86_64 x86_64 GNU/Linux
This is what I see in terms of PCI devices for the FusionIO cards:
[root@srv ~]# lspci | grep -i fusion
8f:00.0 Mass storage controller: Fusion-io ioDimm3 (rev 01)
90:00.0 Mass storage controller: Fusion-io ioDimm3 (rev 01)
So the card is physically installed in the server, but the driver has not been loaded, so the devices are not usable yet. Note also that one 640GB card actually appears to the OS as two 320GB devices.
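Since each Duo card presents two PCI functions, a quick sanity check is to count them. Here is a small sketch; the sample data is copied from the lspci output above so it runs anywhere, and the variable names are mine:

```shell
# Count Fusion-io PCI functions; an ioDrive Duo presents two.
# On the real host you would use: devices=$(lspci | grep -ci fusion)
lspci_out='8f:00.0 Mass storage controller: Fusion-io ioDimm3 (rev 01)
90:00.0 Mass storage controller: Fusion-io ioDimm3 (rev 01)'
devices=$(printf '%s\n' "$lspci_out" | grep -ci fusion)
cards=$((devices / 2))          # two ioDimm functions per Duo card
echo "$devices functions, $cards Duo card(s)"
```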
First, we download the zip file containing the RPMs from IBM.
Warning: These drivers are for the IBM version of the FusionIO cards. If you are not running the IBM version you probably need different drivers and RPMs.
# wget ftp://download2.boulder.ibm.com/ecc/sar/CMA/XSA/ibm_dd_highiop_ssd-2.2.3_rhel6_x86-64.zip
SNIP!
Inside that zip are several RPMs:
[root@srv tmp]# mkdir fio
[root@srv tmp]# cd fio/
[root@srv fio]# unzip ../ibm_dd_highiop_ssd-2.2.3_rhel6_x86-64.zip
Archive:  ../ibm_dd_highiop_ssd-2.2.3_rhel6_x86-64.zip
  inflating: rhel6/fio-common-2.2.3.66-1.0.el6.x86_64.rpm
  inflating: rhel6/fio-firmware-highiops-101583.6-1.0.noarch.rpm
  inflating: rhel6/fio-snmp-agentx-2.2.3.66-1.0.el6.x86_64.rpm
  inflating: rhel6/fio-sysvinit-2.2.3.66-1.0.el6.x86_64.rpm
  inflating: rhel6/fio-util-2.2.3.66-1.0.el6.x86_64.rpm
  inflating: rhel6/high_iops-gui-2.2.3.66-1.1.noarch.rpm
  inflating: rhel6/iomemory-vsl-2.2.3.66-1.0.el6.el6.src.rpm
  inflating: rhel6/iomemory-vsl-2.6.32-71.el6.x86_64-2.2.3.66-1.0.el6.el6.x86_64.rpm
  inflating: rhel6/libfio-2.2.3.66-1.0.el6.x86_64.rpm
  inflating: rhel6/libfusionjni-2.2.3.66-1.0.el6.x86_64.rpm
So far I haven't installed all of those RPMs on these servers, only a subset. Let's install the ones we need:
[root@srv rhel6]# yum localinstall --nogpg \
    fio-common-2.2.3.66-1.0.el6.x86_64.rpm \
    libfio-2.2.3.66-1.0.el6.x86_64.rpm \
    fio-util-2.2.3.66-1.0.el6.x86_64.rpm \
    fio-sysvinit-2.2.3.66-1.0.el6.x86_64.rpm \
    fio-firmware-highiops-101583.6-1.0.noarch.rpm \
    iomemory-vsl-2.6.32-71.el6.x86_64-2.2.3.66-1.0.el6.el6.x86_64.rpm
SNIP!
Transaction Test Succeeded
Running Transaction
  Installing : fio-util-2.2.3.66-1.0.el6.x86_64                   1/6
  Installing : fio-common-2.2.3.66-1.0.el6.x86_64                 2/6
  Installing : iomemory-vsl-2.6.32-71.el6.x86_64-2.2.3.66-1.0.e   3/6
  Installing : libfio-2.2.3.66-1.0.el6.x86_64                     4/6
  Installing : fio-sysvinit-2.2.3.66-1.0.el6.x86_64               5/6
  Installing : fio-firmware-highiops-101583.6-1.0.noarch          6/6

Installed:
  fio-common.x86_64 0:2.2.3.66-1.0.el6
  fio-firmware-highiops.noarch 0:101583.6-1.0
  fio-sysvinit.x86_64 0:2.2.3.66-1.0.el6
  fio-util.x86_64 0:2.2.3.66-1.0.el6
  iomemory-vsl-2.6.32-71.el6.x86_64.x86_64 0:2.2.3.66-1.0.el6.el6
  libfio.x86_64 0:2.2.3.66-1.0.el6
The sysvinit RPM contains an init script and its sysconfig file:
[root@srv rhel6]# rpm -qpl fio-sysvinit-2.2.3.66-1.0.el6.x86_64.rpm
/etc/init.d/iomemory-vsl
/etc/sysconfig/iomemory-vsl
Let’s chkconfig this on permanently.
[root@srv rhel6]# chkconfig iomemory-vsl on
We also need to enable iomemory-vsl in /etc/sysconfig/iomemory-vsl.
[root@srv init.d]# cd /etc/sysconfig
[root@srv sysconfig]# grep ENABLED iomemory-vsl
# If ENABLED is not set (non-zero) then iomemory-vsl init script will not be
#ENABLED=1
[root@srv sysconfig]# vi iomemory-vsl
[root@srv sysconfig]# grep ENABLED iomemory-vsl
# If ENABLED is not set (non-zero) then iomemory-vsl init script will not be
ENABLED=1
[root@srv sysconfig]#
Now we can start (or restart) iomemory-vsl. The "stop" step fails here simply because the module wasn't loaded yet:

[root@srv sysconfig]# service iomemory-vsl restart
Stopping iomemory-vsl:
Unloading module iomemory-vsl                              [FAILED]
Starting iomemory-vsl:
Loading module iomemory-vsl
Attaching: [                    ] (  0%) /
Attaching: [====================] (100%) \ fioa
Attaching: [====================] (100%) fiob
                                                           [  OK  ]
At this point I'm going to reboot the server as well, just to make sure everything comes back up if the server restarts unexpectedly, which has been known to happen. ;)
[root@srv sysconfig]# reboot
Now after the reboot there are a couple more block storage devices on this server:
[root@srv ~]# ls /dev/fio?
/dev/fioa  /dev/fiob
We want to create an LVM physical volume (PV) on that block device:
[root@srv ~]# pvcreate /dev/fioa
  Device /dev/fioa not found (or ignored by filtering).
Oops, an error message. What went wrong? Well, the "or ignored by filtering" part is where to start looking. This FusionIO knowledge base entry (which you have to log in to see; how annoying is that?) shows that we need to add an entry to the lvm.conf on the server:
Locate and edit the /etc/lvm/lvm.conf configuration file. Add an entry similar to the following to that file:

types = [ "fio", 16 ]
That is precisely what I will do.
[root@srv lvm]# grep types lvm.conf
# List of pairs of additional acceptable block device types found
# types = [ "fd", 16 ]
types = [ "fio", 16 ]
# let's see if the types were loaded
[root@srv ~]# lvm dumpconfig | grep types
types=["fio", 16]
[root@srv ~]# pvcreate /dev/fioa
  Physical volume "/dev/fioa" successfully created
[root@srv ~]# pvcreate /dev/fiob
  Physical volume "/dev/fiob" successfully created
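If you manage more than one of these servers, the lvm.conf edit can be scripted instead of done by hand in vi. This is just a sketch of mine, not from the FusionIO docs; it assumes a stock RHEL 6 lvm.conf whose devices { section opener starts at the beginning of a line, and the function name is made up:

```shell
# Idempotently add the "fio" types entry to an lvm.conf-style file.
add_fio_type() {
    conf="$1"
    # already present? then do nothing
    grep -q '"fio", 16' "$conf" && return 0
    sed -i '/^devices {/a\    types = [ "fio", 16 ]' "$conf"
}

# On the real host (back up the file first):
#   cp /etc/lvm/lvm.conf /etc/lvm/lvm.conf.bak
#   add_fio_type /etc/lvm/lvm.conf
#   lvm dumpconfig | grep types
```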
Now we create a volume group and add both PVs to it.
[root@srv ~]# vgcreate hiops /dev/fioa
  Volume group "hiops" successfully created
[root@srv ~]# vgextend hiops /dev/fiob
  Volume group "hiops" successfully extended
[root@srv ~]# vgs
  VG     #PV #LV #SN Attr   VSize   VFree
  hiops    2   0   0 wz--n- 504.91g 504.91g
  system   1   9   0 wz--n-  58.56g  36.66g
  vm       1  11   2 wz--n-   1.31t 228.09g
I should note at this point that there is only 504.91g in the hiops volume group, when there should be about 600g.
Previously, using the fio-format command, I had formatted these drives to only 80% capacity. But that was on another server, and I’m not sure it’s really necessary to do that unless you are looking for extreme performance or perhaps additional reliability.
I believe that in some cases with SSD, PCIe or otherwise, it’s not a bad idea to use less than 100% of the drive. That said, if you are looking to max out these drives performance-wise, I’d suggest talking to your vendor rather than just listening to me. :)
(AFAIK, these cards can actually take an external power source to increase performance even more. But we don’t use that functionality.)
So I'm going to reformat these drives to 100% capacity, just for fun. Why not get back that ~100g? The performance and endurance at 100% are going to be fine for our usage.
Note: Brand new drives won’t have to be formatted. I’m only doing this because I had formatted the drives when they were in the other server.
Warning: Reformatting will obviously delete any data on these drives!
# first detach /dev/fioa
[root@srv ~]# fio-detach /dev/fct0
Detaching: [====================] (100%) -
[root@srv ~]# fio-format -s 100% /dev/fct0
Creating a device of size 322.55GBytes (300.40GiBytes).
Using block (sector) size of 512 bytes.
WARNING: Formatting will destroy any existing data on the device!
Do you wish to continue [y/n]? y
Formatting: [====================] (100%) \
Format successful.
# then attach...
[root@srv ~]# fio-attach /dev/fct0
Attaching: [====================] (100%) - fioa
And we can add that device back with pvcreate and then we should see a larger drive:
[root@srv ~]# pvcreate /dev/fioa
  Physical volume "/dev/fioa" successfully created
[root@srv ~]# pvs /dev/fioa
  PV         VG    Fmt  Attr PSize   PFree
  /dev/fioa  hiops lvm2 a-   300.40g 300.40g
I reformatted the other side of the drive back to 100% as well. (With new drives this shouldn’t be necessary.)
And the fio-status now is:
[root@srv ~]# fio-status

Found 2 ioDrives in this system with 1 ioDrive Duo
Fusion-io driver version: 2.2.3 build 66

Adapter: ioDrive Duo
    IBM 640GB High IOPS MD Class PCIe Adapter, Product Number:68Y7381 SN:XXXXX
    External Power: NOT connected
    PCIE Power limit threshold: 24.75W
    Sufficient power available: Unknown
    Connected ioDimm modules:
      fct0: IBM 640GB High IOPS MD Class PCIe Adapter, Product Number:68Y7381 SN:XXXXX
      fct1: IBM 640GB High IOPS MD Class PCIe Adapter, Product Number:68Y7381 SN:XXXXX

fct0    Attached as 'fioa' (block device)
    IBM 640GB High IOPS MD Class PCIe Adapter, Product Number:68Y7381 SN:XXXXX
    Alt PN:68Y7382
    Located in slot 0 Upper of ioDrive Duo SN:XXXXX
    PCI:8f:00.0
    Firmware v5.0.6, rev 101583
    322.55 GBytes block device size, 396 GBytes physical device size
    Sufficient power available: Unknown
    Internal temperature: avg 50.2 degC, max 51.2 degC
    Media status: Healthy; Reserves: 100.00%, warn at 10.00%

fct1    Attached as 'fiob' (block device)
    IBM 640GB High IOPS MD Class PCIe Adapter, Product Number:68Y7381 SN:XXXXX
    Alt PN:68Y7382
    Located in slot 1 Lower of ioDrive Duo SN:XXXXX
    PCI:90:00.0
    Firmware v5.0.6, rev 101583
    322.55 GBytes block device size, 396 GBytes physical device size
    Sufficient power available: Unknown
    Internal temperature: avg 46.3 degC, max 46.8 degC
    Media status: Healthy; Reserves: 100.00%, warn at 10.00%
Finally we can create a logical volume (lv) to use.
[root@srv ~]# vgs hiops
  VG    #PV #LV #SN Attr   VSize   VFree
  hiops   1   0   0 wz--n- 300.40g 300.40g
[root@srv ~]# lvcreate -n test -L10.0G /dev/hiops
  Logical volume "test" created
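From here the usual next step is to put a filesystem on the new LV and mount it. The paths and mount options below are examples of mine, not from this setup; noatime is a common choice for flash since it avoids a metadata write on every read:

```shell
# Sketch: build the fstab entry for the new LV (example paths/options).
lv=/dev/hiops/test
mnt=/mnt/hiops
fstab_line="$lv  $mnt  ext4  defaults,noatime  0 0"
echo "$fstab_line"

# On the real host, as root:
#   mkfs.ext4 "$lv"
#   mkdir -p "$mnt"
#   echo "$fstab_line" >> /etc/fstab
#   mount "$mnt"
```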
If you have any corrections or other comments, please let me know!