UPDATE 06/20/2013 – Included steps to update Dell firmware and BIOS. I also updated when to set up the /etc/multipath.conf file, updated the order of some software install steps, and fixed some formatting and mistakes to make the process smoother.

UPDATE 03/30/2013 – The original post used older tools to try to make this work. I had issues with multipath and decided to call Dell, as my MD3000i was still in warranty. While they would NOT support this configuration or my OS, they did offer me the latest Resource CD and answered a few questions along the way. Thanks to their help, and various blogs on the interwebs, I can confidently connect CentOS 6.4 to a Dell MD3000i utilizing MPIO. I’ve also submitted this blog to my technician, along with my /etc/multipath.conf file, to share with the Dell ESS team, so that if any other Dell customers inquire about this setup, they have a reference. I’ve successfully set up three different Dell servers using this procedure.

Recently I worked on a project to stand up a CentOS based XEN environment using Dell hardware. I’ve used Linux in the past, mostly on test machines and for specific software vendor builds. My experience and day-to-day in my career has been in Windows Server administration, so I decided to extensively document my configuration and experience during the setup over a series of posts.

The hardware involved (minus switching, which is Cisco) in this project is all Dell, specifically Dell PE servers (M600s) and a Dell MD3000i. The XEN servers will be utilizing the iSCSI space for 3.8TB of R10 storage. We are using some specialty software that requires CentOS, so the base OS for these boxes will be CentOS 6.4. Choosing CentOS 6 allows us to use XEN v4.

Dell supports RHEL6, so they inadvertently support CentOS 6 as well; however, the MD3000i product is EOL, no further firmware updates have been released, and Dell support will not officially cover RHEL6 on it. I documented my setup and configuration of the host components as I went and turned it into this guide, so that anyone searching for help with this particular setup can find it.

http://en.community.dell.com/techcenter/os-applications/w/wiki/red-hat.aspx

 

Before beginning this guide, I assume that you have your server built fresh and are ready to begin configuration. Need help installing CentOS 6? You can find the ISO here:

http://isoredirect.centos.org/centos/6/isos/x86_64/

 

Below is a good installation guide for CentOS 6.0:
http://linuxmoz.com/how-to-install-centos-6-linux-for-servers-desktops/

 

Once you’ve got your CentOS server built and setup we can begin this guide. This guide assumes that:

-You have planned out and provisioned your network(s)
-You have basic Linux command line knowledge; no need to be an expert (mounting disks, using vi, etc.)
-You have installed CentOS 6.x minimal installation

 

Now let’s get started. Below are the steps we will perform to take a CentOS 6.x host, specifically 6.4, from fresh build to XEN host with iSCSI back-end storage.

 

First we get the server on the network.

*Make sure you set up the appropriate adapter. This guide assumes eth0. To confirm the adapter, do the following:

# ls /etc/sysconfig/network-scripts/

Look for the adapters named “ifcfg-xxxx”, yours may be em1 or p3p1 depending on your situation.

You can view the below file for a breakdown of how your system maps network adapters to MAC addresses and manufacturers. For example, (igb) and (0x8086:——) indicate Intel, while (bnx2) and (0x14e4:—–) indicate Broadcom.

# cat /etc/udev/rules.d/70-persistent-net.rules
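If you want the name-to-MAC mapping without eyeballing the whole file, a little sed does it. This is just a sketch; the sample rules lines below are illustrative, not from a real server (on a real box, point it at /etc/udev/rules.d/70-persistent-net.rules):

```shell
# map_nics FILE: print "NAME -> MAC" for each adapter entry in a
# persistent-net rules file. The sample below is illustrative only.
map_nics() {
    sed -n 's/.*ATTR{address}=="\([^"]*\)".*NAME="\([^"]*\)".*/\2 -> \1/p' "$1"
}

# Demo against an embedded sample. On a real box:
#   map_nics /etc/udev/rules.d/70-persistent-net.rules
sample=$(mktemp)
cat > "$sample" <<'EOF'
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:11:22:33:44:55", ATTR{type}=="1", KERNEL=="eth*", NAME="eth0"
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:11:22:33:44:66", ATTR{type}=="1", KERNEL=="eth*", NAME="eth1"
EOF
map_nics "$sample"
# eth0 -> 00:11:22:33:44:55
# eth1 -> 00:11:22:33:44:66
```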

 

Setup management NIC ETH0:
================
# vi /etc/sysconfig/network-scripts/ifcfg-eth0

Make sure you change ONBOOT and BOOTPROTO as I have below:

DEVICE=eth0
HWADDR=(Edited)
TYPE=Ethernet
UUID=(Edited)
ONBOOT=yes
NM_CONTROLLED=yes
BOOTPROTO=static
IPADDR=192.168.100.10
NETMASK=255.255.255.0
GATEWAY=192.168.100.1

 

Setup iSCSI NIC ETH1
================
# vi /etc/sysconfig/network-scripts/ifcfg-eth1

*Important: no GATEWAY is needed for the iSCSI network, just like on a Win box

DEVICE=eth1
HWADDR=(Edited)
TYPE=Ethernet
UUID=(Edited)
ONBOOT=yes
NM_CONTROLLED=yes
BOOTPROTO=static
IPADDR=10.0.0.10
NETMASK=255.255.255.0

 

Restart Network service:
================
# service network restart

 

Setup DNS:
================
# vi /etc/resolv.conf

 

Enter in your DNS servers here:
================
nameserver 8.8.8.8
nameserver 8.8.4.4
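If you’re scripting your builds, the file is trivial to generate. A sketch (the demo writes to a temp file so nothing on your system is touched; on the real box point RESOLV_CONF at /etc/resolv.conf):

```shell
# Generate resolv.conf from a space-separated list of DNS servers.
# Demo writes to a temp file; use RESOLV_CONF=/etc/resolv.conf for real.
DNS_SERVERS="8.8.8.8 8.8.4.4"
RESOLV_CONF="${RESOLV_CONF:-$(mktemp)}"

: > "$RESOLV_CONF"                      # truncate / start fresh
for ns in $DNS_SERVERS; do
    echo "nameserver $ns" >> "$RESOLV_CONF"
done

cat "$RESOLV_CONF"
# nameserver 8.8.8.8
# nameserver 8.8.4.4
```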

 

Let’s go ahead and disable SELinux here. If it’s fully enforced it can cause issues:
================
# vi /etc/sysconfig/selinux

# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
# enforcing - SELinux security policy is enforced.
# permissive - SELinux prints warnings instead of enforcing.
# disabled - No SELinux policy is loaded.
SELINUX=disabled
# SELINUXTYPE= can take one of these two values:
# targeted - Targeted processes are protected,
# mls - Multi Level Security protection.
SELINUXTYPE=targeted
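If you’d rather script the edit than open vi, a sed one-liner handles it. This sketch runs against a temp copy so it’s safe to try anywhere; on the real server, run the same sed against /etc/sysconfig/selinux. Note the change takes effect at the next reboot (`setenforce 0` relaxes the running session immediately):

```shell
# Demo: switch SELINUX= to disabled via sed, against a temp copy.
# On a real box, target /etc/sysconfig/selinux instead (and reboot).
conf=$(mktemp)
cat > "$conf" <<'EOF'
SELINUX=enforcing
SELINUXTYPE=targeted
EOF

sed -i 's/^SELINUX=.*/SELINUX=disabled/' "$conf"
grep '^SELINUX=' "$conf"
# SELINUX=disabled
```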

 

Install wget; it’s not installed by default in minimal configuration builds:
================
# yum -y install wget

 

We also need to install perl in order to add the Dell Linux repos:
================
# yum -y install perl

 

You need to install glibc; if you do not, the Dell install disc will give you an error. We also need Java:
================
# yum -y install glibc.i686
# yum -y install java

 

Install these two Dell repositories and then install srvadmin tools:
================
# wget -q -O - http://linux.dell.com/repo/community/bootstrap.cgi | bash
# wget -q -O - http://linux.dell.com/repo/hardware/latest/bootstrap.cgi | bash

(The firmware repository is no longer maintained, so we won’t add this one.)

# yum -y install srvadmin-all

 

As part of the OMSA installation, we need to poke a hole in the firewall for the port needed, 1311:
================
# vi /etc/sysconfig/iptables

 

Here is what you will need to add (note that it must go BEFORE the REJECT lines, since iptables processes rules top to bottom):
-A INPUT -m state --state NEW -m tcp -p tcp --dport 1311 -j ACCEPT

# Firewall configuration written by system-config-firewall
# Manual customization of this file is not recommended.
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
-A INPUT -p icmp -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 22 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 1311 -j ACCEPT
-A INPUT -j REJECT --reject-with icmp-host-prohibited
-A FORWARD -j REJECT --reject-with icmp-host-prohibited
COMMIT
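Because iptables evaluates rules top-down, if the 1311 ACCEPT lands after the catch-all REJECT it will never match and OMSA’s web port stays blocked. Here’s a quick sanity-check sketch (the demo runs against an embedded sample; on the box, point it at /etc/sysconfig/iptables):

```shell
# check_order FILE: succeed only if the port-1311 ACCEPT appears before
# the INPUT catch-all REJECT. Demo file is an embedded sample.
check_order() {
    accept=$(grep -n -- '--dport 1311 -j ACCEPT' "$1" | cut -d: -f1 | head -n1)
    reject=$(grep -n -- '-A INPUT -j REJECT' "$1" | cut -d: -f1 | head -n1)
    [ -n "$accept" ] && [ -n "$reject" ] && [ "$accept" -lt "$reject" ]
}

rules=$(mktemp)
cat > "$rules" <<'EOF'
-A INPUT -m state --state NEW -m tcp -p tcp --dport 22 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 1311 -j ACCEPT
-A INPUT -j REJECT --reject-with icmp-host-prohibited
EOF

if check_order "$rules"; then
    echo "rule order OK"
else
    echo "1311 rule is after the REJECT - move it up"
fi
# rule order OK
```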

 

Save the file and restart iptables:
================
# service iptables restart

 

Now let’s look at the srvadmin status, enable the services, and then start them:

# /opt/dell/srvadmin/sbin/srvadmin-services.sh status
dell_rbu (module) is stopped
ipmi driver is stopped
dsm_sa_datamgrd is stopped
dsm_sa_eventmgrd is stopped
dsm_sa_snmpd is stopped
dsm_om_shrsvcd is stopped
dsm_om_connsvcd is stopped

# /opt/dell/srvadmin/sbin/srvadmin-services.sh enable
racsvc 0:off 1:off 2:on 3:on 4:on 5:on 6:off
instsvcdrv 0:off 1:off 2:off 3:on 4:off 5:on 6:off
dataeng 0:off 1:off 2:off 3:on 4:off 5:on 6:off
dsm_om_shrsvc 0:off 1:off 2:off 3:on 4:off 5:on 6:off
dsm_om_connsvc 0:off 1:off 2:off 3:on 4:off 5:on 6:off

# /opt/dell/srvadmin/sbin/srvadmin-services.sh start
Starting Systems Management Device Drivers:
Starting dell_rbu: [ OK ]
Starting ipmi driver: [FAILED]
Starting Systems Management Device Drivers:
Starting dell_rbu: Already started [ OK ]
Starting ipmi driver: [FAILED]
Starting DSM SA Shared Services: [ OK ]
Starting DSM SA Connection Service: [ OK ]

 

Notice that the ipmi driver fails, oh man… That’s ok, thanks to the interwebs we have a solution from way back in ’06:

http://lists.us.dell.com/pipermail/linux-poweredge/2006-December/028773.html

Adam Williams wrote:
> try running modprobe ipmi_si and modprobe ipmi_devintf and if that
> works put them and srvadmin-services.sh start in /etc/rc.d/rc.local

In my case, running the modprobe for ipmi_devintf worked, followed by running the service start again.

# modprobe ipmi_devintf

# /opt/dell/srvadmin/sbin/srvadmin-services.sh start
Starting Systems Management Device Drivers:
Starting dell_rbu: Already started [ OK ]
Starting ipmi driver: Already started [ OK ]
Starting Systems Management Data Engine:
Starting dsm_sa_datamgrd: [ OK ]
Starting dsm_sa_eventmgrd: [ OK ]
Starting dsm_sa_snmpd: [ OK ]
DSM SA Shared Services is already started
DSM SA Connection Service is already started

 

We just need to make sure this runs at startup, so as Adam suggested, I added these to the proper file:

# vi /etc/rc.d/rc.local

#!/bin/sh
#
# This script will be executed *after* all the other init scripts.
# You can put your own initialization stuff in here if you don’t
# want to do the full Sys V style init stuff.

touch /var/lock/subsys/local
modprobe ipmi_devintf
/opt/dell/srvadmin/sbin/srvadmin-services.sh start

 

Now we should check for and update our BIOS and firmware. Thanks to a post over at how2centos, we can do this quickly and easily. Run the following commands:

# yum install dell_ft_install
# yum install $(bootstrap_firmware)
# inventory_firmware
# update_firmware
# update_firmware --yes

 

Install and configure iSCSI Initiator:
================
# yum -y install iscsi-initiator-utils

 

Configure a proper name for your initiator. IMPORTANT – make sure this is 20 characters or less, otherwise you’ll get an error when trying to log in to the targets you discover; it will say the connection was refused. This bit me, hard, so don’t let it bite you:
================
# vi /etc/iscsi/initiatorname.iscsi

 

Use something like this:
================
InitiatorName=iqn.yyyy-mm:san.hostname

Whatever you decide, make sure you are able to identify the host and that the name does NOT match another server’s. We’ll finish the configuration of the iSCSI adapter later. In order to configure the MD3000i host components on your CentOS 6.x system, we need to install some components.
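As a sketch, you can generate and length-check the name in one go. The hostname and date below are examples, not values from my setup; note the name drops the reverse-domain portion of the full iqn convention to stay under the limit, since what matters here is that it is short and unique:

```shell
# Build a short initiator name and enforce the 20-character limit that
# bit me during discovery. HOST_SHORT is an example hostname.
HOST_SHORT="${HOST_SHORT:-xen01}"
IQN="iqn.2013-06:${HOST_SHORT}"

if [ "${#IQN}" -le 20 ]; then
    echo "InitiatorName=${IQN}"
else
    echo "initiator name too long (${#IQN} chars): ${IQN}" >&2
fi
# InitiatorName=iqn.2013-06:xen01
```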

 

Now we need to dedicate an interface to iSCSI. I am going to create an interface called “ieth1” that will use eth1 for iSCSI. You’ll need to evaluate your setup and set your interface and ifname the way you’d like:
================
# vi /var/lib/iscsi/ifaces/ieth1

iface.iscsi_ifacename = ieth1
iface.net_ifacename = eth1
iface.hwaddress = default
iface.transport_name = tcp
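Scripted, that file can be generated from the NIC name. A sketch (the demo writes under a temp directory; the real path is /var/lib/iscsi/ifaces/). open-iscsi’s iscsiadm can also manage ifaces (e.g. iscsiadm -m iface -I ieth1 --op=new), but I created the file by hand:

```shell
# make_iface DIR NIC: write an iface file binding iSCSI traffic to NIC.
# Demo uses a temp dir; the real directory is /var/lib/iscsi/ifaces/.
make_iface() {
    dir=$1
    nic=$2
    cat > "$dir/i$nic" <<EOF
iface.iscsi_ifacename = i$nic
iface.net_ifacename = $nic
iface.hwaddress = default
iface.transport_name = tcp
EOF
}

demo=$(mktemp -d)
make_iface "$demo" eth1
cat "$demo/ieth1"
```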

 

 

Now let’s go ahead and install the multipath device driver:
================
# yum install device-mapper-multipath

We need to set up the multipath.conf file, but this will be done later. If we created it now, the Dell installer would add to it and complicate things.

Now we can finally get the Dell tools and software for our MD3000i installed on our CentOS 6.4 host. You can download the most recent Dell MD3000i Resource CD here (thanks again to Justin and the unknown Linux colleague over at ESS):

http://ftp.us.dell.com/FOLDER01021520M/1/DELL_MDSS_Consolidated_RDVD_4_1_0_88.iso

This Resource CD contains the drivers and software for the MD3000/i, MD32xx/i, and MD35xx products. It also has the drivers for coexistence, should you want to connect to multiple MD series models concurrently from the same host. You will either need to burn the CD and insert it into your server or, easier yet, mount the ISO via iDRAC or an NFS share.

Screenshots below for iDRAC mount:

Pic_1

When you launch iDRAC viewer, here is where you mount the media:

Pic_2

We’ll need to create a directory and mount the CD, regardless of which method you chose.
================
# mkdir /mnt/cd
# mount /dev/cdrom /mnt/cd

 

Note – if you have more than one CD/DVD drive connected to the server, you may get the following:

mount /dev/cdrom /mnt/cd
mount: you must specify the filesystem type

On one of my servers this was the case because I had a USB DVD drive attached. Try /dev/cdrom1 in this case.

 

Now that we have the Resource CD mounted, we’ll need to navigate to the root of the CD and run the installer. My entries are shown at the prompts below. I have a Windows server that has the MDSM utility installed and manages several of these SANs, so I am only interested in the host components for this project. Further, this is the only scenario I’ve tested, and most likely, if you are running a Linux server, you don’t have a GUI anyway.

TLDR: Only install host components, use a Windows server to manage MDSM.

Make sure that you select 5, for MD3000i when prompted during the installation.
================

# cd /mnt/cd
# ./autorun
Preparing to install…
Extracting the JRE from the installer archive…
Unpacking the JRE…
Extracting the installation resources from the installer archive…
Configuring the installer for this system’s environment…

Launching installer…

===============================================================================
Choose Locale…
—————-

1- Deutsch
->2- English
3- Español
4- Français

CHOOSE LOCALE BY NUMBER: 2
===============================================================================
Dell MD Storage Software (created with InstallAnywhere)
——————————————————————————-

Preparing CONSOLE Mode Installation…

 
===============================================================================
Welcome
——-

This wizard installs the software necessary to discover, configure, manage, and
monitor all Dell PowerVault MD Series Storage Arrays available on your network.

PRESS <ENTER> TO CONTINUE:

 

===============================================================================
License Agreement
—————–

Installation and Use of Dell MD Storage Software Requires Acceptance of the
Following License Agreement:

No file found to preview

DO YOU ACCEPT THE TERMS OF THIS LICENSE AGREEMENT? (Y/N): y

 

===============================================================================
Installation Type
—————–

Please choose the Install Set to be installed by this installer.

->1- Full (Recommended)
2- Management Station
3- Host Only

4- Customize…

ENTER THE NUMBER FOR THE INSTALL SET, OR PRESS <ENTER> TO ACCEPT THE DEFAULT
: 3

 

===============================================================================
Choose MD Series Model
———————-

Choose the MD Series storage array that you are connecting to the host server.
If selecting a fibre channel option, no other models may be chosen. If choosing
a non-fibre channel option, multiple selections are allowed.

1- Fibre Channel (MD3600f, MD3620f, MD3660f)
2- iSCSI (MD3200i, MD3220i, MD3260i, MD3600i, MD3620i, MD3660i)

3- SAS (MD3200, MD3220, MD3260)
4- SAS (MD3000)
5- iSCSI (MD3000i)

ENTER A COMMA-SEPARATED LIST OF NUMBERS REPRESENTING THE DESIRED CHOICES, OR
PRESS <ENTER> TO ACCEPT THE DEFAULT: 5

 
===============================================================================
Configuration Utility
———————

The Modular Disk Configuration Utility configures the iSCSI network of host
servers and iSCSI-based Modular Disk storage arrays via a wizard-driven
interface.

Would you like to automatically run the Modular Disk Configuration Utility the
first time the system is rebooted?

->1- Yes (Recommended)
2- No

ENTER THE NUMBER FOR YOUR CHOICE, OR PRESS <ENTER> TO ACCEPT THE DEFAULT:: 2

 
===============================================================================
Installation Location
———————

Where would you like to install?

Default Install Folder: /opt/dell/mdstoragesoftware

ENTER AN ABSOLUTE PATH, OR PRESS <ENTER> TO ACCEPT THE DEFAULT
:

 

===============================================================================
Installation Summary
——————–

Please Review the Following Before Continuing:

Product Name:
Dell MD Storage Software

Install Folder:
/opt/dell/mdstoragesoftware

Link Folder:
/root

Install Set:
Host Only

MD Storage Arrays:
,,,,,,iSCSI (MD3000i)

Disk Space Information (for Installation Target):
Required: 154,591,992 Bytes
Available: 47,898,775,552 Bytes

PRESS <ENTER> TO CONTINUE:

 

===============================================================================
Ready To Install
—————-

InstallAnywhere is now ready to install Dell MD Storage Software onto your
system at the following location:

/opt/dell/mdstoragesoftware

PRESS <ENTER> TO INSTALL:

 

===============================================================================
Installing…
————-

[==================|==================|==================|==================]
[——————|——————|——————|——————]
===============================================================================
Install Complete
—————-

Congratulations! Dell MD Storage Software has been successfully installed to:

/opt/dell/mdstoragesoftware

You must restart the system to complete the installation.

->1- Yes, restart my system now
2- No, I will restart my system myself later

ENTER THE NUMBER FOR YOUR CHOICE, OR PRESS <ENTER> TO ACCEPT THE DEFAULT:: 2

I told the installer not to reboot my server, but in my case it did anyway, every time. Maybe it’s a bug, or maybe it just doesn’t like me; either way, be prepared for that to happen regardless of your choice.

*NOTE – The installer will modify the /etc/multipath.conf file. Since we create our own version of that file in the next step, go back into it after installation and remove anything the installer added so that it matches the /etc/multipath.conf file below.

 

Now we are going to set up the multipath configuration file. It is VERY important that the un-commented data be exactly like mine. The easiest way to do this is to remove the existing file and create it anew.

*Note – We’ll be updating this file again later to add an alias for the WWID. More on that later.
================
# rm /etc/multipath.conf
# vi /etc/multipath.conf

# Gabriel Beaver custom setup 03/27/2013
devices {
    device {
        vendor "DELL"
        product "MD32xxi"
        path_grouping_policy group_by_prio
        path_checker rdac
        path_selector "round-robin 0"
        hardware_handler "1 rdac"
        failback immediate
        features "2 pg_init_retries 50"
        no_path_retry 30
        rr_min_io 100
        prio "/sbin/mpath_prio_rdac /dev/%n"
    }
    device {
        vendor "DELL"
        product "MD32xx"
        path_grouping_policy group_by_prio
        path_checker rdac
        path_selector "round-robin 0"
        hardware_handler "1 rdac"
        failback immediate
        features "2 pg_init_retries 50"
        no_path_retry 30
        rr_min_io 100
        prio "/sbin/mpath_prio_rdac /dev/%n"
    }
}
# END GB custom setup 03/27/2013
# GB custom blacklist
blacklist {
    devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
    devnode "^hd[a-z]"
    devnode "^sda"
    devnode "^sda[0-9]"
    device {
        vendor "DELL"
        product "PERC|Universal|Virtual"
    }
}
# END GB custom blacklist

 

Next, start the services we need and ensure they run at startup:

================
# /etc/init.d/iscsi start
# /etc/init.d/iscsid start
# /etc/init.d/multipathd start
# chkconfig iscsi on
# chkconfig iscsid on
# chkconfig multipathd on

 

Now we need to run a discovery of our targets:
==================
# iscsiadm -m discovery -t sendtargets -p (IP of a controller)
1.1.1.1:3260,1 iqn.1984-05.com.dell:powervault.md3000i.1234
1.1.1.2:3260,1 iqn.1984-05.com.dell:powervault.md3000i.5678
1.1.1.3:3260,2 iqn.1984-05.com.dell:powervault.md3000i.91011
1.1.1.4:3260,2 iqn.1984-05.com.dell:powervault.md3000i.121314
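Each line of that output is portal:port,group followed by the target IQN. If you’re scripting, the portal IPs peel off easily; a sketch against sample lines mirroring the sanitized output above:

```shell
# portals FILE: print just the portal IPs from sendtargets output.
portals() {
    awk '{ split($1, a, ":"); print a[1] }' "$1"
}

out=$(mktemp)
cat > "$out" <<'EOF'
1.1.1.1:3260,1 iqn.1984-05.com.dell:powervault.md3000i.1234
1.1.1.2:3260,1 iqn.1984-05.com.dell:powervault.md3000i.5678
EOF
portals "$out"
# 1.1.1.1
# 1.1.1.2
```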

 

Now you need to go into MDSM and configure access for your initiator. I have MDSM loaded on a Windows Server, managing multiple arrays. I’ll show you how to add your server to the SAN.

Launch MDSM, select your SAN and click Manage.

Click the Configure tab, then select Configure Host Access (Manual). With this setup, I’ve been unable to get Auto to work, but adding it manually works just fine.

You need to give your adapter a name and select Linux:

Pic_3

Moment of truth: now it’s time to see if your adapter is seen. Hooray! My adapter is seen and matches the name I set up in the iSCSI configuration file earlier. Select it and click Add.

Pic_4

Next, you have to specify if you will be sharing a virtual disk, in my case I will be:

Pic_5

Then add any other nodes and finish the wizard accordingly.

Next, execute a login command from your host:
===========================
# iscsiadm -m node --login

Next, run: cat /proc/partitions. You should see your new devices showing up, similar to the entries below. My MD3000i has 4 NICs, 2 per controller, for a total of 4 IP addresses. Linux sees each path as an “sdX” device, and it sees my 1.5TB vDisk appropriately:

8 16 2928672768 sdb
8 17 2928670720 sdb1
8 0 142737408 sda
8 1 512000 sda1
8 2 142224384 sda2
253 0 52428800 dm-0
253 1 8192000 dm-1
253 2 81600512 dm-2
8 32 1610612736 sdc
8 48 1610612736 sdd
8 64 1610612736 sde
8 80 1610612736 sdf
253 3 1610612736 dm-3
253 4 1610611712 dm-4

 

In order for MPIO to work, we will NOT map or reference the storage by the IDs seen above. Let’s look at what the multipathd service sees now. If everything is correct, you’ll see something similar to the below. If you get errors when running multipath, such as “error on line 49”, make sure you copied my /etc/multipath.conf file correctly, issue “/etc/init.d/multipathd restart”, and try the below again:

# multipath -ll

(30000000000c0d1000000ab123456789) dm-3 DELL,MD3000i
size=1.5T features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 rdac' wp=rw
|-+- policy='round-robin 0' prio=6 status=active
| |- 6:0:0:0 sdd 8:48 active ready running
| `- 7:0:0:0 sdc 8:32 active ready running
`-+- policy='round-robin 0' prio=1 status=enabled
  |- 4:0:0:0 sdf 8:80 active ghost running
  `- 5:0:0:0 sde 8:64 active ghost running
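That parenthesized number is the WWID, which we will alias in a moment. If you want to grab it in a script rather than by eyeball, here is a sketch (the demo parses a sample of the output above):

```shell
# wwid_of FILE: extract the parenthesized WWID from `multipath -ll` output.
wwid_of() {
    sed -n 's/^[^(]*(\([0-9a-f]*\)).*/\1/p' "$1" | head -n1
}

sample=$(mktemp)
cat > "$sample" <<'EOF'
(30000000000c0d1000000ab123456789) dm-3 DELL,MD3000i
size=1.5T features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 rdac' wp=rw
EOF
wwid_of "$sample"
# 30000000000c0d1000000ab123456789
```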

 

The number in the parentheses is the WWID, and you can view the device by looking at the contents of the /dev/mapper directory:
==============
# ls /dev/mapper

You should see that WWID there. This is a horrible name, and I sure don’t want to map a WWID as a drive; I’m guessing you don’t want to either. This is where having an alias is great. Let’s append to the configuration file to set up an alias.

# vi /etc/multipath.conf

# GB custom multipaths
multipaths {
    multipath {
        wwid 30000000000c0d1000000ab123456789
        alias idata
    }
}
# END GB custom multipaths

Next, issue a restart of multipath and run the list again.

# /etc/init.d/multipathd restart
# multipath -ll

idata (30000000000c0d1000000ab123456789) dm-3 DELL,MD3000i
size=1.5T features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 rdac' wp=rw
|-+- policy='round-robin 0' prio=6 status=active
| |- 6:0:0:0 sdd 8:48 active ready running
| `- 7:0:0:0 sdc 8:32 active ready running
`-+- policy='round-robin 0' prio=1 status=enabled
  |- 4:0:0:0 sdf 8:80 active ghost running
  `- 5:0:0:0 sde 8:64 active ghost running

Notice, our alias is showing up here, and if you do an ls of the /dev/mapper directory again, you’ll see the alias.

Now we need to set up a partition and create the filesystem. We’ll use parted; you can use fdisk, but only for drives smaller than 2TB. I am also going to use the ext4 filesystem on this drive.

# parted /dev/mapper/idata

(parted) mklabel gpt
(parted) unit TB
(parted) mkpart primary 0 -0
(parted) print

Model: Linux device-mapper (multipath) (dm)
Disk /dev/mapper/idata: 1649GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt

Number  Start   End     Size    File system  Name     Flags
 1      1049kB  1649GB  1649GB               primary

(parted) quit

# mkfs.ext4 /dev/mapper/idata1

 

Now all that is left is to mount it to the location of your choice. You’ll also need to add it to your /etc/fstab file so it is mounted every time you reboot.
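For the fstab entry, the one option worth calling out is _netdev: it tells the boot process to hold off mounting until networking (and the iSCSI service) is up, so the server doesn’t hang trying to mount the SAN before it’s reachable. A sketch, where /data is just an example mount point:

```
# Example /etc/fstab entry -- /data is a placeholder mount point.
# _netdev defers the mount until networking (and iSCSI) is up.
/dev/mapper/idata1   /data   ext4   defaults,_netdev   0 0
```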

 

So there you have it: we have set up a CentOS 6.x server with the Dell OMSA host utilities and successfully configured an MPIO iSCSI connection to a Dell MD3000i SAN.

Best of luck to those who read! Any questions, hit the comments; I’ll try my best to help.


Gabe

CentOS 6.x and Dell md3000i Setup Guide