UPDATE 06/20/2013 – Included steps to update the Dell firmware and BIOS. I also updated when to set up the /etc/multipath.conf file, reordered some software install steps, and fixed some formatting and mistakes to make the process smoother.

UPDATE 03/30/2013 – The original post used older tools to try to make this work. I had issues with multipath and decided to call Dell, as my MD3000i was still under warranty. While they would NOT support this setup or my OS, they did offer me the latest Resource CD and answered a few questions along the way. Thanks to their help, and various blogs on the interwebs, I can confidently connect CentOS 6.4 to a Dell MD3000i utilizing MPIO. I’ve also submitted this blog post to my technician, along with my /etc/multipath.conf file, to share with the Dell ESS team so that any other Dell customers who inquire about this setup have a reference. I’ve successfully set up three different Dell servers using this procedure.

Recently I worked on a project to stand up a CentOS-based XEN environment using Dell hardware. I’ve used Linux in the past, mostly on test machines and for specific software vendor builds, but my day-to-day work has been in Windows Server administration, so I decided to extensively document my configuration and experience during the setup over a series of posts.

The hardware involved in this project (minus the Cisco switching) is all Dell: specifically, Dell PE servers (M600s) and a Dell MD3000i. The XEN servers will be utilizing the iSCSI space for 3.8TB of R10 storage. We are using some specialty software that requires CentOS, so the base OS for these boxes will be CentOS 6.4. Choosing CentOS 6 also allows us to use XEN v4.

Dell supports RHEL6, so they indirectly support CentOS 6 as well; however, the MD3000i product is EOL, no further firmware updates have been released, and the Resource CD does not officially support RHEL6. I documented my setup and configuration of the host components as I went and turned it into this guide, so that anyone searching for help with this particular setup has a reference.

http://en.community.dell.com/techcenter/os-applications/w/wiki/red-hat.aspx

 

Before beginning this guide, I assume that you have your server built fresh and are ready to begin configuration. Need help installing CentOS 6? You can find the ISO here:

http://isoredirect.centos.org/centos/6/isos/x86_64/

 

Below is a good installation guide for CentOS 6.0:
http://linuxmoz.com/how-to-install-centos-6-linux-for-servers-desktops/

 

Once you’ve got your CentOS server built and set up, we can begin. This guide assumes that:

- You have planned out and provisioned your network(s)
- You have basic Linux command line knowledge, no need to be an expert (mounting disks, using vi, etc.)
- You have installed the CentOS 6.x minimal installation

 

Now let’s get started. Below are the steps we will perform to take a CentOS 6.x host, specifically 6.4, from fresh build to XEN host with iSCSI back-end storage.

 

First, we get the server on the network.

*Make sure you set up the appropriate adapter. This guide assumes eth0. To confirm the adapter, do the following:

# ls /etc/sysconfig/network-scripts/

Look for the adapters named “ifcfg-xxxx”; yours may be em1 or p3p1 depending on your hardware.

You can view the file below for a breakdown of how your system maps network adapters to MAC addresses and drivers. For example, (igb) and (0x8086:——) indicate Intel, while (bnx2) and (0x14e4:—–) indicate Broadcom.

# cat /etc/udev/rules.d/70-persistent-net.rules
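
For reference, each entry in that file looks roughly like the following (the MAC address and device ID here are made up):

# PCI device 0x8086:0x10c9 (igb)
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:1a:2b:3c:4d:5e", ATTR{type}=="1", KERNEL=="eth*", NAME="eth0"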

 

Set up the management NIC ETH0:
================
# vi /etc/sysconfig/network-scripts/ifcfg-eth0

Make sure you change ONBOOT and BOOTPROTO as I have below:

DEVICE=eth0
HWADDR=(Edited)
TYPE=Ethernet
UUID=(Edited)
ONBOOT=yes
NM_CONTROLLED=yes
BOOTPROTO=static
IPADDR=192.168.100.10
NETMASK=255.255.255.0
GATEWAY=192.168.100.1

 

Set up the iSCSI NIC ETH1
================
# vi /etc/sysconfig/network-scripts/ifcfg-eth1

*Important: no GATEWAY is needed on the iSCSI network, just as on a Windows box

DEVICE=eth1
HWADDR=(Edited)
TYPE=Ethernet
UUID=(Edited)
ONBOOT=yes
NM_CONTROLLED=yes
BOOTPROTO=static
IPADDR=10.0.0.10
NETMASK=255.255.255.0

 

Restart Network service:
================
# service network restart
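
You can confirm the settings took effect with:
================
# ip addr show eth0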

 

Setup DNS:
================
# vi /etc/resolv.conf

 

Enter your DNS servers here:
================
nameserver 8.8.8.8
nameserver 8.8.4.4
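
A quick sanity check that name resolution works (this assumes your network allows outbound ping):
================
# ping -c 1 centos.org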

 

Let’s go ahead and disable SELinux here. If it’s fully enforced it can cause issues:
================
# vi /etc/sysconfig/selinux

# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
SELINUX=disabled
# SELINUXTYPE= can take one of these two values:
#     targeted - Targeted processes are protected,
#     mls - Multi Level Security protection.
SELINUXTYPE=targeted
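
Note that this file is only read at boot, so the change takes effect after a reboot. To get SELinux out of the way immediately for the current session, you can drop it to permissive mode:
================
# setenforce 0
# getenforce
Permissive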

 

Install wget; it’s not installed by default in minimal configuration builds:
================
# yum -y install wget

 

We also need to install perl in order to add the Dell Linux repos:
================
# yum -y install perl

 

You need to install the 32-bit glibc; if you do not, the Dell install disc will give you an error. We also need Java:
================
# yum -y install glibc.i686
# yum -y install java

 

Install these two Dell repositories and then install the srvadmin tools:
================
# wget -q -O - http://linux.dell.com/repo/community/bootstrap.cgi | bash
# wget -q -O - http://linux.dell.com/repo/hardware/latest/bootstrap.cgi | bash

(The firmware repository is no longer maintained, so we won’t add this one.)

# yum -y install srvadmin-all

 

As part of the OMSA installation, we need to poke a hole in the firewall for the port it uses, 1311:
================
# vi /etc/sysconfig/iptables

 

Here is the line you will need to add:
-A INPUT -m state --state NEW -m tcp -p tcp --dport 1311 -j ACCEPT

Make sure it goes above the REJECT rules; iptables evaluates rules in order, so a rule placed after the REJECT line would never be reached:

# Firewall configuration written by system-config-firewall
# Manual customization of this file is not recommended.
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
-A INPUT -p icmp -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 22 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 1311 -j ACCEPT
-A INPUT -j REJECT --reject-with icmp-host-prohibited
-A FORWARD -j REJECT --reject-with icmp-host-prohibited
COMMIT

 

Save the file and restart iptables:
================
# service iptables restart
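
You can verify the rule is active; you should see an ACCEPT line listing dpt:1311:
================
# iptables -L INPUT -n | grep 1311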

 

Now let’s look at the srvadmin status, enable the services, and then start them:

# /opt/dell/srvadmin/sbin/srvadmin-services.sh status
dell_rbu (module) is stopped
ipmi driver is stopped
dsm_sa_datamgrd is stopped
dsm_sa_eventmgrd is stopped
dsm_sa_snmpd is stopped
dsm_om_shrsvcd is stopped
dsm_om_connsvcd is stopped

# /opt/dell/srvadmin/sbin/srvadmin-services.sh enable
racsvc 0:off 1:off 2:on 3:on 4:on 5:on 6:off
instsvcdrv 0:off 1:off 2:off 3:on 4:off 5:on 6:off
dataeng 0:off 1:off 2:off 3:on 4:off 5:on 6:off
dsm_om_shrsvc 0:off 1:off 2:off 3:on 4:off 5:on 6:off
dsm_om_connsvc 0:off 1:off 2:off 3:on 4:off 5:on 6:off

# /opt/dell/srvadmin/sbin/srvadmin-services.sh start
Starting Systems Management Device Drivers:
Starting dell_rbu: [ OK ]
Starting ipmi driver: [FAILED]
Starting Systems Management Device Drivers:
Starting dell_rbu: Already started [ OK ]
Starting ipmi driver: [FAILED]
Starting DSM SA Shared Services: [ OK ]
Starting DSM SA Connection Service: [ OK ]

 

Notice that the ipmi driver fails. Oh man… That’s OK; thanks to the interwebs we have a solution from way back in ’06:

http://lists.us.dell.com/pipermail/linux-poweredge/2006-December/028773.html

Adam Williams wrote:
> try running modprobe ipmi_si and modprobe ipmi_devintf and if that
> works put them and srvadmin-services.sh start in /etc/rc.d/rc.local

In my case, running the modprobe for the ipmi_devintf module worked, followed by running the service start again.

# modprobe ipmi_devintf

# /opt/dell/srvadmin/sbin/srvadmin-services.sh start
Starting Systems Management Device Drivers:
Starting dell_rbu: Already started [ OK ]
Starting ipmi driver: Already started [ OK ]
Starting Systems Management Data Engine:
Starting dsm_sa_datamgrd: [ OK ]
Starting dsm_sa_eventmgrd: [ OK ]
Starting dsm_sa_snmpd: [ OK ]
DSM SA Shared Services is already started
DSM SA Connection Service is already started
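
Optionally, you can confirm OMSA is responding from the command line; omreport about prints the product and version information:
================
# /opt/dell/srvadmin/bin/omreport about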

 

We just need to make sure this runs at startup, so as Adam suggested, I added these lines to the proper file:

# vi /etc/rc.d/rc.local

#!/bin/sh
#
# This script will be executed *after* all the other init scripts.
# You can put your own initialization stuff in here if you don't
# want to do the full Sys V style init stuff.

touch /var/lock/subsys/local
modprobe ipmi_devintf
/opt/dell/srvadmin/sbin/srvadmin-services.sh start

 

Now we should check for and update our BIOS and firmware. Thanks to a post over at how2centos, we can do this quickly and easily. Run the following commands:

# yum install dell_ft_install
# yum install $(bootstrap_firmware)
# inventory_firmware
# update_firmware
# update_firmware --yes

 

Install and configure iSCSI Initiator:
================
# yum -y install iscsi-initiator-utils

 

Configure a proper name for your initiator. IMPORTANT – Make sure this is 20 characters or less; otherwise you’ll get an error when trying to discover targets, and you won’t be able to log in (it will say the connection was refused). This bit me, hard, so don’t let it bite you:
================
# vi /etc/iscsi/initiatorname.iscsi

 

Use something like this:
================
InitiatorName=iqn.yyyy-mm.your.domain:san.hostname

Whatever you decide, make sure you can identify the host from the name and that it does NOT match another server. We’ll finish the configuration of the iSCSI adapter later. In order to configure the MD3000i host components on your CentOS 6.x system, we need to install some components.

 

Now we need to dedicate an interface to iSCSI. I am going to create an iSCSI interface called “ieth1” that uses eth1. You’ll need to evaluate your setup and set the interface and ifname the way you’d like:
================
# vi /var/lib/iscsi/ifaces/ieth1

iface.iscsi_ifacename = ieth1
iface.net_ifacename = eth1
iface.hwaddress = default
iface.transport_name = tcp
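
If you prefer not to edit the file by hand, iscsiadm can create and populate it for you (same ieth1/eth1 names as above):
================
# iscsiadm -m iface -I ieth1 -o new
# iscsiadm -m iface -I ieth1 -o update -n iface.net_ifacename -v eth1
# iscsiadm -m iface -I ieth1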

 

 

Now let’s go ahead and install the multipath device driver:
================
# yum install device-mapper-multipath

We need to set up the multipath.conf file, but we’ll do that later; if we created it now, the Dell installer would add to the file and complicate things.

Now we can finally get the Dell tools and software installed for our MD3000i on our CentOS 6.4 host. You can download the most recent Dell MD3000i Resource CD here (thanks again to Justin and the unknown Linux colleague over at ESS):

http://ftp.us.dell.com/FOLDER01021520M/1/DELL_MDSS_Consolidated_RDVD_4_1_0_88.iso

This Resource CD contains the drivers and software for the MD3000/i, MD32xx/i, and MD35xx products. It also has the drivers for coexistence, should you want to connect to multiple MD series models concurrently from the same host. You will either need to burn the CD and insert it into your server or, easier yet, mount it via iDRAC or an NFS share.

Screenshots below for iDRAC mount:

Pic_1

When you launch iDRAC viewer, here is where you mount the media:

Pic_2

We’ll need to create a directory and mount the CD, regardless of which method you chose.
================
# mkdir /mnt/cd
# mount /dev/cdrom /mnt/cd

 

Note – if you have more than one CD/DVD drive connected to the server, you may get the following:

mount /dev/cdrom /mnt/cd
mount: you must specify the filesystem type

On one of my servers this was the case because I had a USB DVD drive attached. Try /dev/cdrom1 in this case.
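
If you are unsure which device node the drive was given, list the optical devices and mount explicitly by filesystem type (sr0 may differ on your system):
================
# ls /dev/sr*
# mount -t iso9660 /dev/sr0 /mnt/cd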

 

Now that we have the Resource CD mounted, navigate to the root of the CD and run the installer. My entries are shown after each prompt below. I have a Windows server that has the MDSM utility installed and manages several of these SANs, so I am only interested in the host components for this project. Furthermore, this is the only scenario I’ve tested, and if you are running a Linux server you most likely don’t have a GUI anyway.

TL;DR: Only install the host components; use a Windows server to manage MDSM.

Make sure that you select 5, for the MD3000i, when prompted during the installation.
================

# cd /mnt/cd
# ./autorun
Preparing to install…
Extracting the JRE from the installer archive…
Unpacking the JRE…
Extracting the installation resources from the installer archive…
Configuring the installer for this system’s environment…

Launching installer…

===============================================================================
Choose Locale…
--------------

1- Deutsch
->2- English
3- Español
4- Français

CHOOSE LOCALE BY NUMBER: 2
===============================================================================
Dell MD Storage Software (created with InstallAnywhere)
-------------------------------------------------------------------------------

Preparing CONSOLE Mode Installation…

 
===============================================================================
Welcome
-------

This wizard installs the software necessary to discover, configure, manage, and
monitor all Dell PowerVault MD Series Storage Arrays available on your network.

PRESS <ENTER> TO CONTINUE:

 

===============================================================================
License Agreement
-----------------

Installation and Use of Dell MD Storage Software Requires Acceptance of the
Following License Agreement:

No file found to preview

DO YOU ACCEPT THE TERMS OF THIS LICENSE AGREEMENT? (Y/N): y

 

===============================================================================
Installation Type
-----------------

Please choose the Install Set to be installed by this installer.

->1- Full (Recommended)
2- Management Station
3- Host Only

4- Customize…

ENTER THE NUMBER FOR THE INSTALL SET, OR PRESS <ENTER> TO ACCEPT THE DEFAULT
: 3

 

===============================================================================
Choose MD Series Model
----------------------

Choose the MD Series storage array that you are connecting to the host server.
If selecting a fibre channel option, no other models may be chosen. If choosing
a non-fibre channel option, multiple selections are allowed.

1- Fibre Channel (MD3600f, MD3620f, MD3660f)
2- iSCSI (MD3200i, MD3220i, MD3260i, MD3600i, MD3620i, MD3660i)

3- SAS (MD3200, MD3220, MD3260)
4- SAS (MD3000)
5- iSCSI (MD3000i)

ENTER A COMMA-SEPARATED LIST OF NUMBERS REPRESENTING THE DESIRED CHOICES, OR
PRESS <ENTER> TO ACCEPT THE DEFAULT: 5

 
===============================================================================
Configuration Utility
---------------------

The Modular Disk Configuration Utility configures the iSCSI network of host
servers and iSCSI-based Modular Disk storage arrays via a wizard-driven
interface.

Would you like to automatically run the Modular Disk Configuration Utility the
first time the system is rebooted?

->1- Yes (Recommended)
2- No

ENTER THE NUMBER FOR YOUR CHOICE, OR PRESS <ENTER> TO ACCEPT THE DEFAULT:: 2

 
===============================================================================
Installation Location
---------------------

Where would you like to install?

Default Install Folder: /opt/dell/mdstoragesoftware

ENTER AN ABSOLUTE PATH, OR PRESS <ENTER> TO ACCEPT THE DEFAULT
:

 

===============================================================================
Installation Summary
--------------------

Please Review the Following Before Continuing:

Product Name:
Dell MD Storage Software

Install Folder:
/opt/dell/mdstoragesoftware

Link Folder:
/root

Install Set:
Host Only

MD Storage Arrays:
    iSCSI (MD3000i)

Disk Space Information (for Installation Target):
Required: 154,591,992 Bytes
Available: 47,898,775,552 Bytes

PRESS <ENTER> TO CONTINUE:

 

===============================================================================
Ready To Install
----------------

InstallAnywhere is now ready to install Dell MD Storage Software onto your
system at the following location:

/opt/dell/mdstoragesoftware

PRESS <ENTER> TO INSTALL:

 

===============================================================================
Installing…
-------------

[==================|==================|==================|==================]
[------------------|------------------|------------------|------------------]
===============================================================================
Install Complete
----------------

Congratulations! Dell MD Storage Software has been successfully installed to:

/opt/dell/mdstoragesoftware

You must restart the system to complete the installation.

->1- Yes, restart my system now
2- No, I will restart my system myself later

ENTER THE NUMBER FOR YOUR CHOICE, OR PRESS <ENTER> TO ACCEPT THE DEFAULT:: 2

I told the installer not to reboot my server, but in my case it did anyway, every time. Maybe it’s a bug or maybe it just doesn’t like me; either way, be prepared for a reboot regardless of your choice.

*NOTE – The installer may modify /etc/multipath.conf. After the install (and the reboot), check the file, remove anything the installer added, and make it match the /etc/multipath.conf we build below.

 

Now we are going to set up the multipath configuration file. It is VERY important that the un-commented data be exactly like mine. The easiest way to do this is to remove the existing file and create it anew.

*Note – We’ll be updating this file again later to add an alias for the WWID. More on that later.
================
# rm /etc/multipath.conf
# vi /etc/multipath.conf

# Gabriel Beaver custom setup 03/27/2013
devices {
    device {
        vendor "DELL"
        product "MD32xxi"
        path_grouping_policy group_by_prio
        path_checker rdac
        path_selector "round-robin 0"
        hardware_handler "1 rdac"
        failback immediate
        features "2 pg_init_retries 50"
        no_path_retry 30
        rr_min_io 100
        prio "/sbin/mpath_prio_rdac /dev/%n"
    }
    device {
        vendor "DELL"
        product "MD32xx"
        path_grouping_policy group_by_prio
        path_checker rdac
        path_selector "round-robin 0"
        hardware_handler "1 rdac"
        failback immediate
        features "2 pg_init_retries 50"
        no_path_retry 30
        rr_min_io 100
        prio "/sbin/mpath_prio_rdac /dev/%n"
    }
}
# END GB custom setup 03/27/2013
# GB custom blacklist
blacklist {
    devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
    devnode "^hd[a-z]"
    devnode "^sda"
    devnode "^sda[0-9]"
    device {
        vendor DELL
        product "PERC|Universal|Virtual"
    }
}
# END GB custom blacklist
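
Before starting the daemon, you can do a dry run to catch syntax errors in the file; the -d flag builds the multipath maps without actually creating them:
================
# multipath -v2 -d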

 

Next, start the services we need and ensure they run at startup:

================
# /etc/init.d/iscsi start
# /etc/init.d/iscsid start
# /etc/init.d/multipathd start
# chkconfig iscsi on
# chkconfig iscsid on
# chkconfig multipathd on

 

Now we need to run a discovery of our targets:
==================
# iscsiadm -m discovery -t sendtargets -p (IP of a controller)
1.1.1.1:3260,1 iqn.1984-05.com.dell:powervault.md3000i.1234
1.1.1.2:3260,1 iqn.1984-05.com.dell:powervault.md3000i.5678
1.1.1.3:3260,2 iqn.1984-05.com.dell:powervault.md3000i.91011
1.1.1.4:3260,2 iqn.1984-05.com.dell:powervault.md3000i.121314
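
Note: by default, discovery (and the node records it creates) uses the default interface. If you want the sessions bound to the dedicated ieth1 interface we created earlier, you can point discovery at it:
================
# iscsiadm -m discovery -t sendtargets -p (IP of a controller) -I ieth1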

 

Now you need to go into MDSM and configure access for your initiator. I have MDSM installed on a Windows server, managing multiple arrays. I’ll show you how to add your server to the SAN.

Launch MDSM, select your SAN and click Manage.

Click the Configure tab, then select Configure Host Access (Manual). With this setup I’ve been unable to get Auto to work, but adding the host manually works just fine.

You need to give your adapter a name and select Linux:

Pic_3

Moment of truth: now it’s time to see if your adapter shows up. Hooray! My adapter is seen and matches the name I set up in the iSCSI configuration file earlier. Select it and click Add.

Pic_4

Next, you have to specify whether you will be sharing the virtual disk; in my case I will be:

Pic_5

Then add any other nodes and finish the wizard accordingly.

Next, execute a login command from your host:
===========================
# iscsiadm -m node --login

Next, run cat /proc/partitions. You should see your new devices showing up, similar to the sdc/sdd/sde/sdf and dm-3/dm-4 entries below. My MD3000i has 4 NICs, 2 per controller, for a total of 4 IP addresses. Linux sees each path as its own “sdX” drive, and it sees my 1.5TB vDisk appropriately:

8 16 2928672768 sdb
8 17 2928670720 sdb1
8 0 142737408 sda
8 1 512000 sda1
8 2 142224384 sda2
253 0 52428800 dm-0
253 1 8192000 dm-1
253 2 81600512 dm-2
8 32 1610612736 sdc
8 48 1610612736 sdd
8 64 1610612736 sde
8 80 1610612736 sdf
253 3 1610612736 dm-3
253 4 1610611712 dm-4

 

In order for MPIO to work we will NOT map or reference the storage by the IDs seen above. Let’s look at what the multipathd service sees now. If everything is correct you’ll see something similar to the output below. If you get errors when running multipath, such as “error on line 49”, make sure you copied my /etc/multipath.conf file correctly, issue “/etc/init.d/multipathd restart”, and try the below again:

# multipath -ll

(30000000000c0d1000000ab123456789) dm-3 DELL,MD3000i
size=1.5T features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 rdac' wp=rw
|-+- policy='round-robin 0' prio=6 status=active
| |- 6:0:0:0 sdd 8:48 active ready running
| `- 7:0:0:0 sdc 8:32 active ready running
`-+- policy='round-robin 0' prio=1 status=enabled
  |- 4:0:0:0 sdf 8:80 active ghost running
  `- 5:0:0:0 sde 8:64 active ghost running

 

The number in parentheses is the WWID, and you can view the device by looking at the contents of the /dev/mapper directory:
==============
# ls /dev/mapper

You should see that WWID there. This is a horrible name, and I sure don’t want to map a WWID as a drive; I’m guessing you don’t want to either. This is where having an alias is great. Let’s add to the configuration file to set up an alias.

# vi /etc/multipath.conf

# GB custom multipaths
multipaths {
    multipath {
        wwid 30000000000c0d1000000ab123456789
        alias idata
    }
}
# END GB custom multipaths

Next, issue a restart of multipath and run the list again.

# /etc/init.d/multipathd restart
# multipath -ll

idata (30000000000c0d1000000ab123456789) dm-3 DELL,MD3000i
size=1.5T features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 rdac' wp=rw
|-+- policy='round-robin 0' prio=6 status=active
| |- 6:0:0:0 sdd 8:48 active ready running
| `- 7:0:0:0 sdc 8:32 active ready running
`-+- policy='round-robin 0' prio=1 status=enabled
  |- 4:0:0:0 sdf 8:80 active ghost running
  `- 5:0:0:0 sde 8:64 active ghost running

Notice that our alias is showing up here, and if you do an ls of the /dev/mapper directory again, you’ll see the alias there too.

Now we need to set up a partition and create the filesystem. We’ll use parted; you can use fdisk, but only for drives smaller than 2TB. I am going to use the ext4 filesystem on this drive.

# parted /dev/mapper/idata

(parted) mklabel gpt
(parted) unit TB
(parted) mkpart primary 0 -0
(parted) print

Model: Linux device-mapper (multipath) (dm)
Disk /dev/mapper/idata: 1.65TB
Sector size (logical/physical): 512B/512B
Partition Table: gpt

Number Start End Size File system Name Flags
1 0.00TB 1.65TB 1.65TB primary

(parted) quit

# mkfs.ext4 /dev/mapper/idata1

*Note – Depending on your system, the partition may appear in /dev/mapper as idata1 or idatap1; run ls /dev/mapper to check (see the comments below).

 

Now all that is left is to mount it at the location of your choice. You’ll also need to add it to your /etc/fstab file so that it is mounted every time you reboot.
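
For example, here is a minimal sketch of an fstab entry (the /data mount point is just an example). The _netdev option matters for iSCSI-backed devices; it tells the system to wait for the network (and the iscsi service) before mounting, so boot doesn’t hang:
================
# mkdir /data
# vi /etc/fstab

/dev/mapper/idata1   /data   ext4   _netdev,defaults   0 0

# mount /data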

 

So there you have it. We have set up a CentOS 6.x server with the Dell OMSA host utilities and successfully configured an MPIO iSCSI connection to a Dell MD3000i SAN.

Best of luck to those who read! Any questions, hit the comments; I’ll try my best to help.


Gabe

20 Comments

  1. Kory

    Thanks for taking the time to write this up. I had to do nearly this exact same thing but for an MD3000 not the MD3000i. Anyone else out there trying to do this here are a few tips that I came across.

    Do not use the Dell MD3000 Resource CD from their site; use the one linked in this post. The one on their site is for RHEL 5, not RHEL 6, and the one linked here is for RHEL 6 / CentOS 6. If you try to use a different one you will have issues with the Java JRE.

    I did the minimal install as well but eventually added the X Window System (something like yum groupinstall “X Window System” “Desktop”). Then set the /etc/inittab file to run level 5. Reboot and complete the installation. Once in the desktop you can then open a terminal, switch to super user “su” and then you can run the MD Storage Manager client by going to /opt/dell/mdstoragemanager/client/SMclient or something like that. (I’m not at the server now to double check all of these).

    If you need to update your firmware from .06 to .07 or from Generation 1 to Generation 2 firmware, like I had to in order to get SATA to work, there is a link in the client that you need to click to upgrade the firmware. If you try the download firmware link it will not be able to find the firmware or it will say it is not compatible with this version. All you need to do is click on the other link specially designed to upgrade from Generation 1 to Generation 2 firmware.

    If you encounter an issue where it will not recognize your array anymore after the firmware upgrade all you need to do is remove the array and then do the automatic search once again. It will find it again.

    Regarding the above instructions and the MD3000, you can ignore any part of the above about the iSCSI stuff and setting up the ethernet for that. You do need to install the multipath stuff and the config file. After everything is up and running and all of your drives are initialized, I could only get the devices displayed after rebooting. I also added another device in the multipath.conf file but I’m not yet sure if it mattered or not. I’ll copy it here for your reference.

    /etc/multipath.conf

    defaults {
        getuid_callout "/lib/udev/scsi_id -g -u -s /block/%n"
        user_friendly_names on
    }
    devices {
        device {
            vendor DELL*
            product MD3000*
            path_grouping_policy failover
            getuid_callout "/lib/udev/scsi_id -g -u --device=/dev/%n"
            features "1 queue_if_no_path"
            path_checker rdac
            prio "/sbin/mpath_prio_rdac /dev/%n"
            hardware_handler "1 rdac"
            failback immediate
        }
        device {
            vendor "DELL"
            product "MD32xxi"

    … (the rest is the same as this blog post) …

    Also, if you copy and paste his multipath.conf file from above you will need to reformat it because the quotations and other characters will be incorrect. Note to OP: you should use a code block for those sections.

    I hope that helps anyone else that may have the same issues as I had. I wish I had these notes a couple of days ago.

    • Gabriel

      Thanks for taking the time to provide some feedback and good information on the DAS model. I was curious how it would work for that model, as we have some of those as well I may need to use at some point. Glad this post was able to help you out!


      Gabe

  2. leang

    hi! i recreated /etc/multipath.conf and restarted all services, but when i execute multipath -ll there is nothing; i just get the prompt back.
    When i launch vgscan, i can see good things…
    please help me! i’m using centos 6.4 with dell md3000i

    thank you

    • Gabe

      Hi Leang,

      Your timing asking this question is quite good. I just had the same exact problem pop up on me as we are configuring new hosts with this configuration for a beta product.

      To confirm: in the MDSM console, did you see the iqn name in the list when you went to add the adapter, or did you add it manually? Check the /etc/iscsi/initiatorname.iscsi name. This is what I had before:

      InitiatorName=iqn.2013-03.xxxxx.local:san.DOMAIN-SERVER01

      I changed it to:

      InitiatorName=iqn.2013-03.xxxxx.local:san.SERVER01

      From 43 characters to 36.

      Try:

      - shortening the name
      - logout (iscsiadm -m node --logout)
      - restart the multipathd and iscsid services
      - login (iscsiadm -m node --login)
      - multipath -ll

      If that doesn’t work or you have another issue let me know. I also have an updated /etc/multipath.conf file we are using that is slightly different, but the one in this article at this time does work just the same.


      Gabe

      • leang

        thank you! i didn’t need to shorten the name, but i restarted the services and now i have something.

        now i’m trying to mkfs.ext4 /dev/mapper/idata

        i have an error (i’m translating from french):
        /dev/mapper/idata is apparently in use by the system; it will not make a filesystem here

        any idea?

        • Kory

          It seems like you already have it mapped. Why not show the output from some of your config files and multipath.conf files? Maybe a ls -l /dev/mapper output as well?

          • leang

            Which config files do you want to see? the multipath.conf is exactly the same as you posted.

            Here is the output of ls -l /dev/mapper

            crw-rw—- 1 root root 10, 58 25 juil. 17:12 control

            lrwxrwxrwx 1 root root 7 25 juil. 17:12 idata -> ../dm-2

            lrwxrwxrwx 1 root root 7 25 juil. 17:12 idatap1 -> ../dm-3

            lrwxrwxrwx 1 root root 7 25 juil. 17:12 vg_backup-lv_root -> ../dm-0

            lrwxrwxrwx 1 root root 7 25 juil. 17:12 vg_backup-lv_swap -> ../dm-1

            What do you think?

          • Gabe

            try the mkfs.ext4 on the partition, idatap1, not at the root. Let us know if that works.


            Gabe

          • leang

            Same error! it says: can’t stat() /dev/mapper/idata1;
            apparently the device doesn’t exist. did you specify it correctly?

          • Gabe

            Looks like you missed the “p”, so /dev/mapper/idatap1

            Give that a shot.


            Gabe

          • leang

            that’s it! but now there is another problem because i have a 27TB partition but that’s not concerning the subject!
            Thank you!!

          • Gabe

            That’s awesome Leang! Very happy to be of help, monsieur ;)

  3. Martin Wilke

    Hi,

    First of all, thank you for this documentation; it has helped me a lot. I have a small problem and wonder if you have an idea. I did this on a CentOS 5.9 (Final); so far everything works except that multipath.conf gives me an error.

    /sbin/multipath -ll
    multipath.conf line 14, invalid keyword: prio
    multipath.conf line 27, invalid keyword: prio

    these both lines are:

    prio "/sbin/mpath_prio_rdac /dev/%n"

    any idea?

    • Martin Wilke

      Hi again, I got it to work. I needed to use prio_callout “…” instead; that works now. Again, thank you :-)

      • Gabe

        Hi Martin, that’s awesome. Very happy to hear you got this to work. Hope finding this doc saved you some time ;)

  4. Ge

    Hi Gabe,
    Based on your experience, do you have any recommendations on moving an R710 server and MD3000 from CentOS 5.10 to 6.5? Should I just follow your guides?

    G

    • Gabe

      Hi Ge, I have no specific recommendations other than to ensure you follow the proper Centos upgrade procedures from your 5.x release to the 6.x release. Then follow my blog to leverage the Dell repositories to configure your server. If you are using the DAS MD3000 and not the iSCSI based MD3000i your experience should be far less complicated. This may sound silly, but if you happen to have the resources available, I would highly recommend testing out your migration plan on a non-production system first. Best of luck to you, I’d be interested in your result. If you have any questions that come up during your upgrade or find anything additional worth sharing I’d be happy to hear about it.


      Gabe

      • Ge

        Thanks Gabe…
        I’ll let you know how everything goes. Unfortunately I don’t have a non-production server with similar features. It’s just an upgrade that I need in order to install several GIS software (e.g. gdal) to run properly with some new features available only through Centos 6.x.

        One thing… with an upgrade like this, will the data on the array (MD3000) be lost?

        I’ll get back to you once I tackle this task…
        Guillermo

  5. Eduardo Pereira

    Hi,
    I have a problem to mount filesystem:

    Look:
    [root@anitta media]# mount /dev/sdc /storage
    mount: /dev/sdc is already mounted or /storage busy

    Disk /dev/sdc: 2047.9 GB, 2047898288128 bytes, 3999801344 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk label type: dos
    Disk identifier: 0xd2639615

    Device Boot Start End Blocks Id System
    /dev/sdc1 2048 3999801343 1999899648 8e Linux LVM

    Disk /dev/mapper/36001e4f0003b44000000ee99527b5c9f: 2047.9 GB, 2047898288128 bytes, 3999801344 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk label type: dos
    Disk identifier: 0xd2639615

    Device Boot Start End Blocks Id System
    /dev/mapper/36001e4f0003b44000000ee99527b5c9f1 2048 3999801343 1999899648 8e Linux LVM

    Disk /dev/mapper/36001e4f0003b44000000ee99527b5c9f1: 2047.9 GB, 2047897239552 bytes, 3999799296 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes

    Do you have any idea about the problem?
    My system is CentOS 7
