OpenSolaris iSCSI Target

After spending a good amount of time on the iSCSI Enterprise Target (IET) solution and running into compatibility issues with VMware ESX 3.5 and 4.0, I decided to move on to other solutions. After reading up on the ZFS file system and how its features would benefit any storage solution, I decided to move into the OpenSolaris world. Additionally, SUN includes its two variations of iSCSI targets in OpenSolaris and provides support options. Needless to say, in my specific situation, OpenSolaris is a better solution than the “free” iSCSI target.

I owe some of my new-found knowledge to my friend Chuck Hechler and the rest to Mike La Spina (http://blog.laspina.ca/). I don’t pretend to be anywhere near as knowledgeable about OpenSolaris, ZFS, iSCSI, SANs, or storage in general as Mike or Chuck; however, I can create SharePoint workflow actions in Visual Studio. Can they? =)

The Equipment

If you read my other posts on the IET and RedHat implementation, you know that I built a custom server housed in a SuperMicro case. During the testing phase of that project, I learned that the IET product does not work well with the LSI MegaRAID 84016 controller, specifically when using RAID5. I am not sure why; however, in OpenSolaris I instead had an issue with our Adaptec 31605 controller and ended up using the LSI MegaRAID 84016E controller, which works like a charm. I mention this because you will need to verify that the equipment you plan to use is supported by OpenSolaris (http://www.sun.com/bigadmin/hcl/).

If you want more information on the hardware specifications for the system I used, please review my Custom Server Case article.

OpenSolaris Configuration

Using OpenSolaris to serve as an iSCSI target is surprisingly easy to configure. I say surprisingly because the same implementation in RedHat was thoroughly difficult, both due to a learning curve and due to many complications with functionality. On to OpenSolaris!

As I designed this system, I was focused on using ZFS as the sole solution for disk management and the file system. In other words, for my boot disks, I created a ZFS mirror and for the iSCSI LUN disks I created a ZFS raidz2.

Installation

For the most part, the default installation settings were used. To be honest, the options during the OpenSolaris installation are limited. If you do not want the graphical interface, see my short article, Disable GUI, for disabling it after installation.

ZFS

Boot pool – mirror

Mirroring the boot partition is about as difficult as it gets with this article. I did not figure out how to mirror the boot disk on my own. Using the steps in the following blog post, I managed to mirror my ZFS boot partition, rpool.

http://darkstar-solaris.blogspot.com/2008/09/zfs-root-mirror.html
If you do not know the device names of your drives, you can use “# format” to list the drives/devices on your system.
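For reference, below is a minimal sketch of the steps from that post, assuming c8d0s0 is the existing boot disk and c9d0s0 is the new mirror disk (use your own device names, and see the linked post for the details):

# prtvtoc /dev/rdsk/c8d0s0 | fmthard -s - /dev/rdsk/c9d0s0
# zpool attach rpool c8d0s0 c9d0s0
# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c9d0s0

The first command copies the partition table to the new disk, the second attaches it as a mirror of the boot slice, and the third installs GRUB so the system can boot from either disk. Let the resilver finish (watch “zpool status rpool”) before rebooting.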

After the mirroring has been completed, display the status of the pool using the command and output below:

# zpool status
  pool: rpool
 state: ONLINE
 scrub: none requested
config:
 NAME        STATE     READ WRITE CKSUM
 rpool       ONLINE       0     0     0
   mirror    ONLINE       0     0     0
     c8d0s0  ONLINE       0     0     0
     c9d0s0  ONLINE       0     0     0
errors: No known data errors

iSCSI pool – raidz2

After reading quite a few posts on best practices for ZFS pools, I suggest using one raidz set per controller. In my case, I had a 16 port controller and all 16 drives for the iSCSI pool were connected to it. Using the following commands, I created a raidz2 pool with 2 spares:

# zpool create diskpool raidz2 c7t0d0 c7t1d0 c7t2d0 c7t3d0 c7t4d0 c7t5d0 c7t6d0 c7t7d0 c7t8d0 c7t9d0 c7t10d0 c7t11d0 c7t12d0 c7t13d0
# zpool add diskpool spare c7t14d0 c7t15d0

That is it; we just created a large disk pool of 16 disks, 14 in a dual-parity set for volumes and 2 for spares. Easy? Yes.

Volumes

Best practice for creating ZFS volumes depends on your perspective. In my case, the iSCSI target will be used with VMware, and I will create a new ZFS volume and iSCSI view for each virtual machine (not each host). Use the command below to create the base mountpoint in the ZFS pool:

# zfs create diskpool/iscsi

Next we will create a volume to share as an iSCSI target. As I stated above, I create one volume per VM. Notice the “-b 64K” option, which sets the block size and helps with partition alignment in VMware. The “-V 40G” option creates a volume with a maximum size of 40 gigabytes, and the “-s” flag makes it a sparse volume – keep in mind that ZFS will thin provision it.

# zfs create -s -b 64K -V 40G diskpool/iscsi/lun0_vm
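If you want to sanity-check the new volume, its properties will show the 64K block size and, because of the “-s” flag, no space reservation:

# zfs get volsize,volblocksize,refreservation diskpool/iscsi/lun0_vm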

Display the pool and the ZFS volumes as shown below:

# zpool status
  pool: diskpool
 state: ONLINE
 scrub: resilver completed after 0h32m with 0 errors on Wed Sep  2 10:52:54 2009
config:
 NAME           STATE     READ WRITE CKSUM
 diskpool       ONLINE       0     0     0
   raidz2       ONLINE       0     0     0
     c7t0d0     ONLINE       0     0     0
     c7t1d0     ONLINE       0     0     0
     c7t2d0     ONLINE       0     0     0
     c7t3d0     ONLINE       0     0     0
     c7t4d0     ONLINE       0     0     0
     c7t5d0     ONLINE       0     0     0
     c7t6d0     ONLINE       0     0     0
     c7t7d0     ONLINE       0     0     0
     c7t8d0     ONLINE       0     0     0
     c7t9d0     ONLINE       0     0     0
     c7t10d0    ONLINE       0     0     0
     c7t11d0    ONLINE       0     0     0
     c7t12d0    ONLINE       0     0     0
     c7t13d0    ONLINE       0     0     0
 spares
   c7t14d0      AVAIL
   c7t15d0      AVAIL
errors: No known data errors
  pool: rpool
 state: ONLINE
 scrub: none requested
config:
 NAME        STATE     READ WRITE CKSUM
 rpool       ONLINE       0     0     0
   mirror    ONLINE       0     0     0
     c8d0s0  ONLINE       0     0     0
     c9d0s0  ONLINE       0     0     0
errors: No known data errors
# zfs list
NAME                             USED  AVAIL  REFER  MOUNTPOINT
diskpool                         9.9G  6.76T  51.1K  /diskpool
diskpool/iscsi                   9.9G  6.76T  48.5K  /diskpool/iscsi
diskpool/iscsi/lun0_vm           9.9G  6.76T   9.9G  -
rpool                           10.8G  62.1G  77.5K  /rpool
rpool/ROOT                      3.18G  62.1G    19K  legacy
rpool/ROOT/opensolaris          3.18G  62.1G  3.04G  /
rpool/dump                      3.75G  62.1G  3.75G  -
rpool/export                     114M  62.1G    21K  /export
rpool/export/home                114M  62.1G    21K  /export/home
rpool/export/home/vcssan         114M  62.1G   114M  /export/home/vcssan
rpool/swap                      3.75G  65.7G   101M  -

Network

There are a few approaches to spreading the load across multiple links; my favorite is LACP, which is what I used – see my article on LACPBONDING. The other option that may come up in the planning phase is IPMP; however, I was unable to overcome the issue of having more than one inbound interface. If you are knowledgeable about source addressing, you will have better luck than I did.

I aggregated 4 gigabit links together using LACP and a Dell PowerConnect 5448 switch.
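As a rough sketch, the OpenSolaris side of the aggregation looks something like the following, assuming e1000g0 through e1000g3 are the four gigabit interfaces (your link names will differ, and the switch ports must be configured for LACP as well); see the LACPBONDING article for the full walkthrough:

# dladm create-aggr -L active -l e1000g0 -l e1000g1 -l e1000g2 -l e1000g3 aggr0
# ifconfig aggr0 plumb 192.168.0.101 netmask 255.255.255.0 up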

iSCSI

During the design phase, I tested both of SUN’s iSCSI targets, the standard iscsitadm and COMSTAR iSCSI. At this point, you can choose one based on word of mouth, or you can try both and use the one that works the best for you. I ended up using COMSTAR.

iscsitadm

The first configuration change we will make defines which interfaces the iSCSI target software will use. We accomplish this by creating a target portal group:

# iscsitadm create tpgt 1
# iscsitadm modify tpgt -i 192.168.0.101 1

Because I am using link aggregation there is only one IP address to add to the group. If you decide to use multi-pathing as provided by VMware, you will need to add all of those IP addresses to the group.
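For example, if you also had a second interface at 192.168.0.102 (a made-up address for illustration), you would repeat the modify command for each additional address:

# iscsitadm modify tpgt -i 192.168.0.102 1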

Are you ready for the next statement? It is very complex… well, maybe not.

# zfs set shareiscsi=on diskpool/iscsi/lun0_vm

That’s it; the ZFS volume is now served through the iSCSI target.
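If you want to confirm the volume is being exported, you can list the targets with the verbose flag, which shows details such as the target name and portal group:

# iscsitadm list target -v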

COMSTAR

On a default installation of OpenSolaris, you will need to install the COMSTAR iSCSI software package. Open the Package Manager, search for iSCSI, and install the packages it finds, specifically the SUNWiscsit package.

[Screenshot: Package Manager]
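If you prefer the command line to the Package Manager GUI, the same package can be installed with pkg, assuming the default publisher is configured:

# pkg install SUNWiscsit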

After you have installed the software package, you will need to enable the COMSTAR iSCSI service:

# svcadm enable -r svc:/network/iscsi/target:default

I have a vague recollection that you may need to restart OpenSolaris before continuing…
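Either way, it is worth verifying that the target service is online before continuing:

# svcs svc:/network/iscsi/target:default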

The next configuration change defines which interfaces the iSCSI target software will use. We accomplish this by creating a target portal group and a target:

# itadm create-tpg iscsi0 192.168.0.101
# itadm create-target -t iscsi0
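To double-check the portal group and target before moving on, itadm can list both:

# itadm list-tpg -v
# itadm list-target -v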

Now, we will share the ZFS volume through the COMSTAR iSCSI target – this is a two-step process:

# sbdadm create-lu /dev/zvol/rdsk/diskpool/iscsi/lun0_vm

Now let’s check for the new logical unit and note the GUID, as we will use it in the next command:

# sbdadm list-lu
Found 1 LU(s)
       GUID                    DATA SIZE           SOURCE
--------------------------------  -------------------  ----------------
600144f08e6c0f0000004a8331d70002      42949607424      /dev/zvol/rdsk/diskpool/iscsi/lun0_vm

Now we will add the view to share the ZFS volume through iSCSI:

# stmfadm add-view 600144f08e6c0f0000004a8331d70002
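To confirm the view was created (by default it exposes the logical unit to all hosts and all target ports), list the views for the logical unit using the same GUID:

# stmfadm list-view -l 600144f08e6c0f0000004a8331d70002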

That’s it!

I’m out….

3 Comments

  1. Michael said:

    Thanks for the info. I messed around with this a little, but I found that when using the COMSTAR directions, I was unable to add the storage to ESX. The error was something like “unable to open datastore,” as far as I can remember, when trying to add the storage. Any idea why that might be? The standard SUN stuff worked fine. Do you have any observations on performance between the two?

    September 27, 2009
  2. said:

    Michael,

    I am curious to know what would cause the COMSTAR iSCSI views to not be accessible to ESX. Which version of ESX are you using? I am currently using COMSTAR with ESX 4 – I have not tested it with our ESX 3.5 hosts yet; the plan is to upgrade them soon to gain the thin provisioning VMFS capabilities. Are you able to see the COMSTAR target using a different initiator, such as Microsoft’s iSCSI Initiator?

    Although I have read on other blogs that COMSTAR is significantly quicker than the iscsitadm package, I do not have an opinion either way. The reality is that I am only seeing about 400 Mbit/s of throughput on either target (iscsitadm or COMSTAR), which is less than I should be able to achieve with gigabit Ethernet.

    October 4, 2009
