iSCSI or NFS for ESXi storage?

Since I like to use cheap Dell R200 boxes (with even cheaper Crucial RAM!) for my ESXi hosts, I end up using separate boxes for RAID storage: Linux, with either software or hardware (3ware) RAID. I'm not sure which is better, but software RAID is definitely a lot cheaper, and doesn't seem to be any worse.
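For reference, the software-RAID side is just mdadm. A minimal sketch (the device names and array layout here are examples, not my actual setup):

```shell
# Create a two-disk RAID 1 mirror (device names are placeholders).
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc

# Watch the initial resync progress.
cat /proc/mdstat

# Persist the array definition so it assembles at boot.
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
```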

Anyway, I'm not sure whether to use NFS or iSCSI to export storage to my ESXi (ESX 3i) boxes.

Currently, I use both.

The advantages of iSCSI with VMFS on top are:

  • Potentially faster, since iSCSI is a block-level protocol … but I haven't noticed any difference, and I'm not sure how to measure the difference in a meaningful way.
  • Apparently I can share a VMFS between two ESXi boxes with no problem. This is probably not true of NFS … but I haven’t tried it.
  • I trust iSCSI more. No real reason for this, mind you, except that I've only tried it out very recently and have found it to be totally rock solid. I really like it. (I use "ietd", which I build from Subversion on Debian.) On the other hand, I've been using NFS for years and years and years (like: since last century), and, well, let's just say that I haven't always found Linux's NFS to be entirely solid. But I'm prepared to believe that things have changed.
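For the curious, an ietd target definition is only a few lines of /etc/ietd.conf. A sketch (the IQN and backing device are made up, not my actual config):

```
# /etc/ietd.conf — one block-level target exported to the ESXi boxes.
# Target name and LUN path are examples.
Target iqn.2009-01.com.example:storage.esx1
    # blockio bypasses the server's page cache and goes straight to disk
    Lun 0 Path=/dev/vg0/esx1,Type=blockio
```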

The advantages of NFS are:

  • I can use my own filesystem on the Linux box (I use XFS).
  • I can access (clone, back up) the individual files that make up a given virtual machine, instead of one great big opaque VMFS blob.
  • I pretty much have to use the OS's disk buffer/cache. I could probably sidestep it, but I don't. With iSCSI, though, I go straight to disk ("Type=blockio" in /etc/ietd.conf).
  • Much smaller storage overhead. If I carve out an iSCSI chunk for an ESX box, I carve it out in, like, 256GB chunks, and most of it goes unused. With NFS, I can carve it out really small and extend it as I need it. (Of course I use LVM on the Linux box. Duh.)
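The carve-small-and-grow workflow is straightforward with LVM plus XFS, which grows online. Roughly (volume-group, LV, and mount-point names are placeholders):

```shell
# Start small: a 32GB logical volume for an NFS export (names are examples).
lvcreate -L 32G -n esx-nfs vg0
mkfs.xfs /dev/vg0/esx-nfs
mount /dev/vg0/esx-nfs /srv/esx-nfs

# Later, when it fills up: extend the LV, then grow XFS while still mounted.
lvextend -L +32G /dev/vg0/esx-nfs
xfs_growfs /srv/esx-nfs
```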

So, I'm finding I tend to use NFS for all new ESXi boxes right now, and it seems to be fine.

Apparently it's best for the ESX host not to swap its guests to the same place it stores the guests' disks. (This is, I imagine, ESX's own swapping, rather than the guest's swapping, which happens inside the guest's virtual disk.) Storing swap alongside the guest's disks is the default in ESX, because without it you can't do VMotion. But we don't use VI Server, so we don't do VMotion, so we don't care. So I'm going to point ESX swapping at the internal 7200rpm SATA drive.

