Archive for May, 2008

Debian Lenny XFS/LVM problem

May 16, 2008

So there I was trying to create and mount an xfs filesystem on a debian testing (Lenny) box.

$ sudo mkfs.xfs /dev/scsi0/production
Warning - device mapper device, but no dmsetup(8) found
Warning - device mapper device, but no dmsetup(8) found
meta-data=/dev/scsi0/production  isize=256    agcount=4, agsize=8388608 blks
         =                       sectsz=512   attr=2
data     =                       bsize=4096   blocks=33554432, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096
log      =internal log           bsize=4096   blocks=16384, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
$ sudo mount /production/
mount: wrong fs type, bad option, bad superblock on /dev/mapper/scsi0-production,
       missing codepage or helper program, or other error
       In some cases useful info is found in syslog - try
       dmesg | tail  or so
$ dmesg | tail
......
XFS: bad version
XFS: SB validate failed
$

The solution, it turned out, was just to upgrade xfsprogs…

$ sudo apt-get update && sudo apt-get install xfsprogs

And, of course, to reformat the filesystem…

$ sudo mkfs.xfs -f /dev/scsi0/production

I also installed dmsetup…

$ sudo apt-get install dmsetup

…to silence those annoying “Warning - device mapper device, but no dmsetup(8) found” messages. I may yet come to regret this. Also, it might not actually have silenced them – I didn’t notice.
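
For what it’s worth, a quick sanity check after that sort of fix (assuming, as here, that /production is in /etc/fstab) is just to remount and eyeball the xfsprogs version and the reported filesystem type:

$ dpkg -l xfsprogs | tail -1
$ sudo mount /production && df -hT /production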

Install rsync on an rpath appliance

May 15, 2008

I have an rpath-based appliance that we use. It’s sugarcrm, fyi. This is the mysterious incantation I had to utter to install rsync on it:

conary update rsync=conary.rpath.com@rpl:1
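
To check it actually went in (assuming I’m remembering conary’s query subcommand right), something like this should list the installed trove:

conary query rsync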

I’m really starting to hate rpath and this conary crap. I mean, there’s nothing on this appliance? Rsync, strace, lsof … nothing. WTF?

Anyway, at least I’ve got rsync installed. But I’ve learnt my lesson – stay the fuck away from rpath-based appliances in future.

Do cores matter in ESX?

May 15, 2008

I wonder if the number of cores matters in your ESX server.

It’s not completely obvious to me that they do. I mean, sure, they obviously should, but I wonder if they actually matter in practice.

We have a dual core opteron (F2216, I think) and some quad core Xeons. I really don’t see any difference between the two. I should really try to come up with some scientific tests that purport to represent our usage. But for now, number of cores doesn’t seem to matter so much.
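
If I ever get round to it, a crude first stab at a test (not remotely scientific, and assuming sysbench is installed in each guest) would be to run the same CPU-bound job in a few guests at once on each host and compare wall-clock times:

$ sysbench --test=cpu --cpu-max-prime=20000 --num-threads=2 run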

That said, if I have a choice between a dual- and a quad-core CPU in a box for ESX, I’ll tend to choose the quad-core, even if it’s a little dearer.

I’ll also tend to favour a little more L2 cache, but again, this is for historical, emotional reasons, and I don’t have evidence to support this mildly irrational preference.

iSCSI or NFS for ESXi storage?

May 15, 2008

Since I like to use cheap Dell R200 boxes (with even cheaper Crucial RAM!) for my ESXi boxes, I end up using separate boxes for RAID storage: Linux, with either software or hardware (3ware) RAID. I’m not sure which is better, but software RAID is definitely a lot cheaper and doesn’t seem to be any worse.
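
For the software RAID case there’s nothing exotic going on; roughly this, with hypothetical drive names (and the VG name I use elsewhere), and LVM on top:

$ sudo mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sd[bcde]1
$ sudo pvcreate /dev/md0
$ sudo vgcreate scsi0 /dev/md0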

Anyway, I’m not sure whether to use NFS or iSCSI to export storage to my ESXi (ESX 3i) boxes.

Currently, I use both.

The advantages of iSCSI with VMFS on top are:

  • Potentially faster, since iSCSI is a block-level protocol … but I haven’t noticed any difference, and I’m not sure how to measure the difference in a meaningful way
  • Apparently I can share a VMFS between two ESXi boxes with no problem. This is probably not true of NFS … but I haven’t tried it.
  • I trust iSCSI more. No reason for this, mind you, except that I’ve tried it out only very recently and have found it to be totally rock solid. I really like it. (I use “ietd”, which I build from subversion on debian; there’s a minimal config sketch after this list.) On the other hand, I’ve been using nfs for years and years and years (like: since last century), and, well, let’s just say that I haven’t always found linux’s nfs to be entirely solid. But I’m prepared to believe that things have changed.
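
For what it’s worth, the ietd side of that is only a couple of lines in /etc/ietd.conf. A minimal sketch, with a made-up target name and LV:

Target iqn.2008-05.local.san:esx.lun0
    Lun 0 Path=/dev/scsi0/esx_lun0,Type=blockio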

The advantages of NFS are:

  • I can use my own filesystem on the linux box (I use xfs)
  • I can access (clone, back up) the files that make up a given virtual machine, instead of it all being one great big VMFS blob
  • With NFS I pretty much have to use the OS’s disk buffer/cache, which suits me fine. I could probably sidestep it, but I don’t. (With iSCSI, by contrast, I go straight to disk: “Type=blockio” in /etc/ietd.conf.)
  • Much smaller storage overhead. If I carve out an iSCSI chunk for an ESX box, I carve it out in, like, 256GB chunks, and it mostly goes unused. With NFS, I can carve it out really small and extend it as I need it. (Of course I use LVM on the linux box. Duh.) There’s a sketch of the carve-and-grow routine after this list.
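
To illustrate the carve-and-grow routine (names and sizes made up; the VG matches my earlier examples), it’s just LVM plus xfs_growfs, and xfs_growfs is happy to work on a mounted filesystem:

$ sudo lvcreate -L 50G -n esx_nfs scsi0
$ sudo mkfs.xfs /dev/scsi0/esx_nfs
# ...later, when it starts to fill up...
$ sudo lvextend -L +50G /dev/scsi0/esx_nfs
$ sudo xfs_growfs /mnt/esx_nfs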

So, I’m finding I tend to use nfs for all new ESXi boxes right now, and it seems to be fine.

Apparently it’s best for the ESX host not to swap its guests to the same place it stores the guests’ disks. (This is, I imagine, ESX’s own swapping, rather than the guest’s swapping, which happens inside the guest’s virtual disk.) Swapping alongside the disks is the default in ESX, though, because without it you can’t do vmotion. But we don’t use VI Server, so we don’t do vmotion. So we don’t care. So I’m going to do ESX swapping to the internal 7200rpm sata drive.

Scanning my life with the Fujitsu ScanSnap S510

May 15, 2008

I’ve fallen in love with the Fujitsu ScanSnap S510. It is an absolutely wonderful piece of hardware. Possibly my favourite-est piece of hardware I own right now.

I’m slowly scanning my life. I have lots of bits of paper that I don’t want to keep, but that I don’t want to throw out either. Old bank statements. Old credit card statements.

So I scan them. This ScanSnap makes it easy and relatively painless to do. The software is a little bit clunky, but it works pretty well.

I then store all my scans in a subversion repository. Someday I’ll use git, but not yet.
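
The subversion end of it is nothing clever, just the obvious add-and-commit (the layout and filenames here are made up):

$ svn add statements/2008-05-bank.pdf
$ svn commit -m "May 2008 bank statement"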

Why don’t Dell do cheap disk subsystems?

May 15, 2008

When I buy disk subsystems right now, I buy generic 1U or 2U machines with four or eight sata bays. I don’t even care if they’re hot-swap. I might even buy my own 500GB drives. (500GB seems to have the best £/GB ratio, and really it’s spindles I want, not TBs!)

I just wish Dell did a 1U or a 2U machine with lots of drive bays that they’d leave empty.

It irritates me that Dell don’t:

  • sell a cheap machine with lots of space for drives
  • charge a reasonable price for disk drives (instead they totally take the piss!)
  • ship a machine with all their (non-standard!) caddies and cables in place so I can put in my own drives.

Oh well, it’s not like I don’t understand. They have expensive disk subsystems that they don’t want to compete with. In the meantime, I’ll keep buying non-Dell kit to use for storage. I’ll live!

ESXi hardware support – disk subsystems

May 15, 2008

I’m trying to find a really good machine to run ESXi on. It has to be cheap (£500 max, excluding hard drives), it has to support lots of cheap ram (£35 per 2GB DIMM, max), and it has to have four cores.

Currently we use Dell R200 boxes for vmware. They only support 8GB of ram and one cpu, but it’s a quad core cpu. And the ram is cheap (£140 plus VAT from crucial for the full 8GB). And the servers only cost about £300 each. (You have to haggle, they’re not always available at this price, and all prices exclude VAT.)

So we end up buying other machines for disk, and we use ISCSI and NFS to make disk available to the ESX boxes. (I’ll write again later on which is better).

The problem is that ESXi doesn’t support any reasonably priced 3ware raid cards. Or, indeed, any reasonably priced raid cards. Which is a shame. I’d really like to have a couple of ESX boxes that cost well under a grand all in, that are completely self-contained, and have shit-hot disk subsystems. Or even shit-warm: four 500GB drives with decent hardware raid5 would make me very happy.

ESXi hardware support

May 15, 2008

One of the really great things about vmware ESXi is that it’s very happy to support older hardware.

Microsoft Hyper-V and Citrix XenServer insist that you run only the very latest hardware – amd64/em64t and hardware virtualization. Which isn’t completely unreasonable, really. But ESXi supports older boxes, with boring old i386 processors.

Sure, you can’t fit as many virtual machines on these older machines as you can on a modern box with hardware virtualization, but you can pick up older machines with shit-hot disk subsystems for peanuts on ebay. They make great build machines, for example, since cpu speed matters a lot less than disk speed for building, and you can stick a couple of VMs on (say) a Dell 2650 or something. (These older machines are probably also pretty good as nfs/iscsi boxes too, but I’m not doing that right now.)

In fact, the only drawback of ESXi’s inclusive approach to cpu support is that it’ll run on a modern machine that you forgot to enable hardware virtualization on – like pretty much anything from Dell! I keep being bitten by this!

Things that annoy me about the vi client

May 15, 2008

The virtual infrastructure client (the “vi client”) is a grand little program, but it has some niggles:

  • We have lots of independent ESXi boxes. We don’t want to use VI server to join them together. (We’re a software dev shop.) I’d like the VI client to remember usernames and passwords, or to allow them to be specified on the command line, so I can have a big long line of VI client shortcuts, each of which takes me into a different machine.
  • I’d like the VI client to show whether hardware virtualization is enabled on a given cpu.

Don’t forget to enable hardware virtualization in the BIOS…

May 15, 2008

It seems that all hardware that Dell ships has hardware virtualization disabled in the BIOS. Laptops and servers alike. I’m sure there’s a good reason for it, but it’s … weird!

Anyway, I’ve been bitten by this at least half a dozen times now. That is, I’ve forgotten to enable it, and only found out later. (Which means I have to physically be at the machine and reboot it, and F2 into the BIOS. Not a huge problem for a laptop, but irritating for a server.)
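
If I can get some flavour of Linux booted on the box first (a live CD will do), a rough check is possible. The cpuinfo flags only tell you the CPU supports it, not that the BIOS has it switched on; loading kvm-intel (or kvm-amd) and watching dmesg for a “disabled by bios” complaint is, I think, a better tell:

$ grep -E 'vmx|svm' /proc/cpuinfo
$ sudo modprobe kvm-intel && dmesg | tail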

The symptoms on a laptop with vmware server are more obvious than with ESX/ESXi/3i: if you’re running an i386 host OS, you won’t be able to run AMD64 VMs. So I recall, at least. They’ll just crash on boot.

ESXi is more annoying though – you might just never notice that you forgot to enable it, because it’s so forgiving. It just boots up and runs. But it’s slow. You have to really pay attention to notice it, but it does seem to be slower. Without hardware virtualization enabled, if a guest hasn’t been used in a while, it’s as if it’s been paged out or something.

I could be imagining this, and totally over-emphasising the impact of forgetting to enable hw virt (hardware virtualization), but it does seem that forgetting it is very easily done, annoying to fix, and only moderately serious in terms of performance.

Also, I can’t see any way to tell from looking at the VI (Virtual Infrastructure) Client whether hw virt is enabled or not for each CPU. That’d be high up on the list of things I’d like to see fixed in the VI client.