Business Internet and Telephony, since 1985

How to break the 256GB disk barrier on ESXi 4.1

Not so long ago, storage wasn't so cheap. I remember when we used to talk about how cheap storage had become when you could buy a Conner 80MByte IDE HDD for about $80 - a buck a meg...

We may laugh at those prices now, but storage demands in the data center and home alike keep growing, and so do the options for building stable, inexpensive solutions. In this article we discuss using OpenMediaVault to get the most out of a fully warrantied, redundant, reliable, refurbished server - one you should be able to pick up for under $400, and one that will far exceed the combined computing power of any laptop and USB NAS device you can pick up at your local Best Buy. Carrier-grade equipment that only five years ago might have cost as much as the car you're driving now...

So without further ado, here's our quick little tutorial on how to exceed the 256GB disk size barrier on ESXi 4.1.

Why? Because you want to install OpenMediaVault and serve up iSCSI to your VMware cluster - but if you go to all the trouble of installing OMV, you'll quickly discover that the largest disk you can create is 256GB.

Considering that you probably already have hardware RAID - 500GB or maybe 2TB on the host itself - you might be tempted to just create eight disks of 256GB each and then RAID them together under OMV. But why, when you already have a single RAID array on the host?

And why can't you create a disk larger than 256GB? 

The answer isn't really that you can't create a disk larger than 256GB, but rather that you can't create a file larger than that when your block size is only 1MB (the default for ESXi 4.1). From OMV's perspective it may look like a disk, but from the ESXi host's perspective it's a file.

So, what we need to do is decide what block size would be best. I'm going to use the example of a 546GB RAID array on the host, because that's roughly what you get with five 146GB Ultra320 SCSI drives configured as a RAID 5 array with a single hot spare for failover. In such a scenario, you could lose a drive and the array would rebuild onto the spare rather than keep operating in degraded mode while you find time to replace the bad physical drive. After that, you could still lose another drive and keep running (not unheard of with RAID 5).
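As a quick sanity check on that 546GB figure, here's a back-of-envelope sketch in shell arithmetic. It assumes the hot spare sits outside the five active drives, and notes that a decimal-rated "146GB" drive holds only about 135 GiB - so you land around 540 GiB, in the same ballpark as 546GB depending on which units your RAID controller reports:

```shell
# RAID 5 usable space = (active drives - 1) x per-drive capacity.
# A "146GB" SCSI drive is decimal-rated: 146 x 10^9 bytes ~ 135 GiB.
active=5
per_drive_gib=$(( 146 * 1000000000 / 1073741824 ))
echo "usable: ~$(( (active - 1) * per_drive_gib )) GiB"
```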

Of course, once you break the 2TB barrier, RAID 5 is all but useless - you need RAID 6 or RAID 10 (RAID 10 or RAID 0+1 actually give much better performance).

Now, you may be tempted to just make the block size 4MB and be done with it, but keep in mind that whatever the block size is, that's the minimum space each file is going to take up. Hm... I'll leave you to consider those ramifications, but suffice it to say there's some economy lost with larger block sizes. (Remember: with a 1MB block size, your file - the max size *disk* you can create under OMV - tops out at 256GB; with a 2MB block size, it tops out at 512GB.)

Okay, so with a physical RAID array this size, we get more economy out of a 2MB block size than a 4MB block size (the latter of which could yield file - i.e., OMV disk - sizes of up to 1TB).

Here's how that works out:

  • 1MB block size = 256GB max file size
  • 2MB block size = 512GB max file size
  • 4MB block size = 1TB max file size
  • 8MB block size = 2TB max file size

With ESXi 5 you don't have this issue, but you can get older 1U ProLiant or PowerEdge servers in great condition for a song and build yourself a really nice iSCSI SAN for your home. Many of those older servers don't have virtualization extensions to enable in their BIOS, even though their Opterons and Xeons kick way more bootie than you really need to get this job done.

Okay then. So you're asking yourself: but what about the 40GB or so left over? Well, part of that's going to be taken up by the OMV OS itself, and as for the remaining space... put another VM or two there, silly!

Let's get down to the nitty gritty now...

If you have any VMs already created, or any ISOs on the existing datastore, you need to move/migrate them somewhere else - because this is a lot like fdisking that old workstation HDD with Windows on it: poof! If you need to move anything off of your ESXi host... well, I'll just leave that as an exercise for you to figure out (Converter, Veeam, copying the vmx and vmdk files, etc.).

First, select the host from your vSphere Client or vCenter (this is going to be easy peasy), then click the "Configuration" tab. In the left pane click "Storage", then select your datastore (it's probably called datastore1, huh?). Right above it you'll see some links - Refresh, Delete, Add Storage... You're only going to need the latter two: "Delete" and "Add Storage". If Delete doesn't remove the entire datastore because you get a message that it's in use, try right-clicking the datastore and deleting it that way - I know you don't have any guest OSes running... at least I would hope not!

Give it a minute, and let things settle down. Eat a carrot or a Snickers bar.....

OK. Now what's next? Yup - Add Storage. The wizard will let you choose your block size (2MB for this one, right?). And this time give it a cool name like "datastore-bobby" or "ds-bobby" (or something else descriptive, in case your ESXi host's name isn't *bobby*).

This time, try a Butterfinger, or make a cup of tea, and give things some time to settle in. The datastore appears almost immediately, but if you were to ssh into the ESXi box, you'd notice that you probably can't create your iso and other dirs just yet with a:

# mkdir -p /vmfs/volumes/ds-bobby/iso/omv; cd /vmfs/volumes/ds-bobby/iso/omv

Once you've allowed for a bit of housekeeping, you can create your dirs, then do a:

# wget
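The wget above needs the OMV installer ISO's URL, which the tutorial leaves off since it changes with each release. The URL below is a placeholder - grab the current link from the openmediavault.org download page - and the command is echoed as a dry run:

```shell
# Placeholder URL - substitute the real ISO link from openmediavault.org
ISO_URL="http://example.org/openmediavault_amd64.iso"
# Run from inside /vmfs/volumes/ds-bobby/iso/omv on the ESXi host;
# echoed as a dry run here so nothing is actually fetched.
echo wget "$ISO_URL"
```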

And you're ready to create a new VM with OMV!

Make your partition adequate for your needs, shut down the OMV VM, and add a whopper of a disk - 512GB. Then start your VM, add that disk in OMV, and continue on with creating your SAN.

Now, I don't know why so many of the docs you find in the support forums say to do all of this from the command line, or claim it can ONLY be done from the command line on the ESXi host - there's really no need. I've done quite a few like this on some pretty screamin' machines without a hitch, and thought I'd pass the tip on to you. Um... speaking of tips: my hat's there on the sidewalk, next to my saxophone.
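For completeness, the command-line route those forum docs describe comes down to a single vmkfstools call. The device path below is a made-up example - list yours with `ls /vmfs/devices/disks/` - and the command is echoed here as a dry run rather than executed:

```shell
# Hypothetical device path - find the real one with: ls /vmfs/devices/disks/
DISK="/vmfs/devices/disks/naa.0123456789abcdef:1"
# -C vmfs3 creates the filesystem, -b sets the block size, -S sets the label.
# Echoed as a dry run; drop the echo to actually run it on the ESXi host.
echo vmkfstools -C vmfs3 -b 2m -S ds-bobby "$DISK"
```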

Easy Peasy! Right?

The main video accompanying this article focuses on ESXi with FreeNAS, which is a bit different (much more involved) than simply installing the iSCSI plugin and configuring your targets on OMV, so I've included a couple of links to other videos on how to set up iSCSI on Linux: one dealing with iSCSI on SuSE Linux, and one OMV-specific video showing how to set up your targets after you install the plugin (that one moves very fast and has no audio, so you may need to pause it a lot and take notes).

I hope that helps, and May the Fourth be with you! Or... happy Cuatro de Mayo! Whichever floats your boat.