jra at baylink.com
Thu Feb 20 03:46:22 UTC 2014
Why bother with a clustering FS, then, if you cannot actually /use it/ as one?
On February 19, 2014 10:44:22 PM EST, Jimmy Hess <mysidia at gmail.com> wrote:
>On Wed, Feb 19, 2014 at 2:06 PM, Jay Ashworth <jra at baylink.com> wrote:
>> ----- Original Message -----
>> > From: "Eugeniu Patrascu" <eugen at imacandi.net>
>> My understanding of "cluster-aware filesystem" was "can be mounted at
>> physical block level by multiple operating system instances with
>> safety". That seems to conflict with what you suggest, Eugeniu; am I
>> missing something (as I often do)?
>When one of the hosts has a virtual disk file open for write access on
>a VMFS cluster-aware filesystem, it is locked to that particular host,
>and a process on a different host is denied the ability to write to the
>file, or even to open the file for read access.
>Another host cannot even read or write metadata about the file's
>directory entry; attempts to do so get rejected with an error.
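As an aside for readers unfamiliar with exclusive file locking: the behavior described above is loosely analogous to POSIX advisory locks. A minimal Python sketch (illustrative only -- VMFS uses its own on-disk lock records, not flock(), and the path here is made up):

```python
import fcntl
import os

# Illustrative only: two open file descriptions stand in for two hosts.
path = "/tmp/demo.vmdk"  # hypothetical file

fd_host_a = os.open(path, os.O_RDWR | os.O_CREAT)
fcntl.flock(fd_host_a, fcntl.LOCK_EX | fcntl.LOCK_NB)  # "host A" takes the lock

fd_host_b = os.open(path, os.O_RDWR)
denied = False
try:
    fcntl.flock(fd_host_b, fcntl.LOCK_EX | fcntl.LOCK_NB)  # "host B" tries
except BlockingIOError:
    denied = True  # refused, much as a second ESXi host would be

print("host B denied:", denied)
```

With LOCK_NB the second attempt fails immediately instead of blocking, which mirrors the "rejected with an error" behavior described above.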
>So you don't really have to worry all that much about "as long as you
>access the same files"; although certainly you should not try to.
>Only the software in ESXi can access the VMFS --- there is no ability to
>run arbitrary applications.
>(Which is also why I like NFS more than shared block storage; you can
>conceptually use a storage array feature
>to make a copy-on-write clone of a file, take a storage-level snapshot,
>and then do a granular restore of a specific VM, without having to
>restore the entire volume as a unit.
>You can't pull that off with a clustered filesystem on a block target!)
>Also, the VMFS filesystem is cluster aware by method of exclusion (SCSI
>Reservations) and separate journaling.
>Metadata locks are global in the VMFS cluster-aware filesystem. Only
>one host is allowed to write to any of the metadata on the entire
>volume at a time, which results in a performance bottleneck, unless you
>have the VMFS extensions and your storage vendor supports the ATS
>(Atomic Test-and-Set) primitive.
>For that reason, while VMFS is cluster aware, you cannot necessarily
>have a large number of cluster nodes,
>or more than a few dozen open files, before performance degrades due to
>the metadata bottleneck.
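To illustrate the bottleneck described above: without ATS, metadata updates behave as if every host had to take one volume-wide lock; with ATS-style atomic test-and-set, each on-disk lock record can be claimed independently. A rough Python sketch (all names hypothetical, threads standing in for hosts):

```python
import threading

# Without ATS: a single volume-wide lock serializes ALL metadata updates,
# no matter which file a host is touching.
volume_lock = threading.Lock()

def update_metadata_global(host, target):
    with volume_lock:  # every host queues here for every update
        pass  # ... write metadata for `target` ...

# With ATS: each lock record can be test-and-set independently, so
# updates to unrelated files do not serialize against each other.
per_file_locks = {f: threading.Lock() for f in ("vm1.vmdk", "vm2.vmdk")}

def update_metadata_ats(host, target):
    with per_file_locks[target]:  # contends only with writers of `target`
        pass  # ... write metadata for `target` ...
```

Under the global scheme, adding hosts or open files just lengthens the queue on the one lock, which is the degradation being described.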
>Another consideration is that, in the event that you have a power
>failure which simultaneously impacts your storage array and all your
>hosts, you may very well be unable to regain access to any of your
>files until the specific host that had that file locked comes back up,
>or you wait out a ~30 to ~60 minute timeout period.
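That timeout behavior can be pictured as a lease: a crashed host never renews its lease, so other hosts must wait for it to expire before they can reclaim the lock. A toy Python sketch (2-second lease instead of the ~30 to ~60 minutes cited above; all names hypothetical, not the actual VMFS on-disk format):

```python
import time

LEASE_SECONDS = 2.0  # toy value; the post cites ~30-60 minutes for VMFS

# Hypothetical on-disk lock record: owner plus the time it last renewed.
lock_record = {"owner": "host-a", "stamped_at": time.monotonic()}

def try_acquire(host, record):
    """Take the lock only if it is free or its lease has gone stale."""
    now = time.monotonic()
    if record["owner"] is None or now - record["stamped_at"] > LEASE_SECONDS:
        record["owner"] = host
        record["stamped_at"] = now  # stale lease reclaimed
        return True
    return False  # lease still fresh: must wait it out

print(try_acquire("host-b", lock_record))  # False while the lease is fresh
time.sleep(LEASE_SECONDS + 0.1)            # "host-a" never renews (it crashed)
print(try_acquire("host-b", lock_record))  # True once the lease expires
```

If the crashed host comes back first, it can simply renew its stamp and resume, which is the "comes back up" path in the paragraph above.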
>> -- jra
Sent from my Android phone with K-9 Mail. Please excuse my brevity.