VMware Training

Jimmy Hess mysidia at gmail.com
Thu Feb 20 03:24:30 UTC 2014


On Wed, Feb 19, 2014 at 12:14 PM, Phil Gardner <phil.gardnerjr at gmail.com> wrote:

Since you are a Linux admin: VMware's professional training offerings are
basic "point and click" courses, not very Linux-admin friendly; there are no
advanced subjects or even CLI usage in "Install, Configure, Manage". If
you are already at the level of doing scripted ESXi installs and
configuring hosts for SAN storage and networking according to VMware's
best practices, then you should be able to work out the little that is
left by reading the ample documentation and a few whitepapers, unless
you need proof of completing a class as a certification prerequisite.

One way to get the extra experience would be to start by putting
together the simplest two-node or three-node cluster you can muster; try
various configurations and put it through its paces: break it in every
conceivable way, then fix it.

There is almost nothing extra to do for DRS/HA configuration, other than
designing the networking, storage, compute, and DNS properly so they are
resilient and can support those features.

You literally just check a box to turn on DRS and a box to turn on HA,
then select an admission control policy, an automation level, and a
migration threshold.

Of course, there are advanced options, and 'exotic' clusters where you
need to know the magic option names. You may also need to specify
additional isolation IP addresses, or tweak the timeouts for VMware Tools
heartbeat monitoring, to cut down on unwanted false HA restarts.
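
(If you end up scripting this against vCenter instead of clicking through
it, the same knobs are reachable via the API. The following pyVmomi sketch
is purely illustrative, not from any VMware course material; the vCenter
address, credentials, cluster name, and isolation address are all
hypothetical placeholders.)

    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    # Lab only: skip certificate verification for self-signed certs.
    ctx = ssl._create_unverified_context()
    si = SmartConnect(host="vcenter-lab.local",
                      user="administrator@vsphere.local",
                      pwd="secret", sslContext=ctx)
    content = si.RetrieveContent()

    # Locate the cluster by name (name is hypothetical).
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.ClusterComputeResource], True)
    cluster = next(c for c in view.view if c.name == "LabCluster")
    view.Destroy()

    spec = vim.cluster.ConfigSpecEx()
    # DRS: the "check a box" part, plus automation level and migration
    # threshold (the API's vmotionRate runs 1=aggressive .. 5=conservative).
    spec.drsConfig = vim.cluster.DrsConfigInfo(
        enabled=True, defaultVmBehavior='fullyAutomated', vmotionRate=3)
    # HA: enable it, and add an extra isolation address as an advanced option.
    spec.dasConfig = vim.cluster.DasConfigInfo(
        enabled=True, hostMonitoring='enabled',
        option=[vim.option.OptionValue(key='das.isolationaddress0',
                                       value='192.0.2.1')])

    cluster.ReconfigureComputeResource_Task(spec, modify=True)
    Disconnect(si)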

Options like these are not things you will find in the training classes;
you need to read the documentation and the literature on various blogs.
If you have the time, it would probably be best to read some of Duncan
Epping's and Scott Lowe's books to further solidify your understanding.


Ultimately, you are not going to be able to do this realistically without
servers comparable to what you would see in the real world, so a laptop
running ESXi may not be enough.

You could also find a company to lease you some lab hours to tinker with
other storage technologies; I'm sure by now there are online, cloud-based
rent-a-labs with EMC VNX, Dell EqualLogic, or HP storage hardware.

> vswitches/SAN config (but only with NFS datastores backed by a NetApp,
> unfortunately,

Also: NetApp units running current software, at least, can very easily
create an extra block-based LUN on top of a volume, to be served out as a
block target. You might want to ask your storage vendor's support what it
would take to get the keycode to turn on the FC or iSCSI licenses, so you
can present an extra 40GB scratch volume... Or you could download the
NetApp simulator to play with. :-O


All the ESXi documentation is online, and all the relevant software has a
60-day evaluation period after install. You just need to work through it:
read the installation directions, get things working in the lab, then
start trying out more complicated scenarios and the advanced knobs later;
see how things work.
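
(As a quick sanity check once a host or vCenter is up, a few lines of
pyVmomi will confirm the API is reachable and show what's running. A
minimal sketch, assuming pyvmomi is installed ("pip install pyvmomi"); the
host name and credentials are placeholders.)

    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()   # lab only: self-signed certs
    si = SmartConnect(host="esxi-lab.local", user="root",
                      pwd="secret", sslContext=ctx)
    content = si.RetrieveContent()

    # Walk the inventory and print every VM with its power state.
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    for vm in view.view:
        print(vm.name, vm.runtime.powerState)
    view.Destroy()
    Disconnect(si)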

Buying or scavenging a used server is probably the easiest option for
long-term playing; look for something with 32GB of RAM and 4 or more 2.5"
SAS drives. Try to have 100GB of total disk space in a hardware RAID10 or
RAID0 with 256MB or so of controller writeback cache, or an SSD; the idea
is to have enough space to install vCenter, Operations Manager, and a few
VMs.

A 3-year-old Dell 11G R610 or HP DL360 G6 likely falls into this
category.
Install ESXi on the server, then create 3 virtual machines that will be
"nested" ESXi servers; the OS of those VMs will be ESXi itself.

See:
http://www.virtuallyghetto.com/2012/08/how-to-enable-nested-esxi-other.html
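
(The write-up above covers the host-side prep. Just to illustrate the one
VM-level knob involved: in pyVmomi, the VM's ConfigSpec sets the guest OS
type to ESXi and exposes hardware virtualization to the guest. A sketch,
assuming vSphere 5.1+ and hypothetical names and sizes:)

    from pyVmomi import vim

    def nested_esxi_spec(name, datastore_name):
        # Minimal ConfigSpec for a VM whose guest OS will be ESXi 5.x itself.
        return vim.vm.ConfigSpec(
            name=name,
            guestId='vmkernel5Guest',   # guest type: VMware ESXi 5.x
            numCPUs=2,
            memoryMB=4096,
            nestedHVEnabled=True,       # pass hardware virtualization through
            files=vim.vm.FileInfo(vmPathName='[%s]' % datastore_name))

    # Creation is then a CreateVM_Task() call on a VM folder, e.g.:
    #   vm_folder.CreateVM_Task(config=nested_esxi_spec('nested-esxi-1',
    #                                                   'datastore1'),
    #                           pool=resource_pool)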

If you would rather build a desktop tower for ESXi, look for a desktop
motherboard with a 64-bit Intel processor, DDR2 ECC memory support with at
least 32GB of RAM, VT-d support, and onboard Broadcom or Intel networking.
Network controller and storage controller choices are key; exotic hardware
won't work.

Considering that vCenter itself wants a minimum of 12GB of RAM, and in
case you want to test out _both_ the vCenter virtual appliance and the
standard install on Windows (12GB each is 24GB before you run anything
else), about 32GB of RAM is great.

Alongside the official VMware HCL, there's a "white box" HCL:
http://www.vm-help.com/esx40i/esx40_whitebox_HCL.php

I would look to something such as the Iomega StorCenter PX6 or PX4, or the
Synology DS1512+, as an inexpensive shared storage solution for playing
around with iSCSI-based block targets. I think the Iomegas may be the
least-cost physical arrays on the official VMware HCL with VAAI support.

If you run your cluster's ESXi servers as nested virtual machines on one
physical server, you can also present the shared storage from another VM
running on that server's local disks.

Some software options are Linux, Nexenta, FreeNAS, Open-E, HP LeftHand,
Isilon, FalconStor, and Nutanix (I would look at the first three
primarily).

Or you can use a spare Linux machine for shared storage; I would suggest
an SSD for this, or, when using disks, something with enough spindles in
an appropriate RAID level to give you at least 400 or so sustained random
IOPS, so you can run 3 or 4 active VMs to play with without the whole
thing being appallingly slow (rough math below).
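
(To put rough numbers on that; these are rule-of-thumb figures of mine,
not vendor specs: a 10k RPM SAS spindle is good for roughly 140 random
IOPS, and RAID makes each host write cost extra back-end I/Os, about 2 for
RAID10, 4 for RAID5, 6 for RAID6.)

    # Back-of-the-envelope front-end IOPS after the RAID write penalty.
    def frontend_iops(disks, iops_per_disk, write_penalty, read_fraction):
        raw = disks * iops_per_disk          # aggregate back-end capability
        write_fraction = 1.0 - read_fraction
        # Every host write consumes write_penalty back-end I/Os.
        return raw / (read_fraction + write_fraction * write_penalty)

    # Four 10k SAS spindles (~140 IOPS each) in RAID10, 70/30 read/write mix:
    print(round(frontend_iops(4, 140, 2, 0.7)))   # ~430, clears the 400 bar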


FreeNAS / Nexenta / ZFS are also great options on a 64-bit system with
16GB+ of RAM to give to Solaris.
Finding a hardware configuration on which Solaris x86 will run properly
out of the box can be challenging.

Of course, if you have a spare Linux machine, you can also use that to
play around with VMs on NFS or iSCSI.




> Not sure if this list is the best place, but it is probably the only list
> that I'm on that won't give me a bunch of grief about the chosen technology.
> I looked at VMware's site, and there are a ton of options. I'm wondering
> if anyone has some basic suggestions or experiences.
>
> I'm a Linux admin by trade (RH based), with "ok" networking ability. I'm
> sufficiently versed in deploying scripted ESXi (including 5.x)
> installations for a specific environment, including vswitches/SAN config
> (but only with NFS datastores backed by a NetApp, unfortunately, no
> blockbased stores).
>
> --
-JH


