2011-11-06

New version of pseudo Lab Manager scripts (aka NLclonePodsofVapp.ps1)

[UPDATE NOTE: The work creating, providing and supporting scripts for ESX/ESXi/vCenter has been discontinued. Apologies for the inconvenience.]

I have made some usability and feature improvements based on my needs.

To see a video of version 3 click here. Click here to read more information about these scripts.



CHANGELOG

------ version 4

  • Promiscuous Mode at the vSwitch level is now enabled automatically (this is needed if you have any ESX/ESXi in a VM).
  • The vSwitch now has 1016 ports by default, instead of the standard 56.
  • The limit on the common name for the Pods has been raised from <15 characters to <23 characters.


------ version 5

  • The script detects whether the VMs in the source vApp are on different hosts (they must all be on the same host).
  • Ability to define the cluster as workspace.
  • Affinity rules are created for every Pod if DRS is enabled in the cluster (if it exists), independently of whether you select one host, multiple hosts, or a cluster.





EXAMPLES:

You can define the hosts it will work on manually
.\NLclonePodsofVapp.ps1 -source_vapp SRM-Parent-Pod -cloneName SRM-Test -endN 1 -startN 1 -hosts_list ("10.10.10.20","10.10.10.30")


You can define the hosts it will work on using the cluster the vApp is in
.\NLclonePodsofVapp.ps1 -source_vapp SRM-Parent-Pod -cloneName SRM-Test -endN 2 -startN 2 -use_cluster $true


Or you can just use the host the vApp is in (no need to specify anything)
.\NLclonePodsofVapp.ps1 -source_vapp SRM-Parent-Pod -cloneName SRM-Test -endN 3 -startN 3


You can create N Pods in these two ways (same result)
.\NLclonePodsofVapp.ps1 -source_vapp View-Parent-Pod -cloneName View-Test -endN 8
.\NLclonePodsofVapp.ps1 -source_vapp View-Parent-Pod -cloneName View-Test 8


If later you want more Pods with the same root name, you have to specify the starting point. This creates Pod09, Pod10, ..., Pod15
.\NLclonePodsofVapp.ps1 -source_vapp View-Parent-Pod -cloneName View-Test -endN 15 -startN 9


If start and end are the same number, only one Pod with that number is created
.\NLclonePodsofVapp.ps1 -source_vapp VSA-Parent-Pod -cloneName VSA-Test -endN 7 -startN 7


The affinity rules are created automatically whenever possible; you don't need to request them.

This means that if DRS is enabled, you can create multiple Pods that DRS will distribute across the hosts at power-on, while the affinity rules keep the VMs of each Pod on the same host.
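As a rough illustration of what the script sets up, here is a minimal PowerCLI sketch of a keep-together DRS rule for one Pod. The cluster and vApp names are made up, and the script creates these rules for you automatically; you would only do this by hand for a Pod created some other way.

```powershell
# Hypothetical names; the script normally creates these rules automatically.
$cluster = Get-Cluster -Name "Lab-Cluster"
$podVMs  = Get-VM -Location (Get-VApp -Name "SRM-Test-Pod01")

# A KeepTogether rule makes DRS keep all VMs of the Pod on the same host
New-DrsRule -Cluster $cluster -Name "KeepTogether-SRM-Test-Pod01" `
            -KeepTogether $true -VM $podVMs
```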



As VMware vCenter Lab Manager will not see further major releases, users have no option but to move to vCloud Director. But as Mike clearly says, vCD is not designed to be a replacement for Lab Manager. For those who find vCD too big for what they need, this scripted solution comes in handy.

Using pseudo Lab Manager



An example of how in a few minutes you can have this script cloning vApps for testing/teaching/etc.

I suggest you watch it in fullscreen or large size.

2011-06-26

Scripted linked-cloning & pseudo-LabManager with PowerCLI

[UPDATE NOTE: The work creating, providing and supporting scripts for ESX/ESXi/vCenter has been discontinued. Apologies for the inconvenience.]


I have created a group of scripts that allow you to:

  • Create a linked-clone of a VM
  • Create a linked-clone of a vApp (creates a new vApp and puts linked-clones of the original vApp's VMs inside it; the new vApp does NOT inherit the vApp configuration of the parent)
  • Create N linked-clone vApps, putting the VMs inside each vApp in an isolated network (PortGroup).



Basic Linked clones:

One of the many features of VMware View is the automated creation of pools of Desktops that share the same parent VM (linked clones).

I have created a PowerCLI script that allows you to create linked clones from a Parent VM in a matter of seconds. These VMs are pure linked clones, so the script does not perform any reconfiguration tasks on the OS.

Each of those linked-clones knows that it is a linked-clone, so when you delete it from disk it will not delete the virtual disk(s) of the parent VM.

I must say that I reused the code of Keshav Attrey [0] for the basic linked clone operation.
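For reference, the core of that technique is a CloneVM_Task call that combines a snapshot with the "createNewChildDiskBacking" disk move type. The sketch below is a simplified illustration of that vSphere API pattern, not the script itself; the VM, snapshot, and clone names are made up.

```powershell
# Sketch of the linked-clone call (after the technique in [0]); names are illustrative.
$vm   = Get-VM -Name "Parent-VM"
$snap = Get-Snapshot -VM $vm -Name "base"

$spec = New-Object VMware.Vim.VirtualMachineCloneSpec
$spec.Snapshot = $snap.ExtensionData.MoRef
$spec.Location = New-Object VMware.Vim.VirtualMachineRelocateSpec
# The clone's disks become delta files on top of the parent's disks
$spec.Location.DiskMoveType = "createNewChildDiskBacking"

# Clone into the same folder as the parent VM
$vm.ExtensionData.CloneVM_Task($vm.ExtensionData.Parent, "Linked-Clone-01", $spec)
```

Because the clone's disks are child backings of the parent's, deleting the clone removes only the delta files.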




Pseudo LabManager Pod cloning + Isolation:

VMware LabManager allows you to clone virtual environments multiple times (groups of VMs in a can, isolated from other VMs from the network perspective, also known as a Pod of VMs). Best of all, those clones are linked-clones, which drastically reduces the time and space needed to create them. For more information see its features [1].


Some of the benefits [2] of VMware Lab Manager are,

- Reproduce bugs and reduce time spent in the debug phase

- Better management of shared resources across teams


Indirect benefits are,

- Delivering better product support

- Easier troubleshooting for customer production problems

- Improved productivity and efficiency

- Reduce time finding spare servers

- No need to hoard servers and storage

- Save power, space & HVAC


The technical benefits are,

- Provision systems quickly.

- Restore previous configurations.

- Quickly make changes to a configuration, possibly via user self-service.

- Recycle system resources for other uses.



However, LabManager has some requirements, such as a LabManager server and a database. Those and other requirements may prevent you from using LabManager, or may make it a poor fit if all you want is simple & quick cloning of a Pod. You can of course put the VMs of the group in a vApp and clone that, but a full clone takes quite some time and storage, even if you are using thin-provisioned disks.


For that reason I have created another PowerCLI script that does the same core 'magic' that LabManager does, but with very few requirements and no LabManager expertise needed. Just one PowerCLI script to create the Pods and another to delete them (you can also delete them manually). The resulting VMs are more flexible than LabManager Pods because you can add/change/remove networks/CDs/vDisks/etc. directly from the vSphere Client while they are running. And they can be thin-provisioned whenever you want, so you don't have to do extra magic [4] to convert them to thin.


This other script does the following things:

  • Creates an internal-only vSwitch
  • Creates a Port group on that vSwitch for every Pod
  • Creates the N linked clones of the specified vApp
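In PowerCLI terms, the networking part of those steps corresponds roughly to the following sketch. The host address, switch name, and Pod count are made up for illustration; the script derives them from its parameters.

```powershell
# Illustrative values; the script computes these from its parameters.
$vmhost = Get-VMHost -Name "10.10.10.20"

# Omitting -Nic leaves the vSwitch without uplinks, i.e. internal-only
$vs = New-VirtualSwitch -VMHost $vmhost -Name "vSwitch-Internal-Student" -NumPorts 1016

# One isolated port group per Pod
1..3 | ForEach-Object {
    New-VirtualPortGroup -VirtualSwitch $vs -Name ("Student-Pod{0:D2}" -f $_)
}
```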

LabManager had its own user interface to access the Pods. In this case you use the vSphere Client, which allows multiple concurrent users. And because each Pod is a vApp, you can set the permissions/rights on each one. For example, a user may be able to log in to vCenter but only use the vApp/Pod he/she has been assigned. In other words, you stay in control while you get the benefits and simplicity of this solution.





========= Commands help (using PowerShell help) =========

Just type:
# help .\commandname   (if you are in the folder of the program)

Every script has documentation and examples.


======== General Instructions ================

1. Install PowerCLI (in vCenter or a computer able to reach vCenter)
http://www.vmware.com/support/developer/PowerCLI/index.html

2. Open PowerCLI
Start > Programs > VMware > VMware vSphere PowerCLI > VMware vSphere PowerCLI.

3. Connect to vCenter with:
# connect-viserver -server <vCenter-hostname-or-IP>
or
# connect-viserver -server localhost  (if you installed PowerCLI on the vCenter itself)

4. By default, PowerShell's execution policy prevents running custom scripts. Run this to change that behavior:
# Set-ExecutionPolicy Unrestricted
The setting persists across reboots, so you don't need to set it again.

5. Copy the scripts to some location and move to it from PowerCLI/PowerShell

6. In vCenter
Put some VMs in a vApp (minimum two).
Take a snapshot of each of those VMs. Ideally they are powered off, but they can be running (just don't snapshot the memory).
Make sure they are all on the same host.
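The snapshot part can be done in one go from PowerCLI; here is a minimal sketch, assuming the example vApp name from step 7 and an arbitrary snapshot name:

```powershell
# Snapshot every VM inside the source vApp, without capturing memory
$vms = Get-VM -Location (Get-VApp -Name "SourceVappName")
foreach ($vm in $vms) {
    New-Snapshot -VM $vm -Name "base" -Memory:$false
}
```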

7. Run (example):
# .\NLclonePodsofVapp.ps1  SourceVappName  Student  3

This will create Student-Pod01, Student-Pod02, Student-Pod03 and a vSwitch called vSwitch-Internal-Student

8. You can power on the Pods/vApps and verify that they are linked clones of the original VM, and that the VMs inside each Pod are in an isolated group.

9. When you are done with your experiments/tests and want to destroy the Pods and the vSwitch, run:
# .\removeLclonePods.ps1 Student
This will remove
vApps: Student-Pod01, Student-Pod02, Student-Pod03
vSwitch: vSwitch-Internal-Student

The parent vApp/VMs won't be touched, as these linked-clones are aware of their nature and therefore won't delete the disks of the parent VM when they are removed from disk.




======= Requirements =========
Note that these scripts have gone through limited testing. For certain environments more requirements may apply. If you find any not listed here please let me know.

  • PowerCLI

  • The parent VMs must have a snapshot taken prior to the cloning operation

  • All the VMs in the vApp must be on the same host. It makes no sense to spread the VMs forming a vApp/Pod across hosts, because the vSwitch they are connected to has no physical NIC, so they must stay together. The script NLclonePodsofVapp.ps1 allows you to specify multiple hosts for the creation of the Pods, but you still need to ensure the VMs stay together. Specifying multiple hosts only ensures that the required networking is created on all the hosts provided; it does not move the vApps/VMs.

  • The VMs inside the vApp must have unique names in vCenter inventory, even across Datacenters. If there are duplicated names, you will get an error like:
          Method invocation failed because [System.Object[]] doesn't contain a method named 'CloneVM_Task'

  • The size of the common name for the Pods has to be <15 characters.

======== FAQ =================
Q: Do I need vCenter to use this?
A: Yes. ESX/ESXi doesn't know anything about cloning a VM or about what a vApp is.

Q: Which version of PowerCLI should I use?
A: Latest. 4.x or higher.

Q: Will this work with ESXi 5.0 / PowerCLI 5.0?
A: Yes.


Q: Can I make a linked-clone of a linked-clone?
A: Yes you can. You just need to ensure it has a snapshot before you clone it.

With VMware and these scripts you can do incredible deployments with very very little space.

See here an example of a complete virtualized SRM deployment (SRM in a box):

Parent-W2008
    -> L_with_vSphereClient (Installed vSphere Client on the windows and took a snapshot)
        -> L_VC_Protected (Changed hostname/IP and installed vCenter Server)
        -> L_VC_Recovery  (Changed hostname/IP and installed vCenter Server)
    -> L_SRM_Protected (Changed hostname/IP and installed SRM Server)
    -> L_SRM_Recovery  (Changed hostname/IP and installed SRM Server)
Parent-vESX
    -> L_vESX_Protected (Reset Defaults on DCUI (Direct Console User Interface) and configured networking)
    -> L_vESX_Recovery  (Reset Defaults on DCUI (Direct Console User Interface) and configured networking)
Parent-StorageSimulator
    -> L_Storage_Protected (Reconfigure networking)
    -> L_Storage_Recovery  (Reconfigure networking)

Parent = Real/Full VM
L_ = Linked clone

So here you have 8 VMs created from only 3 real/full VMs. And 2 of those 8 are linked-clones of another linked-clone.

Once all these L-VMs have a snapshot, you can put them all in a vApp and create N SRM-boxes-in-a-box using the script NLclonePodsofVapp.ps1. Minimum space consumption with maximum flexibility. They will need quite some memory, but memory sharing will be very high. I have been able to run 12 VMs of 4GB RAM each on a physical ESXi with only 16GB RAM, and I wasn't even using all of it.



======== Notes ===============
  • If you are going to have VMs running ESX/ESXi, you NEED to enable Promiscuous mode on the vSwitch they are connected to.

  • If you will connect any of the linked clones to the external network in a way that they face a clone of themselves, ensure the NIC(s) of the Parent VM are set to Auto, otherwise there will be a MAC conflict once they see each other. The Auto setting generates new MACs on the NICs of the clones of that VM.
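Regarding the first note: if you need to enable Promiscuous Mode by hand (version 4 of the script does it for you), a sketch using the PowerCLI security-policy cmdlets follows. These cmdlets are available in PowerCLI 5.0 and later; the host and switch names are illustrative.

```powershell
# Allow promiscuous mode on the internal vSwitch (required for nested ESX/ESXi)
Get-VirtualSwitch -VMHost (Get-VMHost -Name "10.10.10.20") -Name "vSwitch-Internal-Student" |
    Get-SecurityPolicy |
    Set-SecurityPolicy -AllowPromiscuous $true
```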

======== Source code  ===========
http://communities.vmware.com/thread/324193

======== References ===========

[0] Creating a linked clone from a snapshot point in PowerCLI : http://www.vmdev.info/?p=40

[1] VMware vCenter Lab Manager : http://www.vmware.com/products/labmanager/features.html

[2] Why Developers & Testers will LOVE Vmware's Lab Manager : http://geekswithblogs.net/SabotsShell/archive/2008/07/13/why-developers-amp-testers-will-love-vmwares-lab-manager.aspx

[3] VMware vSphere PowerCLI : http://www.vmware.com/support/developer/PowerCLI/index.html

[4] Automatic thinning of virtual disks with makeThin : http://vmutils.blogspot.com/2011/06/automatic-thinning-of-virtual-disks.html


======= Versions =============

When I saw the post on [0] I barely knew how to use PowerShell/PowerCLI. It took me 1.5 days to put this script (v1) together. The vApp part came later.

# v1 All basic functionality working. The clones are placed inside vApps.

# v2 Added the ability to link-clone a vApp, so there is no need to reference the list of VMs individually.

# v3 Added the optional possibility of giving a list of hosts on which the networking infrastructure will be created.

======= keywords =============
LabManager, Lab Manager, replacement, alternative, vCD, vCloud Director, less complexity

Screenshots (of version 1)




2011-06-09

Automatic thinning of virtual disks with makeThin

Here I present my latest tool for ESX: a script that automatically converts thick virtual disks to thin.

It has been used for almost a year in my team across different centers, and the storage savings are >50%.


makeThin Documentation

Happy thinning!

[Update 8/January/2013]
Victor was kind enough to create a fork of this program for ESXi. You can find all the info on https://github.com/terreActive/makeThin
[End of update]


PS: If you link the documentation/script, please link to this post or the blog. The location of the underlying document may change; the blog will not.