Deploying Nutanix AHV Community Edition on vSphere/ESXi

If you’re searching for a new hypervisor and want to take a look at Nutanix’s Acropolis Hypervisor (AHV), there is a Community Edition you can run in your own homelab. While it doesn’t come with all the bells and whistles of the full product, it does give you a good idea of how it all works. Here’s how to deploy it as a VM on a vSphere ESXi host.

 

A few notes before we begin

 

CPU: Use Intel CPUs with Sandy Bridge microarchitecture or later generations (VT-x and AVX), or AMD CPUs with Zen microarchitecture or later generations. You’ll need at least four cores, with two cores dedicated to the Controller VM (CVM). CE doesn’t support Intel Efficient cores (E-cores).

 

RAM: Minimum system memory is 32 GB, but 64 GB is recommended, especially if using deduplication and/or compression.

 

STORAGE: Use a maximum of four SSD or HDD drives per node. Cold tier storage: use at least 500 GB. Hot tier storage: use a single non-NVMe SSD of at least 200 GB. Hypervisor boot device: must have at least 32 GB of capacity.

 

NIC: Supported NICs are Intel 1 GbE (e1000e) and 2.5 GbE (igc), or Realtek 1 GbE and 2.5 GbE (r8169).

 

Cluster Size: Community Edition allows you to create one-, three-, or four-node clusters in your environment. You can’t expand a single-node cluster; you must destroy it and create a new three- or four-node cluster.

 

Downloading the AHV ISO

 

First of all you’ll need to download the ISO file. You’ll need to register with the Nutanix Community first by visiting https://www.nutanix.com/uk/products/community-edition/register

Once registered you can then access the link and download the latest ISO file.

Save the ISO to one of your datastores.
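If you prefer the command line to the datastore browser, one way to get the ISO onto a datastore is to copy it over SSH, assuming SSH is enabled on the ESXi host. The datastore path and ISO file name below are placeholders, so adjust them to match your environment:

# Copy the downloaded CE ISO to an ESXi datastore over SSH (example names only)
scp nutanix-ce.iso root@esxi-host:/vmfs/volumes/datastore1/ISO/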

 

VM & Network Configuration

 

There are a few different ways to deploy AHV. We will be deploying a single node.

We will need a VM with the following spec:

Compatibility: ESXi 7.0 U2 and later
Guest OS Family: Linux
Guest OS Version: CentOS 7 (64-bit)

CPU: 4 vCPUs (Expose hardware assisted virtualization to the guest OS)
Memory: 64 GB (as noted above, 32 GB is the minimum)
Hard disk 1: 32 GB (boot drive)
Hard disk 2: 200 GB (hot tier storage)
Hard disk 3: 500 GB (cold tier storage)

Network Adapter: VMXNET3

VM advanced config setting: disk.EnableUUID = TRUE (set in steps 17 to 20 of the deployment below)

We will also need two IP addresses:

One for the AHV host
One for the Controller VM (used for storage I/O)

You will also need to enable the following security settings on your vSwitch (an esxcli alternative is sketched after this list):

MAC Address Changes
Forged Transmits 
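If you’d rather set those vSwitch security options from the ESXi shell instead of the UI, a minimal sketch looks like this. It assumes a standard vSwitch named vSwitch0 (a placeholder, substitute your own), and the same options can also be overridden per port group:

# Allow MAC address changes and forged transmits on a standard vSwitch
esxcli network vswitch standard policy security set --vswitch-name=vSwitch0 --allow-mac-change=true --allow-forged-transmits=true

# Confirm the policy took effect
esxcli network vswitch standard policy security get --vswitch-name=vSwitch0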

 

Deployment

 

1. In vSphere/ESXi, click “Create / Register VM”

 

Step 0

 

2. Click “Create a new virtual machine” and click “Next”

 

Step 1

 

3. Click into “Name” and enter a name for the VM. Under “Compatibility” select “ESXi 7.0 U2 virtual machine”, under “Guest OS family” select “Linux”, and under “Guest OS version” select “CentOS 7 (64-bit)”, then click “Next”

 

Step 2

 

4. Select your datastore and when ready click “Next”

 

Step 3

 

5. Amend the “CPU” count to 4 and expand the CPU dropdown.

 

Step 4

 

6. Under “Hardware virtualization”, select the checkbox option “Expose hardware assisted virtualization to the guest OS”

 

Step 5

 

7. Change the “Memory” unit to “GB” and set it to 64 GB.

 

Step 6

 

8. Amend the first hard disk to 32 GB (this will be our boot disk), then click the drop-down menu icon.

 

Step 7

 

9. Select the option for “Thin provisioned”

 

Step 8

 

10. Click “Add hard disk” to add more storage drives

 

Step 9

 

11. Click “New Standard hard disk”

 

Step 10

 

12. Amend the new hard disk to be 200 GB (hot storage tier) and set it to “Thin provisioned”. Repeat the process to add another hard disk, sized at 500 GB (cold storage tier) and also thin provisioned.

 

Step 11

 

13. Under “CD/DVD Drive 1”, click and select “Datastore ISO file”. Browse to the datastore where you saved your ISO file and select it.

 

Step 12

 

14. Click “VM Options”

 

Step 13

 

15. Expand “Boot Options” and ensure that “BIOS” is selected

 

Step 14

 

16. Expand “Advanced”

 

Step 15

 

17. Under “Configuration Parameters”, click “Edit Configuration”

 

Step 16

 

18. Click “Add parameter”

 

Step 17

 

19. Click “Click to edit key” under “Key”

 

Step 18

 

20. Add the text “disk.EnableUUID” under “Key” and the value “TRUE” under “Value”. When ready click “Next”.

 

Step 19

 

21. Click “Next”

 

Step 20

 

22. Confirm your configuration and click “Finish”

 

Step 21

 

23. Select your newly created VM and open a remote console to it. Click “Power on this virtual machine”.

 

Step 22

 

24. Once booted you should see this screen. Press Tab to move to each field, entering the “Host IP Address”, “CVM IP Address”, “Subnet Mask”, and “Gateway”. Tab to “Next Page” and press Enter.

 

Step 23

 

25. Press the down arrow to scroll through the EULA (you have to scroll through it fully before you can continue). Tab down to the acceptance checkbox and press the spacebar. Tab to “Start” and press Enter.

 

Step 24

 

26. The next part of the installation will take around 30 minutes to complete. You should see the numbers (highlighted) on the left side increasing as it installs.

 

Step 25

 

27. Once you get the below message, the initial installation is complete. Type “y” and press “Enter”

 

Step 26

 

28. Once the VM has rebooted, run a test ping to both the host and the CVM IP addresses. Then SSH to the CVM using the credentials:

 

Username: nutanix

Password: nutanix/4u
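As a quick sketch from a Linux or macOS machine, assuming hypothetical addresses of 192.168.1.50 for the AHV host and 192.168.1.51 for the CVM:

# Confirm both addresses respond before logging in (example IPs)
ping -c 4 192.168.1.50
ping -c 4 192.168.1.51

# SSH to the CVM using the default credentials above
ssh nutanix@192.168.1.51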

 

Once connected you need to run the following command to build the cluster:

 

cluster --dns_servers=<DNS IP> --redundancy_factor=1 -s <CVM IP> --cluster_name=<Cluster name> create

 

Step 27
 

This will create a single-node cluster. Press Enter to start the build process.
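As a worked example using hypothetical values (a DNS server of 192.168.1.1, the CVM address 192.168.1.51 from earlier, and a cluster name of CE-Cluster1), the command would look like this:

# Single-node cluster create with example values substituted in
cluster --dns_servers=192.168.1.1 --redundancy_factor=1 -s 192.168.1.51 --cluster_name=CE-Cluster1 create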

 

29. As the install progresses, you will see the list of services reduce.

 

Step 28

 

30. Once you get to this screen the install has finished.

 

Step 29

 

31. Open a browser and go to https://<CVM IP>:9440. You’ll be prompted with a security warning you’ll need to accept.
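If you want to check that Prism is listening before opening the browser, a quick test from a shell (using the same hypothetical CVM address as earlier) is:

# -k skips certificate validation, since the CVM presents a self-signed certificate
curl -k -I https://192.168.1.51:9440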

 

Step 30

 

32. Log in with the default credentials:

 

Username: admin

Password: nutanix/4u

 

Step 31

 

33. You will then be prompted to change the admin password.

 

Step 32

 

34. Once changed you will be prompted to log back in using the new credentials you set.

 

Step 33

 

35. Next, log in with your Nutanix Community account (the credentials you used to register and download AHV CE).

 

Step 34

 

36. Once logged in you should land on the main dashboard. Because we only have a single node, the errors and alerts shown on this screen are expected.

 

Step 35

 

37. If you shut down your AHV cluster in your home lab, then when you fire it back up you can SSH to the CVM IP address and run the command:

 

cluster status

 

If all services show a status of UP, you should be good to connect via the browser.
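A minimal sketch of that check, again using the hypothetical CVM address from earlier (and noting that, if services have not started after a power-on, the cluster start command should bring them up):

# Check that all cluster services report UP
ssh nutanix@192.168.1.51
cluster status

# If services are still down after powering the lab back on, start them
cluster start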

 

Step 36