All-In-One (ESXi + virtualized SAN/NAS storage server)
A lot of ESXi users keep their VMs on a dedicated shared SAN storage server. For high performance, low latency and reliability, a redundant FC or IP network connects the ESXi hosts to it, together with a second SAN storage server for redundancy via clustering/high availability. But quite often you do not need such a complex and expensive high-end solution. You only want shared storage with high-end SAN features, high performance and low latency between ESXi and storage. Your ESXi servers are not working to capacity; you have enough CPU and especially RAM left for another VM.
This is where All-In-One can be the solution. All-In-One means integrating the two functions, ESXi server and NAS/SAN storage server, into one box and connecting them in software. You can reach nearly the same performance as with two dedicated servers (same RAM on the storage server) connected via a high-end 10 GbE network. Usually you keep all VMs on their 'local' SAN storage. Each All-In-One works completely independently of the others (the SAN is no longer a single point of failure) while you keep all options like flexible storage allocation, booting a VM from another SAN, and moving, cloning and backing up between your SANs.
If you have enough free disk bays in your All-In-Ones (highly recommended), you can even move a pool physically to another All-In-One in case of problems or planned updates, import the pool there and start the VMs on or from that box.
Requirements:
- An ESXi server that is not loaded to capacity
- A mainboard with hardware virtualization VT-d (Intel) or IOMMU (AMD) for high-performance storage access
- Two independent storage controllers (example: onboard SATA + a SAS controller)
- The ability to handle some extra complexity regarding updates of ESXi or the storage VM
Setup All-In-One with OmniOS/OpenIndiana on a ZFS mirror
- Verify that your mainboard + BIOS + CPU support VT-d (best: a mainboard with an Intel server chipset and a Xeon)
- Set all onboard SATA ports to AHCI and enable VT-d in the BIOS settings
- Disable Active State Power Management in the BIOS settings (can cause problems, e.g. on SuperMicro boards)
- Insert a second SAS controller like an LSI 9211 or an IBM M1015 flashed with IT firmware
- Add a boot disk to onboard SATA (best: a 40+ GB SSD);
use a second SATA disk (same size or larger) with a second ESXi datastore for the ZFS mirror of OmniOS/OI
- Install ESXi 5.1 on your first SATA boot disk (an option here is a USB stick, much slower on booting
but adding some extra flexibility). If you need ESXi to be fast and independent, use a third SATA disk (a small SSD).
- Connect to your ESXi box from a Windows machine via browser: https://<ip of your box>
- Install the vSphere Client on Windows and connect to your ESXi box via vSphere
- Enable pass-through within ESXi for your SAS controller,
create a second datastore on your second SATA disk, then reboot
- Use the ESXi file browser to upload the OmniOS or OpenIndiana live ISO image to your local ESXi datastore
- Create a new VM (Solaris 10 64-bit, min. 20 GB system disk, 4 GB RAM or more: the more, the faster; single core, with ESXi 5 dual core;
VMCI enabled; the SAS PCI adapter added via pass-through; DVD connected to your uploaded OmniOS/OpenIndiana ISO)
- Add a second virtual disk on the second datastore (second physical disk) with the same size as your first system disk
- Start the VM and install OmniOS or OpenIndiana + napp-it (the same as on real hardware, see the NAS/SAN setup);
set up ESXi autostart so that this VM always starts first
- Mirror the OmniOS/OI boot disk with the napp-it menu Disk - Mirror bootdisk (see the sketch below).
Set the VM BIOS to boot from the second disk first and the first disk (the one on the ESXi boot disk's datastore) second.
You can then replace/update the ESXi boot disk without a problem for OmniOS/OI and re-mirror OmniOS/OI afterwards.
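Behind the napp-it menu this is a simple pool attach. A minimal sketch of the equivalent manual commands on a grub-based OmniOS/OI release, assuming example device names (check your own with 'format'):

  # show the current root pool layout (rpool is the default name)
  zpool status rpool
  # attach the second virtual disk to the existing boot disk -> 2-way mirror
  zpool attach rpool c2t0d0s0 c2t1d0s0
  # make the new mirror half bootable (grub-based illumos releases)
  installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c2t1d0s0

Wait until 'zpool status rpool' reports the resilver as finished before relying on the mirror.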
- Install the VMware Tools (you can use the tools included with ESXi and start the perl installer, see the sketch below).
You can then use the VMware VMXNET3 network adapter, which can improve network performance.
(Some have reported problems with VMXNET3; in that case use E1000g, which can also provide several Gbit/s.)
For OmniOS you need ESXi 5.1 with the newest patches, read http://napp-it.org/doc/ESXi-OmniOS Installation HOWTO en.pdf
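A minimal sketch of the perl installer run inside the storage VM, assuming you have already copied the Solaris tools archive from the ESXi-included tools ISO to /tmp (the exact filename can differ per ESXi release):

  # unpack the tools archive shipped on the ESXi tools ISO
  # (gzcat | tar avoids relying on GNU tar's -z option)
  cd /tmp
  gzcat vmware-solaris-tools.tar.gz | tar xf -
  # run the interactive perl installer; the defaults are usually fine
  cd vmware-tools-distrib
  ./vmware-install.pl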
- Share NAS storage (use SMB for Windows-compatible file sharing)
- Share SAN storage (use NFS); share this dataset also via SMB for easy access to snapshots, clones and backups (see the sketch below)
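napp-it sets these shares from its ZFS filesystem menu; a minimal sketch of the underlying ZFS commands (the pool name 'tank' and the dataset names are examples; depending on your network you may also need to grant the ESXi host root access in the sharenfs options):

  # NAS dataset, shared via SMB for Windows clients
  zfs create tank/data
  zfs set sharesmb=on tank/data
  # SAN dataset for ESXi, shared via NFS and additionally via SMB
  zfs create tank/vm
  zfs set sharenfs=on tank/vm
  zfs set sharesmb=on tank/vm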
- In the ESXi settings, add shared NFS storage and connect the NFS SAN share (see the esxcli sketch below)
- Create new VMs on this NFS datastore
If you reboot ESXi, be aware of some delay until these VMs are booted (ESXi must wait until the storage VM is up),
but they connect/come up automatically via NFS when you enable autostart for these VMs.
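Instead of the vSphere GUI you can also mount the NFS share from the ESXi shell. A minimal sketch for ESXi 5.x, with example IP, share path and datastore name:

  # mount the NFS export of the storage VM as an ESXi datastore
  esxcli storage nfs add --host=192.168.1.10 --share=/tank/vm --volume-name=nfs-san
  # verify the mount
  esxcli storage nfs list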
Optimal ZFS Pool layout for ESXi datastores
- With several VMs you have a lot of concurrent small reads and writes. For good performance with such a workload you need good I/O values. Best is to build a pool from mirrored vdevs (2-way mirrors, or 3-way mirrors for extra security/performance), as sketched below. Avoid RAID-Z configs: they may have good sequential performance, but their I/O is the same as that of one disk (all heads must be positioned on every read/write).
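A minimal sketch of such a pool built from mirrored vdevs (pool and disk names are examples; take your own from the napp-it Disks menu or 'format'):

  # pool of two 2-way mirrors: I/O scales with the number of vdevs
  zpool create tank mirror c3t0d0 c3t1d0 mirror c3t2d0 c3t3d0
  # the pool can later be grown by adding another mirror vdev
  zpool add tank mirror c3t4d0 c3t5d0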
- When ESXi writes data to an NFS datastore, it requests sync writes for security reasons. The default setting of ZFS is to honor this and do sync writes only. This is a very secure default, but it can lower performance dramatically compared to normal writes. Sometimes regular writes are 100x faster than sync writes, where each single write must be completed and committed immediately before the next one can occur (very heavy I/O with small data, bad for every file system).
You now have two options (see the sketch below):
1. Ignore the sync write demands (= disable the sync property on your NFS-shared dataset), with the effect of possible data loss on power loss.
2. Add an extra ZIL device (Slog) to log all sync writes. The data can then be written to disk sequentially at full speed like normal writes.
If you add a ZIL device, you must use one with high write values and low latency. SSDs are usually bad at this; best are DRAM-based ZIL drives
like a ZeusRAM or a DDRdrive. Sad to say, they are really expensive, but a good SSD can help a little.
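Both options map to one-line commands; a minimal sketch (dataset, pool and device names are examples):

  # option 1: ignore ESXi sync write requests on the NFS dataset
  # (fast, but the last seconds of writes can be lost on power loss)
  zfs set sync=disabled tank/vm
  # option 2: keep sync=standard and add a dedicated log device (Slog)
  zpool add tank log c3t6d0
  # verify the current setting
  zfs get sync tank/vm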
- Add pools built from ZFS RAID-Z1-3 vdevs for backup or if you need an SMB filer (a sketch follows below).
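A minimal sketch of such a backup pool from a single RAID-Z2 vdev (names are examples):

  # capacity-optimized backup pool: good sequential rates, low IOPS
  zpool create backup raidz2 c4t0d0 c4t1d0 c4t2d0 c4t3d0 c4t4d0 c4t5d0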