Home
  • ready-to-use and comfortable ZFS storage appliance for iSCSI/FC, NFS and SMB
  • Active Directory support with snapshots as Windows Previous Versions
  • user-friendly Web-GUI that includes all functions for a sophisticated NAS or SAN appliance
  • commercial use allowed
  • no capacity limit
  • free download for end users


  • individual support and consulting
  • increased GUI performance / background agents
  • bug fixes / updates / access to bug fixes
  • extensions such as comfortable ACL handling, disk and real-time monitoring, or remote replication
  • appliance disk map, security and tuning (Pro complete)
  • optional redistribution/bundling/setup on customer demand
Please request a quotation.
Details: Featuresheet.pdf
All-In-One Server

Concept
and how to set up All-In-One (please read first!)

VMware ESXi server + separate ZFS NAS/SAN server concept with AFP, SMB and NFS shares, connected via a LAN/SAN/VLAN network,
but not with two physical servers and a hardware switch: everything is realized in one physical box with free software.
  • VT-d capable server hardware with the bare-metal Type 1 hypervisor VMware ESXi 4.1
  • virtual network switch within ESXi (LAN, SAN, VLANs)
  • virtualized ZFS storage SAN OS on top
  • ZFS data pools managed via PCI passthrough within the ZFS OS and shared back to ESXi via NFS
  • WWW shares, NAS shares (AFP, SMB) and SAN shares (NFS, iSCSI), managed by the ZFS OS
Howto
  • use a VT-d capable mainboard/server (best are the current Intel server chipsets 3420 and 5520)
    see http://wiki.xensource.com/xenwiki/VTdHowTo
    for VT-d capable CPUs see http://ark.intel.com/VTList.aspx; Xeons are needed or at least the best choice
  • use a SATA boot disk for ESXi and your ZFS VM
  • add an LSI 1068 SAS or LSI 2008 storage adapter with IT firmware and manage the disks from the ZFS OS via passthrough (do not use RAID firmware); a quick check is shown after this list
  • all other VMs are stored on NFS shared from the ZFS OS
  • add as much RAM as possible, ECC is always recommended for server use
  • keep it simple!
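
After the controller is passed through, it is worth verifying that the ZFS OS really sees the physical disks. A minimal check, assuming a Solaris-based ZFS OS such as NexentaCore or OpenIndiana (device names will differ on your system):

    # list all disks visible to the ZFS OS; the disks on the LSI controller should appear here
    format < /dev/null
    # show per-disk error counters, vendor and model strings
    iostat -En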

I prefer the following setup:

  • use a 2 x 2.5" 160 GB hardware RAID-1 enclosure and attach it to onboard SATA (Raidsonic SR2760-2S-S2B)
  • install ESXi on this hardware RAID-1 and use the remaining space as a local datastore
  • set up the software switch in ESXi, create the needed LANs, SAN and VLANs
  • enable VT-d (PCI passthrough) in the mainboard BIOS
  • enable passthrough in ESXi, reboot and set the SAS 1068 or 2008 controller to passthrough in ESXi
    see http://www.servethehome.com/configure-passthrough-vmdirectpath-vmware-esxi-raid-hba-usb-drive/
  • upload the NexentaCore ISO image to the local datastore
  • create a NexentaCore/OI/SE11 VM on the local datastore, best with 4 GB+ RAM and a virtual disk > 4 GB;
    add your 1068E/2008 as a PCI adapter with passthrough in the VM settings
  • use only 2 vCPUs for the SAN server on ESXi 4.1 or you may have boot problems!
  • add napp-it and VMware Tools and set this VM to autostart first
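
The napp-it Web-GUI itself is installed from within the running storage VM with the online installer (run as root; this is the usual napp-it installer call, the URL may change over time). Afterwards the GUI should be reachable on port 81 of the VM:

    # download and run the napp-it online installer (NexentaCore/OI/SE11)
    wget -O - www.napp-it.org/nappit | perl
    # then open http://<ip-of-storage-vm>:81 in a browser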

All disks connected to your 1068/2008 are now managed by Nexenta.

  • create a pool from these disks and share it via NFS
  • share it also via CIFS to have a simple way of move/copy/backup/snapshot access via Windows Previous Versions
  • create a datastore from this NFS share (mind the permissions)
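
A minimal command-line sketch of these three steps, assuming six disks c3t0d0-c3t5d0 on the passed-through controller, a filesystem pool4/nfs and a storage-VM IP of 192.168.1.10 (all names and addresses are examples, adapt to your setup):

    # on the storage VM: create a pool of three mirrors and a filesystem
    zpool create pool4 mirror c3t0d0 c3t1d0 mirror c3t2d0 c3t3d0 mirror c3t4d0 c3t5d0
    zfs create pool4/nfs
    # share it via NFS (for ESXi) and via the Solaris kernel CIFS server (for Previous Versions)
    zfs set sharenfs=on pool4/nfs
    zfs set sharesmb=on pool4/nfs
    # on the ESXi console: add the NFS share as a datastore
    esxcfg-nas -a -o 192.168.1.10 -s /pool4/nfs nfs-datastore

ESXi needs root access to the NFS share, so check the NFS and filesystem permissions if the datastore cannot be created.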

Create other VMs:

  • store them all on one NFS datastore if they are small and access snaps for clone/backup from the snapshot folder
  • store each on its own NFS datastore if they need a large data drive and access snaps via snapshot rollback
  • if you need hot snaps without paying money, take a snapshot within ESXi, then a ZFS snapshot on Nexenta, and delete the ESXi snapshot (see the sketch after this list)
  • in case of problems, you can roll back such a ZFS snapshot and restore this hot-running state within ESXi
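
A sketch of this hot-snap sequence with the standard command-line tools (the VM id 10 and all snapshot names are examples; vim-cmd runs on the ESXi console, the zfs commands inside the storage VM):

    # ESXi: look up the VM id, then take a hot snapshot including memory
    vim-cmd vmsvc/getallvms
    vim-cmd vmsvc/snapshot.create 10 hotsnap "before zfs snap" 1 0
    # storage VM: snapshot the filesystem that holds the VM while the ESXi snapshot exists
    zfs snapshot pool4/nfs@hotsnap1
    # ESXi: delete the ESXi snapshot again; the hot state is preserved inside the ZFS snapshot
    vim-cmd vmsvc/snapshot.removeall 10
    # in case of problems: roll back the ZFS snapshot, then revert to the ESXi snapshot
    # that reappears with the rolled-back datastore
    zfs rollback pool4/nfs@hotsnap1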

For test installations, you could install ESXi and your ZFS OS on a good SLC USB 3 stick and use your SATA controller for passthrough.

Hardware
My mainboards:
  • Supermicro X8SIL-F (3 x PCIe x8 slots, VGA, microATX, Xeon) (500+ Euro server)
  • or Supermicro X8DTH-6F, needs BIOS update >= 1.1 (7 x PCIe x8 slots, VGA, SAS-II, dual Xeon) (1000+ Euro server)
  • update the firmware of the SAS2 controller to IT mode (ftp://ftp.supermicro.com/driver/SAS/LSI/2008/IT/Firmware/); a flashing sketch follows after this list
  • 1-2 quad-core CPUs, 8 GB RAM minimum, best 32 GB+ ECC
  • SATA boot drive (2 x 2.5" 160 GB 24/7 WD) in a hardware RAID-1 enclosure Raidsonic SR2760-2S-S2B
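
Flashing the controller to IT firmware is typically done from a DOS or UEFI boot stick with the LSI sas2flsh/sas2flash tool from the firmware package linked above (file names differ per package and are only examples here; note the controller's SAS address before erasing):

    rem erase the existing (IR/RAID) firmware, then flash IT firmware and boot BIOS
    sas2flsh -o -e 6
    sas2flsh -o -f 2008it.bin -b mptsas2.rom
    rem restore the original SAS address noted before (placeholder value)
    sas2flsh -o -sasadd 500605bxxxxxxxxx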
Software
  • VMware ESXi 4.1 (free)
  • NexentaCore, OpenIndiana or Solaris Express 11 + napp-it (free)
State of Project

The VMware server is running stable with virtualized NexentaCore and PCI passthrough to the SAS adapter.
Nexenta, OI or SE11 is started automatically; other virtual servers have to be started with a delay.

The high-speed VMXNET3 network driver in ESXi for better internal network performance is only available in OI and SE11;
it is not available in Nexenta via apt-get install vmware-tools.


HD performance of a virtualized ZFS server: Supermicro X8DTH-6F, 2 x Xeon quad-core, 24 GB RAM,
SAS-II PCI passthrough to Nexenta ZFS pools

SAS controller LSI 1068E (PCI passthrough to Nexenta ZFS pools)
Pool of 6 x WD Raptor 300 GB:

NAME        STATE   READ WRITE CKSUM
pool4       ONLINE     0     0     0
  mirror-0  ONLINE     0     0     0
    c3t0d0  ONLINE     0     0     0
    c3t1d0  ONLINE     0     0     0
  mirror-1  ONLINE     0     0     0
    c3t2d0  ONLINE     0     0     0
    c3t3d0  ONLINE     0     0     0
  mirror-2  ONLINE     0     0     0
    c3t4d0  ONLINE     0     0     0
    c3t5d0  ONLINE     0     0     0


Bonnie++ reports via napp-it:
Seq-Out (Char): 178 MB/s
Seq-Out (Block): 221 MB/s
Seq-Out (Rewrite): 159 MB/s
Seq-In (Char): 198 MB/s
Seq-In (Block): 557 MB/s
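
These values come from the napp-it benchmark menu; a roughly comparable manual run on the pool would be (bonnie++ must be installed; the test size should be well above the RAM size to avoid cache effects, all parameters are examples):

    bonnie++ -d /pool4 -u root -s 48g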

New config (only 4 x Solidata K8 SSD 120 GB with SandForce 1200 controller):
NAME        STATE   READ WRITE CKSUM
pool4       ONLINE     0     0     0
  mirror-0  ONLINE     0     0     0
    c3t0d0  ONLINE     0     0     0
    c3t1d0  ONLINE     0     0     0
  mirror-1  ONLINE     0     0     0
    c3t2d0  ONLINE     0     0     0
    c3t3d0  ONLINE     0     0     0

Seq-Out (Char): 172 MB/s
Seq-Out (Block): 327 MB/s
Seq-Out (Rewrite): 195 MB/s
Seq-In (Char): 196 MB/s
Seq-In (Block): 835 MB/s
napp-it 27.12.2023