napp-it Free
  • Ready-to-use and comfortable ZFS storage appliance for iSCSI/FC, NFS and SMB
  • Active Directory support with snaps exposed as Windows Previous Versions
  • User-friendly web GUI that includes all functions for a sophisticated NAS or SAN appliance
  • Commercial use allowed
  • No capacity limit
  • Free download for end users


napp-it Pro
  • Individual support and consulting
  • Increased GUI performance (background agents)
  • Bug fixes and updates (access to bug-fix releases)
  • Extensions like comfortable ACL handling, disk and realtime monitoring, or remote replication
  • Appliance disk map, security and tuning (Pro complete)
  • Optional redistribution/bundling/setup on customer demand
Please request a quotation.
Details: Featuresheet.pdf

Async high-speed network replication (Solarish and Linux)

  • Async replication between appliances (near realtime) with remote appliance management and monitoring
  • Based on ZFS send/receive and snapshots (see the sketch after this list)
  • After an initial full transfer, only modified data blocks are transferred
  • High-speed transport via netcat (buffered on OmniOS/OI)
    (unencrypted transfer, intended for secure LANs)
  • Replication always pulls data; you only need a key on the target server, not on the sources
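
As an illustration of the mechanism (not the napp-it job script): a manual ZFS send/receive over netcat could look like the sketch below. Hostnames, dataset and snap names and port 5001 are placeholders, and the exact nc options vary between netcat variants.

    # on the target: start the receiver first
    nc -l -p 5001 | zfs receive tank/backup/data

    # on the source: create a snap, then stream the complete filesystem
    zfs snapshot tank/data@repli_nr_1
    zfs send tank/data@repli_nr_1 | nc target-host 5001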


How to set up


    • You need a license key only on the target server (you can request evaluation keys)
    • To request a license key, you need the machine ID from menu Extensions > Get Machine ID
    • Register the machine ID key: copy/paste the whole key line into menu Extensions > Register, example:
      complete h:m123..4m90 - 20.06.2022::VcqmhqsmVsdcnetqsmVsTTDVsK

      Usage of a Pro license is restricted to the subscription time, or perpetual with an unlimited edition.

    • Group your appliances from the target system with menu Extensions > Appliance Group. Click on ++ add to add members to the group
    • Create a replication job with menu Jobs > Replicate > Create replication job
    • Start the job manually or timer-based
    • After the initial transfer (which can take some time), all following transfers copy only modified blocks
    • You can set up transfers down to every minute (near realtime)

    • If one of your servers is on an unsecure network like the Internet: build a secure VPN tunnel between the appliances
    • If you use a firewall with deep inspection: it may block netcat; set a firewall rule to allow port 81 and the replication ports (see the example below)
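
For illustration only: on a Linux appliance with iptables, rules like the following would allow the management port and a replication port. Port 57001 is a placeholder; use the ports your jobs are actually configured for.

    # allow napp-it web/remote control and one example replication port
    iptables -A INPUT -p tcp --dport 81 -j ACCEPT
    iptables -A INPUT -p tcp --dport 57001 -j ACCEPT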


    Use ZFS replication for

    • High-speed in-house replication on secure networks
    • External replication over VPN links with fixed IPs and a common DNS server, or manual host entries (see the example below)
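
A sketch of the manual host entries option, with placeholder addresses and names; the same entries must be present on both appliances so they can resolve each other without DNS.

    # append host entries on both appliances (addresses/names are placeholders)
    echo '192.0.2.5   nas-source' >> /etc/hosts
    echo '192.0.2.6   nas-target' >> /etc/hosts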


    How replication works

    • Unlike other sync mechanisms like rsync, which compare source and target files on every run, ZFS replication generates a simple data stream that transfers a ZFS snap. The initial send contains the whole filesystem; each following incremental send contains only the data blocks modified since the last run. To make this work, both sides always need an identical base snap from the last run (the target filesystem is rolled back to it prior to a send). Without this common base snap, a replication cannot continue. This is a serious restriction compared to methods like rsync, but unlike rsync, ZFS replication can keep two ZFS filesystems in sync even with open files, and even on a petabyte server under high load, down to a minute of delay (see the command sketch after this list).
    • On the first run, replication creates a source snap jobid.._nr_1 and transfers the complete dataset over a netcat high-speed connection.
      When the transfer completes successfully, a target snap jobid.._nr_1 is created.
    • The next replication run is incremental and based on this snap pair with the same number. A source snap, e.g. jobid.._nr_2, with the modified data blocks is created and transferred. When the transfer completes successfully, a target snap jobid.._nr_2 is created.
    • And so on. Only modified data blocks are transferred, providing near-realtime syncs when run every few minutes.
    • If a replication fails for whatever reason, you end up with a higher source than target snap number. This does not matter: a new source snap is recreated on the next run.
    • For single-filesystem replications, you can use the keep and hold job parameters to preserve target snaps.
      This does not work with recursive replications, as these always replicate the same snap state and delete all others.
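
Expressed as manual ZFS commands, an incremental run of this scheme might look like the following sketch. The job id 123, the datasets and the netcat transport are placeholders; the real snap names napp-it creates differ.

    # source side: new snap, then send only the blocks changed since the common base
    zfs snapshot tank/data@123_nr_2
    zfs send -i tank/data@123_nr_1 tank/data@123_nr_2 | nc target-host 5001

    # target side: receive -F rolls the filesystem back to the base snap first;
    # on success the target also has @123_nr_2 as the new common base
    nc -l -p 5001 | zfs receive -F tank/backup/data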


    In case of problems


    Basics

    • Check if basic communication/remote control (appliance group) via the web server on port 81 is working.
      Click on ZFS beside a hostname in menu Extensions > Appliance Group (this results in a ZFS listing or an error).
      Optionally delete/rebuild the group from the target machine.
    • Check group members in menu Extensions > Appliance Group for double entries. Especially when you rename a host, the then invalid old entry (hostname-IP) may remain. You can then delete this invalid group member in submenu "delete group members".
    • Check if you have a snap pair with the same highest jobid_nr_n number on source and target (see the check below this list).
      (If you have deleted these snaps, you must restart with an initial replication.)
    • Check if you have enough space on source and target (also check reservations and quota settings)
    • Check network settings (avoid redundant paths due to two NICs in the same subnet) and jumbo frames/MTU 9000.
      I have seen cases where jumbo frames work with small transfers (building a group) but hang with large transfers; disable jumbo frames then.
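
To verify the snap pair outside the GUI, you could list the job's snaps on both machines and compare the highest number; "123" is a placeholder for your job id.

    # run on source and on target; the highest matching nr must exist on both sides
    zfs list -t snapshot -o name,creation -s creation | grep 123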


    Actions

    • If the receiver and sender start but no data is transferred, check the receive logs in menu Jobs on the target machine
      - last execution: click on the date in the line of the job (shows details of the last run)
      - joblog: click on replicate in the line of the job (shows the last executions of this job)
    • Check the send log on the source machine
      - see menu Jobs > Remote log
    • If you have a former snap pair, delete the last target snap on the receiver side in case it got corrupted during the last transfer. Then retry the replication to check whether it works based on the former highest snap-pair number.
    • If you need to restart a replication and have enough space, rename the old target filesystem (and delete it after a successful new replication), then restart the replication with an initial run (as sketched after this list)
    • Use menu Jobs > Replicate > Monitor on both sides to monitor transfers
    • If you delete a replication job, you may need to delete the remaining snaps of this job manually (as sketched after this list)
    • Try a reboot of the source and target servers (in case another service is blocking a filesystem)
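
Hedged shell equivalents of two of the steps above; dataset names and the job id 123 are placeholders, and you should always inspect the zfs list output before destroying anything.

    # keep the old target filesystem while a new initial replication runs
    zfs rename tank/backup/data tank/backup/data_old

    # after deleting a job: list its leftover snaps, then remove them
    zfs list -H -t snapshot -o name | grep 123
    zfs list -H -t snapshot -o name | grep 123 | xargs -n1 zfs destroy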


    Support


    If you need help, send an email to support@napp-it.org with the following information:

    • OS release, e.g. OmniOS 151026
    • napp-it release, e.g. 18.03 Pro, April.02
    • napp-it Pro key


    Add information like:

    • job settings (recursive and other settings)
    • last log (screenshot)
    • job log  (screenshot)
    • whether you have checked the pool fill rate, verified the appliance group works (checked the remote ZFS listing) and tried a reboot
    • other information that can help
    napp-it 27.12.2023