Virtual Disk Transport Methods
VMware supports file-based or image-level backups of virtual machines hosted on an ESXi host with SAN or NAS storage. Backup applications can read data directly from a shared VMFS LUN, so backups are efficient and do not put significant load on production ESXi hosts or the virtual network.
VMware offers interfaces for integration of storage-aware applications, including backup, with efficient access to storage clusters. Developers can use VDDK advanced transports, which provide efficient I/O methods to maximize backup performance. VMware supports four access methods: local file, NBD (network block device) over LAN with SSL encryption (NBDSSL), SAN, and SCSI HotAdd.
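As a quick check, an application can ask the library which transport modes its installation supports. The following minimal C sketch assumes the VDDK header is available and that VixDiskLib_InitEx() has already been called; the output format shown in the comment is illustrative.

    #include <stdio.h>
    #include "vixDiskLib.h"

    /* Print the transport modes available in this VDDK installation.
       Assumes VixDiskLib_InitEx() has already succeeded. */
    void printTransportModes(void)
    {
        /* Returns a colon-separated list, for example "file:san:hotadd:nbdssl:nbd". */
        const char *modes = VixDiskLib_ListTransportModes();
        printf("Available transport modes: %s\n", modes);
    }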
Local File Access
The virtual disk library can read virtual disk data from /vmfs/volumes on ESXi hosts, or from the local file system on hosted products. This file access method is built into VixDiskLib, so it is always available on local storage. However, it is not a network transport method and is seldom used for vSphere backup.
SAN Transport
SAN mode requires applications to run on a backup server with access to SAN storage (Fibre Channel, iSCSI, or SAS connected) containing the virtual disks to be accessed. As shown in SAN Transport Mode for Virtual Disk, this method is efficient because no data needs to be transferred through the production ESXi host. A SAN backup proxy must be a physical machine. If it has optical media or a tape drive connected, backups can be made entirely LAN-free.
SAN Transport Mode for Virtual Disk
In SAN transport mode, the virtual disk library obtains information from an ESXi host about the layout of VMFS LUNs, and using this information, reads data directly from the storage LUN where a virtual disk resides. This is the fastest transport method for software deployed on a SAN-connected physical backup server.
In general, SAN transport works with any storage device that appears at the driver level as a LUN (as opposed to a file system such as NTFS or EXT). SAN mode must be able to access the LUN as a raw device; the key requirement is that the device behave like a direct raw connection to the underlying LUN. SAN transport is supported on Fibre Channel, iSCSI, and SAS (serial attached SCSI) based storage arrays. SAN storage devices can contain SATA drives, but currently there are no SATA connected SAN devices on the VMware hardware compatibility list.
SAN transport is not supported for backup or restore of virtual machines residing on VVol datastores.
VMware Virtual SAN (vSAN), a network based storage solution with direct attached disks, does not support SAN transport. Because vSAN uses storage modes that are incompatible with SAN transport, the virtual disk library disables SAN mode if it detects the presence of vSAN. Other advanced transports do work.
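As an illustration, the sketch below connects with "san" as the only requested transport mode and reads the first sectors of a snapshot disk. The server name, credentials, managed object references, and disk path are placeholders; a real application obtains them from its configuration and from the vSphere API, and would add full error handling.

    #include "vixDiskLib.h"

    /* Sketch: read the start of a snapshot disk over SAN transport. */
    VixError readOverSan(void)
    {
        VixDiskLibConnectParams params = {0};
        VixDiskLibConnection conn = NULL;
        VixDiskLibHandle disk = NULL;
        uint8 buf[16 * VIXDISKLIB_SECTOR_SIZE];
        VixError err;

        params.vmxSpec = "moref=vm-42";               /* target VM (placeholder) */
        params.serverName = "vcenter.example.com";    /* placeholder */
        params.credType = VIXDISKLIB_CRED_UID;
        params.creds.uid.userName = "backup-user";    /* placeholder */
        params.creds.uid.password = "secret";         /* placeholder */
        params.port = 443;

        /* Requesting only "san" means no fallback to other transports. */
        err = VixDiskLib_ConnectEx(&params, TRUE /* read only */,
                                   "snapshot-99" /* snapshot moref, placeholder */,
                                   "san", &conn);
        if (VIX_FAILED(err)) {
            return err;
        }
        err = VixDiskLib_Open(conn, "[datastore1] vm/vm.vmdk",
                              VIXDISKLIB_FLAG_OPEN_READ_ONLY, &disk);
        if (VIX_SUCCEEDED(err)) {
            err = VixDiskLib_Read(disk, 0, 16, buf);  /* sectors 0 through 15 */
            VixDiskLib_Close(disk);
        }
        VixDiskLib_Disconnect(conn);
        return err;
    }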
HotAdd Transport
HotAdd is a VMware feature that allows devices to be added “hot” while a virtual machine is running. Besides SCSI disks, a running virtual machine can also add CPUs and memory capacity.
If backup software runs in a virtual appliance, it can take a snapshot and create a linked clone of the target virtual machine, then attach and read the linked clone’s virtual disks for backup. This involves a SCSI HotAdd on the ESXi host where the backup proxy runs: virtual disks of the linked clone are HotAdded to the backup proxy. The target virtual machine continues to run during backup.
VixTransport handles the temporary linked clone and hot attachment of virtual disks. VixDiskLib opens and reads the HotAdded disks as a “whole disk” VMDK (virtual disk on the local host). This strategy works only on virtual machines with SCSI disks; HotAdd is not supported for backing up virtual IDE disks. HotAdd transport also works with virtual machines stored on NFS datastores.
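In API terms, HotAdd uses the same connection calls as the other advanced transports. A hedged sketch, reusing the placeholder connection parameters from the SAN sketch above: an application running inside the proxy VM requests "hotadd" as the transport mode, and must supply the snapshot reference so the library can attach the snapshot disks.

    /* Sketch: connect for HotAdd transport from inside a backup proxy VM,
       reusing the placeholder params from the SAN sketch. The snapshot
       moref is required so the library can HotAdd the snapshot's disks
       to this proxy, where they are then read like local disks. */
    VixDiskLibConnection conn = NULL;
    VixError err = VixDiskLib_ConnectEx(&params, TRUE /* read only */,
                                        "snapshot-99" /* placeholder */,
                                        "hotadd", &conn);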
HotAdd is a good way to get virtual disk data from a virtual machine to a backup appliance (or backup proxy) for sending to the media server. The attached HotAdd disk is shown in HotAdd Transport Mode for Virtual Disk.
HotAdd Transport Mode for Virtual Disk
Running the backup proxy as a virtual machine has two advantages: it is easy to move a virtual machine to a new media server, and it can back up local storage without using the LAN, although this incurs more overhead on the physical ESXi host than when using SAN transport mode.
About the HotAdd Proxy
The HotAdd backup proxy must be a virtual machine. HotAdd involves attaching a virtual disk to the backup proxy, like attaching a disk to any virtual machine. In typical implementations, a HotAdd proxy backs up either Windows or Linux virtual machines, but not both. For parallel backup, sites can deploy multiple proxies.
The HotAdd proxy must have access to the same datastore as the target virtual machine, and the VMFS version and data block sizes for the target VM must be the same as the datastore where the HotAdd proxy resides.
If the HotAdd proxy is a virtual machine that resides on a VMFS-3 volume, choose a volume with block size appropriate for the maximum virtual disk size of virtual machines that customers want to back up, as shown in VMFS-3 Block Size for HotAdd Backup Proxy. This caveat does not apply to VMFS-5 volumes, which always have 1MB file block size.
NBDSSL Transport
When no other transport is available, networked storage applications can use LAN transport for data access, with the NBD (network block device) protocol and SSL encryption, called NBDSSL. NBD, originally a Linux kernel module, treats storage on a remote host as a local block device. NBDSSL is the VMware variant that uses SSL to encrypt all data passed over the TCP connection. The NBDSSL transport method is built into the virtual disk library, so it is always available; VMware libraries and backup applications fall back to NBDSSL when no other transport method is available.
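Applications typically express this fallback by passing an ordered preference list when connecting, and then checking which mode the library actually negotiated at open time. A minimal sketch, again assuming the placeholder parameters from the SAN example:

    /* Sketch: prefer HotAdd, then SAN, then NBDSSL. The library tries
       each requested mode in order and falls back automatically. */
    VixDiskLibConnection conn = NULL;
    VixDiskLibHandle disk = NULL;

    if (VIX_SUCCEEDED(VixDiskLib_ConnectEx(&params, TRUE, "snapshot-99",
                                           "hotadd:san:nbdssl", &conn)) &&
        VIX_SUCCEEDED(VixDiskLib_Open(conn, "[datastore1] vm/vm.vmdk",
                                      VIXDISKLIB_FLAG_OPEN_READ_ONLY, &disk))) {
        /* Reports the mode actually in use, for example "nbdssl" when
           the preferred modes are unavailable. */
        printf("Effective transport: %s\n", VixDiskLib_GetTransportMode(disk));
    }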
LAN (NBDSSL) Transport Mode for Virtual Disk
In this mode, the ESXi host reads data from storage and sends it across a network to the backup server. With LAN transport, large virtual disks can take a long time to transmit. This transport mode adds traffic to the LAN, unlike SAN and HotAdd transport, but it requires no dedicated backup infrastructure and works with any datastore visible to the ESXi host. Host selection for NBDSSL works as follows:
When VDDK opens a non-snapshot disk for NBDSSL transfer (read-only or read/write), it selects the ESXi host where the disk’s virtual machine currently resides.
However, when VDDK opens a snapshot for NBDSSL transfer (the common backup case), VDDK passes the datastore to vCenter Server, which consults its list of ESXi hosts with access to that datastore and picks the first host with read/write access. The list of hosts is unordered, so the host chosen for NBDSSL transfer of the snapshot is not necessarily the ESXi host where the snapshot’s virtual machine resides.
NBDSSL Performance
When reading disk data using NBDSSL transport, VDDK makes synchronous calls: it requests a block of data and waits for a response. The block is read from disk and copied into a buffer on the ESXi host, then sent over the network. While the host reads and buffers each block, no data moves over the network, which adds to wait time. To some extent, you can overcome this limitation by using multiple streams to read simultaneously from a single disk or from multiple disks, taking advantage of parallelism.
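The sketch below illustrates one way to parallelize, under the assumption (consistent with VDDK threading guidance) that each stream gets its own connection and disk handle, and that VixDiskLib_InitEx() was called once for the process. Connection parameters, morefs, and the disk path are placeholders as in the earlier sketches.

    #include <pthread.h>
    #include "vixDiskLib.h"

    /* Placeholder connection parameters, filled in as in the SAN sketch. */
    static VixDiskLibConnectParams params;

    typedef struct {
        VixDiskLibSectorType start;   /* first sector of this stream's range */
        VixDiskLibSectorType count;   /* sectors to read, multiple of 128 */
    } StreamRange;

    /* Each thread opens the disk on its own connection, because a disk
       handle should not be shared across threads. Note that each open
       consumes an NFC session (see NFC Session Limits below). */
    static void *readStream(void *arg)
    {
        StreamRange *r = (StreamRange *)arg;
        VixDiskLibConnection conn = NULL;
        VixDiskLibHandle disk = NULL;
        uint8 buf[128 * VIXDISKLIB_SECTOR_SIZE];
        VixDiskLibSectorType pos;

        if (VIX_FAILED(VixDiskLib_ConnectEx(&params, TRUE, "snapshot-99",
                                            "nbdssl", &conn))) {
            return NULL;
        }
        if (VIX_SUCCEEDED(VixDiskLib_Open(conn, "[datastore1] vm/vm.vmdk",
                                          VIXDISKLIB_FLAG_OPEN_READ_ONLY,
                                          &disk))) {
            for (pos = r->start; pos < r->start + r->count; pos += 128) {
                VixDiskLib_Read(disk, pos, 128, buf);  /* 64 KB per request */
            }
            VixDiskLib_Close(disk);
        }
        VixDiskLib_Disconnect(conn);
        return NULL;
    }

    /* Launch two streams over disjoint halves of the first 2 GB. */
    void readInParallel(void)
    {
        pthread_t t1, t2;
        StreamRange r1 = { 0,       2097152 };  /* first 1 GB of sectors */
        StreamRange r2 = { 2097152, 2097152 };  /* next 1 GB */

        pthread_create(&t1, NULL, readStream, &r1);
        pthread_create(&t2, NULL, readStream, &r2);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
    }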
As of vSphere 6.5, NBDSSL performance can be significantly improved using data compression. Three types of compression are available (zlib, fastlz, and skipz), specified as flags when opening virtual disks with the VixDiskLib_Open() call. See Open a Local or Remote Disk.
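For example, a backup application might request zlib compression when opening the disk; the FASTLZ and SKIPZ flags are used the same way. A minimal sketch, assuming a VDDK 6.5 or later installation and an existing NBDSSL connection conn:

    /* Sketch: request zlib-compressed NBDSSL transfers at open time. */
    VixDiskLibHandle disk = NULL;
    VixError err = VixDiskLib_Open(conn, "[datastore1] vm/vm.vmdk",
                                   VIXDISKLIB_FLAG_OPEN_READ_ONLY |
                                   VIXDISKLIB_FLAG_OPEN_COMPRESSION_ZLIB,
                                   &disk);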
NFC Session Limits
NBDSSL employs the network file copy (NFC) protocol. NFC Session Connection Limits shows limits on the number of connections for various host types. These are host limits, not per-process limits. Additionally, vCenter Server imposes a limit of 52 connections. VixDiskLib_Open() uses one connection for every virtual disk that it accesses on an ESXi host. Cloning with VixDiskLib_Clone() also requires a connection. It is not possible to share a connection across virtual disks. These NFC session limits do not apply to SAN or HotAdd transport.
SSL Certificates and Security
The VDDK 5.1 release and later were security hardened, with programs set to check SSL certificates.
On Windows, VDDK 5.1 and 5.5 required the VerifySSLCertificates and InstallPath registry keys under HKEY_LOCAL_MACHINE\SOFTWARE to check SSL certificates. On Linux, VDDK 5.1 and 5.5 required adding a line to the VixDiskLib_InitEx configuration file, setting linuxSSL.verifyCertificates = 1.
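As a historical illustration (this setting has no effect in VDDK 6.0 and later, as noted below), the Linux entry went into the configuration file passed as the last argument of VixDiskLib_InitEx(); the file path here is a placeholder:

    # Placeholder path /etc/vddk.conf, passed as the configFile
    # argument of VixDiskLib_InitEx()
    linuxSSL.verifyCertificates = 1

and the matching initialization call:

    /* NULL log handlers fall back to the library defaults. */
    VixError err = VixDiskLib_InitEx(VIXDISKLIB_VERSION_MAJOR,
                                     VIXDISKLIB_VERSION_MINOR,
                                     NULL, NULL, NULL,
                                     "/usr/lib/vmware-vix-disklib",  /* placeholder */
                                     "/etc/vddk.conf");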
As of VDDK 6.0, both SSL certificate verification and SSL thumbprint checking are mandatory and cannot be disabled. The Windows registry keys and the Linux SSL setting are no longer checked, so neither has any effect.
Specifically, VDDK 6.0 and later use X.509 certificates with TLS cryptography, replacing SSLv3.
The following library functions enforce SSL certificate checking: InitEx, PrepareForAccess, EndAccess, GetNfcTicket, and the GetRpcConnection interface that is used by all advanced transports. SSL verification may use thumbprints to check if two certificates are the same. The vSphere thumbprint is a cryptographic hash of a certificate obtained from a trusted source such as vCenter Server, and passed in the SSLVerifyParam structure of the NFC ticket.
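In practice, an application places the expected thumbprint in the connection parameters before connecting; the hash below is a placeholder, not a real certificate thumbprint:

    /* Sketch: supply the expected certificate thumbprint, obtained from
       a trusted source such as vCenter Server. Placeholder value shown. */
    params.thumbPrint = "A1:B2:C3:D4:E5:F6:07:18:29:3A:4B:5C:6D:7E:8F:90:A1:B2:C3:D4";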