Virtual Disk Transport Methods
VMware supports file-based or image-level backups of virtual machines hosted on an ESX/ESXi host with SAN or NAS storage. Backup applications can read virtual machine data directly from a shared VMFS LUN, so backups are efficient and place no significant load on production ESX/ESXi hosts or the virtual network.
VMware offers interfaces for integrating storage-aware applications, including backup, with efficient access to storage clusters. Developers can use the VDDK advanced transports, which provide efficient I/O methods that maximize backup performance. VMware supports five access methods: local file, NBD (network block device) over LAN, NBD with encryption (NBDSSL), SAN, and SCSI HotAdd.
Local File Access
The virtual disk library reads virtual disk data from /vmfs/volumes on ESX/ESXi hosts, or from the local file system on hosted products. This file access method is built into VixDiskLib, so it is always available for local storage. However, it is not a network transport method, and it is seldom used for vSphere backup.
SAN Transport
SAN mode requires the application to run on a backup server with access to the SAN storage (Fibre Channel, iSCSI, or SAS connected) containing the virtual disks to be accessed. As shown in SAN Transport Mode for Virtual Disk, this method is efficient because no data needs to be transferred through the production ESX/ESXi host. A SAN backup proxy must be a physical machine. If the proxy has an optical drive or tape drive attached, backups can be made entirely LAN-free.
SAN Transport Mode for Virtual Disk
In SAN transport mode, the virtual disk library obtains information from an ESX/ESXi host about the layout of VMFS LUNs and, using this information, reads data directly from the storage LUN where a virtual disk resides. This is the fastest transport method when the backup application runs on a SAN-connected backup server.
SAN storage devices can contain SATA drives, but currently no SATA-connected SAN devices appear on the VMware hardware compatibility list.
In general, SAN transport works with any storage device that appears at the driver level as a LUN, as opposed to a file system such as NTFS or EXT. The key requirement is that SAN mode can access the device as a direct raw connection to the underlying LUN. SAN transport is supported on Fibre Channel, iSCSI, and SAS (serial attached SCSI) based storage arrays.
VMware vSAN, a network-based storage solution with direct attached disks, does not support SAN transport. Because vSAN uses modes and methods that are incompatible with SAN transport, the virtual disk library disables SAN mode if it detects the presence of vSAN. The other advanced transports (HotAdd, NBD, and NBDSSL) do work.
HotAdd Transport
HotAdd is a VMware feature that allows devices to be added "hot" while a virtual machine is running. Besides SCSI disks, virtual machines can also hot-add CPUs and memory capacity.
If backup software runs in a virtual appliance, it can take a snapshot and create a linked clone of the target virtual machine, then attach and read the linked clone’s virtual disks for backup. This involves a SCSI HotAdd on the ESXi host where the target VM and backup proxy are running. Virtual disks of the linked clone are HotAdded to the backup proxy. The target virtual machine continues to run during backup.
VixTransport handles the temporary linked clone and hot attachment of virtual disks. VixDiskLib opens and reads the HotAdded disks as a “whole disk” VMDK (virtual disk on the local host). This strategy works only on virtual machines with SCSI disks and is not supported for backing up virtual IDE disks. HotAdd transport also works with virtual machines stored on NFS partitions.
HotAdd is a good way to get virtual disk data from a virtual machine to a backup appliance (or backup proxy) for sending to the media server. The attached HotAdd disk is shown in HotAdd Transport Mode for Virtual Disk.
HotAdd Transport Mode for Virtual Disk
Running the backup proxy as a virtual machine has two advantages: it is easy to move a virtual machine to a new media server, and it can back up local storage without using the LAN, although this incurs more overhead on the physical ESX/ESXi host than when using SAN transport mode.
About the HotAdd Proxy
The HotAdd backup proxy must be a virtual machine. HotAdd involves attaching a virtual disk to the backup proxy, just as a disk is attached to any virtual machine. In typical implementations, a HotAdd proxy backs up either Windows or Linux virtual machines, but not both. For parallel backup, sites can deploy multiple proxies.
The HotAdd proxy must have access to the same datastore as the target virtual machine, and the VMFS version and data block sizes for the target VM must be the same as the datastore where the HotAdd proxy resides.
If the HotAdd proxy is a virtual machine that resides on a VMFS-3 volume, choose a volume with block size appropriate for the maximum virtual disk size of virtual machines that customers want to back up, as shown in VMFS-3 Block Size for HotAdd Backup Proxy. This caveat does not apply to VMFS-5 volumes, which always have 1MB file block size.
NBD and NBDSSL Transport
When no other transport is available, networked storage applications can use LAN transport for data access, either NBD (network block device) or NBDSSL (encrypted). The name NBD comes from a Linux-style kernel module that treats storage on a remote host as a local block device; NBDSSL is similar but uses SSL to encrypt all data passed over the TCP connection. The NBD transport method is built into the virtual disk library, so it is always available, and VMware libraries and backup applications fall back to NBD when the other transport methods are unavailable.
LAN (NBD) Transport Mode for Virtual Disk
In this mode, the ESX/ESXi host reads data from storage and sends it across the network to the backup server. With LAN transport, large virtual disks can take a long time to transmit, and this transport mode adds traffic to the LAN, unlike SAN and HotAdd transport. NBD does have the advantage of working in any configuration: it requires no SAN connectivity and no virtual backup proxy, only network access from the backup server to the host.
SSL Certificates and Security
The VDDK 5.1 release has been security hardened, and applications can be configured to verify SSL certificates.
On Windows, the keys shown in Windows Registry Keys for VDDK are required at the following Windows registry path:
HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\VMware, Inc.\VMware Virtual Disk Development Kit
To support registry redirection, registry entries needed by VDDK on 64-bit Windows must be placed under registry path Wow6432Node. This is the correct location for both 32-bit and 64-bit binaries on 64-bit Windows.
On Linux, SSL certificate verification requires the use of thumbprints; there is no mechanism to validate an SSL certificate without a thumbprint. On vSphere, the thumbprint is a hash obtained from a trusted source, such as vCenter Server, and passed in the SSLVerifyParam structure from the NFC ticket. If you add the following line to the VixDiskLib_InitEx configuration file, Linux programs will check the SSL thumbprint:
vixDiskLib.linuxSSL.verifyCertificates = 1
The following library functions enforce SSL thumbprint checking on Linux: InitEx, PrepareForAccess, EndAccess, GetNfcTicket, and the GetRpcConnection interface used by the advanced transports.
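For context, the configuration file named above is an ordinary key = value text file whose path is passed as the configFile argument of VixDiskLib_InitEx(). A minimal sketch of such a file is shown below; only the verifyCertificates line is taken from this section, and the tmpDirectory path is an assumed example of another setting.

```
# Passed to VixDiskLib_InitEx() as the configFile argument.
vixDiskLib.linuxSSL.verifyCertificates = 1
tmpDirectory = "/tmp/vmware-vddk"
```

With this file in place, the functions listed above refuse connections whose SSL thumbprint does not match the one supplied by the trusted source.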
NFC Session Limits
NBD employs the VMware network file copy (NFC) protocol. NFC Session Connection Limits shows the limits on the number of network connections for various host types. VixDiskLib_Open() uses one connection for every virtual disk that it accesses on an ESX/ESXi host, and VixDiskLib_Clone() also requires a connection. Connections cannot be shared across disks. These are host limits, not per-process limits, and they do not apply to SAN or HotAdd transport.