Virtual Disk Transport Methods
VMware supports file-based or image-level backups of virtual machines hosted on an ESX/ESXi host with SAN or NAS storage. Because backup applications can read virtual machine data directly from a shared VMFS LUN, backups are efficient and do not put significant load on production ESX/ESXi hosts or the virtual network.
VMware offers interfaces for integration of storage-aware applications, including backup, with efficient access to storage clusters. Developers can use VDDK advanced transports, which provide efficient I/O methods to maximize backup performance. VMware supports five access methods: local file, NBD (network block device) over LAN, NBD with encryption (NBDSSL), SAN, and SCSI HotAdd.
Local File Access
The virtual disk library reads virtual disk data from /vmfs/volumes on ESX/ESXi hosts, or from the local filesystem on hosted products. This file access method is built into VixDiskLib, so it is always available on local storage. However, it is not a network transport method.
SAN Transport
SAN mode requires applications to run on a backup server with access to SAN storage (Fibre Channel, iSCSI, or SAS-connected) containing the virtual disks to be accessed. As shown in SAN Transport Mode for Virtual Disk, this is an efficient method because no data needs to be transferred through the production ESX/ESXi host. If the backup server is a physical machine with an optical media or tape drive connected, backups can be made entirely LAN-free.
SAN Transport Mode for Virtual Disk
In SAN transport mode, the virtual disk library obtains information from an ESX/ESXi host about the layout of VMFS LUNs and, using this information, reads data directly from the storage LUN where a virtual disk resides. This is the fastest transport method for backup software deployed on a SAN-connected backup server.
SAN storage devices can contain SATA drives, but currently there are no SATA connected SAN devices on the VMware hardware compatibility list.
HotAdd Transport
If your backup application runs in an appliance, it can create a linked-clone virtual machine starting from the backup snapshot and read the linked clone's virtual disks for backup. This involves a SCSI HotAdd on the host where the backup application is running: disks associated with the linked clone are HotAdded on the backup application's machine, also called the backup proxy.
HotAdd is a VMware feature where devices can be added "hot" while a virtual machine is running. Besides SCSI disks, running virtual machines can also hot-add CPUs and memory.
VixTransport handles the temporary linked clone and hot attachment of virtual disks. VixDiskLib opens and reads the HotAdded disks as a “whole disk” VMDK (virtual disk on the local host). This strategy works only on virtual machines with SCSI disks and is not supported for backing up virtual IDE disks.
SCSI HotAdd is a good way to get virtual disk data from a virtual machine to a backup virtual appliance (or backup proxy) for sending to the media server. The HotAdd disk is shown as local storage in HotAdd Transport Mode for Virtual Disk.
HotAdd Transport Mode for Virtual Disk
Running the backup proxy as a virtual machine has two advantages: it is easy to move a virtual machine to a new media server, and it can back up local storage without using the LAN. However, HotAdd incurs more overhead on the physical ESX/ESXi host than SAN transport mode does.
About the HotAdd Proxy
The HotAdd proxy can be a virtual machine or a physical machine, although virtual proxies are preferable. In typical implementations, a HotAdd proxy backs up either Windows or Linux virtual machines, but not both.
For parallel backup, sites can deploy multiple proxies.
If the HotAdd proxy is a virtual machine that resides on a VMFS-3 volume, choose a volume with block size appropriate for the maximum virtual disk size of virtual machines that customers want to back up, as shown in VMFS-3 Block Size for HotAdd Backup Proxy. This caveat does not apply to VMFS-5 volumes, which always have 1MB file block size.
NBD and NBDSSL Transport
When no other transport is available, networked storage applications can use LAN transport for data access, either NBD (network block device) or NBDSSL (encrypted). NBD is a Linux-style kernel module that treats storage on a remote host as a block device. NBDSSL is similar but uses SSL to encrypt all data passed over the TCP connection. The NBD transport method is built into the virtual disk library, so it is always available, and is the fallback when other transport methods are unavailable.
LAN (NBD) Transport Mode for Virtual Disk
In this mode, the ESX/ESXi host reads data from storage and sends it across a network to the backup server. With LAN transport, large virtual disks can take a long time to transmit. This transport mode adds traffic to the LAN, unlike SAN and HotAdd transport, but NBD transport offers the following advantages:
The backup proxy can be a virtual machine, so customers can use vSphere resource pools to minimize the performance impact of backup. For example, the backup proxy can be in a lower-priority resource pool than the production ESX/ESXi hosts.
If virtual machines and the backup proxy are on a private network, customers can choose unencrypted data transfer. NBD is faster and consumes fewer resources than NBDSSL. However VMware recommends encryption for sensitive information, even on a private network.
SSL Certificates and Security
The VDDK 5.1 release has been security hardened, and applications can be set to verify SSL certificates.
On Windows, the three keys shown in Windows Registry Keys for VDDK are required at one of the following Windows registry paths:
HKEY_LOCAL_MACHINE\SOFTWARE\VMware, Inc.\VMware Virtual Disk Development Kit
HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\VMware, Inc.\VMware Virtual Disk Development Kit
To support registry redirection, registry entries needed by VDDK on 64-bit Windows must be placed under registry path Wow6432Node. This is the correct location for both 32-bit and 64-bit binaries on 64-bit Windows.
When SSL certificate verification is enabled, the thumbprint of the target machine's SSL certificate must match the thumbprint provided in the communication configuration structure.
On Linux, SSL certificate verification requires the use of thumbprints; there is no mechanism to validate an SSL certificate without a thumbprint. On vSphere the thumbprint is a hash obtained from a trusted source such as vCenter Server, and passed in the SSLVerifyParam structure from the NFC ticket. If you add the following line to the VixDiskLib_InitEx configuration file, Linux applications will check the SSL thumbprint:
vixDiskLib.linuxSSL.verifyCertificates = 1
The following library functions enforce the SSL thumbprint check on Linux: InitEx, PrepareForAccess, EndAccess, GetNfcTicket, and the GetRpcConnection interface that is used by the advanced transports.
NFC Session Limits
NBD employs the VMware network file copy (NFC) protocol. NFC Session Connection Limits shows limits on the number of network connections for various host types. VixDiskLib_Open() uses one connection for every virtual disk that it accesses on an ESX/ESXi host. VixDiskLib_Clone() also requires a connection. It is not possible to share a connection across disks. These are host limits, not per-process limits, and do not apply to SAN or HotAdd.
The number of connections is also limited by a transfer buffer for all NFC connections, enforced by the host: the sum of all NFC connection buffers to an ESXi host cannot exceed 32MB.