System Administration > Configuration > Fabric > Nodes > Transport Nodes

Associated URIs:

API Description | API Path

List Host Transport Nodes


Returns information about all host transport nodes along with underlying host details.
A transport node is a host that contains hostswitches.
A hostswitch can have virtual machines connected to it.

Because each transport node has hostswitches, transport nodes can also have
virtual tunnel endpoints, which means that they can be part of the overlay.
GET /policy/api/v1/infra/sites/<site-id>/enforcement-points/<enforcementpoint-id>/host-transport-nodes
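
As an illustration, a minimal curl invocation of this endpoint might look as follows; the manager address and credentials are placeholders, and the site and enforcement point are assumed to use the common 'default' identifiers (verify these in your environment).

curl -k -u 'admin:<password>' \
  "https://<nsx-manager>/policy/api/v1/infra/sites/default/enforcement-points/default/host-transport-nodes"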

Delete a Transport Node


Deletes the specified transport node. The query parameter force can be used to
force-delete the host node. Force delete is not supported if the transport node
is part of a cluster on which a transport node profile is applied.

It also removes the specified host node from the system.
If the unprepare_host option is set to false, the host is deleted
without uninstalling the NSX components from it.
If transport node delete is called with the query parameter force unset
or set to false and the uninstall of NSX components on the host fails,
the TransportNodeState object is retained.
If transport node delete is called with the query parameter force set to true
and the uninstall of NSX components on the host fails, the TransportNodeState
object is deleted.
DELETE /policy/api/v1/infra/sites/<site-id>/enforcement-points/<enforcementpoint-id>/host-transport-nodes/<host-transport-node-id>
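
A hedged sketch of a forced delete that also skips uninstalling NSX components; the manager address, IDs, and credentials are placeholders, and the exact names and placement of the force and unprepare_host query parameters should be verified against your NSX-T version.

curl -k -u 'admin:<password>' -X DELETE \
  "https://<nsx-manager>/policy/api/v1/infra/sites/default/enforcement-points/default/host-transport-nodes/<host-transport-node-id>?force=true&unprepare_host=false"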

Get a Host Transport Node


Returns information about a specified transport node.
GET /policy/api/v1/infra/sites/<site-id>/enforcement-points/<enforcementpoint-id>/host-transport-nodes/<host-transport-node-id>

Patch a Host Transport Node


Transport nodes are hypervisor hosts that will participate
in an NSX-T overlay. For a hypervisor host, this means that it hosts
VMs that will communicate over NSX-T logical switches.

This API creates a transport node for a host node (hypervisor) in the transport network.

When you run this command for a host, NSX Manager attempts to install the
NSX kernel modules, which are packaged as VIB, RPM, or DEB files. For the
installation to succeed, you must provide the host login credentials and the
host thumbprint.

To get the ESXi host thumbprint, SSH to the host and run the
openssl x509 -in /etc/vmware/ssl/rui.crt -fingerprint -sha256 -noout
command.

To generate the host key thumbprint using the SHA-256 algorithm, follow the
steps below.

Log into the host, making sure that the connection is not vulnerable to a
man-in-the-middle attack. Check whether a public key already exists.
The host public key is generally located at '/etc/ssh/ssh_host_rsa_key.pub'.
If the key is not present, generate a new key by running the following
command and follow the instructions.

ssh-keygen -t rsa

Now generate a SHA-256 hash of the key using the following command. Make
sure to pass the appropriate file name if the public key is stored under a
file name other than the default 'id_rsa.pub'.

awk '{print $2}' id_rsa.pub | base64 -d | sha256sum -b | sed 's/ .*$//' | xxd -r -p | base64

Additional documentation on creating a transport node can be found
in the NSX-T Installation Guide.

In order for the transport node to forward packets,
the host_switch_spec property must be specified.

Host switches (called bridges in OVS on KVM hypervisors) are the
individual switches within the host virtual switch. Virtual machines
are connected to the host switches.

When creating a transport node, you need to specify if the host switches
are already manually preconfigured on the node, or if NSX should create
and manage the host switches. You specify this choice by the type
of host switches you pass in the host_switch_spec property of the
TransportNode request payload.

For a KVM host, you can preconfigure the host switch, or you can have
NSX Manager perform the configuration. For an ESXi host, NSX Manager always
configures the host switch.

To preconfigure the host switches on a KVM host, pass an array
of PreconfiguredHostSwitchSpec objects that describe those host
switches. In the current NSX-T release, only one preconfigured host
switch can be specified. See the PreconfiguredHostSwitchSpec schema
definition for documentation on the properties that must be provided.
Preconfigured host switches are only supported on KVM hosts, not on
ESXi hosts.

To allow NSX to manage the host switch configuration on KVM hosts or
ESXi hosts, pass an array of StandardHostSwitchSpec objects in the
host_switch_spec property, and NSX will automatically
create host switches with the properties you provide. In the current
NSX-T release, up to 16 host switches can be automatically managed.
See the StandardHostSwitchSpec schema definition for documentation on
the properties that must be provided.

The request should provide node_deployment_info.
PATCH /policy/api/v1/infra/sites/<site-id>/enforcement-points/<enforcementpoint-id>/host-transport-nodes/<host-transport-node-id>
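
For illustration only, a minimal PATCH sketch that supplies node_deployment_info and a StandardHostSwitchSpec. All values (manager address, credentials, profile and transport zone identifiers, pnic and uplink names) are placeholders, and the field layout is an assumption to be checked against the HostTransportNode, StandardHostSwitchSpec, and HostNode schema definitions for your release.

curl -k -u 'admin:<password>' -X PATCH \
  -H 'Content-Type: application/json' \
  "https://<nsx-manager>/policy/api/v1/infra/sites/default/enforcement-points/default/host-transport-nodes/esx-host-01" \
  -d '{
    "display_name": "esx-host-01",
    "node_deployment_info": {
      "resource_type": "HostNode",
      "ip_addresses": ["10.10.10.11"],
      "os_type": "ESXI",
      "host_credential": {
        "username": "root",
        "password": "<host-password>",
        "thumbprint": "<host-sha256-thumbprint>"
      }
    },
    "host_switch_spec": {
      "resource_type": "StandardHostSwitchSpec",
      "host_switches": [
        {
          "host_switch_name": "nsxHostSwitch",
          "host_switch_profile_ids": [
            { "key": "UplinkHostSwitchProfile", "value": "<uplink-profile-id>" }
          ],
          "pnics": [ { "device_name": "vmnic1", "uplink_name": "uplink-1" } ],
          "ip_assignment_spec": { "resource_type": "AssignedByDhcp" },
          "transport_zone_endpoints": [
            { "transport_zone_id": "<transport-zone-id>" }
          ]
        }
      ]
    }
  }'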

Update transport node maintenance mode


Puts the transport node into maintenance mode or exits it from maintenance mode. When a HostTransportNode is in maintenance mode, no configuration changes are allowed.
POST /policy/api/v1/infra/sites/<site-id>/enforcement-points/<enforcementpoint-id>/host-transport-nodes/<host-transport-node-id>

Resync a Host Transport Node


Resyncs the TransportNode configuration on a host.
It is similar to updating the TransportNode with the existing configuration,
but force-syncs the configuration to the host (no backend optimizations).
POST /policy/api/v1/infra/sites/<site-id>/enforcement-points/<enforcementpoint-id>/host-transport-nodes/<host-transport-node-id>?action=resync_host_config

Apply cluster level Transport Node Profile on overridden host


A host can be overridden to have a different configuration than the Transport
Node Profile (TNP) applied to its cluster. This action restores such an
overridden host back to the cluster-level TNP.

This API can also be used in another case. When a TNP is applied to a cluster,
if any validation fails (e.g. VMs running on the host), the existing transport
node (TN) is not updated. In that case, after the issue is resolved manually
(e.g. the VMs are powered off), you can call this API to update the TN as per
the cluster-level TNP.
PUT /policy/api/v1/infra/sites/<site-id>/enforcement-points/<enforcementpoint-id>/host-transport-nodes/<host-transport-node-id>?action=restore_cluster_config
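
A minimal curl sketch of this action (placeholder manager address, IDs, and credentials):

curl -k -u 'admin:<password>' -X PUT \
  "https://<nsx-manager>/policy/api/v1/infra/sites/default/enforcement-points/default/host-transport-nodes/<host-transport-node-id>?action=restore_cluster_config"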

Create or update a Host Transport Node


Transport nodes are hypervisor hosts that will participate
in an NSX-T overlay. For a hypervisor host, this means that it hosts
VMs that will communicate over NSX-T logical switches.

This API creates a transport node for a host node (hypervisor) in the transport network.

When you run this command for a host, NSX Manager attempts to install the
NSX kernel modules, which are packaged as VIB, RPM, or DEB files. For the
installation to succeed, you must provide the host login credentials and the
host thumbprint.

To get the ESXi host thumbprint, SSH to the host and run the
openssl x509 -in /etc/vmware/ssl/rui.crt -fingerprint -sha256 -noout
command.

To generate the host key thumbprint using the SHA-256 algorithm, follow the
steps below.

Log into the host, making sure that the connection is not vulnerable to a
man-in-the-middle attack. Check whether a public key already exists.
The host public key is generally located at '/etc/ssh/ssh_host_rsa_key.pub'.
If the key is not present, generate a new key by running the following
command and follow the instructions.

ssh-keygen -t rsa

Now generate a SHA-256 hash of the key using the following command. Make
sure to pass the appropriate file name if the public key is stored under a
file name other than the default 'id_rsa.pub'.

awk '{print $2}' id_rsa.pub | base64 -d | sha256sum -b | sed 's/ .*$//' | xxd -r -p | base64

Additional documentation on creating a transport node can be found
in the NSX-T Installation Guide.

In order for the transport node to forward packets,
the host_switch_spec property must be specified.

Host switches (called bridges in OVS on KVM hypervisors) are the
individual switches within the host virtual switch. Virtual machines
are connected to the host switches.

When creating a transport node, you need to specify if the host switches
are already manually preconfigured on the node, or if NSX should create
and manage the host switches. You specify this choice by the type
of host switches you pass in the host_switch_spec property of the
TransportNode request payload.

For a KVM host, you can preconfigure the host switch, or you can have
NSX Manager perform the configuration. For an ESXi host, NSX Manager always
configures the host switch.

To preconfigure the host switches on a KVM host, pass an array
of PreconfiguredHostSwitchSpec objects that describe those host
switches. In the current NSX-T release, only one preconfigured host
switch can be specified. See the PreconfiguredHostSwitchSpec schema
definition for documentation on the properties that must be provided.
Preconfigured host switches are only supported on KVM hosts, not on
ESXi hosts.

To allow NSX to manage the host switch configuration on KVM hosts or
ESXi hosts, pass an array of StandardHostSwitchSpec objects in the
host_switch_spec property, and NSX will automatically
create host switches with the properties you provide. In the current
NSX-T release, up to 16 host switches can be automatically managed.
See the StandardHostSwitchSpec schema definition for documentation on
the properties that must be provided.

The request should provide node_deployment_info.
PUT /policy/api/v1/infra/sites/<site-id>/enforcement-points/<enforcementpoint-id>/host-transport-nodes/<host-transport-node-id>
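
For the KVM preconfigured case specifically, a hedged sketch of the host_switch_spec portion of the PUT body is shown below; the host_switch_id, endpoint device name, and transport zone identifier are placeholders, and the exact property names should be confirmed against the PreconfiguredHostSwitchSpec schema definition. node_deployment_info (host credentials and thumbprint) would be supplied as in the PATCH sketch earlier and is omitted here for brevity.

curl -k -u 'admin:<password>' -X PUT \
  -H 'Content-Type: application/json' \
  "https://<nsx-manager>/policy/api/v1/infra/sites/default/enforcement-points/default/host-transport-nodes/kvm-host-01" \
  -d '{
    "display_name": "kvm-host-01",
    "host_switch_spec": {
      "resource_type": "PreconfiguredHostSwitchSpec",
      "host_switches": [
        {
          "host_switch_id": "ovs-br0",
          "endpoints": [ { "device_name": "vtep0" } ],
          "transport_zone_endpoints": [
            { "transport_zone_id": "<transport-zone-id>" }
          ]
        }
      ]
    }
  }'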

Fetch Discovered VIF State on given TransportNode


For the given TransportNode, fetches all the VIF information from VC and
returns the corresponding state. Only host switches configured for
security are considered.
GET /policy/api/v1/infra/sites/<site-id>/enforcement-points/<enforcementpoint-id>/host-transport-nodes/<host-transport-node-id>/discovered-vifs

Get the module details of a host transport node


GET /policy/api/v1/infra/sites/<site-id>/enforcement-points/<enforcementpoint-id>/host-transport-nodes/<host-transport-node-id>/modules

Get a Host Transport Node's State


Returns information about the current state of the transport node
configuration and information about the associated hostswitch.
GET /policy/api/v1/infra/sites/<site-id>/enforcement-points/<enforcementpoint-id>/host-transport-nodes/<host-transport-node-id>/state

List transport nodes by realized state


Returns a list of transport node states whose realized state matches the value
provided as a query parameter.
GET /policy/api/v1/infra/sites/<site-id>/enforcement-points/<enforcementpoint-id>/host-transport-nodes/state

Clean up all nvds upgrade related configurations


This API needs to be invoked before another precheck and upgrade is
requested.
It cleans up the precheck configuration and VDS topology from the last
request.
POST /api/v1/nvds-urt?action=cleanup (Deprecated)

Set the migrate status key of ExtraConfigProfile of all Transport Nodes to IGNORE


POST /api/v1/nvds-urt?action=ignore_migrate_status (Deprecated)

Retrieve latest precheck ID of the N-VDS to VDS migration


GET /api/v1/nvds-urt/precheck (Deprecated)

Start precheck for N-VDS to VDS migration


The precheck is performed at a global level across all NVDSes present in the
system. After the precheck API is invoked, the status is expected to be
checked via the GetNvdsUpgradeReadinessCheckSummary API. If NVDS configuration
such as HostSwitchProfiles differs across TransportNodes, the precheck fails,
the status is FAILED, and the error is reported via the
status API. Once the reported errors are fixed, the precheck API is expected
to be invoked again to rerun the precheck. Once the NVDS configuration is
consistent across all TransportNodes, the precheck passes, a topology
is generated, and the status is PENDING_TOPOLOGY. The generated topology
can be retrieved via the GetRecommendedVdsTopology API. The user can apply the
recommended topology via the SetTargetVdsTopology API.
POST /api/v1/nvds-urt/precheck (Deprecated)
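
A hedged end-to-end sketch of the workflow described above, using the deprecated endpoints listed in this section; the manager address, precheck ID, and credentials are placeholders, and the body for the apply step is assumed to be the (optionally edited) topology returned by the topology API.

# 1. Start the global precheck.
curl -k -u 'admin:<password>' -X POST "https://<nsx-manager>/api/v1/nvds-urt/precheck"

# 2. Poll the status summary until it reports PENDING_TOPOLOGY (or FAILED).
curl -k -u 'admin:<password>' "https://<nsx-manager>/api/v1/nvds-urt/status-summary/<precheck-id>"

# 3. Retrieve the recommended topology and save it for review/editing.
curl -k -u 'admin:<password>' "https://<nsx-manager>/api/v1/nvds-urt/topology/<precheck-id>" -o topology.json

# 4. Apply the topology.
curl -k -u 'admin:<password>' -X POST -H 'Content-Type: application/json' \
  "https://<nsx-manager>/api/v1/nvds-urt/topology?action=apply" -d @topology.json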

Retrieve latest precheck ID of the N-VDS to VDS migration for the cluster


GET /api/v1/nvds-urt/precheck-by-cluster/<cluster_id> (Deprecated)

Start precheck for N-VDS to VDS migration by cluster


POST /api/v1/nvds-urt/precheck-by-cluster/<cluster_id> (Deprecated)

Start precheck for N-VDS to VDS migration by clusters


POST /api/v1/nvds-urt/precheck-by-clusters (Deprecated)

Get summary of N-VDS to VDS migration


GET /api/v1/nvds-urt/status-summary-by-cluster/<precheck-id> (Deprecated)

Get summary of N-VDS to VDS migration


Provides the overall status for the precheck as well as the actual NVDS to CVDS
upgrade status per host.
Precheck statuses are as follows:
1. IN_PROGRESS: Precheck is in progress
2. FAILED: Precheck failed; the error can be found in the precheck_issue field
3. PENDING_TOPOLOGY: Precheck is successful; the recommended topology is generated
4. APPLYING_TOPOLOGY: SetTargetTopology is in progress
5. APPLY_TOPOLOGY_FAILED: SetTargetTopology failed
6. READY: SetTargetTopology is successful and the TransportNodes provided as
part of the topology are ready for upgrade from NVDS to CVDS
GET /api/v1/nvds-urt/status-summary/<precheck-id> (Deprecated)

Unset VDS configuration and remove it from vCenter


This reverts the corresponding VDS to the PENDING_TOPOLOGY state. The user can
revert the entire topology altogether or revert it partially, depending
on which TransportNodes the user does not want to upgrade to VDS.
POST /api/v1/nvds-urt/topology?action=revert (Deprecated)

Set VDS configuration and create it in vCenter


Upon a successful precheck, the status goes to PENDING_TOPOLOGY and a global
recommended topology is generated, which can be retrieved via the
GetRecommendedVdsTopology API. The user can apply the entire recommended
topology all together or apply it partially, depending on which
TransportNodes the user wants to be upgraded from NVDS to CVDS.
The user can change the system-generated vds_name field; all other fields
cannot be changed when applying the topology.
POST /api/v1/nvds-urt/topology?action=apply (Deprecated)

Recommended topology


GET /api/v1/nvds-urt/topology-by-cluster/<precheck-id> (Deprecated)

Set VDS configuration and create it in vCenter


POST /api/v1/nvds-urt/topology-by-cluster/<precheck-id>?action=apply (Deprecated)

Recommended topology


This returns the global recommended topology generated when the precheck is
successful.
GET /api/v1/nvds-urt/topology/<precheck-id> (Deprecated)

List Transport Nodes


Returns information about all transport nodes along with underlying host or
edge details. A transport node is a host or edge that contains hostswitches.
A hostswitch can have virtual machines connected to it.

Because each transport node has hostswitches, transport nodes can also have
virtual tunnel endpoints, which means that they can be part of the overlay.
This API is now deprecated. Please use the new API -
/infra/sites/<site-id>/enforcement-points/<enforcementpoint-id>/host-transport-nodes
GET /api/v1/transport-nodes

Create a Transport Node


Transport nodes are hypervisor hosts and NSX Edges that will participate
in an NSX-T overlay. For a hypervisor host, this means that it hosts
VMs that will communicate over NSX-T logical switches. For NSX Edges,
this means that it will have logical router uplinks and downlinks.

This API creates a transport node for a host node (hypervisor) or edge node
(router) in the transport network.

When you run this command for a host, NSX Manager attempts to install the
NSX kernel modules, which are packaged as VIB, RPM, or DEB files. For the
installation to succeed, you must provide the host login credentials and the
host thumbprint.

To get the ESXi host thumbprint, SSH to the host and run the
openssl x509 -in /etc/vmware/ssl/rui.crt -fingerprint -sha256 -noout
command.

To generate the host key thumbprint using the SHA-256 algorithm, follow the
steps below.

Log into the host, making sure that the connection is not vulnerable to a
man-in-the-middle attack. Check whether a public key already exists.
The host public key is generally located at '/etc/ssh/ssh_host_rsa_key.pub'.
If the key is not present, generate a new key by running the following
command and follow the instructions.

ssh-keygen -t rsa

Now generate a SHA-256 hash of the key using the following command. Make
sure to pass the appropriate file name if the public key is stored under a
file name other than the default 'id_rsa.pub'.

awk '{print $2}' id_rsa.pub | base64 -d | sha256sum -b | sed 's/ .*$//' | xxd -r -p | base64
This API is deprecated as part of FN+TN unification. Please use the Transport Node API
to install NSX components on a node.

Additional documentation on creating a transport node can be found
in the NSX-T Installation Guide.

In order for the transport node to forward packets,
the host_switch_spec property must be specified.

Host switches (called bridges in OVS on KVM hypervisors) are the
individual switches within the host virtual switch. Virtual machines
are connected to the host switches.

When creating a transport node, you need to specify if the host switches
are already manually preconfigured on the node, or if NSX should create
and manage the host switches. You specify this choice by the type
of host switches you pass in the host_switch_spec property of the
TransportNode request payload.

For a KVM host, you can preconfigure the host switch, or you can have
NSX Manager perform the configuration. For an ESXi host or NSX Edge
node, NSX Manager always configures the host switch.

To preconfigure the host switches on a KVM host, pass an array
of PreconfiguredHostSwitchSpec objects that describe those host
switches. In the current NSX-T release, only one preconfigured host
switch can be specified. See the PreconfiguredHostSwitchSpec schema
definition for documentation on the properties that must be provided.
Preconfigured host switches are only supported on KVM hosts, not on
ESXi hosts or NSX Edge nodes.

To allow NSX to manage the host switch configuration on KVM hosts,
ESXi hosts, or NSX Edge nodes, pass an array of StandardHostSwitchSpec
objects in the host_switch_spec property, and NSX will automatically
create host switches with the properties you provide. In the current
NSX-T release, up to 16 host switches can be automatically managed.
See the StandardHostSwitchSpec schema definition for documentation on
the properties that must be provided.

Note: Previous versions of NSX-T also used a property named
transport_zone_endpoints at the TransportNode level. This property is
deprecated, which results in some combinations of new-client and
old-client payloads. Examples [1] & [2] show an old/existing client
request and response that populate the transport_zone_endpoints property
at the TransportNode level. Example [3] shows a TransportNode creation
request/response that populates the transport_zone_endpoints property
at the StandardHostSwitch level along with other new properties.

The request should provide either node_deployment_info or node_id.

If the host node (hypervisor) or edge node (router) is already added to the
system, it can be converted to a transport node by providing node_id in the
request.

If the host node (hypervisor) or edge node (router) is not already present in
the system, its information should be provided under node_deployment_info.
This API is now deprecated. Please use the new API -
/infra/sites/<site-id>/enforcement-points/<enforcementpoint-id>/host-transport-nodes/<host-transport-node-id>
POST /api/v1/transport-nodes
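
As an illustration, a hedged sketch of converting an already-registered fabric node into a transport node by passing node_id; the field names in host_switch_spec mirror the StandardHostSwitchSpec sketch earlier and remain assumptions, and all identifiers and credentials are placeholders.

curl -k -u 'admin:<password>' -X POST \
  -H 'Content-Type: application/json' \
  "https://<nsx-manager>/api/v1/transport-nodes" \
  -d '{
    "display_name": "tn-host-01",
    "node_id": "<existing-fabric-node-uuid>",
    "host_switch_spec": {
      "resource_type": "StandardHostSwitchSpec",
      "host_switches": [
        {
          "host_switch_name": "nsxvswitch",
          "pnics": [ { "device_name": "vmnic1", "uplink_name": "uplink-1" } ],
          "ip_assignment_spec": { "resource_type": "AssignedByDhcp" },
          "transport_zone_endpoints": [
            { "transport_zone_id": "<transport-zone-uuid>" }
          ]
        }
      ]
    }
  }'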

Clear edge transport node stale entries


An edge transport node maintains entries in many internal tables.
In some cases, a few of these entries might not get cleaned up during edge
transport node deletion.
This API cleans up any stale entries that may exist in the internal tables
that store the Edge Transport Node data.
POST /api/v1/transport-nodes?action=clean_stale_entries
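
A minimal curl sketch (placeholder manager address and credentials):

curl -k -u 'admin:<password>' -X POST \
  "https://<nsx-manager>/api/v1/transport-nodes?action=clean_stale_entries"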

Redeploys a new node that replaces the specified edge node.


Redeploys an edge node at NSX Manager that replaces the edge node with
identifier <node-id>. If NSX Manager can access the specified edge node,
the node is put into maintenance mode and the associated VM is then
deleted. This is a means to reset all configuration on the edge node.
The communication channel between NSX Manager and the edge is established after
this operation.
POST /api/v1/transport-nodes/<node-id>?action=redeploy

Get the module details of a transport node


GET /api/v1/transport-nodes/<node-id>/modules (Deprecated)

Invoke DELETE request on target transport node


DELETE /api/v1/transport-nodes/<target-node-id>/<target-uri>

Invoke GET request on target transport node


GET /api/v1/transport-nodes/<target-node-id>/<target-uri>

Invoke POST request on target transport node


POST /api/v1/transport-nodes/<target-node-id>/<target-uri>

Invoke PUT request on target transport node


PUT /api/v1/transport-nodes/<target-node-id>/<target-uri>

Delete a Transport Node


Deletes the specified transport node. The query parameter force can be used to
force-delete the host nodes. Force deletion of edge and public cloud
gateway nodes is not supported. Force delete is not supported if the transport
node is part of a cluster on which a transport node profile is applied.

If transport node delete is called with the query parameter force unset
or set to false and the uninstall of NSX components on the host fails,
the TransportNodeState object is retained.
If transport node delete is called with the query parameter force set to true
and the uninstall of NSX components on the host fails, the TransportNodeState
object is deleted.

It also removes the specified node (host or edge) from the system.
If the unprepare_host option is set to false, the host is deleted
without uninstalling the NSX components from it.
This API is now deprecated. Please use the new API -
/infra/sites/<site-id>/enforcement-points/<enforcementpoint-id>/host-transport-nodes/<host-transport-node-id>
DELETE /api/v1/transport-nodes/<transport-node-id>

Get a Transport Node


Returns information about a specified transport node. This API is now deprecated. Please use the new API - /infra/sites/<site-id>/enforcement-points/<enforcementpoint-id>/host-transport-nodes/<host-transport-node-id>
GET /api/v1/transport-nodes/<transport-node-id>

Apply cluster level Transport Node Profile on overridden host


A host can be overridden to have a different configuration than the Transport
Node Profile (TNP) applied to its cluster. This action restores such an
overridden host back to the cluster-level TNP.

This API can also be used in another case. When a TNP is applied to a cluster,
if any validation fails (e.g. VMs running on the host), the existing transport
node (TN) is not updated. In that case, after the issue is resolved manually
(e.g. the VMs are powered off), you can call this API to update the TN as per
the cluster-level TNP.
POST /api/v1/transport-nodes/<transport-node-id>?action=restore_cluster_config (Deprecated)

Enable flow cache for an edge transport node


Enables flow cache for an edge transport node.
Caution: this involves a restart of the edge
dataplane and hence may lead to network disruption.
POST /api/v1/transport-nodes/<transport-node-id>?action=enable_flow_cache

Refresh the node configuration for the Edge node.


The API is applicable to Edge transport nodes. If you update the edge
configuration and find a discrepancy between the Edge configuration at NSX Manager
and the realized configuration, use this API to refresh the configuration at NSX Manager.
It refreshes the Edge configuration from sources external to NSX Manager, such as
vSphere Server or the Edge node CLI. After this action, the Edge configuration at NSX Manager
is updated and the API GET /api/v1/transport-nodes shows the refreshed data.
From the 3.2 release onwards, the refresh API updates the MP intent by default.
POST /api/v1/transport-nodes/<transport-node-id>?action=refresh_node_configuration&resource_type=EdgeNode
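
A minimal curl sketch of the refresh call, using exactly the path and query parameters documented above (placeholder manager address, node ID, and credentials):

curl -k -u 'admin:<password>' -X POST \
  "https://<nsx-manager>/api/v1/transport-nodes/<transport-node-id>?action=refresh_node_configuration&resource_type=EdgeNode"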

Restart the inventory sync for the node if it is paused currently.


Restarts the inventory sync for the node if it is currently internally paused.
After this action, the next inventory sync coming from the node is processed.
POST /api/v1/transport-nodes/<transport-node-id>?action=restart_inventory_sync

Disable flow cache for an edge transport node


Disables flow cache for an edge transport node.
Caution: this involves a restart of the edge
dataplane and hence may lead to network disruption.
POST /api/v1/transport-nodes/<transport-node-id>?action=disable_flow_cache

Update a Transport Node


Modifies the transport node information. The host_switch_name field
must match the host_switch_name value specified in the transport zone
(API: transport-zones). You must create the associated uplink profile
(API: host-switch-profiles) before you can specify an uplink_name here.
If the host is an ESXi host and has only one physical NIC being used by a
vSphere standard switch, TransportNodeUpdateParameters should be used to migrate
the management interface and the physical NIC into a logical switch that
is in a transport zone this transport node will join or has already joined.
If the migration is already done, TransportNodeUpdateParameters can also be
used to migrate the management interface and the physical NIC back to a
vSphere standard switch.
In other cases, TransportNodeUpdateParameters should NOT be used.
When updating a transport node, follow the pattern of fetching the existing
transport node and then modifying only the required properties, keeping the
other properties as-is.

It also modifies attributes of node (host or edge).

Note: Previous versions of NSX-T also used a property named
transport_zone_endpoints at the TransportNode level. This property is
deprecated, which results in some combinations of new-client and
old-client payloads. Example [1] shows an old/existing client
request and response that populate the transport_zone_endpoints property
at the TransportNode level. Example [2] shows updating the
TransportNode from example [1], adding a
new StandardHostSwitch that populates transport_zone_endpoints at the
StandardHostSwitch level. The TransportNode-level transport_zone_endpoints
will ONLY contain TransportZoneEndpoints that were originally specified
there during the create/update operation and does not include
TransportZoneEndpoints that were specified directly at the
StandardHostSwitch level.
This API is now deprecated. Please use the new API -
/infra/sites/<site-id>/enforcement-points/<enforcementpoint-id>/host-transport-nodes/<host-transport-node-id>
PUT /api/v1/transport-nodes/<transport-node-id>
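
A hedged sketch of the fetch-then-update pattern described above; the manager address, node ID, and credentials are placeholders, and the note about _revision reflects typical NSX Manager API update behavior.

# 1. Fetch the existing transport node (the response includes a _revision field
#    that is normally sent back unchanged on update).
curl -k -u 'admin:<password>' \
  "https://<nsx-manager>/api/v1/transport-nodes/<transport-node-id>" -o tn.json

# 2. Edit only the properties you need to change in tn.json (for example,
#    display_name), keeping all other properties as returned.

# 3. Send the full, modified object back.
curl -k -u 'admin:<password>' -X PUT \
  -H 'Content-Type: application/json' \
  "https://<nsx-manager>/api/v1/transport-nodes/<transport-node-id>" \
  -d @tn.json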

Return the list of capabilities of transport node


Returns information about the capabilities of a transport host node. Edge nodes do not have capabilities.
GET /api/v1/transport-nodes/<transport-node-id>/capabilities

Get a Transport Node's State


Returns information about the current state of the transport node
configuration and information about the associated hostswitch.
This API is now deprecated. Please use the new API -
/infra/sites/<site-id>/enforcement-points/<enforcementpoint-id>/host-transport-nodes/<host-transport-node-id>/state
GET /api/v1/transport-nodes/<transport-node-id>/state

Resync a Transport Node


Resyncs the TransportNode configuration on a host.
It is similar to updating the TransportNode with the existing configuration,
but force-syncs the configuration to the host (no backend optimizations).
POST /api/v1/transport-nodes/<transportnode-id>?action=resync_host_config

Update transport node maintenance mode


Puts the transport node into maintenance mode or exits it from maintenance mode.
POST /api/v1/transport-nodes/<transportnode-id>

List transport nodes by realized state


Returns a list of transport node states whose realized state matches the value
provided as a query parameter. This API will be deprecated in the future. For
hosts, please use the new API -
/infra/sites/<site-id>/enforcement-points/<enforcementpoint-id>/host-transport-nodes/state
GET /api/v1/transport-nodes/state