Private Nodes Quick Start

You're building a platform where tenants need real hardware isolation. Think GPU workloads that can't share a node pool, regulated environments that require dedicated infrastructure, or AI cloud customers who expect hyperscaler-grade separation.

This guide shows you how to deploy a tenant cluster backed by dedicated worker nodes. There's no cross-tenant scheduling, and each tenant gets a separate CNI and separate storage. At the end, you'll see exactly what is and isn't visible from the Control Plane Cluster.

Private nodes require vCluster Platform. Free mode is supported, so no paid license is needed to get started.

Prerequisites

You need a Control Plane Cluster with vCluster Platform running on it.

  • Have Platform already? Continue below.
  • No cluster yet? Start with the Standalone quick start to bootstrap a Control Plane Cluster and install Platform in one pass.
  • Have a cluster but no Platform? Run vcluster platform start against it. See the Platform install guide.

You also need at least one Linux machine to join as a worker node.

  • Ubuntu 22.04 or 24.04 recommended
  • Root access on the worker node
  • Outbound connectivity to the Control Plane Cluster

See the full node requirements.
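The OS recommendation above can be scripted as a pre-flight check before joining a machine. A minimal sketch in Python, assuming the node exposes the standard `/etc/os-release` format (the `is_supported` helper and the inline sample are illustrative, not part of vCluster):

```python
# Sketch: pre-flight OS check for a candidate worker node, based on the
# recommended Ubuntu versions from this guide.
SUPPORTED = {("ubuntu", "22.04"), ("ubuntu", "24.04")}

def parse_os_release(text: str) -> dict:
    """Parse /etc/os-release KEY=VALUE lines into a dict."""
    info = {}
    for line in text.splitlines():
        if "=" in line:
            key, _, value = line.partition("=")
            info[key] = value.strip('"')
    return info

def is_supported(text: str) -> bool:
    info = parse_os_release(text)
    return (info.get("ID", ""), info.get("VERSION_ID", "")) in SUPPORTED

# On a real node you would read the file instead of this sample string:
sample = 'ID=ubuntu\nVERSION_ID="24.04"\n'
print(is_supported(sample))  # True for Ubuntu 24.04
```

On the worker itself you would feed it `open("/etc/os-release").read()`; other distributions may work but are outside the recommendation above.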

Create the tenant cluster

Private nodes must be enabled at creation time

An existing tenant cluster cannot be migrated from shared to private nodes. This configuration must be set when the cluster is first created.

  1. Make sure your kube context is pointed at the Control Plane Cluster.

    kubectl config current-context
  2. Create a vcluster.yaml with private nodes and networking configured.

    vcluster.yaml
    privateNodes:
      enabled: true

    controlPlane:
      distro:
        k8s:
          image:
            tag: v1.35.0
      service:
        spec:
          type: LoadBalancer

    networking:
      podCIDR: 10.64.0.0/16
      serviceCIDR: 10.128.0.0/16
  3. Deploy the tenant cluster.

    vcluster create my-vcluster --namespace team-x --values vcluster.yaml

    When deployment finishes, the CLI connects and switches your kube context into the tenant cluster. The cluster has no worker nodes yet. Workloads cannot be scheduled until at least one node joins.

  4. Confirm the tenant cluster is running with no nodes.

    kubectl get nodes

    Expected output:

    No resources found.

    This is expected. Private nodes join explicitly. The control plane is running. Compute comes next.
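The two CIDR ranges in step 2 must not overlap each other, and neither should collide with networks the worker machines already use. A quick sanity check using Python's standard `ipaddress` module (the `node_network` value here is a made-up example LAN, not something vCluster knows about):

```python
# Sketch: sanity-check the CIDRs from vcluster.yaml before deploying.
import ipaddress

pod_cidr = ipaddress.ip_network("10.64.0.0/16")        # networking.podCIDR
service_cidr = ipaddress.ip_network("10.128.0.0/16")   # networking.serviceCIDR
node_network = ipaddress.ip_network("192.168.1.0/24")  # example worker LAN (assumption)

ranges = {"podCIDR": pod_cidr, "serviceCIDR": service_cidr, "node network": node_network}
names = list(ranges)
for i, a in enumerate(names):
    for b in names[i + 1:]:
        if ranges[a].overlaps(ranges[b]):
            raise SystemExit(f"{a} overlaps {b}: adjust vcluster.yaml")
print("CIDR ranges are disjoint")
```

With the defaults above, the check passes; swap in your own node network to catch collisions before they surface as unreachable pods.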

Join a worker node

  1. Generate a join token. Run this from your local machine with your kube context still pointed at the tenant cluster.

    vcluster token create --expires=1h

    The output is a curl command to run on the worker node:

    curl -sfLk https://<vcluster-endpoint>/node/join?token=<token> | sh -
  2. SSH into the worker node and run the join command as root.

    sudo su -
    curl -sfLk https://<vcluster-endpoint>/node/join?token=<token> | sh -

    The script installs Kubernetes node components (containerd, kubelet) and joins the node to the tenant cluster. Output ends with:

    This node has joined the cluster:
    * Certificate signing request was sent to apiserver and a response was received.
    * The kubelet was informed of the new secure connection details.
  3. Back on your workstation, verify the node joined.

    kubectl get nodes

    Output is similar to:

    NAME            STATUS   ROLES    AGE   VERSION
    worker-node-1   Ready    <none>   30s   v1.35.0
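When automating joins, you may want to block until the node reports Ready. A small sketch that parses the plain-text `kubectl get nodes` output shown above (`ready_nodes` is a hypothetical helper, not a vCluster command):

```python
# Sketch: extract Ready node names from `kubectl get nodes` output,
# assuming the default column layout (NAME first, STATUS second).
def ready_nodes(kubectl_output: str) -> list:
    """Return names of nodes whose STATUS column is exactly 'Ready'."""
    nodes = []
    for line in kubectl_output.strip().splitlines()[1:]:  # skip the header row
        fields = line.split()
        if len(fields) >= 2 and fields[1] == "Ready":
            nodes.append(fields[0])
    return nodes

sample = """NAME            STATUS   ROLES    AGE   VERSION
worker-node-1   Ready    <none>   30s   v1.35.0"""
print(ready_nodes(sample))  # ['worker-node-1']
```

In practice, `kubectl wait --for=condition=Ready node/worker-node-1 --timeout=5m` achieves the same thing without any parsing.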

Deploy a workload

With a worker node ready, schedule a workload to confirm the tenant cluster is fully operational.

kubectl create deployment hello --image=nginx
kubectl get pods --watch

The pod schedules onto worker-node-1. No other tenant can see this node or the workloads running on it.
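You can confirm the placement yourself with `kubectl get pods -o wide`, which adds a NODE column. A sketch that extracts it, assuming the default wide output layout (the sample output and `pod_placements` helper are illustrative):

```python
# Sketch: map pod name -> node name from `kubectl get pods -o wide` output.
def pod_placements(kubectl_wide_output: str) -> dict:
    placements = {}
    lines = kubectl_wide_output.strip().splitlines()
    node_idx = lines[0].split().index("NODE")  # locate the NODE column
    for line in lines[1:]:
        fields = line.split()
        placements[fields[0]] = fields[node_idx]
    return placements

sample = """NAME                    READY   STATUS    RESTARTS   AGE   IP           NODE            NOMINATED NODE   READINESS GATES
hello-5c7f8d9b4-abcde   1/1     Running   0          10s   10.64.0.12   worker-node-1   <none>           <none>"""
print(pod_placements(sample))  # {'hello-5c7f8d9b4-abcde': 'worker-node-1'}
```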

The isolation boundary

From the Control Plane Cluster, the worker node that joined the tenant cluster is not visible as a Kubernetes node:

vcluster disconnect
kubectl get nodes

The worker node does not appear here. It exists only within the tenant cluster's namespace. No other tenant cluster can schedule onto it. No operator on the Control Plane Cluster can inspect its workloads through the Kubernetes API.

This is what makes private nodes the right model for GPU workloads. Each tenant gets hardware-level isolation without needing a separate physical cluster.

Clean up

vcluster delete my-vcluster --namespace team-x

This invalidates the join token; the node components on the worker machine remain installed but inactive. To fully reset a worker node, re-run the join script with --reset-only:

curl -sfLk https://<vcluster-endpoint>/node/join?token=<token> | sh -s -- --reset-only

Next steps