
Shared Nodes Quick Start

You're running a Kubernetes cluster and want to give teams, tenants, or CI pipelines their own isolated Kubernetes environments. You don't want to spin up new infrastructure for each one.

This guide shows you how to deploy a tenant cluster onto your existing cluster in under two minutes. You'll also see the isolation boundary in action. From inside the tenant cluster, everything looks like a normal Kubernetes cluster. From the outside, each tenant fits in a single namespace with no visibility into anyone else's workloads.

By the end you'll have a working tenant cluster and a clear picture of how the isolation model works. There's also an optional step to connect it to vCluster Platform for self-service access and auto-sleep.

Prerequisites​

  • A Kubernetes cluster with kubectl configured to access it
  • At least 2 vCPU and 2 GB RAM available on the cluster

Install the vCluster CLI​

brew install loft-sh/tap/vcluster

The binaries in the tap are signed using the Sigstore framework for enhanced security.

Verify the CLI installed successfully.

vcluster --version

Output is similar to:

vCluster version 0.x.x

Deploy a tenant cluster​

  1. Deploy a tenant cluster named my-vcluster into a new namespace team-x.

    vcluster create my-vcluster --namespace team-x

    When the deployment finishes, the CLI automatically connects and switches your kube context into the tenant cluster.

  2. Confirm you are inside the tenant cluster.

    kubectl get namespaces

    You see standard Kubernetes system namespaces. This is the tenant cluster's own namespace view, separate from the Control Plane Cluster.
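Step 1 deploys with default settings. The create command also accepts a vcluster.yaml file for customization. A minimal sketch, assuming the v0.20+ schema (the keys below are illustrative; check them against the vcluster.yaml reference under Next steps for your version):

```yaml
# vcluster.yaml — illustrative sketch; verify keys against the reference
# for your vCluster version.
sync:
  toHost:
    ingresses:
      enabled: true  # also sync tenant Ingress objects to the host cluster
```

Pass it at deploy time, for example with `vcluster create my-vcluster --namespace team-x -f vcluster.yaml`.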

See the isolation​

  1. Deploy a test workload inside the tenant cluster.

    kubectl create deployment nginx --image=nginx
    kubectl get pods

    The pod is running inside the tenant cluster.

  2. Disconnect from the tenant cluster and inspect the same namespace from the Control Plane Cluster.

    vcluster disconnect
    kubectl get pods -n team-x

    Output is similar to:

    NAME                                                   READY   STATUS    RESTARTS   AGE
    coredns-79cf5f4c56-nkgdr-x-kube-system-x-my-vcluster   1/1     Running   0          4m
    my-vcluster-0                                          1/1     Running   0          5m
    nginx-56c45fd5ff-kp74s-x-default-x-my-vcluster         1/1     Running   0          2m

    The nginx pod is here, but with a rewritten name. vCluster syncs tenant workloads into this namespace so the underlying cluster can schedule them. The suffix (-x-default-x-my-vcluster) encodes the source namespace and cluster. From inside the tenant cluster, the pod appeared as plain nginx in namespace default.

    This is the isolation boundary. The tenant gets a clean Kubernetes API with no visibility into the Control Plane Cluster's structure. Other tenant clusters have their own dedicated namespaces, so neither tenant can see the other's workloads.
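Because the synced name follows a fixed pattern, `<pod-name>-x-<source-namespace>-x-<vcluster-name>`, the tenant-side identity can be recovered mechanically. A small sketch in plain shell (the variable names are ours, not part of the vCluster CLI):

```shell
# Split a synced pod name back into its tenant-side parts.
# Pattern: <pod-name>-x-<source-namespace>-x-<vcluster-name>
synced="nginx-56c45fd5ff-kp74s-x-default-x-my-vcluster"

vcluster_name="${synced##*-x-}"   # strip everything up to the last "-x-"
rest="${synced%-x-*}"             # drop the trailing "-x-<vcluster-name>"
namespace="${rest##*-x-}"         # the part after the remaining "-x-"
pod_name="${rest%-x-*}"           # what the tenant sees as the pod name

echo "pod=$pod_name namespace=$namespace vcluster=$vcluster_name"
```

This naive split assumes neither the namespace nor the vCluster name itself contains the `-x-` separator.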

Add management with vCluster Platform​

The cluster you just deployed works standalone. Connect it to vCluster Platform to add a self-service layer for your team: auto-sleep, access controls, and lifecycle management across your entire fleet.

Auto-sleep pauses tenant clusters when idle and wakes them on access. For internal developer platforms, this eliminates idle resource consumption without requiring any action from tenants. Clusters wake in seconds when someone connects.

vcluster platform add vcluster my-vcluster --namespace team-x

Once connected to Platform:

  • Tenant clusters appear in the Platform UI. Your team accesses them without needing kubectl access to the Control Plane Cluster.
  • Auto-sleep activates based on inactivity, cutting resource usage during off-hours.
  • Access controls, templates, and project quotas apply across the full fleet.

Install vCluster Platform →

Configure auto-sleep →

Delete the tenant cluster​

warning

Deleting a tenant cluster removes all resources within it. This cannot be undone.

vcluster delete my-vcluster --namespace team-x

If the namespace was created by the vCluster CLI, it is also deleted.

Next steps​

  • Private nodes — dedicated infrastructure for GPU tenants and regulated workloads
  • Architecture — control plane internals, syncer behavior, and networking
  • Deploy with Helm — production deployment options including Helm, Terraform, and ArgoCD
  • vcluster.yaml reference — full configuration reference