Fleet constraints

Preview: Fleet constraints are currently in private preview. To enable them on your account, please reach out to Cloudfleet support with your requirements. We are actively working on making this self-service through the API, the console, and the Terraform provider in the very near future.

Fleet constraints let you narrow the set of infrastructure a Fleet is allowed to provision. Instead of accepting every cloud, region, and instance type that the underlying provider offers, you can pin a Fleet to a specific subset, and node auto-provisioning will only consider that subset when it picks nodes for your workloads.

Why use Fleet constraints

Without Fleet constraints, the node auto-provisioner chooses nodes from everything a Fleet exposes. Customers who need to keep workloads in a particular cloud, region, or instance family typically encode those requirements in every pod specification using nodeSelector, nodeAffinity, or tolerations. This works, but it has two drawbacks: the rules must be repeated across every workload, and a missing or misconfigured selector silently lands a pod on infrastructure it should not run on.
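For example, pinning a single Deployment to one region and architecture without Fleet constraints looks roughly like this. This is a sketch: `topology.kubernetes.io/region` and `kubernetes.io/arch` are the standard Kubernetes well-known node labels, and the image name is a placeholder — your Fleet's nodes may carry additional provider-specific keys.

```yaml
# This affinity block must be repeated in every workload that should
# stay in eu-central-1 on amd64 — the repetition Fleet constraints remove.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
spec:
  replicas: 2
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: topology.kubernetes.io/region
                    operator: In
                    values: ["eu-central-1"]
                  - key: kubernetes.io/arch
                    operator: In
                    values: ["amd64"]
      containers:
        - name: api
          image: registry.example.com/api:latest
```

If this block is omitted from even one manifest, that workload can silently land outside the intended region.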

Fleet constraints move that policy out of the pod spec and onto the Fleet itself. Once a Fleet is constrained, every workload scheduled on it inherits the constraint by default, so:

  • Pod specifications stay simple. You no longer need to repeat the same node selectors and affinities on every Deployment, StatefulSet, or Job.
  • Misconfiguration is harder. A pod with no scheduling hints still lands on infrastructure that matches your policy.
  • The cluster setup is easier to reason about. The Fleet is a single place to declare “this group of workloads must run in eu-central-1 on AWS only” rather than spreading the same intent across many manifests.

What you can constrain

A Fleet can be constrained on attributes including:

  • Cloud provider: restrict to a specific cloud, for example AWS only or Hetzner Cloud only.
  • Region or zone: restrict to one or more regions or availability zones, for example eu-central-1 only, or a specific Hetzner Cloud location.
  • Instance family: restrict to one or more families, for example only AWS m7i and c7i instances, or only Hetzner Cloud CPX servers.
  • CPU architecture: restrict to amd64 or arm64.
  • Purchase type: restrict to on-demand or spot instances.
  • Accelerator type: restrict to nodes with a specific GPU or AI accelerator.

These constraints compose: a single Fleet can be limited to, for example, AWS in eu-central-1 using the m7i family on amd64 only.
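As a rough mental model, that composed constraint corresponds to allowing only nodes whose labels all match the following. The label keys shown are the Kubernetes well-known ones; the exact keys Cloudfleet applies to its nodes may differ, so treat this as an illustration rather than a reference.

```yaml
# Hypothetical node-label view of the composed constraint:
# AWS + eu-central-1 + m7i family + amd64.
topology.kubernetes.io/region: eu-central-1    # region
kubernetes.io/arch: amd64                      # CPU architecture
node.kubernetes.io/instance-type: m7i.xlarge   # any member of the m7i family
```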

How constraints interact with pod scheduling

Pod-level scheduling rules continue to work and stack on top of Fleet constraints. A pod's selectors and affinities must fall within the Fleet's constraints; if a pod requests an instance type, region, or cloud that no Fleet in the cluster allows, the pod stays in Pending because the auto-provisioner has no compatible Fleet to provision a node from.
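For instance, on a cluster whose only Fleet is constrained to AWS in eu-central-1, a pod that pins itself to a different region can never be satisfied. A sketch, using the well-known `topology.kubernetes.io/region` label and a placeholder image:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: misplaced
spec:
  nodeSelector:
    # Conflicts with a Fleet constrained to eu-central-1: no allowed
    # node can carry this label, so the pod stays in Pending.
    topology.kubernetes.io/region: us-east-1
  containers:
    - name: app
      image: registry.example.com/app:latest
```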

When a cluster has multiple Fleets, the auto-provisioner only considers Fleets whose constraints are compatible with the pod’s scheduling rules. This makes it natural to run a cluster where, for example, one Fleet is constrained to AWS GPU instances for ML workloads and another Fleet is constrained to Hetzner Cloud for general-purpose services, and pods land on the right Fleet automatically based on their requests.
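In that two-Fleet setup, a pod that requests a GPU is only compatible with the GPU-constrained Fleet, so the auto-provisioner selects it without any explicit selector on the pod. A sketch, assuming NVIDIA GPUs exposed under the standard `nvidia.com/gpu` extended resource name:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: trainer
spec:
  containers:
    - name: train
      image: registry.example.com/train:latest
      resources:
        limits:
          # Only the Fleet constrained to AWS GPU instances can provision
          # a node offering this resource, so the pod lands there.
          nvidia.com/gpu: 1
```

Pods without a GPU request remain compatible with the Hetzner Cloud Fleet and are provisioned there instead.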

Enabling Fleet constraints

Fleet constraints are configured by Cloudfleet during the private preview. To enable them, contact Cloudfleet support and share:

  • The Fleet you want to constrain.
  • The attributes you want to lock down (cloud, region, instance family, architecture, purchase type, accelerator).
  • Any workloads currently running on the Fleet that may be affected, so we can validate that the constraints will not break existing pods.

Self-service configuration through the API, the console, and the Terraform provider is on the roadmap and will follow shortly.