Proxmox
Cloudfleet supports adding Proxmox VE virtual machines as nodes in your CFKE cluster. The recommended approach combines the Cloudfleet Terraform provider with the Telmate Proxmox provider: Terraform provisions the Proxmox VMs, and a cloud-init configuration generated by Cloudfleet registers them with the cluster automatically on first boot.
Prerequisites
- Terraform CLI installed on your local machine.
- Cloudfleet CLI configured following these setup instructions, or API token authentication configured per these guidelines.
- A running Proxmox VE server with:
  - API access enabled and an API token created. See the Proxmox API documentation for details.
  - SSH access to the Proxmox host for uploading cloud-init configuration files.
  - The snippets directory enabled on your storage (typically /var/lib/vz/snippets/).
- An existing CFKE cluster. This example references a cluster with ID CLUSTER_ID, though you can create a new cluster using Terraform following the Terraform setup documentation.
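Before continuing, you can confirm that the Cloudfleet CLI is authenticated and look up the ID of the cluster you want to join (a quick sanity check; the exact output format may vary):
cloudfleet clusters list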
Creating a cloud-init template
Before deploying nodes with Terraform, you need to create a VM template with cloud-init support. Cloud-init handles early initialization of virtual machines, including hostname configuration, SSH key injection, and package installation.
SSH into your Proxmox node to run the following commands:
Ubuntu 24.04 is currently the only supported operating system for CFKE nodes on Proxmox. Download the cloud image:
wget https://cloud-images.ubuntu.com/noble/current/noble-server-cloudimg-amd64.img
Create a new VM that will become your template:
qm create 9000 --name ubuntu-cloudinit
Import the cloud image as the boot disk:
qm set 9000 --scsi0 local-lvm:0,import-from=/root/noble-server-cloudimg-amd64.img
Convert the VM to a template:
qm template 9000
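Optionally, verify the result before moving on. qm config prints the VM configuration, so you can confirm that the imported disk is attached and the VM is marked as a template:
qm config 9000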
The template is now ready for Terraform to clone when creating new nodes.
Adding Proxmox VMs to your CFKE cluster
Deploy and connect Proxmox VMs to your CFKE cluster using the following Terraform configuration. This configuration uploads the cloud-init file to Proxmox via SSH and creates a VM that automatically joins your cluster on boot.
The configuration uses the Telmate Proxmox provider. The version shown below was the latest at the time of writing. Check the registry for the most recent version and refer to the provider documentation for additional configuration options.
Terraform providers
terraform {
  required_providers {
    proxmox = {
      source  = "telmate/proxmox"
      version = "3.0.2-rc06"
    }
    cloudfleet = {
      source = "terraform.cloudfleet.ai/cloudfleet/cloudfleet"
    }
  }
}
# Cloudfleet provider authenticates using one of these methods (in priority order):
# 1. Environment variables: CLOUDFLEET_ORGANIZATION_ID, CLOUDFLEET_ACCESS_TOKEN_ID, CLOUDFLEET_ACCESS_TOKEN_SECRET
# 2. Direct configuration: organization_id, access_token_id, access_token_secret
# 3. CLI profile: Uses 'default' profile from ~/.cloudfleet/config (set up via 'cloudfleet auth login')
provider "cloudfleet" {}
provider "proxmox" {
pm_api_url = var.proxmox_api_url
pm_api_token_id = var.proxmox_api_token_id
pm_api_token_secret = var.proxmox_api_token_secret
}
The Cloudfleet provider supports multiple authentication methods. The simplest approach is to authenticate using the Cloudfleet CLI (cloudfleet auth login), which stores credentials in ~/.cloudfleet/config. For CI/CD pipelines, use environment variables or API tokens. See the Terraform provider documentation for more details.
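For example, a CI/CD pipeline can supply the credentials through the environment variables listed above before invoking Terraform (placeholder values shown):
export CLOUDFLEET_ORGANIZATION_ID="your-organization-id"
export CLOUDFLEET_ACCESS_TOKEN_ID="your-access-token-id"
export CLOUDFLEET_ACCESS_TOKEN_SECRET="your-access-token-secret"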
Variables
The configuration requires variables for connecting to both Cloudfleet and Proxmox. Create a terraform.tfvars file or pass these values via command line when running terraform apply.
# Cloudfleet configuration
variable "cfke_cluster_id" {
  type        = string
  description = "The unique identifier of your CFKE cluster. Find this in the Cloudfleet console or via 'cloudfleet clusters list'."
}

variable "node_region" {
  type        = string
  default     = "on-premises"
  description = "Geographic region label for the node. Maps to the topology.kubernetes.io/region Kubernetes label. Use a meaningful identifier like your datacenter location (e.g., 'frankfurt', 'us-east')."
}

variable "node_zone" {
  type        = string
  default     = "proxmox"
  description = "Availability zone label for the node. Maps to the topology.kubernetes.io/zone Kubernetes label. Use a meaningful identifier like your rack or failure domain (e.g., 'rack-1', 'building-a')."
}

# Proxmox connection
variable "proxmox_api_url" {
  type        = string
  description = "Full URL to the Proxmox API endpoint, including port and path (e.g., 'https://proxmox.example.com:8006/api2/json')."
}

variable "proxmox_api_token_id" {
  type        = string
  sensitive   = true
  description = "Proxmox API token ID in the format 'user@realm!tokenname' (e.g., 'root@pam!terraform'). Create this in Proxmox under Datacenter > Permissions > API Tokens."
}

variable "proxmox_api_token_secret" {
  type        = string
  sensitive   = true
  description = "The secret value of your Proxmox API token. This is shown only once when creating the token."
}

variable "proxmox_ssh_host" {
  type        = string
  description = "Hostname or IP address of your Proxmox server for SSH connections. Used to upload cloud-init configuration files to the snippets directory."
}

variable "proxmox_ssh_private_key" {
  type        = string
  default     = "~/.ssh/id_ed25519"
  description = "Path to the SSH private key for connecting to the Proxmox host."
}

variable "proxmox_target_node" {
  type        = string
  description = "Name of the Proxmox node where the VM will be created. This is the node name shown in the Proxmox UI, not the hostname."
}

variable "proxmox_storage" {
  type        = string
  default     = "local-lvm"
  description = "Proxmox storage pool for VM disks and cloud-init drive. Must have sufficient space for the VM disk size."
}

# VM configuration
variable "vm_clone_template" {
  type        = string
  default     = "ubuntu-cloudinit"
  description = "Name of the VM template to clone. Must match the template created in the 'Creating a cloud-init template' section."
}

variable "node_count" {
  type        = number
  default     = 1
  description = "Number of CFKE nodes to create."
}
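For reference, a minimal terraform.tfvars could look like the following. All values are placeholders to adapt to your environment; variables with usable defaults (such as node_region, node_zone, and proxmox_storage) are omitted:
cfke_cluster_id          = "CLUSTER_ID"
proxmox_api_url          = "https://proxmox.example.com:8006/api2/json"
proxmox_api_token_id     = "root@pam!terraform"
proxmox_api_token_secret = "your-token-secret"
proxmox_ssh_host         = "proxmox.example.com"
proxmox_target_node      = "pve"
node_count               = 1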
Cloudfleet node join configuration
With the providers configured, the next step is to generate the cloud-init configuration that tells the VM how to join your CFKE cluster. The cloudfleet_cfke_node_join_information resource fetches the necessary certificates, tokens, and bootstrap scripts from Cloudfleet and renders them as a cloud-init YAML file.
The region and zone values become Kubernetes node labels (topology.kubernetes.io/region and topology.kubernetes.io/zone), which you can use for workload placement and topology-aware scheduling.
data "cloudfleet_cfke_cluster" "cluster" {
id = var.cfke_cluster_id
}
resource "cloudfleet_cfke_node_join_information" "proxmox" {
cluster_id = data.cloudfleet_cfke_cluster.cluster.id
region = var.node_region
zone = var.node_zone
gzip = false
base64_encode = false
node_labels = {
"cfke.io/provider" = "proxmox"
}
}
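Once nodes have joined, these labels let you target them for scheduling or inspection. For example, to list all nodes in the default zone defined above:
kubectl get nodes -l topology.kubernetes.io/zone=proxmox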
Cloud-init file upload
Unlike cloud providers that accept cloud-init data directly via API, Proxmox requires cloud-init configuration files to be stored on the host filesystem in the snippets directory. This resource uses SSH to upload the generated configuration to your Proxmox server.
The triggers block ensures the file is re-uploaded whenever the cloud-init content changes, such as when certificates are rotated or cluster configuration is updated.
resource "null_resource" "cloud_init_user_data" {
triggers = {
content = cloudfleet_cfke_node_join_information.proxmox.rendered
}
connection {
type = "ssh"
host = var.proxmox_ssh_host
user = "root"
private_key = file(var.proxmox_ssh_private_key)
}
provisioner "file" {
content = cloudfleet_cfke_node_join_information.proxmox.rendered
destination = "/var/lib/vz/snippets/${var.cfke_cluster_id}.yaml"
}
}
Proxmox VMs
Finally, create the VMs by cloning the template you prepared earlier. Each node gets a unique random suffix to ensure consistent naming even when scaling up or down. The random_id resource generates stable identifiers that persist across Terraform runs.
The cicustom parameter references the cloud-init file uploaded in the previous step, which Proxmox will inject into the VM during boot. When the VM starts, cloud-init runs the bootstrap script, installs the necessary Kubernetes components, and registers the node with your CFKE cluster. This process typically takes 2-3 minutes.
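If a node takes longer than expected, and you have access to the VM (for example via SSH or the Proxmox console), you can inspect the cloud-init run directly. cloud-init's standard status command blocks until initialization finishes and reports any errors:
cloud-init status --wait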
Adjust the CPU, memory, and disk size to match your workload requirements.
resource "random_id" "node" {
for_each = toset([for i in range(1, var.node_count + 1) : tostring(i)])
byte_length = 4
keepers = {
cluster_id = cloudfleet_cfke_node_join_information.proxmox.cluster_id
}
}
resource "proxmox_vm_qemu" "cfke_node" {
for_each = random_id.node
depends_on = [null_resource.cloud_init_user_data]
name = "cfke-node-${each.value.hex}"
target_node = var.proxmox_target_node
agent = 1
# Compute resources
cpu {
cores = 4
sockets = 1
}
memory = 8096
# Clone from template
clone = var.vm_clone_template
boot = "order=scsi0"
scsihw = "virtio-scsi-single"
# Cloud-init configuration
cicustom = "vendor=local:snippets/${var.cfke_cluster_id}.yaml"
ipconfig0 = "ip=dhcp,ip6=auto"
serial {
id = 0
}
# Storage
disks {
scsi {
scsi0 {
disk {
storage = var.proxmox_storage
size = "20G"
}
}
}
ide {
ide1 {
cloudinit {
storage = var.proxmox_storage
}
}
}
}
# Network
network {
id = 0
bridge = "vmbr0"
model = "virtio"
}
}
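Optionally, you can add an output to surface the generated node names after each apply (a minimal sketch; further computed attributes depend on the provider version):
output "node_names" {
  value = [for vm in proxmox_vm_qemu.cfke_node : vm.name]
}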
Execute these commands to provision your Proxmox infrastructure and integrate the VMs with your CFKE cluster:
terraform init
terraform apply
Once provisioning is complete, validate that your new nodes have successfully joined the cluster:
kubectl get nodes
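To also verify the region and zone labels assigned via Terraform, print them as columns:
kubectl get nodes -L topology.kubernetes.io/region -L topology.kubernetes.io/zone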
Proxmox-specific considerations
Cloud-init snippet storage
Proxmox requires cloud-init files to be stored in the snippets directory. Ensure the snippets content type is enabled on your storage:
- In the Proxmox UI, go to Datacenter > Storage
- Select your storage pool
- Under Content, ensure Snippets is enabled
Alternatively, enable it via command line:
pvesm set local --content images,rootdir,vztmpl,backup,iso,snippets
SSH key configuration
The Terraform configuration uses SSH to upload cloud-init files to Proxmox. Ensure your SSH key is authorized on the Proxmox host. The default path is ~/.ssh/id_ed25519, but you can override this by setting the proxmox_ssh_private_key variable.
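If no suitable key exists yet, generate one and authorize it on the Proxmox host with the standard OpenSSH tools (replace the hostname with your own):
ssh-keygen -t ed25519
ssh-copy-id root@proxmox.example.com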
Scaling nodes
To add more nodes, update the node_count variable in your terraform.tfvars:
node_count = 3
Then apply the changes:
terraform apply
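Alternatively, pass the value directly on the command line:
terraform apply -var="node_count=3"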
Each new node receives a unique random suffix (e.g., cfke-node-a1b2c3d4). The random_id resource ensures these suffixes remain stable across Terraform runs, so existing nodes are not affected when you scale up or down.
Firewall configuration
Configure your Proxmox firewall and network security to allow:
- Outbound HTTPS (443): Required for nodes to communicate with the CFKE control plane
- UDP 41641: Required for WireGuard VPN tunnel between nodes (if nodes need to communicate across networks)
Nodes do not require inbound access from the internet. The CFKE control plane uses secure peer-to-peer tunnels for node communication.
Next steps
- Learn about self-managed nodes and their requirements
- Configure on-premises load balancing with BGP for exposing services
- Explore the Terraform provider documentation for advanced configurations