GitHub Actions

This guide explains how to access a Cloudfleet Kubernetes Engine (CFKE) cluster from GitHub Actions using OpenID Connect (OIDC) workload identity federation. The workflow mints a short-lived OIDC token at runtime, the cluster trusts that token directly, and kubectl authenticates as the workflow itself. No API keys, kubeconfigs, or service-account secrets need to be stored in GitHub.

How it works

The CFKE control plane trusts GitHub Actions' OIDC identity provider out of the box. When a workflow requests an OIDC token from GitHub with the cluster's API URL as the audience, the cluster's kube-apiserver accepts that token directly as a Kubernetes user.

  1. The GitHub Actions runner asks GitHub for an OIDC token, scoped to the audience https://api.cloudfleet.ai/v1/clusters/<cluster-id>.
  2. GitHub issues a short-lived JWT signed by https://token.actions.githubusercontent.com.
  3. kubectl sends the token to the cluster’s API server as a bearer token.
  4. The API server verifies the signature against GitHub’s JWKS, validates the audience, and authenticates the request as the user github: + the token’s sub claim (for example, github:repo:my-org/my-app:ref:refs/heads/main).
  5. RBAC (and any ValidatingAdmissionPolicy you configure) decides what that user is allowed to do.
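The two derived values in this flow can be sketched in shell. The cluster ID and repository below are the illustrative examples used throughout this guide, not real identifiers:

```shell
# How the audience (step 1) and the Kubernetes username (step 4) are built.
CLUSTER_ID="95cc1ef4-2122-4b51-97d9-b35b531c3c45"
AUDIENCE="https://api.cloudfleet.ai/v1/clusters/${CLUSTER_ID}"

SUB="repo:my-org/my-app:ref:refs/heads/main"  # sub claim of the GitHub JWT
USERNAME="github:${SUB}"                      # username the API server derives

echo "${AUDIENCE}"
echo "${USERNAME}"
```

The same username string is what appears in RBAC subjects and in authorization error messages later in this guide.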

The token lifetime is five minutes. Long-running jobs that span that window will see authentication failures on later kubectl calls and must re-run the configuration step. There are no secrets or kubeconfigs to rotate.

Prerequisites

  • A running CFKE cluster. If you do not have one yet, follow the getting started guide.
  • The cluster ID (a UUID) and region (for example, europe-central-1a). You can read both from the Cloudfleet console or with cloudfleet clusters list.
  • A GitHub repository where you can edit workflow files and configure repository variables.
  • Administrator access to the cluster (or someone who has it) so RBAC can be bound to the GitHub identity.

Quick start

The Cloudfleet configure-kubectl action handles the OIDC token request, fetches the cluster CA, and writes a kubeconfig that subsequent steps can use.

name: Deploy

on:
  push:
    branches: [main]

permissions:
  id-token: write   # required: lets the workflow mint a GitHub OIDC token
  contents: read

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: azure/setup-kubectl@v5
      - uses: cloudfleet-actions/configure-kubectl@v1
        with:
          cluster-id: ${{ vars.CFKE_CLUSTER_ID }}
          region: ${{ vars.CFKE_REGION }}
      - run: kubectl get nodes

Two repository variables (not secrets — neither value is sensitive):

Variable          Example
CFKE_CLUSTER_ID   95cc1ef4-2122-4b51-97d9-b35b531c3c45
CFKE_REGION       europe-central-1a

To set up variables, see the GitHub documentation on storing information in variables.

The first run will succeed at authentication but fail at authorization, because no RBAC has been bound to the workflow’s identity yet. The error message will print the exact User the cluster authenticated. The next section explains how to grant permissions to that user.

Granting permissions with RBAC

Authentication and authorization are independent. A workflow that has been authenticated still has zero permissions until the cluster admin creates a RoleBinding or ClusterRoleBinding that names the workflow’s identity.

Identifying the workflow’s user

CFKE constructs the Kubernetes username by prefixing the GitHub OIDC token’s sub claim with github:. The shape of the sub depends on what triggered the workflow:

Trigger                              RBAC subject
push to a branch                     github:repo:OWNER/NAME:ref:refs/heads/BRANCH
push of a tag                        github:repo:OWNER/NAME:ref:refs/tags/TAG
pull_request                         github:repo:OWNER/NAME:pull_request
job using a deployment environment   github:repo:OWNER/NAME:environment:ENV
reusable workflow                    github:repo:OWNER/NAME:job_workflow_ref:OTHER/REPO/.github/workflows/X.yml@REF

GitHub’s OIDC documentation lists the full set of sub formats and how they can be customised per repository.

If you are unsure what subject your workflow produces, run kubectl auth whoami from the workflow once and read the value from the logs.

Cluster-wide read access

Use a ClusterRoleBinding when the workflow needs to read resources across all namespaces:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: gha-myapp-view
subjects:
  - kind: User
    name: "github:repo:my-org/my-app:ref:refs/heads/main"
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: view              # built-in read-only role
  apiGroup: rbac.authorization.k8s.io

Namespace-scoped deploy access

Most deployments only need write access to a single namespace. A namespaced RoleBinding keeps the blast radius small:

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: gha-myapp-deployer
  namespace: production
subjects:
  - kind: User
    name: "github:repo:my-org/my-app:ref:refs/heads/main"
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: edit              # built-in read/write role
  apiGroup: rbac.authorization.k8s.io
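If the built-in edit role grants more than the pipeline needs, a custom Role can narrow it further. A sketch, assuming the workflow only applies Deployments, Services, and ConfigMaps:

```yaml
# Hypothetical narrower alternative to the built-in "edit" ClusterRole:
# only the resources and verbs a typical "kubectl apply" deploy touches.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: gha-myapp-deployer
  namespace: production
rules:
  - apiGroups: ["apps"]
    resources: ["deployments"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
  - apiGroups: [""]
    resources: ["services", "configmaps"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
```

Reference it from the RoleBinding with kind: Role in the roleRef instead of kind: ClusterRole.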

Granting permissions to all GitHub workflows

Every authenticated GitHub workflow is also a member of two groups:

  • cfke.io:third-party-idp — every CI/CD identity (GitHub or GitLab)
  • cfke.io:third-party-idp:github — every GitHub Actions identity

Bind a role to one of these groups to grant a baseline permission to all CI workflows. For example, to let every GitHub workflow read events in the kube-system namespace, replace kind: User with kind: Group (and use the group name) in a binding like the ones above. Use this with care: any repository in any organization that points its workflows at your cluster ID will pick up these permissions.
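As a sketch, a cluster-wide read-only grant to every GitHub Actions identity could look like this (the binding name is illustrative):

```yaml
# Grants the built-in read-only "view" ClusterRole to ALL GitHub Actions
# identities that authenticate against this cluster. See the caution above.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: gha-all-view
subjects:
  - kind: Group
    name: "cfke.io:third-party-idp:github"
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: view
  apiGroup: rbac.authorization.k8s.io
```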

Example: deploying to production from main

The workflow below pushes a manifest update on every commit to main. The cluster trusts pushes to main as a deployer in the production namespace via the RoleBinding above.

name: Deploy production

on:
  push:
    branches: [main]

permissions:
  id-token: write
  contents: read

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - uses: azure/setup-kubectl@v5

      - name: Configure kubectl for Cloudfleet
        uses: cloudfleet-actions/configure-kubectl@v1
        with:
          cluster-id: ${{ vars.CFKE_CLUSTER_ID }}
          region: ${{ vars.CFKE_REGION }}

      - name: Confirm identity
        run: kubectl auth whoami

      - name: Apply manifests
        run: kubectl apply -n production -f k8s/

      - name: Wait for rollout
        run: kubectl rollout status -n production deployment/myapp --timeout=5m

The permissions: id-token: write block at the workflow (or job) level is what allows GitHub to mint an OIDC token for this run. Without it, the action will fail at the token request step with a permission error.

Token claims available for admission policies

In addition to the username, CFKE attaches the most useful claims from the GitHub OIDC token to the authenticated user as Kubernetes extras. These are visible to ValidatingAdmissionPolicy and can express checks RBAC cannot:

Extra key                   Source claim        Example
cfke.io/github-repository   claims.repository   my-org/my-app
cfke.io/github-ref          claims.ref          refs/heads/main
cfke.io/github-actor        claims.actor        octocat
cfke.io/github-run-id       claims.run_id       8472103594
cfke.io/github-workflow     claims.workflow     Deploy production
cfke.io/github-event        claims.event_name   push

The example below denies any modification to resources in the production namespace unless the request comes from a workflow run on refs/heads/main:

apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingAdmissionPolicy
metadata:
  name: prod-only-from-main
spec:
  matchConstraints:
    resourceRules:
      - apiGroups: ["*"]
        apiVersions: ["*"]
        operations: ["CREATE", "UPDATE", "DELETE"]
        resources: ["*"]
  validations:
    - expression: |
        !request.userInfo.username.startsWith('github:') ||
        ('cfke.io/github-ref' in request.userInfo.extra &&
         request.userInfo.extra['cfke.io/github-ref'][0] == 'refs/heads/main')        
      message: "Production resources may only be modified from refs/heads/main"

Bind this policy to the production namespace with a ValidatingAdmissionPolicyBinding to enforce it.
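A sketch of that binding, assuming the production namespace carries the standard kubernetes.io/metadata.name label (set automatically on recent Kubernetes versions):

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingAdmissionPolicyBinding
metadata:
  name: prod-only-from-main
spec:
  policyName: prod-only-from-main
  validationActions: ["Deny"]
  matchResources:
    # Scope enforcement to the production namespace only.
    namespaceSelector:
      matchLabels:
        kubernetes.io/metadata.name: production
```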

Verifying the integration

Add these steps to a workflow during initial setup to confirm authentication and authorization separately:

- name: Identity (proves authentication)
  run: kubectl auth whoami

- name: Permissions (proves authorization)
  run: kubectl auth can-i --list

kubectl auth whoami prints the username and any extras the cluster sees. If this fails with a TLS or token error, authentication itself is broken (wrong cluster ID, wrong region, or missing id-token: write permission).

kubectl auth can-i --list prints the verbs and resources the current user is allowed to act on. An empty or near-empty list means RBAC has not been bound yet to this identity.

Security considerations

Bind RBAC to the most specific subject your workflow produces. The general rule is “what triggered this run is also what the cluster trusts.”

  • Never bind cluster-admin to a pull_request subject. Every pull request, including from forks if you allow them, will run with that subject.
  • Branch-scoped subjects (ref:refs/heads/main) are the safest production target. Only commits already merged to main can act with that identity.
  • Tag-scoped subjects (ref:refs/tags/*) suit release pipelines, but anyone who can push tags to the repo can produce that identity. Restrict tag pushes if the role is privileged.
  • Environment-scoped subjects (environment:production) compose well with GitHub deployment environment protection rules (required reviewers, wait timers, branch filters) for an extra approval gate before the cluster trusts the run.
  • Reusable workflow subjects let many repositories share one centrally-reviewed deployment workflow without each repository getting its own RBAC binding.
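For instance, pinning a job to a deployment environment changes the subject the cluster sees. A sketch (job and environment names are illustrative):

```yaml
jobs:
  deploy:
    runs-on: ubuntu-latest
    # Produces the subject github:repo:OWNER/NAME:environment:production,
    # gated by whatever protection rules the environment configures.
    environment: production
    steps:
      - uses: cloudfleet-actions/configure-kubectl@v1
        with:
          cluster-id: ${{ vars.CFKE_CLUSTER_ID }}
          region: ${{ vars.CFKE_REGION }}
      - run: kubectl apply -n production -f k8s/
```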

If you accept pull requests from forks and grant any cluster permissions to the pull_request subject, treat the cluster as compromised by anyone who can open a PR.

CFKE rejects OIDC tokens (from your organization’s identity provider) whose preferred_username would collide with the github: or gitlab: prefixes, so a regular Cloudfleet user cannot impersonate a CI identity.

Reference

configure-kubectl inputs

Name         Required   Description
cluster-id   yes        CFKE cluster ID (UUID).
region       yes        Cluster region (for example, europe-central-1a).

configure-kubectl outputs

Name         Description
kubeconfig   Path to the kubeconfig file written by the action. Also exported as the KUBECONFIG environment variable.
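Most steps can rely on the exported KUBECONFIG variable; the output is useful when a tool wants an explicit path. A sketch (the helm step is illustrative, not part of the action):

```yaml
- id: cfke
  uses: cloudfleet-actions/configure-kubectl@v1
  with:
    cluster-id: ${{ vars.CFKE_CLUSTER_ID }}
    region: ${{ vars.CFKE_REGION }}
# Pass the kubeconfig path explicitly to a tool that wants one:
- run: helm upgrade myapp ./chart --kubeconfig "${{ steps.cfke.outputs.kubeconfig }}"
```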

The action does not install kubectl. Add an installer step (such as azure/setup-kubectl@v5) before it.

Source code

The action is open source. Browse or fork it at github.com/cloudfleet-actions/configure-kubectl.

Troubleshooting

Error: Unable to get ACTIONS_ID_TOKEN_REQUEST_URL env variable

The workflow or job is missing permissions: id-token: write. GitHub only mints OIDC tokens when the workflow explicitly requests them.

error: You must be logged in to the server (Unauthorized)

The token reached the API server but was rejected. Common causes:

  • Wrong cluster-id or region. The audience the action requests must exactly match the audience the cluster expects: its API URL, https://api.cloudfleet.ai/v1/clusters/<cluster-id>.
  • Cluster is in a different region than configured. Double-check the region string against cloudfleet clusters list.

Error from server (Forbidden): ... User "github:repo:..." cannot ...

Authentication succeeded; authorization failed. The error message contains the exact User the cluster saw. Bind a RoleBinding or ClusterRoleBinding to that user (see Granting permissions with RBAC).

kubectl succeeds early in the job but fails later

GitHub OIDC tokens expire after five minutes. Re-run the cloudfleet-actions/configure-kubectl step before any kubectl call that may run past the token’s lifetime, or split long-running operations into separate jobs.
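For example, a long rollout wait can be preceded by a fresh configure step. A sketch:

```yaml
# Refresh the short-lived token immediately before the long-running wait.
- uses: cloudfleet-actions/configure-kubectl@v1
  with:
    cluster-id: ${{ vars.CFKE_CLUSTER_ID }}
    region: ${{ vars.CFKE_REGION }}
- run: kubectl rollout status -n production deployment/myapp --timeout=4m
```

Keeping the timeout under the five-minute token lifetime avoids a second expiry mid-wait.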

Next steps

  • Configure a GitLab CI integration for projects hosted on GitLab.
  • Review API tokens for cases where OIDC federation is not an option (for example, calling the Cloudfleet API to manage clusters or fleets from CI).