<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Next generation multi-cloud Kubernetes on Cloudfleet</title><link>https://cloudfleet.ai/</link><description>Recent content in Next generation multi-cloud Kubernetes on Cloudfleet</description><generator>Hugo</generator><language>en-us</language><lastBuildDate>Sat, 01 Nov 2025 00:00:00 -0700</lastBuildDate><atom:link href="https://cloudfleet.ai/index.xml" rel="self" type="application/rss+xml"/><item><title>Amazon Elastic Kubernetes Service (EKS) vs Google Kubernetes Engine (GKE)</title><link>https://cloudfleet.ai/compare/amazon-eks-google-kubernetes-engine/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://cloudfleet.ai/compare/amazon-eks-google-kubernetes-engine/</guid><description/></item><item><title>Book your free Cloudfleet demo</title><link>https://cloudfleet.ai/contact/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://cloudfleet.ai/contact/</guid><description>&lt;p style="margin-bottom: 40px;">Schedule a call with the Cloudfleet team for a guided tour of the platform, where we’ll walk you through key features, answer all your questions, and discuss funding options available through the Cloudfleet Pilot Program. Whether you’re exploring cloud, edge, or on-prem Kubernetes, we’ll help you understand how Cloudfleet can best support your infrastructure needs.&lt;/p></description></item><item><title>Cloudfleet vs Skypilot</title><link>https://cloudfleet.ai/compare/skypilot/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://cloudfleet.ai/compare/skypilot/</guid><description/></item><item><title>Support</title><link>https://cloudfleet.ai/support/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://cloudfleet.ai/support/</guid><description>&lt;p>Have a question about the product or a technical issue?
Talk with Cloudfleet engineers.&lt;/p>

&lt;script charset="utf-8" type="text/javascript" src="//js-eu1.hsforms.net/forms/embed/v2.js">&lt;/script>
&lt;script>
  hbspt.forms.create({
    region: "eu1",
    portalId: "143377415",
    formId: "daea449f-67c2-4118-ab93-d01a6626872f"
  });
&lt;/script></description></item><item><title>Cloudfleet vs. Rancher</title><link>https://cloudfleet.ai/compare/rancher/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://cloudfleet.ai/compare/rancher/</guid><description/></item><item><title>User management</title><link>https://cloudfleet.ai/docs/organization/users/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://cloudfleet.ai/docs/organization/users/</guid><description>&lt;p>You can manage users in the Cloudfleet console on the &lt;a href="https://console.cloudfleet.ai/users">user management page&lt;/a>.&lt;/p>
&lt;p>You can invite your team members to your organization by entering their email addresses. You can also assign roles to users to control their access to your organization&amp;rsquo;s resources. For organizations requiring two-factor authentication (2FA) or centralized identity management, see &lt;a href="https://cloudfleet.ai/docs/organization/sso/">Single Sign-On (SSO)&lt;/a>.&lt;/p></description></item><item><title>API tokens</title><link>https://cloudfleet.ai/docs/organization/api-tokens/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://cloudfleet.ai/docs/organization/api-tokens/</guid><description>&lt;p>API tokens are used to authenticate requests to the Cloudfleet API. API tokens are designed to allow programmatic access to the Cloudfleet API and CFKE clusters. You may find API tokens useful for scripts or CI pipelines. Tokens consist of an access token ID and a secret.&lt;/p></description></item><item><title>Cloudfleet vs. OpenShift</title><link>https://cloudfleet.ai/compare/openshift/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://cloudfleet.ai/compare/openshift/</guid><description/></item><item><title>Service quotas</title><link>https://cloudfleet.ai/docs/organization/quotas/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://cloudfleet.ai/docs/organization/quotas/</guid><description>&lt;p>Quotas in the Cloudfleet platform are implemented as limitations on the types of resources that users can create and utilize. 
These quotas are applied at the organization level and play an important role in managing resource allocation and ensuring efficient usage within the Cloudfleet environment.&lt;/p></description></item><item><title>Billing and pricing</title><link>https://cloudfleet.ai/docs/organization/billing/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://cloudfleet.ai/docs/organization/billing/</guid><description>&lt;p>You can manage your payment settings in the Cloudfleet console on the &lt;a href="https://console.cloudfleet.ai/billing/payment">payment settings page&lt;/a>.&lt;/p></description></item><item><title>Single Sign-On (SSO)</title><link>https://cloudfleet.ai/docs/organization/sso/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://cloudfleet.ai/docs/organization/sso/</guid><description>&lt;p>Cloudfleet supports Single Sign-On (SSO) for organizations. This allows you to use your organization&amp;rsquo;s identity provider to authenticate users in Cloudfleet. Cloudfleet supports SAML 2.0 and OIDC as SSO protocols.&lt;/p></description></item><item><title>Expose HTTP applications with NGINX Ingress</title><link>https://cloudfleet.ai/tutorials/cloud/install-nginx-ingress-controller-and-cert-manager/</link><pubDate>Sat, 01 Nov 2025 00:00:00 -0700</pubDate><guid>https://cloudfleet.ai/tutorials/cloud/install-nginx-ingress-controller-and-cert-manager/</guid><description>&lt;p>When you deploy HTTP applications in Kubernetes, you need a way to expose them to the internet with proper routing, domain names, and secure connections. 
While Cloudfleet provides robust &lt;a href="https://cloudfleet.ai/docs/networking/load-balancing/">L4 (TCP/UDP) load balancing&lt;/a> out of the box through LoadBalancer services, HTTP applications require L7 routing capabilities like host-based routing, path-based routing, and TLS termination.&lt;/p></description></item><item><title>Fleets</title><link>https://cloudfleet.ai/docs/cloud-infrastructure/fleets/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://cloudfleet.ai/docs/cloud-infrastructure/fleets/</guid><description>&lt;p>Fleets automatically add or remove nodes in your Cloudfleet Kubernetes Engine (CFKE) cluster as your workload demands change. CFKE lets you deploy, run, and monitor applications in your own cloud provider account using a unified Kubernetes cluster. You can take advantage of existing discounts and credits while avoiding vendor lock-in. CFKE automatically selects optimal instance types and orchestrates workloads within your account - or bursts to other providers when needed.&lt;/p></description></item><item><title>Handle single-platform container images</title><link>https://cloudfleet.ai/docs/troubleshooting/handle-single-platform-images/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://cloudfleet.ai/docs/troubleshooting/handle-single-platform-images/</guid><description>&lt;p>When using node auto-provisioning with Fleets, Cloudfleet always picks the cheapest available node to run your container. Many cloud providers offer ARM architecture nodes, which are often more cost-effective than amd64 nodes. However, before scheduling a node, Cloudfleet does not know if your container image only supports a single architecture. 
If your container image is only available for the amd64 architecture but Cloudfleet decides to provision an ARM node, your container will fail to start, resulting in a crash loop.&lt;/p></description></item><item><title>Introduction</title><link>https://cloudfleet.ai/docs/terraform/introduction/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://cloudfleet.ai/docs/terraform/introduction/</guid><description>&lt;p>The Cloudfleet Terraform provider lets you manage your entire Kubernetes infrastructure stack as code, including clusters, &lt;a href="https://cloudfleet.ai/docs/cloud-infrastructure/fleet-configuration/">multi-cloud fleets&lt;/a> for node auto-provisioning, and &lt;a href="https://cloudfleet.ai/docs/hybrid-and-on-premises/self-managed-nodes/">self-managed nodes&lt;/a> that extend your reach to any platform. Whether you&amp;rsquo;re running on AWS, managing on-premises VMware, or scaling across a dozen different cloud providers, everything is managed through consistent Terraform resources.&lt;/p></description></item><item><title>Networking architecture</title><link>https://cloudfleet.ai/docs/networking/architecture/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://cloudfleet.ai/docs/networking/architecture/</guid><description>&lt;p>Kubernetes networking can be complex, but its networking model simplifies things by assigning each pod its own IP address. Containers within a pod share the same IP, enabling seamless communication between pods in a cluster without needing NAT. 
This means pods can directly communicate with each other using their IP addresses, making it easier to manage networking within the cluster.&lt;/p></description></item><item><title>Overview</title><link>https://cloudfleet.ai/docs/container-registry/overview/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://cloudfleet.ai/docs/container-registry/overview/</guid><description>&lt;p>Cloudfleet Container Registry (CFCR) is a fully managed OCI-compliant registry that enables you to store, share, and manage container images. CFCR leverages Cloudfleet&amp;rsquo;s highly available infrastructure, so you do not have to worry about scaling or operational overhead. The registry is included with your Cloudfleet organization and requires no additional provisioning.&lt;/p></description></item><item><title>Self-managed nodes</title><link>https://cloudfleet.ai/docs/hybrid-and-on-premises/self-managed-nodes/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://cloudfleet.ai/docs/hybrid-and-on-premises/self-managed-nodes/</guid><description>&lt;p>For supported cloud providers (AWS, GCP, and Hetzner Cloud), Cloudfleet provides &lt;a href="https://cloudfleet.ai/docs/workload-management/node-provisioner/">node auto-provisioning&lt;/a> that automatically creates and manages compute nodes based on your workload requirements. For on-premises infrastructure and other cloud providers, you manually provision compute nodes and register them with Cloudfleet. 
This guide explains how to add self-managed nodes to your CFKE cluster.&lt;/p></description></item><item><title>What is Cloudfleet?</title><link>https://cloudfleet.ai/docs/introduction/what-is-cloudfleet/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://cloudfleet.ai/docs/introduction/what-is-cloudfleet/</guid><description>&lt;p>Cloudfleet&amp;rsquo;s mission is to transform infrastructure management by delivering seamless, automated, and scalable solutions that unify resources across datacenters, clouds, and edge environments. We aim to provide just-in-time infrastructure, automated upgrades, and advanced permissions management, all through a single, intuitive interface.&lt;/p></description></item><item><title>Install and Configure Istio service mesh</title><link>https://cloudfleet.ai/tutorials/cloud/install-istio/</link><pubDate>Thu, 03 Jul 2025 00:00:00 -0700</pubDate><guid>https://cloudfleet.ai/tutorials/cloud/install-istio/</guid><description>&lt;p>In the world of microservices, managing the complexities of service-to-service communication, security, and observability can be a significant challenge. This is where &lt;strong>Istio&lt;/strong>, an open-source service mesh, comes in. Istio provides a transparent and language-independent way to connect, secure, control, and observe services.&lt;/p></description></item><item><title>Node auto-provisioning</title><link>https://cloudfleet.ai/docs/workload-management/node-provisioner/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://cloudfleet.ai/docs/workload-management/node-provisioner/</guid><description>&lt;p>CFKE manages the infrastructure provisioning on behalf of its users. This means that users are not required to create and right-size new compute nodes, delete unused ones, or manage upgrades. Cloudfleet has a built-in node auto-provisioner that calculates the optimal number and size of nodes based on the current workload. 
This allows users to focus on their applications rather than on the infrastructure. Customers can influence the node provisioner&amp;rsquo;s decisions through Kubernetes Pod labels, Pod resource requests, and affinity rules. The node provisioner determines the required node topology and the optimal node sizes, and provisions them accordingly. When the nodes are no longer needed, the node provisioner deletes them. If a node becomes unhealthy or is destroyed by the cloud provider due to spot instance preemption, the node auto-provisioner replaces it with a new one. For a comparison with traditional Kubernetes node pools, see &lt;a href="https://cloudfleet.ai/docs/introduction/how-nodes-work/">How Cloudfleet provisions nodes&lt;/a>.&lt;/p></description></item><item><title>Resource monitoring and pod autoscaling</title><link>https://cloudfleet.ai/docs/workload-management/pod-autoscaling/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://cloudfleet.ai/docs/workload-management/pod-autoscaling/</guid><description>&lt;p>Effectively managing application resources is critical for ensuring performance, stability, and cost-efficiency. Cloudfleet provides powerful tools for this: the &lt;code>kubectl top&lt;/code> command for real-time resource monitoring, and &lt;strong>Horizontal Pod Autoscaling (HPA)&lt;/strong>, a standard Kubernetes feature, for automatically adjusting your application&amp;rsquo;s scale to meet demand.&lt;/p></description></item><item><title>Concepts</title><link>https://cloudfleet.ai/docs/container-registry/concepts/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://cloudfleet.ai/docs/container-registry/concepts/</guid><description>&lt;p>This topic describes key concepts you need to understand when working with Cloudfleet Container Registry (CFCR).&lt;/p>
&lt;h2 id="images">Images &lt;a href="#images">&lt;svg height="22px" viewBox="0 0 24 24" width="22px" xmlns="http://www.w3.org/2000/svg">&lt;path d="M0 0h24v24H0z" fill="none"/>&lt;path d="M3.9 12c0-1.71 1.39-3.1 3.1-3.1h4V7H7c-2.76 0-5 2.24-5 5s2.24 5 5 5h4v-1.9H7c-1.71 0-3.1-1.39-3.1-3.1zM8 13h8v-2H8v2zm9-6h-4v1.9h4c1.71 0 3.1 1.39 3.1 3.1s-1.39 3.1-3.1 3.1h-4V17h4c2.76 0 5-2.24 5-5s-2.24-5-5-5z"/>&lt;/svg>&lt;/a>&lt;/h2>
&lt;p>An image is a read-only template containing instructions for creating a container. Images include application code, runtime, libraries, and dependencies required to run your application. CFCR stores images that comply with Open Container Initiative specifications, including Docker images using the Docker Image Manifest V2 Schema 2 format.&lt;/p></description></item><item><title>How Cloudfleet provisions nodes</title><link>https://cloudfleet.ai/docs/introduction/how-nodes-work/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://cloudfleet.ai/docs/introduction/how-nodes-work/</guid><description>&lt;p>If you&amp;rsquo;re coming from traditional Kubernetes, you&amp;rsquo;re probably familiar with node pools and cluster autoscaling. Cloudfleet takes a fundamentally different approach that eliminates upfront capacity planning entirely. This guide explains both provisioning models, highlights the key differences, and helps you choose the right approach for your infrastructure.&lt;/p></description></item><item><title>Manage Kubernetes secrets with External Secrets Operator</title><link>https://cloudfleet.ai/tutorials/cloud/manage-kubernetes-secrets-with-external-secrets-operator/</link><pubDate>Tue, 15 Apr 2025 00:00:00 -0700</pubDate><guid>https://cloudfleet.ai/tutorials/cloud/manage-kubernetes-secrets-with-external-secrets-operator/</guid><description>&lt;p>Managing secrets in Kubernetes can quickly become a headache. You might start by copying database passwords into YAML files, which end up in Git. Then you realize that&amp;rsquo;s not secure, so you base64-encode them (which doesn&amp;rsquo;t actually encrypt anything). 
Eventually you&amp;rsquo;re managing dozens of secrets across multiple environments, manually updating them whenever a password changes, and wondering if you just accidentally pushed credentials to your public repository.&lt;/p></description></item><item><title>Use persistent volumes with Cloudfleet on Hetzner</title><link>https://cloudfleet.ai/tutorials/cloud/use-persistent-volumes-with-cloudfleet-on-hetzner/</link><pubDate>Sun, 09 Mar 2025 00:00:00 -0700</pubDate><guid>https://cloudfleet.ai/tutorials/cloud/use-persistent-volumes-with-cloudfleet-on-hetzner/</guid><description>&lt;p>This guide will walk you through the process of creating persistent storage for your applications running on a Cloudfleet Kubernetes Engine (CFKE) cluster with nodes hosted in Hetzner. You&amp;rsquo;ll learn how to install the &lt;a href="https://github.com/hetznercloud/csi-driver/" rel="noopener nofollow">Hetzner Cloud CSI Driver&lt;/a> and set up a Persistent Volume Claim (PVC) that your pods can use for storing data that needs to persist across container restarts.&lt;/p></description></item><item><title>Cluster types</title><link>https://cloudfleet.ai/docs/cluster-management/cluster-types/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://cloudfleet.ai/docs/cluster-management/cluster-types/</guid><description>&lt;p>Cloudfleet Kubernetes Engine (CFKE) offers two types of clusters: Basic and Pro. Each cluster type provides a different set of features and support levels to meet your needs. 
For pricing details, see the &lt;a href="https://cloudfleet.ai/docs/organization/billing/">billing documentation&lt;/a>.&lt;/p></description></item><item><title>Getting started</title><link>https://cloudfleet.ai/docs/container-registry/getting-started/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://cloudfleet.ai/docs/container-registry/getting-started/</guid><description>&lt;p>This guide walks you through pushing your first container image to Cloudfleet Container Registry (CFCR) and deploying it to a CFKE cluster. There is no registry to create or configure. You push an image, and CFCR stores it.&lt;/p></description></item><item><title>Getting started</title><link>https://cloudfleet.ai/docs/introduction/getting-started/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://cloudfleet.ai/docs/introduction/getting-started/</guid><description>&lt;p>This guide is designed to walk new users through the essential first steps of setting up and utilizing the Cloudfleet platform.&lt;/p></description></item><item><title>Kubernetes versions and upgrades</title><link>https://cloudfleet.ai/docs/cluster-management/kubernetes-versions/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://cloudfleet.ai/docs/cluster-management/kubernetes-versions/</guid><description>&lt;p>Cloudfleet takes care of cluster upgrades on your behalf. We frequently upgrade the clusters as we add new Cloudfleet features, without changing the underlying Kubernetes version. We call these versions &lt;em>CFKE patch versions&lt;/em>. 
When your cluster is upgraded to a new &lt;em>CFKE patch version&lt;/em>, the underlying Kubernetes version usually stays the same.&lt;/p></description></item><item><title>Public cloud load balancing</title><link>https://cloudfleet.ai/docs/networking/load-balancing/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://cloudfleet.ai/docs/networking/load-balancing/</guid><description>&lt;p>This page describes Kubernetes &lt;a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="noopener nofollow">Services&lt;/a> and their use in Cloudfleet Kubernetes Engine (CFKE). Kubernetes has different types of Services, which can be used to group a set of Pod endpoints into a single resource. Before reading this page, it is recommended to read the &lt;a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="noopener nofollow">Kubernetes documentation on Services&lt;/a> and the &lt;a href="https://cloudfleet.ai/docs/networking/architecture/">CFKE networking architecture&lt;/a> to understand how nodes communicate across clouds and regions.&lt;/p></description></item><item><title>Resources</title><link>https://cloudfleet.ai/docs/terraform/resources/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://cloudfleet.ai/docs/terraform/resources/</guid><description>&lt;p>Terraform resources in the Cloudfleet provider allow you to define and manage your infrastructure as code. You can create and configure Kubernetes clusters, set up multi-cloud fleets with automatic node provisioning, and generate node join instructions for any environment - including on-premises and unsupported clouds. 
These resources are essential for provisioning and scaling Cloudfleet environments consistently and repeatably.&lt;/p></description></item><item><title>Vultr</title><link>https://cloudfleet.ai/docs/hybrid-and-on-premises/vultr/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://cloudfleet.ai/docs/hybrid-and-on-premises/vultr/</guid><description>&lt;p>Cloudfleet supports adding &lt;a href="https://www.vultr.com/" rel="noopener nofollow">Vultr&lt;/a> instances as nodes in your CFKE cluster. The most streamlined and scalable approach is to use the Cloudfleet Terraform provider&amp;rsquo;s cloud-init integration. This method allows you to create Vultr instances with Terraform while using the cloud-init configuration generated by the Cloudfleet Terraform provider to automatically register the instances with your CFKE cluster.&lt;/p></description></item><item><title>OVH</title><link>https://cloudfleet.ai/docs/hybrid-and-on-premises/ovh/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://cloudfleet.ai/docs/hybrid-and-on-premises/ovh/</guid><description>&lt;p>You can integrate &lt;a href="https://www.ovhcloud.com/" rel="noopener nofollow">OVH&lt;/a> Public Cloud instances as worker nodes in your CFKE cluster. The recommended workflow leverages the Cloudfleet Terraform provider combined with cloud-init automation. 
This approach enables you to provision OVH instances through Terraform and utilize the automatically generated cloud-init scripts from Cloudfleet to seamlessly join these instances to your existing CFKE cluster.&lt;/p></description></item><item><title>Scaleway</title><link>https://cloudfleet.ai/docs/hybrid-and-on-premises/scaleway/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://cloudfleet.ai/docs/hybrid-and-on-premises/scaleway/</guid><description>&lt;p>Cloudfleet enables integration with &lt;a href="https://www.scaleway.com/" rel="noopener nofollow">Scaleway&lt;/a> instances as worker nodes in your CFKE cluster. The optimal approach combines the Cloudfleet Terraform provider with cloud-init automation for seamless provisioning. This workflow allows you to deploy Scaleway instances via Terraform and leverage the automatically generated cloud-init configuration from Cloudfleet to register these instances with your CFKE cluster.&lt;/p></description></item><item><title>Exoscale</title><link>https://cloudfleet.ai/docs/hybrid-and-on-premises/exoscale/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://cloudfleet.ai/docs/hybrid-and-on-premises/exoscale/</guid><description>&lt;p>Cloudfleet supports adding &lt;a href="https://www.exoscale.com/" rel="noopener nofollow">Exoscale&lt;/a> instances as nodes in your CFKE cluster. The most streamlined and scalable approach is to use the Cloudfleet Terraform provider&amp;rsquo;s cloud-init integration. 
This method allows you to create Exoscale instance pools with Terraform while using the cloud-init configuration generated by the Cloudfleet Terraform provider to automatically register the instances with your CFKE cluster.&lt;/p></description></item><item><title>Proxmox</title><link>https://cloudfleet.ai/docs/hybrid-and-on-premises/proxmox/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://cloudfleet.ai/docs/hybrid-and-on-premises/proxmox/</guid><description>&lt;p>Cloudfleet supports adding &lt;a href="https://www.proxmox.com/" rel="noopener nofollow">Proxmox VE&lt;/a> virtual machines as nodes in your CFKE cluster. The recommended approach uses the Cloudfleet Terraform provider combined with the Telmate Proxmox provider to automate VM creation with cloud-init integration. This method provisions Proxmox VMs via Terraform while using cloud-init configuration generated by Cloudfleet to automatically register the VMs with your CFKE cluster.&lt;/p></description></item><item><title>Deploy MariaDB on Kubernetes with MariaDB operator</title><link>https://cloudfleet.ai/tutorials/cloud/deploy-mariadb-on-kubernetes-with-mariadb-operator/</link><pubDate>Sat, 27 Sep 2025 00:00:00 -0700</pubDate><guid>https://cloudfleet.ai/tutorials/cloud/deploy-mariadb-on-kubernetes-with-mariadb-operator/</guid><description>&lt;p>This guide will walk you through deploying MariaDB on a Cloudfleet Kubernetes Engine (CFKE) cluster using the &lt;a href="https://github.com/mariadb-operator/mariadb-operator" rel="noopener nofollow">MariaDB operator&lt;/a>. MariaDB is fully compatible with MySQL, serving as a drop-in replacement with enhanced features and performance. 
The MariaDB operator enables you to declaratively manage MariaDB instances using Kubernetes Custom Resource Definitions (CRDs), providing features like high availability, automated backups, and seamless scaling.&lt;/p></description></item><item><title>Authentication</title><link>https://cloudfleet.ai/docs/container-registry/authentication/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://cloudfleet.ai/docs/container-registry/authentication/</guid><description>&lt;p>Cloudfleet Container Registry (CFCR) supports three authentication methods designed for different use cases: the Docker credential helper for interactive use, API tokens for CI/CD automation, and automatic authentication for CFKE clusters.&lt;/p></description></item><item><title>Data sources</title><link>https://cloudfleet.ai/docs/terraform/data-sources/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://cloudfleet.ai/docs/terraform/data-sources/</guid><description>&lt;p>Terraform data sources let you retrieve read-only information from existing Cloudfleet resources so you can reference them elsewhere in your configuration. They’re especially useful when you need to connect to an existing cluster, fleet, or use client credentials without managing those resources directly in your Terraform code. This allows for more flexible and modular setups, where infrastructure is shared or managed outside of the current deployment workflow.&lt;/p></description></item><item><title>Node regions</title><link>https://cloudfleet.ai/docs/cloud-infrastructure/node-regions/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://cloudfleet.ai/docs/cloud-infrastructure/node-regions/</guid><description>&lt;p>In cloud computing, the concept of &amp;lsquo;regions&amp;rsquo; refers to the specific geographical locations where cloud service providers operate their data centers. 
Each region represents a distinct area, often encompassing multiple data centers, that hosts cloud resources and services. These regions are strategically selected to reduce latency, improve redundancy, and comply with local data laws and regulations. By choosing a region closer to the end-users or their target audience, businesses can achieve faster data transfer rates and better application performance. Additionally, adhering to regional data sovereignty laws is crucial for companies handling sensitive information, making the choice of region an important consideration in cloud infrastructure planning.&lt;/p></description></item><item><title>On-premises load balancing with BGP</title><link>https://cloudfleet.ai/docs/networking/on-premises-load-balancing-with-bgp/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://cloudfleet.ai/docs/networking/on-premises-load-balancing-with-bgp/</guid><description>&lt;p>When it comes to exposing your Kubernetes service to external clients, you have various options to choose from. Two commonly used methods are &lt;code>NodePort&lt;/code> and &lt;code>LoadBalancer&lt;/code>. &lt;code>NodePort&lt;/code> simply exposes a &lt;a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="noopener nofollow">Kubernetes service&lt;/a> on each node&amp;rsquo;s IP at a static port. The &lt;code>LoadBalancer&lt;/code> type of service is a Kubernetes abstraction that is used to create a network Load Balancer service that exposes your service using a single floating IP (VIP).&lt;/p></description></item><item><title>Egress Gateways</title><link>https://cloudfleet.ai/docs/networking/egress-gateways/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://cloudfleet.ai/docs/networking/egress-gateways/</guid><description>&lt;p>Cloudfleet Kubernetes Engine (CFKE) provides the ability to define specific egress points for your workloads. 
This is achieved using Cilium&amp;rsquo;s Egress Gateway feature, which is integrated into CFKE. This document explains how to configure and use Egress Gateways to ensure that outbound traffic from selected pods originates from a predictable, static IP address.&lt;/p></description></item><item><title>Fleet configuration</title><link>https://cloudfleet.ai/docs/cloud-infrastructure/fleet-configuration/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://cloudfleet.ai/docs/cloud-infrastructure/fleet-configuration/</guid><description>&lt;p>Fleets represent cloud accounts connected to a CFKE cluster. When you create a Fleet, CFKE automatically provisions nodes within that cloud account to run your workloads. Currently, CFKE supports AWS, GCP, and Hetzner for &lt;a href="https://cloudfleet.ai/docs/workload-management/node-provisioner/">node auto-provisioning&lt;/a>.&lt;/p></description></item><item><title>Image management</title><link>https://cloudfleet.ai/docs/container-registry/managing-artifacts/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://cloudfleet.ai/docs/container-registry/managing-artifacts/</guid><description>&lt;p>Cloudfleet Container Registry (CFCR) stores Docker images, OCI images, multi-architecture images, and Helm charts. This guide covers common operations for managing these artifacts.&lt;/p></description></item><item><title>Privacy policy</title><link>https://cloudfleet.ai/company/privacy-policy/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://cloudfleet.ai/company/privacy-policy/</guid><description>&lt;p>We at Cloudfleet GmbH (“Cloudfleet”, “us”, “we”, or “our”) endeavor to respect your privacy. This Privacy Policy (“Policy”) is designed to help you understand how we collect, use, safeguard, share, and disclose your personal information in connection with our products and services. It also explains your rights and choices with respect to your personal information. 
This Policy applies when you interact or make a purchase through our websites located at cloudfleet.ai, and any other websites, pages, features or content we own or operate (collectively, the “&lt;strong>Site(s)&lt;/strong>”); when, as a subscriber to our GPU Cloud, you use our Cloud available through the online platform at console.cloudfleet.ai (the “&lt;strong>Platform&lt;/strong>”); or other products or services that direct you to this Policy (collectively, including the Sites and the Cloud, the “&lt;strong>Service(s)&lt;/strong>”).&lt;/p></description></item><item><title>Access control</title><link>https://cloudfleet.ai/docs/container-registry/access-control/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://cloudfleet.ai/docs/container-registry/access-control/</guid><description>&lt;p>Cloudfleet Container Registry (CFCR) uses the same role-based access control as other Cloudfleet services. There is no separate registry-specific permission system to configure.&lt;/p></description></item><item><title>Control plane scalability</title><link>https://cloudfleet.ai/docs/cluster-management/control-plane-scalability/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://cloudfleet.ai/docs/cluster-management/control-plane-scalability/</guid><description>&lt;p>Cloudfleet provides a &lt;strong>fully managed Kubernetes service&lt;/strong>, meaning we take on the operational burden of scaling and maintaining your cluster&amp;rsquo;s control plane. A well-performing control plane is the backbone of a healthy and responsive cluster, and our team ensures it can handle your growing demands. While Cloudfleet handles the underlying scaling, this guide provides insights into how different components interact and how you can optimize your usage patterns to get the best performance from your managed Kubernetes experience.
This guide provides insights and best practices for understanding how your interactions with the &lt;a href="https://kubernetes.io/docs/reference/architecture/control-plane/#kube-apiserver" rel="noopener nofollow">API Server&lt;/a> can affect the performance and scaling of other managed components like the Controller Manager and Scheduler.&lt;/p></description></item><item><title>CI/CD integration</title><link>https://cloudfleet.ai/docs/container-registry/ci-cd-integration/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://cloudfleet.ai/docs/container-registry/ci-cd-integration/</guid><description>&lt;p>Automate container image builds and pushes using your existing CI/CD platform. This guide provides configuration examples for popular CI/CD systems.&lt;/p></description></item><item><title>Install Cloudfleet CLI</title><link>https://cloudfleet.ai/docs/introduction/install-cloudfleet-cli/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://cloudfleet.ai/docs/introduction/install-cloudfleet-cli/</guid><description>&lt;p>&lt;code>cloudfleet&lt;/code> is the official command-line tool to access Cloudfleet services and allows you to interact with the Cloudfleet API. You can manage your infrastructure from a user-friendly command line, with all the benefits of a scriptable interface. 
The CLI is automatically generated based on the OpenAPI schema of the API, and you can find descriptions of endpoint input and output schemas in the &lt;a href="https://cloudfleet.ai/docs/reference/api-reference/">API Reference&lt;/a> section of this documentation.&lt;/p></description></item><item><title>MCP server</title><link>https://cloudfleet.ai/docs/introduction/mcp-server/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://cloudfleet.ai/docs/introduction/mcp-server/</guid><description>&lt;p>The Cloudfleet CLI includes a built-in &lt;a href="https://modelcontextprotocol.io/" rel="noopener nofollow">Model Context Protocol (MCP)&lt;/a> server that enables AI assistants like Claude, Cursor, and other MCP-compatible tools to interact with your Cloudfleet infrastructure. This integration allows you to query clusters, inspect Kubernetes resources, and view infrastructure information through natural language conversations. The MCP server operates in read-only mode and provides access to any namespace that your configured user or token has permissions to view.&lt;/p></description></item><item><title>API reference</title><link>https://cloudfleet.ai/docs/reference/api-reference/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://cloudfleet.ai/docs/reference/api-reference/</guid><description>&lt;p>Welcome to the Cloudfleet API Reference. This guide provides an overview of the Cloudfleet Application Programming Interface (API), detailing various operations, request and response structures, and error codes. 
The current version of the Cloudfleet API is &lt;code>v1&lt;/code>, accessible at &lt;code>https://api.cloudfleet.ai/v1/&lt;/code>.&lt;/p></description></item><item><title>Release notes</title><link>https://cloudfleet.ai/docs/release-notes/release-notes/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://cloudfleet.ai/docs/release-notes/release-notes/</guid><description>&lt;p>These release notes highlight user-visible changes and bug fixes for Cloudfleet&amp;rsquo;s Kubernetes Engine (CFKE). Beyond the features listed here, Cloudfleet is frequently updated for improved stability, performance, and security. You may notice your CFKE version number increasing without any visible features as we continuously enhance the underlying platform.&lt;/p></description></item><item><title>External private registries</title><link>https://cloudfleet.ai/docs/workload-management/private-container-registries/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://cloudfleet.ai/docs/workload-management/private-container-registries/</guid><description>&lt;p>This guide explains how to pull container images from external private registries in CFKE, such as AWS ECR, GCP Artifact Registry, Docker Hub, and self-hosted registries.&lt;/p></description></item><item><title>Accessing cloud APIs securely</title><link>https://cloudfleet.ai/docs/workload-management/accessing-cloud-apis/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://cloudfleet.ai/docs/workload-management/accessing-cloud-apis/</guid><description>&lt;p>Accessing cloud APIs from workloads running in the Cloudfleet Kubernetes Engine (CFKE) is common. For example, you may need to access AWS S3, GCP Cloud Storage, or Azure Blob Storage from your workloads. 
You can use CFKE&amp;rsquo;s Kubernetes Service Accounts to authenticate your workloads to access these cloud APIs without hardcoding keys.&lt;/p></description></item><item><title>GPU-based workloads</title><link>https://cloudfleet.ai/docs/workload-management/gpu-based-workloads/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://cloudfleet.ai/docs/workload-management/gpu-based-workloads/</guid><description>&lt;p>Cloudfleet Kubernetes Engine (CFKE) supports NVIDIA GPUs for workloads that require GPU acceleration. This guide explains how to provision nodes with NVIDIA GPUs and schedule workloads effectively.&lt;/p></description></item><item><title>Hetzner Cloud introduces new shared vCPU server families: CX Gen3 and CPX Gen2</title><link>https://cloudfleet.ai/blog/partner-news/2025-10-hetzner-cloud-introduces-new-shared-vcpu-server-families-cx-gen3-and-cpx-gen2/</link><pubDate>Mon, 27 Oct 2025 12:00:00 -0700</pubDate><guid>https://cloudfleet.ai/blog/partner-news/2025-10-hetzner-cloud-introduces-new-shared-vcpu-server-families-cx-gen3-and-cpx-gen2/</guid><description>&lt;p>Hetzner Cloud has introduced a new generation of shared vCPU instances, creating a simpler and more consistent structure across its server families. While the update may look confusing at first, it becomes clear once you see how the lineup is organized.&lt;/p></description></item><item><title>BaseJump powers a cutting-edge gaming platform with Cloudfleet Kubernetes Engine</title><link>https://cloudfleet.ai/customers/basejump/</link><pubDate>Mon, 01 Sep 2025 10:00:00 -0700</pubDate><guid>https://cloudfleet.ai/customers/basejump/</guid><description>&lt;p>Today, billions of people use in-game assets every day across multiple games, virtual worlds, and even as filters on social networks. 
This number is set to grow rapidly as innovations in AI and platforms like &lt;a href="https://basejump.xyz/" rel="noopener nofollow">BaseJump&lt;/a> make it possible to create games and assets instantly using generative AI. BaseJump is building a new generation of player-owned games and interoperable assets powered by decentralized infrastructure.&lt;/p></description></item><item><title>Cloud freedom starts with architecture: a response to the UK CMA’s cloud market report</title><link>https://cloudfleet.ai/blog/cloud-native-how-to/2025-08-cloud-freedom-starts-with-architecture-a-response-to-the-uk-cmas-cloud-market-report/</link><pubDate>Tue, 05 Aug 2025 09:30:00 -0700</pubDate><guid>https://cloudfleet.ai/blog/cloud-native-how-to/2025-08-cloud-freedom-starts-with-architecture-a-response-to-the-uk-cmas-cloud-market-report/</guid><description>&lt;p>If you’ve ever found it difficult to switch cloud providers or felt limited by your current setup, you’re not alone. Last week, the UK’s Competition and Markets Authority (CMA) released a &lt;a href="https://assets.publishing.service.gov.uk/media/688b8891fdde2b8f73469544/final_decision_report_.pdf" rel="noopener nofollow">report&lt;/a> (and a &lt;a href="https://assets.publishing.service.gov.uk/media/688b20e6ff8c05468cb7b120/summary_of_final_decision.pdf" rel="noopener nofollow">short summary&lt;/a>) highlighting structural issues in the public cloud market, particularly around limited competition and barriers to customer mobility.&lt;/p></description></item><item><title>Multicloud vs hybrid cloud: choosing the right strategy for your organization</title><link>https://cloudfleet.ai/blog/cloud-native-how-to/2025-0723-multicloud-vs-hybrid-cloud/</link><pubDate>Wed, 23 Jul 2025 09:20:00 +0000</pubDate><guid>https://cloudfleet.ai/blog/cloud-native-how-to/2025-0723-multicloud-vs-hybrid-cloud/</guid><description>&lt;p>Organizations today face critical decisions about their cloud infrastructure strategy. 
Two prominent approaches have emerged as leading solutions: &lt;a href="https://cloudfleet.ai/multi-cloud-kubernetes/">multicloud&lt;/a> and &lt;a href="https://cloudfleet.ai/on-premises-kubernetes-hybrid-cloud/">hybrid cloud&lt;/a>. While these terms are sometimes used interchangeably, they represent fundamentally different architectural philosophies with distinct benefits and challenges.&lt;/p></description></item><item><title>How TextCortex supercharged their AI-first development with Cloudfleet</title><link>https://cloudfleet.ai/blog/product-updates/2025-07-14-how-textcortex-supercharged-their-ai-first-development-with-cloudfleet/</link><pubDate>Mon, 14 Jul 2025 00:38:30 +0200</pubDate><guid>https://cloudfleet.ai/blog/product-updates/2025-07-14-how-textcortex-supercharged-their-ai-first-development-with-cloudfleet/</guid><description>&lt;style>
p img[alt="Onur Solmaz"] {
 float: left;
 margin-right: 15px;
 margin-bottom: 10px;
 margin-top: 8px;
 width: 150px;
 height: 150px;
 border: 1px solid #e0e0e0;
 box-shadow: 0 2px 8px rgba(0, 0, 0, 0.1);
 padding: 4px;
 background-color: #fff;
}

p img[alt="GitHub pull request"] {
 border: 1px solid #e0e0e0;
 box-shadow: 0 2px 8px rgba(0, 0, 0, 0.1);
 border-top-left-radius: 8px;
 border-top-right-radius: 8px;
}
&lt;/style>
&lt;p>&lt;em>Today&amp;rsquo;s guest post is from Onur Solmaz, VP of Engineering and Research at TextCortex, who shares how his team leveraged Cloudfleet to revolutionize their development workflow.&lt;/em>&lt;/p></description></item><item><title>Kubernetes control plane closer to home: Introducing Cloudfleet’s new Europe region</title><link>https://cloudfleet.ai/blog/product-updates/2025-06-23-cloudfleet-launches-european-union-control-plane-region/</link><pubDate>Mon, 23 Jun 2025 01:00:00 +0200</pubDate><guid>https://cloudfleet.ai/blog/product-updates/2025-06-23-cloudfleet-launches-european-union-control-plane-region/</guid><description>&lt;p>We have exciting news to share today, something many of you have been asking for. We are thrilled to announce the official launch of our first European Union control plane region, hosted in Frankfurt, Germany.&lt;/p></description></item><item><title>kuberc is Here! Customizing kubectl with Kubernetes 1.33</title><link>https://cloudfleet.ai/blog/cloud-native-how-to/2025-05-customizing-kubectl-with-kuberc/</link><pubDate>Fri, 16 May 2025 09:30:00 -0700</pubDate><guid>https://cloudfleet.ai/blog/cloud-native-how-to/2025-05-customizing-kubectl-with-kuberc/</guid><description>&lt;p>For years, Kubernetes users have wished for a dedicated way to personalize their kubectl command-line experience, much like &lt;code>.bashrc&lt;/code> for shells or &lt;code>.gitconfig&lt;/code> for Git. 
That wish is now taking its first official steps: &lt;strong>kuberc&lt;/strong> has arrived as an alpha feature in Kubernetes 1.33.&lt;/p></description></item><item><title>Create GPU Kubernetes cluster with Lambda cloud</title><link>https://cloudfleet.ai/tutorials/cloud/create-gpu-kubernetes-cluster-with-lambda-cloud/</link><pubDate>Fri, 21 Feb 2025 12:00:00 -0700</pubDate><guid>https://cloudfleet.ai/tutorials/cloud/create-gpu-kubernetes-cluster-with-lambda-cloud/</guid><description>&lt;p>&lt;a href="https://lambdalabs.com" rel="noopener nofollow">Lambda&lt;/a> is a leading cloud provider that specializes in delivering high-performance AI infrastructure, offering fast and modern NVIDIA GPUs to empower deep learning initiatives. Their cutting-edge hardware and cloud solutions have garnered the trust of industry giants like Microsoft, Intel, and Amazon Research, enabling teams to accelerate their AI breakthroughs.&lt;/p></description></item><item><title>Cloudfleet joins NVIDIA Inception</title><link>https://cloudfleet.ai/blog/partner-news/2025-02-cloudfleet-joins-nvidia-inception/</link><pubDate>Thu, 13 Feb 2025 12:00:00 -0700</pubDate><guid>https://cloudfleet.ai/blog/partner-news/2025-02-cloudfleet-joins-nvidia-inception/</guid><description>&lt;p>Cloudfleet has joined &lt;a href="https://nvda.ws/2BvtUc9" rel="noopener nofollow">NVIDIA Inception&lt;/a>, a program that supports startups revolutionizing industries through technological advancements.&lt;/p>
&lt;p>Cloudfleet’s mission is to enable secure, private, and decentralized computing for everyone. We are focused on building a platform that empowers individuals and organizations to take control of their data and computing resources.&lt;/p></description></item><item><title>Hetzner Load Balancer auto-provisioning is now generally available</title><link>https://cloudfleet.ai/blog/product-updates/2025-02-hetzner-load-balancer-auto-provisioning/</link><pubDate>Thu, 13 Feb 2025 12:00:00 -0700</pubDate><guid>https://cloudfleet.ai/blog/product-updates/2025-02-hetzner-load-balancer-auto-provisioning/</guid><description>&lt;p>We’re excited to announce the general availability of Hetzner Load Balancer auto-provisioning for all Cloudfleet Kubernetes Engine (CFKE) users. This feature enables seamless, automated provisioning of Hetzner Load Balancers for your services across multi-region, multi-cloud environments.&lt;/p></description></item><item><title>Cloudfleet CLI Now Available on Windows via Winget</title><link>https://cloudfleet.ai/blog/product-updates/2025-02-cloudfleet-cli-windows-winget/</link><pubDate>Sat, 08 Feb 2025 12:00:00 -0700</pubDate><guid>https://cloudfleet.ai/blog/product-updates/2025-02-cloudfleet-cli-windows-winget/</guid><description>&lt;p>We’re excited to announce the latest version of Cloudfleet CLI for Windows, making it easier than ever to create and manage Kubernetes clusters with Cloudfleet.&lt;/p></description></item><item><title>Deploy Kubernetes on Proxmox: a step-by-step tutorial</title><link>https://cloudfleet.ai/tutorials/on-premises/deploy-kubernetes-on-proxmox-a-step-by-step-tutorial/</link><pubDate>Tue, 04 Feb 2025 12:00:00 -0700</pubDate><guid>https://cloudfleet.ai/tutorials/on-premises/deploy-kubernetes-on-proxmox-a-step-by-step-tutorial/</guid><description>&lt;p>In this tutorial, we’ll set up a Kubernetes cluster on Proxmox using the Cloudfleet Kubernetes Engine (CFKE) free plan. 
Step by step, we’ll guide you from a virtual machine in your on-premises environment to a highly available Kubernetes cluster, ready for deploying your first applications.&lt;/p></description></item><item><title>Proxmox vs Kubernetes in 2025: understanding the differences and similarities</title><link>https://cloudfleet.ai/blog/cloud-native-how-to/proxmox-vs-kubernetes-understanding-the-differences-and-similarities/</link><pubDate>Mon, 20 Jan 2025 09:00:00 -0700</pubDate><guid>https://cloudfleet.ai/blog/cloud-native-how-to/proxmox-vs-kubernetes-understanding-the-differences-and-similarities/</guid><description>&lt;p>In the ever-evolving world of IT infrastructure, two key technologies have become cornerstones of modern computing: virtualization and containerization. These innovations have revolutionized how organizations deploy, manage, and scale their workloads, establishing themselves as industry standards for operating infrastructure. Today, it is rare to find a data center operator not leveraging virtualization, and by 2025, most companies have either adopted or are planning to adopt containerization.&lt;/p></description></item><item><title>TextCortex scales their advanced AI knowledge base software with Cloudfleet</title><link>https://cloudfleet.ai/customers/textcortex/</link><pubDate>Wed, 15 Jan 2025 10:00:00 -0700</pubDate><guid>https://cloudfleet.ai/customers/textcortex/</guid><description>&lt;p>&lt;a href="https://textcortex.com/" rel="noopener nofollow">TextCortex&lt;/a> is a leading European Generative AI platform helping organizations transform scattered knowledge into actionable intelligence. 
By enabling companies to build and deploy highly customized AI Agents, TextCortex addresses one of the most common and costly challenges in modern enterprises: inaccessible and siloed information.&lt;/p></description></item><item><title>Proxmox VE Datacenter Manager: unified datacenter management tool</title><link>https://cloudfleet.ai/blog/cloud-native-how-to/2025-01-proxmox-datacenter-manager/</link><pubDate>Fri, 10 Jan 2025 09:00:00 -0700</pubDate><guid>https://cloudfleet.ai/blog/cloud-native-how-to/2025-01-proxmox-datacenter-manager/</guid><description>&lt;p>Proxmox has long been a favorite among homelab enthusiasts and enterprise users, delivering powerful virtualization solutions at a fraction of the cost of competitors. Now, with the &lt;a href="https://proxmox.com/en/about/company-details/press-releases/proxmox-datacenter-manager-alpha" rel="noopener nofollow">release&lt;/a> of the Proxmox Datacenter Manager (PDM) in alpha, the platform is taking a significant step forward in simplifying multi-node and multi-cluster management. PDM offers a centralized and streamlined approach to managing virtualized environments, making it easier than ever to oversee complex datacenter setups.&lt;/p></description></item><item><title>How to create Kubernetes Jobs from AWS Lambda</title><link>https://cloudfleet.ai/tutorials/cloud/how-to-create-kubernetes-jobs-from-aws-lambda/</link><pubDate>Wed, 08 Jan 2025 12:00:00 -0700</pubDate><guid>https://cloudfleet.ai/tutorials/cloud/how-to-create-kubernetes-jobs-from-aws-lambda/</guid><description>&lt;p>In modern cloud-native architectures, automation and scalability are critical for efficiently handling workloads. Kubernetes Jobs are a great tool for running one-time, short-lived tasks in your Kubernetes cluster, while AWS Lambda provides a serverless approach to running code in response to events without needing to manage any infrastructure. 
By combining the power of AWS Lambda with Kubernetes Jobs, you can trigger and run tasks in your Kubernetes cluster dynamically based on external events, such as file uploads, API requests, or messages from other AWS services.&lt;/p></description></item><item><title>🎉 Cloudfleet price reductions of up to 65%: Kubernetes control plane at the lowest cost</title><link>https://cloudfleet.ai/blog/product-updates/2025-01-kubernetes-price-reductions/</link><pubDate>Tue, 31 Dec 2024 12:00:00 -0700</pubDate><guid>https://cloudfleet.ai/blog/product-updates/2025-01-kubernetes-price-reductions/</guid><description>&lt;p>Cloudfleet&amp;rsquo;s mission is to make Kubernetes and cloud-native computing available to everyone. As part of this mission, we are constantly optimizing the &lt;strong>Cloudfleet Kubernetes Engine (CFKE)&lt;/strong> and its underlying infrastructure to reduce costs and pass the savings on to our customers.&lt;/p></description></item><item><title>Best Practices for Kubernetes Namespace Naming Conventions</title><link>https://cloudfleet.ai/blog/cloud-native-how-to/2024-11-kubernetes-namespaces-best-practices/</link><pubDate>Wed, 13 Nov 2024 09:00:00 -0700</pubDate><guid>https://cloudfleet.ai/blog/cloud-native-how-to/2024-11-kubernetes-namespaces-best-practices/</guid><description>&lt;p>In the dynamic world of Kubernetes, namespaces play a crucial role in organizing and managing resources within a cluster. A well-thought-out naming convention for these namespaces not only enhances clarity but also streamlines operations, collaboration, and automation processes. 
This blog post delves into the best practices for naming Kubernetes namespaces effectively.&lt;/p></description></item><item><title>A Look into Google Axion Processors: Google’s New ARM-Based CPUs</title><link>https://cloudfleet.ai/blog/partner-news/2024-10-google-cloud-new-arm-instance-axion/</link><pubDate>Tue, 29 Oct 2024 11:25:11 +0200</pubDate><guid>https://cloudfleet.ai/blog/partner-news/2024-10-google-cloud-new-arm-instance-axion/</guid><description>&lt;p>Google released its Axion instances powered by custom ARM chips today. These instances were first introduced in April 2024, and they are now &lt;a href="https://cloud.google.com/blog/products/compute/try-c4a-the-first-google-axion-processor?hl=en" rel="noopener nofollow">generally available&lt;/a> following the announcement by Amin Vahdat, VP/GM ML, Systems &amp;amp; Cloud AI, at the Google Cloud App Dev &amp;amp; Infrastructure Summit.&lt;/p></description></item><item><title>Azure ND H200 v5 series - virtual machines optimized for AI supercomputing</title><link>https://cloudfleet.ai/blog/partner-news/2024-10-azure-ai-supercomputing-virtual-machines-nd-h200-v5-series/</link><pubDate>Wed, 02 Oct 2024 14:25:11 +0200</pubDate><guid>https://cloudfleet.ai/blog/partner-news/2024-10-azure-ai-supercomputing-virtual-machines-nd-h200-v5-series/</guid><description>&lt;p>As the AI landscape rapidly evolves, the demand for scalable and high-performance infrastructure continues to grow. In response to this need, Microsoft has introduced new cloud-based AI supercomputing clusters powered by the &lt;strong>Azure ND H200 v5 series virtual machines (VMs)&lt;/strong>, which are now generally available. These VMs are specifically designed to handle the increasing complexity of advanced AI workloads, such as foundational model training and generative inference. 
With enhanced scale, efficiency, and performance, the ND H200 v5 VMs have already seen adoption among customers and are utilized by Microsoft AI services, including &lt;strong>Azure Machine Learning&lt;/strong> and &lt;strong>Azure OpenAI Service&lt;/strong>.&lt;/p></description></item><item><title>🚀 New Version of Cloudfleet CLI for Streamlined On-Premises Node Onboarding</title><link>https://cloudfleet.ai/blog/product-updates/2024-09-new-cli-for-on-premises-nodes/</link><pubDate>Mon, 30 Sep 2024 12:00:00 -0700</pubDate><guid>https://cloudfleet.ai/blog/product-updates/2024-09-new-cli-for-on-premises-nodes/</guid><description>&lt;p>The latest update of the &lt;strong>Cloudfleet CLI&lt;/strong> now offers enhanced functionality for onboarding self-managed nodes to your cluster, making it easier than ever to expand with your own &lt;a href="https://cloudfleet.ai/on-premises-kubernetes-hybrid-cloud/">on-premises Kubernetes&lt;/a> hardware or third-party cloud resources. With this version, users can seamlessly integrate Linux machines running Ubuntu 22.04 or 24.04 into their clusters, ensuring they act as equal members alongside managed nodes and support any workload compatible with their hardware.&lt;/p></description></item><item><title>Cloudfleet Kubernetes Engine (CFKE) introduces support for Kubernetes 1.31 Elli</title><link>https://cloudfleet.ai/blog/product-updates/2024-09-kubernetes-1.31/</link><pubDate>Mon, 30 Sep 2024 12:00:00 -0700</pubDate><guid>https://cloudfleet.ai/blog/product-updates/2024-09-kubernetes-1.31/</guid><description>&lt;p>Cloudfleet Kubernetes Engine (CFKE) has been upgraded to include support for &lt;strong>Kubernetes 1.31 Elli&lt;/strong>, offering the latest features, performance enhancements, and security improvements. 
Users can now take full advantage of the new capabilities and updates in Kubernetes 1.31 to optimize their workloads and infrastructure.&lt;/p></description></item><item><title>3TB memory: Graviton4-powered Amazon EC2 X8g instances</title><link>https://cloudfleet.ai/blog/partner-news/2024-09-aws-graviton4-x8g-instances/</link><pubDate>Wed, 18 Sep 2024 14:47:33 +0200</pubDate><guid>https://cloudfleet.ai/blog/partner-news/2024-09-aws-graviton4-x8g-instances/</guid><description>&lt;p>Amazon has announced the availability of &lt;strong>Graviton-4-powered, memory-optimized X8g instances&lt;/strong> on AWS, offering ten virtual sizes and two bare metal options, with up to &lt;strong>3 TiB of DDR5 memory&lt;/strong> and &lt;strong>192 vCPUs&lt;/strong>. These X8g instances are the most energy-efficient EC2 Graviton instances yet, delivering unmatched price performance and scale-up capabilities compared to any previous Graviton-powered instances. With a &lt;strong>16:1 memory-to-vCPU ratio&lt;/strong>, these instances are specifically designed for demanding workloads such as &lt;strong>Electronic Design Automation (EDA)&lt;/strong>, &lt;strong>in-memory databases and caches&lt;/strong>, &lt;strong>relational databases&lt;/strong>, &lt;strong>real-time analytics&lt;/strong>, and &lt;strong>memory-constrained microservices&lt;/strong>.&lt;/p></description></item><item><title>What to Expect from Prometheus 3: Big Changes and Exciting Features</title><link>https://cloudfleet.ai/blog/cloud-native-how-to/2024-09-prometheus-3-beta/</link><pubDate>Fri, 13 Sep 2024 09:00:00 -0700</pubDate><guid>https://cloudfleet.ai/blog/cloud-native-how-to/2024-09-prometheus-3-beta/</guid><description>&lt;p>After 7 years and over 7,500 commits since the release of &lt;strong>Prometheus 2.0&lt;/strong>, the Prometheus community is gearing up for the highly anticipated &lt;strong>Prometheus 3.0&lt;/strong>. 
While there are countless new features and fixes, some major updates stand out that you&amp;rsquo;ll want to explore. We encourage the community to dive in, test these features, and report any issues to help make the final release as stable as possible. Here&amp;rsquo;s a preview of what’s new in &lt;strong>Prometheus 3.0&lt;/strong>.&lt;/p></description></item><item><title>Kubernetes vs. Docker Swarm: Choosing the Right Orchestration Tool for Your Team</title><link>https://cloudfleet.ai/blog/cloud-native-how-to/2024-09-docker-swarm-vs-kubernetes/</link><pubDate>Tue, 10 Sep 2024 01:40:47 +0200</pubDate><guid>https://cloudfleet.ai/blog/cloud-native-how-to/2024-09-docker-swarm-vs-kubernetes/</guid><description>&lt;p>Container orchestration has become an integral part of modern application development and deployment. With the rise of containerized applications, tools like Kubernetes and Docker Swarm have emerged as powerful solutions to manage these containers efficiently. Choosing the right orchestration tool can significantly impact your team&amp;rsquo;s productivity, application performance, and scalability. In this article, we’ll compare Kubernetes and Docker Swarm to help you decide which tool is best suited for your team’s needs.&lt;/p></description></item><item><title>New Hetzner Region - Singapore</title><link>https://cloudfleet.ai/blog/partner-news/2024-08-new-hetzner-region-singapore/</link><pubDate>Sat, 10 Aug 2024 14:16:25 +0200</pubDate><guid>https://cloudfleet.ai/blog/partner-news/2024-08-new-hetzner-region-singapore/</guid><description>&lt;p>Now, you can access Hetzner cloud services in Singapore with &lt;a href="https://cloudfleet.ai/lp/managed-hetzner-kubernetes/">Hetzner managed Kubernetes&lt;/a>! Take full advantage of this strategic location in the heart of Asia, providing swift connections to key markets like China, India, and Japan. 
Singapore’s advanced network infrastructure and extensive international submarine cables make it an ideal hub for enhancing data transfer speeds.&lt;/p></description></item><item><title>Kubernetes 1.31 Release: Key Features, Enhancements, and Deprecations</title><link>https://cloudfleet.ai/blog/cloud-native-how-to/2024-08-kubernetes-1-31-release/</link><pubDate>Thu, 01 Aug 2024 15:30:00 -0700</pubDate><guid>https://cloudfleet.ai/blog/cloud-native-how-to/2024-08-kubernetes-1-31-release/</guid><description>&lt;p>Kubernetes v1.31 has officially landed, bringing a host of exciting new features and critical updates. Just like previous releases, this version continues the trend of enhancing Kubernetes’ stability, security, and performance, all while expanding its capabilities to meet the ever-growing needs of containerized applications. In this release, you&amp;rsquo;ll find a whopping &lt;strong>45 enhancements&lt;/strong>, including features moving to stable, beta, and alpha phases. Let&amp;rsquo;s dive into the most significant updates that make Kubernetes v1.31 a game-changer.&lt;/p></description></item><item><title>Cloudfleet Now Supports Hetzner Cloud for Automatic Node Provisioning in Kubernetes Clusters</title><link>https://cloudfleet.ai/blog/product-updates/2024-07-hetzner-available-on-cloudfleet/</link><pubDate>Thu, 11 Jul 2024 12:00:00 -0700</pubDate><guid>https://cloudfleet.ai/blog/product-updates/2024-07-hetzner-available-on-cloudfleet/</guid><description>
&lt;p>Cloudfleet is excited to announce the integration of Hetzner Cloud, allowing users to automatically provision nodes in Kubernetes clusters seamlessly. This new feature will enable developers and businesses to enhance their cloud infrastructure with greater efficiency and flexibility, taking advantage of Hetzner&amp;rsquo;s high-performance servers combined with Cloudfleet&amp;rsquo;s automation capabilities.&lt;/p></description></item><item><title>Azure Managed Lustre for HPC and AI workloads</title><link>https://cloudfleet.ai/blog/partner-news/2024-07-azure-managed-lustre/</link><pubDate>Wed, 10 Jul 2024 14:38:30 +0200</pubDate><guid>https://cloudfleet.ai/blog/partner-news/2024-07-azure-managed-lustre/</guid><description>&lt;p>Microsoft has &lt;a href="https://azure.microsoft.com/en-us/blog/announcing-azure-managed-lustre-for-your-hpc-and-ai-workloads/" rel="noopener nofollow">announced&lt;/a> the &lt;strong>general availability of Azure Managed Lustre&lt;/strong>, a managed high-performance parallel file system designed for &lt;strong>high performance computing (HPC)&lt;/strong> and &lt;strong>AI workloads&lt;/strong>. Azure Managed Lustre brings the reliable Lustre file system as a first-party managed service on Azure, allowing long-time on-premises Lustre users to transition seamlessly to the cloud. By leveraging this service, users gain access to a complete HPC solution that includes compute and high-performance storage, all integrated on Azure. 
With Azure Managed Lustre, businesses can now harness the high throughput and performance capabilities of Lustre while enjoying a seamless, managed experience that lets them focus on core business objectives.&lt;/p></description></item><item><title>Access Cloudfleet Kubernetes cluster from GitHub Actions</title><link>https://cloudfleet.ai/tutorials/general/access-cloudfleet-kubernetes-cluster-from-github-actions/</link><pubDate>Mon, 01 Jul 2024 12:00:00 -0700</pubDate><guid>https://cloudfleet.ai/tutorials/general/access-cloudfleet-kubernetes-cluster-from-github-actions/</guid><description>&lt;p>This guide explains how to securely access the Cloudfleet Kubernetes Engine (CFKE) API from continuous integration (CI) tools. It covers generating API credentials, storing them securely, and using them to interact with CFKE clusters.&lt;/p></description></item><item><title>Add code interpreter into your LLM apps with llm-sandbox</title><link>https://cloudfleet.ai/tutorials/machine-learning/add-code-interpreter-into-your-llm-apps-with-llm-sandbox/</link><pubDate>Mon, 01 Jul 2024 12:00:00 -0700</pubDate><guid>https://cloudfleet.ai/tutorials/machine-learning/add-code-interpreter-into-your-llm-apps-with-llm-sandbox/</guid><description>&lt;p>&lt;strong>&lt;a href="https://github.com/vndee/llm-sandbox" rel="noopener nofollow">LLM Sandbox&lt;/a>&lt;/strong> is a &lt;strong>lightweight, portable environment&lt;/strong> built to run code generated by Large Language Models (LLMs) in a secure, isolated setting using &lt;strong>Docker containers&lt;/strong>. 
With its intuitive interface, you can easily set up, manage, and execute code within a controlled environment, simplifying the process of testing and running LLM-generated scripts.&lt;/p></description></item><item><title>Cloudfleet Kubernetes Engine (CFKE) introduces support for Kubernetes 1.30 Uwubernetes</title><link>https://cloudfleet.ai/blog/product-updates/2024-09-kubernetes-1.30/</link><pubDate>Tue, 30 Apr 2024 12:00:00 -0700</pubDate><guid>https://cloudfleet.ai/blog/product-updates/2024-09-kubernetes-1.30/</guid><description>&lt;p>Cloudfleet Kubernetes Engine (CFKE) has been upgraded to include support for &lt;strong>Kubernetes 1.30 Uwubernetes&lt;/strong>, offering the latest features, performance enhancements, and security improvements. Users can now take full advantage of the new capabilities and updates in Kubernetes 1.30 to optimize their workloads and infrastructure.&lt;/p></description></item><item><title>Kubernetes 1.30 Release: Powerful Features and Exciting Enhancements</title><link>https://cloudfleet.ai/blog/cloud-native-how-to/2024-04-kubernetes-1-30-release/</link><pubDate>Mon, 01 Apr 2024 15:30:00 -0700</pubDate><guid>https://cloudfleet.ai/blog/cloud-native-how-to/2024-04-kubernetes-1-30-release/</guid><description>
&lt;p>The release of &lt;strong>Kubernetes v1.30&lt;/strong> is here, and it’s packed with game-changing features and improvements. Continuing its tradition of regular, high-quality updates, Kubernetes v1.30 introduces new stable, beta, and alpha features that further enhance its scalability, security, and performance. With &lt;strong>45 enhancements&lt;/strong> in this release, Kubernetes continues to lead the way in container orchestration.&lt;/p></description></item><item><title>Platform Engineering vs. DevOps: What’s the Difference and Why Does It Matter?</title><link>https://cloudfleet.ai/blog/cloud-native-how-to/2024-03-devops-and-platform-engineering-differences/</link><pubDate>Sun, 10 Mar 2024 14:01:16 +0200</pubDate><guid>https://cloudfleet.ai/blog/cloud-native-how-to/2024-03-devops-and-platform-engineering-differences/</guid><description/></item><item><title>Announcing Support for Two Additional Providers: Runpod and Datacrunch</title><link>https://cloudfleet.ai/blog/product-updates/2023-11-runpod-datacrunch/</link><pubDate>Sat, 11 Nov 2023 12:00:00 -0700</pubDate><guid>https://cloudfleet.ai/blog/product-updates/2023-11-runpod-datacrunch/</guid><description>&lt;p>&lt;em>Expanding Our Horizons with Runpod and Datacrunch&lt;/em>&lt;/p>
&lt;p>We are thrilled to announce a significant expansion in our service offerings at Cloudfleet – the integration of two additional GPU instance providers, Runpod and Datacrunch. This expansion is a testament to our commitment to providing our users with an extensive range of options for their machine learning and AI needs. Runpod and Datacrunch are renowned for their powerful and efficient GPU instances, making them ideal choices for demanding computational tasks. By incorporating these providers into our platform, we are opening new avenues for innovation and performance.&lt;/p></description></item><item><title>Choosing the Right GPU for Machine Learning Inference</title><link>https://cloudfleet.ai/blog/cloud-native-how-to/2023-03-choosing-the-right-gpu-for-inference/</link><pubDate>Fri, 31 Mar 2023 15:30:00 -0700</pubDate><guid>https://cloudfleet.ai/blog/cloud-native-how-to/2023-03-choosing-the-right-gpu-for-inference/</guid><description>&lt;p>Machine learning inference is a crucial step in deploying AI models for real-world applications. Whether you are building a recommendation system, autonomous vehicle, or a language translation service, selecting the right GPU (Graphics Processing Unit) for your inference tasks can significantly impact performance, cost, and efficiency. 
In this blog post, we&amp;rsquo;ll explore the key factors to consider when choosing the right GPU for machine learning inference.&lt;/p></description></item><item><title>Artificial Intelligence and Machine learning</title><link>https://cloudfleet.ai/machine-learning-infrastructure/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://cloudfleet.ai/machine-learning-infrastructure/</guid><description/></item><item><title>AWS Fargate Pricing: Tips to Optimize Costs and Save Money</title><link>https://cloudfleet.ai/blog/cloud-native-how-to/2024-09-aws-fargate-pricing/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://cloudfleet.ai/blog/cloud-native-how-to/2024-09-aws-fargate-pricing/</guid><description>&lt;p>Ever wondered how to run your containers without the headache of managing servers? AWS Fargate might just be your new best friend. It&amp;rsquo;s a serverless compute engine for containers that works with Amazon Elastic Kubernetes Service (EKS) and Elastic Container Service (ECS). Since it&amp;rsquo;s serverless, you can focus on what really matters - your containers - without getting tangled up in the underlying infrastructure. 
Fargate automatically scales compute resources to meet your container&amp;rsquo;s needs, making it a go-to choice for those dipping their toes into the world of containers.&lt;/p></description></item><item><title>Cloudfleet Affiliate Program</title><link>https://cloudfleet.ai/affiliates/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://cloudfleet.ai/affiliates/</guid><description/></item><item><title>Cloudfleet Pricing</title><link>https://cloudfleet.ai/pricing/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://cloudfleet.ai/pricing/</guid><description/></item><item><title>Cloudfleet Security</title><link>https://cloudfleet.ai/security/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://cloudfleet.ai/security/</guid><description/></item><item><title>Company Information</title><link>https://cloudfleet.ai/company/imprint/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://cloudfleet.ai/company/imprint/</guid><description>&lt;p>&lt;strong>Company Name:&lt;/strong>
Cloudfleet GmbH&lt;/p>
&lt;p>&lt;strong>Registered Address:&lt;/strong>&lt;br>
Kolonnenstraße 8&lt;br>
10827 Berlin&lt;br>
Germany&lt;/p>
&lt;p>&lt;strong>Commercial Register:&lt;/strong>&lt;br>
HRB 279792 B&lt;br>
Amtsgericht Berlin-Charlottenburg&lt;/p>
&lt;p>&lt;strong>Managing Director:&lt;/strong>
Yegor Tokmakov&lt;/p></description></item><item><title>Comparison of Different NVIDIA GPU Architectures</title><link>https://cloudfleet.ai/blog/cloud-native-how-to/2023-03-comparison-of-different-nvidia-gpu-rchitectures/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://cloudfleet.ai/blog/cloud-native-how-to/2023-03-comparison-of-different-nvidia-gpu-rchitectures/</guid><description>&lt;p>NVIDIA has been at the forefront of GPU (Graphics Processing Unit) technology for decades, consistently pushing the boundaries of performance and efficiency. Their GPUs have found applications in gaming, data centers, AI, and scientific computing. In this blog post, we&amp;rsquo;ll explore and compare different NVIDIA GPU architectures, highlighting the key features and advancements that each generation brings.&lt;/p></description></item><item><title>Independent Software Vendors (ISVs)</title><link>https://cloudfleet.ai/independent-software-vendor-distribution/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://cloudfleet.ai/independent-software-vendor-distribution/</guid><description/></item><item><title>Kubernetes Cost optimization and FinOps</title><link>https://cloudfleet.ai/kubernetes-cost-optimization/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://cloudfleet.ai/kubernetes-cost-optimization/</guid><description/></item><item><title>Managed Hetzner Kubernetes with Cloudfleet</title><link>https://cloudfleet.ai/lp/managed-hetzner-kubernetes/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://cloudfleet.ai/lp/managed-hetzner-kubernetes/</guid><description/></item><item><title>Multi-cloud Kubernetes clusters and globally distributed Kubernetes workloads</title><link>https://cloudfleet.ai/multi-cloud-kubernetes/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://cloudfleet.ai/multi-cloud-kubernetes/</guid><description/></item><item><title>On-premises Kubernetes and Kubernetes in 
Hybrid Clouds</title><link>https://cloudfleet.ai/on-premises-kubernetes-hybrid-cloud/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://cloudfleet.ai/on-premises-kubernetes-hybrid-cloud/</guid><description/></item><item><title>Partnering with Cloudfleet</title><link>https://cloudfleet.ai/partners/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://cloudfleet.ai/partners/</guid><description/></item><item><title>Proxmox Managed Kubernetes</title><link>https://cloudfleet.ai/lp/proxmox-ve-managed-kubernetes/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://cloudfleet.ai/lp/proxmox-ve-managed-kubernetes/</guid><description/></item><item><title>Terms of Service</title><link>https://cloudfleet.ai/company/terms/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://cloudfleet.ai/company/terms/</guid><description>&lt;h1 id="website-terms-of-service">Website Terms of Service&lt;/h1>
&lt;p>Welcome to Cloudfleet! As part of our commitment to you, we have drafted the following terms to support a smooth and vibrant user experience. It is important for you to read the Terms and Conditions carefully, as they contain important information and restrictions regarding your use of our services. These Terms are a binding agreement between us, Cloudfleet GmbH (&amp;ldquo;Cloudfleet&amp;rdquo;, &amp;ldquo;us&amp;rdquo;, &amp;ldquo;we&amp;rdquo;, &amp;ldquo;Company&amp;rdquo;), and You, whether you are an individual user, or a user on behalf of a company or team (&amp;ldquo;User&amp;rdquo;, &amp;ldquo;Customer&amp;rdquo;, &amp;ldquo;You&amp;rdquo;).&lt;/p></description></item><item><title>Terms of Service - Affiliate Program</title><link>https://cloudfleet.ai/company/affiliate-terms-of-service/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://cloudfleet.ai/company/affiliate-terms-of-service/</guid><description>&lt;p>&lt;strong>Last Updated:&lt;/strong> March 11, 2025&lt;/p>
&lt;p>&lt;strong>PLEASE READ THESE REFERRAL PROGRAM TERMS (THIS “AGREEMENT”) CAREFULLY. BY APPLYING TO OR PARTICIPATING IN THE AFFILIATE REFERRAL PROGRAM (THE “PROGRAM”) AND/OR BY CLICKING A BUTTON AND/OR CHECKING A BOX MARKED “CONFIRM,” “I AGREE,” OR SOMETHING TO THAT EFFECT, YOU (AS DEFINED BELOW) SIGNIFY THAT YOU HAVE READ, UNDERSTOOD, AND AGREE TO BE BOUND BY THIS AGREEMENT, INCLUDING ALL TERMS INCORPORATED HEREIN BY REFERENCE. NOTE THAT THIS AGREEMENT CONTAINS A BINDING ARBITRATION CLAUSE IN SECTION 12.2 (THE “ARBITRATION AGREEMENT”) AND A CLASS ACTION/JURY TRIAL WAIVER CLAUSE IN SECTION 12.3 (THE “CLASS ACTION/JURY TRIAL WAIVER”) THAT REQUIRE, UNLESS YOU OPT OUT PURSUANT TO THE INSTRUCTIONS IN THE ARBITRATION AGREEMENT, THE EXCLUSIVE USE OF FINAL AND BINDING ARBITRATION ON AN INDIVIDUAL BASIS TO RESOLVE DISPUTES BETWEEN YOU AND US, INCLUDING ANY CLAIMS THAT AROSE OR WERE ASSERTED BEFORE YOU AGREED TO THIS AGREEMENT. TO THE FULLEST EXTENT PERMITTED BY APPLICABLE LAWS (AS DEFINED BELOW), YOU EXPRESSLY WAIVE YOUR RIGHT TO SEEK RELIEF IN A COURT OF LAW AND TO HAVE A JURY TRIAL ON YOUR CLAIMS, AS WELL AS YOUR RIGHT TO PARTICIPATE AS A PLAINTIFF OR CLASS MEMBER IN ANY CLASS, COLLECTIVE, PRIVATE ATTORNEY GENERAL, OR REPRESENTATIVE ACTION OR PROCEEDING. IF YOU DO NOT AGREE TO THIS AGREEMENT, THEN DO NOT PARTICIPATE (OR CONTINUE TO PARTICIPATE) IN THE PROGRAM.&lt;/strong>&lt;/p></description></item><item><title>Understanding CUDA: Harnessing the Power of GPU Computing</title><link>https://cloudfleet.ai/blog/cloud-native-how-to/2023-03-understanding-cuda-harnessing-the-power-of-gpu-computing/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://cloudfleet.ai/blog/cloud-native-how-to/2023-03-understanding-cuda-harnessing-the-power-of-gpu-computing/</guid><description>&lt;p>In the realm of high-performance computing and parallel processing, CUDA (Compute Unified Device Architecture) is a name that shines brightly. 
Developed by NVIDIA, CUDA is a parallel computing platform and application programming interface (API) that allows developers to tap into the immense computational power of NVIDIA GPUs (Graphics Processing Units). In this blog post, we&amp;rsquo;ll delve into what CUDA is, how it works, and where it is used.&lt;/p></description></item></channel></rss>