Version: Develop

Setting Up the Klutch Control Plane Cluster

The Control Plane Cluster is the central management hub for Klutch. It hosts the core logic for service orchestration, policy enforcement, and API exposure across all connected infrastructure.

This guide establishes the minimal viable foundation for the Control Plane by deploying two essential components: Crossplane® for orchestration and the klutch-bind backend for secure API access. Subsequent guides will cover connecting specific Automation Backends.

Prerequisites

Before beginning, ensure your environment meets the following requirements:

  • Kubernetes Cluster: A running cluster to host the Control Plane.
  • Inbound Network Access: App Clusters must be able to reach the Control Plane API. This guide assumes a Gateway API implementation (e.g., Envoy Gateway) and cert-manager are installed. Although this guide relies on these specific components, you may instead expose the backend via a Gateway managed by a different controller, or via an Ingress managed by any controller. Certificate strategies other than cert-manager are also compatible with Klutch.
cert-manager and Gateway API

When using cert-manager together with the Gateway API, an additional step is required to enable the Gateway API compatibility mode in cert-manager.

  • For versions prior to v1.15, this requires enabling the ExperimentalGatewayAPISupport feature gate.

    When deploying cert-manager using the Helm chart
    helm upgrade --install cert-manager jetstack/cert-manager \
    --namespace cert-manager \
    --set "extraArgs={--feature-gates=ExperimentalGatewayAPISupport=true}"
    When deploying cert-manager using static manifests
    kubectl patch deployment cert-manager -n cert-manager --type='json' \
    -p='[{"op":"add","path":"/spec/template/spec/containers/0/args/-","value":"--feature-gates=ExperimentalGatewayAPISupport=true"}]'
  • Starting with v1.15, the feature gate is enabled by default, but an additional flag must be set to enable the feature.

    When deploying cert-manager using the Helm chart
    helm upgrade --install cert-manager oci://quay.io/jetstack/charts/cert-manager \
    --namespace cert-manager \
    --set config.apiVersion="controller.config.cert-manager.io/v1alpha1" \
    --set config.kind="ControllerConfiguration" \
    --set config.enableGatewayAPI=true
    When deploying cert-manager using static manifests
    kubectl patch deployment cert-manager -n cert-manager --type='json' \
    -p='[{"op":"add","path":"/spec/template/spec/containers/0/args/-","value":"--enable-gateway-api"}]'
Security Best Practice

The Control Plane stores sensitive information (provider secrets, automation backend credentials) in Kubernetes Secrets.

Enable Encryption at Rest on this cluster to ensure data security.
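As an illustration, on self-managed clusters encryption at rest for Secrets is typically enabled by passing an EncryptionConfiguration file to the Kubernetes API server via its --encryption-provider-config flag. The sketch below is a hypothetical minimal configuration (the file path and key name are placeholders); the exact procedure varies by distribution, and managed Kubernetes offerings usually expose this as a cluster setting instead:

```shell
# Hypothetical minimal EncryptionConfiguration; path and key name are
# placeholders. Consult your distribution's documentation for how to
# reference this file from the API server.
cat > /tmp/encryption-config.yaml <<'EOF'
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources: ["secrets"]
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: "<base64-encoded-32-byte-key>"
      - identity: {}   # fallback so existing plaintext Secrets remain readable
EOF
```

With this in place, new and updated Secrets are stored AES-CBC encrypted in etcd; existing Secrets are re-encrypted the next time they are written.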

1. Install Crossplane®

Klutch relies on Crossplane® to manage infrastructure resources.

Crossplane® Version Support

Klutch supports Crossplane® v1.17 through v1.20; v2.x is not yet supported.

Install Crossplane® using Helm, ensuring the Server-Side Apply (SSA) argument is enabled:

helm repo add crossplane-stable https://charts.crossplane.io/stable
helm repo update

helm install crossplane \
crossplane-stable/crossplane \
--version 1.20.0 \
--namespace crossplane-system \
--create-namespace \
--set args='{"--enable-ssa-claims"}' \
--wait

Verify that Crossplane® is running before proceeding:

kubectl wait --for=condition=Ready pod --all -n crossplane-system --timeout=120s

2. Deploy klutch-bind Backend

The klutch-bind backend is a core Klutch component that manages the cluster binding process using OpenID Connect (OIDC) and exposes the APIs that App Clusters use to consume services.

2.1 Choose Backend Hostname

The klutch-bind backend requires a publicly accessible URL.

Choose a DNS name (for example, klutch.example.com) that App Clusters will use to reach the backend. You will map this name to the external address after the backend is deployed.

This DNS name is used as <backend-host> in the following steps.
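To keep later steps consistent, it can help to capture the chosen hostname once in a shell variable. A small sketch (klutch.example.com is a placeholder; the variable name is illustrative):

```shell
# Placeholder hostname; substitute your real DNS name.
BACKEND_HOST=klutch.example.com

# The same value feeds the OIDC callback URL in Step 2.2 and the
# Gateway/HTTPRoute hostnames in the installation manifest.
echo "Backend URL:  https://${BACKEND_HOST}"
echo "Callback URL: https://${BACKEND_HOST}:443/callback"
```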

2.2 Configure Identity Provider (OIDC)

Register the backend as a client in your Identity Provider (IdP).

  1. Create Client: Create a new OIDC client in your IdP.

  2. Configure Callback: Set the Valid Redirect URI / Callback URL to: https://<backend-host>:443/callback

  3. Verify IdP Reachability: Confirm that the OIDC discovery endpoint is accessible from your environment. This step ensures your Identity Provider is correctly configured before you attempt the Klutch deployment.

    Run the following command, replacing <oidc-issuer-url> with your specific provider URL:

    curl -i https://<oidc-issuer-url>/.well-known/openid-configuration

    Expected Result: A successful request returns an HTTP 200 OK and a JSON document containing configuration metadata (e.g., issuer, authorization_endpoint, and token_endpoint) similar to the following:

    {
    "issuer": "https://idp.example.com/realms/klutch",
    "authorization_endpoint": "https://idp.example.com/realms/klutch/protocol/openid-connect/auth",
    "token_endpoint": "https://idp.example.com/realms/klutch/protocol/openid-connect/token",
    "jwks_uri": "https://idp.example.com/realms/klutch/protocol/openid-connect/certs",
    ...
    }
    Stop

    Do not proceed to Section 2.3 until this command returns a valid JSON response. If the request fails (e.g., 404 Not Found, SSL Error, or Timeout), double-check your Realm/Client path and ensure your network allows outbound traffic to the IdP.
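As an additional sanity check, you can extract the issuer field from the discovery document and compare it with the issuer URL you plan to configure. The sketch below runs against an inline sample instead of a live IdP; in practice, pipe the curl output through a JSON tool such as jq (the sed expression here is a crude single-field extraction, not a JSON parser):

```shell
# Sample discovery document standing in for the live curl response.
DISCOVERY='{"issuer":"https://idp.example.com/realms/klutch","token_endpoint":"https://idp.example.com/realms/klutch/protocol/openid-connect/token"}'

# Crude extraction of the "issuer" field; prefer `jq -r .issuer` when available.
echo "$DISCOVERY" | sed -n 's/.*"issuer":"\([^"]*\)".*/\1/p'
# prints https://idp.example.com/realms/klutch
```

The printed issuer must match the <oidc-issuer-client-url> value you use in Section 2.3, or token validation will fail.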

Keycloak Example

For a comprehensive walkthrough using Keycloak, including OIDC registration and example configuration values, refer to the Keycloak Setup Guide. When following that guide, use an exposure mechanism that supports upstream TLS (such as the recommended Envoy Gateway controller).

2.3 Deploy klutch-bind Backend

A. Install CRDs

Apply the Custom Resource Definitions (CRDs) that enable the Klutch Control Plane to expose Klutch APIs and make data services available for binding by App Clusters.

kubectl apply -f https://anynines-artifacts.s3.eu-central-1.amazonaws.com/central-management/v1.5.0/crds.yaml

B. Gather Configuration Values

Obtain or generate the values required to fill out the installation manifest.

  • Signing/Encryption Keys (signing-key, encryption-key): Run openssl rand -base64 32 twice to generate two distinct strings for securing cookies.
  • Cluster CA (<certificate>): The cluster's CA certificate, Base64-encoded. Retrieve it with kubectl config view --minify --raw -o jsonpath='{.clusters[0].cluster.certificate-authority-data}'.
  • API Address (<kubernetes-api-external-url>): The Kubernetes API URL, which must be reachable by App Clusters; used as k8s-api-url in the cluster-config Secret. Retrieve it with kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'.
  • OIDC Credentials: The Client ID (<oidc-issuer-client-id>), Client Secret (<oidc-issuer-client-secret>), and Issuer URL (<oidc-issuer-client-url>) obtained in Step 2.2.
  • Backend Host (<backend-host>): The DNS name chosen in Step 2.1 that App Clusters will use to reach the backend.
  • ACME Email (<Add-your-email-here>): Your email address, required for Let's Encrypt certificate registration.
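The two cookie keys can be generated directly in the shell; a short sketch (the variable names are illustrative):

```shell
# Generate two distinct 32-byte random keys, Base64-encoded (44 characters each).
SIGNING_KEY="$(openssl rand -base64 32)"
ENCRYPTION_KEY="$(openssl rand -base64 32)"

echo "signing-key:    ${SIGNING_KEY}"
echo "encryption-key: ${ENCRYPTION_KEY}"
```

Use different values for the two keys; do not reuse one string for both.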

C. Apply the Manifest

  1. Copy the manifest below and save it as klutch-bind-backend.yaml.
  2. Replace all placeholders (marked with < >) with the values gathered above.
  3. Apply the configuration.
View klutch-bind Manifest (klutch-bind-backend.yaml)
apiVersion: v1
kind: Namespace
metadata:
  name: bind
---
apiVersion: v1
kind: Secret
metadata:
  name: cookie-config
  namespace: bind
type: Opaque
stringData:
  signing-key: "<signing-key>"
  encryption-key: "<encryption-key>"
---
apiVersion: v1
kind: Secret
metadata:
  name: k8sca
  namespace: bind
type: Opaque
stringData:
  ca: |
    <certificate>
---
apiVersion: v1
kind: Secret
metadata:
  name: oidc-config
  namespace: bind
type: Opaque
stringData:
  oidc-issuer-client-id: "<oidc-issuer-client-id>"
  oidc-issuer-client-secret: "<oidc-issuer-client-secret>"
  oidc-issuer-url: "<oidc-issuer-client-url>"
  oidc-callback-url: "https://<backend-host>:443/callback"
---
apiVersion: v1
kind: Secret
metadata:
  name: cluster-config
  namespace: bind
type: Opaque
stringData:
  k8s-api-url: "<kubernetes-api-external-url>"
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: klutch-bind-backend
  namespace: bind
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: kube-binder
rules:
  - apiGroups: ["kube-bind.io"]
    resources: ["apiserviceexportrequests"]
    verbs: ["create", "delete", "patch", "update", "get", "list", "watch"]
  - apiGroups: [""]
    resources: ["namespaces"]
    verbs: ["get"]
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get", "watch", "list"]
  - apiGroups: ["kube-bind.io"]
    resources: ["clusterbindings", "clusterbindings/status", "apiserviceexports", "apiserviceexports/status"]
    verbs: ["get", "watch", "list", "patch", "update"]
  - apiGroups: ["kube-bind.io"]
    resources: ["apiservicenamespaces"]
    verbs: ["create", "delete", "patch", "update", "get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: klutch-bind-backend
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: klutch-bind-backend
    namespace: bind
---
apiVersion: v1
kind: Service
metadata:
  name: klutch-bind-backend
  namespace: bind
spec:
  type: ClusterIP
  ports:
    - protocol: TCP
      name: klutch-bind-backend
      port: 443
      targetPort: 9443
  selector:
    app: klutch-bind-backend
---
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: letsencrypt
  namespace: bind
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: <Add-your-email-here>
    privateKeySecretRef:
      name: letsencrypt
    solvers:
      - http01:
          gatewayHTTPRoute:
            parentRefs:
              - name: klutch-bind-backend-gateway
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: klutch-bind-backend
  namespace: bind
  labels:
    app: klutch-bind-backend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: klutch-bind-backend
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: klutch-bind-backend
    spec:
      serviceAccountName: klutch-bind-backend
      securityContext:
        runAsNonRoot: true
        runAsUser: 1000
        fsGroup: 1000
      containers:
        - name: klutch-bind-backend
          image: public.ecr.aws/w5n9a2g2/anynines/kubebind-backend:v1.5.0
          securityContext:
            allowPrivilegeEscalation: false
          args:
            - --namespace-prefix=cluster
            - --pretty-name=anynines
            - --consumer-scope=Namespaced
            - --oidc-issuer-client-id=$(OIDC-ISSUER-CLIENT-ID)
            - --oidc-issuer-client-secret=$(OIDC-ISSUER-CLIENT-SECRET)
            - --oidc-issuer-url=$(OIDC-ISSUER-URL)
            - --oidc-callback-url=$(OIDC-CALLBACK-URL)
            - --listen-address=0.0.0.0:9443
            - --cookie-signing-key=$(COOKIE-SIGNING-KEY)
            - --cookie-encryption-key=$(COOKIE-ENCRYPTION-KEY)
            - --external-address=$(K8S-API-URL)
            - --external-ca-file=/certa/ca
          ports:
            - name: https
              containerPort: 9443
          livenessProbe:
            tcpSocket:
              port: 9443
            initialDelaySeconds: 15
            periodSeconds: 20
          readinessProbe:
            tcpSocket:
              port: 9443
            initialDelaySeconds: 5
            periodSeconds: 10
          env:
            - name: K8S-API-URL
              valueFrom:
                secretKeyRef:
                  name: cluster-config
                  key: k8s-api-url
            - name: OIDC-ISSUER-CLIENT-ID
              valueFrom:
                secretKeyRef:
                  name: oidc-config
                  key: oidc-issuer-client-id
            - name: OIDC-ISSUER-CLIENT-SECRET
              valueFrom:
                secretKeyRef:
                  name: oidc-config
                  key: oidc-issuer-client-secret
            - name: OIDC-ISSUER-URL
              valueFrom:
                secretKeyRef:
                  name: oidc-config
                  key: oidc-issuer-url
            - name: OIDC-CALLBACK-URL
              valueFrom:
                secretKeyRef:
                  name: oidc-config
                  key: oidc-callback-url
            - name: COOKIE-SIGNING-KEY
              valueFrom:
                secretKeyRef:
                  name: cookie-config
                  key: signing-key
            - name: COOKIE-ENCRYPTION-KEY
              valueFrom:
                secretKeyRef:
                  name: cookie-config
                  key: encryption-key
          resources:
            limits:
              cpu: "2"
              memory: 2Gi
            requests:
              cpu: "100m"
              memory: 256Mi
          volumeMounts:
            - name: ca
              mountPath: /certa/
      volumes:
        - name: oidc-config
          secret:
            secretName: oidc-config
        - name: cookie-config
          secret:
            secretName: cookie-config
        - name: ca
          secret:
            secretName: k8sca
            items:
              - key: ca
                path: ca
---
apiVersion: gateway.networking.k8s.io/v1
kind: GatewayClass
metadata:
  name: klutch-bind-backend-envoy-gateway
spec:
  controllerName: gateway.envoyproxy.io/gatewayclass-controller
---
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: klutch-bind-backend-gateway
  namespace: bind
  annotations:
    cert-manager.io/issuer: letsencrypt
spec:
  gatewayClassName: klutch-bind-backend-envoy-gateway
  listeners:
    - name: http
      hostname: "<backend-host>"
      protocol: HTTP
      port: 80
    - name: https
      hostname: "<backend-host>"
      protocol: HTTPS
      port: 443
      tls:
        mode: Terminate
        certificateRefs:
          - kind: Secret
            group: ""
            name: klutch-bind-backend-tls # cert-manager will create/update this Secret
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: klutch-bind-backend
  namespace: bind
spec:
  hostnames:
    - "<backend-host>"
  parentRefs:
    - name: klutch-bind-backend-gateway
      sectionName: https
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /
      backendRefs:
        - name: klutch-bind-backend
          port: 443
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: klutch-bind-backend-redirect
  namespace: bind
spec:
  hostnames:
    - "<backend-host>"
  parentRefs:
    - name: klutch-bind-backend-gateway
      sectionName: http
  rules:
    - filters:
        - type: RequestRedirect
          requestRedirect:
            scheme: https
            statusCode: 308

Apply the file you edited:

kubectl apply -f klutch-bind-backend.yaml
RBAC Configuration

This manifest defines the necessary RBAC privileges for Klutch. We recommend auditing these rules against your internal security standards prior to production use.

2.4 Configure External Access

A. Obtain External Address

Retrieve the public IP or hostname assigned to the component that exposes traffic to your cluster. If you are using Envoy Gateway as in our reference, retrieve the address from the Gateway status:

kubectl get gateway -n bind klutch-bind-backend-gateway

Locate the ADDRESS in the output.

B. Configure DNS

Create a DNS record that points the chosen <backend-host> name to the external address. Use the appropriate record type (A record for IPs, CNAME for hostnames).

Example configuration: AWS Route 53 with a CNAME record
  • Record name: klutch.example.com (<backend-host>)
  • Record type: CNAME
  • Value: abc123...elb.eu-central-1.amazonaws.com (external hostname of the exposure mechanism)
  • TTL: 300
  • Routing policy: Simple
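Before moving on, confirm the record resolves from your workstation. A minimal sketch using the system resolver (getent is available on most Linux systems; substitute your actual <backend-host>, and note that a new record can take up to its TTL to propagate):

```shell
# Substitute your real <backend-host>; klutch.example.com is a placeholder
# and is not expected to resolve.
BACKEND_HOST=klutch.example.com
getent hosts "$BACKEND_HOST" || echo "DNS record for $BACKEND_HOST not resolvable yet"
```

On macOS, dscacheutil -q host -a name <backend-host> or dig +short <backend-host> can be used instead.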

2.5 Verification

1. Verify Backend Pods: Ensure the backend pod starts successfully:

kubectl wait --for=condition=Ready pod -l app=klutch-bind-backend -n bind --timeout=120s

2. Verify External Access: Confirm that the Gateway has acquired an external IP/hostname and that the TLS secret was created:

kubectl get gateway klutch-bind-backend-gateway -n bind
  • ADDRESS: Must show an IP or Hostname.
  • PROGRAMMED: Must be True.

Next Steps: Connect Automation Backends

The Control Plane is now deployed, but it does not yet have any resources to offer. An Automation Backend must now be connected to define where and how data services are provisioned.

Select your target infrastructure to proceed:

  • To add managed cloud services like AWS RDS and S3 to your service catalog, configure the AWS Cloud backend.
  • To add VM-based data services (e.g., a9s PostgreSQL, a9s Messaging) or Kubernetes-based data services (e.g., a8s PostgreSQL), configure the anynines backend.