Setting Up the Klutch Control Plane Cluster
The Control Plane Cluster is the central management hub for Klutch. It hosts the core logic for service orchestration, policy enforcement, and API exposure across all connected infrastructure.
This guide establishes the minimal viable foundation for the Control Plane by deploying two essential components: Crossplane® for orchestration and the klutch-bind backend for secure API access. Subsequent guides will cover connecting specific Automation Backends.
Prerequisites
Before beginning, ensure your environment meets the following requirements:
- Kubernetes Cluster: A running cluster to host the Control Plane.
- Inbound Network Access: App Clusters must reach the Control Plane API. This guide assumes an Ingress Controller (e.g., Ingress Nginx) and Cert-Manager are installed. Although this guide relies on these specific components, Klutch supports alternative exposure mechanisms and certificate strategies.
- Tools:
- Helm (v3.2.0+)
- kubectl (v1.14+)
- Crossplane CLI (v1.17.0+)
The Control Plane stores sensitive information (provider secrets, automation backend credentials) in Kubernetes Secrets.
Enable Encryption at Rest on this cluster to ensure data security.
1. Install Crossplane®
Klutch relies on Crossplane® to manage infrastructure resources.
Klutch supports Crossplane® v1.17 through v1.20; v2.x is not yet supported.
Install Crossplane® using Helm, ensuring the Server-Side Apply (SSA) argument is enabled:
helm repo add crossplane-stable https://charts.crossplane.io/stable
helm repo update
helm install crossplane \
crossplane-stable/crossplane \
--version 1.20.0 \
--namespace crossplane-system \
--create-namespace \
--set args='{"--enable-ssa-claims"}' \
--wait
Verify that Crossplane® is running before proceeding:
kubectl wait --for=condition=Ready pod --all -n crossplane-system --timeout=120s
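To confirm that the Server-Side Apply flag was actually applied, you can inspect the deployment's container arguments. This is an optional sanity check; it assumes the default deployment name `crossplane` produced by the Helm release above:

```shell
# Print the crossplane container args and look for the SSA claims flag.
kubectl get deployment crossplane -n crossplane-system \
  -o jsonpath='{.spec.template.spec.containers[0].args}' \
  | grep -q -- '--enable-ssa-claims' && echo "SSA claims enabled"
```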
2. Deploy klutch-bind Backend
The klutch-bind backend is a core Klutch component that manages the cluster binding process using OpenID Connect (OIDC) and exposes the APIs that App Clusters use to consume services.
2.1 Prepare klutch-bind Network
The klutch-bind backend requires a publicly accessible URL.
A. Obtain External Address
Retrieve the public IP or hostname assigned to the component that exposes traffic to your cluster. If you are using the Ingress Nginx controller as in our reference, run:
kubectl get svc -n ingress-nginx ingress-nginx-controller
Locate the EXTERNAL-IP in the output.
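If you prefer to capture the address directly, a jsonpath query can pull it out. Whether the IP or the hostname field is populated depends on your cloud provider's load balancer; the variable name below is illustrative:

```shell
# Exactly one of .ip or .hostname is typically set; printing both
# concatenated yields whichever the load balancer provides.
EXTERNAL_ADDR="$(kubectl get svc -n ingress-nginx ingress-nginx-controller \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}{.status.loadBalancer.ingress[0].hostname}')"
echo "External address: ${EXTERNAL_ADDR}"
```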
B. Configure DNS
Create a DNS record that points the desired domain to the external address. Use the appropriate record type (A record for IPs, CNAME for hostnames).
This DNS name is used as <backend-host> in the next step and in the manifest below.
Example configuration: AWS Route 53 with a CNAME record
- Record name: `klutch.example.com` (`<backend-host>`)
- Record type: CNAME
- Value: `abc123...elb.eu-central-1.amazonaws.com` (external hostname of the exposure mechanism)
- TTL: 300
- Routing policy: Simple
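Before continuing, it is worth checking that the record resolves. A minimal sketch using `dig` (replace `<backend-host>` with your DNS name; propagation may take up to the TTL you configured):

```shell
# Expect the configured IP (A record) or CNAME target in the output.
dig +short <backend-host>
```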
2.2 Configure Identity Provider (OIDC)
Register the backend as a client in your Identity Provider (IdP).
1. Create Client: Create a new OIDC client in your IdP.
2. Configure Callback: Set the Valid Redirect URI / Callback URL to `https://<backend-host>:443/callback`.
3. Verify IdP Reachability: Confirm that the OIDC discovery endpoint is accessible from your environment. This step ensures your Identity Provider is correctly configured before you attempt the Klutch deployment. Run the following command, replacing `<oidc-issuer-url>` with your specific provider URL:

   ```shell
   curl -i https://<oidc-issuer-url>/.well-known/openid-configuration
   ```

   Expected Result: A successful request returns HTTP 200 OK and a JSON document containing configuration metadata (e.g., `issuer`, `authorization_endpoint`, and `token_endpoint`) similar to the following:

   ```json
   {
     "issuer": "https://idp.example.com/realms/klutch",
     "authorization_endpoint": "https://idp.example.com/realms/klutch/protocol/openid-connect/auth",
     "token_endpoint": "https://idp.example.com/realms/klutch/protocol/openid-connect/token",
     "jwks_uri": "https://idp.example.com/realms/klutch/protocol/openid-connect/certs",
     ...
   }
   ```

Stop: Do not proceed to Section 2.3 until this command returns a valid JSON response. If the request fails (e.g., 404 Not Found, SSL error, or timeout), double-check your Realm/Client path and ensure your network allows outbound traffic to the IdP.
For a comprehensive walkthrough using Keycloak, including OIDC registration and example configuration values, refer to the Keycloak Setup Guide.
2.3 Deploy klutch-bind Backend
A. Install CRDs
Apply the Custom Resource Definitions (CRDs) that enable the Klutch Control Plane to expose Klutch APIs and make data services available for binding by App Clusters.
kubectl apply -f https://anynines-artifacts.s3.eu-central-1.amazonaws.com/central-management/v1.5.0/crds.yaml
B. Gather Configuration Values
Obtain or generate the values required to fill out the installation manifest.
| Value | Command / Source | Description |
|---|---|---|
| Signing/Encryption Keys | `openssl rand -base64 32` (run twice) | Generate two distinct strings for securing cookies (`signing-key` and `encryption-key`). |
| Cluster CA (`<certificate>`) | `kubectl config view --minify --raw -o jsonpath='{.clusters[0].cluster.certificate-authority-data}'` | Control Plane Cluster CA certificate (Base64). |
| API Address (`<kubernetes-api-external-url>`) | `kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'` | The Kubernetes API URL. Must be reachable by App Clusters. Used as `k8s-api-url` in the `cluster-config` Secret. |
| OIDC Credentials | Obtained in Step 2.2 | Client ID (`<oidc-issuer-client-id>`), Secret (`<oidc-issuer-client-secret>`), and Issuer URL (`<oidc-issuer-client-url>`). |
| Backend Host (`<backend-host>`) | Determined in Step 2.1 | The DNS name App Clusters will use to reach the backend. |
| ACME Email (`<Add-your-email-here>`) | Your email address | Required for Let's Encrypt certificate registration. |
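The two cookie keys can be generated in one step. A small sketch (the variable names are illustrative; paste the printed values into the manifest placeholders):

```shell
# Generate two distinct 32-byte random keys, base64-encoded, for cookie
# signing and encryption.
signing_key="$(openssl rand -base64 32)"
encryption_key="$(openssl rand -base64 32)"
echo "signing-key:    ${signing_key}"
echo "encryption-key: ${encryption_key}"
```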
C. Apply the Manifest
1. Copy the manifest below and save it as `klutch-bind-backend.yaml`.
2. Replace all placeholders (marked with `< >`) with the values gathered above.
3. Apply the configuration.
View klutch-bind Manifest (klutch-bind-backend.yaml)
apiVersion: v1
kind: Namespace
metadata:
name: bind
---
apiVersion: v1
kind: Secret
metadata:
name: cookie-config
namespace: bind
type: Opaque
stringData:
signing-key: "<signing-key>"
encryption-key: "<encryption-key>"
---
apiVersion: v1
kind: Secret
metadata:
name: k8sca
namespace: bind
type: Opaque
data:
ca: |
<certificate>
---
apiVersion: v1
kind: Secret
metadata:
name: oidc-config
namespace: bind
type: Opaque
stringData:
oidc-issuer-client-id: "<oidc-issuer-client-id>"
oidc-issuer-client-secret: "<oidc-issuer-client-secret>"
oidc-issuer-url: "<oidc-issuer-client-url>"
oidc-callback-url: "https://<backend-host>:443/callback"
---
apiVersion: v1
kind: Secret
metadata:
name: cluster-config
namespace: bind
type: Opaque
stringData:
k8s-api-url: "<kubernetes-api-external-url>"
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: klutch-bind-backend
namespace: bind
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: kube-binder
rules:
- apiGroups: ["kube-bind.io"]
resources: ["apiserviceexportrequests"]
verbs: ["create", "delete", "patch", "update", "get", "list", "watch"]
- apiGroups: [""]
resources: ["namespaces"]
verbs: ["get"]
- apiGroups: [""]
resources: ["secrets"]
verbs: ["get", "watch", "list"]
- apiGroups: ["kube-bind.io"]
resources: ["clusterbindings", "clusterbindings/status", "apiserviceexports", "apiserviceexports/status"]
verbs: ["get", "watch", "list", "patch", "update"]
- apiGroups: ["kube-bind.io"]
resources: ["apiservicenamespaces"]
verbs: ["create", "delete", "patch", "update", "get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: klutch-bind-backend
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-admin
subjects:
- kind: ServiceAccount
name: klutch-bind-backend
namespace: bind
---
apiVersion: v1
kind: Service
metadata:
name: klutch-bind-backend
namespace: bind
spec:
type: ClusterIP
ports:
- protocol: TCP
name: klutch-bind-backend
port: 443
targetPort: 9443
selector:
app: klutch-bind-backend
---
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
name: letsencrypt
namespace: bind
spec:
acme:
server: https://acme-v02.api.letsencrypt.org/directory
email: <Add-your-email-here>
privateKeySecretRef:
name: letsencrypt
solvers:
- http01:
ingress:
class: nginx
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: klutch-bind-backend
namespace: bind
labels:
app: klutch-bind-backend
spec:
replicas: 1
selector:
matchLabels:
app: klutch-bind-backend
strategy:
type: Recreate
template:
metadata:
labels:
app: klutch-bind-backend
spec:
serviceAccountName: klutch-bind-backend
securityContext:
runAsNonRoot: true
runAsUser: 1000
fsGroup: 1000
containers:
- name: klutch-bind-backend
image: public.ecr.aws/w5n9a2g2/anynines/kubebind-backend:v1.5.0
securityContext:
allowPrivilegeEscalation: false
args:
- --namespace-prefix=cluster
- --pretty-name=anynines
- --consumer-scope=Namespaced
- --oidc-issuer-client-id=$(OIDC-ISSUER-CLIENT-ID)
- --oidc-issuer-client-secret=$(OIDC-ISSUER-CLIENT-SECRET)
- --oidc-issuer-url=$(OIDC-ISSUER-URL)
- --oidc-callback-url=$(OIDC-CALLBACK-URL)
- --listen-address=0.0.0.0:9443
- --cookie-signing-key=$(COOKIE-SIGNING-KEY)
- --cookie-encryption-key=$(COOKIE-ENCRYPTION-KEY)
- --external-address=$(K8S-API-URL)
- --external-ca-file=/certa/ca
ports:
- name: https
containerPort: 9443
livenessProbe:
tcpSocket:
port: 9443
initialDelaySeconds: 15
periodSeconds: 20
readinessProbe:
tcpSocket:
port: 9443
initialDelaySeconds: 5
periodSeconds: 10
env:
- name: K8S-API-URL
valueFrom:
secretKeyRef:
name: cluster-config
key: k8s-api-url
- name: OIDC-ISSUER-CLIENT-ID
valueFrom:
secretKeyRef:
name: oidc-config
key: oidc-issuer-client-id
- name: OIDC-ISSUER-CLIENT-SECRET
valueFrom:
secretKeyRef:
name: oidc-config
key: oidc-issuer-client-secret
- name: OIDC-ISSUER-URL
valueFrom:
secretKeyRef:
name: oidc-config
key: oidc-issuer-url
- name: OIDC-CALLBACK-URL
valueFrom:
secretKeyRef:
name: oidc-config
key: oidc-callback-url
- name: COOKIE-SIGNING-KEY
valueFrom:
secretKeyRef:
name: cookie-config
key: signing-key
- name: COOKIE-ENCRYPTION-KEY
valueFrom:
secretKeyRef:
name: cookie-config
key: encryption-key
resources:
limits:
cpu: "2"
memory: 2Gi
requests:
cpu: "100m"
memory: 256Mi
volumeMounts:
- name: ca
mountPath: /certa/
volumes:
- name: oidc-config
secret:
secretName: oidc-config
- name: cookie-config
secret:
secretName: cookie-config
- name: ca
secret:
secretName: k8sca
items:
- key: ca
path: ca
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: klutch-bind-backend
namespace: bind
annotations:
cert-manager.io/issuer: letsencrypt
nginx.ingress.kubernetes.io/ssl-redirect: "true"
nginx.ingress.kubernetes.io/proxy-buffer-size: "16k"
spec:
ingressClassName: nginx
tls:
- secretName: klutch-bind-backend-tls
hosts:
- "<backend-host>"
rules:
- host: "<backend-host>"
http:
paths:
- pathType: Prefix
path: "/"
backend:
service:
name: klutch-bind-backend
port:
number: 443
Apply the file you edited:
kubectl apply -f klutch-bind-backend.yaml
This manifest defines the necessary RBAC privileges for Klutch. We recommend auditing these rules against your internal security standards prior to production use.
D. Verification
1. Verify Backend Pods: Ensure the backend pod starts successfully:
kubectl wait --for=condition=Ready pod -l app=klutch-bind-backend -n bind --timeout=120s
2. Verify External Access: Confirm the Ingress resource has acquired an external IP/hostname and a valid TLS certificate:
kubectl get ingress klutch-bind-backend -n bind
- ADDRESS: Must show an IP or hostname.
- HOSTS: Must match the configured `<backend-host>`.
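As a final end-to-end check, you can confirm that the backend answers over HTTPS with a valid certificate (replace `<backend-host>`; Let's Encrypt issuance can take a minute or two after the Ingress is created):

```shell
# A connection or TLS handshake error points to DNS, Ingress, or
# cert-manager problems; any HTTP status code means the path is working.
curl -sS -o /dev/null -w '%{http_code}\n' "https://<backend-host>/"
```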
Next Steps: Connect Automation Backends
The Control Plane is now deployed, but it does not yet have any resources to offer. An Automation Backend must now be connected to define where and how data services are provisioned.
Select your target infrastructure to proceed:
- To add managed cloud services like AWS RDS and S3 to your service catalog, configure the AWS Cloud backend.
- To add VM-based data services (e.g., a9s PostgreSQL, a9s Messaging) or Kubernetes-based data services (e.g., a8s PostgreSQL), configure the anynines backend.