This section covers all supported ways to install Konflux. Use the table below to find the right guide for your situation:
| My situation | Guide |
|---|---|
| Quick local development | Local Deployment (Kind) |
| Existing OpenShift cluster | Installing on OpenShift |
| Any Kubernetes cluster, building the operator from source | Building and Installing from Source |
| Any Kubernetes cluster, release bundle | Installing from Release |
| Any Kubernetes cluster with OLM installed | Installing from OLM |
Make sure the following conditions are met:

- The kubectl command-line tool is configured to communicate with your cluster.

The table below compares the default resource footprint of a local (Kind) deployment with a production deployment:

| Resource | Local development (Kind) | Production |
|---|---|---|
| Replicas | 1 per component | 2–3 per component (HA) |
| CPU request | ~30m per component | 100m+ per component |
| Memory request | ~128Mi per component | 256Mi+ per component |
| Host RAM | 8–16 GB | Based on load |
| Host CPU | 4 cores minimum | Based on load |
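As a rough sanity check on the figures above, total resource requests scale linearly with the number of deployed components. A minimal sketch (the component count `N=20` is an assumed illustration, not a documented value; the per-component figures are the local-development defaults from the table):

```shell
# Rough sizing estimate for a local (Kind) deployment.
# N is an assumed component count for illustration only.
N=20
CPU_M=$((N * 30))     # total CPU requests in millicores (~30m each)
MEM_MI=$((N * 128))   # total memory requests in MiB (~128Mi each)
echo "requests: ${CPU_M}m CPU, ${MEM_MI}Mi memory"
```

Actual host sizing must also cover the operating system, Kind's node containers, and workload bursts, which is why the recommended host RAM is well above the sum of the requests alone.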
For local deployments, the KIND_MEMORY_GB setting in scripts/deploy-local.env
controls how much memory is allocated to the Kind cluster (minimum 8, recommended 16
for a full stack).
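For example, the env file might contain an entry like the following (the variable name and file path are from the text above; the value shown is illustrative, and the file's other contents are not reproduced here):

```shell
# scripts/deploy-local.env (excerpt, illustrative)
# Memory in GB allocated to the Kind cluster.
# Minimum 8; 16 recommended for a full stack.
KIND_MEMORY_GB=16
```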
For production, replica counts and resource requests can be tuned via the Konflux CR. See the Resource Management guide for details and examples.
- **Local Deployment (Kind):** Deploying Konflux locally on macOS or Linux using Kind.
- **Installing on OpenShift:** Deploying Konflux on an existing OpenShift cluster using the automated deployment script.
- **Building and Installing from Source:** Building and running the Konflux Operator from source for development or custom deployments.
- **Installing from Release:** Step-by-step guide for installing Konflux from a pre-built release bundle.
- **Installing from OLM:** Installing the Konflux Operator through the Operator Lifecycle Manager (OLM).