[{"body":"The Konflux Operator deploys all Konflux components from a single Konflux Custom Resource (CR). This page covers how to apply a CR and verify that all components are ready.\nCreate the Konflux Custom Resource Apply one of the samples from operator/config/samples/ (or create your own) and wait for Konflux to be ready. See the Examples page for all example configurations.\nDo not use konflux_v1alpha1_konflux.yaml for production — it contains demo users with static passwords intended for local testing only. Use OIDC authentication instead. kubectl apply -f operator/config/samples/\u003cone of the sample files\u003e Verify the Konflux CR is ready Wait for the Ready condition if the deployment is still in progress:\nkubectl wait --for=condition=Ready=True konflux konflux --timeout=15m Check the Konflux CR status. When the Konflux CR is ready, the output includes the UI URL:\nkubectl get konflux konflux NAME READY UI-URL AGE konflux True https://konflux-ui-konflux-ui.apps.\u003ccluster-domain\u003e 10m ","categories":"","description":"How to create a Konflux Custom Resource and verify the installation is ready.","excerpt":"How to create a Konflux Custom Resource and verify the installation is ready.","ref":"/konflux-ci/docs/guides/apply-konflux-cr/","tags":"","title":"Applying the Konflux Custom Resource"},{"body":"This guide walks you through deploying Konflux locally on macOS or Linux using Kind. The automated deploy-local.sh script handles cluster creation, operator deployment, and GitHub integration in a single step.\nIt relies on sourcing the deploy-local.env file, which provides it with environment variables. 
The following steps guide you through setting up the env file and running the script.\nPrerequisites Verify that the following tools are installed:\nTool Minimum version Kind v0.26.0 podman or docker podman v5.3.1 / docker v27.0.1 kubectl v1.31.4 git v2.46 openssl v3.0.13 Verify that the host has at least the following free resources:\nCPU: 4 cores RAM: 8 GB Clone the repository and create a copy of the env file:\ngit clone https://github.com/konflux-ci/konflux-ci.git cd konflux-ci cp scripts/deploy-local.env.template scripts/deploy-local.env A GitHub Application: Konflux uses it to receive webhook events from GitHub, trigger build pipelines on pull requests, and write pipeline status back to the PR. Create the App by following GitHub Application Secrets.\nSetup Once you have cloned the repo, created your copy of the env file, created the GitHub App, and populated its secrets in the env file, refer to Configuration options for a full reference of all available variables. macOS-specific configuration is handled automatically by the script.\nOnce the env file is set, deploy Konflux:\nIf you have both Docker and Podman installed and prefer to use Podman, set the provider explicitly before running the script:\nexport KIND_EXPERIMENTAL_PROVIDER=podman ./scripts/deploy-local.sh The script performs all of the following automatically:\nCreates a Kind cluster with proper resource allocation Increases inotify and PID limits Deploys the Konflux operator (using the method set by OPERATOR_INSTALL_METHOD) Applies the Konflux CR configuration Sets up GitHub App integration and smee webhook proxy Provides a local OCI registry at localhost:5001 Verify the installation Once the script completes, open https://localhost:9443 in your browser and log in with the demo credentials:\nUsername: user1@konflux.dev Password: password Remote machine? 
If Kind is running on a remote host, open an SSH tunnel to access the UI from your local browser: ssh -L 9443:localhost:9443 $USER@$VM_IP The demo users use static passwords and are intended for local development only. Never use this configuration in a production environment. For production, configure an OIDC connector instead. What gets deployed The script always sets up the base infrastructure, regardless of the install method chosen. The Konflux operator and its managed components are only installed when OPERATOR_INSTALL_METHOD is not none.\nAll methods Component Details Kind cluster Single-node cluster with ingress on port 9443 cert-manager TLS certificate lifecycle management trust-manager CA bundle distribution across namespaces Tekton + Pipelines as Code Pipeline execution engine and GitHub-triggered pipeline automation Kyverno Policy engine for namespace and RBAC automation smee client Webhook proxy relay for GitHub events release, local and build methods Component Details Konflux Operator Deploys and manages all Konflux components lifecycles When using none, the script stops after setting up the base infrastructure and secrets. You then install and run the operator manually - see the none method workflow under Install method values below, and Building and Installing from Source.\nConfiguration options All options are set in scripts/deploy-local.env (copied from scripts/deploy-local.env.template). They can also be passed as environment variables directly:\nOPERATOR_INSTALL_METHOD=build ./scripts/deploy-local.sh Operator configuration Variable Default Description OPERATOR_INSTALL_METHOD release How the operator is installed. See install method values below OPERATOR_IMAGE quay.io/konflux-ci/konflux-operator:latest Operator image used with the local and build methods KONFLUX_CR (auto-selected) Path to the Konflux CR file to apply. Available samples are in operator/config/samples/. 
Can also be passed as a positional argument to the script Install method values Value Description When to use release Installs from the latest GitHub release (install.yaml) Normal local development local Deploys from your current checkout using kustomize, with the latest released image Testing manifest changes against a specific release image build Builds the operator image locally before deploying Operator development - testing code changes none Sets up Kind + dependencies + secrets, then exits without installing the operator Running the operator manually - see Building and Installing from Source When using local, the manifests from your checkout are applied with the latest released image. To avoid version mismatches, checkout the matching release tag first: git checkout v1.0.0 OPERATOR_INSTALL_METHOD=local ./scripts/deploy-local.sh The default CR does not enable image-controller. If you set QUAY_TOKEN and QUAY_ORGANIZATION, you must also use a CR that enables it (e.g. konflux-e2e.yaml):\nKONFLUX_CR=operator/config/samples/konflux-e2e.yaml ./scripts/deploy-local.sh If QUAY_TOKEN and QUAY_ORGANIZATION are both set and no CR is specified, the script automatically selects konflux-e2e.yaml.\nInfrastructure configuration Variable Default Description KIND_MEMORY_GB 8 Memory allocated to the Kind cluster (GB). Minimum: 8, recommended: 16 for full stack REGISTRY_HOST_PORT 5001 Host port for the internal OCI registry. 
Port 5000 is often taken by macOS AirPlay Receiver ENABLE_REGISTRY_PORT 1 Expose the registry on the host (0 to restrict to in-cluster access only) INCREASE_PODMAN_PIDS_LIMIT 1 Increase Podman PID limits for Tekton pipeline performance (0 to disable) PODMAN_MACHINE_NAME (default machine) macOS only - name of the Podman machine to use when multiple machines exist ENABLE_IMAGE_CACHE 0 Persist containerd image cache across cluster recreations (1 to enable) Secrets Variable Required Description GITHUB_APP_ID Yes Numeric ID of your GitHub App (found in the App settings page) GITHUB_PRIVATE_KEY Yes¹ Literal PEM private key content (multi-line, quoted) GITHUB_PRIVATE_KEY_PATH Yes¹ Path to .pem file - takes precedence over GITHUB_PRIVATE_KEY WEBHOOK_SECRET Yes Webhook secret for GitHub webhooks. Must match the secret configured in the GitHub App QUAY_TOKEN No² Quay OAuth token for image-controller auto-provisioning. See Registry Configuration for instructions. QUAY_ORGANIZATION No² Quay organization where component images will be stored. See Registry Configuration for instructions. SMEE_CHANNEL No³ Full Smee channel URL (https://smee.io/\u003cchannel-id\u003e). Required only when using smee for webhook relay (cluster not reachable); must match the GitHub App webhook URL. Generate the channel ID with head -c 30 /dev/random | base64 | tr -dc 'a-zA-Z0-9' ¹ Provide either GITHUB_PRIVATE_KEY or GITHUB_PRIVATE_KEY_PATH.\n² QUAY_TOKEN and QUAY_ORGANIZATION have no effect unless you also set KONFLUX_CR to a sample that enables image-controller (e.g. operator/config/samples/konflux-e2e.yaml).\n³ Required when the cluster is not reachable and you use smee as the webhook proxy. Set SMEE_CHANNEL to the same URL you use as the GitHub App webhook URL (e.g. generate a channel ID with head -c 30 /dev/random | base64 | tr -dc 'a-zA-Z0-9' and use https://smee.io/\u003cthat-id\u003e). 
If unset, the script generates a random channel ID at deploy time, but you would then need to set this variable and update the GitHub App webhook URL to match. Omit when your cluster has a publicly reachable webhook URL.\nThe GitHub App must have the following permissions: checks:write, contents:write, issues:write, pull_requests:write.\nWhat’s next Onboard a new Application - onboard an application, run builds, tests, and releases GitHub Application Secrets - full GitHub App and webhook proxy reference Registry Configuration - configure an external container registry for build and release pipelines API Reference - full CR field reference Troubleshooting - solutions to common installation issues Examples - sample Konflux CR configurations ","categories":"","description":"Deploying Konflux locally on macOS or Linux using Kind.","excerpt":"Deploying Konflux locally on macOS or Linux using Kind.","ref":"/konflux-ci/docs/installation/install-local/","tags":"","title":"Local Deployment (Kind)"},{"body":" All commands shown in this document assume you are in the repository root. Onboard a new Application Onboard an application to Konflux on behalf of user2.\nThis section includes two options for onboarding an application to Konflux.\nThe first option uses the Konflux UI to onboard an application and release its builds to Quay.io.\nThe second option uses Kubernetes manifests to onboard and releases builds to a container registry deployed to the cluster. 
This approach simplifies onboarding to demonstrate Konflux more easily.\nBoth options will use an example repository containing a Dockerfile to be built by Konflux:\nFork the example repository by clicking the Fork button on that repository and following the instructions on the “Create a new fork” page.\nInstall the GitHub app on your fork: Go to the app’s page on GitHub, click Install App on the left-hand side, select the organization the fork repository is in, click Only select repositories, and select your fork repository.\nWe will use our Konflux deployment to build and release Pull Requests for this fork.\nOption 1: Onboard Application with the Konflux UI With this approach, Konflux can create:\nThe manifests in GitHub for the pipelines it will run against the applications onboarded to Konflux. The Quay.io repositories into which it will push container images. Create a GitHub Application Pipeline creation requires the GitHub Application Secrets on all three namespaces and installing your newly-created GitHub app on your repository, as explained above.\nConfigure Quay.io Create an organization and an application in Quay.io that will allow Konflux to create repositories for your applications. To do that, follow the procedure to configure a Quay.io application and deploy image-controller.\nCreate Application and Component via the Konflux UI Follow these steps to onboard your application:\nLog in to Konflux as user2@konflux.dev (password: password). Click Create application. Verify the workspace is set to user-ns2. Provide a name for the application and click “Add a component”. Under Git repository url, copy the https link to your fork. This should be something similar to https://github.com/\u003cyour-name\u003e/testrepo.git. Uncheck the “Should the image produced be private” checkbox. If left checked, Pull Request pipelines will not start because image-controller will fail to provision the image repository. Leave Docker file blank. 
The default value of Dockerfile will be used. Under the Pipeline drop-down list, select docker-build-oci-ta. Click Create application. If you encounter a 404 Not Found error, refer to the troubleshooting guide. The UI should now display the Lifecycle diagram for your application. In the Components tab you should be able to see your component listed and you’ll be prompted to merge the automatically-created Pull Request (don’t do that just yet — we’ll have it merged in the Trigger the Release section).\nIf you have not completed the Quay.io setup steps in the previous section, Konflux will be unable to send a PR to your repository. Konflux will display \"Sending Pull Request\". In your GitHub repository you should now see a PR was created with two new pipelines. One is triggered by PR events (e.g. when PRs are created or changed), and the other is triggered by push events (e.g. when PRs are merged).\nYour application is now onboarded, and you can continue to the Observe the Behavior section.\nOption 2: Onboard Application with Kubernetes Manifests With this approach, we use kubectl to deploy the manifests for creating the Application and Component resources and we manually create the PR for introducing the pipelines to run using Konflux.\nTo do that:\nUse a text editor to edit your local copy of test/resources/demo-users/user/sample-components/ns2/application-and-component.yaml.\nUnder the Component and Repository resources, change the url fields so they point to your newly-created fork.\nNote the format differences between the two fields — the Component URL has a .git suffix, while the Repository URL does not.\nDeploy the manifests:\nkubectl create -f ./test/resources/demo-users/user/sample-components/ns2/application-and-component.yaml Log into the Konflux UI as user2@konflux.dev (password: password). 
You should be able to see your new Application and Component by clicking “View my applications”.\nImage Registry The build pipeline that you’re about to run pushes the images it builds to an image registry.\nFor the sake of simplicity, it’s configured to use a registry deployed into the cluster during previous steps of this setup (when dependencies were installed).\nThe statement above is only true when not onboarding via the Konflux UI. You can convert it to use a public image registry later on. Creating a Pull Request You’re now ready to create your first PR to your fork.\nClone your fork and create a new branch:\ngit clone \u003cmy-fork-url\u003e cd \u003cmy-fork-name\u003e git checkout -b add-pipelines Tekton will trigger pipelines present in the .tekton directory. The pipelines already exist on your repository; you just need to copy them to that location.\nCopy the manifests:\nmkdir -p .tekton cp pipelines/* .tekton/ Commit your changes and push them to your repository:\ngit add .tekton git commit -m \"add pipelines\" git push origin HEAD Your terminal should now display a link for creating a new Pull Request in GitHub. Click the link, make sure the PR is targeted against your fork’s main branch and not against the repository from which it was forked (i.e. base repository should reside under your user name).\nFinally, click “Create pull request” (we’ll have it merged in the Trigger the Release section).\nObserve the Behavior If the behavior you see is not as described below, consult the troubleshooting document for Pipelines not triggering on PRs. Once your PR is created, you should see a status being reported at the bottom of the PR’s comments section (just above the “Add a comment” box).\nYour GitHub App should now send PR events to your smee channel. Navigate to your smee channel’s web page. You should see a couple of events were sent just after your PR was created (e.g. 
check_run, pull_request).\nLog into the Konflux UI as user2 and check your applications. Select the application you created earlier, click on Activity and Pipeline runs. A build should’ve been triggered a few seconds after the PR was created.\nFollow the build progress. Depending on your system’s load and network connection (the build process involves pulling images), it might take a few minutes for the build to complete. It will clone the repository, build using the Dockerfile, and push the image to the registry.\nIf a pipeline is triggered but seems stuck for a long time, especially at early stages, refer to the Running out of resources troubleshooting section. Pull your new Image When the build process is done, you can check out the image you just built by pulling it from the registry.\nPublic Registry If using a public registry, navigate to the repository URL mentioned in the output-image value of your pull-request pipeline and locate your build.\nFor example, if using Quay.io, go to the Tags tab and locate the relevant build for the tag mentioned on the output-image value (e.g. on-pr-{{revision}}), and click the Fetch Tag button on the right to generate the command to pull the image.\nLocal Registry If using a local registry, port-forward the registry service so you can reach it from outside of the cluster:\nkubectl port-forward -n kind-registry svc/registry-service 30001:443 The local registry is using a self-signed certificate that is being distributed to all namespaces. You can fetch the certificate from the cluster and use it on the curl calls below:\nkubectl get secrets -n kind-registry local-registry-tls \\ -o jsonpath='{.data.ca\\.crt}' | base64 -d \u003e ca.crt curl --cacert ca.crt https://... 
Alternatively, use the -k flag to skip TLS verification.\nLeave that terminal running and, in a new terminal window, list the repositories on the registry:\ncurl -k https://localhost:30001/v2/_catalog The output should look like this:\n{\"repositories\":[\"test-component\"]} List the tags on the test-component repository (assuming you did not change the pipeline’s output-image parameter):\ncurl -k https://localhost:30001/v2/test-component/tags/list You should see a list of tags pushed to that repository:\n{\"name\":\"test-component\",\"tags\":[\"on-pr-1ab9e6d756fbe84aa727fc8bb27c7362d40eb3a4\",\"sha256-b63f3d381f8bb2789f2080716d88ed71fe5060421277746d450fbcf938538119.sbom\"]} Pull the image starting with on-pr- (using podman below, but the commands should be similar for docker):\npodman pull --tls-verify=false localhost:30001/test-component:on-pr-1ab9e6d756fbe84aa727fc8bb27c7362d40eb3a4 Trying to pull localhost:30001/test-component:on-pr-1ab9e6d756fbe84aa727fc8bb27c7362d40eb3a4... Getting image source signatures Copying blob cde118a3f567 done | Copying blob 2efec45cd878 done | Copying blob fd5d635ec9b7 done | Copying config be9a47b762 done | Writing manifest to image destination be9a47b76264e8fb324d9ef7cddc93a933630695669afc4060e8f4c835c750e9 Start a Container Start a container based on the image you pulled:\npodman run --rm be9a47b76264e8fb324d9ef7cddc9... hello world What’s Next? 
Now that your application is onboarded and you’ve verified the build pipeline works, configure integration tests to automatically validate your builds:\nIntegration Tests →\n","categories":"","description":"Onboard a sample application to Konflux using the UI or Kubernetes manifests, observe a build, and pull the resulting image.","excerpt":"Onboard a sample application to Konflux using the UI or Kubernetes manifests, observe a build, and pull the resulting image.","ref":"/konflux-ci/docs/onboard/onboarding/","tags":"","title":"Onboard a new Application"},{"body":"The Konflux Operator is a Kubernetes-native operator that installs, configures, and manages the Konflux CI/CD platform from a single declarative Custom Resource.\nWhy use the Konflux Operator? Running a full CI/CD platform involves deploying and wiring together many components: build controllers, release pipelines, policy engines, identity providers, ingress, certificates, and more. Keeping them configured consistently and upgrading them safely adds operational overhead.\nThe Konflux Operator removes this complexity. You describe your desired platform configuration in one Konflux Custom Resource and the operator continuously reconciles the cluster toward that state - deploying components, propagating configuration changes, and cleaning up disabled features automatically.\nIt works on any Kubernetes cluster: local Kind environments, OpenShift, EKS, GKE, or any conformant distribution.\nFeatures Overview Single CR for the entire platform - one Konflux resource controls all components. No need to manage individual Helm releases or manifests per service.\nDeclarative lifecycle management - update the Konflux CR to change replicas, resource limits, ingress settings, or authentication connectors. 
The operator reconciles the change without manual intervention.\nOptional components - features such as the image controller, internal registry, default tenant namespace, and telemetry can be enabled or disabled via spec flags. Disabled components are cleaned up automatically - no orphaned resources left behind.\nIdentity provider configuration - Dex connectors (GitHub, OpenShift, OIDC, LDAP, static passwords) are configured directly in the Konflux CR, removing the need to manage separate Dex configuration files.\nResource and replica tuning - CPU requests, memory limits, and replica counts for each component are expressed in the CR and managed declaratively.\nStatus aggregation - the operator aggregates readiness conditions from all components into the parent Konflux CR status, giving a single place to check platform health.\nHigh-level Operation ┌──────────────────────────────────────────────────────────────────────────────────┐ │ kubectl apply -f my-konflux.yaml │ │ │ │ Konflux CR ──► Operator reconciler │ │ │ │ │ ┌─────────────────────────────────┼──────────────────────────┐ │ │ ▼ ▼ ▼ ▼ ▼ │ │ KonfluxUI KonfluxBuildService KonfluxIntegration KonfluxRelease ... │ │ reconciler reconciler ServiceReconciler ServiceReconciler │ │ │ │ │ │ │ │ UI + Dex + Build Service Integration Service Release Service │ │ Ingress/TLS + RBAC + RBAC + RBAC │ └──────────────────────────────────────────────────────────────────────────────────┘ You apply a single Konflux CR to the cluster. The operator’s main reconciler reads the CR and fans out to a set of child CRs - one per platform component. Each child CR has its own reconciler that applies the actual Kubernetes workloads for that component. Changes to the Konflux CR propagate automatically. Optional components are removed when disabled. The Konflux CR status reflects the aggregated health of all components and exposes the UI URL once ingress is ready. 
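The reconciliation flow described above can be exercised with the same commands the installation guides use; my-konflux.yaml below is a placeholder for whichever sample or custom CR you apply, and the commands assume a cluster with the operator installed:

```shell
# Apply a Konflux CR, wait for the aggregated Ready condition the operator
# writes back to the parent CR, then read the status (the output includes
# the UI URL once ingress is ready).
kubectl apply -f my-konflux.yaml
kubectl wait --for=condition=Ready=True konflux konflux --timeout=15m
kubectl get konflux konflux
```

Editing the CR and re-applying it is the entire upgrade/reconfiguration workflow; the operator converges the cluster toward the new spec on its own.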
Managed Components The operator manages the following platform components through the Konflux CR:\nComponent Description UI The Konflux web interface, nginx reverse proxy, and Dex identity provider. Supports NodePort, Ingress, and OpenShift Route. Build Service Controller that manages Tekton-based build pipelines for application components. Integration Service Controller that triggers integration test pipelines after each build and evaluates their results. Release Service Controller that manages the Release, ReleasePlan, and ReleasePlanAdmission lifecycle for publishing artifacts. Application API CRDs and controllers for Application, Component, Snapshot, and related platform resources. Conforma Policy evaluation engine used to gate releases against supply chain compliance rules. Namespace Lister Service that enumerates tenant namespaces for the UI workspace selector. RBAC Cluster roles (konflux-admin-user-actions, konflux-maintainer-user-actions, konflux-contributor-user-actions) and bindings. Konflux Info ConfigMap-based cluster configuration consumed by PipelineRuns (OIDC issuer, signing URLs, environment metadata). Cert Manager (optional) Creates a ClusterIssuer for TLS certificates across platform components. Image Controller (optional) Automatically provisions Quay.io image repositories when components are onboarded via the UI. Internal Registry (optional) An in-cluster OCI registry for local or air-gapped environments. Default Tenant (optional) A shared default-tenant namespace where all authenticated users have maintainer access (convenient for development; disable for strict multi-tenancy). Telemetry (optional) Segment bridge for usage telemetry reporting. 
To learn more about configuring specific components, see the Guides section.\nTo get started with an installation, see Installation.\n","categories":"","description":"What the Konflux Operator is, how it works, and what it manages.","excerpt":"What the Konflux Operator is, how it works, and what it manages.","ref":"/konflux-ci/docs/overview/","tags":"","title":"Overview"},{"body":"Konflux uses a GitHub App for triggering pipelines via webhooks and for interacting with repositories (creating PRs, reporting status). You need to create a GitHub App and deploy its credentials as secrets in the cluster.\nCreating a GitHub App Create a GitHub App following the Pipelines-as-Code documentation.\nThat tutorial asks you to generate and set a Webhook secret when creating the App. The same value should be used in the App and for WEBHOOK_SECRET in deploy-local.env.\nGenerate a random secret running: head -c 30 /dev/random | base64.\nFor Homepage URL you can use https://localhost:9443/ (it doesn’t matter).\nFor Webhook URL, use either:\nYour cluster’s publicly reachable ingress URL, if available A smee webhook proxy URL, if the cluster is not reachable from the internet (see Webhook Proxy for Non-Exposed Clusters below) Per the instructions on the link, generate and download the private key. Take note of the location of the private Key, the App ID and the webhook secret you set in the App (random value generated above).\nIf using a local cluster, set these values in deploy-local.env:\nGITHUB_PRIVATE_KEY_PATH: path to private key downloaded earlier WEBHOOK_SECRET: secret generated earlier GITHUB_APP_ID: GitHub APP ID If deploying to a remote cluster, refer to the section below.\nInstall the GitHub App on the repositories you want to use with Konflux.\nWebhook Proxy for Non-Exposed Clusters When deployed in a local environment like Kind, or behind a firewall, GitHub cannot reach the cluster’s webhook endpoint directly. 
Use smee as a proxy to relay webhook events into the cluster.\nGenerate a smee channel ID with head -c 30 /dev/random | base64 | tr -dc 'a-zA-Z0-9', then use https://smee.io/\u003cchannel-id\u003e (with that output as \u003cchannel-id\u003e) as the Webhook URL when creating or configuring your GitHub App, and set the same URL as SMEE_CHANNEL in scripts/deploy-local.env. The deploy-local.sh script configures the smee client to listen on that channel. Alternatively, create a channel at smee.io and use the URL it gives you.\nCreating the Secrets The same GitHub App secret must be created in three namespaces so that all Konflux components can interact with GitHub:\nfor ns in pipelines-as-code build-service integration-service; do kubectl -n \"${ns}\" create secret generic pipelines-as-code-secret \\ --from-file=github-private-key=/path/to/github-app.pem \\ --from-literal=github-application-id=\"\u003cyour-app-id\u003e\" \\ --from-literal=webhook.secret=\"\u003cyour-webhook-secret\u003e\" done The deploy-local.sh script creates these secrets automatically from the values in scripts/deploy-local.env.\n","categories":"","description":"Creating a GitHub App and deploying its credentials as secrets in the cluster.","excerpt":"Creating a GitHub App and deploying its credentials as secrets in the cluster.","ref":"/konflux-ci/docs/guides/github-secrets/","tags":"","title":"GitHub Application Secrets"},{"body":"This guide covers deploying Konflux on an existing OpenShift cluster using the deploy-konflux-on-ocp.sh script. The script uses OpenShift-native components (OpenShift Pipelines, Red Hat cert-manager) instead of their upstream alternatives.\nThis is not the only way to install Konflux on OpenShift. 
You can also use: Installing from OLM — install through the OpenShift OperatorHub Installing from Release — apply the pre-built release bundle directly Building and Installing from Source — build and run the operator from your local checkout Prerequisites Tool Minimum version OpenShift v4.20 oc or kubectl v1.31.4 git v2.46 make — Go v1.25.0 openssl v3.0.13 cluster-admin permissions Setup Clone the repository: git clone https://github.com/konflux-ci/konflux-ci.git cd konflux-ci Run the deployment script: ./deploy-konflux-on-ocp.sh To customise the operator image or Konflux CR before running, see Script configuration.\nThe script performs all of the following automatically:\nDeploys Konflux dependencies using OpenShift-native operators Installs the Konflux CRDs Deploys the Konflux Operator into the konflux-operator namespace Waits for the Operator to be ready Applies the default Konflux CR and waits for all components to reach Ready What gets deployed Dependencies Component Details OpenShift Pipelines Installed via OLM (Red Hat’s productized Tekton) cert-manager Installed via the Red Hat cert-manager OLM operator trust-manager Deployed into the cert-manager namespace Kyverno Policy engine for namespace and RBAC automation Pipelines-as-Code GitHub-triggered pipeline automation Tekton Chains RBAC RBAC for supply-chain signing using OpenShift namespaces The following components are not deployed by deploy-deps.sh in this configuration:\nSkipped Reason Dex Managed by the Konflux Operator as part of the Konflux CR reconciliation Internal OCI registry OpenShift has its own integrated registry Smee webhook proxy Not needed when the cluster is internet-reachable The script skips the Smee webhook proxy (SKIP_SMEE=true). For GitHub to deliver webhook events (triggering build pipelines on pull requests), your cluster must be reachable from the internet. If it is not, you will need to configure Smee manually after installation. See GitHub Application Secrets for details. 
Operator and Konflux Component Details Konflux CRDs Konflux custom resource definition Konflux Operator Deployed in the konflux-operator namespace Konflux instance All Konflux components managed by the default sample CR Script configuration Operator image By default, the script uses quay.io/konflux-ci/konflux-operator:latest. To use a different image, set OPERATOR_IMAGE before running:\nOPERATOR_IMAGE=\u003cyour-registry\u003e/konflux-operator:\u003ctag\u003e ./deploy-konflux-on-ocp.sh To build and use a custom operator image from source:\ncd operator make docker-build docker-push IMG=\u003cyour-registry\u003e/konflux-operator:\u003ctag\u003e cd .. OPERATOR_IMAGE=\u003cyour-registry\u003e/konflux-operator:\u003ctag\u003e ./deploy-konflux-on-ocp.sh Konflux Custom Resource The script applies operator/config/samples/konflux_v1alpha1_konflux.yaml by default.\nThe default CR contains demo users with static passwords intended for local testing only. Never use this configuration in a production environment. Use OIDC authentication instead. See Examples for alternative sample configurations. 
To use a different CR, apply it after the script completes:\nkubectl delete konflux konflux kubectl apply -f \u003cyour-konflux-cr\u003e.yaml kubectl wait --for=condition=Ready=True konflux konflux --timeout=15m Verify the Konflux CR is ready See Applying the Konflux Custom Resource for instructions on verifying the Konflux CR status and accessing the UI URL.\nCreate GitHub integration secrets After the script completes, follow the GitHub Application Secrets guide to create a GitHub App and deploy its credentials into the cluster.\nUninstall Remove the Konflux CR and all managed components:\nkubectl delete konflux konflux Remove the operator and CRDs from the operator/ directory:\ncd operator make undeploy make uninstall What’s next GitHub Application Secrets — create a GitHub App and configure webhook delivery Onboard a new Application — onboard an application, run builds, tests, and releases Registry Configuration — configure an external container registry for build and release pipelines API Reference — full CR field reference Troubleshooting — solutions to common issues Examples — sample Konflux CR configurations ","categories":"","description":"Deploying Konflux on an existing OpenShift cluster using the automated deployment script.","excerpt":"Deploying Konflux on an existing OpenShift cluster using the automated deployment script.","ref":"/konflux-ci/docs/installation/install-openshift/","tags":"","title":"Installing on OpenShift"},{"body":"If you onboarded your application using the Konflux UI, the integration tests are automatically created for you by Konflux.\nOn the Konflux UI, the integration tests definition should be visible in the Integration tests tab under your application, and a pipeline should’ve been triggered for them under the Activity tab, named after the name of the application. 
You can click it and examine the logs to see what it verifies, and to confirm it passed successfully.\nOnce confirmed, skip to Add Customized Integration Tests.\nIf you onboarded your application manually, you will now configure your application to trigger integration tests after each PR build is done.\nConfigure Integration Tests You can add integration tests either via the Konflux UI, or by applying the equivalent Kubernetes resource.\nIf you imported your component via the UI, a similar Integration Test is pre-installed. In our case, the resource is defined in test/resources/demo-users/user/sample-components/ns2/ec-integration-test.yaml.\nApply the resource manifest:\nkubectl create -f ./test/resources/demo-users/user/sample-components/ns2/ec-integration-test.yaml Alternatively, you can provide the content from that YAML using the UI:\nLog in as user2 and navigate to your application and component. Click the Integration tests tab. Click Actions and select Add Integration test. Fill in the details from the YAML. Click Add Integration test. Either way, you should now see the test listed in the UI under Integration tests.\nOur integration test uses a pipeline located at the path defined under the resolverRef field in the YAML mentioned above. From now on, whenever the build pipeline runs, the pipeline referenced by the integration test will also be triggered.\nTo verify that, go back to your GitHub PR and add a comment: /retest.\nOn the Konflux UI, under your component Activity tab, you should now see the build pipeline running again (test-component-on-pull-request-...), and when it’s done, you should see another pipeline run called test-component-c6glg-... being triggered.\nYou can click it and examine the logs to see what it verifies, and confirm it passes successfully.\nAdd Customized Integration Tests (Optional) The custom integration test currently only supports testing images stored externally to the cluster. 
If using the local registry, skip to Configure Releases. The integration tests you added just now are relatively generic Enterprise Contract tests. The next step adds a customized test scenario specific to our application.\nOur simple application is a container image with an entrypoint that prints hello world and exits, and we’re going to add a test to verify that it does indeed print that.\nAn integration test scenario references a pipeline definition. In this case, the pipeline is defined in our example repository. Looking at the pipeline definition, you can see that it takes a single parameter named SNAPSHOT. This parameter is provided automatically by Konflux and contains references to the images built by the pipeline that triggered the integration tests. We can define additional parameters to be passed from Konflux to the pipeline, but in this case, we only need the snapshot.\nThe pipeline then uses the snapshot to extract the image that was built by the pipeline that triggered it and deploys that image. Next, it collects the execution logs and verifies that they indeed contain hello world.\nWe can either use the Konflux UI or the Kubernetes CLI to add the integration test scenario.\nUsing the Konflux UI Log in as user2 and navigate to your application and component. Click the Integration tests tab. Click Actions and select Add Integration test. Fill in the fields: Integration test name: a name of your choice GitHub URL: https://github.com/konflux-ci/testrepo Revision: main Path in repository: integration-tests/testrepo-integration.yaml Click Add Integration test. 
Using kubectl The manifest is stored in test/resources/demo-users/user/sample-components/ns2/integration-test-hello.yaml:\nVerify the application field contains your application name.\nDeploy the manifest:\nkubectl create -f ./test/resources/demo-users/user/sample-components/ns2/integration-test-hello.yaml Post a /retest comment on your GitHub PR, and once the pull-request pipeline is done, you should see your new integration test being triggered alongside the one you had before.\nIf you examine the logs, you should be able to see the snapshot being parsed and the test being executed.\nWhat’s Next? With integration tests in place, you’re ready to configure the release pipeline and publish your application to a registry:\nConfigure Releases →\n","categories":"","description":"Configure and run integration tests for your application after each build pipeline completes.","excerpt":"Configure and run integration tests for your application after each build pipeline completes.","ref":"/konflux-ci/docs/onboard/integration/","tags":"","title":"Integration Tests"},{"body":"Resource Types Banner Appears in:\nKonfluxInfoSpec Banner contains banner configuration\nFieldDescription items []github.com/konflux-ci/konflux-ci/operator/api/v1alpha1.BannerItem Items is the list of banners to display\nBannerItem BannerItem contains individual banner configuration\nFieldDescription summary [Required] string Summary is the banner text (5-500 chars, supports Markdown)\ntype [Required] string Type is the banner type (info, warning, danger)\nstartTime string StartTime is the start time in HH:mm format (required if date fields are set)\nendTime string EndTime is the end time in HH:mm format (required if date fields are set)\ntimeZone string TimeZone is the IANA timezone (optional, defaults to UTC)\nyear int Year is the year for one-time banners\nmonth int Month is the month (1-12)\ndayOfWeek int DayOfWeek is the day of week (0-6, 0=Sunday)\ndayOfMonth int DayOfMonth is the day of month 
(1-31)\nBuildServiceConfig Appears in:\nKonfluxSpec BuildServiceConfig defines the configuration for the build-service component. The Spec field is the runtime configuration passed to the component.\nFieldDescription spec KonfluxBuildServiceSpec Spec configures the build-service component.\nCertManagerConfig Appears in:\nKonfluxSpec CertManagerConfig defines the configuration for the cert-manager component.\nFieldDescription createClusterIssuer bool CreateClusterIssuer controls whether cluster issuer resources are created. Defaults to true if not specified.\nClusterConfig Appears in:\nKonfluxInfoSpec ClusterConfig contains cluster-wide key-value configuration.\nFieldDescription data ClusterConfigData Data contains structured cluster-wide configuration values. These values will be stored in the \"cluster-config\" ConfigMap in the \"konflux-info\" namespace. The ConfigMap keys are stable and part of the public API consumed by PipelineRuns. WARNING: Changing field names or JSON tags is a BREAKING CHANGE that will affect all PipelineRuns reading from the ConfigMap. Field names must remain stable.\nClusterConfigData Appears in:\nClusterConfig ClusterConfigData contains the structured fields for cluster configuration. The field names (and their JSON tags) directly map to ConfigMap keys that are read by PipelineRuns. These keys are part of the stable API and must not change without a major version release.\nFieldDescription defaultOIDCIssuer string DefaultOIDCIssuer is the default OIDC issuer URL.\nenableKeylessSigning bool EnableKeylessSigning determines if pipelines should perform/validate keyless signing. 
When nil, the key is omitted from the ConfigMap (unset).\nfulcioInternalUrl string FulcioInternalUrl is the internal Fulcio URL.\nfulcioExternalUrl string FulcioExternalUrl is the external Fulcio URL.\nrekorInternalUrl string RekorInternalUrl is the internal Rekor URL.\nrekorExternalUrl string RekorExternalUrl is the external Rekor URL.\ntufInternalUrl string TufInternalUrl is the internal TUF URL.\ntufExternalUrl string TufExternalUrl is the external TUF URL.\ntrustifyServerInternalUrl string TrustifyServerInternalUrl is the internal URL for the Trustify server.\ntrustifyServerExternalUrl string TrustifyServerExternalUrl is the external URL for the Trustify server.\nbuildIdentityRegexp string BuildIdentityRegexp is a regex pattern used for matching the identity that signed artifacts as part of the build pipeline.\ntrustifyOIDCIssuerUrl string TrustifyOIDCIssuerUrl is the URL of the OIDC issuer used by Trustification clients.\ntektonChainsIdentity string TektonChainsIdentity is the identity of Tekton Chains used when verifying the signature of attestations produced by Tekton Chains.\nComponentStatus Appears in:\nKonfluxStatus ComponentStatus represents the status of a Konflux component.\nFieldDescription name [Required] string Name of the component\nready [Required] bool Ready indicates if the component is ready\nmessage string Message provides additional information about the component status\nConditionAccessor ConditionAccessor is implemented by types that have a Conditions slice in their Status. This interface allows for generic condition management across different CR types.\nContainerSpec Appears in:\nControllerManagerDeploymentSpec\nDexDeploymentSpec\nNamespaceListerDeploymentSpec\nProxyDeploymentSpec\nContainerSpec defines customizations for a specific container. 
This type is reused across all deployment specs.\nFieldDescription resources k8s.io/api/core/v1.ResourceRequirements Resources specifies the resource requirements for the container.\nenv []k8s.io/api/core/v1.EnvVar Env specifies environment variables for the container.\nControllerManagerDeploymentSpec Appears in:\nKonfluxBuildServiceSpec\nKonfluxIntegrationServiceSpec\nKonfluxReleaseServiceSpec\nControllerManagerDeploymentSpec defines customizations for the controller-manager deployment.\nFieldDescription replicas [Required] int32 Replicas is the number of replicas for the controller-manager deployment.\nmanager ContainerSpec Manager defines customizations for the manager container.\nDefaultTenantConfig Appears in:\nKonfluxSpec DefaultTenantConfig defines the configuration for the default tenant component. The default tenant provides a namespace accessible by all authenticated users.\nFieldDescription enabled bool Enabled controls whether the default tenant is created. Defaults to true if not specified.\nDexDeploymentSpec Appears in:\nKonfluxUISpec DexDeploymentSpec defines customizations for the dex deployment.\nFieldDescription replicas [Required] int32 Replicas is the number of replicas for the dex deployment.\ndex ContainerSpec Dex defines customizations for the dex container.\nconfig github.com/konflux-ci/konflux-ci/operator/pkg/dex.DexParams Config defines the Dex IdP configuration parameters.\nGitHubIntegration Appears in:\nIntegrationsConfig GitHubIntegration contains GitHub integration configuration\nFieldDescription application_url [Required] string ApplicationURL is the GitHub App installation URL\nImageControllerConfig Appears in:\nKonfluxSpec ImageControllerConfig defines the configuration for the image-controller component. The Enabled field controls whether the component is deployed (top-level concern). 
The Spec field is the runtime configuration passed to the KonfluxImageController CR.\nFieldDescription enabled bool Enabled indicates whether image-controller should be deployed. If false or unset, the component CR will not be created.\nspec KonfluxImageControllerSpec Spec configures the image-controller component.\nInfoImageControllerConfig Appears in:\nIntegrationsConfig InfoImageControllerConfig contains image controller configuration for info.json\nFieldDescription enabled [Required] bool Enabled indicates if image controller is enabled\nnotifications []InfoNotificationConfig Notifications contains notification configurations\nInfoNotificationConfig Appears in:\nInfoImageControllerConfig InfoNotificationConfig contains notification configuration for info.json\nFieldDescription title [Required] string Title is the notification title\nevent [Required] string Event is the event type (e.g., \"repo_push\", \"build_complete\")\nmethod [Required] string Method is the notification method (e.g., \"webhook\", \"email\")\nconfig [Required] k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1.JSON Config contains method-specific configuration (as JSON). For webhook method, use: {\"url\": \"https://webhook.example.com/endpoint\"} For email method, use: {\"email\": \"notifications@example.com\"} Example webhook config: config: url: \"https://webhook.example.com/build\" Example email config: config: email: \"team@example.com\"\nIngressSpec Appears in:\nKonfluxUISpec IngressSpec defines the ingress configuration for KonfluxUI.\nFieldDescription enabled bool Enabled controls whether an Ingress resource should be created. When nil (unset), defaults to true on OpenShift, false otherwise.\ningressClassName string IngressClassName specifies which IngressClass to use for the ingress.\nhost string Host is the hostname used as the endpoint for configuring oauth2-proxy, dex, and related components. 
When set, this hostname is always used regardless of whether ingress is enabled, allowing users who manage their own external routing (e.g., Gateway API, hardware LB) to configure the endpoint without the operator managing an Ingress resource. On OpenShift, if empty, the default ingress domain and naming convention will be used.\nannotations map[string]string Annotations to add to the ingress resource.\ntlsSecretName string TLSSecretName is the name of the Kubernetes TLS secret to use for the ingress. If not specified, TLS will not be configured on the ingress.\nnodePortService NodePortServiceSpec NodePortService configures the proxy Service as a NodePort type. When set, the proxy Service will be exposed via NodePort instead of ClusterIP. This is useful for accessing Konflux UI from outside the cluster without an Ingress controller.\nIngressStatus Appears in:\nKonfluxUIStatus IngressStatus defines the observed state of the Ingress configuration.\nFieldDescription enabled [Required] bool Enabled indicates whether the Ingress resource is enabled.\nhostname string Hostname is the hostname configured for the ingress. This is the actual hostname being used, whether explicitly configured or auto-generated.\nurl string URL is the full URL to access the KonfluxUI.\nIntegrationServiceConfig Appears in:\nKonfluxSpec IntegrationServiceConfig defines the configuration for the integration-service component. 
The Spec field is the runtime configuration passed to the component.\nFieldDescription spec KonfluxIntegrationServiceSpec Spec configures the integration-service component.\nIntegrationsConfig Appears in:\nPublicInfo IntegrationsConfig contains integration configuration\nFieldDescription github GitHubIntegration GitHub contains GitHub integration configuration\nsbom_server SBOMServerConfig SBOMServer contains SBOM server configuration\nimage_controller InfoImageControllerConfig ImageController contains image controller configuration\nInternalRegistryConfig Appears in:\nKonfluxSpec InternalRegistryConfig defines the configuration for the internal registry component. Enabling internal registry requires trust-manager to be deployed.\nFieldDescription enabled bool Enabled controls whether internal registry resources are deployed. Defaults to false if not specified.\nKonflux Appears in:\nKonflux is the Schema for the konfluxes API.\nFieldDescription spec [Required] KonfluxSpec No description provided. status [Required] KonfluxStatus No description provided. KonfluxApplicationAPI Appears in:\nKonfluxApplicationAPI is the Schema for the konfluxapplicationapis API.\nFieldDescription spec [Required] KonfluxApplicationAPISpec No description provided. status [Required] KonfluxApplicationAPIStatus No description provided. KonfluxApplicationAPIList KonfluxApplicationAPIList contains a list of KonfluxApplicationAPI.\nFieldDescription metadata [Required] k8s.io/apimachinery/pkg/apis/meta/v1.ListMeta No description provided. items [Required] []KonfluxApplicationAPI No description provided. 
KonfluxApplicationAPISpec Appears in:\nKonfluxApplicationAPI KonfluxApplicationAPISpec defines the desired state of KonfluxApplicationAPI.\nKonfluxApplicationAPIStatus Appears in:\nKonfluxApplicationAPI KonfluxApplicationAPIStatus defines the observed state of KonfluxApplicationAPI.\nFieldDescription conditions []k8s.io/apimachinery/pkg/apis/meta/v1.Condition Conditions represent the latest available observations of the KonfluxApplicationAPI state\nKonfluxBuildService Appears in:\nKonfluxBuildService is the Schema for the konfluxbuildservices API\nFieldDescription spec [Required] KonfluxBuildServiceSpec No description provided. status [Required] KonfluxBuildServiceStatus No description provided. KonfluxBuildServiceList KonfluxBuildServiceList contains a list of KonfluxBuildService\nFieldDescription metadata [Required] k8s.io/apimachinery/pkg/apis/meta/v1.ListMeta No description provided. items [Required] []KonfluxBuildService No description provided. KonfluxBuildServiceSpec Appears in:\nBuildServiceConfig\nKonfluxBuildService\nKonfluxBuildServiceSpec defines the desired state of KonfluxBuildService\nFieldDescription buildControllerManager ControllerManagerDeploymentSpec BuildControllerManager defines customizations for the controller-manager deployment.\nKonfluxBuildServiceStatus Appears in:\nKonfluxBuildService KonfluxBuildServiceStatus defines the observed state of KonfluxBuildService\nFieldDescription conditions []k8s.io/apimachinery/pkg/apis/meta/v1.Condition Conditions represent the latest available observations of the KonfluxBuildService state\nKonfluxCertManager Appears in:\nKonfluxCertManager is the Schema for the konfluxcertmanagers API.\nFieldDescription spec [Required] KonfluxCertManagerSpec No description provided. status [Required] KonfluxCertManagerStatus No description provided. 
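The build-service customization above follows the shared ControllerManagerDeploymentSpec/ContainerSpec pattern. As a sketch (resource values and the environment variable are illustrative, not defaults), a Konflux CR fragment tuning it might look like:

```yaml
spec:
  buildService:
    spec:
      buildControllerManager:
        replicas: 2                  # number of controller-manager replicas
        manager:                     # ContainerSpec for the manager container
          resources:
            requests:
              cpu: 100m
              memory: 256Mi
          env:
            - name: EXAMPLE_FLAG     # hypothetical variable, for illustration only
              value: "true"
```

The same fragment shape applies to integrationService and releaseService, which reuse ControllerManagerDeploymentSpec under their own keys.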
KonfluxCertManagerList KonfluxCertManagerList contains a list of KonfluxCertManager.\nFieldDescription metadata [Required] k8s.io/apimachinery/pkg/apis/meta/v1.ListMeta No description provided. items [Required] []KonfluxCertManager No description provided. KonfluxCertManagerSpec Appears in:\nKonfluxCertManager KonfluxCertManagerSpec defines the desired state of KonfluxCertManager.\nFieldDescription createClusterIssuer bool CreateClusterIssuer controls whether cluster issuer resources are created. Defaults to true if not specified. The cluster issuer will be used for generating certificates for the Konflux components.\nKonfluxCertManagerStatus Appears in:\nKonfluxCertManager KonfluxCertManagerStatus defines the observed state of KonfluxCertManager.\nFieldDescription conditions []k8s.io/apimachinery/pkg/apis/meta/v1.Condition Conditions represent the latest available observations of the KonfluxCertManager state\nKonfluxDefaultTenant Appears in:\nKonfluxDefaultTenant is the Schema for the konfluxdefaulttenants API.\nFieldDescription spec [Required] KonfluxDefaultTenantSpec No description provided. status [Required] KonfluxDefaultTenantStatus No description provided. KonfluxDefaultTenantList KonfluxDefaultTenantList contains a list of KonfluxDefaultTenant\nFieldDescription metadata [Required] k8s.io/apimachinery/pkg/apis/meta/v1.ListMeta No description provided. items [Required] []KonfluxDefaultTenant No description provided. 
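The createClusterIssuer and default-tenant switches documented above are set from the top-level Konflux CR. A minimal fragment (a sketch; field names from this reference, values chosen for illustration):

```yaml
spec:
  certManager:
    createClusterIssuer: false   # skip creating cluster issuer resources (defaults to true)
  defaultTenant:
    enabled: true                # namespace accessible by all authenticated users (defaults to true)
```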
KonfluxDefaultTenantSpec Appears in:\nKonfluxDefaultTenant KonfluxDefaultTenantSpec defines the desired state of KonfluxDefaultTenant.\nKonfluxDefaultTenantStatus Appears in:\nKonfluxDefaultTenant KonfluxDefaultTenantStatus defines the observed state of KonfluxDefaultTenant.\nFieldDescription conditions []k8s.io/apimachinery/pkg/apis/meta/v1.Condition Conditions represent the latest available observations of the KonfluxDefaultTenant state\nKonfluxEnterpriseContract Appears in:\nKonfluxEnterpriseContract is the Schema for the konfluxenterprisecontracts API.\nFieldDescription spec [Required] KonfluxEnterpriseContractSpec No description provided. status [Required] KonfluxEnterpriseContractStatus No description provided. KonfluxEnterpriseContractList KonfluxEnterpriseContractList contains a list of KonfluxEnterpriseContract.\nFieldDescription metadata [Required] k8s.io/apimachinery/pkg/apis/meta/v1.ListMeta No description provided. items [Required] []KonfluxEnterpriseContract No description provided. KonfluxEnterpriseContractSpec Appears in:\nKonfluxEnterpriseContract KonfluxEnterpriseContractSpec defines the desired state of KonfluxEnterpriseContract.\nFieldDescription foo [Required] string Foo is an example field of KonfluxEnterpriseContract. Edit konfluxenterprisecontract_types.go to remove/update\nKonfluxEnterpriseContractStatus Appears in:\nKonfluxEnterpriseContract KonfluxEnterpriseContractStatus defines the observed state of KonfluxEnterpriseContract.\nFieldDescription conditions []k8s.io/apimachinery/pkg/apis/meta/v1.Condition Conditions represent the latest available observations of the KonfluxEnterpriseContract state\nKonfluxImageController Appears in:\nKonfluxImageController is the Schema for the konfluximagecontrollers API.\nFieldDescription spec [Required] KonfluxImageControllerSpec No description provided. status [Required] KonfluxImageControllerStatus No description provided. 
KonfluxImageControllerList KonfluxImageControllerList contains a list of KonfluxImageController.\nFieldDescription metadata [Required] k8s.io/apimachinery/pkg/apis/meta/v1.ListMeta No description provided. items [Required] []KonfluxImageController No description provided. KonfluxImageControllerSpec Appears in:\nImageControllerConfig\nKonfluxImageController\nKonfluxImageControllerSpec defines the desired state of KonfluxImageController.\nFieldDescription quayCABundle QuayCABundleSpec QuayCABundle configures a custom CA bundle for Quay registry communication. When set, the CA certificate from the referenced ConfigMap is mounted into the image-controller pod and used for TLS verification when connecting to Quay. This is required when using a self-hosted Quay registry with a custom CA.\nKonfluxImageControllerStatus Appears in:\nKonfluxImageController KonfluxImageControllerStatus defines the observed state of KonfluxImageController.\nFieldDescription conditions []k8s.io/apimachinery/pkg/apis/meta/v1.Condition Conditions represent the latest available observations of the KonfluxImageController state\nKonfluxInfo Appears in:\nKonfluxInfo is the Schema for the konfluxinfoes API.\nFieldDescription spec [Required] KonfluxInfoSpec No description provided. status [Required] KonfluxInfoStatus No description provided. KonfluxInfoConfig Appears in:\nKonfluxSpec KonfluxInfoConfig defines the configuration for the info component. The Spec field is the runtime configuration passed to the component.\nFieldDescription spec KonfluxInfoSpec Spec configures the info component.\nKonfluxInfoList KonfluxInfoList contains a list of KonfluxInfo.\nFieldDescription metadata [Required] k8s.io/apimachinery/pkg/apis/meta/v1.ListMeta No description provided. items [Required] []KonfluxInfo No description provided. 
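The Banner and BannerItem types described earlier are wired up through the info component's spec. A sketch of a recurring banner (values illustrative; field names from this reference):

```yaml
spec:
  info:
    spec:
      banner:
        items:
          - summary: "Scheduled maintenance **tonight**"  # 5-500 chars, Markdown supported
            type: warning                                 # info, warning, or danger
            startTime: "22:00"                            # HH:mm, required when date fields are set
            endTime: "23:30"
            timeZone: "UTC"                               # IANA timezone, defaults to UTC
            dayOfWeek: 5                                  # 0=Sunday, so 5=Friday
```

Omitting year, month, dayOfWeek, and dayOfMonth would, per the field descriptions above, make the banner non-recurring rather than scoped to a date.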
KonfluxInfoSpec Appears in:\nKonfluxInfo\nKonfluxInfoConfig\nKonfluxInfoSpec defines the desired state of KonfluxInfo.\nFieldDescription publicInfo PublicInfo PublicInfo defines the configuration for the info.json ConfigMap. If not specified, default development values will be used.\nbanner Banner Banner defines the configuration for the banner-content.yaml ConfigMap. If not specified, an empty banner array will be used.\nclusterConfig ClusterConfig ClusterConfig defines cluster-wide key-value configuration. The key-value pairs will be stored in a ConfigMap named \"cluster-config\" in the \"konflux-info\" namespace, readable by all authenticated users. User-provided values take precedence over auto-detected values.\nKonfluxInfoStatus Appears in:\nKonfluxInfo KonfluxInfoStatus defines the observed state of KonfluxInfo.\nFieldDescription conditions []k8s.io/apimachinery/pkg/apis/meta/v1.Condition Conditions represent the latest available observations of the KonfluxInfo state\nKonfluxIntegrationService Appears in:\nKonfluxIntegrationService is the Schema for the konfluxintegrationservices API\nFieldDescription spec [Required] KonfluxIntegrationServiceSpec No description provided. status [Required] KonfluxIntegrationServiceStatus No description provided. KonfluxIntegrationServiceList KonfluxIntegrationServiceList contains a list of KonfluxIntegrationService\nFieldDescription metadata [Required] k8s.io/apimachinery/pkg/apis/meta/v1.ListMeta No description provided. items [Required] []KonfluxIntegrationService No description provided. 
KonfluxIntegrationServiceSpec Appears in:\nIntegrationServiceConfig\nKonfluxIntegrationService\nKonfluxIntegrationServiceSpec defines the desired state of KonfluxIntegrationService\nFieldDescription integrationControllerManager ControllerManagerDeploymentSpec IntegrationControllerManager defines customizations for the controller-manager deployment.\nKonfluxIntegrationServiceStatus Appears in:\nKonfluxIntegrationService KonfluxIntegrationServiceStatus defines the observed state of KonfluxIntegrationService\nFieldDescription conditions []k8s.io/apimachinery/pkg/apis/meta/v1.Condition Conditions represent the latest available observations of the KonfluxIntegrationService state\nKonfluxInternalRegistry Appears in:\nKonfluxInternalRegistry is the Schema for the konfluxinternalregistries API. Enabling the internal registry requires trust-manager to be deployed for Certificate and Bundle resources.\nFieldDescription spec [Required] KonfluxInternalRegistrySpec No description provided. status [Required] KonfluxInternalRegistryStatus No description provided. KonfluxInternalRegistryList KonfluxInternalRegistryList contains a list of KonfluxInternalRegistry.\nFieldDescription metadata [Required] k8s.io/apimachinery/pkg/apis/meta/v1.ListMeta No description provided. items [Required] []KonfluxInternalRegistry No description provided. KonfluxInternalRegistrySpec Appears in:\nKonfluxInternalRegistry KonfluxInternalRegistrySpec defines the desired state of KonfluxInternalRegistry.\nKonfluxInternalRegistryStatus Appears in:\nKonfluxInternalRegistry KonfluxInternalRegistryStatus defines the observed state of KonfluxInternalRegistry.\nFieldDescription conditions []k8s.io/apimachinery/pkg/apis/meta/v1.Condition Conditions represent the latest available observations of the KonfluxInternalRegistry state\nKonfluxList KonfluxList contains a list of Konflux.\nFieldDescription metadata [Required] k8s.io/apimachinery/pkg/apis/meta/v1.ListMeta No description provided. 
items [Required] []Konflux No description provided. KonfluxNamespaceLister Appears in:\nKonfluxNamespaceLister is the Schema for the konfluxnamespacelisters API.\nFieldDescription spec [Required] KonfluxNamespaceListerSpec No description provided. status [Required] KonfluxNamespaceListerStatus No description provided. KonfluxNamespaceListerList KonfluxNamespaceListerList contains a list of KonfluxNamespaceLister.\nFieldDescription metadata [Required] k8s.io/apimachinery/pkg/apis/meta/v1.ListMeta No description provided. items [Required] []KonfluxNamespaceLister No description provided. KonfluxNamespaceListerSpec Appears in:\nKonfluxNamespaceLister\nNamespaceListerConfig\nKonfluxNamespaceListerSpec defines the desired state of KonfluxNamespaceLister.\nFieldDescription namespaceLister NamespaceListerDeploymentSpec NamespaceLister defines customizations for the namespace-lister deployment.\nKonfluxNamespaceListerStatus Appears in:\nKonfluxNamespaceLister KonfluxNamespaceListerStatus defines the observed state of KonfluxNamespaceLister.\nFieldDescription conditions []k8s.io/apimachinery/pkg/apis/meta/v1.Condition Conditions represent the latest available observations of the KonfluxNamespaceLister state\nKonfluxRBAC Appears in:\nKonfluxRBAC is the Schema for the konfluxrbacs API.\nFieldDescription spec [Required] KonfluxRBACSpec No description provided. status [Required] KonfluxRBACStatus No description provided. KonfluxRBACList KonfluxRBACList contains a list of KonfluxRBAC.\nFieldDescription metadata [Required] k8s.io/apimachinery/pkg/apis/meta/v1.ListMeta No description provided. items [Required] []KonfluxRBAC No description provided. KonfluxRBACSpec Appears in:\nKonfluxRBAC KonfluxRBACSpec defines the desired state of KonfluxRBAC.\nFieldDescription foo [Required] string Foo is an example field of KonfluxRBAC. 
Edit konfluxrbac_types.go to remove/update\nKonfluxRBACStatus Appears in:\nKonfluxRBAC KonfluxRBACStatus defines the observed state of KonfluxRBAC.\nFieldDescription conditions []k8s.io/apimachinery/pkg/apis/meta/v1.Condition Conditions represent the latest available observations of the KonfluxRBAC state\nKonfluxReleaseService Appears in:\nKonfluxReleaseService is the Schema for the konfluxreleaseservices API\nFieldDescription spec [Required] KonfluxReleaseServiceSpec No description provided. status [Required] KonfluxReleaseServiceStatus No description provided. KonfluxReleaseServiceList KonfluxReleaseServiceList contains a list of KonfluxReleaseService\nFieldDescription metadata [Required] k8s.io/apimachinery/pkg/apis/meta/v1.ListMeta No description provided. items [Required] []KonfluxReleaseService No description provided. KonfluxReleaseServiceSpec Appears in:\nKonfluxReleaseService\nReleaseServiceConfig\nKonfluxReleaseServiceSpec defines the desired state of KonfluxReleaseService\nFieldDescription releaseControllerManager ControllerManagerDeploymentSpec ReleaseControllerManager defines customizations for the controller-manager deployment.\nKonfluxReleaseServiceStatus Appears in:\nKonfluxReleaseService KonfluxReleaseServiceStatus defines the observed state of KonfluxReleaseService\nFieldDescription conditions []k8s.io/apimachinery/pkg/apis/meta/v1.Condition Conditions represent the latest available observations of the KonfluxReleaseService state\nKonfluxSegmentBridge Appears in:\nKonfluxSegmentBridge is the Schema for the konfluxsegmentbridges API.\nFieldDescription spec [Required] KonfluxSegmentBridgeSpec No description provided. status [Required] KonfluxSegmentBridgeStatus No description provided. KonfluxSegmentBridgeList KonfluxSegmentBridgeList contains a list of KonfluxSegmentBridge.\nFieldDescription metadata [Required] k8s.io/apimachinery/pkg/apis/meta/v1.ListMeta No description provided. items [Required] []KonfluxSegmentBridge No description provided. 
KonfluxSegmentBridgeSpec Appears in:\nKonfluxSegmentBridge\nTelemetryConfig\nKonfluxSegmentBridgeSpec defines the desired state of KonfluxSegmentBridge.\nFieldDescription segmentKey string SegmentKey is the write key used to authenticate with the Segment API. When not specified, a default key baked into the operator build is used, routing telemetry data to the Konflux dev team's Segment project.\nsegmentAPIURL string SegmentAPIURL is the base URL of the Segment API endpoint, without \"/batch\". The operator appends \"/batch\" to produce the SEGMENT_BATCH_API env var. Example: \"https://console.redhat.com/connections/api/v1\" When not specified, defaults to \"https://api.segment.io/v1\". Only plain HTTPS base URLs are supported (no query strings or fragments).\nKonfluxSegmentBridgeStatus Appears in:\nKonfluxSegmentBridge KonfluxSegmentBridgeStatus defines the observed state of KonfluxSegmentBridge.\nFieldDescription conditions []k8s.io/apimachinery/pkg/apis/meta/v1.Condition Conditions represent the latest available observations of the KonfluxSegmentBridge state\nKonfluxSpec Appears in:\nKonflux KonfluxSpec defines the desired state of Konflux.\nFieldDescription imageController ImageControllerConfig ImageController configures the image-controller component. The runtime configuration is copied to the KonfluxImageController CR by the operator.\nui KonfluxUIConfig KonfluxUI configures the UI component. The runtime configuration is copied to the KonfluxUI CR by the operator.\nintegrationService IntegrationServiceConfig KonfluxIntegrationService configures the integration-service component. The runtime configuration is copied to the KonfluxIntegrationService CR by the operator.\nreleaseService ReleaseServiceConfig KonfluxReleaseService configures the release-service component. The runtime configuration is copied to the KonfluxReleaseService CR by the operator.\nbuildService BuildServiceConfig KonfluxBuildService configures the build-service component. 
The runtime configuration is copied to the KonfluxBuildService CR by the operator.\nnamespaceLister NamespaceListerConfig NamespaceLister configures the namespace-lister component. The runtime configuration is copied to the KonfluxNamespaceLister CR by the operator.\ninfo KonfluxInfoConfig KonfluxInfo configures the info component. The runtime configuration is copied to the KonfluxInfo CR by the operator.\ncertManager CertManagerConfig CertManager configures the cert-manager component. The runtime configuration is copied to the KonfluxCertManager CR by the operator.\ninternalRegistry InternalRegistryConfig InternalRegistry configures the internal registry component. The runtime configuration is copied to the KonfluxInternalRegistry CR by the operator. Enabling internal registry requires trust-manager to be deployed.\ndefaultTenant DefaultTenantConfig DefaultTenant configures the default tenant component. The default tenant provides a namespace accessible by all authenticated users. The runtime configuration is copied to the KonfluxDefaultTenant CR by the operator.\ntelemetry TelemetryConfig Telemetry configures the user-facing telemetry component (internally backed by segment-bridge). When enabled, the operator deploys a CronJob that collects anonymized usage data from the cluster and sends it to Segment for analysis. The runtime configuration is copied to the KonfluxSegmentBridge CR by the operator.\nKonfluxStatus Appears in:\nKonflux KonfluxStatus defines the observed state of Konflux.\nFieldDescription conditions []k8s.io/apimachinery/pkg/apis/meta/v1.Condition Conditions represent the latest available observations of the Konflux state\ncomponents []ComponentStatus Components shows the status of individual Konflux components\nuiURL string UIURL is the URL to access the Konflux UI. 
This is populated from the KonfluxUI status when ingress is enabled.\nKonfluxUI Appears in:\nKonfluxUI is the Schema for the konfluxuis API\nFieldDescription spec [Required] KonfluxUISpec No description provided. status [Required] KonfluxUIStatus No description provided. KonfluxUIConfig Appears in:\nKonfluxSpec KonfluxUIConfig defines the configuration for the UI component. The Spec field is the runtime configuration passed to the component.\nFieldDescription spec KonfluxUISpec Spec configures the UI component.\nKonfluxUIList KonfluxUIList contains a list of KonfluxUI\nFieldDescription metadata [Required] k8s.io/apimachinery/pkg/apis/meta/v1.ListMeta No description provided. items [Required] []KonfluxUI No description provided. KonfluxUISpec Appears in:\nKonfluxUI\nKonfluxUIConfig\nKonfluxUISpec defines the desired state of KonfluxUI\nFieldDescription ingress IngressSpec Ingress defines the ingress configuration for KonfluxUI. This affects the proxy, oauth2-proxy, and dex components.\nproxy ProxyDeploymentSpec Proxy defines customizations for the proxy deployment.\ndex DexDeploymentSpec Dex defines customizations for the dex deployment.\nKonfluxUIStatus Appears in:\nKonfluxUI KonfluxUIStatus defines the observed state of KonfluxUI\nFieldDescription conditions []k8s.io/apimachinery/pkg/apis/meta/v1.Condition Conditions represent the latest available observations of the KonfluxUI state\ningress IngressStatus Ingress contains the observed state of the Ingress configuration.\nNamespaceListerConfig Appears in:\nKonfluxSpec NamespaceListerConfig defines the configuration for the namespace-lister component. 
The Spec field is the runtime configuration passed to the component.\nFieldDescription spec KonfluxNamespaceListerSpec Spec configures the namespace-lister component.\nNamespaceListerDeploymentSpec Appears in:\nKonfluxNamespaceListerSpec NamespaceListerDeploymentSpec defines customizations for the namespace-lister deployment.\nFieldDescription replicas [Required] int32 Replicas is the number of replicas for the namespace-lister deployment.\nnamespaceLister ContainerSpec NamespaceLister defines customizations for the namespace-lister container.\nNodePortServiceSpec Appears in:\nIngressSpec NodePortServiceSpec defines the NodePort service configuration for the proxy.\nFieldDescription httpsPort int32 HTTPSPort is the NodePort to use for the HTTPS port. If not specified, Kubernetes will allocate a port automatically. This is useful for exposing Konflux UI to the outside world without an Ingress controller.\nProxyDeploymentSpec Appears in:\nKonfluxUISpec ProxyDeploymentSpec defines customizations for the proxy deployment.\nFieldDescription replicas [Required] int32 Replicas is the number of replicas for the proxy deployment.\nnginx ContainerSpec Nginx defines customizations for the nginx container.\noauth2Proxy ContainerSpec OAuth2Proxy defines customizations for the oauth2-proxy container.\nPublicInfo Appears in:\nKonfluxInfoSpec PublicInfo contains configurable parameters for info.json\nFieldDescription environment [Required] string Environment is the environment type (development, production, staging)\nvisibility [Required] string Visibility is the visibility level (public, private)\nintegrations IntegrationsConfig Integrations contains integration configuration\nstatusPageUrl string StatusPageUrl is the URL to the status page\nrbac []RBACRole RBAC contains RBAC role definitions\nQuayCABundleSpec Appears in:\nKonfluxImageControllerSpec QuayCABundleSpec configures a custom CA bundle for Quay registry communication. 
The referenced ConfigMap must exist in the image-controller namespace.\nFieldDescription configMapName [Required] string ConfigMapName is the name of the ConfigMap containing the CA certificate.\nkey [Required] string Key is the key within the ConfigMap that contains the CA certificate in PEM format. Must be a plain filename without path separators or directory traversal sequences.\nRBACRole Appears in:\nPublicInfo RBACRole contains RBAC role definition\nFieldDescription name [Required] string Name is the ClusterRole name (e.g., \"konflux-admin-user-actions\")\ndescription [Required] string Description is the role description\ndisplayName string DisplayName is the human-readable name displayed in the UI. If not specified, defaults to the Name field.\nReleaseServiceConfig Appears in:\nKonfluxSpec ReleaseServiceConfig defines the configuration for the release-service component. The Spec field is the runtime configuration passed to the component.\nFieldDescription spec KonfluxReleaseServiceSpec Spec configures the release-service component.\nSBOMServerConfig Appears in:\nIntegrationsConfig SBOMServerConfig contains SBOM server configuration\nFieldDescription url [Required] string URL is the SBOM content URL\nsbom_sha [Required] string SBOMSha is the SBOM SHA URL\nTelemetryConfig Appears in:\nKonfluxSpec TelemetryConfig defines the user-facing telemetry configuration. Internally, this configures the segment-bridge component. The Enabled field controls whether the component is deployed. The Spec field is the runtime configuration passed to the KonfluxSegmentBridge CR.\nFieldDescription enabled bool Enabled controls whether the telemetry CronJob is deployed. 
Defaults to false if not specified.\nspec KonfluxSegmentBridgeSpec Spec configures the telemetry component.\n","categories":"","description":"Generated API reference documentation for Konflux Operator.","excerpt":"Generated API reference documentation for Konflux Operator.","ref":"/konflux-ci/docs/reference/konflux.v1alpha1/","tags":"","title":"Konflux v1alpha1 API"},{"body":"Building from source is intended for contributors to the Operator or anyone who needs to run a custom build. There are two modes:\nMode When to use Run Operator locally Iterative development - Operator runs on your machine, connects to a cluster Deploy Operator image Test a containerised build in-cluster Prerequisites Tool Minimum version Go v1.25.0 podman or docker podman v5.3.1 / docker v27.0.1 (required only when building an Operator image) kubectl v1.31.4 make — openssl v3.0.13 cluster-admin permissions A Kubernetes cluster with the following dependencies installed (see Setup): Tekton (or OpenShift Pipelines when using OpenShift) cert-manager trust-manager Kyverno Pipelines-as-Code Setup All ./scripts/ commands run from the repository root (konflux-ci/). All make commands run from the operator/ subdirectory. Clone the repository: git clone https://github.com/konflux-ci/konflux-ci.git cd konflux-ci Deploy the cluster dependencies: If you are working with a local Kind cluster, Local Deployment (Kind) provides a fully automated setup that handles cluster creation and dependency deployment in a single step. # Generic Kubernetes SKIP_DEX=true SKIP_INTERNAL_REGISTRY=true SKIP_SMEE=true ./deploy-deps.sh # OpenShift - use native operators instead of upstream ones USE_OPENSHIFT_PIPELINES=true USE_OPENSHIFT_CERTMANAGER=true \\ SKIP_DEX=true SKIP_INTERNAL_REGISTRY=true SKIP_SMEE=true \\ ./deploy-deps.sh Alternatively, apply the individual kustomizations under dependencies/ manually.\nRun Operator locally Running the Operator locally is the recommended workflow for most development scenarios. 
The Operator process runs on your machine and uses your kubectl context to connect to the cluster - no image build required.\nStep 1: Install the CRDs cd operator make install Step 2: Start the Operator make run The Operator connects to your cluster, watches for Konflux Custom Resources, and reconciles them. Keep this terminal open while you work.\nStep 3: Create and verify the Konflux Custom Resource In a separate terminal, see Applying the Konflux Custom Resource for instructions on creating a Konflux CR and verifying that all components are ready.\nDevelopment workflow After making code changes, stop the Operator with Ctrl+C and restart: make run No image rebuild or deployment restart is needed Run make help to see all available targets Deploy Operator image Use this approach when you want to run your custom build as an in-cluster deployment (e.g. to test Operator-managed upgrades or RBAC behaviour). There are two paths depending on your setup.\nPath 1: Full Kind deployment using the script If you are working with a local Kind cluster, deploy-local.sh with OPERATOR_INSTALL_METHOD=build handles the entire flow in one step - it builds the Operator image from your local checkout, loads it into Kind, deploys all dependencies, and installs the Operator:\nOPERATOR_INSTALL_METHOD=build ./scripts/deploy-local.sh See Local Deployment (Kind) for setup instructions and all configuration options.\nPath 2: Manual deployment on an existing cluster Use this path when you have an existing cluster that already has Konflux’s dependencies deployed — either manually or using the built-in scripts described in Setup — and you want to deploy only the Operator image.\nStep 1: Build and push the image cd operator make docker-build docker-push IMG=\u003cyour-registry\u003e/konflux-operator:\u003ctag\u003e Make sure you have push access to the registry and that the cluster can pull from it.\nStep 2: Install the CRDs make install Step 3: Deploy the Operator make deploy 
IMG=\u003cyour-registry\u003e/konflux-operator:\u003ctag\u003e If you encounter RBAC errors, ensure your user has cluster-admin privileges. Step 4: Create and verify the Konflux Custom Resource See Applying the Konflux Custom Resource for instructions on creating a Konflux CR and verifying that all components are ready.\nUninstall Remove the Konflux CR:\nkubectl delete konflux konflux Remove the CRDs:\nmake uninstall Undeploy the Operator (in-cluster mode only):\nmake undeploy What’s next Onboard a new Application - onboard an application, run builds, tests, and releases Local Deployment (Kind) - full automated Kind setup using deploy-local.sh API Reference - full CR field reference Troubleshooting - solutions to common issues Examples - sample Konflux CR configurations ","categories":"","description":"Building and running the Konflux Operator from source for development or custom deployments.","excerpt":"Building and running the Konflux Operator from source for development or custom deployments.","ref":"/konflux-ci/docs/installation/install-from-source/","tags":"","title":"Building and Installing from Source"},{"body":"Konflux supports any OCI-compliant container registry for storing built images.\nFor local Kind deployments, the internal OCI registry (registry-service.kind-registry.svc.cluster.local, exposed on localhost:5001) works out of the box with no authentication required. Note that images stored in the internal registry are lost when the Kind cluster is deleted.\nFor production deployments, use an external registry. The sections below cover obtaining registry credentials, creating push secrets for build and release pipelines, and optionally enabling the image-controller for automatic Quay repository provisioning when onboarding components via the UI.\nObtaining registry credentials Configuring push secrets for build and release pipelines requires registry credentials in the form of a Docker config.json file. 
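If you have not seen one before, the sketch below builds a minimal config.json by hand so its shape is clear. The username and token are placeholders, not real credentials; real files come from your registry provider as described in the steps that follow:

```shell
# Illustrative only: a Docker config.json maps registry hosts to
# base64-encoded "username:password" credentials under the "auths" key.
# "my-user" and "my-token" below are placeholders.
AUTH=$(printf 'my-user:my-token' | base64)
cat > /tmp/config.json <<EOF
{
  "auths": {
    "quay.io": {
      "auth": "${AUTH}"
    }
  }
}
EOF
```

You normally will not write this file yourself; podman login, docker login, or your provider's credential download produces it for you.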
The steps below describe how to obtain this file for the most common registry providers.\nQuay.io Log in to quay.io and click your user icon in the top-right corner. Select Account Settings. Click Generate Encrypted Password. Enter your login password and click Verify. Select Docker Configuration. Click Download \u003cyour-username\u003e-auth.json and note the download location. Use that path in the kubectl create secret command below. Docker Hub Log in to Docker Hub and navigate to Account Settings → Security. Create a new access token with read/write permissions. Authenticate locally to generate a config.json: podman login docker.io The config file is written to ${XDG_RUNTIME_DIR}/containers/auth.json (Podman) or ~/.docker/config.json (Docker). Use that path in the kubectl create secret command below. Other registries Follow your registry provider’s documentation to obtain a Docker config.json with authentication credentials. Most registries support podman login \u003cregistry\u003e or docker login \u003cregistry\u003e to generate the file.\nBuild pipeline push secret After the build pipeline builds an image, it pushes it to a container registry. If using a registry that requires authentication, the namespace where the pipeline runs must be configured with a push secret.\nTekton injects push secrets into pipelines by attaching them to a service account. The service account used for running the pipelines is created by the Build Service Operator and is named build-pipeline-\u003ccomponent-name\u003e.\nCreate the secret in the pipeline namespace (replace $NS with your namespace, e.g. 
user-ns1): kubectl create -n $NS secret generic regcred \\ --from-file=.dockerconfigjson=\u003cpath/to/config.json\u003e \\ --type=kubernetes.io/dockerconfigjson Attach the secret to the component’s build pipeline service account: kubectl patch -n $NS serviceaccount \"build-pipeline-${COMPONENT_NAME}\" \\ -p '{\"secrets\": [{\"name\": \"regcred\"}]}' Release pipeline push secret If the release pipeline pushes images to a container registry it needs its own push secret. The release pipeline runs under the service account named in the ReleasePlanAdmission (e.g. release-pipeline in the demo resources).\nIn the managed namespace (e.g. managed-ns2), create the secret the same way as for the build pipeline: kubectl create -n $NS secret generic regcred \\ --from-file=.dockerconfigjson=\u003cpath/to/config.json\u003e \\ --type=kubernetes.io/dockerconfigjson Attach it to the release pipeline service account: kubectl patch -n $NS serviceaccount release-pipeline \\ -p '{\"secrets\": [{\"name\": \"regcred\"}]}' Trusted Artifacts (ociStorage) If the release pipeline uploads Trusted Artifacts, set the ociStorage field in your ReleasePlanAdmission to your own OCI storage URL (e.g. your registry path). Ensure the release-pipeline service account has credentials to push to that location (e.g. an additional registry secret or Quay token linked to that service account).\n# In your ReleasePlanAdmission spec: pipeline: pipelineRef: ociStorage: quay.io/my-org/my-component-release-ta For local Kind deployments using the internal registry:\n# In your ReleasePlanAdmission spec: pipeline: pipelineRef: ociStorage: registry-service.kind-registry/test-component-release-ta Quay.io auto-provisioning (image-controller) The image-controller automatically creates Quay repositories when a component is onboarded via the Konflux UI. This is required for the UI-based onboarding flow. 
It is optional when creating components directly with Kubernetes manifests.\nStep 1: Create a Quay organization and OAuth token Create a Quay.io account if you don’t have one.\nCreate a Quay organization.\nCreate an OAuth Application and generate an access token:\nIn your Quay organization, go to Applications → Create New Application Click on the application name → Generate Token Select these permissions: Administer Organization Administer Repositories Create Repositories Click Generate Access Token → Authorize Application and copy the token. This is your only opportunity to view it.\nStep 2: Enable image-controller and create the secret Option A — local Kind deployment via deploy-local.sh (recommended):\nSet the environment variables before running the script. This automatically selects the konflux-e2e.yaml sample CR which has imageController.enabled: true:\nexport QUAY_TOKEN=\"\u003ctoken from step 3\u003e\" export QUAY_ORGANIZATION=\"\u003corganization from step 2\u003e\" ./scripts/deploy-local.sh Option B — manual setup on an existing cluster:\nFirst, enable image-controller in your Konflux CR:\napiVersion: konflux.konflux-ci.dev/v1alpha1 kind: Konflux metadata: name: konflux spec: imageController: enabled: true Then create the secret in the image-controller namespace:\nkubectl -n image-controller create secret generic quaytoken \\ --from-literal=quaytoken=\"\u003ctoken from step 3\u003e\" \\ --from-literal=organization=\"\u003corganization from step 2\u003e\" Self-hosted Quay registry The image-controller can be configured to work with a self-hosted Quay instance instead of the public quay.io. This requires two things: pointing image-controller to the self-hosted Quay API and, if the instance uses a custom CA certificate, providing that certificate.\nStep 1: Configure the Quay API URL To point image-controller at a self-hosted Quay instance, add the quayapiurl key to the quaytoken secret in the image-controller namespace. 
The value should be the full API URL including the /api/v1 suffix.\nIf you are creating the secret for the first time:\nkubectl -n image-controller create secret generic quaytoken \\ --from-literal=quaytoken=\"\u003cOAuth token\u003e\" \\ --from-literal=organization=\"\u003corganization\u003e\" \\ --from-literal=quayapiurl=\"https://quay.example.com/api/v1\" If the secret already exists, patch it:\nkubectl -n image-controller patch secret quaytoken \\ -p '{\"stringData\":{\"quayapiurl\":\"https://quay.example.com/api/v1\"}}' When quayapiurl is not set, image-controller defaults to the public Quay API (https://quay.io/api/v1).\nStep 2: Provide a custom CA certificate (optional) If the self-hosted Quay instance uses a TLS certificate signed by a custom CA (i.e. not trusted by the system CA bundle), you must provide the CA certificate so that image-controller can verify the connection.\nCreate a ConfigMap in the image-controller namespace with the CA certificate: kubectl -n image-controller create configmap quay-ca-bundle \\ --from-file=ca-bundle.crt=/path/to/your/ca-certificate.crt Configure the CA bundle in your Konflux CR: apiVersion: konflux.konflux-ci.dev/v1alpha1 kind: Konflux metadata: name: konflux spec: imageController: enabled: true spec: quayCABundle: configMapName: quay-ca-bundle key: ca-bundle.crt The operator will mount the CA certificate into the image-controller pod and set the QUAY_ADDITIONAL_CA environment variable pointing to it. The certificate is appended to the system CA pool, so connections to other registries are not affected.\nThe configMapName and key fields allow you to use any ConfigMap name and key. 
For example, if your ConfigMap has the certificate under a different key:\nkubectl -n image-controller create configmap my-ca \\ --from-file=quay-ca.crt=/path/to/ca.crt spec: imageController: enabled: true spec: quayCABundle: configMapName: my-ca key: quay-ca.crt ","categories":"","description":"Configuring container registries for build and release pipelines.","excerpt":"Configuring container registries for build and release pipelines.","ref":"/konflux-ci/docs/guides/registry-configuration/","tags":"","title":"Registry Configuration"},{"body":"You will now configure Konflux to release your application to the registry.\nThis requires:\nA pipeline that will run on push events to the component repository. ReleasePlan and ReleasePlanAdmission resources that will react to the snapshot created after the on-push pipeline is triggered, which in turn will trigger the creation of the release. If onboarded using the Konflux UI, the pipeline was already created and configured for you.\nIf onboarded using Kubernetes manifests, you should have copied the pipeline to the .tekton directory before creating your initial PR.\nCreate ReleasePlan and ReleasePlanAdmission Resources Once you merge a PR, the on-push pipeline will be triggered, and once it completes, a snapshot will be created and the integration tests will run against the container images built by the on-push pipeline.\nKonflux now needs ReleasePlan and ReleasePlanAdmission resources that will be used together with the snapshot for creating a new Release resource.\nThe ReleasePlan resource includes a reference to the application that the development team wants to release, along with the namespace where the application is supposed to be released (in this case, managed-ns2).\nThe ReleasePlanAdmission resource defines how the application should be released, and it is typically maintained not by the development team, but by the managed environment team (the team that supports the deployments of that application).\nThe 
ReleasePlanAdmission resource makes use of an Enterprise Contract (EC) policy, which defines criteria for gating releases.\nFor more details, you can examine the manifests under the test/resources/demo-users/user/sample-components/managed-ns2/ directory.\nTo do all that, follow these steps:\nEdit the ReleasePlan manifest at test/resources/demo-users/user/sample-components/ns2/release-plan.yaml and verify that the application field contains the name of your application.\nDeploy the Release Plan under the development team namespace (user-ns2):\nkubectl create -f ./test/resources/demo-users/user/sample-components/ns2/release-plan.yaml Edit the ReleasePlanAdmission manifest at test/resources/demo-users/user/sample-components/managed-ns2/rpa.yaml.\nIf you're using the in-cluster registry, you can deploy the ReleasePlanAdmission manifest as-is; none of the changes described below are required. Under applications, verify that your application is the one listed.\nUnder the components mapping list, set the name field so it matches the name of your component and replace the value of the repository field with the URL of the repository on the registry to which your released images are to be pushed. This is typically a different repository from the one builds are pushed to during tests.\nFor example, if your component is called test-component and you wish to release your images to a Quay.io repository called my-user/my-konflux-component-release:\nmapping: components: - name: test-component repository: quay.io/my-user/my-konflux-component-release The example release pipeline requires a repository into which trusted artifacts will be written as a means of passing data between tasks in the pipeline.\nThe ociStorage field tells the pipeline where those artifacts are stored. 
For example:\nociStorage: registry-service.kind-registry/test-component-release-ta Deploy the managed environment team’s namespace:\nkubectl apply -k ./test/resources/demo-users/user/sample-components/managed-ns2 At this point, you can click Releases on the left pane in the UI. The status for your ReleasePlan should be “Matched”.\nCreate a Registry Secret for the Managed Namespace If you're using the in-cluster registry, you can skip this step and proceed to Trigger the Release. In order for the release service to be able to push images to the registry, a secret is needed on the managed namespace (managed-ns2).\nThe secret needs to be created on this namespace regardless of whether you used the UI for onboarding or not, but if you didn’t, then this secret is identical to the one that was previously created on the development namespace (user-ns2).\nTo create it, follow the instructions for creating a push secret for the release pipeline for namespace managed-ns2.\nTrigger the Release You can now merge your PR and observe the behavior:\nMerge the PR in GitHub. On the Konflux UI, you should now see your on-push pipeline being triggered. Once it finishes successfully, the integration tests should run once more, and a release should be created under the Releases tab. Wait for the Release to be complete, and check your registry repository for the released image. Congratulations! You just created a release for your application.\nYour released image should be available inside the repository pointed to by your ReleasePlanAdmission resource.\nWorking with External Image Registry (Optional) This section provides instructions if you’re interested in using an external image registry instead of the in-cluster one.\nPush Pull Request Builds to External Registry First, configure your application to use an external registry instead of the internal one. To do this, you need a repository on a public registry where you have push permissions (e.g. 
Docker Hub, Quay.io):\nCreate an account on a public registry (unless you have one already).\nCreate a push secret based on your login information and deploy it to your user namespace on the cluster (e.g. user-ns2).\nCreate a new repository on the registry to which your images will be pushed. For example, in Quay.io, click the Create New Repository button and provide it with a name and location. Free accounts tend to have limits on private repositories, so for the purpose of this example, you can make your repository public.\nConfigure your build pipeline to use your new repository on the public registry instead of the local registry.\nEdit .tekton/testrepo-pull-request.yaml inside your testrepo fork and replace the value of output-image to point to your repository. For example, if using Quay.io with username my-user and a repository called my-konflux-component:\n- name: output-image value: quay.io/my-user/my-konflux-component:on-pr-{{revision}} Push your changes to your testrepo fork, either as a new PR or as a change to an existing PR. Observe the behavior as before, and verify that the build pipeline finishes successfully, and that your public repository contains the images pushed by the pipeline.\nUse External Registry for on-push Pipeline Edit the content of the copy you made earlier to the on-push pipeline at .tekton/testrepo-push.yaml, replacing the value of output-image so that the repository URL is identical to the one previously set for the pull-request pipeline.\nFor example, if using Quay.io with username my-user and a repository called my-konflux-component:\n- name: output-image value: quay.io/my-user/my-konflux-component:{{revision}} This is the same as for the pull request pipeline, but the tag portion now only includes the revision (no on-pr- prefix). 
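If you prefer to script these output-image edits rather than making them by hand, a sed substitution works. The sketch below runs against a stand-in file with hypothetical paths and repository names; substitute your fork's .tekton/testrepo-push.yaml and your own repository:

```shell
# Stand-in for .tekton/testrepo-push.yaml (hypothetical path and content).
mkdir -p /tmp/testrepo/.tekton
cat > /tmp/testrepo/.tekton/testrepo-push.yaml <<'EOF'
    - name: output-image
      value: registry-service.kind-registry/test-component:{{revision}}
EOF
# Point output-image at the external repository instead of the local registry.
sed -i 's|value: .*:{{revision}}|value: quay.io/my-user/my-konflux-component:{{revision}}|' \
  /tmp/testrepo/.tekton/testrepo-push.yaml
```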
","categories":"","description":"Configure ReleasePlan and ReleasePlanAdmission resources to release your application to a container registry.","excerpt":"Configure ReleasePlan and ReleasePlanAdmission resources to release your application to a container registry.","ref":"/konflux-ci/docs/onboard/release/","tags":"","title":"Configure Releases"},{"body":"Konflux uses Dex as a federated identity broker and oauth2-proxy to authenticate users against one or more third-party identity providers. The operator manages both components and exposes their configuration through the Konflux CR.\nAll authentication settings live under spec.ui.spec.dex in the Konflux CR.\nOverview Authentication in Konflux works as follows:\nThe user’s browser is redirected to Dex. Dex presents the configured connectors (GitHub, OpenShift, OIDC, LDAP, etc.) as login options. After the user authenticates with an upstream provider, Dex issues a token to oauth2-proxy. oauth2-proxy validates the token and grants access to the Konflux UI. The spec.ui.spec.dex.config section controls which identity providers are available and how Dex is configured.\nThe static-password configuration included in the default sample CR (and used in local Kind deployments) is intended for development and CI only. Remove staticPasswords and configure an OIDC connector before deploying to production. GitHub OAuth GitHub OAuth is a common connector for Konflux deployments.\nCreating a GitHub OAuth App Create a GitHub OAuth App following the GitHub documentation.\nWhen registering the app, set the Authorization callback URL to:\nhttps://\u003cyour-konflux-hostname\u003e/idp/callback Dex is not exposed at a separate hostname — it runs behind the Konflux proxy at the /idp/ path of your Konflux UI URL. 
The operator derives this URL automatically from ingress.host (or the OpenShift default ingress domain if not explicitly set).\nOnce created, note the Client ID and generate a Client Secret — you will need both in the next step.\nCreating the Secret Store the credentials in the konflux-ui namespace where Dex runs:\nkubectl create secret generic github-client \\ --namespace konflux-ui \\ --from-literal=clientID=\"\u003cyour-client-id\u003e\" \\ --from-literal=clientSecret=\"\u003cyour-client-secret\u003e\" Configuring the Connector Reference the secret via environment variables in the dex container and add a github connector to the Dex configuration:\napiVersion: konflux.konflux-ci.dev/v1alpha1 kind: Konflux metadata: name: konflux spec: ui: spec: ingress: enabled: true dex: dex: env: - name: GITHUB_CLIENT_ID valueFrom: secretKeyRef: name: github-client key: clientID - name: GITHUB_CLIENT_SECRET valueFrom: secretKeyRef: name: github-client key: clientSecret config: connectors: - type: github id: github name: GitHub config: clientID: $GITHUB_CLIENT_ID clientSecret: $GITHUB_CLIENT_SECRET Restricting Access to Specific Organisations To allow only members of certain GitHub organisations (and optionally specific teams) to log in, add an orgs block to the connector config:\nconfig: connectors: - type: github id: github name: GitHub config: clientID: $GITHUB_CLIENT_ID clientSecret: $GITHUB_CLIENT_SECRET orgs: - name: my-org teams: - developers - admins - name: another-org Refer to the Dex GitHub connector documentation for the full list of available options, including org and team restrictions.\nLogin with OpenShift When Konflux is deployed on an OpenShift cluster, the operator can automatically configure a Dex connector that delegates authentication to the cluster’s built-in OAuth server. 
Users can then log in with any identity provider already configured in OpenShift (LDAP, HTPasswd, GitHub, etc.).\nThe behaviour is controlled by configureLoginWithOpenShift in spec.ui.spec.dex.config:\nValue Behaviour true OpenShift connector is added when running on OpenShift false OpenShift connector is never added, even on OpenShift (unset) OpenShift connector is added automatically when running on OpenShift To explicitly enable OpenShift login:\napiVersion: konflux.konflux-ci.dev/v1alpha1 kind: Konflux metadata: name: konflux spec: ui: spec: dex: config: configureLoginWithOpenShift: true When the operator detects OpenShift and this value is unset or true, it creates a ServiceAccount and Secret in the konflux-ui namespace and registers the cluster’s OAuth server as a Dex connector automatically - no additional secrets or connector configuration is required.\nTo disable OpenShift login on an OpenShift cluster:\nconfig: configureLoginWithOpenShift: false Generic OIDC Connector Any OIDC-compliant identity provider (Google, Keycloak, Azure AD, Okta, etc.) 
can be added using the oidc connector type.\nExample: Google config: connectors: - type: oidc id: google name: Google config: clientID: $GOOGLE_CLIENT_ID clientSecret: $GOOGLE_CLIENT_SECRET issuer: https://accounts.google.com Refer to the Dex OIDC connector documentation for the full list of available options.\nLDAP Connector Konflux supports authenticating users against an LDAP or Active Directory server through Dex’s built-in LDAP connector.\nconfig: connectors: - type: ldap id: ldap name: LDAP config: host: ldap.example.com:636 bindDN: cn=admin,dc=example,dc=com bindPW: $LDAP_BIND_PASSWORD userSearch: baseDN: ou=Users,dc=example,dc=com filter: \"(objectClass=person)\" username: uid idAttr: uid emailAttr: mail nameAttr: cn groupSearch: baseDN: ou=Groups,dc=example,dc=com filter: \"(objectClass=groupOfNames)\" nameAttr: cn userMatchers: - userAttr: DN groupAttr: member Store the bind password in a secret and expose it to the Dex container via an environment variable:\nkubectl create secret generic ldap-bind \\ --namespace konflux-ui \\ --from-literal=bindPassword=\"\u003cyour-bind-password\u003e\" dex: dex: env: - name: LDAP_BIND_PASSWORD valueFrom: secretKeyRef: name: ldap-bind key: bindPassword config: connectors: - type: ldap # ... Refer to the Dex LDAP connector documentation for the complete reference.\nStatic Passwords (Local Development Only) For local development and CI testing, Dex supports a built-in password database. Enable it with enablePasswordDB: true and define users in staticPasswords:\nconfig: enablePasswordDB: true passwordConnector: local staticPasswords: - email: user1@konflux.dev # Generate a bcrypt hash: echo password | htpasswd -BinC 10 admin | cut -d: -f2 hash: \"$2a$10$2b2cU8CPhOTaGrs1HRQuAueS7JTT5ZHsHSzYiFPm1leZck7Mc8T4W\" # notsecret username: user1 userID: \"7138d2fe-724e-4e86-af8a-db7c4b080e20\" Static passwords are stored as bcrypt hashes in the Konflux CR, but the CR itself is visible to anyone with read access to the cluster. 
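To see why this matters, any user who can read the CR can dump the hashes with a single command (a sketch; the jsonpath follows the `spec.ui.spec.dex.config` structure described above):

```shell
# Extract every static-password bcrypt hash from the Konflux CR.
# Anyone with read access to the CR can do the same and attack the
# hashes offline.
kubectl get konflux konflux \
  -o jsonpath='{.spec.ui.spec.dex.config.staticPasswords[*].hash}'
```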
Never use this configuration in production. Use an OIDC connector instead. Combining Multiple Connectors Multiple connectors can be configured simultaneously. Dex presents all of them on its login page and allows users to choose. The following example enables GitHub, Google, and OpenShift login together:\napiVersion: konflux.konflux-ci.dev/v1alpha1 kind: Konflux metadata: name: konflux spec: ui: spec: ingress: enabled: true dex: dex: env: - name: GITHUB_CLIENT_ID valueFrom: secretKeyRef: name: github-client key: clientID - name: GITHUB_CLIENT_SECRET valueFrom: secretKeyRef: name: github-client key: clientSecret - name: GOOGLE_CLIENT_ID valueFrom: secretKeyRef: name: google-client key: clientID - name: GOOGLE_CLIENT_SECRET valueFrom: secretKeyRef: name: google-client key: clientSecret config: configureLoginWithOpenShift: true connectors: - type: github id: github name: GitHub config: clientID: $GITHUB_CLIENT_ID clientSecret: $GITHUB_CLIENT_SECRET - type: oidc id: google name: Google config: clientID: $GOOGLE_CLIENT_ID clientSecret: $GOOGLE_CLIENT_SECRET issuer: https://accounts.google.com Additional Connectors Dex supports many more upstream providers including Bitbucket Cloud, GitLab, SAML 2.0, LinkedIn, Microsoft, and more. 
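As one more illustration, a GitLab connector follows the same shape as the GitHub and Google examples above (a sketch based on the upstream Dex GitLab connector options; verify the field names against the Dex documentation):

```yaml
config:
  connectors:
  - type: gitlab
    id: gitlab
    name: GitLab
    config:
      # baseURL is only required for self-hosted GitLab instances;
      # it defaults to https://gitlab.com.
      baseURL: https://gitlab.com
      clientID: $GITLAB_CLIENT_ID
      clientSecret: $GITLAB_CLIENT_SECRET
```

As with the other connectors, store the credentials in a secret and expose them to the Dex container as environment variables.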
For the full list of available connectors and their configuration options, refer to the Dex connectors documentation.\n","categories":"","description":"Configuring login providers for Konflux using Dex connectors - GitHub, OpenShift, OIDC, LDAP, and static passwords.","excerpt":"Configuring login providers for Konflux using Dex connectors - GitHub, OpenShift, OIDC, LDAP, and static passwords.","ref":"/konflux-ci/docs/guides/oidc-configuration/","tags":"","title":"Authentication and OIDC Configuration"},{"body":"This guide covers deploying Konflux on any Kubernetes cluster using the pre-built release bundle.\nPrerequisites Tool Minimum version git v2.46 kubectl v1.31.4 openssl v3.0.13 cluster-admin permissions A Kubernetes cluster with the following dependencies installed (see Setup): Tekton (or OpenShift Pipelines when using OpenShift) cert-manager trust-manager Kyverno Pipelines-as-Code Setup Clone the repository: git clone https://github.com/konflux-ci/konflux-ci.git cd konflux-ci Deploy the cluster dependencies: If you are working with a local Kind cluster, Local Deployment (Kind) provides a fully automated setup that handles cluster creation and dependency deployment in a single step. # Generic Kubernetes SKIP_DEX=true SKIP_INTERNAL_REGISTRY=true SKIP_SMEE=true ./deploy-deps.sh # OpenShift - use native operators instead of upstream ones USE_OPENSHIFT_PIPELINES=true USE_OPENSHIFT_CERTMANAGER=true \\ SKIP_DEX=true SKIP_INTERNAL_REGISTRY=true SKIP_SMEE=true \\ ./deploy-deps.sh Alternatively, apply the individual kustomizations under dependencies/ manually.\nStep 1: Install the operator Apply the latest release bundle. 
This installs all CRDs, the operator deployment, RBAC, and required namespaces in a single command:\nkubectl apply -f https://github.com/konflux-ci/konflux-ci/releases/latest/download/install.yaml To install a specific version instead of the latest, replace latest with the version tag:\nkubectl apply -f https://github.com/konflux-ci/konflux-ci/releases/download/v0.0.1/install.yaml Wait for the operator to be ready:\nkubectl wait --for=condition=Available deployment/konflux-operator-controller-manager \\ -n konflux-operator --timeout=300s Step 2: Create and verify the Konflux Custom Resource See Applying the Konflux Custom Resource for instructions on creating a Konflux CR and verifying that all components are ready.\nUninstall Remove the Konflux CR and all managed components:\nkubectl delete konflux konflux Remove the operator and CRDs:\nkubectl delete -f https://github.com/konflux-ci/konflux-ci/releases/latest/download/install.yaml What’s next Onboard a new Application — onboard an application, run builds, tests, and releases GitHub Application Secrets — create a GitHub App and configure webhook delivery Registry Configuration — configure an external container registry for build and release pipelines API Reference — full CR field reference Troubleshooting — solutions to common installation and runtime issues Examples — sample Konflux CR configurations ","categories":"","description":"Step-by-step guide for installing Konflux from a pre-built release bundle.","excerpt":"Step-by-step guide for installing Konflux from a pre-built release bundle.","ref":"/konflux-ci/docs/installation/install-release/","tags":"","title":"Installing from Release"},{"body":"The Konflux Operator is published to the community operators catalog and can be installed through OLM on any cluster that has OLM installed.\nChannels Channels are scoped to a release stream (the vMAJOR.MINOR version). 
For example, for the v0.1 stream:\nChannel Description stable-v0.1 Latest stable release for the v0.1 stream — recommended for production candidate-v0.1 Release candidates for the v0.1 stream — for early testing of upcoming versions Substitute the appropriate stream (e.g. v0.2) when a newer stream is available.\nPrerequisites Tool Minimum version git v2.46 kubectl v1.31.4 openssl v3.0.13 cluster-admin permissions A Kubernetes cluster with OLM installed and the following dependencies (see Setup): Tekton (or OpenShift Pipelines when using OpenShift) cert-manager trust-manager Kyverno Pipelines-as-Code OLM is included by default on OpenShift. For vanilla Kubernetes, follow the OLM getting started guide to install it first. Setup Clone the repository: git clone https://github.com/konflux-ci/konflux-ci.git cd konflux-ci Deploy the cluster dependencies: # Generic Kubernetes SKIP_DEX=true SKIP_INTERNAL_REGISTRY=true SKIP_SMEE=true ./deploy-deps.sh # OpenShift - use native operators instead of upstream ones USE_OPENSHIFT_PIPELINES=true USE_OPENSHIFT_CERTMANAGER=true \\ SKIP_DEX=true SKIP_INTERNAL_REGISTRY=true SKIP_SMEE=true \\ ./deploy-deps.sh Alternatively, apply the individual kustomizations under dependencies/ manually.\nInstall via kubectl Step 1: Create the namespace kubectl create namespace konflux-operator Step 2: Create an OperatorGroup The Konflux Operator requires a cluster-wide OperatorGroup (targetNamespaces: []):\napiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: konflux-operator namespace: konflux-operator spec: upgradeStrategy: Default targetNamespaces: [] kubectl apply -f operatorgroup.yaml Step 3: Create a Subscription apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: konflux-operator namespace: konflux-operator spec: name: konflux-operator # channel: stable-v0.1 # omit to use the default channel installPlanApproval: Automatic source: community-operators sourceNamespace: \u003ccatalog-namespace\u003e # 
openshift-marketplace on OpenShift, olm on vanilla Kubernetes kubectl apply -f subscription.yaml Step 4: Verify the installation Wait for the operator to be ready:\nkubectl wait --for=condition=Available deployment/konflux-operator-controller-manager \\ -n konflux-operator --timeout=300s Check the subscription and install plan status:\nkubectl get subscription konflux-operator -n konflux-operator kubectl get installplan -n konflux-operator Install via the OpenShift Web Console On OpenShift, you can also install through the OperatorHub UI:\nNavigate to Operators → OperatorHub. Search for Konflux. Select the Konflux Operator and click Install. Choose the desired channel (e.g. stable-v0.1) and set the installation namespace to konflux-operator. Click Install and wait for the operator to become ready. Create and verify the Konflux Custom Resource See Applying the Konflux Custom Resource for instructions on creating a Konflux CR and verifying that all components are ready.\nWhat’s next Onboard a new Application — onboard an application, run builds, tests, and releases API Reference — full CR field reference Troubleshooting — solutions to common issues Examples — sample Konflux CR configurations ","categories":"","description":"Installing the Konflux Operator through the Operator Lifecycle Manager (OLM).","excerpt":"Installing the Konflux Operator through the Operator Lifecycle Manager (OLM).","ref":"/konflux-ci/docs/installation/install-olm/","tags":"","title":"Installing from OLM"},{"body":"Konflux organizes workloads into two types of namespaces: tenant namespaces where teams develop and build applications, and managed namespaces where release pipelines deploy applications. 
This guide explains how to create, label, and configure both types.\nNamespace types Type Purpose Label Tenant namespace Development and build workloads for a team konflux-ci.dev/type: tenant Managed namespace Release pipeline deployments managed by an ops team konflux-ci.dev/type: tenant Default tenant Shared namespace for all authenticated users (dev/test only) Managed by the operator Both tenant and managed namespaces require the konflux-ci.dev/type: tenant label so that the namespace lister component can discover them and make them available in the Konflux UI.\nTenant namespaces A tenant namespace is the workspace for a development team. Applications, components, integration tests, and release plans are all created inside tenant namespaces.\nDefault tenant namespace By default, the Konflux operator creates a default-tenant namespace where all authenticated users automatically receive maintainer permissions. This is convenient for local development and testing, but not appropriate for production multi-tenant environments where strict namespace isolation is required.\nSet spec.defaultTenant.enabled in the Konflux CR to control this behaviour:\napiVersion: konflux.konflux-ci.dev/v1alpha1 kind: Konflux metadata: name: konflux spec: defaultTenant: enabled: true # true (default): creates default-tenant shared namespace # false: disables it; create per-team namespaces instead When set to false, create dedicated per-team namespaces as described in the Creating a tenant namespace section below.\nCreating a tenant namespace Create the namespace: kubectl create namespace \u003cnamespace-name\u003e Label it so the namespace lister can discover it: kubectl label namespace \u003cnamespace-name\u003e konflux-ci.dev/type=tenant Alternatively, use a Kubernetes manifest:\napiVersion: v1 kind: Namespace metadata: name: my-team-tenant labels: konflux-ci.dev/type: tenant Granting users access Konflux provides three ClusterRoles for namespace access:\nClusterRole Description 
konflux-admin-user-actions Full access to all Konflux resources including secrets konflux-maintainer-user-actions Partial access to Konflux resources without access to secrets konflux-contributor-user-actions View access to Konflux resources without access to secrets Note: Grant konflux-admin-user-actions only to users who need access to secrets (e.g. namespace owners or cluster administrators). For most team members, prefer konflux-maintainer-user-actions for day-to-day development work, or konflux-contributor-user-actions for read-only access.\nGrant a user admin access to a tenant namespace:\nkubectl create rolebinding \u003crolebinding-name\u003e \\ --clusterrole konflux-admin-user-actions \\ --user \u003cuser-email\u003e \\ -n \u003cnamespace-name\u003e Grant a user maintainer access:\nkubectl create rolebinding \u003crolebinding-name\u003e \\ --clusterrole konflux-maintainer-user-actions \\ --user \u003cuser-email\u003e \\ -n \u003cnamespace-name\u003e Grant a user contributor (read-only) access:\nkubectl create rolebinding \u003crolebinding-name\u003e \\ --clusterrole konflux-contributor-user-actions \\ --user \u003cuser-email\u003e \\ -n \u003cnamespace-name\u003e Example: creating a team tenant namespace kubectl create namespace my-team-tenant kubectl label namespace my-team-tenant konflux-ci.dev/type=tenant # Grant admin access to a team member kubectl create rolebinding my-team-admin \\ --clusterrole konflux-admin-user-actions \\ --user developer@example.com \\ -n my-team-tenant Or using Kubernetes manifests:\napiVersion: v1 kind: Namespace metadata: name: my-team-tenant labels: konflux-ci.dev/type: tenant --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: my-team-admin namespace: my-team-tenant roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: konflux-admin-user-actions subjects: - kind: User name: developer@example.com apiGroup: rbac.authorization.k8s.io Granting multiple users access You can bind multiple 
users to the same namespace either with individual RoleBinding resources or by listing multiple subjects in a single binding:\napiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: my-team-admins namespace: my-team-tenant roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: konflux-admin-user-actions subjects: - kind: User name: alice@example.com apiGroup: rbac.authorization.k8s.io - kind: User name: bob@example.com apiGroup: rbac.authorization.k8s.io Managed namespaces A managed namespace is where the release service deploys released applications. It is typically owned by the managed environment team (the team that supports the deployments of that application) rather than the development team that builds the application.\nThe ReleasePlanAdmission resource — which defines how and where an application is released — lives in the managed namespace. The development team creates a ReleasePlan in their tenant namespace that references the managed namespace as the release target.\nCreating a managed namespace Managed namespaces require the same konflux-ci.dev/type: tenant label as tenant namespaces:\nkubectl create namespace \u003cmanaged-namespace-name\u003e kubectl label namespace \u003cmanaged-namespace-name\u003e konflux-ci.dev/type=tenant Or using a manifest:\napiVersion: v1 kind: Namespace metadata: name: my-team-managed labels: konflux-ci.dev/type: tenant Release pipeline service account The release pipeline runs under a dedicated service account in the managed namespace. 
Create the service account before creating rolebindings that reference it:\napiVersion: v1 kind: ServiceAccount metadata: name: release-pipeline namespace: my-team-managed Granting access to the managed namespace The managed environment team needs admin access to manage releases, and the release-pipeline service account needs its own rolebinding:\n# Grant the managed environment team admin access kubectl create rolebinding ops-team-admin \\ --clusterrole konflux-admin-user-actions \\ --user ops@example.com \\ -n my-team-managed # Grant the release pipeline service account access kubectl create rolebinding release-pipeline-resource-role-binding \\ --clusterrole release-pipeline-resource-role \\ --serviceaccount my-team-managed:release-pipeline \\ -n my-team-managed Example: full managed namespace setup apiVersion: v1 kind: Namespace metadata: name: my-team-managed labels: konflux-ci.dev/type: tenant --- apiVersion: v1 kind: ServiceAccount metadata: name: release-pipeline namespace: my-team-managed --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: ops-team-admin namespace: my-team-managed roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: konflux-admin-user-actions subjects: - kind: User name: ops@example.com apiGroup: rbac.authorization.k8s.io --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: release-pipeline-resource-role-binding namespace: my-team-managed roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: release-pipeline-resource-role subjects: - kind: ServiceAccount name: release-pipeline namespace: my-team-managed Connecting tenant and managed namespaces for releases Once both namespaces exist, connect them for the release flow:\nIn the tenant namespace, the development team creates a ReleasePlan that points to the managed namespace: apiVersion: appstudio.redhat.com/v1alpha1 kind: ReleasePlan metadata: name: my-app-release-plan namespace: my-team-tenant spec: 
application: my-application target: my-team-managed # The managed namespace In the managed namespace, the operations team creates a ReleasePlanAdmission that authorizes the release and defines its pipeline: apiVersion: appstudio.redhat.com/v1alpha1 kind: ReleasePlanAdmission metadata: name: my-app-admission namespace: my-team-managed spec: applications: - my-application origin: my-team-tenant # The development team's tenant namespace pipeline: pipelineRef: \u003cpipeline-ref\u003e serviceAccountName: release-pipeline policy: default When both resources exist and match, the ReleasePlan status shows “Matched”, indicating the release flow is ready.\n","categories":"","description":"How to create and configure tenant namespaces and managed namespaces in Konflux.","excerpt":"How to create and configure tenant namespaces and managed namespaces in Konflux.","ref":"/konflux-ci/docs/guides/namespace-management/","tags":"","title":"Tenant and Managed Namespace Management"},{"body":"The Konflux Kind environment targets workstations, CI runners, and other resource-constrained systems. Some workloads, however, prioritize performance over resource conservation.\nResource consumption for Konflux core services is configured through the Konflux Custom Resource, which the operator manages declaratively.\nWorkloads Deployed with Konflux Konflux is deployed using an Operator. Resource consumption for its components is configured through the Konflux Custom Resource rather than by patching manifests directly. The operator manages these resources declaratively.\nTo adjust resource consumption, edit your Konflux CR and specify resource requests and limits in the component specifications. For example:\napiVersion: konflux.konflux-ci.dev/v1alpha1 kind: Konflux metadata: name: konflux spec: buildService: spec: buildControllerManager: manager: resources: requests: cpu: 30m memory: 128Mi limits: cpu: 30m memory: 128Mi # Similar configuration for other components... 
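After applying the CR, you can confirm that the operator propagated the values to the underlying workloads (a sketch; the build-service namespace is taken from the example above, but deployment names vary by component and version):

```shell
# List each deployment in the component namespace together with the
# resource requests and limits of its first container.
kubectl get deployments -n build-service \
  -o jsonpath='{range .items[*]}{.metadata.name}{": "}{.spec.template.spec.containers[0].resources}{"\n"}{end}'
```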
See the sample Konflux CR for resource configuration examples across all components.\n","categories":"","description":"Tuning CPU and memory for Konflux core services and Tekton workloads in resource-constrained environments.","excerpt":"Tuning CPU and memory for Konflux core services and Tekton workloads in resource-constrained environments.","ref":"/konflux-ci/docs/guides/resource-management/","tags":"","title":"Resource Management"},{"body":"Conforma (previously known as Enterprise Contract) is the policy verification tool integrated into Konflux. It defines which checks are performed on your container images before they can be released. Conforma policies are evaluated in two places:\nIntegration tests - after every build pipeline run, before a snapshot is promoted. Release - as a gating step in the release pipeline, referenced from a ReleasePlanAdmission. For a broader overview of where policies are evaluated in your workflow, see Policy Evaluations in the Konflux documentation.\nPre-deployed policies The operator deploys an EnterpriseContractPolicy CR into the enterprise-contract-service namespace. This is publicly readable by any authenticated cluster user and can be referenced directly without creating your own.\nCR name Display name Rule collection(s) Description default Default @redhat Used for new Konflux applications. Includes most Red Hat rules, excluding hermetic builds. List the policies on a running cluster:\nkubectl get enterprisecontractpolicy -n enterprise-contract-service Inspect any individual policy:\nkubectl get enterprisecontractpolicy default -n enterprise-contract-service -o yaml Using a policy in integration tests When you create an application through the Konflux UI, an IntegrationTestScenario that runs the Conforma pipeline is created automatically. 
It references the enterprise-contract pipeline from the konflux-ci/build-definitions repository and uses the default policy.\nYou can inspect the integration test scenarios in your namespace:\nkubectl get integrationtestscenario -n \u003cyour-namespace\u003e kubectl get integrationtestscenario \u003ctest-name\u003e -n \u003cyour-namespace\u003e -o yaml To switch to a different policy, set the POLICY_CONFIGURATION parameter on the IntegrationTestScenario. The value can be:\nA namespace-qualified CR name: \u003cnamespace\u003e/\u003ccr-name\u003e (e.g. a pre-deployed policy or one you have created in your own namespace). A git URL pointing to a policy configuration file (e.g. github.com/conforma/config//slsa3). Via the Konflux UI Open your application and go to the Integration tests tab. Select the three dots next to the Enterprise Contract test and choose Edit. Click Add parameter. Set Name to POLICY_CONFIGURATION. Set Value to the policy reference, for example enterprise-contract-service/default. Click Save changes. For the full procedure, see Configuring the enterprise contract policy in the Konflux documentation.\nVia the CLI Edit the IntegrationTestScenario directly:\nkubectl edit integrationtestscenario \u003ctest-name\u003e -n \u003cyour-namespace\u003e Add or update the params key under spec:\nspec: application: my-application params: - name: POLICY_CONFIGURATION value: enterprise-contract-service/default To trigger a new integration test run after saving, open a pull request or comment /retest on an existing one.\nUsing a policy in release The release pipeline validates the snapshot against a Conforma policy before proceeding. 
The policy is configured in the spec.policy field of the ReleasePlanAdmission CR that lives in the managed tenant namespace.\napiVersion: appstudio.redhat.com/v1alpha1 kind: ReleasePlanAdmission metadata: name: sre-production namespace: managed-tenant-namespace spec: applications: - my-application origin: \u003cdev-tenant-namespace\u003e pipeline: pipelineRef: \u003cpipeline-ref\u003e serviceAccountName: release-pipeline policy: default # (1) (1) The policy field is required and accepts a bare policy name — the name of an EnterpriseContractPolicy CR in the enterprise-contract-service namespace (e.g. default). It must match the pattern ^[a-z0-9]([-a-z0-9]*[a-z0-9])?$\nFor complete details on creating and configuring a ReleasePlanAdmission, see Creating a release plan admission in the Konflux documentation.\nCreating a custom policy If none of the pre-deployed policies suit your use case, you can define your own. There are two approaches:\nOption A: Git URL Point directly to a policy configuration file in a git repository. Several predefined configurations are available in the conforma/config repository. For example, to use the SLSA level 3 configuration hosted there:\ngithub.com/conforma/config//slsa3 The // syntax separates the git repository URL from the subdirectory path. Conforma looks for a policy.yaml or .ec/policy.yaml file in the specified directory.\nUse this value directly as the POLICY_CONFIGURATION parameter or spec.policy field - no cluster resource needs to be created.\nOption B: EnterpriseContractPolicy CR Create an EnterpriseContractPolicy CR in your namespace for full control over which rules are included or excluded. 
You can use any of the pre-deployed policies as a starting point.\nCreate a file named policy.yaml and adjust it to your requirements:\napiVersion: appstudio.redhat.com/v1alpha1 kind: EnterpriseContractPolicy metadata: name: my-custom-policy namespace: \u003cyour-namespace\u003e spec: description: A custom Conforma policy for my application publicKey: k8s://openshift-pipelines/public-key sources: - name: Release policies policy: - oci::quay.io/conforma/release-policy:konflux data: - oci::quay.io/konflux-ci/tekton-catalog/data-acceptable-bundles:latest - github.com/release-engineering/rhtap-ec-policy//data config: include: - \"@slsa3\" exclude: - hermetic_build_task.* Apply it to the cluster:\nkubectl apply -f policy.yaml Integration tests — reference it as \u003cyour-namespace\u003e/my-custom-policy in the POLICY_CONFIGURATION parameter of your IntegrationTestScenario.\nRelease — the ReleasePlanAdmission.spec.policy field only accepts a bare policy name and the release service looks up policies in the enterprise-contract-service namespace. To use a custom policy for release, deploy it there instead:\nkubectl apply -f policy.yaml -n enterprise-contract-service Then set policy: my-custom-policy in your ReleasePlanAdmission.\nSee the Conforma configuration reference for the full set of available include/exclude options and rule collections.\nCustomizing an existing policy to waive violations If Conforma reports a violation that you cannot remedy by changing the build process, you can waive the failing check by customizing the policy. The recommended workflow is:\nIdentify which policy is being used for your application. Decide whether to modify the shared policy or create a new one. Creating a new policy scoped to your namespace avoids impacting other users. Copy the relevant pre-deployed policy as a starting point, add your exclusions, and apply it to your namespace (see EnterpriseContractPolicy CR above). 
Update your integration test and ReleasePlanAdmission to reference the new policy. See Customizing Policy in the Konflux documentation for more details.\n","categories":"","description":"How to use and customize Conforma policies for integration tests and release.","excerpt":"How to use and customize Conforma policies for integration tests and release.","ref":"/konflux-ci/docs/guides/conforma-policy-configuration/","tags":"","title":"Conforma Policy Configuration"},{"body":"Installation issues Port 5000 conflict (macOS) Port 5000 is used by macOS AirPlay Receiver.\nOption 1: Disable AirPlay Receiver — System Settings → General → AirDrop \u0026 Handoff → AirPlay Receiver → Off\nOption 2: Use a different port. In scripts/deploy-local.env, set:\nREGISTRY_HOST_PORT=5001 Docker Hub rate limits If deployment fails with toomanyrequests errors, check for rate limit events:\nkubectl get events -A | grep toomanyrequests Pre-load the affected images into Kind to bypass Docker Hub:\npodman login docker.io podman pull ghcr.io/project-zot/zot:v2.1.13 kind load docker-image ghcr.io/project-zot/zot:v2.1.13 --name konflux unknown field \"replacements\" error If you see error: json: unknown field \"replacements\" while running setup scripts, your kubectl is outdated. Install the latest version: https://kubernetes.io/docs/tasks/tools/#kubectl\nUnable to bind PVCs If the cluster cannot fulfill volume claims, deploy-deps.sh fails with:\nerror: timed out waiting for the condition on persistentvolumeclaims/test-pvc ... Test PVC unable to bind on default storage class If using Kind, try restarting the cluster. 
Otherwise, ensure that PVCs can be allocated on the cluster’s default storage class.\nOperator not starting kubectl logs -n konflux-operator deployment/konflux-operator-controller-manager kubectl get crds | grep konflux Components not deploying kubectl get konflux konflux -o jsonpath='{.status.conditions}' | jq kubectl get events -n konflux-operator --sort-by='.lastTimestamp' Dex not starting Check Dex logs:\nkubectl logs -n konflux-ui deployment/dex Verify the Dex configuration:\nkubectl get configmap -n konflux-ui dex -o yaml Secrets not found Verify secrets exist in the correct namespaces:\nkubectl get secrets -n pipelines-as-code kubectl get secrets -n build-service kubectl get secrets -n integration-service If any are missing, recreate them — see GitHub Application Secrets.\nRuntime issues UI not accessible (https://localhost:9443) Verify that your Konflux CR includes the NodePort configuration. The default sample CR sets httpsPort: 30011, which Kind maps to host port 9443 via kind-config.yaml.\nIf the configuration is missing, patch the Konflux CR:\nkubectl patch konflux konflux --type=merge -p ' spec: ui: spec: ingress: nodePortService: httpsPort: 30011 ' Always use https:// — typing localhost:9443 without the scheme defaults to HTTP and will fail to load. Restarting the cluster after host reboot After a host reboot or sleep/wake cycle, restart the Kind container:\npodman restart konflux-control-plane # or: docker restart konflux-control-plane Then re-apply the inotify limits:\nsudo sysctl fs.inotify.max_user_watches=524288 sudo sysctl fs.inotify.max_user_instances=512 It may take a few minutes for the UI to become available again.\nPipelines not triggering on PRs Confirm that events were logged to your smee channel. 
If not, re-check the steps for setting up the GitHub App and confirm that it is installed on your fork repository.\nCheck the smee client pod logs and confirm they show channel events being forwarded to pipelines-as-code:\nkubectl get pods -n smee-client kubectl logs -n smee-client gosmee-client-\u003cpod-id\u003e If the smee client pod is missing or the logs do not include forwarding entries, verify the smee channel URL in the smee-client manifest, then redeploy it:\nkubectl delete -f ./smee/smee-client.yaml # fix the channel URL in the manifest kubectl create -f ./smee/smee-client.yaml If the host machine goes to sleep, the smee client may stop responding to events. Delete the pod and wait for it to be recreated: kubectl delete pods -n smee-client gosmee-client-\u003cpod-id\u003e Check the pipelines-as-code controller logs for errors linking events to a Repository resource:\nkubectl get pods -n pipelines-as-code kubectl logs -n pipelines-as-code pipelines-as-code-controller-\u003cpod-id\u003e If the logs mention that a Repository resource cannot be found, compare the URL in the log to the one in your deployed application-and-component.yaml. Fix the manifest and redeploy it:\nkubectl apply -f ./test/resources/demo-users/user/sample-components/ns2/application-and-component.yaml This is only relevant if the application was onboarded manually (not via the Konflux UI). If the logs mention that pipelines-as-code-secret is missing or malformed, verify it has the correct fields (github-private-key, github-application-id, webhook.secret). 
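A quick way to verify the fields is to compare the secret's data keys against the three required names. The sketch below shows the check with a stand-in value; on a real cluster, capture the data with the kubectl command in the comment:

```shell
# On a cluster, populate data from the secret:
#   data=$(kubectl get secret pipelines-as-code-secret \
#     -n pipelines-as-code -o jsonpath='{.data}')
# Stand-in value so the check can be followed end to end:
data='{"github-application-id":"...","github-private-key":"...","webhook.secret":"..."}'

for key in github-private-key github-application-id webhook.secret; do
  case "$data" in
    *"\"$key\""*) echo "ok: $key" ;;
    *)            echo "MISSING: $key" ;;
  esac
done
```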
If it needs to be recreated, delete it and re-apply it following the GitHub secrets setup:\nkubectl delete secret pipelines-as-code-secret -n pipelines-as-code Once the above checks pass, post /retest as a comment on your PR and observe the behavior again.\nPR fails when webhook secret was not added If a webhook secret was not added when creating the GitHub App, PR pipelines will fail with:\nThere was an issue validating the commit: \"could not validate payload, check your webhook secret?: no signature has been detected, for security reason we are not allowing webhooks that has no secret\" Go to your GitHub App settings and verify a webhook secret is configured. See Pipelines as Code documentation for instructions.\nUnable to create Application with Component using the Konflux UI If you see a 404 Not Found error when trying to create an Application with a Component via the Konflux UI, the image-controller was most likely not installed or the Quay token secret is missing.\nEnable image-controller in your Konflux CR and create the quaytoken secret as described in the Registry Configuration guide, then try again.\nRelease fails If a release is triggered but then fails, check the logs of the pods in the managed namespace (e.g. managed-ns2):\nkubectl get pods -n managed-ns2 Check the logs of any pods with Error status:\nkubectl logs -n managed-ns2 \u003cpod-name\u003e Unfinished string at EOF Logs contain:\nparse error: Unfinished string at EOF at line 2, column 0 Verify that you provided a value for the repository field in test/resources/demo-users/user/sample-components/managed-ns2/rpa.yaml, then redeploy:\nkubectl apply -k ./test/resources/demo-users/user/sample-components/managed-ns2 400 Bad Request Logs contain:\nError: PUT https://quay.io/...: unexpected status code 400 Bad Request Verify that you created a registry push secret for the managed namespace (managed-ns2). 
See Release pipeline push secret for instructions.\nRunning out of resources Insufficient memory — Podman on macOS On macOS, Podman runs in a virtual machine. Check the VM memory:\npodman machine inspect | grep Memory If the allocation is insufficient, create a new machine with more resources:\npodman machine stop podman machine init --memory 20480 --cpus 8 --rootful konflux-large podman machine start konflux-large Then configure the deployment script to use it (in scripts/deploy-local.env):\nPODMAN_MACHINE_NAME=\"konflux-large\" PID limit issues — Podman on Linux If Tekton pipelines fail with “cannot fork” errors, increase the PID limit:\npodman update --pids-limit 8192 konflux-control-plane Too many open files Increase inotify limits temporarily (the deploy-local.sh script does this automatically, but the settings do not persist across reboots, so re-run these commands if the issue reappears after a host restart):\nsudo sysctl fs.inotify.max_user_watches=524288 sudo sysctl fs.inotify.max_user_instances=512 Performance tuning — reduce UI replica count If builds are slow or the UI is sluggish, you can reduce resource requirements by lowering the replica count in your Konflux CR:\nspec: ui: spec: proxy: replicas: 1 # default is 2-3 
When ingress is disabled (the default on plain Kubernetes), the UI is accessible via port-forwarding on localhost:9443.\nIn both environments, you may want to manage your own routing — for example, using a standard Kubernetes Ingress with your own ingress controller, a hardware load balancer, or any other external routing mechanism.\nThis guide shows how to configure the operator so that all internal components (oauth2-proxy, dex, etc.) use your custom hostname, without the operator creating or managing an Ingress resource.\nConfiguration Set ingress.enabled: false and ingress.host to your desired hostname in the Konflux CR:\napiVersion: konflux.konflux-ci.dev/v1alpha1 kind: Konflux metadata: name: konflux spec: ui: spec: ingress: enabled: false host: konflux.example.com With this configuration:\ningress.enabled: false — the operator will not create an Ingress resource. You are responsible for routing traffic to the proxy service in the konflux-ui namespace. ingress.host — the operator configures oauth2-proxy, dex, and all related components to use this hostname for OIDC redirect URLs, issuer URLs, and allowed redirect domains. This ensures authentication flows work correctly with your custom URL. Routing to the Backend Regardless of which routing method you choose, traffic must reach the proxy service in the konflux-ui namespace on port web-tls (port 9443). The proxy service terminates TLS using a certificate signed by the ui-ca CA (managed by cert-manager).\nKubernetes Ingress On plain Kubernetes, create an Ingress resource with your ingress controller. The exact annotations depend on your ingress controller (nginx, traefik, etc.):\napiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: konflux-ui namespace: konflux-ui annotations: # Add annotations specific to your ingress controller here. # The backend (proxy service) uses TLS, so you may need to configure # your ingress controller for backend TLS / SSL passthrough. 
spec: ingressClassName: nginx # Adjust to your ingress controller rules: - host: konflux.example.com http: paths: - path: / pathType: Prefix backend: service: name: proxy port: name: web-tls tls: - hosts: - konflux.example.com secretName: my-tls-secret # Your TLS certificate for the edge Note: Since the backend proxy service uses TLS (port web-tls / 9443), you may need to configure your ingress controller to trust the backend CA certificate. The CA certificate is stored in the ui-ca Secret in the konflux-ui namespace.\nOpenShift Ingress (Route) On OpenShift, the Ingress-to-Route controller automatically converts Ingress resources into Routes. Use the OpenShift-specific annotations for TLS re-encryption:\napiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: konflux-ui namespace: konflux-ui annotations: route.openshift.io/destination-ca-certificate-secret: ui-ca route.openshift.io/termination: reencrypt spec: rules: - host: konflux.apps.my-cluster.example.com http: paths: - path: / pathType: Prefix backend: service: name: proxy port: name: web-tls Key points:\nroute.openshift.io/termination: reencrypt — the router terminates the external TLS connection and creates a new TLS connection to the backend. route.openshift.io/destination-ca-certificate-secret: ui-ca — tells the router to trust the CA certificate from the ui-ca Secret when connecting to the backend proxy service. Other Routing Methods The same approach works with any routing mechanism (Gateway API, hardware load balancer, service mesh, etc.) as long as traffic reaches the proxy service on port 9443 with TLS re-encryption. 
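With Gateway API, for instance, an HTTPRoute can attach your hostname to an existing Gateway and forward traffic to the proxy service. A minimal sketch (the Gateway name my-gateway and its namespace are assumptions; your Gateway implementation must still be configured to re-encrypt and trust the backend CA, e.g. via a BackendTLSPolicy where supported):

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: konflux-ui
  namespace: konflux-ui
spec:
  parentRefs:
    - name: my-gateway          # assumption: an existing Gateway
      namespace: gateway-system # assumption: the Gateway's namespace
  hostnames:
    - konflux.example.com
  rules:
    - backendRefs:
        - name: proxy
          port: 9443
```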
The backend proxy serves TLS using a certificate signed by the ui-ca CA — your routing layer must be configured to trust this CA for the backend connection.\n","categories":"","description":"How to configure a custom URL for Konflux UI when managing your own ingress or external routing.","excerpt":"How to configure a custom URL for Konflux UI when managing your own ingress or external routing.","ref":"/konflux-ci/docs/guides/custom-url/","tags":"","title":"Custom URL without Operator-Managed Ingress"},{"body":"The following examples demonstrate how to configure Konflux Operator Custom Resources:\nKonflux E2E Test Configuration Konflux configuration for E2E tests with image-controller enabled\n# Title: Konflux E2E Test Configuration # Description: Konflux configuration for E2E tests with image-controller enabled # # This sample extends the base configuration with image-controller enabled, # which is required for E2E tests but optional for local development. # # Usage: # E2E CI: Used by .github/workflows/operator-test-e2e.yaml # Local E2E testing: KONFLUX_CR=operator/config/samples/konflux-e2e.yaml ./scripts/deploy-local.sh apiVersion: konflux.konflux-ci.dev/v1alpha1 kind: Konflux metadata: labels: app.kubernetes.io/name: konflux-operator app.kubernetes.io/managed-by: kustomize name: konflux spec: ui: spec: ingress: nodePortService: httpsPort: 30011 proxy: replicas: 1 nginx: resources: requests: cpu: 30m memory: 128Mi limits: cpu: 30m memory: 128Mi dex: config: enablePasswordDB: true passwordConnector: local # WARNING: Demo users for CI and local development ONLY # For production, remove staticPasswords and configure OIDC connectors # See docs/operator-deployment.md for production authentication examples staticPasswords: - email: \"user1@konflux.dev\" # bcrypt hash of the string \"password\": $(echo password | htpasswd -BinC 10 admin | cut -d: -f2) hash: \"$2a$10$2b2cU8CPhOTaGrs1HRQuAueS7JTT5ZHsHSzYiFPm1leZck7Mc8T4W\" # notsecret username: \"user1\" userID: 
\"7138d2fe-724e-4e86-af8a-db7c4b080e20\" - email: \"user2@konflux.dev\" # bcrypt hash of the string \"password\": $(echo password | htpasswd -BinC 10 admin | cut -d: -f2) hash: \"$2a$10$2b2cU8CPhOTaGrs1HRQuAueS7JTT5ZHsHSzYiFPm1leZck7Mc8T4W\" # notsecret username: \"user2\" userID: \"ea8e8ee1-2283-4e03-83d4-b00f8b821b64\" integrationService: spec: integrationControllerManager: replicas: 1 manager: resources: requests: cpu: 30m memory: 128Mi limits: cpu: 30m memory: 128Mi releaseService: spec: releaseControllerManager: replicas: 1 manager: resources: requests: cpu: 30m memory: 256Mi limits: cpu: 30m memory: 256Mi buildService: spec: buildControllerManager: replicas: 1 manager: resources: requests: cpu: 30m memory: 128Mi limits: cpu: 30m memory: 128Mi namespaceLister: spec: namespaceLister: replicas: 1 namespaceLister: resources: requests: cpu: 100m memory: 128Mi limits: cpu: 500m memory: 256Mi info: spec: publicInfo: environment: development visibility: public banner: items: # Display a banner at the top of the Konflux UI to all users. - summary: \"Welcome to Konflux-CI! This development and testing environment has been deployed with default insecure passwords!\" type: danger certManager: # CreateClusterIssuer controls whether cluster issuer resources are created # Defaults to true if not specified createClusterIssuer: true internalRegistry: # Enabled controls whether internal registry resources are deployed # Defaults to false if not specified # For local development on Kind, this provides an OCI registry at localhost:5001 enabled: true # Default tenant creates a shared namespace accessible by all authenticated users. # Needed for E2E tests. See konflux_v1alpha1_konflux.yaml for detailed documentation. 
defaultTenant: enabled: true # E2E-specific: Enable image-controller for E2E tests imageController: enabled: true Empty Konflux Configuration Empty Konflux configuration (Default values)\n# Title: Empty Konflux Configuration # Description: Empty Konflux configuration (Default values) apiVersion: konflux.konflux-ci.dev/v1alpha1 kind: Konflux metadata: labels: app.kubernetes.io/name: konflux-operator app.kubernetes.io/managed-by: kustomize name: konflux spec: {} Konflux with GitHub Authentication Konflux configuration with GitHub authentication\n# Title: Konflux with GitHub Authentication # Description: Konflux configuration with GitHub authentication apiVersion: konflux.konflux-ci.dev/v1alpha1 kind: Konflux metadata: labels: app.kubernetes.io/name: konflux-operator app.kubernetes.io/managed-by: kustomize name: konflux spec: ui: spec: ingress: enabled: true proxy: replicas: 1 nginx: resources: requests: cpu: 30m memory: 128Mi limits: cpu: 30m memory: 128Mi dex: dex: # The secret should be created in the konflux-ui namespace env: - name: GITHUB_CLIENT_ID valueFrom: secretKeyRef: name: github-client key: clientID - name: GITHUB_CLIENT_SECRET valueFrom: secretKeyRef: name: github-client key: clientSecret config: connectors: # See the Dex documentation for available connector configuration - https://dexidp.io/docs/connectors - type: github id: github name: GitHub config: clientID: $GITHUB_CLIENT_ID clientSecret: $GITHUB_CLIENT_SECRET Konflux Configuration Complete Konflux configuration with all components (UI, Build Service, Integration Service, Release Service, etc.)\n# Title: Konflux Configuration # Description: Complete Konflux configuration with all components (UI, Build Service, Integration Service, Release Service, etc.) # # This sample is used in CI tests and local development. It includes demo users with static passwords # for testing purposes. 
For production deployments, remove the staticPasswords section and configure # OIDC authentication (GitHub, Google, LDAP, etc.) instead. # # Usage: # CI: Used automatically by .github/workflows/operator-test-e2e.yaml # Local: ./scripts/deploy-local.sh (uses this file by default) apiVersion: konflux.konflux-ci.dev/v1alpha1 kind: Konflux metadata: labels: app.kubernetes.io/name: konflux-operator app.kubernetes.io/managed-by: kustomize name: konflux spec: ui: spec: # NodePort exposes the UI on the host for Kind clusters. # Kind maps container port 30011 to host port 9443 (see kind-config.yaml). # Access the UI at https://localhost:9443 (HTTPS required). # For non-Kind clusters, use Ingress instead (see konflux_v1alpha1_konfluxui.yaml). ingress: nodePortService: httpsPort: 30011 proxy: replicas: 1 nginx: resources: requests: cpu: 30m memory: 128Mi limits: cpu: 30m memory: 128Mi dex: config: enablePasswordDB: true passwordConnector: local # WARNING: Demo users for CI and local development ONLY # For production, remove staticPasswords and configure OIDC connectors # See docs/operator-deployment.md for production authentication examples staticPasswords: - email: \"user1@konflux.dev\" # bcrypt hash of the string \"password\": $(echo password | htpasswd -BinC 10 admin | cut -d: -f2) hash: \"$2a$10$2b2cU8CPhOTaGrs1HRQuAueS7JTT5ZHsHSzYiFPm1leZck7Mc8T4W\" # notsecret username: \"user1\" userID: \"7138d2fe-724e-4e86-af8a-db7c4b080e20\" - email: \"user2@konflux.dev\" # bcrypt hash of the string \"password\": $(echo password | htpasswd -BinC 10 admin | cut -d: -f2) hash: \"$2a$10$2b2cU8CPhOTaGrs1HRQuAueS7JTT5ZHsHSzYiFPm1leZck7Mc8T4W\" # notsecret username: \"user2\" userID: \"ea8e8ee1-2283-4e03-83d4-b00f8b821b64\" integrationService: spec: integrationControllerManager: replicas: 1 manager: resources: requests: cpu: 30m memory: 128Mi limits: cpu: 30m memory: 128Mi releaseService: spec: releaseControllerManager: replicas: 1 manager: resources: requests: cpu: 30m memory: 128Mi 
limits: cpu: 30m memory: 128Mi buildService: spec: buildControllerManager: replicas: 1 manager: resources: requests: cpu: 30m memory: 128Mi limits: cpu: 30m memory: 128Mi namespaceLister: spec: namespaceLister: replicas: 1 namespaceLister: resources: requests: cpu: 100m memory: 128Mi limits: cpu: 500m memory: 256Mi info: spec: publicInfo: environment: development visibility: public banner: items: # Display a banner at the top of the Konflux UI to all users. - summary: \"Welcome to Konflux-CI! This development and testing environment has been deployed with default insecure passwords!\" type: danger certManager: # CreateClusterIssuer controls whether cluster issuer resources are created # Defaults to true if not specified createClusterIssuer: true internalRegistry: # Enabled controls whether internal registry resources are deployed # Defaults to false if not specified # For local development on Kind, this provides an OCI registry at localhost:5001 enabled: true # Default tenant creates a shared namespace (\"default-tenant\") accessible by all authenticated users. # All authenticated users get maintainer permissions in this namespace. # Appropriate for local development and testing. For production multi-tenant environments where # you need strict namespace isolation, set enabled: false and create per-team tenant namespaces. # Defaults to true if not specified. defaultTenant: enabled: true # telemetry: # # Enabled controls whether segment-bridge telemetry resources are deployed. # # Defaults to false if not specified. # enabled: true # spec: # # segmentKey is the Segment write key; omit to use the default build-time key # # segmentKey: \"your-write-key\" # # # segmentAPIURL is the base URL without \"/batch\". The operator appends \"/batch\". # # Defaults to \"https://api.segment.io/v1\" if not specified. 
# # segmentAPIURL: \"https://console.redhat.com/connections/api/v1\" Konflux Application API Configuration KonfluxApplicationAPI configuration (minimal example)\n# Title: Konflux Application API Configuration # Description: KonfluxApplicationAPI configuration (minimal example) apiVersion: konflux.konflux-ci.dev/v1alpha1 kind: KonfluxApplicationAPI metadata: labels: app.kubernetes.io/name: konflux-operator app.kubernetes.io/managed-by: kustomize name: konflux-application-api spec: {} Konflux Build Service Configuration KonfluxBuildService configuration with custom resource limits and environment variables\n# Title: Konflux Build Service Configuration # Description: KonfluxBuildService configuration with custom resource limits and environment variables apiVersion: konflux.konflux-ci.dev/v1alpha1 kind: KonfluxBuildService metadata: labels: app.kubernetes.io/name: konflux-operator app.kubernetes.io/managed-by: kustomize name: konflux-build-service spec: buildControllerManager: replicas: 2 manager: resources: requests: cpu: 100m memory: 256Mi limits: cpu: 500m memory: 512Mi env: - name: PAC_WEBHOOK_URL value: \"http://pipelines-as-code-controller.pipelines-as-code.svc.cluster.local:8180\" - name: EXAMPLE_CUSTOM_CONFIG valueFrom: configMapKeyRef: name: build-service-config key: custom-config - name: EXAMPLE_SECRET_VALUE valueFrom: secretKeyRef: name: build-service-secret key: secret-key Konflux Cert Manager Configuration KonfluxCertManager configuration for certificate management\n# Title: Konflux Cert Manager Configuration # Description: KonfluxCertManager configuration for certificate management apiVersion: konflux.konflux-ci.dev/v1alpha1 kind: KonfluxCertManager metadata: labels: app.kubernetes.io/name: konflux-operator app.kubernetes.io/managed-by: kustomize name: konflux-cert-manager spec: # CreateClusterIssuer controls whether cluster issuer resources are created # Defaults to true if not specified createClusterIssuer: true Konflux Default Tenant Configuration 
KonfluxDefaultTenant configuration for default tenant\n# Title: Konflux Default Tenant Configuration # Description: KonfluxDefaultTenant configuration for default tenant apiVersion: konflux.konflux-ci.dev/v1alpha1 kind: KonfluxDefaultTenant metadata: labels: app.kubernetes.io/name: konflux-operator app.kubernetes.io/managed-by: kustomize name: konfluxdefaulttenant-sample spec: # TODO(user): Add fields here Konflux Enterprise Contract Configuration KonfluxEnterpriseContract configuration for enterprise contract policies\n# Title: Konflux Enterprise Contract Configuration # Description: KonfluxEnterpriseContract configuration for enterprise contract policies apiVersion: konflux.konflux-ci.dev/v1alpha1 kind: KonfluxEnterpriseContract metadata: labels: app.kubernetes.io/name: konflux-operator app.kubernetes.io/managed-by: kustomize name: konflux-enterprise-contract spec: {} Konflux Image Controller Configuration KonfluxImageController configuration for image management\n# Title: Konflux Image Controller Configuration # Description: KonfluxImageController configuration for image management apiVersion: konflux.konflux-ci.dev/v1alpha1 kind: KonfluxImageController metadata: labels: app.kubernetes.io/name: konflux-operator app.kubernetes.io/managed-by: kustomize name: konflux-image-controller spec: {} Konflux Info Configuration KonfluxInfo full configuration for information display\n# Title: Konflux Info Configuration # Description: KonfluxInfo full configuration for information display apiVersion: konflux.konflux-ci.dev/v1alpha1 kind: KonfluxInfo metadata: labels: app.kubernetes.io/name: konflux-operator app.kubernetes.io/managed-by: kustomize name: konflux-info spec: publicInfo: # Environment type: development, production, or staging environment: production # Visibility level: public or private visibility: public # Optional status page URL statusPageUrl: \"https://status.konflux.example.com\" # Integration configurations integrations: # GitHub App integration github: 
application_url: \"https://github.com/apps/my-konflux-app/installations/new\" # SBOM server configuration sbom_server: url: \"https://sbom.example.com/content\" sbom_sha: \"https://sbom.example.com/sha\" # Image Controller configuration image_controller: enabled: true notifications: # Webhook notification example - title: \"Build Complete Notification\" event: \"build_complete\" method: \"webhook\" config: url: \"https://webhook.example.com/build/complete\" # Email notification example - title: \"Repository Push Notification\" event: \"repo_push\" method: \"email\" config: email: \"devops-team@example.com\" # Another webhook with different event - title: \"Image Scan Complete\" event: \"image_scan_complete\" method: \"webhook\" config: url: \"https://webhook.example.com/scan\" timeout: \"30s\" # RBAC role definitions rbac: # Admin role with custom display name - name: \"konflux-admin-user-actions\" description: \"Full access to Konflux resources including secrets and administrative operations\" displayName: \"Administrator\" # Maintainer role (displayName defaults to name) - name: \"konflux-maintainer-user-actions\" description: \"Manage workspace resources without access to sensitive or destructive actions\" # Viewer role - name: \"konflux-viewer-user-actions\" description: \"Read-only access to view CI results and workspace information\" displayName: \"Viewer Role\" # Custom role - name: \"konflux-custom-role\" description: \"Custom role for specific use case with limited permissions\" # Banner configurations banner: items: # Simple informational banner (always visible) - summary: \"Welcome to Konflux-CI! 
This is a production environment.\" type: info # Warning banner with time-based scheduling (Mondays 9 AM - 5 PM EST) - summary: \"**Scheduled Maintenance**: System maintenance will occur on Friday, March 15th from 2:00 AM to 4:00 AM EST.\" type: warning startTime: \"09:00\" endTime: \"17:00\" timeZone: \"America/New_York\" dayOfWeek: 1 # Monday # Danger banner for specific date (one-time event) - summary: \"**CRITICAL**: Security patch deployment in progress. Some services may be temporarily unavailable.\" type: danger startTime: \"14:00\" endTime: \"18:00\" timeZone: \"UTC\" year: 2025 month: 3 dayOfMonth: 15 # Info banner for specific day of week (recurring weekly) - summary: \"Weekly team standup reminder: Every Monday at 10:00 AM.\" type: info startTime: \"09:00\" endTime: \"11:00\" timeZone: \"America/Los_Angeles\" dayOfWeek: 1 # Monday (0=Sunday, 1=Monday, etc.) # Warning banner for specific month and day (recurring annually) - summary: \"**Annual Review Period**: Performance reviews are due by end of month.\" type: warning startTime: \"00:00\" endTime: \"23:59\" timeZone: \"UTC\" month: 12 dayOfMonth: 31 # Info banner with Markdown formatting - summary: | **New Feature Available**: - Enhanced build pipeline visualization - Improved security scanning - [View Documentation](https://docs.konflux.example.com) type: info startTime: \"08:00\" endTime: \"20:00\" timeZone: \"Europe/London\" Konflux Integration Service Configuration KonfluxIntegrationService configuration with custom resource limits and environment variables\n# Title: Konflux Integration Service Configuration # Description: KonfluxIntegrationService configuration with custom resource limits and environment variables apiVersion: konflux.konflux-ci.dev/v1alpha1 kind: KonfluxIntegrationService metadata: labels: app.kubernetes.io/name: konflux-operator app.kubernetes.io/managed-by: kustomize name: konflux-integration-service spec: integrationControllerManager: replicas: 3 manager: resources: requests: 
cpu: 100m memory: 256Mi limits: cpu: 512m memory: 1Gi env: - name: EXAMPLE_CUSTOM_CONFIG valueFrom: configMapKeyRef: name: integration-service-config key: custom-config - name: EXAMPLE_SECRET_VALUE valueFrom: secretKeyRef: name: integration-service-secret key: secret-key Konflux Internal Registry Configuration KonfluxInternalRegistry configuration for internal registry\n# Title: Konflux Internal Registry Configuration # Description: KonfluxInternalRegistry configuration for internal registry apiVersion: konflux.konflux-ci.dev/v1alpha1 kind: KonfluxInternalRegistry metadata: labels: app.kubernetes.io/name: konflux-operator app.kubernetes.io/managed-by: kustomize name: konflux-internal-registry spec: # TODO(user): Add fields here Konflux Namespace Lister Configuration KonfluxNamespaceLister configuration for namespace lister with custom resource limits and environment variables\n# Title: Konflux Namespace Lister Configuration # Description: KonfluxNamespaceLister configuration for namespace lister with custom resource limits and environment variables apiVersion: konflux.konflux-ci.dev/v1alpha1 kind: KonfluxNamespaceLister metadata: labels: app.kubernetes.io/name: konflux-operator app.kubernetes.io/managed-by: kustomize name: konflux-namespace-lister spec: namespaceLister: replicas: 3 namespaceLister: resources: requests: cpu: 100m memory: 256Mi limits: cpu: 512m memory: 1Gi env: - name: LOG_LEVEL value: \"0\" - name: CACHE_RESYNC_PERIOD value: \"10m\" - name: EXAMPLE_CUSTOM_CONFIG valueFrom: configMapKeyRef: name: namespace-lister-config key: custom-config - name: EXAMPLE_SECRET_VALUE valueFrom: secretKeyRef: name: namespace-lister-secret key: secret-key Konflux RBAC Configuration KonfluxRBAC configuration for role-based access control\n# Title: Konflux RBAC Configuration # Description: KonfluxRBAC configuration for role-based access control apiVersion: konflux.konflux-ci.dev/v1alpha1 kind: KonfluxRBAC metadata: labels: app.kubernetes.io/name: konflux-operator 
app.kubernetes.io/managed-by: kustomize name: konflux-rbac spec: {} Konflux Release Service Configuration KonfluxReleaseService configuration with custom resource limits and environment variables\n# Title: Konflux Release Service Configuration # Description: KonfluxReleaseService configuration with custom resource limits and environment variables apiVersion: konflux.konflux-ci.dev/v1alpha1 kind: KonfluxReleaseService metadata: labels: app.kubernetes.io/name: konflux-operator app.kubernetes.io/managed-by: kustomize name: konflux-release-service spec: releaseControllerManager: replicas: 3 manager: resources: requests: cpu: 100m memory: 256Mi limits: cpu: 100m memory: 256Mi env: - name: DEFAULT_RELEASE_PVC valueFrom: configMapKeyRef: name: release-service-manager-properties key: DEFAULT_RELEASE_PVC - name: EXAMPLE_CUSTOM_CONFIG valueFrom: configMapKeyRef: name: release-service-config key: custom-config - name: EXAMPLE_SECRET_VALUE valueFrom: secretKeyRef: name: release-service-secret key: secret-key Konflux UI Configuration KonfluxUI configuration with ingress, proxy, and dex settings\n# Title: Konflux UI Configuration # Description: KonfluxUI configuration with ingress, proxy, and dex settings apiVersion: konflux.konflux-ci.dev/v1alpha1 kind: KonfluxUI metadata: labels: app.kubernetes.io/name: konflux-operator app.kubernetes.io/managed-by: kustomize name: konflux-ui spec: # Ingress configuration ingress: enabled: true ingressClassName: \"nginx\" host: \"konflux-ui.example.com\" annotations: cert-manager.io/cluster-issuer: \"letsencrypt-prod\" nginx.ingress.kubernetes.io/ssl-redirect: \"true\" nginx.ingress.kubernetes.io/force-ssl-redirect: \"true\" nginx.ingress.kubernetes.io/proxy-body-size: \"10m\" tlsSecretName: \"konflux-ui-tls\" # Proxy deployment configuration proxy: replicas: 3 nginx: resources: requests: cpu: 100m memory: 256Mi limits: cpu: 500m memory: 512Mi env: - name: NGINX_WORKER_PROCESSES value: \"4\" - name: NGINX_WORKER_CONNECTIONS value: \"1024\" - 
name: NGINX_KEEPALIVE_TIMEOUT value: \"65\" oauth2Proxy: resources: requests: cpu: 50m memory: 128Mi limits: cpu: 200m memory: 256Mi env: - name: OAUTH2_PROXY_PROVIDER value: \"oidc\" - name: OAUTH2_PROXY_OIDC_ISSUER_URL value: \"https://dex.example.com/idp/\" - name: OAUTH2_PROXY_CLIENT_ID value: \"oauth2-proxy\" - name: OAUTH2_PROXY_CLIENT_SECRET valueFrom: secretKeyRef: name: oauth2-proxy-secret key: client-secret # Dex deployment configuration dex: replicas: 2 dex: resources: requests: cpu: 100m memory: 256Mi limits: cpu: 500m memory: 512Mi env: - name: DEX_LOG_LEVEL value: \"debug\" - name: DEX_STORAGE_TYPE value: \"kubernetes\" config: hostname: \"dex.example.com\" port: \"9443\" enablePasswordDB: true passwordConnector: \"local\" configureLoginWithOpenShift: true connectors: - type: \"github\" id: \"github\" name: \"GitHub\" config: clientID: \"$GITHUB_CLIENT_ID\" clientSecret: \"$GITHUB_CLIENT_SECRET\" redirectURI: \"https://dex.example.com/idp/callback\" orgs: - name: \"my-org\" teams: - \"developers\" - \"admins\" - name: \"another-org\" teams: - \"contributors\" - type: \"oidc\" id: \"google\" name: \"Google\" config: clientID: \"$GOOGLE_CLIENT_ID\" clientSecret: \"$GOOGLE_CLIENT_SECRET\" redirectURI: \"https://dex.example.com/idp/callback\" issuer: \"https://accounts.google.com\" groups: - \"admin@example.com\" - type: \"ldap\" id: \"ldap\" name: \"LDAP\" config: host: \"ldap.example.com:636\" bindDN: \"cn=admin,dc=example,dc=com\" bindPW: \"$LDAP_BIND_PASSWORD\" userSearch: baseDN: \"ou=Users,dc=example,dc=com\" filter: \"(objectClass=person)\" username: \"uid\" idAttr: \"uid\" emailAttr: \"mail\" nameAttr: \"cn\" groupSearch: baseDN: \"ou=Groups,dc=example,dc=com\" filter: \"(objectClass=groupOfNames)\" nameAttr: \"cn\" userMatchers: - userAttr: \"DN\" groupAttr: \"member\" staticPasswords: - email: \"user1@konflux.dev\" # bcrypt hash of the string \"password\": $(echo password | htpasswd -BinC 10 admin | cut -d: -f2) hash: REDACTED username: 
\"user1\" userID: \"7138d2fe-724e-4e86-af8a-db7c4b080e20\" - email: \"user2@konflux.dev\" # bcrypt hash of the string \"password\": $(echo password | htpasswd -BinC 10 admin | cut -d: -f2) hash: REDACTED username: \"user2\" userID: \"ea8e8ee1-2283-4e03-83d4-b00f8b821b64\" - email: \"admin@konflux.dev\" # bcrypt hash of the string \"admin\": $(echo admin | htpasswd -BinC 10 admin | cut -d: -f2) hash: REDACTED username: \"admin\" userID: \"admin-12345-67890-abcdef\" ","categories":"","description":"Example configurations for Konflux Operator Custom Resources","excerpt":"Example configurations for Konflux Operator Custom Resources","ref":"/konflux-ci/docs/examples/","tags":"","title":"Examples"}]