Date: 2022-12-14
Superseded by ADR 32: Decoupling Deployment
In our old KCP architecture, we had a design for provisioning a new deployment target in support of new Environments. This design was to be implemented in GITOPSRVCE-228 by an environment controller that would create and manage sub-workspaces of the user’s main Konflux workspace, and that would provide a serviceaccount to Argo in order to deploy the user’s application to those sub-workspaces. Now, without KCP, we need a new design.
The Environment CR serves two purposes: it describes a deployment target (where, and with which credentials, an application gets deployed) and it models a step in the application's promotion flow (its display name, parent environment, and tags). The use cases to consider for Environments range from long-lived targets, such as a team's production cluster, to ephemeral environments created for integration tests.
We are going to split the two original purposes of the Environment CR into different APIs.
The new DeploymentTarget API is designed to emulate the Kubernetes storage management APIs: DeploymentTarget parallels PersistentVolume, DeploymentTargetClaim parallels PersistentVolumeClaim, and DeploymentTargetClass parallels StorageClass (see the persistent volumes and storage provisioner designs for reference).
A deployment target, usually a Kubernetes API endpoint. The credentials for connecting to the target are stored in a Secret referenced by the clusterCredentialsSecret field. A DT can be created manually by a user, or dynamically by a provisioner. The phase section shows the lifecycle phase of the DT.
Immutable object: no
Scope: namespace
```yaml
apiVersion: appstudio.redhat.com/v1alpha1
kind: DeploymentTarget
metadata:
  name: prod-dt
spec:
  deploymentTargetClassName: isolation-level-namespace
  kubernetesCredentials:
    defaultNamespace: team-a--prod-dtc
    apiURL: …
    clusterCredentialsSecret: team-a--prod-dtc--secret
  claimRef:
    name: prod-dtc
status:
  phase: Bound
```
Phases:
Pending: The DT is not yet available for binding.
Available: The DT is available and waiting for a claim to bind to it.
Bound: The DT is bound to a DTC.
Released: The DT was previously bound to a DTC that has since been deleted; its external resources were not freed.
Failed: The DT was released from its claim, but freeing its external resources failed.
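For reference, here is a minimal sketch of the credentials Secret a DT might point to. This ADR does not pin down the Secret's format, so the type and the stringData key (kubeconfig) are assumptions:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: team-a--prod-dtc--secret
type: Opaque     # the concrete Secret type is not defined by this ADR
stringData:
  kubeconfig: |  # assumed key name
    # a kubeconfig whose token grants access to defaultNamespace at apiURL
```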
Represents a request for a DeploymentTarget. The phase section shows the lifecycle phase of the DTC.
Immutable object: no
Scope: namespace
```yaml
apiVersion: appstudio.redhat.com/v1alpha1
kind: DeploymentTargetClaim
metadata:
  name: prod-dtc
spec:
  deploymentTargetClassName: isolation-level-namespace
status:
  phase: Bound
```
Phases:
Pending: The DTC waits for the binding controller or a user to bind it to a DT that satisfies it.
Bound: The binding controller bound the DTC to a DT that satisfies it.
Lost: The DTC lost its bound DT; the DT no longer exists because it was deleted.
Referenced from a DeploymentTarget and a DeploymentTargetClaim. Defines DeploymentTarget properties that should be abstracted from the controller/user that creates a DTC and wants a DT to be provisioned automatically for it.
In the example below you can see a class that represents a DT that grants the requestor access to a namespace. The requestor isn't aware of the actual location where the DT will be provisioned. The parameters section can be used to forward additional information to the provisioner. The reclaimPolicy field tells the provisioner what to do with the DT once its corresponding DTC is deleted; the possible values are Retain and Delete.
Immutable object: yes
Scope: cluster
```yaml
apiVersion: appstudio.redhat.com/v1alpha1
kind: DeploymentTargetClass
metadata:
  name: isolation-level-namespace
spec:
  provisioner: appstudio.redhat.com/devsandbox
  parameters: {}
  reclaimPolicy: Delete
```
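For contrast, a class whose targets should survive claim deletion would use the other reclaimPolicy value. The class name below is hypothetical:

```yaml
apiVersion: appstudio.redhat.com/v1alpha1
kind: DeploymentTargetClass
metadata:
  name: isolation-level-namespace-retain  # hypothetical name
spec:
  provisioner: appstudio.redhat.com/devsandbox
  parameters: {}
  reclaimPolicy: Retain  # external resources are kept when the DTC is deleted
```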
Environment objects refer to a DTC using the deploymentTargetClaim field. The environment controller waits for the DTC to reach the Bound phase; once it is bound, the controller follows the claim to the DT, reads the target's connection details from the kubernetesCredentials field, and configures the Argo/GitOps services to use them.
Immutable object: no
Scope: namespace
```yaml
apiVersion: appstudio.redhat.com/v1alpha1
kind: Environment
metadata:
  name: prod
spec:
  displayName: "Production for Team A"
  deploymentStrategy: AppStudioAutomated
  parentEnvironment: staging
  tags: [prod]
  configuration:
    target:
      deploymentTargetClaim:
        claimName: prod-dtc
```
Binds a DeploymentTargetClaim to a DeploymentTarget that satisfies its requirements.
It watches DTC and DT resources and tries to find a matching DT for each DTC. In addition, it is responsible for updating the Status sections of both the DT and the DTC, and it marks DTC objects that require dynamic provisioning.
A DT that was created dynamically for a specific DTC will always be attached to it. DT and DTC have one-to-one bindings.
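As an illustration, a DTC that the binding controller has marked for dynamic provisioning and then bound might look as follows; the annotation keys are taken from the annotations table below:

```yaml
apiVersion: appstudio.redhat.com/v1alpha1
kind: DeploymentTargetClaim
metadata:
  name: prod-dtc
  annotations:
    # set by the binding controller to request dynamic provisioning
    provisioner.appstudio.redhat.com/dt-provisioner: appstudio.redhat.com/devsandbox
    # set by the binding controller once binding is done
    dt.appstudio.redhat.com/bind-complete: "yes"
    dt.appstudio.redhat.com/bound-by-controller: "yes"
spec:
  deploymentTargetClassName: isolation-level-namespace
status:
  phase: Bound
```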
Watches for DTC objects. If a DTC was marked for dynamic provisioning with this provisioner's name, the provisioner reads the parameters from the DTCLS referenced by the DTC, provisions the target, and creates a DT object that references the DTC that started the process.
When a DTC is deleted, if it was bound to a DT created by the provisioner, the provisioner reclaims the DT and the actual cluster that was created for it, based on the reclaimPolicy configuration.
Watches for Environment objects. If the Environment references a DTC that is Bound, and that DTC references a DT that contains a spec.kubernetesCredentials.clusterCredentialsSecret field, then it ensures that there exists a GitOpsDeploymentManagedEnvironment resource that also references that secret.
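A minimal sketch of that resulting resource follows; the apiVersion and field names are assumptions based on the GitOps service's managed environment API at the time of writing:

```yaml
apiVersion: managed-gitops.redhat.com/v1alpha1
kind: GitOpsDeploymentManagedEnvironment
metadata:
  name: prod-managed-env  # illustrative name
spec:
  apiURL: …               # copied from the DT's kubernetesCredentials
  credentialsSecret: team-a--prod-dtc--secret
```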
| Controller or user / CRD | DT | DTC | DTCLS | Environment |
|---|---|---|---|---|
| Binder | watch/list/get/update | watch/list/get/update | | |
| Provisioner | create/delete | watch | get | |
| Environment | get | get | | watch |
| Integration | | create/delete | | create/delete |
| User | create/delete | create/delete | | create/delete/update |
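For illustration, the Binder row translates into RBAC roughly as follows; the lowercase plural resource names are assumptions following the API group used above:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: deployment-target-binder  # hypothetical name
rules:
  - apiGroups: ["appstudio.redhat.com"]
    # assumed to be the lowercase plurals of the CRDs above
    resources: ["deploymenttargets", "deploymenttargetclaims"]
    verbs: ["watch", "list", "get", "update"]
```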
| Const name in code | Key | Values | Applied by | Applied on | Purpose |
|---|---|---|---|---|---|
| AnnTargetProvisioner | provisioner.appstudio.redhat.com/dt-provisioner | Name of a provisioner | Binding controller | DTC | Indicates that a DT should be dynamically provisioned for the DTC by the provisioner whose name appears in the value |
| AnnBindCompleted | dt.appstudio.redhat.com/bind-complete | "yes" | Binding controller | DTC | Indicates that the binding controller completed the binding process |
| AnnBoundByController | dt.appstudio.redhat.com/bound-by-controller | "yes" | Binding controller | DTC | Indicates that the binding controller bound the DTC to a DT. In practice it means the controller set the DTC's target name (the analogue of PVC.spec.volumeName) to the value of DT.Name |
| AnnDynamicallyProvisioned | provisioner.appstudio.redhat.com/provisioned-by | Name of a provisioner | Provisioner | DT | Indicates that the provisioner whose name appears in the value provisioned the DT |
| Const name in code | Key | Applied by | Applied on | Purpose |
|---|---|---|---|---|
| finalizerDT | provisioner.appstudio.redhat.com/finalizer | Provisioner | DT | Delays the deletion of a DT so the provisioner can free up external resources |
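Putting the annotations and finalizers together, a dynamically provisioned DT would carry both the provisioned-by annotation and the provisioner finalizer:

```yaml
apiVersion: appstudio.redhat.com/v1alpha1
kind: DeploymentTarget
metadata:
  name: prod-dt
  annotations:
    provisioner.appstudio.redhat.com/provisioned-by: appstudio.redhat.com/devsandbox
  finalizers:
    # delays deletion until the provisioner frees external resources
    - provisioner.appstudio.redhat.com/finalizer
spec:
  deploymentTargetClassName: isolation-level-namespace
  claimRef:
    name: prod-dtc
```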
In the devsandbox flow, a user or controller creates a DTC that references the devsandbox DTCLS. The devsandbox provisioner responds to that request and generates a SpaceRequest, ultimately resulting in a new namespace for each environment. The SpaceRequest controller provisions a serviceaccount in the new namespace and places a kubeconfig, including the token for this serviceaccount, in a Secret in the origin namespace. The SpaceRequest in the origin namespace is updated to reference that Secret and is marked ready by the SpaceRequest controller. The devsandbox deployment target provisioner controller sees that, creates the devsandbox DT referencing the Secret, and marks the devsandbox DT as ready. The deployment target binder sees that and attaches the new DT to the DTC. The environment controller sees this and creates a GitOpsDeploymentManagedEnvironment that references the Secret, found by traversing from the Environment to the DTC to the DT and its Secret.
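For orientation, here is a hedged sketch of the SpaceRequest the devsandbox provisioner might create. The SpaceRequest API belongs to the Dev Sandbox toolchain project; the field names below are assumptions about its shape at the time, and all names are illustrative:

```yaml
apiVersion: toolchain.dev.openshift.com/v1alpha1
kind: SpaceRequest
metadata:
  name: prod-space          # illustrative
  namespace: team-a         # the origin namespace
spec:
  tierName: appstudio-env   # assumed tier for environment namespaces
status:
  namespaceAccess:
    - name: team-a-env                     # the newly provisioned namespace
      secretRef: team-a--prod-dtc--secret  # kubeconfig Secret in the origin namespace
```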
When a user creates a DT manually to bring their own cluster, the DT has the details and a reference to the secret used to connect to the user's cluster. In addition, it contains the name of the DTC it should be bound to. The user then refers to the DTC from the Environment that should use it.
Users may mutate existing DeploymentTargets and DeploymentTargetClaims in order to, for instance, request that their provisioned cluster is scaled up to include more resources. However, implementation of resource request changes is provided on a per-provisioner basis. Some may support it, and some may not. Most of our provisioners in the MVP will make no changes to a DeploymentTarget's external resources in the event of a resource request change to either the DeploymentTargetClaim or the DeploymentTarget.
In the rare case that a provisioner does support resizing external resources, the user should request resource changes on the DeploymentTargetClaim, which should then cause the provisioner to resize the external resources modeled by the DeploymentTarget. Lastly, the provisioner should update the resources in the spec of the DeploymentTarget to reflect the external change.
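As a sketch of that flow, assuming the resources fields introduced in Phase 2 below, the user raises a request on the DTC and a provisioner that supports resizing reconciles the DT's spec to match:

```yaml
apiVersion: appstudio.redhat.com/v1alpha1
kind: DeploymentTargetClaim
metadata:
  name: prod-dtc
spec:
  deploymentTargetClassName: isolation-level-cluster-small
  resources:
    requests:
      memory: 32Gi  # raised from 16Gi by the user
      cpu: 8m
```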
An addendum about what features can be left out until later iterations.
For our “MVP” target in early CY23, we’re going to ignore the details of this proposal. It sketches an architecture that we only want to roll out between MVP and Service Preview.
The result of Phase 1 is automation that binds DTs to DTCs, and that automatically provisions an environment on the Sandbox cluster (using the SpaceRequest API) and creates a DT for it.
Work on the components of Phase 1 (the sandbox provisioner, the binding controller, the gitops service controller changes and the API changes) can all proceed in parallel while work on the SpaceRequest API advances.
When done, Phase 1 enables adjusting the integration service controller to create and delete new Environments and DTCs for integration tests.
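For example, the integration service could request an ephemeral test environment with a pair of objects like the following (names are illustrative):

```yaml
apiVersion: appstudio.redhat.com/v1alpha1
kind: Environment
metadata:
  name: integration-test-run-123  # illustrative generated name
spec:
  displayName: "Ephemeral integration test environment"
  deploymentStrategy: AppStudioAutomated
  configuration:
    target:
      deploymentTargetClaim:
        claimName: integration-test-run-123-dtc
---
apiVersion: appstudio.redhat.com/v1alpha1
kind: DeploymentTargetClaim
metadata:
  name: integration-test-run-123-dtc
spec:
  deploymentTargetClassName: isolation-level-namespace
```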
The result of Phase 2 is the ability to specify parameters that are needed from the deployment target, such as memory, CPU, CPU architecture, and the number of nodes.
```yaml
apiVersion: appstudio.redhat.com/v1alpha1
kind: DeploymentTarget
metadata:
  name: prod-dt
spec:
  deploymentTargetClassName: isolation-level-namespace
  resources:
    requests:
      memory: 16Gi
      cpu: 8m
    limits:
      memory: 16Gi
      cpu: 8m
    arch: x86-64
  kubernetesCredentials:
    defaultNamespace: team-a--prod-dtc
    apiURL: …
    clusterCredentialsSecret: team-a--prod-dtc--secret
  claimRef:
    name: prod-dtc
```
```yaml
apiVersion: appstudio.redhat.com/v1alpha1
kind: DeploymentTargetClaim
metadata:
  name: prod-dtc
spec:
  deploymentTargetClassName: isolation-level-namespace
  resources:
    requests:
      memory: 16Gi
      cpu: 8m
    limits:
      memory: 16Gi
      cpu: 8m
    arch: x86-64
status:
  phase: Bound
```
Hypershift cluster with 3 nodes example:
```yaml
apiVersion: appstudio.redhat.com/v1alpha1
kind: DeploymentTargetClass
metadata:
  name: isolation-level-cluster-small
spec:
  provisioner: appstudio.redhat.com/hypershift
  parameters:
    numOfNodes: 3
  reclaimPolicy: Delete
  allowTargetResourceExpansion: false
```
The result of Phase 3 is the automatic provisioning of Hypershift clusters using Konflux's credentials. We call this provided compute (compute that we provide, rather than the user), and it is included as part of the offering. This compute can be used both for long-lived clusters and for ephemeral clusters used by the integration service.
For long-lived clusters, the maintenance model is yet to be determined.
Hypershift cluster with 3 nodes example:
```yaml
apiVersion: appstudio.redhat.com/v1alpha1
kind: DeploymentTargetClass
metadata:
  name: isolation-level-cluster-small
spec:
  provisioner: appstudio.redhat.com/hypershift
  parameters:
    numOfNodes: 3
  reclaimPolicy: Delete
  allowTargetResourceExpansion: false
```
Hypershift cluster with 6 nodes example:
```yaml
apiVersion: appstudio.redhat.com/v1alpha1
kind: DeploymentTargetClass
metadata:
  name: isolation-level-cluster-large
spec:
  provisioner: appstudio.redhat.com/hypershift
  parameters:
    numOfNodes: 6
  reclaimPolicy: Delete
  allowTargetResourceExpansion: false
```
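The classes above set allowTargetResourceExpansion: false. Presumably, a provisioner that supports the resizing flow described earlier would be exposed through a class that enables it; the class name below is hypothetical:

```yaml
apiVersion: appstudio.redhat.com/v1alpha1
kind: DeploymentTargetClass
metadata:
  name: isolation-level-cluster-small-expandable  # hypothetical name
spec:
  provisioner: appstudio.redhat.com/hypershift
  parameters:
    numOfNodes: 3
  reclaimPolicy: Delete
  allowTargetResourceExpansion: true  # the provisioner may resize external resources
```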