Pulumi - How do we patch a deployment created with a Helm chart when the values do not contain the property to be updated?

I have code to deploy a Helm chart using Pulumi's Kubernetes provider.
I would like to patch the StatefulSet (change serviceAccountName) after deploying the chart, but the chart doesn't come with an option to specify a service account for the StatefulSet.
Here's my code:
// install psmdb database chart
const psmdbChart = new k8s.helm.v3.Chart(psmdbChartName, {
    namespace: namespace.metadata.name,
    path: './percona-helm-charts/charts/psmdb-db',
    // chart: 'psmdb-db',
    // version: '1.7.0',
    // fetchOpts: {
    //     repo: 'https://percona.github.io/percona-helm-charts/'
    // },
    values: psmdbChartValues
}, {
    dependsOn: psmdbOperator
})

const set = psmdbChart.getResource('apps/v1/StatefulSet', `${psmdbChartName}-${psmdbChartValues.replsets[0].name}`);
I'm using the Percona Server for MongoDB Operator Helm charts. The chart uses an operator to manage the StatefulSet and also defines CRDs.
I've tried Pulumi transformations, but in my case the chart doesn't contain a StatefulSet resource, only a custom resource (the operator creates the StatefulSet).
If it's not possible to update serviceAccountName on the StatefulSet using transformations, is there any other way I can override it?
Any help is appreciated.
Thanks,

Pulumi has a powerful feature called transformations, which is exactly what you need here (example). A transformation is a callback that gets invoked by the Pulumi runtime and can be used to modify resource input properties before the resource is created.
I haven't tested the code, but you should get the idea:
import * as pulumi from "@pulumi/pulumi";
import * as k8s from "@pulumi/kubernetes";

// install psmdb database chart
const psmdbChart = new k8s.helm.v3.Chart(psmdbChartName, {
    namespace: namespace.metadata.name,
    path: './percona-helm-charts/charts/psmdb-db',
    // chart: 'psmdb-db',
    // version: '1.7.0',
    // fetchOpts: {
    //     repo: 'https://percona.github.io/percona-helm-charts/'
    // },
    values: psmdbChartValues,
    transformations: [
        // Set serviceAccountName on the chart's StatefulSet
        (obj: any, opts: pulumi.CustomResourceOptions) => {
            if (obj.kind === "StatefulSet" && obj.metadata.name === `${psmdbChartName}-${psmdbChartValues.replsets[0].name}`) {
                obj.spec.template.spec.serviceAccountName = "customServiceAccount";
            }
        },
    ],
}, {
    dependsOn: psmdbOperator
})

It seems Pulumi doesn't have a straightforward way to patch an existing Kubernetes resource, though it is still possible in multiple steps. From a GitHub comment:
1. Import the existing resource.
2. Run pulumi up to import it.
3. Make the desired changes to the imported resource.
4. Run pulumi up again to apply the changes.
It seems they plan on supporting functionality similar to kubectl apply -f for patching resources. A rough sketch of the import step is shown below.
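A minimal, untested sketch of that import flow, assuming the operator created a StatefulSet named psmdb-db-rs0 in a psmdb namespace; every name, label, and container detail below is a placeholder that must match your live object:

import * as k8s from "@pulumi/kubernetes";

// Hypothetical sketch of the import workflow: adopt the operator-created
// StatefulSet into Pulumi state with the `import` option, then edit it and
// run `pulumi up` again. All concrete values are placeholders.
const adopted = new k8s.apps.v1.StatefulSet("psmdb-rs0", {
    metadata: { name: "psmdb-db-rs0", namespace: "psmdb" },
    spec: {
        // On the first `pulumi up` these inputs must mirror the live object;
        // once the import succeeds, change serviceAccountName and apply again.
        serviceName: "psmdb-db-rs0",
        replicas: 3,
        selector: { matchLabels: { app: "psmdb-db" } },
        template: {
            metadata: { labels: { app: "psmdb-db" } },
            spec: {
                serviceAccountName: "default", // later: "customServiceAccount"
                containers: [{ name: "mongod", image: "percona/percona-server-mongodb" }],
            },
        },
    },
}, { import: "psmdb/psmdb-db-rs0" }); // import ID for namespaced resources: <namespace>/<name>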

Related

Creating grafana dashboards using terraform/cdktf

I can create InfluxDB datasources and alerts for Grafana using cdktf.
The only thing missing is the actual dashboards.
So far I have been using grafonnet, which appears to be deprecated.
Is it possible to create dashboards and panels using cdktf yet, and if so, how?
You can use the grafana_dashboard resource from the grafana provider. For this you have to add the provider if you haven't already, e.g. by running cdktf provider add grafana.
Your code could look like this:
import { Dashboard } from "./.gen/providers/grafana/lib/dashboard";
import { Fn, TerraformAsset } from "cdktf";
import * as path from "path";

// in your stack
new Dashboard(this, "metrics", {
    config: Fn.file(
        // Copies the file so that it can be used in the context of the
        // stack deployment
        new TerraformAsset(this, "metrics-file", {
            path: path.resolve(__dirname, "config.json"),
        }).path
    ),
});

Helm reads wrong Kubeversion: >=1.22.0-0 for v1.23.0 as v1.20.0

How do I deploy ArgoCD on Kubernetes via Pulumi using the ArgoCD Helm chart?
pulumi up diagnostics:
    kubernetes:helm.sh/v3:Release (argocd):
        error: failed to create chart from template: chart requires kubeVersion: >=1.22.0-0 which is incompatible with Kubernetes v1.20.0
The cluster version is v1.23.0 (verified on AWS), not 1.20.0.
ArgoCD install YAML used with crd2pulumi: https://raw.githubusercontent.com/argoproj/argo-cd/master/manifests/core-install.yaml
Source:
...
cluster = eks.Cluster("argo-example")  # version="1.23"

# Cluster provider
provider = k8s.Provider(
    "eks",
    kubeconfig=cluster.kubeconfig.apply(lambda k: json.dumps(k))
    # kubeconfig=cluster.kubeconfig
)

ns = k8s.core.v1.Namespace(
    'argocd',
    metadata={
        "name": "argocd",
    },
    opts=pulumi.ResourceOptions(
        provider=provider
    )
)

argo = k8s.helm.v3.Release(
    "argocd",
    args=k8s.helm.v3.ReleaseArgs(
        chart="argo-cd",
        namespace=ns.metadata.name,
        repository_opts=k8s.helm.v3.RepositoryOptsArgs(
            repo="https://argoproj.github.io/argo-helm"
        ),
        values={
            "server": {
                "service": {
                    "type": "LoadBalancer",
                }
            }
        },
    ),
    opts=pulumi.ResourceOptions(provider=provider, parent=ns),
)
Any ideas on fixing this discrepancy between the version error and the actual cluster version?
I've tried:
- Deleting everything and starting over.
- Updating to the latest ArgoCD install YAML.
I could reproduce your issue, though I am not quite sure what causes the mismatch between versions. It would be better to open an issue in Pulumi's Kubernetes repository.
Looking at the history of https://github.com/argoproj/argo-helm/blame/main/charts/argo-cd/Chart.yaml, you can see that the kubeVersion requirement was added after 5.9.1, so pinning that version deploys the Helm chart successfully. E.g.:
import * as k8s from "@pulumi/kubernetes";

const namespaceName = "argo";

const namespace = new k8s.core.v1.Namespace("namespace", {
    metadata: {
        name: namespaceName,
    }
});

const argo = new k8s.helm.v3.Release("argo", {
    repositoryOpts: {
        repo: "https://argoproj.github.io/argo-helm"
    },
    chart: "argo-cd",
    version: "5.9.1",
    namespace: namespace.metadata.name,
})
(Not Recommended) Alternatively, you could also clone the source code of the chart, comment out the kubeVersion requirement in Chart.yaml and install the chart from your local path.
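A rough, untested sketch of that local-path approach with the Release resource; the clone location ./argo-helm/charts/argo-cd is a placeholder, and you may need to run helm dependency update in the chart directory before installing:

import * as k8s from "@pulumi/kubernetes";

// Hypothetical: install the chart from a local clone after commenting out the
// kubeVersion constraint in its Chart.yaml. The path below is a placeholder.
const argoLocal = new k8s.helm.v3.Release("argo-local", {
    chart: "./argo-helm/charts/argo-cd", // local path instead of repositoryOpts
    namespace: "argocd",
    createNamespace: true,
    values: {
        server: { service: { type: "LoadBalancer" } },
    },
});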
Upgrade Helm. I had a similar issue where my Kubernetes cluster was 1.25 but Helm complained it was 1.20. I tried everything else; upgrading Helm worked.

How do I deploy the AWS EFS CSI Driver Helm chart from https://kubernetes-sigs.github.io/aws-efs-csi-driver/ using Pulumi?

I would like to be able to deploy the AWS EFS CSI Driver Helm chart hosted at the AWS EFS SIG repo using Pulumi, with source from the AWS EFS CSI Driver GitHub repository. I would like to avoid having almost everything managed with Pulumi except this one part of my infrastructure.
Below is the TypeScript class I created to manage interacting with the k8s.helm.v3.Release class:
import * as k8s from '@pulumi/kubernetes';
import * as eks from '@pulumi/eks';

export default class AwsEfsCsiDriverHelmRepo extends k8s.helm.v3.Release {
    constructor(cluster: eks.Cluster) {
        super(`aws-efs-csi-driver`, {
            chart: `aws-efs-csi-driver`,
            version: `1.3.6`,
            repositoryOpts: {
                repo: `https://kubernetes-sigs.github.io/aws-efs-csi-driver/`,
            },
            namespace: `kube-system`,
        }, { provider: cluster.provider });
    }
}
I've tried several variations on the above code: chopping off the -driver in the name, removing aws-efs-csi-driver from the repo property, and changing the version to latest.
When I do a pulumi up I get: failed to pull chart: chart "aws-efs-csi-driver" version "1.3.6" not found in https://kubernetes-sigs.github.io/aws-efs-csi-driver/ repository
$ helm version
version.BuildInfo{Version:"v3.7.0", GitCommit:"eeac83883cb4014fe60267ec6373570374ce770b", GitTreeState:"clean", GoVersion:"go1.16.8"}
$ pulumi version
v3.24.1
You're using the wrong version in your chart invocation.
The version you're selecting is the application version, i.e. the release version of the underlying application. You need to set the chart version (see here), which is defined here.
The following works:
import * as kubernetes from '@pulumi/kubernetes';

const csiDrive = new kubernetes.helm.v3.Release("csi", {
    chart: `aws-efs-csi-driver`,
    version: `2.2.3`,
    repositoryOpts: {
        repo: `https://kubernetes-sigs.github.io/aws-efs-csi-driver/`,
    },
    namespace: `kube-system`,
});
If you want to use the existing code you have, try this:
import * as k8s from '@pulumi/kubernetes';
import * as eks from '@pulumi/eks';

export default class AwsEfsCsiDriverHelmRepo extends k8s.helm.v3.Release {
    constructor(cluster: eks.Cluster) {
        super(`aws-efs-csi-driver`, {
            chart: `aws-efs-csi-driver`,
            version: `2.2.3`,
            repositoryOpts: {
                repo: `https://kubernetes-sigs.github.io/aws-efs-csi-driver/`,
            },
            namespace: `kube-system`,
        }, { provider: cluster.provider });
    }
}

Terraform Unable to find Helm Release charts

I'm running Kubernetes on GCP and making changes via Terraform v0.11.14.
When running terraform plan I'm getting the error messages below:
Error: Error refreshing state: 2 errors occurred:
* module.cls-xxx-us-central1-a-dev.helm_release.cert-manager: 1 error occurred:
* module.cls-xxx-us-central1-a-dev.helm_release.cert-manager: helm_release.cert-manager: error installing: the server could not find the requested resource
* module.cls-xxx-us-central1-a-dev.helm_release.nginx: 1 error occurred:
* module.cls-xxx-us-central1-a-dev.helm_release.nginx: helm_release.nginx: error installing: the server could not find the requested resource
Here's a copy of my helm.tf
resource "helm_release" "nginx" {
depends_on = ["google_container_node_pool.tally-np"]
name = "ingress-nginx"
chart = "ingress-nginx/ingress-nginx"
namespace = "kube-system"
}
resource "helm_release" "cert-manager" {
depends_on = ["google_container_node_pool.tally-np"]
name = "cert-manager"
chart = "stable/cert-manager"
namespace = "kube-system"
set {
name = "ingressShim.defaultIssuerName"
value = "letsencrypt-production"
}
set {
name = "ingressShim.defaultIssuerKind"
value = "ClusterIssuer"
}
provisioner "local-exec" {
command = "gcloud container clusters get-credentials ${var.cluster_name} --zone ${google_container_cluster.cluster.zone} && kubectl create -f ${path.module}/letsencrypt-prod.yaml"
}
}
I've read that Helm deprecated most of the old chart repos, so I tried adding the repositories and installing the charts locally under the kube-system namespace, but so far the issue persists.
Here's the list of versions for Terraform and its providers:
Terraform v0.11.14
provider.google v2.17.0
provider.helm v0.10.2
provider.kubernetes v1.9.0
provider.random v2.2.1
As the community moved towards Helm v3, the maintainers deprecated the old Helm model where there was a single mono-repo called stable; in the new model, each product has its own repo. On November 13, 2020, the stable and incubator chart repositories reached the end of development and became archives.
The archived charts are now hosted at a new URL. To continue using the archived charts, you will have to make some tweaks in your Helm workflow.
Sample workaround:
helm repo add new-stable https://charts.helm.sh/stable
helm fetch new-stable/prometheus-operator

Does Fabric8io K8s java client support patch() or rollingupdate() using YAML snippets?

I am trying to program the patching/rolling upgrade of Kubernetes apps by taking Deployment snippets as input. I use the patch() method to apply a snippet onto an existing Deployment as part of a rolling update, using fabric8io's Kubernetes client APIs (kubernetes-client version 4.10.1).
I'm also using some loadYaml helper methods from kubernetes-api 3.0.12.
Here is my sample snippet, the adminpatch.yaml file:
kind: Deployment
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
  template:
    spec:
      containers:
      - name: ${PATCH_IMAGE_NAME}
        image: ${PATCH_IMAGE_URL}
        imagePullPolicy: Always
I'm sending the above file content (with all the placeholders replaced) to the patchDeployment() method as a string.
Here is my call to the fabric8 patch() method:
public static String patchDeployment(String deploymentName, String namespace, String deploymentYaml) {
    try {
        Deployment deploymentSnippet = (Deployment) getK8sObject(deploymentYaml);
        if (deploymentSnippet instanceof Deployment) {
            logger.debug("Valid deployment object.");
            Deployment deployment = getK8sClient().apps().deployments().inNamespace(namespace).withName(deploymentName)
                    .rolling().patch(deploymentSnippet);
            System.out.println(deployment.toString());
            return getLastConfig(deployment.getMetadata(), deployment);
        }
    } catch (Exception Ex) {
        Ex.printStackTrace();
    }
    return "Failed";
}
It throws the below exception:
> io.fabric8.kubernetes.client.KubernetesClientException: Failure
> executing: PATCH at:
> https://10.44.4.126:6443/apis/apps/v1/namespaces/default/deployments/patch-demo.
> Message: Deployment.apps "patch-demo" is invalid: spec.selector:
> Invalid value:
> v1.LabelSelector{MatchLabels:map[string]string{"app":"nginx",
> "deployment":"3470574ffdbd6e88d426a77dd951ed45"},
> MatchExpressions:[]v1.LabelSelectorRequirement(nil)}: field is
> immutable. Received status: Status(apiVersion=v1, code=422,
> details=StatusDetails(causes=[StatusCause(field=spec.selector,
> message=Invalid value:
> v1.LabelSelector{MatchLabels:map[string]string{"app":"nginx",
> "deployment":"3470574ffdbd6e88d426a77dd951ed45"},
> MatchExpressions:[]v1.LabelSelectorRequirement(nil)}: field is
> immutable, reason=FieldValueInvalid, additionalProperties={})],
> group=apps, kind=Deployment, name=patch-demo, retryAfterSeconds=null,
> uid=null, additionalProperties={}), kind=Status,
> message=Deployment.apps "patch-demo" is invalid: spec.selector:
> Invalid value:
> v1.LabelSelector{MatchLabels:map[string]string{"app":"nginx",
> "deployment":"3470574ffdbd6e88d426a77dd951ed45"},
> MatchExpressions:[]v1.LabelSelectorRequirement(nil)}: field is
> immutable, metadata=ListMeta(_continue=null, remainingItemCount=null,
> resourceVersion=null, selfLink=null, additionalProperties={}),
> reason=Invalid, status=Failure, additionalProperties={}).
I also tried the original snippet (with labels and selectors) with kubectl patch deployment <DEPLOYMENT_NAME> -n <MY_NAMESPACE> --patch "$(cat adminpatch.yaml)", and it applies the same snippet fine.
I could not find much documentation on the fabric8io Kubernetes client's patch() Java API. Any help will be appreciated.
With the latest improvements in the Fabric8 Kubernetes Client, you can do this via both the patch() and rolling() APIs, apart from using createOrReplace(), which is mentioned in the older answer.
Patching a JSON/YAML string using the patch() call:
As of release v5.4.0, the Fabric8 Kubernetes Client supports patching via a raw string, which can be either YAML or JSON; see PatchTest.java. Here is an example using a raw JSON string to update the image of a Deployment:
try (KubernetesClient kubernetesClient = new DefaultKubernetesClient()) {
    kubernetesClient.apps().deployments()
        .inNamespace(deployment.getMetadata().getNamespace())
        .withName(deployment.getMetadata().getName())
        .patch("{\"spec\":{\"template\":{\"spec\":{\"containers\":[{\"name\":\"patch-demo-ctr-2\",\"image\":\"redis\"}]}}}}");
}
Rolling update to change a container image:
However, if you just want to do a rolling update, you might want to use the rolling() API instead. Here is how it would look for updating the image of an existing Deployment:
try (KubernetesClient client = new DefaultKubernetesClient()) {
    // ... Create Deployment

    // Update Deployment for a single-container Deployment
    client.apps().deployments()
        .inNamespace(namespace)
        .withName(deployment.getMetadata().getName())
        .rolling()
        .updateImage("gcr.io/google-samples/hello-app:2.0");
}
Rolling update to change multiple images in a multi-container Deployment:
If you want to update a Deployment with multiple containers, you would need to use the updateImage(Map<String, String>) method instead. Here is an example of its usage:
try (KubernetesClient client = new DefaultKubernetesClient()) {
    Map<String, String> containerToImageMap = new HashMap<>();
    containerToImageMap.put("nginx", "stable-perl");
    containerToImageMap.put("hello", "hello-world:linux");
    client.apps().deployments()
        .inNamespace(namespace)
        .withName("multi-container-deploy")
        .rolling()
        .updateImage(containerToImageMap);
}
Rolling restart of an existing Deployment:
If you need to restart an existing Deployment, you can just use the rolling().restart() DSL method like this:
try (KubernetesClient client = new DefaultKubernetesClient()) {
    client.apps().deployments()
        .inNamespace(namespace)
        .withName(deployment.getMetadata().getName())
        .rolling()
        .restart();
}
Here is the related bug in Fabric8io rolling API: https://github.com/fabric8io/kubernetes-client/issues/1868
As of now, one way I found to achieve patching with fabric8io APIs is to:
1. Get the running Deployment object.
2. Add/replace containers in it with new containers.
3. Use the createOrReplace() API to redeploy the Deployment object.
But your patch could understandably be more than just an update to the containers field, in which case processing each editable field becomes messy.
I went ahead with using the official K8s client's patchNamespacedDeployment() API to implement patching. https://github.com/kubernetes-client/java/blob/356109457499862a581a951a710cd808d0b9c622/examples/src/main/java/io/kubernetes/client/examples/PatchExample.java