How do I deploy the AWS EFS CSI Driver Helm chart from https://kubernetes-sigs.github.io/aws-efs-csi-driver/ using Pulumi - kubernetes

I would like to be able to deploy the AWS EFS CSI Driver Helm chart hosted at the AWS EFS SIG repo using Pulumi, with the source from the AWS EFS CSI Driver GitHub repository. I'd like to avoid ending up with almost everything managed by Pulumi except this one part of my infrastructure.
Below is the TypeScript class I created to manage interacting with the k8s.helm.v3.Release class:
import * as k8s from '@pulumi/kubernetes';
import * as eks from '@pulumi/eks';

export default class AwsEfsCsiDriverHelmRepo extends k8s.helm.v3.Release {
  constructor(cluster: eks.Cluster) {
    super(`aws-efs-csi-driver`, {
      chart: `aws-efs-csi-driver`,
      version: `1.3.6`,
      repositoryOpts: {
        repo: `https://kubernetes-sigs.github.io/aws-efs-csi-driver/`,
      },
      namespace: `kube-system`,
    }, { provider: cluster.provider });
  }
}
I've tried several variations on the above code: chopping off the -driver in the name, removing aws-efs-csi-driver from the repo property, changing the version to latest.
When I do a pulumi up I get: failed to pull chart: chart "aws-efs-csi-driver" version "1.3.6" not found in https://kubernetes-sigs.github.io/aws-efs-csi-driver/ repository
$ helm version
version.BuildInfo{Version:"v3.7.0", GitCommit:"eeac83883cb4014fe60267ec6373570374ce770b", GitTreeState:"clean", GoVersion:"go1.16.8"}
$ pulumi version
v3.24.1

You're using the wrong version in your chart invocation. The version you're selecting is the application version, i.e. the release version of the underlying application. You need to set the chart version, which is defined in the chart's Chart.yaml (helm search repo aws-efs-csi-driver shows both, in its CHART VERSION and APP VERSION columns).
The following works:
const csiDrive = new kubernetes.helm.v3.Release("csi", {
  chart: `aws-efs-csi-driver`,
  version: `2.2.3`,
  repositoryOpts: {
    repo: `https://kubernetes-sigs.github.io/aws-efs-csi-driver/`,
  },
  namespace: `kube-system`,
});
If you want to use the existing code you have, try this:
import * as k8s from '@pulumi/kubernetes';
import * as eks from '@pulumi/eks';

export default class AwsEfsCsiDriverHelmRepo extends k8s.helm.v3.Release {
  constructor(cluster: eks.Cluster) {
    super(`aws-efs-csi-driver`, {
      chart: `aws-efs-csi-driver`,
      version: `2.2.3`,
      repositoryOpts: {
        repo: `https://kubernetes-sigs.github.io/aws-efs-csi-driver/`,
      },
      namespace: `kube-system`,
    }, { provider: cluster.provider });
  }
}
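For completeness, a hypothetical usage from the stack program (assuming the class lives in ./AwsEfsCsiDriverHelmRepo.ts; the file name is an assumption) might look like this:
import * as eks from '@pulumi/eks';
import AwsEfsCsiDriverHelmRepo from './AwsEfsCsiDriverHelmRepo';

// The release is installed into whichever cluster the passed-in provider points at.
const cluster = new eks.Cluster('my-cluster');
const efsCsiDriver = new AwsEfsCsiDriverHelmRepo(cluster);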

Related

Creating grafana dashboards using terraform/cdktf

I can create InfluxDB datasources and alerts for Grafana using cdktf.
The only thing missing is the actual dashboards.
So far I have been using grafonnet, which appears to be deprecated.
Is it possible to create dashboards and panels using cdktf yet? If so, how?
You can use the grafana_dashboard resource from the grafana provider. For this you have to add the provider if you haven't already, e.g. by running cdktf provider add grafana.
Your code could look like this:
import { Dashboard } from "./.gen/providers/grafana/lib/dashboard";
import { Fn, TerraformAsset } from "cdktf";
import * as path from "path";

// in your stack
new Dashboard(this, "metrics", {
  config: Fn.file(
    // Copies the file so that it can be used in the context of the
    // stack deployment
    new TerraformAsset(this, "metrics-file", {
      path: path.resolve(__dirname, "config.json"),
    }).path
  ),
});

Helm reads the wrong kubeVersion: treats a v1.23.0 cluster as v1.20.0 against a >=1.22.0-0 requirement

How do I deploy to Kubernetes via Pulumi using the ArgoCD Helm chart?
pulumi up diagnostics:
kubernetes:helm.sh/v3:Release (argocd):
error: failed to create chart from template: chart requires kubeVersion: >=1.22.0-0 which is incompatible with Kubernetes v1.20.0
The cluster version is v1.23.0 (verified on AWS), not 1.20.0.
ArgoCD install YAML used with crd2pulumi: https://raw.githubusercontent.com/argoproj/argo-cd/master/manifests/core-install.yaml
Source:
...
cluster = eks.Cluster("argo-example")  # version="1.23"

# Cluster provider
provider = k8s.Provider(
    "eks",
    kubeconfig=cluster.kubeconfig.apply(lambda k: json.dumps(k))
    # kubeconfig=cluster.kubeconfig
)

ns = k8s.core.v1.Namespace(
    'argocd',
    metadata={
        "name": "argocd",
    },
    opts=pulumi.ResourceOptions(
        provider=provider
    )
)

argo = k8s.helm.v3.Release(
    "argocd",
    args=k8s.helm.v3.ReleaseArgs(
        chart="argo-cd",
        namespace=ns.metadata.name,
        repository_opts=k8s.helm.v3.RepositoryOptsArgs(
            repo="https://argoproj.github.io/argo-helm"
        ),
        values={
            "server": {
                "service": {
                    "type": "LoadBalancer",
                }
            }
        },
    ),
    opts=pulumi.ResourceOptions(provider=provider, parent=ns),
)
Any ideas as to fixing this oddity between the version error and the actual cluster version?
I've tried:
Deleting everything and starting over.
Updating to the latest ArgoCD install yaml.
I could reproduce your issue, though I am not quite sure what causes the mismatch between versions. It would be better to open an issue in Pulumi's Kubernetes provider repository.
Looking at the history of https://github.com/argoproj/argo-helm/blame/main/charts/argo-cd/Chart.yaml, you can see that the kubeVersion requirement was added after 5.9.1, so pinning to that version successfully deploys the Helm chart. E.g.:
import * as k8s from "@pulumi/kubernetes";

const namespaceName = "argo";
const namespace = new k8s.core.v1.Namespace("namespace", {
  metadata: {
    name: namespaceName,
  },
});

const argo = new k8s.helm.v3.Release("argo", {
  repositoryOpts: {
    repo: "https://argoproj.github.io/argo-helm",
  },
  chart: "argo-cd",
  version: "5.9.1",
  namespace: namespace.metadata.name,
});
(Not recommended) Alternatively, you could clone the source code of the chart, comment out the kubeVersion requirement in Chart.yaml, and install the chart from your local path.
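If you do go the local-path route, here is a minimal sketch, assuming you cloned the chart to ./charts/argo-cd and removed the kubeVersion line from its Chart.yaml (Release accepts a filesystem path in place of a chart name):
import * as k8s from '@pulumi/kubernetes';

// Hypothetical local clone of the chart with the kubeVersion requirement removed.
// You may also need to vendor the chart's dependencies (helm dependency update) first.
const argoLocal = new k8s.helm.v3.Release('argo-local', {
  chart: './charts/argo-cd', // local path, so no repositoryOpts needed
  namespace: 'argo',
  values: {
    server: { service: { type: 'LoadBalancer' } },
  },
});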
Upgrade Helm. I had a similar issue where my Kubernetes cluster was 1.25 but Helm complained it was 1.20. I tried everything else; upgrading Helm worked.

How to automate deployment to ECS Fargate when new image is pushed to ECR repository

Firstly, this is specific to CDK - I know there are plenty of questions/answers around this topic out there, but none of them are CDK-specific.
Given that best practices dictate that a Fargate deployment shouldn't look for the 'latest' tag in an ECR repository, how could one set up a CDK pipeline when using ECR as a source?
In a multi-repository application where each service is in its own repository (and those repositories would have their own CDK CodeBuild deployments to set up building and pushing to ECR), how would the infrastructure CDK pipeline be aware of new images being pushed to an ECR repository and be able to deploy that new image to the ECS Fargate service?
Since a task definition has to specify an image tag (else it'll look for 'latest', which may not exist), this seems to be impossible.
As a concrete example, say I have the following 2 repositories:
CdkInfra
- One of these repositories would be created for each customer to set up the full environment for their application
SomeService
- Actual application code
- Only one of these repositories should exist; it is re-used by multiple CdkInfra projects
- A cdk directory defining the CodeBuild project, so that when a push to master is detected, the service is built and the image pushed to ECR
The expected workflow would be:
1. The SomeService repository is updated, so a new image is pushed to ECR
2. The CdkInfra pipeline detects that a tracked ECR repository has a new image
3. The CdkInfra pipeline updates the Fargate task definition to reference the new image's tag
4. The Fargate service pulls the new image and deploys it
I know there is currently a limitation in that CodeDeploy does not support ECS deployments because CloudFormation does not support them, but it seems that CodePipelineActions has an EcrSourceAction which may be able to achieve this; however, I've been unable to get it to work so far.
Is this possible at all, or am I stuck waiting until CloudFormation supports ECS CodeDeploy functionality?
You could store the name of the latest tag in an AWS Systems Manager (SSM) parameter and dynamically update it when you deploy new images to ECR.
Then you could use the AWS SDK to fetch the value of the parameter during your CDK deploy and pass that value to your Fargate deployment.
The following CDK stack written in Python uses the value of the YourSSMParameterName parameter (in my AWS account) as the name of an S3 bucket:
from aws_cdk import (
    core as cdk,
    aws_s3 as s3,
)
import boto3


class MyStack(cdk.Stack):
    def __init__(self, scope, construct_id, **kwargs):
        super().__init__(scope, construct_id, **kwargs)

        # Fetch the parameter value at synth time
        ssm = boto3.client('ssm')
        res = ssm.get_parameter(Name='YourSSMParameterName')
        name = res['Parameter']['Value']

        s3.Bucket(
            self, '...',
            bucket_name=name,
        )
I tested that and it worked beautifully.
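For the publishing side, here is a minimal sketch, assuming the AWS SDK for JavaScript v3 and the same hypothetical parameter name, that could run in the service's CI job right after the image is pushed:
import { SSMClient, PutParameterCommand } from '@aws-sdk/client-ssm';

// Write the tag of the image that was just pushed so the next CDK deploy
// picks it up via get_parameter, as in the stack above.
async function publishImageTag(tag: string): Promise<void> {
  const ssm = new SSMClient({});
  await ssm.send(new PutParameterCommand({
    Name: 'YourSSMParameterName', // must match the name read in the stack
    Value: tag,
    Type: 'String',
    Overwrite: true,
  }));
}

// e.g. after `docker push`, using the commit hash as the tag:
publishImageTag(process.env.CODEBUILD_RESOLVED_SOURCE_VERSION ?? 'latest')
  .catch((err) => { console.error(err); process.exit(1); });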
Alright, so after some hackery I've managed to do this.
Firstly, the service itself (in this case a Spring Boot project) gets a cdk directory in its root. This basically just sets up the CI part of the CI/CD pipeline:
const appName: string = this.node.tryGetContext('app-name');

const ecrRepo = new ecr.Repository(this, `${appName}Repository`, {
  repositoryName: appName,
  imageScanOnPush: true,
  removalPolicy: cdk.RemovalPolicy.DESTROY,
});

const bbSource = codebuild.Source.bitBucket({
  // BitBucket account
  owner: 'mycompany',
  // Name of the repository this project belongs to
  repo: 'reponame',
  // Enable webhook
  webhook: true,
  // Configure so webhook only fires when the master branch has an update to any code other than this CDK project (e.g. Spring source only)
  webhookFilters: [codebuild.FilterGroup.inEventOf(codebuild.EventAction.PUSH).andBranchIs('master').andFilePathIsNot('./cdk/*')],
});

const buildSpec = {
  version: '0.2',
  phases: {
    pre_build: {
      // Get the git commit hash that triggered this build
      commands: ['env', 'export TAG=${CODEBUILD_RESOLVED_SOURCE_VERSION}'],
    },
    build: {
      commands: [
        // Build Java project
        './mvnw clean install -DskipTests',
        // Log in to the ECR repository that contains the Corretto image
        'aws ecr get-login-password --region us-west-2 | docker login --username AWS --password-stdin 489478819445.dkr.ecr.us-west-2.amazonaws.com',
        // Build docker images and tag them with the commit hash as well as 'latest'
        'docker build -t $ECR_REPO_URI:$TAG -t $ECR_REPO_URI:latest .',
        // Log in to our own ECR repository to push
        '$(aws ecr get-login --no-include-email)',
        // Push docker images to the ECR repository defined above
        'docker push $ECR_REPO_URI:$TAG',
        'docker push $ECR_REPO_URI:latest',
      ],
    },
    post_build: {
      commands: [
        // Prepare the image definitions artifact file
        'printf \'[{"name":"servicename","imageUri":"%s"}]\' $ECR_REPO_URI:$TAG > imagedefinitions.json',
        'pwd; ls -al; cat imagedefinitions.json',
      ],
    },
  },
  // Define the image definitions artifact - required for deployments by other CDK projects
  artifacts: {
    files: ['imagedefinitions.json'],
  },
};

const buildProject = new codebuild.Project(this, `${appName}BuildProject`, {
  projectName: appName,
  source: bbSource,
  environment: {
    buildImage: codebuild.LinuxBuildImage.AMAZON_LINUX_2_3,
    privileged: true,
    environmentVariables: {
      // Required for tagging/pushing the image
      ECR_REPO_URI: { value: ecrRepo.repositoryUri },
    },
  },
  buildSpec: codebuild.BuildSpec.fromObject(buildSpec),
});

!!buildProject.role &&
  buildProject.role.addToPrincipalPolicy(
    new iam.PolicyStatement({
      effect: iam.Effect.ALLOW,
      actions: ['ecr:*'],
      resources: ['*'],
    }),
  );
Once this is set up, the CodeBuild project has to be built manually once so the ECR repo has a valid 'latest' image (otherwise the ECS service won't get created correctly).
Now in the separate infrastructure codebase you can create the ECS cluster and service as normal, getting the ECR repository from a lookup:
const repo = ecr.Repository.fromRepositoryName(this, 'SomeRepository', 'reponame'); // 'reponame' here has to match what you defined in bbSource previously

const cluster = new ecs.Cluster(this, `Cluster`, { vpc });

const service = new ecs_patterns.ApplicationLoadBalancedFargateService(this, 'Service', {
  cluster,
  serviceName: 'servicename',
  taskImageOptions: {
    image: ecs.ContainerImage.fromEcrRepository(repo, 'latest'),
    containerName: repo.repositoryName,
    containerPort: 8080,
  },
});
Finally create a deployment construct which listens to ECR events, manually converts the generated imageDetail.json file into a valid imagedefinitions.json file, then deploys to the existing service.
const sourceOutput = new cp.Artifact();

const ecrAction = new cpa.EcrSourceAction({
  actionName: 'ECR-action',
  output: sourceOutput,
  repository: repo, // this is the same repo from where the service was originally defined
});

const buildProject = new codebuild.Project(this, 'BuildProject', {
  environment: {
    buildImage: codebuild.LinuxBuildImage.AMAZON_LINUX_2_3,
    privileged: true,
  },
  buildSpec: codebuild.BuildSpec.fromObject({
    version: '0.2',
    phases: {
      build: {
        commands: [
          'cat imageDetail.json | jq "[. | {name: .RepositoryName, imageUri: .ImageURI}]" > imagedefinitions.json',
          'cat imagedefinitions.json',
        ],
      },
    },
    artifacts: {
      files: ['imagedefinitions.json'],
    },
  }),
});

const convertOutput = new cp.Artifact();
const convertAction = new cpa.CodeBuildAction({
  actionName: 'Convert-Action',
  input: sourceOutput,
  outputs: [convertOutput],
  project: buildProject,
});

const deployAction = new cpa.EcsDeployAction({
  actionName: 'Deploy-Action',
  service: service.service,
  input: convertOutput,
});

new cp.Pipeline(this, 'Pipeline', {
  stages: [
    { stageName: 'Source', actions: [ecrAction] },
    { stageName: 'Convert', actions: [convertAction] },
    { stageName: 'Deploy', actions: [deployAction] },
  ],
});
Obviously this isn't as clean as it otherwise could be once CloudFormation supports this fully, but it works pretty well.
My view on this situation is that deploying the latest image from ECR is very difficult if you are using CDK (which is really CloudFormation underneath).
What I ended up doing is putting the Docker image build and the CDK deploy together in one build script.
In my case it is a Java application: I build the war file and prepare the Dockerfile in a /docker directory:
FROM tomcat:8.0
COPY deploy.war /usr/local/tomcat/webapps/
Then I have the CDK script pick up the Dockerfile and build the image at deploy time:
const taskDefinition = new ecs.FargateTaskDefinition(this, 'taskDefinition', {
  cpu: 256,
  memoryLimitMiB: 1024,
});

const container = taskDefinition.addContainer('web', {
  image: ecs.ContainerImage.fromDockerImageAsset(
    new DockerImageAsset(this, "image", {
      directory: "docker",
    })
  ),
});
This will put the image into the CDK-managed ECR assets repository and deploy it.
Therefore, I don't rely on ECR for keeping different versions of my build. Each time I need to deploy or roll back, I just do it directly from the build script.

Pulumi - How do we patch a deployment created with a Helm chart when the values do not contain the property to be updated

I have code to deploy a Helm chart using Pulumi's Kubernetes provider.
I would like to patch the StatefulSet (change serviceAccountName) after deploying the chart; the chart doesn't come with an option to specify a service account for the StatefulSet.
Here's my code:
// install psmdb database chart
const psmdbChart = new k8s.helm.v3.Chart(psmdbChartName, {
  namespace: namespace.metadata.name,
  path: './percona-helm-charts/charts/psmdb-db',
  // chart: 'psmdb-db',
  // version: '1.7.0',
  // fetchOpts: {
  //   repo: 'https://percona.github.io/percona-helm-charts/'
  // },
  values: psmdbChartValues
}, {
  dependsOn: psmdbOperator
});
const set = psmdbChart.getResource('apps/v1/StatefulSet', `${psmdbChartName}-${psmdbChartValues.replsets[0].name}`);
I'm using the Percona Server for MongoDB Operator Helm charts. They use an Operator to manage the StatefulSet and also define CRDs.
I've tried Pulumi transformations, but in my case the chart doesn't contain a StatefulSet resource directly, only a CRD.
If it's not possible to update serviceAccountName on the StatefulSet using transformations, is there any other way I can override it?
Any help is appreciated.
Thanks
Pulumi has a powerful feature called transformations, which is exactly what you need here. A transformation is a callback that gets invoked by the Pulumi runtime and can be used to modify resource input properties before the resource is created.
I've not tested the code, but you should get the idea:
import * as pulumi from "@pulumi/pulumi";
import * as k8s from "@pulumi/kubernetes";

// install psmdb database chart
const psmdbChart = new k8s.helm.v3.Chart(psmdbChartName, {
  namespace: namespace.metadata.name,
  path: './percona-helm-charts/charts/psmdb-db',
  // chart: 'psmdb-db',
  // version: '1.7.0',
  // fetchOpts: {
  //   repo: 'https://percona.github.io/percona-helm-charts/'
  // },
  values: psmdbChartValues,
  transformations: [
    // Set serviceAccountName on the StatefulSet rendered by the chart
    (obj: any, opts: pulumi.CustomResourceOptions) => {
      if (obj.kind === "StatefulSet" && obj.metadata.name === `${psmdbChartName}-${psmdbChartValues.replsets[0].name}`) {
        obj.spec.template.spec.serviceAccountName = "customServiceAccount";
      }
    },
  ],
}, {
  dependsOn: psmdbOperator
});
It seems Pulumi doesn't have a straightforward way to patch an existing Kubernetes resource, though it is still possible in multiple steps.
From a GitHub comment:
1. Import the existing resource
2. Run pulumi up to import it
3. Make the desired changes to the imported resource
4. Run pulumi up to apply the changes
It seems they plan on supporting functionality similar to kubectl apply -f for patching resources.
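For what it's worth, newer releases of the Pulumi Kubernetes provider (3.20+) added Server-Side Apply patch resources that cover this case; below is a minimal sketch, assuming a hypothetical StatefulSet name and namespace and a provider version that ships these resources:
import * as k8s from '@pulumi/kubernetes';

// Patch only serviceAccountName on a StatefulSet that the operator owns.
// The name and namespace below are hypothetical; adjust them to your release.
const saPatch = new k8s.apps.v1.StatefulSetPatch('psmdb-sa-patch', {
  metadata: {
    name: 'psmdb-db-rs0',
    namespace: 'psmdb',
  },
  spec: {
    template: {
      spec: {
        serviceAccountName: 'customServiceAccount',
      },
    },
  },
});
Since the operator also manages that field, you may hit a field-manager conflict and need to force the patch (Pulumi documents a pulumi.com/patchForce annotation for that).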

Terraform Unable to find Helm Release charts

I'm running Kubernetes on GCP and making changes via Terraform v0.11.14.
When running terraform plan I get the error messages below:
Error: Error refreshing state: 2 errors occurred:
* module.cls-xxx-us-central1-a-dev.helm_release.cert-manager: 1 error occurred:
* module.cls-xxx-us-central1-a-dev.helm_release.cert-manager: helm_release.cert-manager: error installing: the server could not find the requested resource
* module.cls-xxx-us-central1-a-dev.helm_release.nginx: 1 error occurred:
* module.cls-xxx-us-central1-a-dev.helm_release.nginx: helm_release.nginx: error installing: the server could not find the requested resource
Here's a copy of my helm.tf
resource "helm_release" "nginx" {
depends_on = ["google_container_node_pool.tally-np"]
name = "ingress-nginx"
chart = "ingress-nginx/ingress-nginx"
namespace = "kube-system"
}
resource "helm_release" "cert-manager" {
depends_on = ["google_container_node_pool.tally-np"]
name = "cert-manager"
chart = "stable/cert-manager"
namespace = "kube-system"
set {
name = "ingressShim.defaultIssuerName"
value = "letsencrypt-production"
}
set {
name = "ingressShim.defaultIssuerKind"
value = "ClusterIssuer"
}
provisioner "local-exec" {
command = "gcloud container clusters get-credentials ${var.cluster_name} --zone ${google_container_cluster.cluster.zone} && kubectl create -f ${path.module}/letsencrypt-prod.yaml"
}
}
I've read that Helm deprecated most of the old chart repos, so I tried adding the repositories and installing the charts locally under the kube-system namespace, but so far the issue persists.
Here's the list of versions for Terraform and its providers:
Terraform v0.11.14
provider.google v2.17.0
provider.helm v0.10.2
provider.kubernetes v1.9.0
provider.random v2.2.1
As the community moved towards Helm v3, the maintainers deprecated the old Helm model in which there was a single mono-repo called stable; in the new model each product has its own repo. On November 13, 2020 the stable and incubator chart repositories reached the end of development and became archives.
The archived charts are now hosted at a new URL. To continue using them you will have to make some tweaks in your Helm workflow; in the Terraform resources above, that generally means pointing the repository argument of each helm_release at the chart's new repository URL (for example https://charts.helm.sh/stable for the archived stable charts) rather than relying on a locally configured repo alias.
Sample workaround:
helm repo add new-stable https://charts.helm.sh/stable
helm fetch new-stable/prometheus-operator