I have a StatefulSet created using the Terraform Helm provider. I need to update the value of an attribute (serviceName) in the StatefulSet, but I keep getting the following error:
Error: failed to replace object: StatefulSet.apps "value" is invalid: spec: Forbidden:
updates to statefulset spec for fields other than 'replicas', 'template', and
'updateStrategy' are forbidden
The error is pretty descriptive and I understand that the serviceName property can't be changed, but then how do I update it? I am totally fine with having downtime and with letting Helm delete all the old pods and create new ones.
I have tried setting the force_update and recreate_pods properties to true on my helm_release with no luck. Manually deleting the old Helm release is not an option for me.
I maintain the Kustomization provider and, unlike the Helm integration in Terraform, it tracks each individual Kubernetes resource in the Terraform state. Therefore, it will show changes to the actual Kubernetes resources in the plan. And, most importantly for your issue here, it will also generate destroy-and-recreate plans for cases where you have to change immutable fields.
It's a bit of a migration effort, but you can make it easier on yourself by using the helm template command to write the rendered YAML to disk and then pointing the Kustomize provider at that YAML.
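A minimal sketch of that step, assuming a release named example and a local chart directory (adjust the names, values and output path to match your setup and the module below):
helm template example ./charts/example \
  --namespace example-namespace \
  > ./path/to/helm/template/output.yaml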
As part of Kubestack, the Terraform framework for AKS, EKS and GKE that I maintain, I also provide a convenience module.
You could use it like this, to have it apply the output of helm template:
module "example_stateful_set" {
source = "kbst.xyz/catalog/custom-manifests/kustomization"
version = "0.1.0"
configuration_base_key = "default"
configuration = {
default = {
resources = [
"${path.root}/path/to/helm/template/output.yaml"
]
}
}
}
Finally, you will have to import the existing Kubernetes resources into the Terraform state, so that the provider can start managing them.
Is it possible to have a Helm template that gets applied like a kubectl patch?
This would mean that the values provided are merged into the existing resource, rather than overriding the complete resource.
As an example, if there was a resource living in the cluster like:
foo:
- bar
huu:
- har
I'd like to update only part of it, e.g. patching foo: [bar] to foo: [pear] when applying the chart, without any knowledge of the rest of the resource.
Also, if this is an antipattern, I'd be very thankful for any hints on how to achieve this without manually running kubectl patch.
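For reference, the manual step I'd like to avoid looks roughly like this (the resource kind and name are hypothetical, just to illustrate the merge semantics):
# hypothetical resource kind and name; merges foo without touching huu
kubectl patch examplekind example --type merge -p '{"foo":["pear"]}'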
Helm needs to know that the resource is managed by Helm. So you have to adopt the resource by adding the managed-by label and the release annotations to it:
labels:
  app.kubernetes.io/managed-by: Helm
annotations:
  meta.helm.sh/release-name: your-release
  meta.helm.sh/release-namespace: your-namespace
There's also a Helm plugin to adopt existing resources with helm: https://github.com/HamzaZo/helm-adopt
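A quick sketch of applying those to an existing object, assuming a Deployment named my-app in namespace my-namespace (adjust kind, names and release to your case):
kubectl -n my-namespace label deployment my-app app.kubernetes.io/managed-by=Helm
kubectl -n my-namespace annotate deployment my-app \
  meta.helm.sh/release-name=your-release \
  meta.helm.sh/release-namespace=my-namespace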
I'm evaluating Crossplane as our go-to tool to deploy our clients' different solutions and have struggled with one issue:
We want to install Crossplane into one cluster on GCP (which we create manually) and use that Crossplane installation to provision new clusters on which we can install Helm charts and deploy as usual.
The main problem so far is that we haven't figured out how to tell Crossplane to install the Helm charts into clusters other than its own.
This is what we have tried so far:
The ProviderConfig from the example:
apiVersion: helm.crossplane.io/v1beta1
kind: ProviderConfig
metadata:
  name: helm-provider
spec:
  credentials:
    source: InjectedIdentity
...which works but installs everything into the same cluster as crossplane.
and the other example:
apiVersion: helm.crossplane.io/v1beta1
kind: ProviderConfig
metadata:
  name: default
spec:
  credentials:
    source: Secret
    secretRef:
      name: cluster-credentials
      namespace: crossplane-system
      key: kubeconfig
...which required a lot of Makefile scripting to more easily generate a kubeconfig for the new cluster, and even with that kubeconfig it still gives a lot of errors (it does begin to create something in the new cluster, but it doesn't work all the way; we get errors like "PodUnschedulable Cannot schedule pods: gvisor").
I have only tried Crossplane for a couple of days, so I'm aware that I might be approaching this from a completely wrong angle, but I do like the promise of Crossplane and its approach compared to Terraform and the like.
So the question is: am I thinking about this completely wrong, or am I missing something obvious?
The second attempt with the kubeconfig feels quite complicated right now (many steps that have to happen in the correct order).
Thanks
As you've noticed, ProviderConfig with InjectedIdentity is for the case where provider-helm installs the helm release into the same cluster.
To deploy to other clusters, provider-helm needs a kubeconfig for the remote cluster, provided as a Kubernetes Secret and referenced from the ProviderConfig. So, as long as you've provided a proper kubeconfig for an external cluster that is accessible from your Crossplane cluster (a.k.a. the control plane), provider-helm should be able to deploy the release to the remote cluster.
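For example, such a Secret could be created from a kubeconfig file like this (the Secret name and key match the ProviderConfig above; the file path is a placeholder):
kubectl -n crossplane-system create secret generic cluster-credentials \
  --from-file=kubeconfig=./new-cluster-kubeconfig.yaml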
So, it looks like you're on the right track regarding configuring provider-helm, and since you observed something getting deployed to the external cluster, you provided a valid kubeconfig, and provider-helm could access and authenticate to the cluster.
The last error you're getting sounds like some incompatibility between your cluster and the release, e.g. the external cluster only allows pods running with gVisor and the application that you want to install with provider-helm does not have the corresponding labels.
As a troubleshooting step, you might try installing that Helm chart with exactly the same configuration to the external cluster via the Helm CLI, using the same kubeconfig you built.
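Something along these lines, with placeholder chart and release names:
helm install my-release ./my-chart \
  --namespace my-namespace --create-namespace \
  --kubeconfig ./new-cluster-kubeconfig.yaml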
Regarding the inconvenience of building the kubeconfig that you mentioned: provider-helm needs a way to access that external Kubernetes cluster, and a kubeconfig is the most common way to provide that access. However, if you see another alternative that makes things easier for some common use cases, it could be implemented, and it would be great if you could create a feature request in the repo for it.
Finally, I am wondering how you're creating those external clusters. If it makes sense to create them with Crossplane as well, e.g. GKE clusters with provider-gcp, then you can compose a Helm ProviderConfig together with the GKE Cluster resource, so that the appropriate Secret and ProviderConfig are created whenever you create a new cluster. You can check this as an example: https://github.com/crossplane-contrib/provider-helm/blob/master/examples/in-composition/composition.yaml#L147
Is it possible to run multiple IngressControllers in the same Namespace with the same IngressClass?
I have multiple IngressControllers with different LoadBalancer IP addresses and would like to continue with this setup.
I upgraded the first IngressController to the latest version.
Updating the second/third/.. IngressController fails because of:
rendered manifests contain a resource that already exists. Unable to continue with update: IngressClass "nginx" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error: key "meta.helm.sh/release-name" must equal "nginx-ingress-lb-02": current value is "nginx-ingress-lb-01"
Any idea how to fix this?
The issue you mention here is mainly with Helm, which prevents you from overwriting a resource - your IngressClass - that belongs to another Helm deployment.
One way to work around this may be to use Helm's --dry-run option. Once you have the list of objects written into a file, remove the IngressClass, then apply that file.
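A rough sketch of that workflow, using helm template (which prints only the rendered manifests) with placeholder release and chart names, assuming the chart repo is already added:
helm template nginx-ingress-lb-02 ingress-nginx/ingress-nginx > manifests.yaml
# remove the IngressClass object from manifests.yaml, then:
kubectl apply -f manifests.yaml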
Another way may be to patch the chart deploying your controller. As a contributor to the Traefik helm chart, I know that we would install IngressClasses named after the Traefik deployment we operate. The chart you're using for Nginx apparently does not implement support for that scenario, which doesn't mean it shouldn't work.
Now, answering your first question - is it possible to run multiple IngressControllers in the same Namespace with the same IngressClass: yes.
You may have several Ingress Controllers, one watching for Ingresses in namespace A, another in namespace B, both Ingresses referencing the same class. Deploying those ingresses into the same namespace is possible - although implementing NetworkPolicies and isolating your controllers into their own namespaces would help in distinguishing who's who.
An option that works for me, when deploying multiple ingress controllers with Helm, is setting controller.ingressClassResource.enabled: false in every Helm deployment except the first.
The comments in the default values.yaml aren't entirely clear on this, but after studying the chart I found that controller.ingressClassResource.enabled is only evaluated to determine whether or not to create the IngressClass, not whether or not to attach the controller.ingressClassResource.controllerValue to the controller. (This is true at least for helm-chart-4.0.13).
So, for the first Helm deployment, if you don't override any of the default controller.ingressClassResource settings, the following values will be used to create the IngressClass and attach the controllerValue to the controller:
controller:
  ingressClassResource:
    name: nginx
    enabled: true
    default: false
    controllerValue: "k8s.io/ingress-nginx"
For all other controllers that you want to run with the same IngressClass, simply set controller.ingressClassResource.enabled: false, to prevent Helm from trying (and failing) to create the IngressClass again.
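In values form, the override for the second and subsequent deployments is just this (a sketch; keep the rest of your values unchanged):
controller:
  ingressClassResource:
    enabled: false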
Is there some tool available that could tell me whether a K8s YAML configuration (to-be-supplied to kubectl apply) is valid for the target Kubernetes version without requiring a connection to a Kubernetes cluster?
One concrete use-case here would be to detect incompatibilities before actual deployment to a cluster, just because some already-deprecated label has been finally dropped in a newer Kubernetes version, e.g. as has happened for Helm and the switch to Kubernetes 1.16 (see Helm init fails on Kubernetes 1.16.0):
Dropped:
apiVersion: extensions/v1beta1
New:
apiVersion: apps/v1
I want to check these kinds of incompatibilities within a CI system, so that I can reject a configuration before even attempting to deploy it.
Just run the command below to validate the syntax:
kubectl create -f <yaml-file> --dry-run
In fact, the dry-run option validates the YAML syntax and the object schema (on newer kubectl versions, use --dry-run=client). You can capture the output in a variable and, if there is no error, rerun the command without the dry-run flag.
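A minimal sketch of that pattern for a CI step (the manifest path is a placeholder; note that kubectl still needs to reach a cluster for full schema validation):
if output=$(kubectl create -f manifest.yaml --dry-run=client 2>&1); then
  kubectl create -f manifest.yaml
else
  echo "validation failed: ${output}" >&2
  exit 1
fi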
You could use kubeval
https://kubeval.instrumenta.dev/
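For example, to validate against the schemas of a specific Kubernetes version in CI (the file name is a placeholder):
kubeval --kubernetes-version 1.16.0 --strict deployment.yaml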
I don't think kubectl supports client-side-only validation yet (as of 02/2022).
I'm using Terraform to provision an EKS cluster (mostly following the example here). At the end of the tutorial, there's a method of outputting the configmap through the terraform output command, and then applying it to the cluster via kubectl apply -f <file>. I'm attempting to wrap this kubectl command into the Terraform file using the kubernetes_config_map resource, however when running Terraform for the first time, I receive the following error:
Error: Error applying plan:
1 error(s) occurred:
* kubernetes_config_map.config_map_aws_auth: 1 error(s) occurred:
* kubernetes_config_map.config_map_aws_auth: the server could not find the requested resource (post configmaps)
The strange thing is, every subsequent terraform apply works and applies the ConfigMap to the EKS cluster. This leads me to believe it is perhaps a timing issue? I tried to perform a bunch of actions in between provisioning the cluster and applying the ConfigMap, but that didn't work. I also put an explicit depends_on argument in place to ensure that the cluster has been fully provisioned before attempting to apply the ConfigMap.
provider "kubernetes" {
config_path = "kube_config.yaml"
}
locals {
map_roles = <<ROLES
- rolearn: ${aws_iam_role.eks_worker_iam_role.arn}
username: system:node:{{EC2PrivateDNSName}}
groups:
- system:bootstrappers
- system:nodes
ROLES
}
resource "kubernetes_config_map" "config_map_aws_auth" {
metadata {
name = "aws-auth"
namespace = "kube-system"
}
data {
mapRoles = "${local.map_roles}"
}
depends_on = ["aws_eks_cluster.eks_cluster"]
}
I expect this to run correctly the first time; however, it only succeeds after applying the same file with no changes a second time.
I attempted to get more information by enabling the TRACE debug flag for Terraform, but the only output I got was the exact same error as above.
Well, I don't know if this is still current, but I was dealing with the same trouble and found this:
https://github.com/terraform-aws-modules/terraform-aws-eks/issues/699#issuecomment-601136543
So, in other words, I changed the cluster name in the aws_eks_cluster_auth block to a static name, and it worked. Well, perhaps this is a bug in Terraform.
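A sketch of what that change looks like, assuming the kubernetes provider is configured with token auth from aws_eks_cluster_auth (resource names follow the question; the hard-coded cluster name is the part that changes):
data "aws_eks_cluster_auth" "this" {
  # static cluster name instead of a reference such as aws_eks_cluster.eks_cluster.name
  name = "my-eks-cluster"
}

provider "kubernetes" {
  host                   = aws_eks_cluster.eks_cluster.endpoint
  cluster_ca_certificate = base64decode(aws_eks_cluster.eks_cluster.certificate_authority[0].data)
  token                  = data.aws_eks_cluster_auth.this.token
}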
This seems like a timing issue while bootstrapping your cluster: your kube-apiserver initially doesn't think there's a configmaps resource.
It's likely that the Role and RoleBinding it uses to create the ConfigMap have not been fully configured in the cluster to allow creating a ConfigMap (possibly within the EKS infrastructure), which uses the iam-authenticator and the following policies:
resource "aws_iam_role_policy_attachment" "demo-cluster-AmazonEKSClusterPolicy" {
policy_arn = "arn:aws:iam::aws:policy/AmazonEKSClusterPolicy"
role = "${aws_iam_role.demo-cluster.name}"
}
resource "aws_iam_role_policy_attachment" "demo-cluster-AmazonEKSServicePolicy" {
policy_arn = "arn:aws:iam::aws:policy/AmazonEKSServicePolicy"
role = "${aws_iam_role.demo-cluster.name}"
}
The depends_on Terraform clause will not do much, since it seems like the timing issue is happening within the EKS service.
I suggest you try the terraform-aws-eks module, which uses the same resources described in the docs. You can also browse through the code if you'd like to figure out how they solved the problem you are seeing.
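A rough sketch of pulling that module in (input names change between module versions and the VPC references are placeholders, so check the module documentation for the version you use):
module "eks" {
  source       = "terraform-aws-modules/eks/aws"
  cluster_name = "my-eks-cluster"
  # placeholders: supply your own VPC and subnet IDs
  vpc_id       = module.vpc.vpc_id
  subnets      = module.vpc.private_subnets
}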