Does the Fabric8io K8s Java client support patch() or rolling update using YAML snippets?

I am trying to program the patching/rolling upgrade of k8s apps by taking deployment snippets as input. I use the patch() method to apply a snippet onto an existing deployment as part of a rolling update, using fabric8io's k8s client APIs (Fabric8io kubernetes-client version 4.10.1).
I'm also using some loadYaml helper methods from kubernetes-api 3.0.12.
Here is my sample snippet - adminpatch.yaml file:
kind: Deployment
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
  template:
    spec:
      containers:
      - name: ${PATCH_IMAGE_NAME}
        image: ${PATCH_IMAGE_URL}
        imagePullPolicy: Always
I'm sending the above file content (with all the placeholders replaced) to the patchDeployment() method as a string.
Here is my call to fabric8 patch() method:
public static String patchDeployment(String deploymentName, String namespace, String deploymentYaml) {
    try {
        Deployment deploymentSnippet = (Deployment) getK8sObject(deploymentYaml);
        if (deploymentSnippet instanceof Deployment) {
            logger.debug("Valid deployment object.");
            Deployment deployment = getK8sClient().apps().deployments().inNamespace(namespace).withName(deploymentName)
                    .rolling().patch(deploymentSnippet);
            System.out.println(deployment.toString());
            return getLastConfig(deployment.getMetadata(), deployment);
        }
    } catch (Exception ex) {
        ex.printStackTrace();
    }
    return "Failed";
}
It throws the below exception:
> io.fabric8.kubernetes.client.KubernetesClientException: Failure
> executing: PATCH at:
> https://10.44.4.126:6443/apis/apps/v1/namespaces/default/deployments/patch-demo.
> Message: Deployment.apps "patch-demo" is invalid: spec.selector:
> Invalid value:
> v1.LabelSelector{MatchLabels:map[string]string{"app":"nginx",
> "deployment":"3470574ffdbd6e88d426a77dd951ed45"},
> MatchExpressions:[]v1.LabelSelectorRequirement(nil)}: field is
> immutable. Received status: Status(apiVersion=v1, code=422,
> details=StatusDetails(causes=[StatusCause(field=spec.selector,
> message=Invalid value:
> v1.LabelSelector{MatchLabels:map[string]string{"app":"nginx",
> "deployment":"3470574ffdbd6e88d426a77dd951ed45"},
> MatchExpressions:[]v1.LabelSelectorRequirement(nil)}: field is
> immutable, reason=FieldValueInvalid, additionalProperties={})],
> group=apps, kind=Deployment, name=patch-demo, retryAfterSeconds=null,
> uid=null, additionalProperties={}), kind=Status,
> message=Deployment.apps "patch-demo" is invalid: spec.selector:
> Invalid value:
> v1.LabelSelector{MatchLabels:map[string]string{"app":"nginx",
> "deployment":"3470574ffdbd6e88d426a77dd951ed45"},
> MatchExpressions:[]v1.LabelSelectorRequirement(nil)}: field is
> immutable, metadata=ListMeta(_continue=null, remainingItemCount=null,
> resourceVersion=null, selfLink=null, additionalProperties={}),
> reason=Invalid, status=Failure, additionalProperties={}).
I also tried the original snippet (with labels and selectors) with kubectl patch deployment <DEPLOYMENT_NAME> -n <MY_NAMESPACE> --patch "$(cat adminpatch.yaml)", and this applies the same snippet fine.
I could not find much documentation on the fabric8io k8s client's patch() Java API. Any help will be appreciated.

With the latest improvements in the Fabric8 Kubernetes Client, you can do this via both the patch() and rolling() APIs, apart from using createOrReplace(), which is mentioned in the older answer.
Patching a JSON/YAML string using the patch() call:
As of the latest release, v5.4.0, the Fabric8 Kubernetes Client does support patching via a raw string. It can be either YAML or JSON; see PatchTest.java. Here is an example using a raw JSON string to update the image of a Deployment:
try (KubernetesClient kubernetesClient = new DefaultKubernetesClient()) {
    kubernetesClient.apps().deployments()
            .inNamespace(deployment.getMetadata().getNamespace())
            .withName(deployment.getMetadata().getName())
            .patch("{\"spec\":{\"template\":{\"spec\":{\"containers\":[{\"name\":\"patch-demo-ctr-2\",\"image\":\"redis\"}]}}}}");
}
Rolling update to change a container image:
However, if you just want to do a rolling update, you might want to use the rolling() API instead. Here is how it would look for updating the image of an existing Deployment:
try (KubernetesClient client = new DefaultKubernetesClient()) {
    // ... Create Deployment

    // Update Deployment for a single container Deployment
    client.apps().deployments()
            .inNamespace(namespace)
            .withName(deployment.getMetadata().getName())
            .rolling()
            .updateImage("gcr.io/google-samples/hello-app:2.0");
}
Rolling update to change multiple images in a multi-container Deployment:
If you want to update a Deployment with multiple containers, you need to use the updateImage(Map<String, String>) method instead. Here is an example of its usage:
try (KubernetesClient client = new DefaultKubernetesClient()) {
    Map<String, String> containerToImageMap = new HashMap<>();
    containerToImageMap.put("nginx", "stable-perl");
    containerToImageMap.put("hello", "hello-world:linux");
    client.apps().deployments()
            .inNamespace(namespace)
            .withName("multi-container-deploy")
            .rolling()
            .updateImage(containerToImageMap);
}
Rolling update to restart an existing Deployment:
If you need to restart your existing Deployment, you can just use the rolling().restart() DSL method, like this:
try (KubernetesClient client = new DefaultKubernetesClient()) {
    client.apps().deployments()
            .inNamespace(namespace)
            .withName(deployment.getMetadata().getName())
            .rolling()
            .restart();
}

Here is the related bug in Fabric8io rolling API: https://github.com/fabric8io/kubernetes-client/issues/1868
As of now, one way I found to achieve patching with fabric8io APIs is to (a rough sketch follows below):
Get the running deployment object
Add/replace containers in it with the new containers
Use the createOrReplace() API to redeploy the modified deployment object
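Here is a minimal, untested sketch of those three steps with the Fabric8 client; the namespace, deployment name, container name, and image below are placeholders for illustration, not values from the original post.

import io.fabric8.kubernetes.api.model.Container;
import io.fabric8.kubernetes.api.model.apps.Deployment;
import io.fabric8.kubernetes.client.DefaultKubernetesClient;
import io.fabric8.kubernetes.client.KubernetesClient;

public class CreateOrReplaceWorkaround {
    public static void main(String[] args) {
        try (KubernetesClient client = new DefaultKubernetesClient()) {
            // 1. Get the running deployment object
            Deployment deployment = client.apps().deployments()
                    .inNamespace("default")
                    .withName("patch-demo")
                    .get();

            // 2. Add/replace containers in it with the new containers
            for (Container container : deployment.getSpec().getTemplate().getSpec().getContainers()) {
                if ("patch-demo-ctr".equals(container.getName())) {
                    container.setImage("nginx:1.21");
                }
            }

            // 3. Redeploy the modified object
            client.apps().deployments()
                    .inNamespace("default")
                    .createOrReplace(deployment);
        }
    }
}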
But your patch could understandably involve more than just an update to the containers field. In that case, processing each editable field becomes messy.
I went ahead with using the official K8s client's patchNamespacedDeployment() API to implement patching. https://github.com/kubernetes-client/java/blob/356109457499862a581a951a710cd808d0b9c622/examples/src/main/java/io/kubernetes/client/examples/PatchExample.java

Related

Helm reads wrong Kubeversion: >=1.22.0-0 for v1.23.0 as v1.20.0

How to deploy on K8 via Pulumi using the ArgoCD Helm Chart?
Pulumi up Diagnostics:
kubernetes:helm.sh/v3:Release (argocd):
error: failed to create chart from template: chart requires kubeVersion: >=1.22.0-0 which is incompatible with Kubernetes v1.20.0
The cluster version is v1.23.0, verified on AWS, and NOT 1.20.0.
ArgoCD install yaml used with CRD2Pulumi: https://raw.githubusercontent.com/argoproj/argo-cd/master/manifests/core-install.yaml
Source:
...
cluster = eks.Cluster("argo-example")  # version="1.23"

# Cluster provider
provider = k8s.Provider(
    "eks",
    kubeconfig=cluster.kubeconfig.apply(lambda k: json.dumps(k))
    # kubeconfig=cluster.kubeconfig
)

ns = k8s.core.v1.Namespace(
    'argocd',
    metadata={
        "name": "argocd",
    },
    opts=pulumi.ResourceOptions(
        provider=provider
    )
)

argo = k8s.helm.v3.Release(
    "argocd",
    args=k8s.helm.v3.ReleaseArgs(
        chart="argo-cd",
        namespace=ns.metadata.name,
        repository_opts=k8s.helm.v3.RepositoryOptsArgs(
            repo="https://argoproj.github.io/argo-helm"
        ),
        values={
            "server": {
                "service": {
                    "type": "LoadBalancer",
                }
            }
        },
    ),
    opts=pulumi.ResourceOptions(provider=provider, parent=ns),
)
Any ideas as to fixing this oddity between the version error and the actual cluster version?
I've tried:
Deleting everything and starting over.
Updating to the latest ArgoCD install yaml.
I could reproduce your issue, though I am not quite sure what causes the mismatch between versions. It would be better to open an issue at Pulumi's kubernetes repository.
Looking at the history of https://github.com/argoproj/argo-helm/blame/main/charts/argo-cd/Chart.yaml, you can see that the kubeVersion requirement was added after 5.9.1. So using that version successfully deploys the Helm chart, e.g.:
import * as k8s from "@pulumi/kubernetes";

const namespaceName = "argo";

const namespace = new k8s.core.v1.Namespace("namespace", {
    metadata: {
        name: namespaceName,
    }
});

const argo = new k8s.helm.v3.Release("argo", {
    repositoryOpts: {
        repo: "https://argoproj.github.io/argo-helm"
    },
    chart: "argo-cd",
    version: "5.9.1",
    namespace: namespace.metadata.name,
})
(Not Recommended) Alternatively, you could also clone the source code of the chart, comment out the kubeVersion requirement in Chart.yaml and install the chart from your local path.
Upgrade Helm. I had a similar issue where my k8s was 1.25 but Helm complained it was 1.20. I tried everything else; upgrading Helm worked.

Rego OPA policy to check if resources are provided for a Deployment in Kubernetes

I'm checking whether the resources.limits key is provided in a Kubernetes Deployment using OPA Rego code. Below is the code. I'm trying to fetch the resources.limits key and it always returns TRUE, regardless of whether resources are provided or not.
package resourcelimits

violation[{"msg": msg}] {
    some container; input.request.object.spec.template.spec.containers[container]
    not container.resources.limits.memory
    msg := "Resources for the pod needs to be provided"
}
You can try something like this:
import future.keywords.in

violation[{"msg": msg}] {
    input.request.kind.kind == "Deployment"
    some container in input.request.object.spec.template.spec.containers
    not container.resources.limits.memory
    msg := sprintf("Container '%v/%v' does not have memory limits", [input.request.object.metadata.name, container.name])
}

Pulumi - How do we patch a deployment created with a Helm chart, when values do not contain the property to be updated?

I have code to deploy a Helm chart using Pulumi's Kubernetes provider.
I would like to patch the StatefulSet (change serviceAccountName) after deploying the chart. The chart doesn't come with an option to specify a service account for the StatefulSet.
Here's my code:
// install psmdb database chart
const psmdbChart = new k8s.helm.v3.Chart(psmdbChartName, {
    namespace: namespace.metadata.name,
    path: './percona-helm-charts/charts/psmdb-db',
    // chart: 'psmdb-db',
    // version: '1.7.0',
    // fetchOpts: {
    //     repo: 'https://percona.github.io/percona-helm-charts/'
    // },
    values: psmdbChartValues
}, {
    dependsOn: psmdbOperator
})

const set = psmdbChart.getResource('apps/v1/StatefulSet', `${psmdbChartName}-${psmdbChartValues.replsets[0].name}`);
I'm using the Percona Server for MongoDB Operator Helm charts. It uses an Operator to manage the StatefulSet, and it also defines CRDs.
I've tried Pulumi transformations. In my case the chart doesn't contain a StatefulSet resource but rather a CRD.
If it's not possible to update serviceAccountName on the StatefulSet using transformations, is there any other way I can override it?
Any help is appreciated. Thanks.
Pulumi has a powerful feature called Transformations, which is exactly what you need here (Example). A transformation is a callback that gets invoked by the Pulumi runtime and can be used to modify resource input properties before the resource is created.
I've not tested the code but you should get the idea:
import * as k8s from "@pulumi/kubernetes";

// install psmdb database chart
const psmdbChart = new k8s.helm.v3.Chart(psmdbChartName, {
    namespace: namespace.metadata.name,
    path: './percona-helm-charts/charts/psmdb-db',
    // chart: 'psmdb-db',
    // version: '1.7.0',
    // fetchOpts: {
    //     repo: 'https://percona.github.io/percona-helm-charts/'
    // },
    values: psmdbChartValues,
    transformations: [
        // Set serviceAccountName on the StatefulSet
        (obj: any, opts: pulumi.CustomResourceOptions) => {
            if (obj.kind === "StatefulSet" && obj.metadata.name === `${psmdbChartName}-${psmdbChartValues.replsets[0].name}`) {
                obj.spec.template.spec.serviceAccountName = "customServiceAccount"
            }
        },
    ],
}, {
    dependsOn: psmdbOperator
})
It seems Pulumi doesn't have a straightforward way to patch an existing Kubernetes resource, though this is still possible with multiple steps.
From a GitHub comment:
Import the existing resource
pulumi up to import it
Make the desired changes to the imported resource
pulumi up to apply the changes
It seems they plan on supporting functionality similar to kubectl apply -f for patching resources.

Terraform Unable to find Helm Release charts

I'm running Kubernetes on GCP and making changes via Terraform v0.11.14.
When running terraform plan I'm getting the error messages below:
Error: Error refreshing state: 2 errors occurred:
* module.cls-xxx-us-central1-a-dev.helm_release.cert-manager: 1 error occurred:
* module.cls-xxx-us-central1-a-dev.helm_release.cert-manager: helm_release.cert-manager: error installing: the server could not find the requested resource
* module.cls-xxx-us-central1-a-dev.helm_release.nginx: 1 error occurred:
* module.cls-xxx-us-central1-a-dev.helm_release.nginx: helm_release.nginx: error installing: the server could not find the requested resource
Here's a copy of my helm.tf
resource "helm_release" "nginx" {
depends_on = ["google_container_node_pool.tally-np"]
name = "ingress-nginx"
chart = "ingress-nginx/ingress-nginx"
namespace = "kube-system"
}
resource "helm_release" "cert-manager" {
depends_on = ["google_container_node_pool.tally-np"]
name = "cert-manager"
chart = "stable/cert-manager"
namespace = "kube-system"
set {
name = "ingressShim.defaultIssuerName"
value = "letsencrypt-production"
}
set {
name = "ingressShim.defaultIssuerKind"
value = "ClusterIssuer"
}
provisioner "local-exec" {
command = "gcloud container clusters get-credentials ${var.cluster_name} --zone ${google_container_cluster.cluster.zone} && kubectl create -f ${path.module}/letsencrypt-prod.yaml"
}
}
I've read that Helm deprecated most of the old chart repos, so I tried adding the repositories and installing the charts locally under the kube-system namespace, but so far the issue persists.
Here's the list of versions for Terraform and its providers:
Terraform v0.11.14
provider.google v2.17.0
provider.helm v0.10.2
provider.kubernetes v1.9.0
provider.random v2.2.1
As the community moves towards Helm v3, the maintainers have deprecated the old Helm model where there was a single mono-repo called stable. In the new model, each product has its own repo. On November 13, 2020, the stable and incubator chart repositories reached the end of development and became archives.
The archived charts are now hosted at a new URL. To continue using the archived charts, you will have to make some tweaks in your helm workflow.
Sample workaround:
helm repo add new-stable https://charts.helm.sh/stable
helm fetch new-stable/prometheus-operator

What is the difference between a resourceVersion and a generation?

In Kubernetes object metadata, there are the concepts of resourceVersion and generation. I understand the notion of resourceVersion: it is an optimistic concurrency control mechanism—it will change with every update. What, then, is generation for?
resourceVersion changes on every write, and is used for optimistic concurrency control
in some objects, generation is incremented by the server as part of persisting writes affecting the spec of an object.
some objects' status fields have an observedGeneration subfield for controllers to persist the generation that was last acted on.
In a Deployment context:
In Short
resourceVersion is the version of a k8s resource, while generation is the version of the deployment, which you can use to undo, pause, and so on using the kubectl CLI.
Source code for kubectl rollout: https://github.com/kubernetes/kubectl/blob/master/pkg/cmd/rollout/rollout.go#L50
The Long Version
resourceVersion
The K8s server saves all modifications to any k8s resource. Each modification has a version, which is called resourceVersion.
K8s client libraries provide a way to receive, in real time, ADD, DELETE, and MODIFY events for any resource. There is also a BOOKMARK event, but let's leave that aside for the moment.
On any modification operation, you receive the new k8s resource with an updated resourceVersion. You can start a watch from this resourceVersion, so you won't miss any events between the time the k8s server sent back the first response and the time the watch started.
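To make the event flow concrete, here is a minimal, illustrative sketch using the Fabric8 client mentioned earlier on this page (the namespace is a placeholder, and it assumes a 5.x client where Watcher.onClose receives a WatcherException). Every event carries the object's current resourceVersion, which you can remember in order to resume a watch later.

import io.fabric8.kubernetes.api.model.apps.Deployment;
import io.fabric8.kubernetes.client.DefaultKubernetesClient;
import io.fabric8.kubernetes.client.KubernetesClient;
import io.fabric8.kubernetes.client.Watcher;
import io.fabric8.kubernetes.client.WatcherException;

public class DeploymentWatchDemo {
    public static void main(String[] args) throws InterruptedException {
        try (KubernetesClient client = new DefaultKubernetesClient()) {
            client.apps().deployments().inNamespace("default").watch(new Watcher<Deployment>() {
                @Override
                public void eventReceived(Action action, Deployment resource) {
                    // ADDED / MODIFIED / DELETED events each carry the object's latest resourceVersion.
                    System.out.println(action + " " + resource.getMetadata().getName()
                            + " resourceVersion=" + resource.getMetadata().getResourceVersion());
                }

                @Override
                public void onClose(WatcherException cause) {
                    System.out.println("Watch closed: " + cause);
                }
            });
            Thread.sleep(60_000); // keep the JVM alive long enough to receive events
        }
    }
}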
K8s doesn't preserve the history of every resource forever. I think it is kept for about 5 minutes, but I'm not sure exactly.
resourceVersion will change after any modification of the object.
The reason for its existence is to avoid concurrency problems when multiple clients try to modify the same k8s resource. This pattern is also pretty common in databases, and you can find more info about it here:
Optimistic concurrency control (https://en.wikipedia.org/wiki/Optimistic_concurrency_control)
https://www.programmersought.com/article/1104647506/
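As a rough, untested illustration of optimistic concurrency with the Fabric8 client (the deployment name and namespace are placeholders): read the object, change it, and write it back. The write includes the resourceVersion you read; if another client modified the object in the meantime, the API server rejects the write with a 409 Conflict instead of silently overwriting the other change.

import io.fabric8.kubernetes.api.model.apps.Deployment;
import io.fabric8.kubernetes.client.DefaultKubernetesClient;
import io.fabric8.kubernetes.client.KubernetesClient;
import io.fabric8.kubernetes.client.KubernetesClientException;

public class OptimisticLockingDemo {
    public static void main(String[] args) {
        try (KubernetesClient client = new DefaultKubernetesClient()) {
            // Read the current object; metadata.resourceVersion identifies this exact revision.
            Deployment current = client.apps().deployments()
                    .inNamespace("default")
                    .withName("patch-demo")
                    .get();

            // Modify the local copy, e.g. scale to 3 replicas.
            current.getSpec().setReplicas(3);

            try {
                // replace() sends the object back with the resourceVersion we read.
                client.apps().deployments()
                        .inNamespace("default")
                        .withName("patch-demo")
                        .replace(current);
            } catch (KubernetesClientException e) {
                if (e.getCode() == 409) {
                    // Conflict: our resourceVersion is stale. Re-read the object and retry the change.
                    System.out.println("Conflict, retrying with a fresh copy: " + e.getMessage());
                }
            }
        }
    }
}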
observedGeneration
You didn't mention it in your question, but it's an important piece of information we need to clarify before moving on to generation.
It is the version of the ReplicaSet that this Deployment is currently tracking.
When the deployment is still being created for the first time, this value won't exist (a good discussion on this can be found here: https://github.com/kubernetes/kubernetes/issues/47871).
This value can be found under status:
....
apiVersion: apps/v1
kind: Deployment
.....
status:
  availableReplicas: 1
  conditions:
  - lastTransitionTime: "2021-02-07T19:04:17Z"
    lastUpdateTime: "2021-02-07T19:04:17Z"
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  - lastTransitionTime: "2021-02-07T19:04:15Z"
    lastUpdateTime: "2021-02-07T19:17:09Z"
    message: ReplicaSet "deployment-bcb437a4-59bb9f6f69" has successfully progressed.
    reason: NewReplicaSetAvailable
    status: "True"
    type: Progressing
  observedGeneration: 3    # <<<--------------------
  readyReplicas: 1
  replicas: 1
  updatedReplicas: 1
P.S. From this article: https://thenewstack.io/kubernetes-deployments-work/
observedGeneration is equal to the deployment.kubernetes.io/revision annotation.
This looks correct, because deployment.kubernetes.io/revision does not exist when the deployment is first created and not yet ready, and it has the same value as observedGeneration once the deployment is updated.
generation
It represents the version of the "new" ReplicaSet that this Deployment should track.
When a deployment is created for the first time, the value of generation will be equal to 1. When observedGeneration reaches 1, it means the ReplicaSet is ready (this question is not about how to know whether a deployment was successful, so I won't get into what "ready" means, which is terminology I made up for this answer - but be aware that there are additional conditions to check whether a deployment was successful or not).
The same goes for any change in the Deployment resource that triggers a re-deployment: the generation value is incremented by 1, and it then takes some time until observedGeneration becomes equal to the generation value.
More info on observedGeneration and generation in the context of kubectl rollout status (to check whether a deployment has "finished"), from the kubectl source code:
https://github.com/kubernetes/kubectl/blob/a2d36ec6d62f756e72fb3a5f49ed0f720ad0fe83/pkg/polymorphichelpers/rollout_status.go#L75
if deployment.Generation <= deployment.Status.ObservedGeneration {
    cond := deploymentutil.GetDeploymentCondition(deployment.Status, appsv1.DeploymentProgressing)
    if cond != nil && cond.Reason == deploymentutil.TimedOutReason {
        return "", false, fmt.Errorf("deployment %q exceeded its progress deadline", deployment.Name)
    }
    if deployment.Spec.Replicas != nil && deployment.Status.UpdatedReplicas < *deployment.Spec.Replicas {
        return fmt.Sprintf("Waiting for deployment %q rollout to finish: %d out of %d new replicas have been updated...\n", deployment.Name, deployment.Status.UpdatedReplicas, *deployment.Spec.Replicas), false, nil
    }
    if deployment.Status.Replicas > deployment.Status.UpdatedReplicas {
        return fmt.Sprintf("Waiting for deployment %q rollout to finish: %d old replicas are pending termination...\n", deployment.Name, deployment.Status.Replicas-deployment.Status.UpdatedReplicas), false, nil
    }
    if deployment.Status.AvailableReplicas < deployment.Status.UpdatedReplicas {
        return fmt.Sprintf("Waiting for deployment %q rollout to finish: %d of %d updated replicas are available...\n", deployment.Name, deployment.Status.AvailableReplicas, deployment.Status.UpdatedReplicas), false, nil
    }
    return fmt.Sprintf("deployment %q successfully rolled out\n", deployment.Name), true, nil
}
return fmt.Sprintf("Waiting for deployment spec update to be observed...\n"), false, nil
I must say that I'm not sure when observedGeneration can be higher than generation. Maybe folks can help me out in the comments.
To sum it all up, see the illustration in this great article: https://thenewstack.io/kubernetes-deployments-work/
More Info:
https://kubernetes.slack.com/archives/C2GL57FJ4/p1612651711106700?thread_ts=1612650049.105300&cid=C2GL57FJ4
Some more information about Bookmark events (which are related to resourceVersion): What k8s bookmark solves?
How do you roll back deployments in Kubernetes: https://learnk8s.io/kubernetes-rollbacks