OPA Rego policy to check if resource limits are provided for a Deployment in Kubernetes

I'm checking whether the resources.limits key is provided in a Kubernetes Deployment, using OPA Rego. Below is the code. I'm trying to fetch the resources.limits key, but it always returns TRUE, regardless of whether resources are provided or not.
package resourcelimits

violation[{"msg": msg}] {
    some container; input.request.object.spec.template.spec.containers[container]
    not container.resources.limits.memory
    msg := "Resources for the pod needs to be provided"
}

In your rule, containers[container] together with some container binds container to the array index, not to the container object, so container.resources.limits.memory is never defined and the not expression always succeeds. Iterate over the container values instead, and restrict the rule to Deployments. You can try something like this:
package resourcelimits

import future.keywords.in

violation[{"msg": msg}] {
    input.request.kind.kind == "Deployment"
    some container in input.request.object.spec.template.spec.containers
    not container.resources.limits.memory
    msg := sprintf("Container '%v/%v' does not have memory limits", [input.request.object.metadata.name, container.name])
}
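To sanity-check the rule, you can run a couple of unit tests with opa test . (a minimal sketch, assuming the rule above is saved under the same resourcelimits package; the test names and the input shape mirroring an admission review request are my own):

package resourcelimits

# Deployment whose container has no resources.limits.memory: one violation expected.
test_missing_memory_limit_is_flagged {
    count(violation) == 1 with input as {
        "request": {
            "kind": {"kind": "Deployment"},
            "object": {
                "metadata": {"name": "demo"},
                "spec": {"template": {"spec": {"containers": [{"name": "app"}]}}}
            }
        }
    }
}

# Deployment whose container declares a memory limit: no violations expected.
test_memory_limit_passes {
    count(violation) == 0 with input as {
        "request": {
            "kind": {"kind": "Deployment"},
            "object": {
                "metadata": {"name": "demo"},
                "spec": {"template": {"spec": {"containers": [{
                    "name": "app",
                    "resources": {"limits": {"memory": "128Mi"}}
                }]}}}
            }
        }
    }
}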

Related

Fetch and update a particular field using Terraform

I have a scenario:
How do I fetch a particular field value and also update a particular field value?
For example:
I'm deploying an application using the Terraform "kubernetes_deployment" resource, configured with environment variables (endpoint=abc) and replicas=2.
resource "kubernetes_deployment" “app” {
…..….
spec {
replicas = 2
template {
spec {
….
env {
name = “ENDPOINT”
value = “abc”
}
}
Once I've deployed using the Terraform script, another script might change the configuration to replicas=5 and different environment values (endpoint=xyz).
Now I need to update only replicas to 20 (if replicas < 20) through the Terraform script, without changing the environment values (endpoint=abc):
resource "kubernetes_deployment" “app” {
…..….
spec {
replicas = 20 -> only this has to reflect in apply
template {
spec {
….
env {
name = “ENDPOINT”
value = “abc”
}
}
How can I fetch a particular field (replicas) to compare the replicas count against 20 and update only the replicas count?
Can someone with more Terraform experience help me with this?
Inside the "kubernetes_deployment" resource block, consider adding a lifecycle block. Use it to ignore changes to resource attributes that can be made outside of Terraform's knowledge.
Provide a list of resource attributes to "ignore_changes", which Terrform would ignore in subsequent runs. The arguments are the relative address of the attributes in the resource. Map and list elements can be referenced using index notation.
lifecycle {
  # Relative address of the env blocks under the single container in this resource
  ignore_changes = [spec[0].template[0].spec[0].container[0].env]
}
Reference: https://www.terraform.io/docs/language/meta-arguments/lifecycle.html#ignore_changes
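For context, here is a sketch of how that lifecycle block could sit inside the resource from the question (the container block and the exact ignore_changes address are assumptions based on the single-container layout shown above):

resource "kubernetes_deployment" "app" {
  # ... metadata etc.

  spec {
    # Still managed by Terraform, so changing this value is applied as usual.
    replicas = 20

    template {
      spec {
        container {
          # ... name, image, etc.
          env {
            name  = "ENDPOINT"
            value = "abc"
          }
        }
      }
    }
  }

  lifecycle {
    # Drift in the env blocks (e.g. endpoint changed to xyz by another script)
    # is ignored on subsequent plans and applies.
    ignore_changes = [spec[0].template[0].spec[0].container[0].env]
  }
}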

Controller gets the wrong namespace name for multiple operator instances in different namespaces

I developed a k8s Operator. After I deploy the first Operator in the first namespace, it works well. Then I deploy the 2nd Operator in a second namespace, and I see that the request the 2nd controller receives still carries the first namespace, while the expected namespace is the second one.
Please see the following code: when I exercise the second operator in the second namespace, the request's namespace is still the first namespace.
func (r *AnexampleReconciler) Reconcile(request ctrl.Request) (ctrl.Result, error) {
    log := r.Log.WithValues("Anexample", request.NamespacedName)

    instance := &v1alpha1.Anexample{}
    err := r.Get(context.TODO(), request.NamespacedName, instance)
    if err != nil {
        if errors.IsNotFound(err) {
            log.Info("Anexample resource not found. Ignoring since object must be deleted.")
            return reconcile.Result{}, nil
        }
        log.Error(err, "Failed to get Anexample.")
        return reconcile.Result{}, err
    }
I suspect it might be related to leader election, but I don't understand it well.
mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{
    Scheme:             scheme,
    MetricsBindAddress: metricsAddr,
    Port:               9443,
    LeaderElection:     enableLeaderElection,
    LeaderElectionID:   "2eeda3e4.com.aaa.bbb.ccc",
})
if err != nil {
    setupLog.Error(err, "unable to start manager")
    os.Exit(1)
}
What is happening in the controller? How can I fix it?
We are seeing a similar issue: request.NamespacedName from controller-runtime is returning the wrong namespace. It might be a bug in controller-runtime.
request.NamespacedName depends on the namespace of the custom resource you are deploying. Operators are deployed into a namespace, but can still be configured to listen to custom resources in all namespaces.
This should not be related to leader election but to the way you set up your manager: you didn't specify a namespace in the ctrl.Options for the manager, so it listens for CR changes in all namespaces. If you want your operator to listen to one single namespace only, pass the namespace to the manager.
mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{
    Scheme:             scheme,
    MetricsBindAddress: metricsAddr,
    Port:               9443,
    LeaderElection:     enableLeaderElection,
    LeaderElectionID:   "2eeda3e4.com.aaa.bbb.ccc",
    Namespace:          "<namespace-of-operator-two>",
})
if err != nil {
    setupLog.Error(err, "unable to start manager")
    os.Exit(1)
}
See also here: https://developers.redhat.com/blog/2020/06/26/migrating-a-namespace-scoped-operator-to-a-cluster-scoped-operator#migration_guide__namespace_scoped_to_cluster_scoped
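If each operator instance should instead watch a fixed list of namespaces rather than a single one, older controller-runtime releases (the ones that still take MetricsBindAddress and Port in ctrl.Options, as above) provide a multi-namespace cache builder. A sketch, assuming the namespace list is known at startup:

// requires the "sigs.k8s.io/controller-runtime/pkg/cache" package
// watchNamespaces is assumed to come from your own configuration (flag, env var, ...).
watchNamespaces := []string{"namespace-one", "namespace-two"}

mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{
    Scheme:             scheme,
    MetricsBindAddress: metricsAddr,
    Port:               9443,
    LeaderElection:     enableLeaderElection,
    LeaderElectionID:   "2eeda3e4.com.aaa.bbb.ccc",
    // Restrict the manager's cache (and therefore the controller's watches)
    // to the listed namespaces instead of the whole cluster.
    NewCache: cache.MultiNamespacedCacheBuilder(watchNamespaces),
})
if err != nil {
    setupLog.Error(err, "unable to start manager")
    os.Exit(1)
}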

Nextflow doesn't use the right service account to deploy workflows to kubernetes

We're trying to use Nextflow on a k8s namespace other than our default; the namespace we're using is nextflownamespace. We've created our PVC and ensured the default service account has an admin rolebinding. We're getting an error that Nextflow can't access the PVC:
"message": "persistentvolumeclaims \"my-nextflow-pvc\" is forbidden:
User \"system:serviceaccount:mynamespace:default\" cannot get resource
\"persistentvolumeclaims\" in API group \"\" in the namespace \"nextflownamespace\"",
In that error we see that system:serviceaccount:mynamespace:default is incorrectly pointing to our default namespace, mynamespace, not nextflownamespace, which we created for Nextflow use.
We tried adding debug.yaml = true to our nextflow.config, but couldn't find the YAML it submits to k8s to validate the error. Our config file looks like this:
profiles {
    standard {
        k8s {
            executor = "k8s"
            namespace = "nextflownamespace"
            cpus = 1
            memory = 1.GB
            debug.yaml = true
        }
        aws {
            endpoint = "https://s3.nautilus.optiputer.net"
        }
    }
}
We did verify that when we change the namespace to another arbitrary value, the error message uses the new arbitrary namespace, but the service account name continues to point to the user's default namespace erroneously.
We've tried every variant of profiles.standard.k8s.serviceAccount = "system:serviceaccount:nextflownamespace:default" that we could think of, but didn't see any change with those attempts.
I think it's best to avoid using nested config profiles with Nextflow. I would either remove the 'standard' layer from your profile or just make 'standard' a separate profile:
profiles {
    standard {
        process.executor = 'local'
    }
    k8s {
        executor = "k8s"
        namespace = "nextflownamespace"
        cpus = 1
        memory = 1.GB
        debug.yaml = true
    }
    aws {
        endpoint = "https://s3.nautilus.optiputer.net"
    }
}
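If you also want the pods to run under an explicit service account rather than whatever the submitting context defaults to, the k8s config scope has a serviceAccount option. A sketch, assuming Nextflow's standard k8s scope and the namespace from the question:

profiles {
    k8s {
        process.executor   = 'k8s'
        k8s.namespace      = 'nextflownamespace'
        k8s.serviceAccount = 'default'   // the account that was given the admin rolebinding
    }
}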

Does Fabric8io K8s java client support patch() or rollingupdate() using YAML snippets?

I am trying to program the patching/rolling upgrade of k8s apps by taking deployment snippets as input. I use the patch() method to apply the snippet onto an existing deployment as part of a rolling update, using fabric8io's k8s client APIs (kubernetes-client version 4.10.1).
I'm also using some loadYaml helper methods from kubernetes-api 3.0.12.
Here is my sample snippet, the adminpatch.yaml file:
kind: Deployment
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
  template:
    spec:
      containers:
        - name: ${PATCH_IMAGE_NAME}
          image: ${PATCH_IMAGE_URL}
          imagePullPolicy: Always
I'm sending the above file content (with all the placeholders replaced) to the patchDeployment() method as a string.
Here is my call to the fabric8 patch() method:
public static String patchDeployment(String deploymentName, String namespace, String deploymentYaml) {
    try {
        Deployment deploymentSnippet = (Deployment) getK8sObject(deploymentYaml);
        if (deploymentSnippet instanceof Deployment) {
            logger.debug("Valid deployment object.");
            Deployment deployment = getK8sClient().apps().deployments().inNamespace(namespace).withName(deploymentName)
                    .rolling().patch(deploymentSnippet);
            System.out.println(deployment.toString());
            return getLastConfig(deployment.getMetadata(), deployment);
        }
    } catch (Exception Ex) {
        Ex.printStackTrace();
    }
    return "Failed";
}
It throws the below exception:
> io.fabric8.kubernetes.client.KubernetesClientException: Failure
> executing: PATCH at:
> https://10.44.4.126:6443/apis/apps/v1/namespaces/default/deployments/patch-demo.
> Message: Deployment.apps "patch-demo" is invalid: spec.selector:
> Invalid value:
> v1.LabelSelector{MatchLabels:map[string]string{"app":"nginx",
> "deployment":"3470574ffdbd6e88d426a77dd951ed45"},
> MatchExpressions:[]v1.LabelSelectorRequirement(nil)}: field is
> immutable. Received status: Status(apiVersion=v1, code=422,
> details=StatusDetails(causes=[StatusCause(field=spec.selector,
> message=Invalid value:
> v1.LabelSelector{MatchLabels:map[string]string{"app":"nginx",
> "deployment":"3470574ffdbd6e88d426a77dd951ed45"},
> MatchExpressions:[]v1.LabelSelectorRequirement(nil)}: field is
> immutable, reason=FieldValueInvalid, additionalProperties={})],
> group=apps, kind=Deployment, name=patch-demo, retryAfterSeconds=null,
> uid=null, additionalProperties={}), kind=Status,
> message=Deployment.apps "patch-demo" is invalid: spec.selector:
> Invalid value:
> v1.LabelSelector{MatchLabels:map[string]string{"app":"nginx",
> "deployment":"3470574ffdbd6e88d426a77dd951ed45"},
> MatchExpressions:[]v1.LabelSelectorRequirement(nil)}: field is
> immutable, metadata=ListMeta(_continue=null, remainingItemCount=null,
> resourceVersion=null, selfLink=null, additionalProperties={}),
> reason=Invalid, status=Failure, additionalProperties={}).
I also tried the original snippet (with labels and selectors) with kubectl patch deployment <DEPLOYMENT_NAME> -n <MY_NAMESPACE> --patch "$(cat adminpatch.yaml)", and that applies the same snippet fine.
I could not find much documentation on the fabric8io k8s client patch() Java API. Any help will be appreciated.
With the latest improvements in the Fabric8 Kubernetes Client, you can do this via both the patch() and rolling() APIs, apart from using createOrReplace(), which is mentioned in the older answer.
Patching a JSON/YAML string using the patch() call:
As of release v5.4.0, the Fabric8 Kubernetes Client supports patching via a raw string. It can be either YAML or JSON, see PatchTest.java. Here is an example using a raw JSON string to update the image of a Deployment:
try (KubernetesClient kubernetesClient = new DefaultKubernetesClient()) {
    kubernetesClient.apps().deployments()
            .inNamespace(deployment.getMetadata().getNamespace())
            .withName(deployment.getMetadata().getName())
            .patch("{\"spec\":{\"template\":{\"spec\":{\"containers\":[{\"name\":\"patch-demo-ctr-2\",\"image\":\"redis\"}]}}}}");
}
Rolling update to change a container image:
However, if you just want to do a rolling update, you might want to use the rolling() API instead. Here is how it would look for updating the image of an existing Deployment:
try (KubernetesClient client = new DefaultKubernetesClient()) {
    // ... Create Deployment
    // Update Deployment for a single container Deployment
    client.apps().deployments()
            .inNamespace(namespace)
            .withName(deployment.getMetadata().getName())
            .rolling()
            .updateImage("gcr.io/google-samples/hello-app:2.0");
}
Rolling update to change multiple images in a multi-container Deployment:
If you want to update a Deployment with multiple containers, you need to use the updateImage(Map<String, String>) method instead. Here is an example of its usage:
try (KubernetesClient client = new DefaultKubernetesClient()) {
    Map<String, String> containerToImageMap = new HashMap<>();
    containerToImageMap.put("nginx", "stable-perl");
    containerToImageMap.put("hello", "hello-world:linux");
    client.apps().deployments()
            .inNamespace(namespace)
            .withName("multi-container-deploy")
            .rolling()
            .updateImage(containerToImageMap);
}
Rolling restart of an existing Deployment:
If you need to restart your existing Deployment, you can just use the rolling().restart() DSL method like this:
try (KubernetesClient client = new DefaultKubernetesClient()) {
    client.apps().deployments()
            .inNamespace(namespace)
            .withName(deployment.getMetadata().getName())
            .rolling()
            .restart();
}
Here is the related bug in the fabric8io rolling API: https://github.com/fabric8io/kubernetes-client/issues/1868
As of now, one way I found to achieve patching with the fabric8io APIs is to (see the sketch below):
- get the running deployment object,
- add/replace containers in it with the new containers, and
- use the createOrReplace() API to redeploy the modified deployment object.
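A sketch of that flow against the fabric8 4.x API (deploymentName, namespace and patchedContainers are placeholders for your own values):

try (KubernetesClient client = new DefaultKubernetesClient()) {
    // Fetch the live Deployment as the base object.
    Deployment running = client.apps().deployments()
            .inNamespace(namespace)
            .withName(deploymentName)
            .get();

    // Swap in the containers built from the patch snippet.
    running.getSpec().getTemplate().getSpec().setContainers(patchedContainers);

    // Push the modified object back to the cluster.
    client.apps().deployments()
            .inNamespace(namespace)
            .createOrReplace(running);
}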
But your patch understandably could be more than just an update to the containers field. In that case, processing each editable field becomes messy.
I went ahead with using the official K8s client's patchNamespacedDeployment() API to implement patching. https://github.com/kubernetes-client/java/blob/356109457499862a581a951a710cd808d0b9c622/examples/src/main/java/io/kubernetes/client/examples/PatchExample.java

Kubernetes CRD object fails to be created using the go-client interface

I created a Kubernetes CRD following the example at https://github.com/kubernetes/sample-controller.
My controller works fine, and I can listen for the create/update/delete events of my CRD, until I try to create an object using the go-client interface.
This is my CRD type:
type MyEndpoint struct {
    metav1.TypeMeta `json:",inline"`

    // Standard object's metadata.
    // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
    // +optional
    metav1.ObjectMeta `json:"metadata,omitempty" protobuf:"bytes,1,opt,name=metadata"`
}
I can create the CRD definition and create objects using kubectl without any problems, but I get a failure when I use the following code to create the object.
myepDeploy := &crdv1.MyEndpoint{
    TypeMeta: metav1.TypeMeta{
        Kind:       "MyEndpoint",
        APIVersion: "mydom.k8s.io/v1",
    },
    ObjectMeta: metav1.ObjectMeta{
        Name: podName,
        Labels: map[string]string{
            "serviceName": serviceName,
            "nodeIP":      nodeName,
            "port":        "5000",
        },
    },
}
epClient := myclientset.MycontrollerV1().MyEndpoints("default")
epClient.Create(myepDeploy)
But I got the following error:
object *v1.MyEndpoint does not implement the protobuf marshalling interface and cannot be encoded to a protobuf message
I took a look at other standard types, and I don't see that they implement such an interface. I searched on Google, but haven't had any luck.
Any ideas? Please help. BTW, I am running on minikube.
For most common types and for simple types, marshalling works out of the box. In the case of a more complex structure, you may need to implement the marshalling interface manually.
You may try commenting out parts of the MyEndpoint structure to find out what exactly causes the problem.
This error occurs when your client epClient tries to marshal the MyEndpoint object to protobuf. It is caused by your REST client config: try setting the content type to "application/json".
If you are using the code below to generate the config, then change the content type:
cfg, err := clientcmd.BuildConfigFromFlags(masterURL, kubeconfig)
if err != nil {
    glog.Fatalf("Error building kubeconfig: %s", err.Error())
}
cfg.ContentType = "application/json"

kubeClient, err := kubernetes.NewForConfig(cfg)
if err != nil {
    glog.Fatalf("Error building kubernetes clientset: %s", err.Error())
}