OpenShift does not honor the USER directive in Dockerfile - kubernetes

I'm new to OpenShift/k8s. The Docker image I'm running in OpenShift uses USER blabla, but when I exec into the pod it runs as a different user rather than the one set in the Dockerfile.
I'm wondering why, and how can I work around this?
Thanks

For security, cluster administrators have the option to force containers to run with cluster assigned uids. By default, most containers run using a uid from a range assigned to the project.
This is controlled by the configured SecurityContextConstraints.
To allow containers to run as the user declared in their Dockerfile (even though this can expose the cluster, security-wise), grant the pod's service account access to the anyuid SecurityContextConstraint: oadm policy add-scc-to-user anyuid system:serviceaccount:<your ns>:<your service account>
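If you want to check which uid range your project will assign, OpenShift records it as an annotation on the namespace. Here is a minimal client-go sketch that reads it; the namespace name is a placeholder, and it assumes the pod's service account is allowed to read its own namespace:

package main

import (
    "context"
    "fmt"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/rest"
)

func main() {
    // In-cluster configuration; assumes this runs inside a pod.
    cfg, err := rest.InClusterConfig()
    if err != nil {
        panic(err)
    }
    clientset, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }
    // "my-project" is a placeholder for your project/namespace name.
    ns, err := clientset.CoreV1().Namespaces().Get(context.TODO(), "my-project", metav1.GetOptions{})
    if err != nil {
        panic(err)
    }
    // OpenShift records the uid range it will assign to pods in this annotation.
    fmt.Println(ns.Annotations["openshift.io/sa.scc.uid-range"])
}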

Related

Aws ECS Fargate enforce readonlyfilesystem

I need to enforce 'readonlyRootFilesystem' on ECS Fargate services to reduce Security Hub vulnerabilities.
I thought it would be an easy task: just set it to true in the task definition.
But it backfired: the service does not deploy, because the commands in the Dockerfile cannot run (they have no access to the folders they need), and it is also incompatible with SSM exec commands, so I won't be able to get inside the container.
I managed to set readonlyRootFilesystem to true and get my service back up by mounting volumes: a tmp volume that the container uses to install dependencies at start, and a data volume to store data (updates).
So now, according to the documentation, the Security Hub vulnerability should be fixed, as the rule only requires that the variable not be false, but Security Hub is still flagging the task as non-compliant.
--- More update ---
The task definition of my service also spins up a Datadog image for monitoring. That container also needs to have its filesystem read-only to satisfy Security Hub.
Here I cannot solve it as above, because the Datadog agent needs access to the /etc/ folder, and if I mount a volume there I will lose files and the service won't start.
Is there a way out of this?
Any ideas?
In case someone stumbles into this.
The solution (or workaround, call it as you please) was to set readonlyRootFilesystem to true for both the container and the sidecar (Datadog in this case) and use bind mounts.
The rules for monitoring ECS using Datadog can be found here.
The bind mounts that you need to add for your service depend on how you have set up your Dockerfile.
In my case it was about adding a volume for downloading data.
Moreover, since ECS exec (SSM) does not work with a read-only filesystem, if you want it you also have to add mounts: I added two mounts, /var/lib/amazon and /var/log/amazon. This allows SSM to work (basically docker exec into your container).
As for Datadog, I just needed to fix the mounts so that the agent could work. In my case, since it was again a custom image, I mounted a volume on /etc/datadog-agent.
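As a rough illustration of the shape of this (not my exact task definition), here is a sketch using the AWS SDK for Go v2 ECS types; container name, image and mount paths are placeholders:

package main

import (
    "fmt"

    "github.com/aws/aws-sdk-go-v2/aws"
    ecstypes "github.com/aws/aws-sdk-go-v2/service/ecs/types"
)

func main() {
    // Task-level volumes; on Fargate these become ephemeral bind mounts.
    volumes := []ecstypes.Volume{
        {Name: aws.String("tmp")},
        {Name: aws.String("data")},
    }

    // Application container with a read-only root filesystem, writing only
    // into the mounted volumes.
    app := ecstypes.ContainerDefinition{
        Name:                   aws.String("app"),           // placeholder
        Image:                  aws.String("my-app:latest"), // placeholder
        ReadonlyRootFilesystem: aws.Bool(true),
        MountPoints: []ecstypes.MountPoint{
            {SourceVolume: aws.String("tmp"), ContainerPath: aws.String("/tmp")},
            {SourceVolume: aws.String("data"), ContainerPath: aws.String("/data")},
        },
    }

    fmt.Printf("container %s: read-only rootfs with %d bind mounts\n", *app.Name, len(volumes))
}

The same pattern (extra MountPoints on /var/lib/amazon and /var/log/amazon for SSM, or on /etc/datadog-agent for the sidecar) covers the other cases described above.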
happy days!

Trying to connect to a DigitalOcean Kubernetes cluster - .kube/config: not a directory

I'm trying to connect to a DigitalOcean Kubernetes cluster using doctl, but when I run
doctl kubernetes cluster kubeconfig save <> I get an error saying .kube/config: not a directory. I've authenticated using doctl, and when I run doctl account get I see my account info. I'm confused as to what the problem is. Is this some sort of permission issue, or did I miss a config step somewhere?
kubectl (by default) stores its configuration in ${HOME}/.kube/config. It appears you don't have the file, and the command doesn't create it if it doesn't exist; I recommend you try creating the ${HOME}/.kube directory first, as doctl really ought to create the config file itself once the directory exists.
kubectl facilitates interacting with multiple clusters as multiple users in multiple namespaces through the use of a tuple called a 'context', which combines a cluster with a user and a(n optional) namespace. The kubectl config use-context command lets you switch between these easily.
After you're done with a cluster, generally (!) you should tidy up its entries in ${HOME}/.kube/config too, as these configs tend to grow over time.
You can change the location of the kubectl config file using an environment variable (KUBECONFIG).
See Organizing Cluster Access Using kubeconfig Files
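For what it's worth, "create the directory first" is all the error is really asking for; a tiny Go sketch of the equivalent step (the path is just the standard default, nothing DigitalOcean-specific):

package main

import (
    "fmt"
    "os"
    "path/filepath"
)

func main() {
    home, err := os.UserHomeDir()
    if err != nil {
        panic(err)
    }
    // doctl and kubectl both expect ${HOME}/.kube to be a directory they can
    // write the config file into.
    kubeDir := filepath.Join(home, ".kube")
    if err := os.MkdirAll(kubeDir, 0o755); err != nil {
        panic(err)
    }
    fmt.Println("kubeconfig will be written to", filepath.Join(kubeDir, "config"))
}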

How to create a GKE cluster using a service account in another project

I have a project A in which I have created a service account.
I want to create a GKE cluster in project B.
I followed the steps for service account impersonation listed here: https://cloud.google.com/iam/docs/impersonating-service-accounts
In project A,
the default service accounts of project B have roles/iam.serviceAccountTokenCreator and roles/iam.serviceAccountUser on the service account I created, which is my-service-account.
In project B,
my-service-account has the Kubernetes Admin role.
When I try to create the cluster, I end up with the error:
Error: Error waiting for creating GKE NodePool: All cluster resources were brought up, but: only 0 nodes out of 1 have registered; cluster may be unhealthy.
I am using Terraform to create this cluster, and the service account being used by Terraform has the Kubernetes Admin and Service Account User roles.
This is what it shows in the console:
[screenshot: GKE error]
Edit:
I tried using the gcloud command line to create the GKE cluster:
gcloud beta container --project "my-project" clusters create "test-gke-sa" --zone "us-west1-a" --no-enable-basic-auth --cluster-version "1.18.16-gke.502" --release-channel "regular" --machine-type "e2-standard-16" --image-type "COS" --disk-type "pd-standard" --disk-size "100" --metadata disable-legacy-endpoints=true --scopes "https://www.googleapis.com/auth/devstorage.read_only","https://www.googleapis.com/auth/logging.write","https://www.googleapis.com/auth/monitoring","https://www.googleapis.com/auth/servicecontrol","https://www.googleapis.com/auth/service.management.readonly","https://www.googleapis.com/auth/trace.append" --num-nodes "3" --enable-stackdriver-kubernetes --enable-private-nodes --master-ipv4-cidr "192.168.0.16/28" --enable-ip-alias --network "projects/infgprj-sbo-n-hostgs-gl-01/global/networks/my-network" --subnetwork "projects/my-network/regions/us-west1/subnetworks/my-subnetwork" --cluster-secondary-range-name "gke1-pods" --services-secondary-range-name "gke1-services" --default-max-pods-per-node "110" --no-enable-master-authorized-networks --addons HorizontalPodAutoscaling,HttpLoadBalancing,GcePersistentDiskCsiDriver --enable-autoupgrade --enable-autorepair --max-surge-upgrade 1 --max-unavailable-upgrade 0 --enable-shielded-nodes --shielded-secure-boot --node-locations "us-west1-a" --service-account="my-service-account@project-a.iam.gserviceaccount.com"
Got the same errors.
I see that the node pool is created, but no nodes (or at least they are not attached to the node pool?).
Here are some more screenshots of the errors:
[screenshot: VM page]
[screenshot: GKE page]
Solution: Finally, I figured out what was wrong. I had given the Token Creator role only to the default service accounts. It started working when I gave the same role to the default service agents as well. So basically:
role    = "roles/iam.serviceAccountTokenCreator",
members = [
  "serviceAccount:{project-number}-compute@developer.gserviceaccount.com",
  "serviceAccount:service-{project-number}@container-engine-robot.iam.gserviceaccount.com",
  "serviceAccount:service-{project-number}@compute-system.iam.gserviceaccount.com",
]
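If you are applying the binding outside Terraform, the equivalent call through the IAM API looks roughly like the sketch below (the service account email and project number are placeholders; it appends to the existing policy rather than replacing it):

package main

import (
    "context"
    "fmt"
    "log"

    iam "google.golang.org/api/iam/v1"
)

func main() {
    ctx := context.Background()
    // Uses GOOGLE_APPLICATION_CREDENTIALS for authentication.
    svc, err := iam.NewService(ctx)
    if err != nil {
        log.Fatal(err)
    }

    // Placeholders: the service account in project A and project B's number.
    resource := "projects/-/serviceAccounts/my-service-account@project-a.iam.gserviceaccount.com"
    projectNumber := "123456789012"

    policy, err := svc.Projects.ServiceAccounts.GetIamPolicy(resource).Do()
    if err != nil {
        log.Fatal(err)
    }
    policy.Bindings = append(policy.Bindings, &iam.Binding{
        Role: "roles/iam.serviceAccountTokenCreator",
        Members: []string{
            "serviceAccount:" + projectNumber + "-compute@developer.gserviceaccount.com",
            "serviceAccount:service-" + projectNumber + "@container-engine-robot.iam.gserviceaccount.com",
            "serviceAccount:service-" + projectNumber + "@compute-system.iam.gserviceaccount.com",
        },
    })
    _, err = svc.Projects.ServiceAccounts.SetIamPolicy(resource, &iam.SetIamPolicyRequest{Policy: policy}).Do()
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println("token creator role granted")
}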
Just to confirm that it's a service account error and not something involving Terraform, I recommend that you:
A. impersonate Project A's service account and confirm that you are who you're trying to be with this command - gcloud auth list (the active account is the one with the star next to it), and then
B. try creating a cluster in Project B with gcloud container clusters create - here are the reference docs but you can also:
go to Console > Kubernetes Engine
click on "Create,"
scroll down to the bottom of the form and click on the "COMMAND LINE" link to launch a modal that generates the syntax of the CLI command you'd want to run
copy, paste, and tweak it to create only one node (plus whatever other basic settings you want to change)... make sure it specifies --project=project-B
run the command
That will likely give you a more helpful error message. Or at least a different one, so, hurray?
Usually the above error may be caused by the following reasons:
1] If using a Shared VPC, verify the IAM permissions are correct.
2] Verify the auto-generated ingress firewall rules were created.
Usually three firewall rules are created:
gke-${cluster_name}-${random_char}-all : firewall rule for pod-to-pod communication
gke-${cluster_name}-${random_char}-master : rule for the master to talk to nodes
gke-${cluster_name}-${random_char}-vms : node-to-node communication
(random_char: a random character suffix)
3] Check the firewall rules for denial of egress.
By default GCP creates a firewall rule allowing all egress. If you delete that rule or deny all egress, then you must configure a firewall rule that allows egress to the master CIDR block on TCP ports 443 and 10250. The Private Cluster Firewall Rules documentation explains how to obtain the master CIDR block.
If you enable other GKE add-ons you may need to add additional egress firewall rules.
4] Check the DNS configuration for communication with Google APIs.
Check the kubelet logs for any failed curl requests, e.g. "Unable to resolve host" or "Connection timeout" during kubelet installation. There is a chance the DNS configuration is incorrect (e.g. resolving private Google APIs vs. hitting public Google APIs). A dig command, or looking at /etc/resolv.conf for the DNS servers, should confirm where requests are being routed.
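If you want to script those last two checks from a node or a debug pod, a rough Go sketch (the master IP below is a placeholder inside the question's --master-ipv4-cidr range; replace it with your control plane address):

package main

import (
    "fmt"
    "net"
    "time"
)

func main() {
    // Placeholder control-plane address in 192.168.0.16/28, on the two ports
    // the nodes need egress to.
    for _, addr := range []string{"192.168.0.18:443", "192.168.0.18:10250"} {
        conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
        if err != nil {
            fmt.Println(addr, "unreachable:", err)
            continue
        }
        conn.Close()
        fmt.Println(addr, "reachable")
    }

    // Rough equivalent of the dig check: where do Google API lookups resolve to?
    ips, err := net.LookupHost("container.googleapis.com")
    fmt.Println("container.googleapis.com ->", ips, err)
}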

Does the Kubernetes runAsUser security context setting override the user setting in the container image?

After reading about the Kubernetes pod security context in the k8s documentation (https://kubernetes.io/docs/tasks/configure-pod-container/security-context/) I have a question that I could not find an answer to.
The security context allows adding a runAsUser setting with a user ID. Can this be used to run a container image that has NOT been modified to run as a non-root user? Meaning, if runAsUser is set to say 1000 and the container image that runs in this pod does not use a USER directive / is basically built to run as root, will the runAsUser setting override the container image?
Will the container run with user 1000, or will it continue to run as root?
I'm working on setting up Kubernetes to try this scenario in a cluster, but I would like to understand the concept and what the expected behavior is.
Yes, this will run the container with the provided user ID (ignoring the user in the container image).
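For illustration, here is the relevant field expressed with the client-go core/v1 types (the image name is just an example):

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

func main() {
    uid := int64(1000)
    pod := corev1.Pod{
        Spec: corev1.PodSpec{
            // Pod-level security context: every container in the pod runs as
            // uid 1000, regardless of any USER directive baked into the image.
            SecurityContext: &corev1.PodSecurityContext{
                RunAsUser: &uid,
            },
            Containers: []corev1.Container{
                {Name: "app", Image: "nginx"}, // example image built to run as root
            },
        },
    }
    fmt.Println("runAsUser:", *pod.Spec.SecurityContext.RunAsUser)
}

(The same field also exists per container under containers[].securityContext, and the container-level value takes precedence over the pod-level one.)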

How to get Kubernetes cluster name from K8s API using client-go

How to get Kubernetes cluster name from K8s API mentions that
curl http://metadata/computeMetadata/v1/instance/attributes/cluster-name -H "Metadata-Flavor: Google"
(from within the cluster), or
kubectl run curl --rm --restart=Never -it --image=appropriate/curl -- -H "Metadata-Flavor: Google" http://metadata.google.internal/computeMetadata/v1/instance/attributes/cluster-name
(from outside the cluster), can be used to retrieve the cluster name. That works.
Is there a way to perform the same programmatically using the k8s client-go library? Maybe using the RESTClient()? I've tried, but kept getting "the server could not find the requested resource".
UPDATE
What I'm trying to do is get the cluster name from an app that runs either on a local computer or within a k8s cluster. The k8s client-go allows initialising the clientset via in-cluster or out-of-cluster authentication.
With the two commands mentioned at the top that is achievable. I was wondering if there was a way to achieve the same from the client-go library, instead of having to use kubectl or curl depending on where the service is run from.
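For reference, the initialisation pattern I mean is roughly the standard one (a sketch, not my exact code):

package main

import (
    "fmt"
    "path/filepath"

    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/rest"
    "k8s.io/client-go/tools/clientcmd"
    "k8s.io/client-go/util/homedir"
)

func main() {
    // Try in-cluster config first, then fall back to the local kubeconfig.
    cfg, err := rest.InClusterConfig()
    if err != nil {
        cfg, err = clientcmd.BuildConfigFromFlags("", filepath.Join(homedir.HomeDir(), ".kube", "config"))
        if err != nil {
            panic(err)
        }
    }
    clientset, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }
    fmt.Println("clientset ready:", clientset != nil)
}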
The data that you're looking for (the name of the cluster) is available at the GCP level. The name itself is a resource within GKE, not Kubernetes. This means that this specific information is not available using client-go.
So in order to get this data, you can use the Google Cloud Client Libraries for Go, designed to interact with GCP.
As a starting point, you can consult this document.
First you have to download the container package:
➜ go get google.golang.org/api/container/v1
Before you launch your code you will have to authenticate to fetch the data.
Google has a very good document on how to achieve that.
Basically you have to generate a ServiceAccount key and pass it in the GOOGLE_APPLICATION_CREDENTIALS environment variable:
➜ export GOOGLE_APPLICATION_CREDENTIALS=sakey.json
Regarding the information that you want, you can fetch the cluster information (including the name) following this example.
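A condensed version of what that example does might look like this (the flags mirror the command below; the exact linked sample may differ slightly):

package main

import (
    "context"
    "flag"
    "fmt"
    "log"

    container "google.golang.org/api/container/v1"
)

func main() {
    project := flag.String("project", "", "GCP project ID")
    zone := flag.String("zone", "-", "compute zone, or '-' for all zones")
    flag.Parse()

    ctx := context.Background()
    // Reads GOOGLE_APPLICATION_CREDENTIALS set above.
    svc, err := container.NewService(ctx)
    if err != nil {
        log.Fatal(err)
    }

    resp, err := svc.Projects.Zones.Clusters.List(*project, *zone).Do()
    if err != nil {
        log.Fatal(err)
    }
    for _, c := range resp.Clusters {
        fmt.Printf("Cluster %q (%s) master_version: v%s\n", c.Name, c.Status, c.CurrentMasterVersion)
    }
}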
Once you do this you can launch your application like this:
➜ go run main.go -project <google_project_name> -zone us-central1-a
And the result would be information about your cluster:
Cluster "tom" (RUNNING) master_version: v1.14.10-gke.17 -> Pool "default-pool" (RUNNING) machineType=n1-standard-2 node_version=v1.14.10-gke.17 autoscaling=false%
Also, it is worth mentioning that if you run this command:
curl http://metadata/computeMetadata/v1/instance/attributes/cluster-name -H "Metadata-Flavor: Google"
you are also interacting with the GCP APIs, and it can go unauthenticated as long as it's run within a GCE machine/GKE cluster, which provides automatic authentication.
You can read more about it under Google's Storing and retrieving instance metadata document.
Finally, one great advantage of doing this with the Cloud Client Libraries is that it can be launched externally (as long as it's authenticated) or internally within pods in a deployment.
Let me know if it helps.
If you're running inside GKE, you can get the cluster name through the instance attributes: https://pkg.go.dev/cloud.google.com/go/compute/metadata#InstanceAttributeValue
More specifically, the following should give you the cluster name:
metadata.InstanceAttributeValue("cluster-name")
The example shared by Thomas lists all the clusters in your project, which may not be very helpful if you just want to query the name of the GKE cluster hosting your pod.
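Put together, a minimal in-cluster sketch using that package might look like this:

package main

import (
    "fmt"
    "log"

    "cloud.google.com/go/compute/metadata"
)

func main() {
    // The metadata server is only reachable from inside GCE/GKE.
    if !metadata.OnGCE() {
        log.Fatal("not running on GCE/GKE")
    }
    name, err := metadata.InstanceAttributeValue("cluster-name")
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println("cluster name:", name)
}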