Whenever you start a Kubernetes cluster at one of the big clouds (EKS at AWS, GKE at GCP, AKS at Azure, or Kubernetes at DigitalOcean), you can generate a kubeconfig file for it, which grants you full access.
That is very convenient to work with, but I am always concerned about what happens if someone manages to steal that file. What can I do then?
I have never found a button at any of the big clouds to revoke access for a stolen kubeconfig and to generate a new one. Is there anything I can do to make that aspect more secure? If you have documentation at hand, that would be appreciated.
In GKE on GCP, the kubeconfig file generated during cluster creation is located at $HOME/.kube/config by default, where $HOME refers to the user's home directory (e.g. /home/<user>).
1. If you want to remove a user from the kubeconfig file, use the following command:
$ kubectl --kubeconfig=<kubeconfig-name> config unset users.<name>
2. If you want to regenerate the kubeconfig contents, re-authorize against the cluster using the command:
$ gcloud container clusters get-credentials <cluster-name> --zone <zone> --project <project-id>
3. If you want to restrict access to the kubeconfig file, set its permissions using one of the following commands:
$ chmod 644 <kubeconfig-file> - the owner can read and write the file; everyone else on the system can only read it.
$ chmod 640 <kubeconfig-file> - the owner has read and write permissions, the group has read permission, and all other users have no access to the file.
$ chmod 600 <kubeconfig-file> - only the owner of the file has read and write access. Once the permissions are set to 600, no one else can access the file.
NOTE: Deleting a kubeconfig file does not revoke the credentials it contains; you can only regenerate the kubeconfig contents by authorizing against the cluster again.
Refer to the documentation for more information.
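Putting those steps together, a minimal sketch (the user entry, cluster, zone, and project names below are placeholders for your own values):
kubectl config unset users.gke_my-project_europe-west4-a_my-cluster                               # 1. drop the compromised user entry
gcloud container clusters get-credentials my-cluster --zone europe-west4-a --project my-project   # 2. regenerate the credentials
chmod 600 $HOME/.kube/config                                                                      # 3. restrict the file to its owner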
I'm attempting to write MLflow artifacts to an NFS-mounted PVC. It's a new PVC mounted at /opt/mlflow, but MLflow doesn't seem to have permission to write to it. The specific error I'm getting is:
PermissionError: [Errno 13] Permission denied: '/opt/mlflow'
I ran the same deployment with an S3-backed artifact store, and that worked just fine. That was on my home computer though, and I don't have the ability to do that at work. The MLflow documentation seems to indicate that I don't need any special syntax for NFS mounts.
Independent of MLflow, you can approach this as a standard file-permission problem.
Exec into your pod and view the permissions at that file path:
kubectl exec -it <pod> -- sh
ls -l /opt/mlflow
Within the pod/container, see what user you are running as:
whoami
If your user doesn't have access to that file path, you could adjust the file permissions by mounting the PVC into a different pod that runs as a user with permission to change them. Or you could try using fsGroup to control the ownership of the mounted files, which you can read more about here.
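As a rough illustration of the fsGroup idea (the pod name, image, PVC name, and group id below are all assumptions; whether fsGroup is actually applied also depends on the volume plugin):
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: mlflow-test
spec:
  securityContext:
    fsGroup: 1000              # assumed group id of the mlflow user
  containers:
  - name: mlflow
    image: my-mlflow-image     # placeholder image
    volumeMounts:
    - name: artifacts
      mountPath: /opt/mlflow
  volumes:
  - name: artifacts
    persistentVolumeClaim:
      claimName: mlflow-pvc    # placeholder PVC name
EOF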
I installed gcloud SDK with brew cask install google-cloud-sdk
$ gcloud container clusters get-credentials my-gke-cluster --region europe-west4-c
Fetching cluster endpoint and auth data.
ERROR: (gcloud.container.clusters.get-credentials)
Unable to write file [/Users/xxxxx/my-repo]: [Errno 21] Is a directory: '/Users/xxxxx/my-repo'
Now the permissions of the folder and all files within it are restricted to 600 (drw-------). I tried reinstalling gcloud, but that had no effect on its behavior.
I assume you're using macOS and I'm unfamiliar with it.
The gcloud container clusters get-credentials command should write to a file called ${HOME}/.kube/config.
The error suggests that it's trying to write the credentials to /Users/xxxxx/my-repo and this is determined by the value of ${KUBECONFIG}. Have you changed either ${KUBECONFIG} or your ${HOME} environment variable? You should be able to printf "HOME=${HOME}\nKUBECONFIG=${KUBECONFIG}" to inspect these.
You may be able to choose a different destination by adjusting the value of KUBECONFIG. Perhaps point it at a file (for example /Users/xxxxx/.kube/config) and try the command again.
Ultimately, this is some sugar to update the local configuration file. It should be possible to create this manually if need be. If the above doesn't work, I can update this answer with more details.
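For example, a quick sketch of inspecting the variables and retrying with KUBECONFIG pointing at the default file (the cluster name and region are taken from the question):
printf 'HOME=%s\nKUBECONFIG=%s\n' "$HOME" "$KUBECONFIG"
export KUBECONFIG="$HOME/.kube/config"
gcloud container clusters get-credentials my-gke-cluster --region europe-west4-c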
I have a Rancher installation with LDAP integration. Some of our users should be able to work with kubectl but should not be able to access the Rancher web-GUI. How can I generate the kubeconfig files for those users?
Usually the users can get the kubeconfig file themselves in the GUI, but what does the process look like without Rancher GUI access? Is there a way to generate those kubeconfig files with an admin user?
Thanks for your help.
Look here:
get_kubeconfig_custom_cluster_rancher2.sh
Each user has their own namespace with an associated secret. You can get the kubeconfig file, as mentioned in the script, with something like:
docker exec $CONTID kubectl get secret c-$CLUSTERID -n cattle-system -o json | jq -r .data.cluster | base64 -d | jq -r .metadata.state > kubeconfig
However, this only works for a local user created by an admin. An LDAP/AD user has to log in once before an ID exists for them.
I'm performing the usual operation of fetching Kubernetes cluster credentials from GCP. The gcloud command doesn't fetch the credentials and, surprisingly, changes the permissions of the local directory:
~/tmp/1> ls
~/tmp/1> gcloud container clusters get-credentials production-ng
Fetching cluster endpoint and auth data.
ERROR: (gcloud.container.clusters.get-credentials) Unable to write file [/home/vladimir/tmp/1]: [Errno 21] Is a directory: '/home/vladimir/tmp/1'
~/tmp/1> ls
ls: cannot open directory '.': Permission denied
Other commands, like gcloud container clusters list, work fine. I've tried reinstalling gcloud.
This happens if your KUBECONFIG has an empty entry, like :/Users/acme/.kube/config
gcloud resolves the empty entry as the current directory, changes its permissions, and tries to write to it.
Reported at https://issuetracker.google.com/issues/143911217
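For example (the path is the one from the illustration above; adjust to your own setup):
echo "$KUBECONFIG"
# :/Users/acme/.kube/config   <- the leading colon is an empty entry
export KUBECONFIG=/Users/acme/.kube/config   # drop the empty entry
gcloud container clusters get-credentials production-ng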
It happened to be a problem with kubectl. Reinstalling it solved this strange issue.
If you, like me, are stuck with strange gcloud behavior, the following points could help to track down the issue:
Check the alias command and whether it really points to the intended binary;
Launch a separate Docker container with the Cloud SDK and feed it your config files. If gcloud container clusters get-credentials ... runs smoothly there, then it's a problem with the binaries (not the configuration):
docker run -it \
-v $HOME/.config:/root/.config \
-v $HOME/.kube:/root/.kube google/cloud-sdk:217.0.0-alpine sh
A problem with the binaries can be solved just by reinstalling/updating;
If it's a problem with configs, then you could back them up and reinstall kubectl / gsutil from scratch using not just apt-get remove ..., but apt-get purge .... Be aware: purge removes config files!
Hope this helps somebody else.
I'm using this Dockerfile to deploy PostgreSQL on OpenShift: https://github.com/sclorg/postgresql-container/tree/master/9.5
It worked fine until I enabled ssl=on and injected the server.crt and server.key files into the postgres pod via a volume mount.
The secret is created like this:
$ oc secret new postgres-secrets \
server.key=postgres/server.key \
server.crt=postgres/server.crt \
root-ca.crt=ca-cert
The volume is created as below and attached to the DeploymentConfig of postgres.
$ oc volume dc/postgres \
--add --type=secret \
--secret-name=postgres-secrets \
--default-mode=0600 \
-m /var/lib/pgdata/data/secrets/secrets/
The problem is that the mounted server.crt and server.key files are owned by the root user, but postgres expects them to be owned by the postgres user. Because of that the postgres server won't come up and reports this error:
waiting for server to start....FATAL: could not load server
certificate file "/var/lib/pgdata/data/secrets/secrets/server.crt":
Permission denied stopped waiting pg_ctl: could not start server
How can we mount a volume and update the uid:gid of the files in it?
It looks like this is not trivial, as it requires setting the volume security context so that all the containers in the pod run as a certain user: https://docs.openshift.com/enterprise/3.1/install_config/persistent_storage/pod_security_context.html
In the Kubernetes project, this is something that is still under discussion (https://github.com/kubernetes/kubernetes/issues/2630), but it seems that you may have to use security contexts and PodSecurityPolicies in order to make it work.
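As a sketch of that security-context route (the fsGroup value of 26 is an assumption about the postgres group id in this image; OpenShift SCCs may also restrict which values are allowed):
# Set an fsGroup on the pod template so the mounted secret files become group-readable
oc patch dc/postgres -p '{"spec":{"template":{"spec":{"securityContext":{"fsGroup":26}}}}}'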
I think the easiest option (without using the above) would be to use a container entrypoint that, before actually executing PostgreSQL, chowns the files to the proper user (postgres in this case).
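A rough sketch of that entrypoint idea (the original entrypoint name and the target path are assumptions; since secret mounts are read-only, the files are copied to a writable location first and postgresql.conf would need to point at the copies):
#!/bin/sh
# Wrapper entrypoint sketch: copy the mounted certs somewhere writable,
# fix ownership and permissions, then hand off to the image's normal startup.
set -e
CERT_DIR=/var/lib/pgdata/data/certs
mkdir -p "$CERT_DIR"
cp /var/lib/pgdata/data/secrets/secrets/server.crt "$CERT_DIR/"
cp /var/lib/pgdata/data/secrets/secrets/server.key "$CERT_DIR/"
chown postgres:postgres "$CERT_DIR"/server.*   # requires the container to start as root
chmod 600 "$CERT_DIR"/server.key
# ssl_cert_file and ssl_key_file in postgresql.conf should reference $CERT_DIR
exec run-postgresql "$@"                        # assumed original entrypoint of this image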