I have been trying to find out how to do this but so far have found nothing; I am quite new to Kubernetes, so I might just have overlooked it. I want to use my own certificate for the Kubernetes API server. Is this possible? And if so, can someone perhaps give me a link?
Ok, so here is my idea. We know we cannot change the cluster certs, but there is another way to do it: we should be able to proxy through an ingress.
First we enable the ingress addon:
➜ ~ minikube addons enable ingress
Given tls.crt and tls.key we create a secret (you don't need to do this if you are using cert-manager, but that requires some additional steps I am not going to describe here):
➜ ~ kubectl create secret tls my-tls --cert=tls.crt --key=tls.key
and an ingress object:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-k8s
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
spec:
  tls:
  - hosts:
    - foo.bar.com
    secretName: my-tls
  rules:
  - host: foo.bar.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: kubernetes
            port:
              number: 443
Notice what the docs say about CN and FQDN (k8s docs):
Referencing this secret in an Ingress tells the Ingress controller to secure the channel from the client to the load balancer using TLS. You need to make sure the TLS secret you created came from a certificate that contains a Common Name (CN), also known as a Fully Qualified Domain Name (FQDN) for https-example.foo.com.
The only issue with this approach is that we cannot use certificates for authentication when accessing from the outside.
But we can use tokens. Here is a page in k8s docs: https://kubernetes.io/docs/reference/access-authn-authz/authentication/ that lists all possible methods of authentication.
For testing I chose a service account token, but feel free to experiment with others.
Let's create a service account, bind a role to it, and try to access the cluster:
➜ ~ kubectl create sa cadmin
serviceaccount/cadmin created
➜ ~ kubectl create clusterrolebinding --clusterrole cluster-admin --serviceaccount default:cadmin cadminbinding
clusterrolebinding.rbac.authorization.k8s.io/cadminbinding created
Now we follow these instructions (access-cluster-api from the docs) to try to access the cluster with the SA token.
➜ ~ APISERVER=https://$(minikube ip)
➜ ~ TOKEN=$(kubectl get secret $(kubectl get serviceaccount cadmin -o jsonpath='{.secrets[0].name}') -o jsonpath='{.data.token}' | base64 --decode )
➜ ~ curl $APISERVER/api --header "Authorization: Bearer $TOKEN" --insecure -H "Host: foo.bar.com"
{
  "kind": "APIVersions",
  "versions": [
    "v1"
  ],
  "serverAddressByClientCIDRs": [
    {
      "clientCIDR": "0.0.0.0/0",
      "serverAddress": "192.168.39.210:8443"
    }
  ]
}
Note: I am testing this with invalid/self-signed certificates and I don't own the foo.bar.com domain, so I need to pass the Host header by hand. For you it may look a bit different, so don't just copy-paste; try to understand what's happening and adjust accordingly. If you have a real domain you should be able to access it directly (no $(minikube ip) necessary).
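As an alternative to passing the Host header by hand, you could map foo.bar.com to the minikube IP in /etc/hosts (just a sketch, assuming you are fine with modifying /etc/hosts for this test):
➜ ~ echo "$(minikube ip) foo.bar.com" | sudo tee -a /etc/hosts
After that, curl https://foo.bar.com/api (still with --insecure because of the self-signed certificate) should work without the manual Host header.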
As you can see, it worked! We got a valid response from the API server.
But we probably don't want to use curl to access k8s.
Let's create a kubeconfig with the token.
kubectl config set-credentials cadmin --token $TOKEN --kubeconfig my-config
kubectl config set-cluster mini --kubeconfig my-config --server https://foo.bar.com
kubectl config set-context mini --kubeconfig my-config --cluster mini --user cadmin
kubectl config use-context --kubeconfig my-config mini
And now we can access k8s with this config:
➜ ~ kubectl get po --kubeconfig my-config
No resources found in default namespace.
Yes, you can use your own certificates and set them in the Kubernetes API server.
Suppose you have created the certificates; move and save them to a specific directory on the node:
{
sudo mkdir -p /var/lib/kubernetes/
sudo mv ca.pem ca-key.pem kubernetes-key.pem kubernetes.pem \
service-account-key.pem service-account.pem \
encryption-config.yaml /var/lib/kubernetes/
}
The instance internal IP address will be used to advertise the API Server to members of the cluster. Get the internal IP:
INTERNAL_IP=$(curl -s -H "Metadata-Flavor: Google" \
http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/ip)
Now you can create the kube-apiserver service and configure it.
Note: the above example specifically assumes GCP instances, so you might have to change some commands, such as:
INTERNAL_IP=$(curl -s -H "Metadata-Flavor: Google" \
http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/ip)
For the above command, you can provide the bare-metal IP manually instead of getting it from the GCP instance metadata API if you are not on GCP.
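On bare metal that could simply be a manual assignment, for example (the address is a placeholder; use your controller node's internal IP):
INTERNAL_IP=192.168.1.10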
Please refer to this link: https://github.com/kelseyhightower/kubernetes-the-hard-way/blob/master/docs/08-bootstrapping-kubernetes-controllers.md#configure-the-kubernetes-api-server
There you can find all the details for creating and setting up a whole Kubernetes cluster from scratch, along with detailed documentation and commands: https://github.com/kelseyhightower/kubernetes-the-hard-way
Goal
I want to allow unauthenticated access to the following OIDC endpoints in my K3s cluster (from other pods inside the cluster mostly but access from outside is also acceptable):
https://kubernetes.default.svc/.well-known/openid-configuration
https://kubernetes.default.svc/openid/v1/jwks
Problem
By default, Kubernetes requires an authorization token for accessing those endpoints and, despite my efforts to enable unauthenticated access, I cannot seem to get it to work.
What I have tried
According to the Kubernetes documentation on Service account issuer discovery, one must create a ClusterRoleBinding that maps the ClusterRole system:service-account-issuer-discovery to the Group system:unauthenticated.
I also found this helpful example for a different use-case but they're exposing the OIDC endpoints anonymously just like I want to do:
OIDC issuer discovery for Kubernetes service accounts
Based on both of those, I created my ClusterRoleBinding with:
kubectl create clusterrolebinding service-account-issuer-discovery-unauthenticated --clusterrole=system:service-account-issuer-discovery --group=system:unauthenticated
This results in the following spec:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  creationTimestamp: "2022-11-28T15:24:41Z"
  name: service-account-issuer-discovery-unauthenticated
  resourceVersion: "92377634"
  uid: 75402324-a8cf-412f-923e-a7a87ed082c2
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:service-account-issuer-discovery
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:unauthenticated
I also confirmed RBAC is enabled by running this which showed an entry:
kubectl api-versions | grep 'rbac.authorization.k8s.io/v1'
Unfortunately, despite creating that ClusterRoleBinding, it still seems to be requiring an authorization token to access the OIDC endpoints. For example, when I call from outside the cluster:
curl -vk https://my-cluster-hostname:6443/.well-known/openid-configuration
Output:
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "Unauthorized",
  "reason": "Unauthorized",
  "code": 401
}
Accessing from inside the Ingress controller pod (by execing into it), I get the same error:
curl -vk https://kubernetes.default.svc/.well-known/openid-configuration
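For reference, execing in can be done roughly like this (the namespace and deployment name of the ingress controller are placeholders and will likely differ in your cluster):
kubectl -n ingress-nginx exec -it deploy/ingress-nginx-controller -- \
  curl -vk https://kubernetes.default.svc/.well-known/openid-configuration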
I am using K3s for this cluster... Is there something special about K3s that I have not accounted for? Do I need to do something to get this ClusterRoleBinding to take effect? Something else I missed?
The issue was that the --anonymous-auth API server setting is set to false by default.
I was able to adjust this with my already-installed K3s server nodes by editing the systemd unit for the k3s service. Step-by-step guide:
Edit the systemd unit with:
sudo vim /etc/systemd/system/k3s.service
You'll see the K3s unit:
[Unit]
Description=Lightweight Kubernetes
Documentation=https://k3s.io
Wants=network-online.target
After=network-online.target
[Install]
WantedBy=multi-user.target
[Service]
Type=notify
EnvironmentFile=-/etc/default/%N
EnvironmentFile=-/etc/sysconfig/%N
EnvironmentFile=-/etc/systemd/system/k3s.service.env
KillMode=process
Delegate=yes
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=1048576
LimitNPROC=infinity
LimitCORE=infinity
TasksMax=infinity
TimeoutStartSec=0
Restart=always
RestartSec=5s
ExecStartPre=/bin/sh -xc '! /usr/bin/systemctl is-enabled --quiet nm-cloud-setup.service'
ExecStartPre=-/sbin/modprobe br_netfilter
ExecStartPre=-/sbin/modprobe overlay
ExecStart=/usr/local/bin/k3s \
server \
'--cluster-init' \
'--disable=traefik' \
'--write-kubeconfig-mode' \
'644' \
Add the last two lines to the ExecStart:
ExecStart=/usr/local/bin/k3s \
server \
'--cluster-init' \
'--disable=traefik' \
'--write-kubeconfig-mode' \
'644' \
'--kube-apiserver-arg' \
'--anonymous-auth=true' \
Reload the systemd unit:
sudo systemctl daemon-reload
Finally, restart the k3s service:
sudo systemctl restart k3s
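After the restart, a quick sanity check is to repeat the earlier request (the hostname is a placeholder for your API server):
curl -k https://my-cluster-hostname:6443/.well-known/openid-configuration
With --anonymous-auth=true and the ClusterRoleBinding from the question in place, this should now return the OIDC discovery document instead of the 401 Unauthorized status.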
References:
Control Plane Execution and Arguments - You can see the default for K3s there
API discovery roles - also mentions this flag at the top
I set up K3s on a server with:
curl -sfL https://get.k3s.io | K3S_TOKEN={token} INSTALL_K3S_EXEC="server --cluster-init --disable=traefik --write-kubeconfig-mode 644" sh -s -
Then I grabbed the kube config from /etc/rancher/k3s/k3s.yaml and copied it to my local machine so I can interact with the cluster from my machine rather than from the server node I installed K3s on. I also had to swap out the references to 127.0.0.1 and change them to the actual hostname of the server I installed K3s on, but other than that it worked.
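In case it helps, that copy-and-edit step can be done roughly like this (my-k3s-server is a placeholder for your server's hostname; k3s.yaml is world-readable here because of --write-kubeconfig-mode 644):
scp user@my-k3s-server:/etc/rancher/k3s/k3s.yaml ~/.kube/config
sed -i 's/127.0.0.1/my-k3s-server/g' ~/.kube/config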
I then hooked up 2 more server nodes to the cluster for a High Availability setup using:
curl -sfL https://get.k3s.io | K3S_TOKEN={token} INSTALL_K3S_EXEC="server --server https://{hostname or IP of server 1}:6443 --disable=traefik --write-kubeconfig-mode 644" sh -s -
Now, on my local machine again, I run kubectl get pods (for example) and that works. But I want a highly available setup, so I placed a TCP load balancer (NGINX, actually) in front of my cluster. Now I am trying to connect to the Kubernetes API through that proxy / load balancer and, unfortunately, since my ~/.kube/config uses a client certificate for authentication, this no longer works: the load balancer / proxy that sits in front of the servers cannot pass my client cert on to the K3s server.
My ~/.kube/config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: {omitted}
    server: https://my-cluster-hostname:6443
  name: default
contexts:
- context:
    cluster: default
    user: default
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: default
  user:
    client-certificate-data: {omitted}
    client-key-data: {omitted}
I also grabbed the client cert and key from my kube config, exported them to files, and hit the API server with curl; that works when I directly hit the server nodes but NOT when I go through my proxy / load balancer.
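For completeness, that direct test looked roughly like this (node-1 is a placeholder for one of the server nodes, and the jsonpath index assumes the default user is the first entry in the kubeconfig):
# export the client cert/key from the kubeconfig
kubectl config view --raw -o jsonpath='{.users[0].user.client-certificate-data}' | base64 -d > client.crt
kubectl config view --raw -o jsonpath='{.users[0].user.client-key-data}' | base64 -d > client.key
# works when pointed at a node directly, fails through the load balancer
curl -k --cert client.crt --key client.key https://node-1:6443/api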
What I would like to do instead of using the client certificate approach is use token authentication as my proxy would not interfere with that. However, I am not sure how to get such a token. I read the Kubernetes Authenticating guide and specifically I tried creating a new service account and getting the token associated with it as described in the Service Account Tokens section but that also did not work. I also dug through K3s server config options to see if there was any mention of static token file, etc. but didn't find anything that seemed likely.
Is this some limitation of K3s or am I just doing something wrong (likely)?
My kubectl version output:
Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.7", GitCommit:"132a687512d7fb058d0f5890f07d4121b3f0a2e2", GitTreeState:"clean", BuildDate:"2021-05-12T12:40:09Z", GoVersion:"go1.15.12", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.7+k3s1", GitCommit:"ac70570999c566ac3507d2cc17369bb0629c1cc0", GitTreeState:"clean", BuildDate:"2021-11-29T16:40:13Z", GoVersion:"go1.16.10", Compiler:"gc", Platform:"linux/amd64"}
I figured out an approach that works for me by reading through the Kubernetes Authenticating Guide in more detail. I settled on the Service Account Tokens approach as it says:
Normally these secrets are mounted into pods for in-cluster access to
the API server, but can be used from outside the cluster as well.
My use is for outside the cluster.
First, I created a new ServiceAccount called cluster-admin:
kubectl create serviceaccount cluster-admin
I then created a ClusterRoleBinding to assign cluster-wide permissions to my ServiceAccount (I named this cluster-admin-manual because K3s already had created one called cluster-admin that I didn't want to mess with):
kubectl create clusterrolebinding cluster-admin-manual --clusterrole=cluster-admin --serviceaccount=default:cluster-admin
Now you have to get the Secret that is created for you when you created your ServiceAccount:
kubectl get serviceaccount cluster-admin -o yaml
You'll see something like this returned:
apiVersion: v1
kind: ServiceAccount
metadata:
  creationTimestamp: "2021-12-20T15:55:55Z"
  name: cluster-admin
  namespace: default
  resourceVersion: "3973955"
  uid: 66bab124-8d71-4e5f-9886-0bad0ebd30b2
secrets:
- name: cluster-admin-token-67jtw
Get the Secret content with:
kubectl get secret cluster-admin-token-67jtw -o yaml
In that output you will see the data/token property. This is a base64 encoded JWT bearer token. Decode it with:
echo {base64-encoded-token} | base64 --decode
Now you have your bearer token and you can add a user to your ~/.kube/config with the following command. You can also paste that JWT into jwt.io to take a look at the properties and make sure you base64 decoded it properly.
kubectl config set-credentials my-cluster-admin --token={token}
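If you prefer to do the lookup, decode, and credential setup in one go, something like this should also work (using the secret name from the output above):
TOKEN=$(kubectl get secret cluster-admin-token-67jtw -o jsonpath='{.data.token}' | base64 --decode)
kubectl config set-credentials my-cluster-admin --token=$TOKEN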
Then make sure your existing context in your ~/.kube/config has the user set appropriately (I did this manually by editing my kube config file but there's probably a kubectl config command for it). For example:
- context:
    cluster: my-cluster
    user: my-cluster-admin
  name: my-cluster
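The kubectl equivalent is probably something like this (assuming your context and cluster are both named my-cluster, as above):
kubectl config set-context my-cluster --cluster=my-cluster --user=my-cluster-admin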
My user in the kube config looks like this:
- name: my-cluster-admin
  user:
    token: {token}
Now I can authenticate to the cluster using the token instead of relying on a transport layer specific mechanism (TLS with Mutual Auth) that my proxy / load-balancer does not interfere with.
Other resources I found helpful:
Kubernetes — Role-Based Access Control (RBAC) Overview by Anish Patel
I am using kubectl to connect to a remote Kubernetes cluster (v1.15.2). I copied the config from the remote server to my local macOS machine:
scp -r root@ip:~/.kube/config ~/.kube
and changed the URL to https://k8s-ctl.example.com. I exposed the API server to the internet:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURvakNDQW9xZ0F3SUJBZ0lVU3FpUlZSU3FEOG1PemRCT1MyRzlJdGE0R2Nrd0RRWUpLb1pJaHZjTkFRRUwKQlFB92FERUxNQWtHQTFVRUJoTUNRMDR4RURBT0JnTlZCQWdUQjBKbGFVcHBibWN4RURBT0JnTlZCQWNUQjBKbAphVXBwYm1jeEREQUtCZ05WQkFvVEEyczRjekVTTUJBR0ExVUVDeE1KTkZCaGNtRmthV2R0TVJNd0VRWURWUVFECkV3cHJkV0psY201bGRHVnpNQ0FYR3RFNU1Ea3hNekUxTkRRd01Gb1lEekl4TVRrd09ESXdNVFUwTkRBd1dqQm8KTVFzd0NRWURWUVFHRXdKRFRqRVFNQTRHQTFVRUNCTUhRbVZwU21sdVp6RVFNQTRHQTFVRUJ4TUhRbVZwU21sdQpaekVNTUFvR0ExVUVDaE1EYXpoek1SSXdFQVlEVlFRTEV3azBVR0Z5WVdScFoyMHhFekFSQmdOVkJBTVRDbXQxClltVnlibVYwWlhNd2dnRWlNQTBHQ1NxR1NJYjNEUUVCQVFVQUE0SUJEd0F3Z2dFS0FvSUJBUUNzOGFFR2R2TUgKb0E1eTduTjVydnAvQkEyTVM1RG1TNWwzQ0p4S3VMOGJ1bkF3alF1c0lTalUxVWlqeVdGOW03VzA3elZJaVJpRwpiYzZGT3JkSEJ2QXgzazBpT2pPRlduTHp1UjdaSFhqQ3lTbDJRam9YN3gzL0l1MERQelNHTXJLSzZETGpTRTk4CkdadEpjUi9OSmpiVFFJc3FXbWFEdUIyc3dmcEc1ZmlZU1A1KzVpcld1TG1pYjVnWnJYeUJJNlJ0dVV4K1NvdW0KN3RDKzJaVG5QdFF0QnFUZHprT3p3THhwZ0Zhd1kvSU1mbHBsaUlMTElOamcwRktxM21NOFpUa0xvNXcvekVmUApHT25GNkNFWlR6bkdrTWc2aUVGenNBcDU5N2lMUDBNTkR4YUNjcTRhdTlMdnMzYkdOZmpqdDd3WkxIVklLa0lGCm44Mk92cExGaElq2kFnTUJBQUdqUWpCQU1BNEdBMVVkRHdFQi93UUVBd0lCQmpBUEJnTlZIUk1CQWY4RUJUQUQKQVFIL01CMEdBMVVkRGdRV0JCUm0yWHpJSHNmVzFjMEFGZU9SLy9Qakc4dWdzREFOQmdrcWhraUc5dzBCQVFzRgpBQU9DQVFFQW1mOUozN3RYTys1dWRmT2RLejdmNFdMZyswbVJUeTBRSEVIblk5VUNLQi9vN2hLUVJHRXI3VjNMCktUeGloVUhvbHY1QzVUdG8zbUZJY2FWZjlvZlp0VVpvcnpxSUFwNE9Od1JpSnQ1Yk94K1d6SW5qN2JHWkhnZjkKSk8rUmNxQnQrUWsrejhTMmJKRG04WFdvMW5WdjJRNU1pUndPdnRIbnRxd3MvTlJ2bHBGV25ISHBEVExjOU9kVwpoMllzWVpEMmV4d0FRVDkxSlExVjRCdklrZGFPeW9USHZ6U2oybThSTzh6b3JBd09kS1NTdG9TZXdpOEhMeGI2ClhmaTRFbjR4TEE3a3pmSHFvcDZiSFF1L3hCa0JzYi9hd29kdDJKc2FnOWFZekxEako3S1RNYlQyRW52MlllWnIKSUhBcjEyTGVCRGRHZVd1eldpZDlNWlZJbXJvVnNRPT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    server: https://k8s-ctl.example.com
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    namespace: kube-system
    user: admin
  name: kubernetes
current-context: kubernetes
kind: Config
preferences: {}
users:
- name: admin
  user:
When I get cluster pod info on my local Mac:
kubectl get pods --all-namespaces
it gives this error:
Unable to connect to the server: x509: certificate signed by unknown authority
When I access https://k8s-ctl.example.com in Google Chrome, the result is:
{
  kind: "Status",
  apiVersion: "v1",
  metadata: { },
  status: "Failure",
  message: "Unauthorized",
  reason: "Unauthorized",
  code: 401
}
What should I do to successfully access the remote k8s cluster using the kubectl client?
One thing I have tried is using this .kube/config generated by command, but I get the same result:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: ssl/ca.pem
    server: https://k8s-ctl.example.com
  name: default
contexts:
- context:
    cluster: default
    user: admin
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: admin
  user:
    client-certificate: ssl/admin.pem
    client-key: ssl/admin-key.pem
I've reproduced your problem and, as you created your cluster following kubernetes-the-hard-way, you need to follow these steps to be able to access your cluster from a different console.
First you have to copy the following certificates, created while you were bootstrapping your cluster, to the ~/.kube/ directory on your local machine:
ca.pem
admin.pem
admin-key.pem
After copying these files to your local machine, execute the following commands:
kubectl config set-cluster kubernetes-the-hard-way \
--certificate-authority=~/.kube/ca.pem \
--embed-certs=true \
--server=https://${KUBERNETES_PUBLIC_ADDRESS}:6443
kubectl config set-credentials admin \
--client-certificate=~/.kube/admin.pem \
--client-key=~/.kube/admin-key.pem
kubectl config set-context kubernetes-the-hard-way \
--cluster=kubernetes-the-hard-way \
--user=admin
kubectl config use-context kubernetes-the-hard-way
Notice that you have to replace the ${KUBERNETES_PUBLIC_ADDRESS} variable with the remote address to your cluster.
When kubectl interacts with the kube API server it will validate the kube API server certificate as well as send the certificate in client-certificate to the kube API server for mutual TLS authentication. I believe the problem is one of the below:
The CA that you have used to generate the client-certificate is not the CA that was used to start up the kube API server.
The CA in certificate-authority-data is not the CA used to generate the kube API server certificate.
If you make sure that you are using the same CA to generate all the certificates consistently across the board, then it should work.
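A quick way to check this is with openssl (a sketch; ssl/ca.pem and ssl/admin.pem are the paths from the kubeconfig in the question, and kubernetes.pem stands for whatever certificate your API server was started with):
# does the CA actually sign your client certificate?
openssl verify -CAfile ssl/ca.pem ssl/admin.pem
# does the API server certificate chain back to the same CA?
openssl verify -CAfile ssl/ca.pem kubernetes.pem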
I work on a multi-tenant Node app. I know that to create a new namespace in Kubernetes it is possible to run a kubectl command as follows:
kubectl create namespace <namespace name>
How can I create a new namespace from the Node microservices when a new customer signs up for a new account?
Is there some kubectl API to make a request from an external app?
Is it necessary, when the user logs out from the app, to destroy the pods created in Kubernetes?
It could be as simple as calling from a shell in your app:
kubectl create namespace <your-namespace-name>
Essentially, kubectl talks to the kube-apiserver.
You can also directly call the kube-apiserver. This is an example to list the pods:
$ curl -k -H 'Authorization: Bearer <token>' \
https://$KUBERNETES_SERVICE_HOST:6443/api/<api-version>/namespaces/default/pods
More specifically to create a namespace:
$ curl -k -X POST -H 'Content-Type: application/json' \
  -H 'Authorization: Bearer <token>' \
  https://$KUBERNETES_SERVICE_HOST:6443/api/v1/namespaces/ -d '
{
  "apiVersion": "v1",
  "kind": "Namespace",
  "metadata": {
    "name": "mynewnamespace"
  }
}'
In case you are wondering about the <token>, it's a Kubernetes Secret typically belonging to a ServiceAccount and bound to a ClusterRole that allows you to create namespaces.
You can create a Service Account like this:
$ kubectl create serviceaccount namespace-creator
Then you'll see the token like this (a token is automatically generated):
$ kubectl describe sa namespace-creator
Name: namespace-creator
Namespace: default
Labels: <none>
Annotations: <none>
Image pull secrets: <none>
Mountable secrets: namespace-creator-token-xxxxx
Tokens: namespace-creator-token-xxxxx
Events: <none>
Then you would get the secret:
$ kubectl describe secret namespace-creator-token-xxxxx
Name: namespace-creator-token-xxxx
Namespace: default
Labels: <none>
Annotations: kubernetes.io/service-account.name: namespace-creator
kubernetes.io/service-account.uid: <redacted>
Type: kubernetes.io/service-account-token
Data
====
ca.crt: 1025 bytes
namespace: 7 bytes
token: <REDACTED> <== This is the token you need for Authorization: Bearer
Your ClusterRole should look something like this:
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: namespace-creator
rules:
- apiGroups: ["*"]
  resources: ["namespaces"]
  verbs: ["create"]
Then you would bind it like this:
$ kubectl create clusterrolebinding namespace-creator-binding --clusterrole=namespace-creator --serviceaccount=default:namespace-creator
When it comes to writing code you can use any HTTP client library in any language to call the same endpoints.
There are also libraries like the client-go library that takes care of the plumbing of connecting to a kube-apiserver.
You can create a namespace using the command below:
kubectl create namespace <namespace_name>
Please find below some examples
kubectl create namespace dev
kubectl create namespace test
kubectl create namespace prod
To see namespace created:
kubectl get namespace
or
kubectl get ns
or
kubectl get namespaces
To avoid mentioning the namespace in every kubectl command execution (for example while creating a pod, deployment, or any other Kubernetes object), set the namespace as shown below:
kubectl config set-context --current --namespace=test
I hope this helped!!
Depending on the language in which your microservice is implemented, you can just use the client library inside that microservice, or write a new microservice in the language of your choice and, as answered above, use a service account with a ClusterRoleBinding that can create namespaces, and you are good to go.
Client Libraries here:
Python
Go
Java
Javascript
You can create a namespace either on the command line or using a definition file:
To create a namespace using the command line:
kubectl create namespace dev
Another way to create a namespace is by using a definition file, as sketched below.
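A minimal sketch of such a definition file, created here with a heredoc (the namespace name dev matches the command-line example above):
cat <<EOF > namespace-dev.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: dev
EOF
kubectl create -f namespace-dev.yaml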
Note: Although the above methods create the namespace "dev", your default namespace may still be "default" (meaning your default namespace is NOT dev).
If you want to change your default namespace to dev so that you don't have to always specify it, you can do the following:
kubectl config set-context $(kubectl config current-context) --namespace=dev
This command first identifies the current context and then sets the namespace to the desired one.
Now that you have changed your default namespace to dev, when you run
kubectl get pods
It will list all the pods in the dev namespace.
I've created a Kubernetes cluster on AWS with kops and can successfully administer it via kubectl from my local machine.
I can view the current config with kubectl config view as well as directly access the stored state at ~/.kube/config, such as:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: REDACTED
    server: https://api.{CLUSTER_NAME}
  name: {CLUSTER_NAME}
contexts:
- context:
    cluster: {CLUSTER_NAME}
    user: {CLUSTER_NAME}
  name: {CLUSTER_NAME}
current-context: {CLUSTER_NAME}
kind: Config
preferences: {}
users:
- name: {CLUSTER_NAME}
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
    password: REDACTED
    username: admin
- name: {CLUSTER_NAME}-basic-auth
  user:
    password: REDACTED
    username: admin
I need to enable other users to also administer. This user guide describes how to define these on another user's machine, but doesn't describe how to actually create the user's credentials within the cluster itself. How do you do this?
Also, is it safe to just share the cluster.certificate-authority-data?
For a full overview on Authentication, refer to the official Kubernetes docs on Authentication and Authorization
For users, ideally you use an Identity provider for Kubernetes (OpenID Connect).
If you are on GKE / ACS you integrate with respective Identity and Access Management frameworks
If you self-host kubernetes (which is the case when you use kops), you may use coreos/dex to integrate with LDAP / OAuth2 identity providers - a good reference is this detailed 2 part SSO for Kubernetes article.
kops (1.10+) now has built-in authentication support which eases the integration with AWS IAM as identity provider if you're on AWS.
for Dex there are a few open source cli clients as follows:
Nordstrom/kubelogin
pusher/k8s-auth-example
If you are looking for a quick and easy way to get started (not the most secure, and not easy to manage in the long run), you may abuse service accounts, with 2 options for specialised policies to control access (see below).
NOTE: since 1.6 Role-Based Access Control is strongly recommended! This answer does not cover RBAC setup.
EDIT: A great, but outdated (2017-2018), guide by Bitnami on user setup with RBAC is also available.
Steps to enable service account access are (depending on whether your cluster configuration includes RBAC or ABAC policies, these accounts may have full Admin rights!):
EDIT: Here is a bash script to automate Service Account creation - see below steps
Create service account for user Alice
kubectl create sa alice
Get related secret
secret=$(kubectl get sa alice -o json | jq -r .secrets[].name)
Get ca.crt from secret (using OSX base64 with -D flag for decode)
kubectl get secret $secret -o json | jq -r '.data["ca.crt"]' | base64 -D > ca.crt
Get service account token from secret
user_token=$(kubectl get secret $secret -o json | jq -r '.data["token"]' | base64 -D)
Get information from your kubectl config (current-context, server..)
# get current context
c=$(kubectl config current-context)
# get cluster name of context
name=$(kubectl config get-contexts $c | awk '{print $3}' | tail -n 1)
# get endpoint of current context
endpoint=$(kubectl config view -o jsonpath="{.clusters[?(@.name == \"$name\")].cluster.server}")
On a fresh machine, follow these steps (given the ca.crt and $endpoint information retrieved above):
Install kubectl
brew install kubectl
Set cluster (run in directory where ca.crt is stored)
kubectl config set-cluster cluster-staging \
--embed-certs=true \
--server=$endpoint \
--certificate-authority=./ca.crt
Set user credentials
kubectl config set-credentials alice-staging --token=$user_token
Define the combination of alice user with the staging cluster
kubectl config set-context alice-staging \
--cluster=cluster-staging \
--user=alice-staging \
--namespace=alice
Switch current-context to alice-staging for the user
kubectl config use-context alice-staging
To control user access with policies (using ABAC), you need to create a policy file (for example):
{
  "apiVersion": "abac.authorization.kubernetes.io/v1beta1",
  "kind": "Policy",
  "spec": {
    "user": "system:serviceaccount:default:alice",
    "namespace": "default",
    "resource": "*",
    "readonly": true
  }
}
Provision this policy.json on every master node and add --authorization-mode=ABAC --authorization-policy-file=/path/to/policy.json flags to API servers
This would allow Alice (through her service account) read only rights to all resources in default namespace only.
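If your API server runs under systemd (as in kubernetes-the-hard-way), this amounts to appending the two flags to the ExecStart of the kube-apiserver unit; roughly (the unit path and binary location are placeholders, and all of your existing flags stay as they are):
# excerpt from /etc/systemd/system/kube-apiserver.service; keep the existing flags and add these two
ExecStart=/usr/local/bin/kube-apiserver \
  --authorization-mode=ABAC \
  --authorization-policy-file=/path/to/policy.json
Then reload and restart: sudo systemctl daemon-reload && sudo systemctl restart kube-apiserver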
You say:
I need to enable other users to also administer.
But according to the documentation
Normal users are assumed to be managed by an outside, independent service. An admin distributing private keys, a user store like Keystone or Google Accounts, even a file with a list of usernames and passwords. In this regard, Kubernetes does not have objects which represent normal user accounts. Regular users cannot be added to a cluster through an API call.
You have to use a third party tool for this.
== Edit ==
One solution could be to manually create a user entry in the kubeconfig file. From the documentation:
# create kubeconfig entry
$ kubectl config set-cluster $CLUSTER_NICK \
--server=https://1.1.1.1 \
--certificate-authority=/path/to/apiserver/ca_file \
--embed-certs=true \
# Or if tls not needed, replace --certificate-authority and --embed-certs with
--insecure-skip-tls-verify=true \
--kubeconfig=/path/to/standalone/.kube/config
# create user entry
$ kubectl config set-credentials $USER_NICK \
# bearer token credentials, generated on kube master
--token=$token \
# use either username|password or token, not both
--username=$username \
--password=$password \
--client-certificate=/path/to/crt_file \
--client-key=/path/to/key_file \
--embed-certs=true \
--kubeconfig=/path/to/standalone/.kube/config
# create context entry
$ kubectl config set-context $CONTEXT_NAME \
--cluster=$CLUSTER_NICK \
--user=$USER_NICK \
--kubeconfig=/path/to/standalone/.kube/config
The Bitnami guide works for me, even if you use minikube. Most important is that your cluster supports RBAC.
https://docs.bitnami.com/kubernetes/how-to/configure-rbac-in-your-kubernetes-cluster/