Use the Kubernetes REST API without kubectl

You can interact with K8s directly using its REST API. For example, to get pods:
curl http://IPADDR/api/v1/pods
However, I can't find any example of authentication based only on curl or REST. All the examples use kubectl either as a proxy or as a way to get credentials.
If I already have the .kubeconfig file, and nothing else, is there any way to send the HTTP requests directly (e.g. with a token) without using kubectl?

The kubeconfig file you download when you first install the cluster includes a client certificate and key. For example:
clusters:
- cluster:
    certificate-authority-data: ...
    server: https://api.cluster1.ocp.virt:6443
  name: cluster1
contexts:
- context:
    cluster: cluster1
    user: admin
  name: admin
current-context: admin
preferences: {}
users:
- name: admin
  user:
    client-certificate-data: ...
    client-key-data: ...
If you extract the client-certificate-data and client-key-data to files, you can use them to authenticate with curl. To extract the data:
$ yq -r '.users[0].user."client-certificate-data"' kubeconfig | base64 -d > cert
$ yq -r '.users[0].user."client-key-data"' kubeconfig | base64 -d > key
And then using curl:
$ curl -k --cert cert --key key \
  'https://api.cluster1.ocp.virt:6443/api/v1/namespaces/default/pods?limit=500'
{
  "kind": "PodList",
  "apiVersion": "v1",
  "metadata": {
    "resourceVersion": "22022"
  },
  "items": []
}
Alternatively, if your .kubeconfig has tokens in it, like this:
[...]
users:
- name: your_username/api-clustername-domain:6443
  user:
    token: sha256~...
Then you can use that token as a bearer token:
$ curl -k https://api.mycluster.mydomain:6443/ -H 'Authorization: Bearer sha256~...'
...but note that those tokens typically expire after some time, while the certificates should work indefinitely (unless they are revoked somehow).
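To avoid -k (which skips server certificate verification), you can also extract the cluster CA from the same kubeconfig and hand it to curl; a minimal sketch, assuming the same yq approach and file names as above:
$ yq -r '.clusters[0].cluster."certificate-authority-data"' kubeconfig | base64 -d > ca.crt
$ curl --cacert ca.crt --cert cert --key key \
  'https://api.cluster1.ocp.virt:6443/api/v1/namespaces/default/pods?limit=500'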

Related

Using a custom certificate for the Kubernetes api server with minikube

I have been trying to find out how to do this but so far have found nothing; I am quite new to Kubernetes, so I might just have overlooked it. I want to use my own certificate for the Kubernetes API server. Is this possible? And if so, can someone perhaps give me a link?
OK, so here is my idea. We know we cannot change the cluster certs, but there is another way to do it: we should be able to proxy through an ingress.
First we enable the ingress addon:
➜ ~ minikube addons enable ingress
Given tls.crt and tls.key, we create a secret (you don't need to do this if you are using cert-manager, but that requires some additional steps I am not going to describe here):
➜ ~ kubectl create secret tls my-tls --cert=tls.crt --key tls.key
and an ingress object:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-k8s
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
spec:
  tls:
  - hosts:
    - foo.bar.com
    secretName: my-tls
  rules:
  - host: foo.bar.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: kubernetes
            port:
              number: 443
Notice what the k8s docs say about the CN and FQDN:
Referencing this secret in an Ingress tells the Ingress controller to secure the channel from the client to the load balancer using TLS. You need to make sure the TLS secret you created came from a certificate that contains a Common Name (CN), also known as a Fully Qualified Domain Name (FQDN) for https-example.foo.com.
The only issue with this approach is that we cannot use certificates for authentication when accessing from the outside.
But we can use tokens. Here is a page in k8s docs: https://kubernetes.io/docs/reference/access-authn-authz/authentication/ that lists all possible methods of authentication.
For testing I chose a service account token, but feel free to experiment with others.
Let's create a service account, bind a role to it, and try to access the cluster:
➜ ~ kubectl create sa cadmin
serviceaccount/cadmin created
➜ ~ kubectl create clusterrolebinding --clusterrole cluster-admin --serviceaccount default:cadmin cadminbinding
clusterrolebinding.rbac.authorization.k8s.io/cadminbinding created
Now we follow the access-cluster-api instructions from the docs to try to access the cluster with the sa token.
➜ ~ APISERVER=https://$(minikube ip)
➜ ~ TOKEN=$(kubectl get secret $(kubectl get serviceaccount cadmin -o jsonpath='{.secrets[0].name}') -o jsonpath='{.data.token}' | base64 --decode )
➜ ~ curl $APISERVER/api --header "Authorization: Bearer $TOKEN" --insecure -H "Host: foo.bar.com"
{
  "kind": "APIVersions",
  "versions": [
    "v1"
  ],
  "serverAddressByClientCIDRs": [
    {
      "clientCIDR": "0.0.0.0/0",
      "serverAddress": "192.168.39.210:8443"
    }
  ]
}
note: I am testing this with invalid/self-signed certificates, and I don't own the foo.bar.com domain, so I need to pass the Host header by hand. For you it may look a bit different, so don't just copy-paste; try to understand what's happening and adjust it. If you have a domain, you should be able to access it directly (no $(minikube ip) necessary).
As you can see, it worked! We got a valid response from the API server.
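One caveat: on Kubernetes 1.24+ a ServiceAccount no longer gets a long-lived token secret created automatically, so the jsonpath lookup above may come back empty. In that case you can request a short-lived token instead; a sketch, using the same cadmin account as above:
➜ ~ TOKEN=$(kubectl create token cadmin)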
But we probably don't want to use curl to access k8s.
Let's create a kubeconfig with the token.
kubectl config set-credentials cadmin --token $TOKEN --kubeconfig my-config
kubectl config set-cluster mini --kubeconfig my-config --server https://foo.bar.com
kubectl config set-context mini --kubeconfig my-config --cluster mini --user cadmin
kubectl config use-context --kubeconfig my-config mini
And now we can access k8s with this config:
➜ ~ kubectl get po --kubeconfig my-config
No resources found in default namespace.
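A note on TLS here: the generated my-config has no CA configured for the cluster entry, so depending on your certificate, kubectl may refuse the connection. You can either embed the CA into the kubeconfig or, for local experiments only, skip verification; a sketch, assuming the CA (or the self-signed cert itself) is in tls.crt:
kubectl config set-cluster mini --kubeconfig my-config --certificate-authority tls.crt --embed-certs=true
# or, insecure, for testing only:
kubectl config set-cluster mini --kubeconfig my-config --insecure-skip-tls-verify=true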
Yes, you can use your own certificates and set them in the Kubernetes API server.
Suppose you have created the certificates; move and save them to a specific directory on the node:
{
sudo mkdir -p /var/lib/kubernetes/
sudo mv ca.pem ca-key.pem kubernetes-key.pem kubernetes.pem \
service-account-key.pem service-account.pem \
encryption-config.yaml /var/lib/kubernetes/
}
The instance internal IP address will be used to advertise the API Server to members of the cluster. Get the internal IP:
INTERNAL_IP=$(curl -s -H "Metadata-Flavor: Google" \
http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/ip)
Then you can create the API server service and configure it.
Note: the above example specifically assumes GCP instances, so you might have to change some commands, such as:
INTERNAL_IP=$(curl -s -H "Metadata-Flavor: Google" \
http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/ip)
For the above command, you can provide the bare-metal IPs manually instead of fetching them from the GCP instance metadata API if you are not using GCP.
Please refer to this link: https://github.com/kelseyhightower/kubernetes-the-hard-way/blob/master/docs/08-bootstrapping-kubernetes-controllers.md#configure-the-kubernetes-api-server
There you can find all the details for creating and setting up a whole Kubernetes cluster from scratch, with detailed documentation and commands: https://github.com/kelseyhightower/kubernetes-the-hard-way
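For reference, the part of that guide that actually wires your certificates into the API server is a set of kube-apiserver flags in its systemd unit; a trimmed sketch (paths follow the /var/lib/kubernetes layout above, and on bare metal you would simply set INTERNAL_IP by hand):
ExecStart=/usr/local/bin/kube-apiserver \
  --advertise-address=${INTERNAL_IP} \
  --client-ca-file=/var/lib/kubernetes/ca.pem \
  --tls-cert-file=/var/lib/kubernetes/kubernetes.pem \
  --tls-private-key-file=/var/lib/kubernetes/kubernetes-key.pem \
  --service-account-key-file=/var/lib/kubernetes/service-account.pem \
  --encryption-provider-config=/var/lib/kubernetes/encryption-config.yaml \
  ...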

certificate signed by unknown authority when connect to remote kubernetes cluster using kubectl

I am using kubectl to connect to a remote kubernetes cluster (v1.15.2). I copied the config from the remote server to my local macOS machine:
scp -r root@ip:~/.kube/config ~/.kube
and changed the URL to https://k8s-ctl.example.com, since I exposed the API server to the internet:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURvakNDQW9xZ0F3SUJBZ0lVU3FpUlZSU3FEOG1PemRCT1MyRzlJdGE0R2Nrd0RRWUpLb1pJaHZjTkFRRUwKQlFB92FERUxNQWtHQTFVRUJoTUNRMDR4RURBT0JnTlZCQWdUQjBKbGFVcHBibWN4RURBT0JnTlZCQWNUQjBKbAphVXBwYm1jeEREQUtCZ05WQkFvVEEyczRjekVTTUJBR0ExVUVDeE1KTkZCaGNtRmthV2R0TVJNd0VRWURWUVFECkV3cHJkV0psY201bGRHVnpNQ0FYR3RFNU1Ea3hNekUxTkRRd01Gb1lEekl4TVRrd09ESXdNVFUwTkRBd1dqQm8KTVFzd0NRWURWUVFHRXdKRFRqRVFNQTRHQTFVRUNCTUhRbVZwU21sdVp6RVFNQTRHQTFVRUJ4TUhRbVZwU21sdQpaekVNTUFvR0ExVUVDaE1EYXpoek1SSXdFQVlEVlFRTEV3azBVR0Z5WVdScFoyMHhFekFSQmdOVkJBTVRDbXQxClltVnlibVYwWlhNd2dnRWlNQTBHQ1NxR1NJYjNEUUVCQVFVQUE0SUJEd0F3Z2dFS0FvSUJBUUNzOGFFR2R2TUgKb0E1eTduTjVydnAvQkEyTVM1RG1TNWwzQ0p4S3VMOGJ1bkF3alF1c0lTalUxVWlqeVdGOW03VzA3elZJaVJpRwpiYzZGT3JkSEJ2QXgzazBpT2pPRlduTHp1UjdaSFhqQ3lTbDJRam9YN3gzL0l1MERQelNHTXJLSzZETGpTRTk4CkdadEpjUi9OSmpiVFFJc3FXbWFEdUIyc3dmcEc1ZmlZU1A1KzVpcld1TG1pYjVnWnJYeUJJNlJ0dVV4K1NvdW0KN3RDKzJaVG5QdFF0QnFUZHprT3p3THhwZ0Zhd1kvSU1mbHBsaUlMTElOamcwRktxM21NOFpUa0xvNXcvekVmUApHT25GNkNFWlR6bkdrTWc2aUVGenNBcDU5N2lMUDBNTkR4YUNjcTRhdTlMdnMzYkdOZmpqdDd3WkxIVklLa0lGCm44Mk92cExGaElq2kFnTUJBQUdqUWpCQU1BNEdBMVVkRHdFQi93UUVBd0lCQmpBUEJnTlZIUk1CQWY4RUJUQUQKQVFIL01CMEdBMVVkRGdRV0JCUm0yWHpJSHNmVzFjMEFGZU9SLy9Qakc4dWdzREFOQmdrcWhraUc5dzBCQVFzRgpBQU9DQVFFQW1mOUozN3RYTys1dWRmT2RLejdmNFdMZyswbVJUeTBRSEVIblk5VUNLQi9vN2hLUVJHRXI3VjNMCktUeGloVUhvbHY1QzVUdG8zbUZJY2FWZjlvZlp0VVpvcnpxSUFwNE9Od1JpSnQ1Yk94K1d6SW5qN2JHWkhnZjkKSk8rUmNxQnQrUWsrejhTMmJKRG04WFdvMW5WdjJRNU1pUndPdnRIbnRxd3MvTlJ2bHBGV25ISHBEVExjOU9kVwpoMllzWVpEMmV4d0FRVDkxSlExVjRCdklrZGFPeW9USHZ6U2oybThSTzh6b3JBd09kS1NTdG9TZXdpOEhMeGI2ClhmaTRFbjR4TEE3a3pmSHFvcDZiSFF1L3hCa0JzYi9hd29kdDJKc2FnOWFZekxEako3S1RNYlQyRW52MlllWnIKSUhBcjEyTGVCRGRHZVd1eldpZDlNWlZJbXJvVnNRPT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    server: https://k8s-ctl.example.com
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    namespace: kube-system
    user: admin
  name: kubernetes
current-context: kubernetes
kind: Config
preferences: {}
users:
- name: admin
  user:
When I get cluster pod info on my local Mac:
kubectl get pods --all-namespaces
it gives this error:
Unable to connect to the server: x509: certificate signed by unknown authority
When I access https://k8s-ctl.example.com in Google Chrome, the result is:
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "Unauthorized",
  "reason": "Unauthorized",
  "code": 401
}
What should I do to successfully access the remote k8s cluster using the kubectl client?
One approach I tried was using this .kube/config generated by command, but I get the same result:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: ssl/ca.pem
    server: https://k8s-ctl.example.com
  name: default
contexts:
- context:
    cluster: default
    user: admin
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: admin
  user:
    client-certificate: ssl/admin.pem
    client-key: ssl/admin-key.pem
I've reproduced your problem, and since you created your cluster following kubernetes-the-hard-way, you need to follow these steps to be able to access your cluster from a different console.
First you have to copy the following certificates, created while you were bootstrapping your cluster, to the ~/.kube/ directory on your local machine:
ca.pem
admin.pem
admin-key.pem
After copying these files to your local machine, execute the following commands:
kubectl config set-cluster kubernetes-the-hard-way \
--certificate-authority=~/.kube/ca.pem \
--embed-certs=true \
--server=https://${KUBERNETES_PUBLIC_ADDRESS}:6443
kubectl config set-credentials admin \
--client-certificate=~/.kube/admin.pem \
--client-key=~/.kube/admin-key.pem
kubectl config set-context kubernetes-the-hard-way \
--cluster=kubernetes-the-hard-way \
--user=admin
kubectl config use-context kubernetes-the-hard-way
Notice that you have to replace the ${KUBERNETES_PUBLIC_ADDRESS} variable with the remote address to your cluster.
When kubectl interacts with the kube API server, it validates the kube API server certificate and also sends the certificate in client-certificate to the kube API server for mutual TLS authentication. I believe the problem is one of the below:
The CA that you used to generate the client-certificate is not the CA that was used to start up the kube API server.
The CA in certificate-authority-data is not the CA used to generate the kube API server certificate.
If you make sure that you are using the same CA to generate all the certificates consistently across the board, then it should work.
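A quick way to test both hypotheses is with openssl; a sketch, assuming the ca.pem and admin.pem files from the question:
# does the client certificate chain up to the CA you think it does?
$ openssl verify -CAfile ca.pem admin.pem
# which issuer actually signed the API server's serving certificate?
$ openssl s_client -connect k8s-ctl.example.com:443 </dev/null 2>/dev/null | openssl x509 -noout -issuer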

How to Add Users to Kubernetes (kubectl)?

I've created a Kubernetes cluster on AWS with kops and can successfully administer it via kubectl from my local machine.
I can view the current config with kubectl config view as well as directly access the stored state at ~/.kube/config, such as:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: REDACTED
    server: https://api.{CLUSTER_NAME}
  name: {CLUSTER_NAME}
contexts:
- context:
    cluster: {CLUSTER_NAME}
    user: {CLUSTER_NAME}
  name: {CLUSTER_NAME}
current-context: {CLUSTER_NAME}
kind: Config
preferences: {}
users:
- name: {CLUSTER_NAME}
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
    password: REDACTED
    username: admin
- name: {CLUSTER_NAME}-basic-auth
  user:
    password: REDACTED
    username: admin
I need to enable other users to also administer. This user guide describes how to define these on another user's machine, but doesn't describe how to actually create the user's credentials within the cluster itself. How do you do this?
Also, is it safe to just share the cluster.certificate-authority-data?
For a full overview on Authentication, refer to the official Kubernetes docs on Authentication and Authorization
For users, ideally you use an Identity provider for Kubernetes (OpenID Connect).
If you are on GKE / ACS you integrate with respective Identity and Access Management frameworks
If you self-host kubernetes (which is the case when you use kops), you may use coreos/dex to integrate with LDAP / OAuth2 identity providers - a good reference is this detailed 2 part SSO for Kubernetes article.
kops (1.10+) now has built-in authentication support which eases the integration with AWS IAM as identity provider if you're on AWS.
For Dex there are a few open-source CLI clients, as follows:
Nordstrom/kubelogin
pusher/k8s-auth-example
If you are looking for a quick and easy way to get started (not the most secure or easiest to manage in the long run), you may abuse serviceaccounts, with 2 options for specialised Policies to control access (see below).
NOTE: since 1.6, Role-Based Access Control is strongly recommended! This answer does not cover RBAC setup.
EDIT: Great, but outdated (2017-2018), guide by Bitnami on User setup with RBAC is also available.
Steps to enable service account access are (depending on whether your cluster configuration includes RBAC or ABAC policies, these accounts may have full Admin rights!):
EDIT: Here is a bash script to automate Service Account creation - see below steps
Create service account for user Alice
kubectl create sa alice
Get related secret
secret=$(kubectl get sa alice -o json | jq -r .secrets[].name)
Get ca.crt from secret (using OSX base64 with -D flag for decode)
kubectl get secret $secret -o json | jq -r '.data["ca.crt"]' | base64 -D > ca.crt
Get service account token from secret
user_token=$(kubectl get secret $secret -o json | jq -r '.data["token"]' | base64 -D)
Get information from your kubectl config (current-context, server..)
# get current context
c=$(kubectl config current-context)
# get cluster name of context
name=$(kubectl config get-contexts $c | awk '{print $3}' | tail -n 1)
# get endpoint of current context
endpoint=$(kubectl config view -o jsonpath="{.clusters[?(@.name == \"$name\")].cluster.server}")
On a fresh machine, follow these steps (given the ca.crt and $endpoint information retrieved above):
Install kubectl
brew install kubectl
Set cluster (run in directory where ca.crt is stored)
kubectl config set-cluster cluster-staging \
--embed-certs=true \
--server=$endpoint \
--certificate-authority=./ca.crt
Set user credentials
kubectl config set-credentials alice-staging --token=$user_token
Define the combination of alice user with the staging cluster
kubectl config set-context alice-staging \
--cluster=cluster-staging \
--user=alice-staging \
--namespace=alice
Switch current-context to alice-staging for the user
kubectl config use-context alice-staging
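At this point it is worth checking what the new context can actually do; a quick sanity check, assuming the alice namespace exists:
kubectl auth can-i list pods --namespace alice
kubectl get pods --namespace alice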
To control user access with policies (using ABAC), you need to create a policy file (for example):
{
"apiVersion": "abac.authorization.kubernetes.io/v1beta1",
"kind": "Policy",
"spec": {
"user": "system:serviceaccount:default:alice",
"namespace": "default",
"resource": "*",
"readonly": true
}
}
Provision this policy.json on every master node and add --authorization-mode=ABAC --authorization-policy-file=/path/to/policy.json flags to API servers
This would give Alice (through her service account) read-only rights to all resources in the default namespace only.
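Since RBAC is the recommended mode nowadays (see the note above), a rough RBAC equivalent of that ABAC policy is a namespaced Role plus RoleBinding; a sketch, with illustrative names:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: alice-read-only
  namespace: default
rules:
- apiGroups: ["*"]
  resources: ["*"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: alice-read-only
  namespace: default
subjects:
- kind: ServiceAccount
  name: alice
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: alice-read-only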
You say:
I need to enable other users to also administer.
But according to the documentation
Normal users are assumed to be managed by an outside, independent service. An admin distributing private keys, a user store like Keystone or Google Accounts, even a file with a list of usernames and passwords. In this regard, Kubernetes does not have objects which represent normal user accounts. Regular users cannot be added to a cluster through an API call.
You have to use a third party tool for this.
== Edit ==
One solution could be to manually create a user entry in the kubeconfig file. From the documentation:
# create kubeconfig entry
$ kubectl config set-cluster $CLUSTER_NICK \
--server=https://1.1.1.1 \
--certificate-authority=/path/to/apiserver/ca_file \
--embed-certs=true \
# Or if tls not needed, replace --certificate-authority and --embed-certs with
--insecure-skip-tls-verify=true \
--kubeconfig=/path/to/standalone/.kube/config
# create user entry
$ kubectl config set-credentials $USER_NICK \
# bearer token credentials, generated on kube master
--token=$token \
# use either username|password or token, not both
--username=$username \
--password=$password \
--client-certificate=/path/to/crt_file \
--client-key=/path/to/key_file \
--embed-certs=true \
--kubeconfig=/path/to/standalone/.kube/config
# create context entry
$ kubectl config set-context $CONTEXT_NAME \
--cluster=$CLUSTER_NICK \
--user=$USER_NICK \
--kubeconfig=/path/to/standalone/.kube/config
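Once those three entries exist, point kubectl at the new context and try it out; a minimal usage sketch:
$ kubectl config use-context $CONTEXT_NAME --kubeconfig=/path/to/standalone/.kube/config
$ kubectl get pods --kubeconfig=/path/to/standalone/.kube/config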
The Bitnami guide works for me, even if you use minikube. Most important is that your cluster supports RBAC.
https://docs.bitnami.com/kubernetes/how-to/configure-rbac-in-your-kubernetes-cluster/

Openshift 3.1: List projects of a serviceaccount using REST - returns empty list

I'm playing around with the OpenShift REST API and service accounts, with the goal of setting up something to partly automate the administration of OpenShift via REST calls.
Currently still on Openshift 3.1.
How far I got:
I managed to create a serviceaccount, give it access, and use it for REST calls. I cannot, however, get it to list projects.
# account exists with name robot in namespace default
$ oc describe serviceaccount robot
Name: robot
Namespace: default
Labels: <none>
...
# tried a couple of approaches for granting access:
$ oc policy add-role-to-user admin system:serviceaccounts:robot -n default
$ oc policy add-role-to-user admin robot -n default
$ oc policy add-role-to-user admin system:serviceaccounts:default:robot -n default
# get token en do REST call
$ SECRET=`oc describe serviceaccount robot | grep -i tokens | awk '{print $2}'`
$ TOKEN=`oc describe secret $SECRET | grep -i ^token | awk '{print $2}'`
$ curl -X GET -H "Authorization: Bearer $TOKEN" https://$OPENSHIFT_HOSTNAME/oapi/v1/projects --insecure
{
  "kind": "ProjectList",
  "apiVersion": "v1",
  "metadata": {
    "selfLink": "/oapi/v1/projects"
  },
  "items": []
}
I don't understand why my list of projects is still empty. I've tried adding the serviceaccount to multiple other projects, but the list stays empty.
You are very close to solving it. The user that you used for your policy needs to contain a namespace:
$ oc policy add-role-to-user admin system:serviceaccount:default:robot -n default
This is the namespace of the project in which the robot has been created. In your first oc describe serviceaccount robot, the namespace default is used.
Also, I see that you retrieve a secret of the cloudforms service account, but I guess it is just a typo.
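For reference, the singular and plural prefixes name different subject kinds, which is easy to trip over; a sketch of the two forms (both are standard oc policy subcommands):
# grant the role to one service account (a user subject):
$ oc policy add-role-to-user admin system:serviceaccount:default:robot -n default
# grant the role to every service account in the default namespace (a group subject):
$ oc policy add-role-to-group admin system:serviceaccounts:default -n default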
See also more docs about it: OpenShift Origin ServiceAccounts

Access Pod information using Master Public IP in Kubernetes

I can get Pods information using http://localhost:8001/api/v1/pods from inside my cluster.
Is there any way to get pod information using http://master-public-ip:8001/api/v1/pods ?
By default, the master only exposes HTTPS to the public internet, not HTTP. You should be able to hit https://admin:password@master-public-ip/api/v1/pods/, where password is the generated password for the admin user. This can be found either in the .kube/config file on your machine, or in the /srv/kubernetes/known_tokens.csv file on the master.
E.g. on the master VM:
$ cat /srv/kubernetes/known_tokens.csv
mYpASSWORD,admin,admin
unused,kubelet,kubelet
...
Or on your machine:
$ cat ~/.kube/config
...
- name: my-cluster
user:
client-certificate-data: ...
client-key-data: ...
password: mYpASSWORD
username: admin
...
$ curl --insecure https://admin:mYpASSWORD@master-public-ip/api/v1/pods/
...
To avoid using --insecure (i.e. actually verify the server certificate that your master is presenting), you can use the --cacert flag to specify the cluster certificate authority from your .kube/config file.
$ cat ~/.kube/config
...
- cluster:
certificate-authority-data: bIgLoNgBaSe64eNcOdEdStRiNg
server: https://master-public-ip
name: my-cluster
...
$ echo bIgLoNgBaSe64eNcOdEdStRiNg | base64 -d > ca.crt
$ curl --cacert ca.crt https://admin:mYpASSWORD@master-public-ip/api/v1/pods/
...