Migrating a Kubernetes cluster to another OpenStack region

I am trying to migrate a Kubernetes cluster (master and worker instances) to a different OpenStack region. I managed to start the cluster after some simple modifications (changed cloud-config, node labels). There is one problem left: storage. In this setup I use the OpenStack internal cloud provider, which manages Cinder volumes as PVs for pods. The new region uses different zone names and volume types, and the volume IDs have changed. It is not possible to change these values by modifying the SC and PV definitions via e.g. kubectl.
I wonder if it is possible to change these values directly in the etcd database?
So far, I tried to modify the PV definition, but it appears that Kubernetes also inserts additional characters, and modifying it is not so obvious.
What I did:
Got the PV definition from etcd and saved it to a file:
docker run --rm -it --net host -v /etc/kubernetes:/etc/kubernetes k8s.gcr.io/etcd:3.4.3-0 etcdctl --cert /etc/kubernetes/pki/etcd/healthcheck-client.crt --key /etc/kubernetes/pki/etcd/healthcheck-client.key --cacert /etc/kubernetes/pki/etcd/ca.crt --endpoints https://kube-dev02-master01:2379 get /registry/persistentvolumes/pvc-1625baa0-e36c-4e2b-ad3d-0dfecc910ae0 --print-value-only > pv1.txt
Changed the region name, zone name and volume ID (with vi).
Loaded the modified value back into etcd:
docker run --rm -it --net host -v /etc/kubernetes:/etc/kubernetes k8s.gcr.io/etcd:3.4.3-0 etcdctl --cert /etc/kubernetes/pki/etcd/healthcheck-client.crt --key /etc/kubernetes/pki/etcd/healthcheck-client.key --cacert /etc/kubernetes/pki/etcd/ca.crt --endpoints https://kube-dev02-master01:2379 put /registry/persistentvolumes/pvc-1625baa0-e36c-4e2b-ad3d-0dfecc910ae0 "$(cat pv1.txt)"
Checked PV from kubectl:
[kubeadmin@kube-dev02-master01 ~]$ kubectl get pv
Error from server: illegal base64 data at input byte 5
So it seems that something could be wrong with encoding, but I do not know where.
Output of PV value stored in etcd
Kubernetes v1.17.5, etcd v3.4.3-0, installed with kubeadm.
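One likely explanation for the base64 error: Kubernetes stores its objects in etcd as binary protobuf, so saving the value to a text file, editing it in vi and re-submitting it via "$(cat pv1.txt)" mangles the binary payload. A hedged sketch of a round-trip that keeps the encoding intact, assuming the auger decoder (github.com/etcd-io/auger) is installed on the master (it is not part of the original setup), could look like this:
# run etcdctl with -i (no -t) so binary values can be piped through stdin/stdout
ETCDCTL="docker run --rm -i --net host -v /etc/kubernetes:/etc/kubernetes k8s.gcr.io/etcd:3.4.3-0 etcdctl --cert /etc/kubernetes/pki/etcd/healthcheck-client.crt --key /etc/kubernetes/pki/etcd/healthcheck-client.key --cacert /etc/kubernetes/pki/etcd/ca.crt --endpoints https://kube-dev02-master01:2379"
# decode the protobuf value into editable YAML (auger is an assumption, not something the cluster ships with)
$ETCDCTL get /registry/persistentvolumes/pvc-1625baa0-e36c-4e2b-ad3d-0dfecc910ae0 --print-value-only | auger decode > pv1.yaml
vi pv1.yaml   # change region, zone name and volume ID here
# re-encode to protobuf and write it back; etcdctl put reads the value from stdin when none is given on the command line
auger encode < pv1.yaml | $ETCDCTL put /registry/persistentvolumes/pvc-1625baa0-e36c-4e2b-ad3d-0dfecc910ae0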

Related

Local cluster k8s not added in Rancher - "No secret assigned to service account cattle-system/rancher"

Local K8s - 1.24.2
Rancher - 2.6.6
Rancher is installed on a virtual machine on the same subnet as the K8s cluster.
Installed single-node Rancher with a volume:
docker run -d --restart=unless-stopped -p 80:80 -p 443:443 -v /opt/rancher:/var/lib/rancher --privileged rancher/rancher:latest
After that I tried to add the existing cluster; since I only have self-signed certificates, I used:
% curl --insecure -sfL https://<My-IP>:8446/v3/import/dtdsrjczmb2bg79r82x9qd8g9gjc8d5fl86drc8m9zhpst2d9h6pfn.yaml | kubectl apply -f -
namespace/cattle-system created
serviceaccount/cattle created
clusterrolebinding.rbac.authorization.k8s.io/cattle-admin-binding created
secret/cattle-credentials-3035218 created
clusterrole.rbac.authorization.k8s.io/cattle-admin created
deployment.extensions/cattle-cluster-agent created
daemonset.extensions/cattle-node-agent created
All the needed resources are created.
Then K8s tries to start the pod, but the pod can't start and fails with the error: "starting" time="2022-07-11T13:19:14Z" level=fatal msg="looking up cattle-system/cattle ca/token: no secret exists for service account cattle-system/cattle"
I checked, and the cattle service account doesn't have a secret. It looks like the function above expects ServiceAccountRootCAKey and ServiceAccountTokenKey. How can I get this info from my Rancher server? Can I add this secret to the service account, and will that solve the issue?
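For reference, Kubernetes 1.24 stopped auto-creating token Secrets for service accounts, which is why cattle-system/cattle has none. A hedged sketch of creating such a token Secret by hand (the name cattle-token is made up for illustration, and whether the Rancher agent then picks it up depends on the Rancher version):
kubectl apply -f - <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: cattle-token              # hypothetical name, not taken from the question
  namespace: cattle-system
  annotations:
    kubernetes.io/service-account.name: cattle
type: kubernetes.io/service-account-token
EOF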

kubectl -- create -h works but kubectl create -h doesn't

Running minikube on Windows 10, why does minikube kubectl create -h not work but minikube kubectl -- create -h does (w.r.t. showing help for create)?
This is the way minikube works:
Minikube has a subcommand kubectl that will execute the kubectl bundled with minikube (because you can also have one installed outside of minikube, on your plain system).
Minikube has to know the exact command to pass to its kubectl, so minikube splits the command at --.
The -- is used to differentiate minikube's arguments from kubectl's.
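A minimal illustration of the difference:
minikube kubectl -- create -h   # everything after -- is forwarded untouched to the bundled kubectl
minikube kubectl create -h      # without --, the -h may be consumed by minikube's own flag parsing instead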

Kubernetes: How to get other pods' name from within a pod?

I'd like to find out the names of the other pods running in the same single-host cluster. All pods are single-application containers. I have pod A (written in Java) that acts as a TCP/IP server, and a few other pods (written in C++) connect to the server.
In pod A, I can get the IP addresses of the clients (the other pods). How do I get their pod names? I can't run kubectl commands because pod A has no kubectl installed.
Thanks,
You can directly call kube-apiserver with cURL.
First you need a service account bound to a ClusterRole so it is allowed to send requests to the API server.
kubectl create clusterrole listpods --verb=get,list,watch --resource=pods
kubectl create clusterrolebinding listpods --clusterrole=listpods --serviceaccount=default:default
Get a shell inside a container
kubectl exec -it deploy/YOUR_DEPLOYMENT -- sh
Define the necessary parameters for your cURL call; run the commands below inside the container
APISERVER=https://kubernetes.default.svc
SERVICEACCOUNT=/var/run/secrets/kubernetes.io/serviceaccount
NAMESPACE=$(cat ${SERVICEACCOUNT}/namespace)
TOKEN=$(cat ${SERVICEACCOUNT}/token)
CACERT=${SERVICEACCOUNT}/ca.crt
Send pods list request to apiserver
curl -s -k -H "Authorization: Bearer $TOKEN" -H 'Accept: application/json' $APISERVER/api/v1/pods | jq "[ .items[] | .metadata.name ]"
Done! It will return a JSON list of pod names, similar to kubectl get pods.
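Since the original question is about mapping a client IP back to a pod name, a hedged variation of the same request (the IP below is only an example value) filters on .status.podIP:
CLIENT_IP=10.244.1.7   # example client address as seen by pod A
curl -s -k -H "Authorization: Bearer $TOKEN" -H 'Accept: application/json' $APISERVER/api/v1/pods | jq -r --arg ip "$CLIENT_IP" '.items[] | select(.status.podIP == $ip) | .metadata.name'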
For more examples, you can check the OpenShift REST API Reference. Also, if you are planning to do some programmatic stuff, I advise you to check out the official kubernetes-clients.
Credits for the jq improvement to @moonkotte

How to get Kubernetes secret from one cluster to apply to another?

For my e2e tests I'm spinning up a separate cluster into which I'd like to import my production TLS certificate. I'm having trouble switching the context between the two clusters (export/get from one and import/apply into the other) because the cluster doesn't seem to be visible.
I extracted an MVCE using GitLab CI and the following .gitlab-ci.yml, where I create a secret for demonstration purposes:
stages:
  - main
  - tear-down
main:
  image: google/cloud-sdk
  stage: main
  script:
    - echo "$GOOGLE_KEY" > key.json
    - gcloud config set project secret-transfer
    - gcloud auth activate-service-account --key-file key.json --project secret-transfer
    - gcloud config set compute/zone us-central1-a
    - gcloud container clusters create secret-transfer-1-$CI_COMMIT_SHORT_SHA-$CI_PIPELINE_IID --project secret-transfer --machine-type=f1-micro
    - kubectl create secret generic secret-1 --from-literal=key=value
    - gcloud container clusters create secret-transfer-2-$CI_COMMIT_SHORT_SHA-$CI_PIPELINE_IID --project secret-transfer --machine-type=f1-micro
    - gcloud config set container/use_client_certificate True
    - gcloud config set container/cluster secret-transfer-1-$CI_COMMIT_SHORT_SHA-$CI_PIPELINE_IID
    - kubectl get secret letsencrypt-prod --cluster=secret-transfer-1-$CI_COMMIT_SHORT_SHA-$CI_PIPELINE_IID -o yaml > secret-1.yml
    - gcloud config set container/cluster secret-transfer-2-$CI_COMMIT_SHORT_SHA-$CI_PIPELINE_IID
    - kubectl apply --cluster=secret-transfer-2-$CI_COMMIT_SHORT_SHA-$CI_PIPELINE_IID -f secret-1.yml
tear-down:
  image: google/cloud-sdk
  stage: tear-down
  when: always
  script:
    - echo "$GOOGLE_KEY" > key.json
    - gcloud config set project secret-transfer
    - gcloud auth activate-service-account --key-file key.json
    - gcloud config set compute/zone us-central1-a
    - gcloud container clusters delete --quiet secret-transfer-1-$CI_COMMIT_SHORT_SHA-$CI_PIPELINE_IID
    - gcloud container clusters delete --quiet secret-transfer-2-$CI_COMMIT_SHORT_SHA-$CI_PIPELINE_IID
I added secret-transfer-[1/2]-$CI_COMMIT_SHORT_SHA-$CI_PIPELINE_IID before kubectl statements in order to avoid error: no server found for cluster "secret-transfer-1-...-...", but it doesn't change the outcome.
I created a project secret-transfer, activated the Kubernetes API and got a JSON key for the Compute Engine service account which I'm providing in the environment variable GOOGLE_KEY. The output after checkout is
$ echo "$GOOGLE_KEY" > key.json
$ gcloud config set project secret-transfer
Updated property [core/project].
$ gcloud auth activate-service-account --key-file key.json --project secret-transfer
Activated service account credentials for: [131478687181-compute@developer.gserviceaccount.com]
$ gcloud config set compute/zone us-central1-a
Updated property [compute/zone].
$ gcloud container clusters create secret-transfer-1-$CI_COMMIT_SHORT_SHA-$CI_PIPELINE_IID --project secret-transfer --machine-type=f1-micro
WARNING: In June 2019, node auto-upgrade will be enabled by default for newly created clusters and node pools. To disable it, use the `--no-enable-autoupgrade` flag.
WARNING: Starting in 1.12, new clusters will have basic authentication disabled by default. Basic authentication can be enabled (or disabled) manually using the `--[no-]enable-basic-auth` flag.
WARNING: Starting in 1.12, new clusters will not have a client certificate issued. You can manually enable (or disable) the issuance of the client certificate using the `--[no-]issue-client-certificate` flag.
WARNING: Currently VPC-native is not the default mode during cluster creation. In the future, this will become the default mode and can be disabled using `--no-enable-ip-alias` flag. Use `--[no-]enable-ip-alias` flag to suppress this warning.
WARNING: Starting in 1.12, default node pools in new clusters will have their legacy Compute Engine instance metadata endpoints disabled by default. To create a cluster with legacy instance metadata endpoints disabled in the default node pool, run `clusters create` with the flag `--metadata disable-legacy-endpoints=true`.
WARNING: Your Pod address range (`--cluster-ipv4-cidr`) can accommodate at most 1008 node(s).
This will enable the autorepair feature for nodes. Please see https://cloud.google.com/kubernetes-engine/docs/node-auto-repair for more information on node autorepairs.
Creating cluster secret-transfer-1-9b219ea8-9 in us-central1-a...
...done.
Created [https://container.googleapis.com/v1/projects/secret-transfer/zones/us-central1-a/clusters/secret-transfer-1-9b219ea8-9].
To inspect the contents of your cluster, go to: https://console.cloud.google.com/kubernetes/workload_/gcloud/us-central1-a/secret-transfer-1-9b219ea8-9?project=secret-transfer
kubeconfig entry generated for secret-transfer-1-9b219ea8-9.
NAME LOCATION MASTER_VERSION MASTER_IP MACHINE_TYPE NODE_VERSION NUM_NODES STATUS
secret-transfer-1-9b219ea8-9 us-central1-a 1.12.8-gke.10 34.68.118.165 f1-micro 1.12.8-gke.10 3 RUNNING
$ kubectl create secret generic secret-1 --from-literal=key=value
secret/secret-1 created
$ gcloud container clusters create secret-transfer-2-$CI_COMMIT_SHORT_SHA-$CI_PIPELINE_IID --project secret-transfer --machine-type=f1-micro
WARNING: In June 2019, node auto-upgrade will be enabled by default for newly created clusters and node pools. To disable it, use the `--no-enable-autoupgrade` flag.
WARNING: Starting in 1.12, new clusters will have basic authentication disabled by default. Basic authentication can be enabled (or disabled) manually using the `--[no-]enable-basic-auth` flag.
WARNING: Starting in 1.12, new clusters will not have a client certificate issued. You can manually enable (or disable) the issuance of the client certificate using the `--[no-]issue-client-certificate` flag.
WARNING: Currently VPC-native is not the default mode during cluster creation. In the future, this will become the default mode and can be disabled using `--no-enable-ip-alias` flag. Use `--[no-]enable-ip-alias` flag to suppress this warning.
WARNING: Starting in 1.12, default node pools in new clusters will have their legacy Compute Engine instance metadata endpoints disabled by default. To create a cluster with legacy instance metadata endpoints disabled in the default node pool, run `clusters create` with the flag `--metadata disable-legacy-endpoints=true`.
WARNING: Your Pod address range (`--cluster-ipv4-cidr`) can accommodate at most 1008 node(s).
This will enable the autorepair feature for nodes. Please see https://cloud.google.com/kubernetes-engine/docs/node-auto-repair for more information on node autorepairs.
Creating cluster secret-transfer-2-9b219ea8-9 in us-central1-a...
...done.
Created [https://container.googleapis.com/v1/projects/secret-transfer/zones/us-central1-a/clusters/secret-transfer-2-9b219ea8-9].
To inspect the contents of your cluster, go to: https://console.cloud.google.com/kubernetes/workload_/gcloud/us-central1-a/secret-transfer-2-9b219ea8-9?project=secret-transfer
kubeconfig entry generated for secret-transfer-2-9b219ea8-9.
NAME LOCATION MASTER_VERSION MASTER_IP MACHINE_TYPE NODE_VERSION NUM_NODES STATUS
secret-transfer-2-9b219ea8-9 us-central1-a 1.12.8-gke.10 104.198.37.21 f1-micro 1.12.8-gke.10 3 RUNNING
$ gcloud config set container/use_client_certificate True
Updated property [container/use_client_certificate].
$ gcloud config set container/cluster secret-transfer-1-$CI_COMMIT_SHORT_SHA-$CI_PIPELINE_IID
Updated property [container/cluster].
$ kubectl get secret secret-1 --cluster=secret-transfer-1-$CI_COMMIT_SHORT_SHA-$CI_PIPELINE_IID -o yaml > secret-1.yml
error: no server found for cluster "secret-transfer-1-9b219ea8-9"
I'm expecting kubectl get secret to work because both clusters exist and the --cluster argument points to the right cluster.
Generally gcloud commands are used to manage gcloud resources and handle how you authenticate with gcloud, whereas kubectl commands affect how you interact with Kubernetes clusters, whether or not they happen to be running on GCP and/or created in GKE. As such, I would avoid doing:
$ gcloud config set container/use_client_certificate True
Updated property [container/use_client_certificate].
$ gcloud config set container/cluster \
secret-transfer-1-$CI_COMMIT_SHORT_SHA-$CI_PIPELINE_IID
Updated property [container/cluster].
It's not doing what you probably think it's doing (namely, changing anything about how kubectl targets clusters), and might mess with how future gcloud commands work.
Another consequence of gcloud and kubectl being separate, and in particular kubectl not knowing intimately about your gcloud settings, is that the cluster name from the gcloud perspective is not the same as from the kubectl perspective. When you do things like gcloud config set compute/zone, kubectl doesn't know anything about that, so it has to be able to identify clusters uniquely, since they may have the same name but be in different projects and zones, and maybe not even in GKE (like minikube or some other cloud provider). That's why kubectl --cluster=<gke-cluster-name> <some_command> is not going to work, and it's why you're seeing the error message:
error: no server found for cluster "secret-transfer-1-9b219ea8-9"
As @coderanger pointed out, the cluster name that gets generated in your ~/.kube/config file after doing gcloud container clusters create ... has a more complex name, which currently has a pattern something like gke_[project]_[region]_[name].
So you could run commands with kubectl --cluster gke_[project]_[region]_[name] ... (or kubectl --context [project]_[region]_[name] ... which would be more idiomatic, although both will happen to work in this case since you're using the same service account for both clusters), however that requires knowledge of how gcloud generates these strings for context and cluster names.
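For illustration, using the names from this run (hedged, since the exact generated strings can differ between gcloud versions):
kubectl config get-contexts -o name
kubectl --context gke_secret-transfer_us-central1-a_secret-transfer-1-9b219ea8-9 get secret secret-1 -o yaml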
An alternative would be to do something like:
$ KUBECONFIG=~/.kube/config1 gcloud container clusters create \
secret-transfer-1-$CI_COMMIT_SHORT_SHA-$CI_PIPELINE_IID \
--project secret-transfer --machine-type=f1-micro
$ KUBECONFIG=~/.kube/config1 kubectl create secret generic secret-1 --from-literal=key=value
$ KUBECONFIG=~/.kube/config2 gcloud container clusters create \
secret-transfer-2-$CI_COMMIT_SHORT_SHA-$CI_PIPELINE_IID \
--project secret-transfer --machine-type=f1-micro
$ KUBECONFIG=~/.kube/config1 kubectl get secret secret-1 -o yaml > secret-1.yml
$ KUBECONFIG=~/.kube/config2 kubectl apply -f secret-1.yml
By having separate KUBECONFIG files that you control, you don't have to guess any strings. Setting the KUBECONFIG variable when creating a cluster will result in creating that file, with gcloud putting the credentials for kubectl to access that cluster into it. Setting the KUBECONFIG environment variable when running a kubectl command will ensure kubectl uses the context as set in that particular file.
You probably mean to be using --context rather than --cluster. The context sets both the cluster and user in use. Additionally the context and cluster (and user) names created by GKE are not just the cluster identifier, it's gke_[project]_[region]_[name].

How to access Kubernetes API when using minikube?

What is the correct way to set up a Kubernetes cluster using minikube and access it through the Kubernetes API?
At the moment, I can't find a port through which the kubernetes cluster can be accessed.
The easiest way to access the Kubernetes API when running minikube is to use
kubectl proxy --port=8080
You can then access the API with
curl http://localhost:8080/api/
This also allows you to browse the API in your browser. Start minikube using
minikube start --extra-config=apiserver.Features.EnableSwaggerUI=true
then start kubectl proxy, and navigate to http://localhost:8080/swagger-ui/ in your browser.
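For example, with the proxy running, resources can be listed without presenting any certificates or tokens, since the proxy handles authentication for you:
curl http://localhost:8080/api/v1/namespaces/default/pods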
You can access the Kubernetes API with curl directly using
curl --cacert ~/.minikube/ca.crt --cert ~/.minikube/client.crt --key ~/.minikube/client.key https://`minikube ip`:8443/api/
but usually there is no advantage in doing so. Common browsers are not happy with the certificates minikube generates, so if you want to access the API with your browser you need to use kubectl proxy.
Running minikube start will automatically configure kubectl.
You can run minikube ip to get the IP that your minikube is on. The API server runs on 8443 by default.
Update: To access the API server directly, you'll need to use the custom SSL certs that have been generated by minikube. The client certificate and key are typically stored at ~/.minikube/apiserver.crt and ~/.minikube/apiserver.key. You'll have to load them into your HTTPS client when you make requests.
If you're using curl use the --cert and the --key options to use the cert and key file. Check the docs for more details.
Update 2: The client certificate and key are typically stored in the ~/.minikube/profiles/minikube directory when you use version >= 0.19 (more information). You probably need to pass the --insecure option to curl because of the self-signed certificate.
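Putting the updates together, a hedged example of calling the API server directly (paths assume minikube >= 0.19 and the default API port 8443):
curl --cacert ~/.minikube/ca.crt --cert ~/.minikube/profiles/minikube/client.crt --key ~/.minikube/profiles/minikube/client.key https://$(minikube ip):8443/api/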
I went through lots of answers, but many of them are wrong.
Before we start, we need the IP address and a token.
How to get IP: minikube ip
How to generate Token:
$ export secret=$(kubectl get serviceaccount default -o json | jq -r '.secrets[].name')
$ kubectl get secret $secret -o yaml | grep "token:" | awk '{print $2}' | base64 -D > token
Note: base64 uses -D on macOS but -d on Linux.
Then, the correct command is:
curl -v -k --cacert ~/.minikube/ca.crt -H "Authorization: Bearer $(cat ~/YOUR_TOKEN)" "https://{YOUR_IP}:8443/api/v1/pods"
User Sven Marnach got me in the right direction; however, to get the correct server IP and the crt and key locations, I ran kubectl config view.
$ kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /Users/user/.minikube/ca.crt
    server: https://127.0.0.1:32792
  name: minikube
contexts:
- context:
    cluster: minikube
    user: minikube
  name: minikube
current-context: minikube
kind: Config
preferences: {}
users:
- name: minikube
  user:
    client-certificate: /Users/user/.minikube/profiles/minikube/client.crt
    client-key: /Users/user/.minikube/profiles/minikube/client.key
$ curl --cacert ~/.minikube/ca.crt --cert ~/.minikube/profiles/minikube/client.crt --key ~/.minikube/profiles/minikube/client.key https://127.0.0.1:32792/api/
{
  "kind": "APIVersions",
  "versions": [
    "v1"
  ],
  "serverAddressByClientCIDRs": [
    {
      "clientCIDR": "0.0.0.0/0",
      "serverAddress": "172.17.0.2:8443"
    }
  ]
}
$ curl -s --cacert ~/.minikube/ca.crt --cert ~/.minikube/profiles/minikube/client.crt --key ~/.minikube/profiles/minikube/client.key https://127.0.0.1:32792/api/v1/pods | jq .items[].metadata | jq '"\(.name), \(.namespace), \(.selfLink)"'
"shell-demo, default, /api/v1/namespaces/default/pods/shell-demo"
"coredns-f9fd979d6-6b2nx, kube-system, /api/v1/namespaces/kube-system/pods/coredns-f9fd979d6-6b2nx"
"etcd-minikube, kube-system, /api/v1/namespaces/kube-system/pods/etcd-minikube"
"kube-apiserver-minikube, kube-system, /api/v1/namespaces/kube-system/pods/kube-apiserver-minikube"
"kube-controller-manager-minikube, kube-system, /api/v1/namespaces/kube-system/pods/kube-controller-manager-minikube"
"kube-proxy-bbck9, kube-system, /api/v1/namespaces/kube-system/pods/kube-proxy-bbck9"
"kube-scheduler-minikube, kube-system, /api/v1/namespaces/kube-system/pods/kube-scheduler-minikube"
"storage-provisioner, kube-system, /api/v1/namespaces/kube-system/pods/storage-provisioner"
Readers may also be interested in link.
For Windows users, here is an alternative to the much simpler kubectl proxy command:
Mount your local host's .minikube folder using minikube mount [path-to-folder]:/host. This way, you will be able to access the certificates from within the node. If you don't know the exact path to this folder, you can get it by looking at the kubectl config view response.
In a different command prompt, take note of the IP of your kube-apiserver. This can be done by running minikube ip from your (Windows) host. Note that this is the virtual IP within your minikube container.
Start a bash shell within the minikube container: docker exec -it {your-container-id} bash
Go to the folder you mounted in step 1. Now, simply curl the API server through its virtual IP from step 2:
curl https://{your-ip-from-2}:8443/api --key ./ca.key --cert ./ca.crt
Here we are passing the certs to be used. Notice how I am not using the proxy-client ones.
That's it. For learning purposes I think this is a more interesting method than directly proxying.
These instructions worked for me https://github.com/jenkinsci/kubernetes-plugin#configuration-on-minikube
I needed to generate and upload a pfx file, along with the other steps mentioned there.
Most of the above answers are right in their own sense.
I will give my version of the answer:
1) What is the correct way to set up a Kubernetes cluster using minikube and access it through the Kubernetes API?
Ans: I think this is pretty straightforward. Follow the installation steps mentioned in the official k8s documentation for minikube installation.
2) At the moment, I can't find a port through which the Kubernetes cluster can be accessed.
Ans: This too has a straightforward answer. Check your kubeconfig file; you can find it in your home directory at ~/.kube/config. View this file and it will have the details.
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /Users/username/.minikube/ca.crt
    server: https://192.168.64.2:8443
  name: minikube
contexts:
- context:
    cluster: minikube
    namespace: default
    user: minikube
  name: minikube
current-context: minikube
kind: Config
preferences: {}
users:
- name: minikube
  user:
    client-certificate: /Users/username/.minikube/client.crt
    client-key: /Users/username/.minikube/client.key
The server entry mentioned here is your api-server endpoint to hit.
You can also view this information using kubectl, like this: kubectl config view
Use the curl command below to hit the api-server:
curl https://192.168.64.2:8443/api/v1/pods --key /Users/sanjay/.minikube/client.key --cert /Users/sanjay/.minikube/client.crt --cacert /Users/sanjay/.minikube/ca.crt
Note: replace the IP, port, and paths in the above command as per your config file.
Based on xichen's and Seba's answers above, this is how to acquire a token from a terminal:
$ function get_token() { secret=$(kubectl get serviceaccount "$1" -o jsonpath='{.secrets[0].name}') && kubectl get secret "$secret" -o jsonpath='{.data.token}' | base64 --decode; }
$ get_token target_account
I hope this is useful for those who must run a Kubernetes version below 1.24 due to the minikube issue with enabling ingress, as stated in this question.
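A quick usage sketch (assuming the target service account actually has permission to list pods, e.g. via a ClusterRoleBinding like the one shown in an earlier answer):
TOKEN=$(get_token default)
curl -k -H "Authorization: Bearer $TOKEN" https://$(minikube ip):8443/api/v1/namespaces/default/pods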