Put a custom SSH public key into authorized_keys on a minikube cluster

How do I put a custom SSH public key into authorized_keys on a minikube cluster? Why are changes to /home/docker/.ssh/authorized_keys lost after a reboot? How can I edit this file so the change persists?

This works (minikube v1.2.0):
minikube ssh
# back up the archive that minikube unpacks at boot
cd /var/lib/boot2docker
sudo cp userdata.tar userdata-backup.tar
# append your key, then re-pack /home/docker into the archive
cd /home/docker
echo YOUR_SSH_PUBLIC_KEY_HERE >> .ssh/authorized_keys
sudo tar cf /var/lib/boot2docker/userdata.tar .
This works because minikube extracts the files from userdata.tar at boot; the relevant logic is in the minikube source code.
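If you prefer to script this from the host rather than an interactive session, the same steps can be driven through minikube ssh. A minimal sketch, assuming your minikube version accepts a command argument to minikube ssh and that your public key lives at ~/.ssh/id_rsa.pub (both assumptions):
# read the public key on the host (path is an assumption)
PUBKEY="$(cat ~/.ssh/id_rsa.pub)"
# back up the persisted archive inside the VM
minikube ssh "sudo cp /var/lib/boot2docker/userdata.tar /var/lib/boot2docker/userdata-backup.tar"
# append the key and re-pack /home/docker so it survives a reboot
minikube ssh "echo '$PUBKEY' >> /home/docker/.ssh/authorized_keys"
minikube ssh "cd /home/docker && sudo tar cf /var/lib/boot2docker/userdata.tar ."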

Related

How to reset CapRover password?

I just installed CapRover on my server and I forgot my password 🤦‍♂️, but I still have access via SSH.
How can I reset it?
You can run these commands, as mentioned in the documentation:
docker service scale captain-captain=0
Back up the config:
cp /captain/data/config-captain.json /captain/data/config-captain.json.backup
Delete the old password:
jq 'del(.hashedPassword)' /captain/data/config-captain.json > /captain/data/config-captain.json.new
cat /captain/data/config-captain.json.new > /captain/data/config-captain.json
rm /captain/data/config-captain.json.new
Set a temporary password:
docker service update --env-add DEFAULT_PASSWORD=captain42 captain-captain
docker service scale captain-captain=1
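After logging in with the temporary password captain42 and setting a real password in the web UI, it is worth removing the temporary default so it cannot be reused. A short follow-up sketch:
# remove the temporary default password from the service definition
docker service update --env-rm DEFAULT_PASSWORD captain-captain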

Minikube: bash: /usr/local/bin/minikube: No such file or directory

I just installed Minikube for my Kubernetes local setup on Ubuntu 18.04 using the following command:
curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube_latest_amd64.deb
sudo dpkg -i minikube_latest_amd64.deb
However, when I run the command:
minikube start
I get the following error:
bash: /usr/local/bin/minikube: No such file or directory
I'm really wondering what the issue might be.
I just figured it out after some research and trial and error.
Here's how I fixed it:
I simply closed that terminal and opened a new one, and ran the command again:
minikube start
OR
minikube start --driver=virtualbox
And it worked fine.
Note: By default, minikube attempts to use Docker as the driver, but you can specify VirtualBox as your preferred driver instead, which has some advantages.
Another way would have been to reload the Ubuntu bash terminal:
bash --login
Note:
If none of the above techniques work, add the Minikube executable to your path:
sudo mv minikube /usr/local/bin
You can then verify the Minikube executable path:
which minikube
That's all.
I hope this helps
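For context, this error usually means bash has cached a stale lookup for minikube (for example, an earlier install at /usr/local/bin), while the .deb package puts the binary elsewhere. A quick sketch to confirm and clear the cache; that the package installs to /usr/bin is an assumption worth verifying with dpkg:
# clear bash's cached command locations
hash -r
# list where the package actually installed the binary
dpkg -L minikube | grep bin
# confirm which binary the shell now resolves
type minikube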

How to change the default port of microk8s?

Microk8s is installed on default port 16443. I want to change it to 6443. I am using Ubuntu 16.04. I have installed microk8s using snapd and conjure-up.
None of the following options I tried worked:
Tried to edit the port in /snap/microk8s/current/kubeproxy.config. As the volume is read-only, I could not edit it.
Edited the /home/user_name/.kube/config and restarted the cluster.
Tried the following command and restarted the cluster:
sudo kubectl config set clusters.microk8s-cluster.server https://my_ip_address:6443
Tried kubectl proxy --port=6443 --address=0.0.0.0 --accept-hosts=my_ip_address &. It listens on 6443, but serves only HTTP, not HTTPS traffic.
This was initially addressed in microk8s issue 43, and the details are in microk8s issue 300:
This is the right one to use for the latest microk8s:
#!/bin/bash
# define our new port number
API_PORT=8888
# update kube-apiserver args with the new port
# tell other services about the new port
sudo find /var/snap/microk8s/current/args -type f -exec sed -i "s/8080/$API_PORT/g" {} ';'
# create new, updated copies of our kubeconfig for kubelet and kubectl to use
mkdir -p ~/.kube && microk8s.config -l | sed "s/:8080/:$API_PORT/" | sudo tee /var/snap/microk8s/current/kubelet.config > ~/.kube/microk8s.config
# tell kubelet about the new kubeconfig
sudo sed -i 's#${SNAP}/configs/kubelet.config#${SNAP_DATA}/kubelet.config#' /var/snap/microk8s/current/args/kubelet
# disable and enable the microk8s snap to restart all services
sudo snap disable microk8s && sudo snap enable microk8s
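Once the services come back up, a quick sanity check is to confirm the API server answers on the new port. A sketch, using the hypothetical port 8888 from the script above and the kubeconfig copy the script wrote to ~/.kube/microk8s.config:
# confirm something is listening on the new port
ss -tlnp | grep 8888
# talk to the cluster through the regenerated kubeconfig
kubectl --kubeconfig ~/.kube/microk8s.config get nodes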

Invalid x509 certificate for kubernetes master

I am trying to reach my k8s master from my workstation. I can access the master from the LAN fine, but not from my workstation. The error message is:
% kubectl --context=employee-context get pods
Unable to connect to the server: x509: certificate is valid for 10.96.0.1, 10.161.233.80, not 114.215.201.87
What can I do to add 114.215.201.87 to the certificate? Do I need to remove my old cluster ca.crt, recreate it, restart the whole cluster, and then re-sign the client certificate? I deployed my cluster with kubeadm and I am not sure how to do these steps manually.
One option is to tell kubectl that you don't want the certificate to be validated. Obviously this brings up security issues but I guess you are only testing so here you go:
kubectl --insecure-skip-tls-verify --context=employee-context get pods
The better option is to fix the certificate. It is easiest to reinitialize the cluster by running kubeadm reset on all nodes, including the master, and then run:
kubeadm init --apiserver-cert-extra-sans=114.215.201.87
It's also possible to fix that certificate without wiping everything, but that's a bit more tricky. Execute something like this on the master as root:
rm /etc/kubernetes/pki/apiserver.*
kubeadm init phase certs all --apiserver-advertise-address=0.0.0.0 --apiserver-cert-extra-sans=10.161.233.80,114.215.201.87
docker rm `docker ps -q -f 'name=k8s_kube-apiserver*'`
systemctl restart kubelet
For newer Kubernetes (>= 1.8), use:
rm /etc/kubernetes/pki/apiserver.*
kubeadm alpha phase certs all --apiserver-advertise-address=0.0.0.0 --apiserver-cert-extra-sans=10.161.233.80,114.215.201.87
docker rm -f `docker ps -q -f 'name=k8s_kube-apiserver*'`
systemctl restart kubelet
It would also be better to add a DNS name to --apiserver-cert-extra-sans to avoid issues like this in the future.
For kubeadm v1.13.3
rm /etc/kubernetes/pki/apiserver.*
kubeadm init phase certs all --apiserver-advertise-address=0.0.0.0 --apiserver-cert-extra-sans=114.215.201.87
docker rm -f `docker ps -q -f 'name=k8s_kube-apiserver*'`
systemctl restart kubelet
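Whichever variant you run, you can verify that the regenerated certificate actually contains the new address before reconnecting:
# print the SANs of the freshly generated apiserver certificate
openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -text | grep -A1 'Subject Alternative Name'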
If you used kubespray to provision your cluster, then you need to add a 'floating ip' (in your case, '114.215.201.87'). This variable is called supplementary_addresses_in_ssl_keys in the group_vars/k8s-cluster/k8s-cluster.yml file. After updating it, just re-run ansible-playbook -b -v -i inventory/<WHATEVER-YOU-NAMED-IT>/hosts.ini cluster.yml.
NOTE: you still have to remove all the apiserver certs (rm /etc/kubernetes/pki/apiserver.*) from each of your master nodes prior to running!
Cause of the issue:
Your configs at $HOME/.kube/ still contain your old IP address.
Try running:
rm $HOME/.kube/* -rf
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
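A quick way to confirm the refreshed kubeconfig now points at an address the certificate covers:
# show the server URL kubectl will use
kubectl config view --minify | grep server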
For Kubernetes 1.12.2/CentOS 7.4 the sequence is as follows:
rm /etc/kubernetes/pki/apiserver.*
kubeadm alpha phase certs all --apiserver-advertise-address=0.0.0.0 --apiserver-cert-extra-sans=51.158.75.136
docker rm -f `docker ps -q -f 'name=k8s_kube-apiserver*'`
systemctl restart kubelet
Use the following command:
kubeadm init phase certs all
For me, the error appeared when I was accessing the cluster as root (after sudo -i).
I exited, and with a normal user it worked.
For me the following helped:
rm -rf ~/.minikube
minikube delete
minikube start
Probably steps 2 and 3 alone would have been sufficient.

How do I connect a bare-metal worker to a Docker TSA host?

I have followed the docker-compose Concourse installation setup.
Everything is up and running, but I can't figure out what to use as the --tsa-host value in the command that connects the worker to the TSA host.
It's worth mentioning that the Concourse web and db containers are running on the same machine that I hope to use as the bare-metal worker.
I have tried the following, with no luck:
1. The IP address of the concourse web container (no joy; I cannot even ping the Docker container IP from the host):
sudo ./concourse worker --work-dir ./worker --tsa-host IP_OF_DOCKER_CONTAINER --tsa-public-key host_key.pub --tsa-worker-private-key worker_key
2. The CONCOURSE_EXTERNAL_URL:
sudo ./concourse worker --work-dir ./worker --tsa-host http://10.XXX.XXX.XX:8080 --tsa-public-key host_key.pub --tsa-worker-private-key worker_key
3. The IP address of the host:
sudo ./concourse worker --work-dir ./worker --tsa-host 10.XXX.XXX.XX:8080 --tsa-public-key host_key.pub --tsa-worker-private-key worker_key
Other details of setup:
Mac OSX Sierra
Docker For Mac
Please confirm that you are using the internal IP of the host, not the public IP and not the container IP:
--tsa-host <INTERNAL_IP_OF_HOST>
If you use the docker-compose.yml from the setup document, you don't need to set the TSA host yourself; the environment variable has already been defined:
CONCOURSE_TSA_HOST: concourse-web
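To double-check this on a running setup, you can inspect the environment inside the worker container. A sketch, where the service name worker is an assumption; substitute whatever your docker-compose.yml calls it:
# confirm the TSA host is already wired up inside the worker container
docker-compose exec worker env | grep CONCOURSE_TSA_HOST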
I used the docker-compose.yml recently with the steps described at https://concourse-ci.org/docker-repository.html.
Please confirm that there is a keys directory next to the docker-compose.yml after you have executed the steps:
mkdir -p keys/web keys/worker
ssh-keygen -t rsa -f ./keys/web/tsa_host_key -N ''
ssh-keygen -t rsa -f ./keys/web/session_signing_key -N ''
ssh-keygen -t rsa -f ./keys/worker/worker_key -N ''
cp ./keys/worker/worker_key.pub ./keys/web/authorized_worker_keys
cp ./keys/web/tsa_host_key.pub ./keys/worker
export CONCOURSE_EXTERNAL_URL=http://192.168.99.100:8080
docker-compose up
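Once the containers are up, you can confirm the worker actually registered with the TSA using fly (the target name local here is arbitrary):
# log in against the external URL and list registered workers
fly -t local login -c "$CONCOURSE_EXTERNAL_URL"
fly -t local workers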