I have a .pcap file on the master node, which I want to view in Wireshark on the local machine. I access the Kubernetes cluster via the Kops server by ssh from the local machine. I checked kubectl cp --help, but it only provides a way to copy a file from a remote pod to the Kops server.
If anyone knows how to bring a file from Master Node -> Kops Server -> Local machine, please share your knowledge! Thanks!
The solution is simple: scp, thanks to @OleMarkusWith's quick response.
All I did was:
On Kops Server:
scp admin@<master-node's-external-ip>:/path/to/file /dest/path
On local machine:
scp <kops-server-ip>:/path/to/file /dest/path
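If the local machine's OpenSSH is reasonably recent (7.3 or newer), the two hops can usually be collapsed into a single command run from the local machine, using the Kops server as a jump host. A minimal sketch with the same placeholder paths:
scp -o ProxyJump=<kops-server-ip> admin@<master-node's-external-ip>:/path/to/file /dest/path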
I am running a local Kubernetes cluster through Docker Desktop on Windows. I'm attempting to modify my kube-apiserver config, and all of the information I've found has said to modify /etc/kubernetes/manifests/kube-apiserver.yaml on the master. I haven't been able to find this file, and am not sure what the proper way is to do this. Is there a different process because the cluster is through Docker Desktop?
Is there a different process because the cluster is through Docker Desktop?
You can get access to kube-apiserver.yaml with Kubernetes running on Docker Desktop, but only in a "hacky" way. I've included the explanation below.
For setups that require such reconfiguration, I encourage you to use a different solution, for example minikube.
Minikube has a feature that allows you to pass additional options to the Kubernetes components. You can read more about the --extra-config option in this documentation:
Minikube.sigs.k8s.io: Docs: Commands: Start
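For illustration, an API server flag can be passed at start time like this (the specific flag and value are only an example, not something your setup necessarily needs):
minikube start --extra-config=apiserver.service-node-port-range=1-65535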
As for the reconfiguration of kube-apiserver.yaml with Docker Desktop
You need to run the following command:
docker run -it --privileged --pid=host debian nsenter -t 1 -m -u -n -i sh
The above command will allow you to run:
vi /etc/kubernetes/manifests/kube-apiserver.yaml
This lets you edit the API server configuration. The Pod running kube-apiserver will be restarted with the new parameters.
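One way to confirm the edit was picked up is to check the flags of the restarted static Pod. A sketch: the label selector assumes the usual kubeadm-style component=kube-apiserver label, and the grepped flag is just a stand-in for whatever you changed:
kubectl -n kube-system describe pod -l component=kube-apiserver | grep service-node-port-range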
You can check the StackOverflow answers below for more reference:
Stackoverflow.com: Answer: Where are the Docker Desktop for Windows kubelet logs located?
Stackoverflow.com: Answer: How to change the default nodeport range on Mac (docker-desktop)?
I used that answer without the screen command and was able to reconfigure kube-apiserver on Docker Desktop on Windows.
Currently...
I have a GKE/kubernetes/k8s cluster in GCP. I have a bastion host (Compute Engine VM Instance) in GCP. I have allowlisted my bastion host's IP in the GKE cluster's Master authorized networks section. Hence, in order to run kubectl commands to my GKE, I first need to SSH into my bastion host by running the gcloud beta compute ssh command; then I run the gcloud container clusters get-credentials command to authenticate with GKE, then from there I can run kubectl commands like usual.
Later...
I want to be able to run kubectl commands to my GKE cluster directly from my local development CLI. In order to do that, I can add my local development machine's IP as an allowlisted entry in my GKE's Master authorized networks, and that should be it. Then I can run gcloud container clusters get-credentials first and then run kubectl commands like usual.
However...
I am looking for a way to avoid having to allowlist my local development machine's IP. Every time I take my laptop somewhere new, I have to add my new IP to the allowlist before I can run the gcloud container clusters get-credentials command and then kubectl commands.
I wonder...
Is there a way to assign a port number on the bastion host that can be used to invoke kubectl commands against the remote GKE cluster securely? Then I could just use the gcloud compute start-iap-tunnel command (which, BTW, takes care of all permission issues using Cloud IAM) from my local dev CLI to establish an SSH tunnel to that specific port on the bastion host. That way, the GKE cluster receives kubectl commands from the bastion host (which is already allowlisted in its Master authorized networks), but behind the scenes I am authenticating with the bastion host from my local dev CLI (using my gcloud auth credentials) and invoking kubectl commands from there securely.
Is this possible? Any ideas from anyone?
This should help with accessing your secured GKE cluster from localhost:
https://github.com/GoogleCloudPlatform/gke-private-cluster-demo
Once the bastion host is set up with tinyproxy as in the above doc, we can use the shell functions below to quickly enable/disable access through the bastion host:
enable_secure_kubectl() {
# Aliasing kubectl/helm commands to use local proxy
alias kubectl="HTTPS_PROXY=localhost:8888 kubectl"
alias helm="HTTPS_PROXY=localhost:8888 helm"
# Open SSH tunnel for 1 hour
gcloud compute ssh my-bastion-host -- -o ExitOnForwardFailure=yes -M -S /tmp/sslsock -L8888:127.0.0.1:8888 -f sleep 3600
# Get kubernetes credentials with internal ip for kube-apiserver in kubeconfig
gcloud container clusters get-credentials my-gke-cluster --region us-east1 --project myproject --internal-ip
}
disable_secure_kubectl() {
unalias kubectl
unalias helm
ssh -S /tmp/sslsock -O exit my-bastion-host
}
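A typical interactive session would then look like this (assuming the functions above are sourced into your shell):
enable_secure_kubectl        # opens the tunnel and aliases kubectl/helm to use it
kubectl get nodes            # traffic goes via localhost:8888 -> bastion -> private endpoint
disable_secure_kubectl       # closes the tunnel and removes the aliases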
I did three things:
First, I installed kubectl on a Linux machine.
Second, I copied the admin.conf file from the remote k8s server to the ~/.kube/ directory on the Linux host.
Third, running kubectl get nodes on the Linux host reports an error:
wanlei@kf-test:~/.kube$ kubectl get nodes
The connection to the server localhost:8080 was refused - did you specify the right host or port?
I want to know what steps I have missed.
The goal is to use kubectl from my Linux host to manage k8s on the remote host.
You need to place the kubeconfig file at ~/.kube/config, i.e. there should be a file named config in the .kube directory. That's where kubectl looks for the kubeconfig file by default.
An alternative would be defining the KUBECONFIG environment variable to point to a kubeconfig file in a different location.
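For example, if the file was copied over as ~/.kube/admin.conf (the path implied by the question), either of the following should work:
mv ~/.kube/admin.conf ~/.kube/config
kubectl get nodes
Or keep the file where it is and point kubectl at it:
export KUBECONFIG=$HOME/.kube/admin.conf
kubectl get nodes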
I am exploring and learning about containers and kubernetes using podman and minikube on a linux workstation. I use podman to build images on the workstation and would like to deploy these images in minikube also running on the workstation using the kvm2 virtual machine driver. I also start minikube using the CRI-O container runtime.
What are efficient workflows to deploy these images from the workstation to minikube in this scenario? Docker is not running on the minikube VM, so reusing the Docker daemon as described in the minikube documentation is not an option. Sharing the host file system with minikube also appears to not be viable at this time when using kvm2.
Is running a local registry that is visible to both the workstation and the minikube vm the best option? Answers to How to use local docker images with Minikube? and (Kubernetes + Minikube) can't get docker image from local registry appear to offer good solutions for configuring a local registry.
Would skopeo be a solution?
Edit: this is a nice post describing how to set up a registry using podman: https://computingforgeeks.com/create-docker-container-registry-with-podman-letsencrypt/
thank you
Brad
The minikube documentation provides the foundation for a potential workflow at https://minikube.sigs.k8s.io/docs/tasks/docker_registry/. In order to use podman in lieu of docker, I did the following:
Start minikube, as instructed, with the --insecure-registry flag. I specifically use
minikube start --network-plugin=cni --enable-default-cni --bootstrapper=kubeadm --container-runtime=cri-o --cpus 4 --memory 4g --insecure-registry "192.168.39.0/24"
Enable the minikube registry addon.
minikube addons enable registry
Configure podman to use the insecure minikube registry by adding the registry to the insecure registries section of /etc/containers/registries.conf. This section now looks like
[registries.insecure]
registries = ['192.168.39.175:5000']
where 192.168.39.175 is the minikube IP (the output of minikube ip). This IP may change following minikube restarts.
Follow the build, push and run commands in https://minikube.sigs.k8s.io/docs/tasks/docker_registry/ substituting podman for docker. This assumes the test-img container file exists.
Build: podman build --tag $(minikube ip):5000/test-img .
Push: podman push $(minikube ip):5000/test-img
Run: kubectl run test-img --image=$(minikube ip):5000/test-img
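To confirm the push actually landed in the minikube registry, the registry's standard v2 HTTP API can be queried (assuming the addon exposes it on port 5000 as above):
curl http://$(minikube ip):5000/v2/_catalog
which should return something like {"repositories":["test-img"]}.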
This worked but suffers from a serious complication: there is no apparent way at this time to set the IP address for the minikube VM when using kvm2. The IP will always be in the 192.168.39.0/24 subnet, but that is the only certainty. Each time minikube is started, the IP address of the registry will change, which has significant implications for podman and the workflow in general.
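One stopgap is to refresh the insecure registry entry after each minikube start. This is an untested sketch that assumes the v1 registries.conf layout shown above:
#!/usr/bin/env bash
# refresh the [registries.insecure] entry with the current minikube ip
REGISTRY="$(minikube ip):5000"
sudo sed -i "/^\[registries.insecure\]/,/^\[/ s|^registries = .*|registries = ['${REGISTRY}']|" /etc/containers/registries.conf
echo "podman will now treat ${REGISTRY} as an insecure registry"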
More to come on another solution.
How do I ssh to a node inside the cluster locally? I am using the Docker edge version, which has Kubernetes built in. If I run
kubectl ssh node
I am getting
Error: unknown command "ssh" for "kubectl"
Did you mean this?
set
Run 'kubectl --help' for usage.
error: unknown command "ssh" for "kubectl"
Did you mean this?
set
There is no "ssh" command in kubectl yet, but there are plenty of options to access a Kubernetes node's shell.
If you are using a cloud provider, you can connect to nodes directly from the instance management interface.
For example, in GCP: select Menu -> Compute Engine -> VM instances, then press the SSH button on the left side of the desired node instance.
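The same connection can also be made from the CLI (a sketch with placeholder names):
gcloud compute ssh <node-instance-name> --zone <zone>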
If you are using a local VM (VMware, VirtualBox), you can configure sshd before rolling out the Kubernetes cluster, or use the VM console, which is available from the management GUI.
Vagrant provides its own command to access VMs - vagrant ssh
If you are using minikube, there is the minikube ssh command to connect to the minikube VM. There are also other options.
I found no simple way to access the docker-for-desktop VM, but you can easily switch to minikube for experimenting with node settings.
How to ssh to the node inside the cluster in local
Kubernetes is aware of nodes only at the level of secure communication with the kubelets on those nodes (getting the hostname and IP from the node), and as such does not provide cluster-level ssh to nodes out of the box. Depending on your actual provider/setup there are different ways of connecting to nodes, and they all boil down to locating your ssh key, opening the appropriate ports on the firewall/security groups, and issuing ssh -i key user@node_instance_ip to access the node. If you are running locally with virtual machines you can set up your own ssh keypairs and do the trick.
You can effectively shell into a pod using kubectl exec (I know it's not exactly what the question asks, but it might be helpful).
An example usage would be kubectl exec -it name-of-your-pod -- /bin/bash, assuming you have bash installed.
Hope that helps.
You first have to extend kubectl with plugins by adding https://github.com/luksa/kubectl-plugins.
Basically, to "install" ssh, e.g.:
wget https://raw.githubusercontent.com/luksa/kubectl-plugins/master/kubectl-ssh
Then make sure the kubectl-ssh file is in your path.
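The remaining steps are the standard kubectl plugin mechanism: the file just has to be executable and on your PATH under the name kubectl-ssh. A sketch (the target directory is only an example):
chmod +x kubectl-ssh
sudo mv kubectl-ssh /usr/local/bin/
kubectl plugin list    # should now report kubectl-ssh
The exact invocation syntax is described in the plugin's README.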