I need to run PowerShell as a container in Kubernetes.
I am using the following manifest, sample.yaml:
apiVersion: v1
kind: Pod
metadata:
name: powershell
spec:
containers:
- name: powershell
image: mcr.microsoft.com/powershell:latest
When I run kubectl apply -f sample.yaml,
kubectl get pods shows the pod crash-looping:
powershell 0/1 CrashLoopBackOff 3 (50s ago) 92s
I checked the log with kubectl logs powershell:
PowerShell 7.2.6
Copyright (c) Microsoft Corporation.
https://aka.ms/powershell
Type 'help' to get help.
PS />
But when I run the same image as a Docker container with the following command, it works:
docker run --rm -it mcr.microsoft.com/powershell:latest
If you want to keep the container running, write the YAML like this:
apiVersion: v1
kind: Pod
metadata:
name: powershell
spec:
containers:
- name: powershell
image: mcr.microsoft.com/powershell:latest
command: ["pwsh"]
args: ["-Command", "Start-Sleep", "3600"]
[root@master1 ~]# kubectl get pod powershell
NAME READY STATUS RESTARTS AGE
powershell 1/1 Running 0 3m32s
[root@master1 ~]# kubectl exec -it powershell -- pwsh
PowerShell 7.2.6
Copyright (c) Microsoft Corporation.
https://aka.ms/powershell
Type 'help' to get help.
PS /> date
Thu Oct 13 12:50:24 PM UTC 2022
PS />
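A variant of the manifest above (a sketch, assuming the same image) keeps the container alive indefinitely instead of exiting after an hour:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: powershell
spec:
  containers:
  - name: powershell
    image: mcr.microsoft.com/powershell:latest
    command: ["pwsh"]
    # loop forever so the process never exits and the pod never restarts
    args: ["-Command", "while ($true) { Start-Sleep 3600 }"]
```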
docker run keeps the container alive because the -it flags allocate an interactive TTY, so the container runs until you exit the shell. To achieve something similar in Kubernetes, you can use kubectl run:
kubectl run -i --rm --tty power --image=mcr.microsoft.com/powershell:latest
For some troubleshooting, I want to connect to my coredns pod. Is this possible?
$ microk8s kubectl get pod --namespace kube-system
NAME READY STATUS RESTARTS AGE
hostpath-provisioner-5c65fbdb4f-w6fmn 1/1 Running 1 7d22h
coredns-7f9c69c78c-mcdl5 1/1 Running 1 7d23h
calico-kube-controllers-f7868dd95-hbmjt 1/1 Running 1 7d23h
calico-node-rtprh 1/1 Running 1 7d23h
When I try, I get the following error message:
$ microk8s kubectl --namespace kube-system exec --stdin --tty coredns-7f9c69c78c-mcdl5 -- /bin/bash
error: Internal error occurred: error executing command in container: failed to exec in container: failed to start exec "f1d08ed8494894d1281cd5c43dee36119225ab1ba414def333659538e5edc561": OCI runtime exec failed: exec failed: container_linux.go:370: starting container process caused: exec: "/bin/bash": stat /bin/bash: no such file or directory: unknown
User AndD made a good point in the comments:
Coredns Pods have no shell, I think. Check this to kind-of exec with a sidecar: How to get into CoreDNS pod kubernetes?
Yes. This image has no shell. You can read more about this situation in this thread:
The image does not contain a shell. Logs can be viewed with kubectl.
You have asked:
I want to connect to my coredns pod, is this possible?
Theoretically yes, but you need to use a workaround with docker. It is described in this answer:
In short, do this to find a node where a coredns pod is running:
kubectl -n kube-system get po -o wide | grep coredns
ssh to one of those nodes, then:
docker ps -a | grep coredns
Copy the Container ID to clipboard and run:
ID=<paste ID here>
docker run -it --net=container:$ID --pid=container:$ID --volumes-from=$ID alpine sh
You will now be inside the "sidecar" container and can poke around, e.g.:
cat /etc/coredns/Corefile
Additionally, you can check the logs with kubectl. See also the official documentation about DNS debugging.
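On newer clusters (Kubernetes v1.23+), an alternative to the docker sidecar trick is an ephemeral debug container, which shares the target pod's namespaces without needing SSH access to the node (a sketch; the pod name is taken from the question):

```shell
# attach a busybox ephemeral container sharing coredns's process namespace
microk8s kubectl -n kube-system debug -it coredns-7f9c69c78c-mcdl5 \
  --image=busybox --target=coredns -- sh
# inside, the coredns process and its files are visible via /proc, e.g.:
#   ps
#   cat /proc/1/root/etc/coredns/Corefile
```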
Why does kubectl run dask --image daskdev/dask fail?
# starting the container with docker to make sure it basically works
➜ ~ docker run --rm -it --entrypoint bash daskdev/dask:latest
(base) root@5b34ce038eb3:/# python
Python 3.8.0 (default, Nov 6 2019, 21:49:08)
[GCC 7.3.0] :: Anaconda, Inc. on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import dask
>>>
>>> exit()
(base) root@5b34ce038eb3:/# exit
exit
# now trying to fire up the container on a minikube cluster
➜ ~ kubectl run dask --image daskdev/dask
pod/dask created
# let's see what's going on with the Pod
➜ ~ kubectl get pods -w
NAME READY STATUS RESTARTS AGE
dask 0/1 CrashLoopBackOff 1 13s
dask 0/1 Completed 2 24s
dask 0/1 CrashLoopBackOff 2 38s
# not sure why the logs look like something is missing
➜ ~ kubectl logs dask --tail=100
+ '[' '' ']'
+ '[' -e /opt/app/environment.yml ']'
+ echo 'no environment.yml'
+ '[' '' ']'
+ '[' '' ']'
+ exec
no environment.yml
So basically, if you check the output of kubectl describe pod dask, you will see that the last state was Terminated with exit code 0, which literally means your container launched successfully, did its job, and finished successfully. Nothing else should be expected of the pod.
In addition, when you create a pod using kubectl run dask --image daskdev/dask, it is created with restartPolicy: Always by default!
Always means that the container will be restarted even if it exited with a zero exit code (i.e. successfully).
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Completed
Exit Code: 0
Started: Fri, 02 Apr 2021 15:06:00 +0000
Finished: Fri, 02 Apr 2021 15:06:00 +0000
Ready: False
Restart Count: 3
Environment: <none>
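If the goal is just a one-shot run, the same fix can be applied on the command line instead of a manifest (a sketch, using the standard kubectl flag):

```shell
kubectl run dask --image=daskdev/dask --restart=Never
# the pod should end up in status Completed instead of CrashLoopBackOff
```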
There is no /opt/app/environment.yml in your container. If I'm not mistaken, you should first configure it with prepare.sh. Please check more in the DASK section here:
#docker run --rm -it --entrypoint bash daskdev/dask:latest
(base) root@431d69bb9a80:/# ls -la /opt/app/
total 12
drwxr-xr-x 2 root root 4096 Mar 27 15:43 .
drwxr-xr-x 1 root root 4096 Mar 27 15:43 ..
not sure why the logs look like something is missing ➜ ~ kubectl logs dask --tail=100
...
exec no environment.yml
There is an already-prepared DASK Helm chart. Use it; it works fine:
helm repo add dask https://helm.dask.org/
helm repo update
helm install raffael-dask-release dask/dask
NAME: raffael-dask-release
LAST DEPLOYED: Fri Apr 2 15:43:38 2021
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Thank you for installing DASK, released at name: raffael-dask-release.
This release includes a Dask scheduler, 3 Dask workers, and 1 Jupyter servers.
The Jupyter notebook server and Dask scheduler expose external services to
which you can connect to manage notebooks, or connect directly to the Dask
cluster. You can get these addresses by running the following:
export DASK_SCHEDULER="127.0.0.1"
export DASK_SCHEDULER_UI_IP="127.0.0.1"
export DASK_SCHEDULER_PORT=8080
export DASK_SCHEDULER_UI_PORT=8081
kubectl port-forward --namespace default svc/raffael-dask-release-scheduler $DASK_SCHEDULER_PORT:8786 &
kubectl port-forward --namespace default svc/raffael-dask-release-scheduler $DASK_SCHEDULER_UI_PORT:80 &
export JUPYTER_NOTEBOOK_IP="127.0.0.1"
export JUPYTER_NOTEBOOK_PORT=8082
kubectl port-forward --namespace default svc/raffael-dask-release-jupyter $JUPYTER_NOTEBOOK_PORT:80 &
echo tcp://$DASK_SCHEDULER:$DASK_SCHEDULER_PORT -- Dask Client connection
echo http://$DASK_SCHEDULER_UI_IP:$DASK_SCHEDULER_UI_PORT -- Dask dashboard
echo http://$JUPYTER_NOTEBOOK_IP:$JUPYTER_NOTEBOOK_PORT -- Jupyter notebook
NOTE: It may take a few minutes for the LoadBalancer IP to be available. Until then, the commands above will not work for the LoadBalancer service type.
You can watch the status by running 'kubectl get svc --namespace default -w raffael-dask-release-scheduler'
NOTE: It may take a few minutes for the URLs above to be available if any EXTRA_PIP_PACKAGES or EXTRA_CONDA_PACKAGES were specified,
because they are installed before their respective services start.
NOTE: The default password to login to the notebook server is `dask`. To change this password, refer to the Jupyter password section in values.yaml, or in the README.md.
If you still want to create the pod manually, use the manifest below. The main idea is to set restartPolicy: Never.
apiVersion: v1
kind: Pod
metadata:
name: dask-tesssssst
labels:
foo: bar
spec:
restartPolicy: Never
containers:
- image: daskdev/dask:latest
imagePullPolicy: Always
name: dask-tesssssst
Please check the official DASK KubeCluster documentation for more examples. I took the last one exactly from there.
How would I display the available schedulers in my cluster, in order to use a non-default one via the schedulerName field?
Any link to a document describing how to "install" and use a custom scheduler is highly appreciated :)
Thanks in advance
Schedulers can be found among your kube-system pods. You can then filter the output to your needs with kube-scheduler as the search key:
➜ ~ kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-6955765f44-9wfkp 0/1 Completed 15 264d
coredns-6955765f44-jmz9j 1/1 Running 16 264d
etcd-acid-fuji 1/1 Running 17 264d
kube-apiserver-acid-fuji 1/1 Running 6 36d
kube-controller-manager-acid-fuji 1/1 Running 21 264d
kube-proxy-hs2qb 1/1 Running 0 177d
kube-scheduler-acid-fuji 1/1 Running 21 264d
You can retrieve the yaml file with:
➜ ~ kubectl get pod <scheduler pod name> -n kube-system -o yaml
If you bootstrapped your cluster with kubeadm, you may also find the yaml files in /etc/kubernetes/manifests:
➜ manifests sudo cat /etc/kubernetes/manifests/kube-scheduler.yaml
---
apiVersion: v1
kind: Pod
metadata:
creationTimestamp: null
labels:
component: kube-scheduler
tier: control-plane
name: kube-scheduler
namespace: kube-system
spec:
containers:
- command:
- kube-scheduler
- --authentication-kubeconfig=/etc/kubernetes/scheduler.conf
- --authorization-kubeconfig=/etc/kubernetes/scheduler.conf
- --bind-address=127.0.0.1
- --kubeconfig=/etc/kubernetes/scheduler.conf
- --leader-elect=true
image: k8s.gcr.io/kube-scheduler:v1.17.6
imagePullPolicy: IfNotPresent
---------
The location for minikube is similar, but you do have to log in to minikube's virtual machine first with minikube ssh.
For more reading, please have a look at how to configure multiple schedulers and how to write custom schedulers.
You can try this one:
kubectl get pods --all-namespaces | grep scheduler
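Once you know the scheduler's name, selecting it for a pod is just the schedulerName field in the pod spec (a sketch; my-custom-scheduler and the image are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: scheduled-by-custom
spec:
  # must match the name the custom scheduler registers itself under
  schedulerName: my-custom-scheduler
  containers:
  - name: app
    image: nginx
```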
I am using GKE. I've launched the following traefik deployment through kubectl:
https://github.com/containous/traefik/blob/master/examples/k8s/traefik-deployment.yaml
The pod runs on the kube-system namespace.
I'm not able to ssh into the pod.
kubectl get po -n kube-system
traefik-ingress-controller-5bf599f65d-fl9gx 1/1 Running 0 30m
kubectl exec -it traefik-ingress-controller-5bf599f65d-fl9gx -n kube-system -- '\bin\bash'
rpc error: code = 2 desc = oci runtime error: exec failed: container_linux.go:247: starting container process caused "exec: \"\\\\bin\\\\bash\": executable file not found in $PATH"
command terminated with exit code 126
Am I missing something? The same thing happens with '-- sh' too.
Rather use forward slashes / (your example has backslashes), as in:
kubectl exec -it traefik-ingress-controller-5bf599f65d-fl9gx -n kube-system -- '/bin/bash'
If this still does not work, try a different shell, such as:
kubectl exec -it traefik-ingress-controller-5bf599f65d-fl9gx -n kube-system -- '/bin/sh'
So, apparently the default traefik image has no shell at all. I had to use the alpine variant to exec into it using:
kubectl exec -it _podname_ -- sh
It seems that this is the right answer: you cannot exec a shell into the traefik container using the default image; you must use the alpine one.
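A quick way to check whether an image ships any shell or userland at all (a sketch; the pod name is taken from the question) is to try listing the usual binary directories:

```shell
kubectl exec traefik-ingress-controller-5bf599f65d-fl9gx -n kube-system \
  -- ls /bin /usr/bin
# in a scratch-based image this fails too, because not even ls exists,
# which is exactly why exec'ing a shell there is impossible
```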
DNS can resolve sites external to the cluster.
etcd is modified correctly for new containers, services, nodes, etc.
Here are some details:
[fedora@kubemaster ~]$ kubectl logs kube-dns-v10-q9mlb -c kube2sky --namespace=kube-system
I0118 17:42:24.639508 1 kube2sky.go:436] Etcd server found: http://127.0.0.1:4001
I0118 17:42:25.642366 1 kube2sky.go:503] Using https://10.254.0.1:443 for kubernetes master
I0118 17:42:25.642772 1 kube2sky.go:504] Using kubernetes API
[fedora@kubemaster ~]$
Showing that etcd is being properly populated:
[fedora@kubemaster ~]$ kubectl exec -t busybox -- nslookup kubelab.local
Server: 10.254.0.10
Address 1: 10.254.0.10
nslookup: can't resolve 'kubelab.local'
error: error executing remote command: Error executing command in container: Error executing in Docker Container: 1
[fedora@kubemaster ~]$ etcdctl ls --recursive
/kubelab.local
/kubelab.local/network
/kubelab.local/network/config
/kubelab.local/network/subnets
/kubelab.local/network/subnets/172.16.46.0-24
/kubelab.local/network/subnets/172.16.12.0-24
/kubelab.local/network/subnets/172.16.70.0-24
/kubelab.local/network/subnets/172.16.21.0-24
/kubelab.local/network/subnets/172.16.54.0-24
/kubelab.local/network/subnets/172.16.71.0-24
To help a little further:
[fedora@kubemaster ~]$ kubectl exec --namespace=kube-system kube-dns-v10-6krfm -c skydns ps
PID USER COMMAND
1 root /skydns -machines=http://127.0.0.1:4001 -addr=0.0.0.0:53 -ns-rotate=false -domain=kubelab.local.
11 root ps
[fedora@kubemaster ~]$
I DID change cluster.local to kubelab.local, but I also made the changes before bringing up my kube nodes:
KUBELET_ARGS="--kubeconfig=/etc/kubernetes/kubelet.kubeconfig --config=/etc/kubernetes/manifests --cluster-dns=10.254.0.10 --cluster-domain=kubelab.local"
/etc/resolv.conf appears to be ok on a testhost (in this case, busybox per DNS documentation example):
[fedora@kubemaster ~]$ kubectl exec busybox -c busybox -i -t -- cat /etc/resolv.conf
search default.svc.kubelab.local svc.kubelab.local kubelab.local openstacklocal kubelab.com
nameserver 10.254.0.10
nameserver 192.168.1.70
options ndots:5
[fedora@kubemaster ~]$
Results = still a little frustrating:
[fedora@kubemaster ~]$ kubectl exec -t busybox -- nslookup kubelab.local
Server: 10.254.0.10
Address 1: 10.254.0.10
nslookup: can't resolve 'kubelab.local'
error: error executing remote command: Error executing command in container: Error executing in Docker Container: 1
[fedora@kubemaster ~]$
[fedora@kubemaster ~]$ etcdctl ls --recursive
/kubelab.local
/kubelab.local/network
/kubelab.local/network/config
/kubelab.local/network/subnets
/kubelab.local/network/subnets/172.16.46.0-24
/kubelab.local/network/subnets/172.16.12.0-24
/kubelab.local/network/subnets/172.16.70.0-24
/kubelab.local/network/subnets/172.16.21.0-24
/kubelab.local/network/subnets/172.16.54.0-24
/kubelab.local/network/subnets/172.16.71.0-24
This is showing the flannel config, not skydns. You show the ReplicationController info, but do you also have a Service set up?
apiVersion: v1
kind: Service
metadata:
name: kube-dns
namespace: kube-system
labels:
k8s-app: kube-dns
kubernetes.io/cluster-service: "true"
kubernetes.io/name: "KubeDNS"
spec:
selector:
k8s-app: kube-dns
clusterIP: 10.3.0.10
ports:
- name: dns
port: 53
protocol: UDP
- name: dns-tcp
port: 53
protocol: TCP
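With the Service in place, a quick sanity check (a sketch; the busybox pod and custom domain follow the question's setup) would be:

```shell
# confirm the DNS Service exists and has a cluster IP
kubectl get svc kube-dns --namespace=kube-system
# resolve a name that skydns itself serves
kubectl exec -t busybox -- nslookup kubernetes.default
```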