Not able to run kubectl cp command in Argo workflow - kubernetes

I am trying to run this command in my Argo workflow
kubectl cp /tmp/appendonly.aof redis-node-0:/data/appendonly.aof -c redis -n redis
but I get this error
Error from server (InternalError): an error on the server ("invalid upgrade response: status code 200") has prevented the request from succeeding (get pods redis-node-0)
Surprisingly, copying the file from the pod to my local system works fine, e.g. with this command:
kubectl cp redis-node-0:/data/appendonly.aof tmp/appendonly.aof -c redis -n redis
Any idea what might be causing it?

Solution -
Not sure what was causing the issue, but I found this command in the docs and it worked fine:
tar cf - appendonly.aof | kubectl exec -i -n redis redis-node-0 -- tar xf - -C /data
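A similar tar-over-exec pipeline should also work for the pod-to-local direction if kubectl cp keeps failing; this is a minimal sketch adapted to the same pod, container and namespace as above:
# Stream /data/appendonly.aof out of the redis container and unpack it into /tmp locally
kubectl exec -n redis redis-node-0 -c redis -- tar cf - -C /data appendonly.aof | tar xf - -C /tmp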

Related

App pod logs with linkerd | unable to view

I was able to view the app container logs using "kubectl logs -f <pod-name>" and was able to log in to the container using "kubectl exec --stdin --tty <pod-name> -- /bin/bash".
After injecting linkerd, I could no longer log in to the container. However, my goal is to check the app logs.
When I use "k logs -f <pod-name> linkerd-proxy" I could not see the app-related logs.
I tried injecting the debug sidecar as well.
I tried "k logs deploy/<deployment-name> linkerd-debug" as well as "k exec -it <pod-name> -c linkerd-debug -- tshark -i any -f "tcp" -V -Y "http.request"",
but I still couldn't see the logs for my app in the pod. Please suggest.
Linkerd works by injecting an additional container into your pods; this is known as the "sidecar" pattern. Your application (or rather, container) logs are still accessible; however, as a result of having more than one container in the pod, kubectl requires you to explicitly specify the container name.
For example, assuming you have a pod with two containers (linkerd-proxy and app), you'd have to specify app as the name of the container:
$ kubectl logs -f <pod-name> -c app
# You can specify the container name without the -c flag
$ kubectl logs -f <pod-name> app
# This will work for 'exec' too
$ kubectl exec <pod-name> -c app -it -- sh
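If you are not sure which container names exist in a given pod, a quick way to check (a sketch; <pod-name> is a placeholder) is:
# List the container names defined in the pod spec
$ kubectl get pod <pod-name> -o jsonpath='{.spec.containers[*].name}'
# Or stream logs from every container in the pod at once
$ kubectl logs -f <pod-name> --all-containers=true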

rancher rke up errors on etcd host health checks remote error: tls: bad certificate

rke --debug up --config cluster.yml
fails with health checks on etcd hosts with error:
DEBU[0281] [etcd] failed to check health for etcd host [x.x.x.x]: failed to get /health for host [x.x.x.x]: Get "https://x.x.x.x:2379/health": remote error: tls: bad certificate
Checking etcd healthchecks
for endpoint in $(docker exec etcd /bin/sh -c "etcdctl member list | cut -d, -f5"); do
  echo "Validating connection to ${endpoint}/health";
  curl -w "\n" --cacert $(docker exec etcd printenv ETCDCTL_CACERT) --cert $(docker exec etcd printenv ETCDCTL_CERT) --key $(docker exec etcd printenv ETCDCTL_KEY) "${endpoint}/health";
done
Running that on the master node gives:
Validating connection to https://x.x.x.x:2379/health
{"health":"true"}
Validating connection to https://x.x.x.x:2379/health
{"health":"true"}
Validating connection to https://x.x.x.x:2379/health
{"health":"true"}
Validating connection to https://x.x.x.x:2379/health
{"health":"true"}
You can also run the check manually and see if the endpoint responds correctly:
curl -w "\n" --cacert /etc/kubernetes/ssl/kube-ca.pem --cert /etc/kubernetes/ssl/kube-etcd-x-x-x-x.pem --key /etc/kubernetes/ssl/kube-etcd-x-x-x-x-key.pem https://x.x.x.x:2379/health
Checking my self-signed certificate hashes:
# md5sum /etc/kubernetes/ssl/kube-ca.pem
f5b358e771f8ae8495c703d09578eb3b /etc/kubernetes/ssl/kube-ca.pem
# for key in $(cat /home/kube/cluster.rkestate | jq -r '.desiredState.certificatesBundle | keys[]'); do echo $(cat /home/kube/cluster.rkestate | jq -r --arg key $key '.desiredState.certificatesBundle[$key].certificatePEM' | sed '$ d' | md5sum) $key; done | grep kube-ca
f5b358e771f8ae8495c703d09578eb3b - kube-ca
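A further check that could help here (hypothetical commands, keeping the same x-x-x-x placeholders) is to confirm that the etcd certificates on disk were actually signed by that CA and have not expired:
# Verify the etcd cert against the CA rke is using
openssl verify -CAfile /etc/kubernetes/ssl/kube-ca.pem /etc/kubernetes/ssl/kube-etcd-x-x-x-x.pem
# Inspect issuer, subject and validity dates
openssl x509 -in /etc/kubernetes/ssl/kube-etcd-x-x-x-x.pem -noout -issuer -subject -dates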
Versions on my master node:
OS: Debian GNU/Linux 10
rke: v1.3.1
docker: 20.10.8
kubectl: v1.21.5
Kubernetes: v1.21.5-rancher1-1
I think my cluster.rkestate has gone bad. Are there any other locations where the rke tool checks for certificates?
Currently I cannot do anything with this production cluster, and I want to avoid downtime. I experimented with different scenarios on a testing cluster; as a last resort I could recreate the cluster from scratch, but maybe I can still fix it...
rke remove && rke up
rke util get-state-file helped me to reconstruct the bad cluster.rkestate file,
and I was able to successfully run rke up and add a new master node to fix the whole situation.
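For reference, a rough sketch of that recovery, assuming you run it from the directory containing cluster.yml (exact flags may differ by rke version):
# Rebuild cluster.rkestate from the state stored in the cluster itself
rke util get-state-file
# Then reconcile the cluster with the restored state
rke --debug up --config cluster.yml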
The problem can be solved by doing the following steps:
Remove the kube_config_cluster.yml file from the directory where you run the rke up command (since some data is missing on your K8s nodes).
Remove the cluster.rkestate file.
Re-run the rke up command.
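As shell commands, those steps might look roughly like this (a sketch; back up both files first and run it from the directory containing cluster.yml):
# Keep backups of the old files before removing them
cp kube_config_cluster.yml kube_config_cluster.yml.bak
cp cluster.rkestate cluster.rkestate.bak
rm kube_config_cluster.yml cluster.rkestate
# Re-run provisioning so rke regenerates both files
rke up --config cluster.yml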

Deleted ~/.kube/config

I accidentally deleted the config file at ~/.kube/config. Every kubectl command now fails because the config is missing.
Example:
kubectl get nodes
The connection to the server localhost:8080 was refused - did you
specify the right host or port?
I had already installed k3s using:
export K3S_KUBECONFIG_MODE="644"
curl -sfL https://get.k3s.io | sh -s - --docker
and kubectl using:
snap install kubectl --classic
Does anyone know how to fix this?
The master copy is available at /etc/rancher/k3s/k3s.yaml. So, copy it back to ~/.kube/config
cp /etc/rancher/k3s/k3s.yaml ~/.kube/config
Reference: https://rancher.com/docs/k3s/latest/en/cluster-access/
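Alternatively, you can point kubectl at the k3s kubeconfig directly instead of copying it; a minimal sketch:
# Point kubectl at the k3s-managed kubeconfig for this shell session
export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
kubectl get nodes
# Or pass it per command
kubectl --kubeconfig /etc/rancher/k3s/k3s.yaml get nodes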

Copy file from remote container to local machine?

How I access my namespace: kubens namespace
How I access my pod / container: kubectl exec -it hello-6b588fc8c-jz89q --container test -- bash
I want to cp a file from the filebeat container, but it does not work when I try.
From the kubectl cp --help output, an example is provided for your use case (copying from a remote pod to your local filesystem):
# Copy /tmp/foo from a remote pod to /tmp/bar locally
kubectl cp <some-namespace>/<some-pod>:/tmp/foo /tmp/bar
In your case, I believe the command would be
kubectl cp <namespace-of-pod>/dsp-onboarding-6b588fc8c-jz89q:/app/data/logs/dsp-onboarding.json.log . -c filebeat
Note the -c option is necessary in your case, since you want to cp the file off of a specific container in the pod.
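The reverse direction works the same way; a sketch of copying a local file back into that container (pod name and path are the ones assumed above):
# Copy a local file into the filebeat container of the pod
kubectl cp ./dsp-onboarding.json.log <namespace-of-pod>/dsp-onboarding-6b588fc8c-jz89q:/app/data/logs/dsp-onboarding.json.log -c filebeat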

How to Deploy our Customized Thingsboard to Kubernetes Engine?

After making docker images of cassandra, cassandra-setup, application and zookeeper from my custom thingsboard,
I tried to deploy them to Kubernetes Engine. There was no error, but it is not running well.
Here are the commands I use to fetch the YAML from my github:
curl -L https://raw.githubusercontent.com/Firdauzfan/ThingsboardGSPE/master/docker/k8s/common.yaml > common.yaml
curl -L https://raw.githubusercontent.com/Firdauzfan/ThingsboardGSPE/master/docker/k8s/cassandra.yaml > cassandra.yaml
curl -L https://raw.githubusercontent.com/Firdauzfan/ThingsboardGSPE/master/docker/k8s/zookeeper.yaml > zookeeper.yaml
curl -L https://raw.githubusercontent.com/Firdauzfan/ThingsboardGSPE/master/docker/k8s/tb.yaml > tb.yaml
curl -L https://raw.githubusercontent.com/Firdauzfan/ThingsboardGSPE/master/docker/k8s/cassandra-setup.yaml > cassandra-setup.yaml
and here is my docker image:
https://hub.docker.com/u/firdauzfanani/
Example: when I run the command kubectl create -f cassandra.yaml, the cassandra pod shows as Running but not Ready.
Status screenshot here
If it is shown as not Ready even though it is running with no issues (e.g. you can exec into it and all the services are running), it could be a misconfiguration of your readinessProbe, which I see defined in the YAML file as follows; I have no clue regarding its behaviour, though. Consider that, according to the documentation, it should return 0.
readinessProbe:
  exec:
    command:
    - /bin/bash
    - -c
    - /ready-probe.sh
On the other hand, if you face errors when you try to access the pod, I would suggest (if you didn't do it already) retrieving further information to carry on the troubleshooting by running the following commands:
$ kubectl describe deployments
$ kubectl describe pods
$ kubectl describe services
This series of commands should help you understand better what is going on.
Please run them and edit your initial post with the output, and I can take a look at them.
To get a shell into the pod, run:
$ kubectl get pods (to retrieve the pod name)
$ kubectl exec -ti PODNAME -- /bin/bash
UPDATE
I deployed your YAML files; the pods are running correctly (I believe). What is failing is the probe, whose content is the following:
cat /ready-probe.sh
if [[ $(nodetool status | grep $POD_IP) == *"UN"* ]]; then
  if [[ $DEBUG ]]; then
    echo "UN";
  fi
  exit 0;
else
  if [[ $DEBUG ]]; then
    echo "Not Up";
  fi
  exit 1;
fi
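To see why the probe fails on your cluster, a hedged way to run the same checks by hand (pod name and namespace are placeholders) is:
# Run the readiness script manually and print its exit code (0 means ready)
kubectl exec -n <namespace> <cassandra-pod> -- /bin/bash /ready-probe.sh; echo $?
# Check what nodetool reports; the probe expects this node to be listed as "UN" (Up/Normal)
kubectl exec -n <namespace> <cassandra-pod> -- nodetool status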