Kubernetes exec through the WebSocket API

I'm trying to connect to a container through the Kubernetes WebSocket API, from a container running within Kubernetes, without any success.
Install wscat:
apt-get update
apt-get install -y npm
ln -s /usr/bin/nodejs /usr/bin/node
npm install -g n
n stable
npm install -g wscat
Exec on Kubernetes API:
wscat -c "wss://kubernetes.default.svc.cluster.local/api/v1/namespaces/default/pods/my-pod-1623018646-kvc4b/exec?container=aws&stdin=1&stdout=1&stderr=1&tty=1&command=bash" \
--ca /var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
-H "Authorization: Bearer $(</var/run/secrets/kubernetes.io/serviceaccount/token)"
error: Error: unexpected server response (400)
Do you know what I'm doing wrong?
Note that the following works:
curl https://kubernetes.default.svc.cluster.local/api/v1/namespaces/default/pods/my-pod-1623018646-kvc4b \
--cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
-H "Authorization: Bearer $(</var/run/secrets/kubernetes.io/serviceaccount/token)"
Apparently some people are able to connect: https://stackoverflow.com/a/43841572/599728
Cheers

I just found out that the container name was wrong:
?container=aws: there was no container named aws in that pod.
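Since a mistyped container= parameter produces the same opaque 400, it helps to build the exec URL from variables and verify the container name first (for example with kubectl get pod "$POD" -o jsonpath='{.spec.containers[*].name}'). A minimal sketch, reusing the pod and container names from the question:

```shell
# Values taken from the question above; adjust for your pod.
POD="my-pod-1623018646-kvc4b"
CONTAINER="aws"   # must match a container actually present in the pod
API="wss://kubernetes.default.svc.cluster.local"
# Assemble the exec URL so each query parameter is visible at a glance.
URL="$API/api/v1/namespaces/default/pods/$POD/exec?container=$CONTAINER&stdin=1&stdout=1&stderr=1&tty=1&command=bash"
echo "$URL"
```

The printed URL can then be passed straight to wscat as in the question.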

Related

Kubeflow Kale fails to connect to the Rok module

I am trying to integrate Kubeflow Kale into JupyterLab. For that, I installed the recommended packages using the command below:
RUN pip3 --no-cache-dir install \
--upgrade pip \
urllib3==1.24.3 \
jupyter-client==6.1.5 \
nbformat==5.0.2 \
six==1.15 \
numpy==1.17.3 \
jupyter-console==6.0.0 \
jupyterlab==1.1.1 \
jupyterthemes \
xgboost \
kubeflow-fairing==1.0.0 \
kubeflow-kale
# Kale installation
RUN jupyter labextension install kubeflow-kale-launcher
The Docker image builds successfully, but when I run this JupyterLab in the cluster I get the error below:
Details: Rok Gateway Client module not found
Do I need to install any other plugins?
Can anyone help me fix this problem? Thanks in advance.
You need to install Rok on your Kubernetes cluster.
Without Rok, you can't use this integration.

Keycloak internal server error when accessing the token URL

I started a Keycloak instance with:
docker run -d --name keycloak \
-e ROOT_LOGLEVEL=INFO \
-e KEYCLOAK_LOGLEVEL=INFO \
-e KEYCLOAK_USER=admin \
-e KEYCLOAK_PASSWORD=admin \
-p 8080:8080 \
-it jboss/keycloak:master -b 0.0.0.0
docker logs -f keycloak
Then, visiting http://localhost:8080/auth/realms/master/protocol/openid-connect/token, I get an Internal Server Error.
So:
How do I get the error log? docker logs keycloak stops at the startup information; there is no new request log.
What is wrong, and how do I fix the internal server error?
Why do you need a GET request to /auth/realms/master/protocol/openid-connect/token?
The token endpoint is for POST requests, not GET requests; see the OIDC spec: https://openid.net/specs/openid-connect-core-1_0.html#TokenRequest
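For illustration, a sketch of a valid token request; the admin-cli client and the admin/admin credentials are assumptions based on the docker run command above, not verified settings:

```shell
# Build the form-encoded body the token endpoint expects.
# admin-cli / admin / admin are assumed from the container setup above.
REALM="master"
BODY="grant_type=password&client_id=admin-cli&username=admin&password=admin"
echo "$BODY"
# The endpoint accepts POST only:
# curl -X POST "http://localhost:8080/auth/realms/$REALM/protocol/openid-connect/token" \
#   -H "Content-Type: application/x-www-form-urlencoded" \
#   -d "$BODY"
```

A plain browser visit issues a GET, which is why the endpoint rejects it.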

Errors when using etcdctl on Kubernetes cluster: "certificates signed by unknown authority"

I have minikube running and I am trying to list the keys in my etcd.
I downloaded the latest etcdctl client from github:
https://github.com/etcd-io/etcd/releases/download/v3.3.18/etcd-v3.3.18-linux-amd64.tar.gz
I tried to run it with the certificates from /home/myuser/.minikube/certs:
./etcdctl --ca-file /home/myuser/.minikube/certs/ca.pem
--key-file /home/myuser/.minikube/certs/key.pem
--cert-file /home/myuser/.minikube/certs/cert.pem
--endpoints=https://10.240.0.23:2379 get /
I received an error:
Error: client: etcd cluster is unavailable or misconfigured; error
#0: x509: certificate signed by unknown authority
error #0: x509: certificate signed by unknown authority
Did I use the correct certificates?
I tried different certificates, like this:
./etcdctl --ca-file /var/lib/minikube/certs/ca.crt
--key-file /var/lib/minikube/certs/apiserver-etcd-client.key
--cert-file /var/lib/minikube/certs/apiserver-etcd-client.crt
--endpoints=https://10.240.0.23:2379 get /
I received the same error from before.
Any idea what the problem is?
For minikube, the correct path for the etcd certificates is /var/lib/minikube/certs/etcd/, so the command looks like this:
# kubectl -n kube-system exec -it etcd-minikube -- sh -c "ETCDCTL_API=3 ETCDCTL_CACERT=/var/lib/minikube/certs/etcd/ca.crt ETCDCTL_CERT=/var/lib/minikube/certs/etcd/server.crt ETCDCTL_KEY=/var/lib/minikube/certs/etcd/server.key etcdctl endpoint health"
I needed to use the ETCDCTL_API=3 before the commands.
I saw it being used in Kubernetes the Hard Way from this Github.
The certificates are located in /etc/kubernetes/pki/etcd.
The command should work like that:
ETCDCTL_API=3 ./etcdctl --endpoints=https://172.17.0.64:2379 \
--cacert=/etc/kubernetes/pki/etcd/ca.crt \
--cert=/etc/kubernetes/pki/etcd/server.crt \
--key=/etc/kubernetes/pki/etcd/server.key get / --prefix
I tested it and it worked for me.
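Note that the TLS flags are renamed between etcdctl API versions, which is an easy way to end up with confusing TLS errors when the two are mixed. A sketch of the v3 invocation, reusing the certificate paths from the answer above (adjust them for your cluster):

```shell
# etcdctl flag names by API version:
#   v2 (the default in etcdctl 3.3.x): --ca-file / --cert-file / --key-file
#   v3:                                --cacert  / --cert      / --key
PKI="/etc/kubernetes/pki/etcd"   # path from the answer above
CMD="ETCDCTL_API=3 etcdctl --cacert=$PKI/ca.crt --cert=$PKI/server.crt --key=$PKI/server.key get / --prefix"
echo "$CMD"
```

Run the printed command on a control-plane node where those files exist.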
If you want to dump all etcd entries, fully prefixed, from the host (outside the container), you can also run (here for minikube/local testing):
kubectl exec -it \
-n kube-system etcd-minikube \
-- sh -c 'ETCDCTL_CACERT=/var/lib/minikube/certs/etcd/ca.crt \
ETCDCTL_CERT=/var/lib/minikube/certs/etcd/server.crt \
ETCDCTL_KEY=/var/lib/minikube/certs/etcd/server.key \
ETCDCTL_API=3 \
etcdctl \
get \
--prefix=true /'
Try executing the command below to list the actual CA, cert, and key paths:
$ cat /etc/etcd.env
# TLS settings
ETCD_TRUSTED_CA_FILE=/etc/ssl/etcd/ssl/ca.pem
ETCD_CERT_FILE=/etc/ssl/etcd/ssl/member-k8s-m1.pem
ETCD_KEY_FILE=/etc/ssl/etcd/ssl/member-k8s-m1-key.pem
ETCD_CLIENT_CERT_AUTH=true
Then you will be able to use the correct certificates.
Run the command again:
./etcdctl --endpoints https://x.x.x.x:2379
--ca-file=/etc/ssl/etcd/ssl/ca.pem
--cert-file=/etc/ssl/etcd/ssl/member-k8s-m1.pem
--key-file=/etc/ssl/etcd/ssl/member-k8s-m1-key.pem
You can find more information here: etcd-certificates.

Starting Postgres in Docker Container

For testing, I'm trying to set up Postgres inside of a Docker container so that our Python app can run its test suite against it.
Here's my Dockerfile:
# Set the base image to Ubuntu
FROM ubuntu:16.04
# Update the default application repository sources list
RUN apt-get update && apt-get install -y \
python2.7 \
python-pip \
python-dev \
build-essential \
libpq-dev \
libsasl2-dev \
libffi-dev \
postgresql
USER postgres
RUN /etc/init.d/postgresql start && \
psql -c "CREATE USER circle WITH SUPERUSER PASSWORD 'circle';" && \
createdb -O darwin circle_test
USER root
RUN service postgresql stop && service postgresql start
# Upgrade pip
RUN pip install --upgrade pip
COPY . /app
WORKDIR /app
RUN pip install -r requirements.txt
EXPOSE 5000
# Set the container entrypoint
ENTRYPOINT ["gunicorn", "--config", "/app/config/gunicorn.py", "--access-logfile", "-", "--error-logfile", "-", "app:app"]
When I run:
docker run --entrypoint python darwin:latest -m unittest discover -v -s test
I'm getting:
could not connect to server: Connection refused
Is the server running on host "localhost" (127.0.0.1) and accepting
TCP/IP connections on port 5432?
The only way I can get it to work is if I ssh into the container, restart postgres and run the test suite directly.
Is there something I'm missing here?
In a Dockerfile you have:
a configuration phase: the RUN directive (and some others);
the process(es) you start, which you put in either
CMD
or
ENTRYPOINT
See the docs:
https://docs.docker.com/engine/reference/builder/#cmd
and
https://docs.docker.com/engine/reference/builder/#entrypoint
When a container has completed what it has to do in its start phase, it dies.
Anything started in a RUN step (like service postgresql start) only lives for the duration of that build step, so Postgres is not running when your ENTRYPOINT process starts.
This is why the reference Dockerfile for PostgreSQL, at
https://github.com/docker-library/postgres/blob/3d4e5e9f64124b72aa80f80e2635aff0545988c6/9.6/Dockerfile
ends with
CMD ["postgres"]
If you want to start several processes, look at supervisord or a similar tool (s6, daemontools...):
https://docs.docker.com/engine/admin/using_supervisord/
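One common way to apply this: keep Postgres out of the application image entirely and run it as its own container, so each container has a single long-running process. A minimal sketch, assuming the official postgres image and a hypothetical DATABASE_URL variable that the app reads; the service names are illustrative only:

```yaml
# docker-compose.yml sketch: the app image is built from the Dockerfile
# above, minus the postgresql install/start steps.
version: "2"
services:
  db:
    image: postgres:9.6
    environment:
      POSTGRES_USER: circle
      POSTGRES_PASSWORD: circle
      POSTGRES_DB: circle_test
  app:
    build: .
    depends_on:
      - db
    environment:
      DATABASE_URL: postgres://circle:circle@db:5432/circle_test
```

The test suite then connects to host db instead of localhost.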

Couchbase REST API vs CLI

I'm trying to use the REST API on Couchbase 2.2 and I'm finding two things that I cannot seem to do via REST:
Init a new cluster when no other nodes exist.
CLI version:
couchbase-cli cluster-init -u admin -p mypw -c localhost:8091 --cluster-init-ramsize=1024
Remove a healthy node from the cluster.
CLI version:
couchbase-cli rebalance -u admin -p mypw -c 10.10.1.10:8091 --server-remove=10.10.1.12
As for removing a node, I've tried:
curl -u admin:mypw -d otpNode=ns_1#10.10.1.12 \
http://10.10.1.10:8091/controller/ejectNode
Which returns: "Cannot remove active server."
I've also tried:
curl -s -u Administrator:myclusterpw \
-d 'ejectedNodes=ns_1%4010.10.1.12&knownNodes=ns_1%4010.10.1.10%2Cns_1%4010.10.1.11' \
http://10.10.1.10:8091/controller/rebalance
Which returns: {"mismatch":1} (presumably due to the node actually not being marked for ejection?)
Am I crazy, or are there no ways to do these things using curl?
I spun up a two-node cluster on AWS (10.170.76.236 and 10.182.151.86) and was able to remove node 10.182.151.86 using the curl request below:
curl -v -u Administrator:password -X POST 'http://10.182.151.86:8091/controller/rebalance' -d 'ejectedNodes=ns_1#10.182.151.86&knownNodes=ns_1#10.182.151.86,ns_1#10.170.76.236'
That removes the node and performs the rebalance, leaving 10.170.76.236 as the single remaining node. Running the request below results in 'Cannot remove active server', as you have experienced:
curl -u Administrator:password -d otpNode=ns_1#10.170.76.236 http://10.170.76.236:8091/controller/ejectNode
This is because you can't remove the last node, as you can't perform a rebalance; this issue is covered here: http://www.couchbase.com/issues/browse/MB-7517
I left the real IPs in so the curl requests are as clear as possible; I've terminated the nodes now, though :)
A combination of:
curl -X POST -u admin:password -d username=Administrator \
-d password=letmein \
-d port=8091 \
http://localhost:8091/settings/web
and
curl -X POST -u admin:password -d memoryQuota=400 \
http://localhost:8091/pools/default
A ticket raised against this indicates that the ejectNode command on its own won't work, by design.
The server seemingly needs to be in either a pending or failover state before that command can be used.
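One possible cause of the {"mismatch":1} response is inconsistent encoding of the otp node names: Couchbase otp node names look like ns_1@<ip>, and the @ must be percent-encoded as %40 in a form body (hence the ns_1%40... in the question's second attempt), while knownNodes must list every node the cluster currently knows. A sketch of the encoding step; the IPs are the ones from the question:

```shell
# '@' in an otp node name must be sent as %40 in a form-encoded body.
NODE="ns_1@10.10.1.12"
ENCODED=$(printf '%s' "$NODE" | sed 's/@/%40/')
echo "$ENCODED"
# curl's --data-urlencode does the encoding for you:
# curl -u Administrator:password \
#   --data-urlencode "ejectedNodes=$NODE" \
#   --data-urlencode "knownNodes=ns_1@10.10.1.10,ns_1@10.10.1.11,ns_1@10.10.1.12" \
#   http://10.10.1.10:8091/controller/rebalance
```

If the names in ejectedNodes and knownNodes don't match what the cluster reports (see /pools/default), the rebalance request is rejected with a mismatch.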