How Can I Debug apiserver Startup When No Logs Are Generated? - kubernetes

I am trying to install the aws-encryption-provider following the steps at https://github.com/kubernetes-sigs/aws-encryption-provider. After I added the --encryption-provider-config=/etc/kubernetes/aws-encryption-provider-config.yaml parameter to /etc/kubernetes/manifests/kube-apiserver.yaml, the apiserver process did not restart, nor did I see any error messages.
What technique can I use to see errors created when apiserver starts?

Realizing that the apiserver runs inside a Docker container, I connected to one of my controller nodes over SSH. Then I started a container with the following command to get a shell prompt in the same Docker image that the apiserver uses.
docker run \
-it \
--rm \
--entrypoint /bin/sh \
--volume /etc/kubernetes:/etc/kubernetes:ro \
--volume /etc/ssl/certs:/etc/ssl/certs:ro \
--volume /etc/pki:/etc/pki:ro \
--volume /etc/pki/ca-trust:/etc/pki/ca-trust:ro \
--volume /etc/pki/tls:/etc/pki/tls:ro \
--volume /etc/ssl/etcd/ssl:/etc/ssl/etcd/ssl:ro \
--volume /etc/kubernetes/ssl:/etc/kubernetes/ssl:ro \
--volume /var/run/kmsplugin:/var/run/kmsplugin \
k8s.gcr.io/kube-apiserver:v1.18.5
Once inside that container, I could run the same command that is set up in kube-apiserver.yaml. This command was:
kube-apiserver \
--encryption-provider-config=/etc/kubernetes/aws-encryption-provider-config.yaml \
--advertise-address=10.250.203.201 \
...
--service-node-port-range=30000-32767 \
--storage-backend=etcd3 \
I elided the bulk of the command since you'll need to take the specific values from your own kube-apiserver.yaml file; a quick way to pull the flags out of the manifest is sketched below.
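This is only a rough sketch, assuming a kubeadm-style manifest in which every flag appears as its own YAML list item; adjust the path if your manifest lives elsewhere:
# print only the "- --flag=value" lines and strip the YAML list dash,
# so the output can be pasted after the kube-apiserver command above
grep -E '^[[:space:]]*- --' /etc/kubernetes/manifests/kube-apiserver.yaml \
  | sed 's/^[[:space:]]*- //'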
Using this technique showed me the error message:
Error: error while parsing encryption provider configuration file
"/etc/kubernetes/aws-encryption-provider-config.yaml": error while parsing
file: resources[0].providers[0]: Invalid value:
config.ProviderConfiguration{AESGCM:(*config.AESConfiguration)(nil),
AESCBC:(*config.AESConfiguration)(nil), Secretbox:(*config.SecretboxConfiguration)
(nil), Identity:(*config.IdentityConfiguration)(nil), KMS:(*config.KMSConfiguration)
(nil)}: provider does not contain any of the expected providers: KMS, AESGCM,
AESCBC, Secretbox, Identity
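For reference, the parser only accepts a provider entry that names one of KMS, AESGCM, AESCBC, Secretbox, or Identity, so each stanza under providers: must use one of those keys. Below is a rough sketch of a config that passes that check, written from the node's shell; the kms name and unix socket path are assumptions (the socket directory matches the /var/run/kmsplugin mount above), so substitute the values from your own KMS plugin setup:
# write a minimal EncryptionConfiguration; the kms name and endpoint below are
# illustrative assumptions, not values taken from this cluster
cat <<'EOF' | sudo tee /etc/kubernetes/aws-encryption-provider-config.yaml
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - kms:
          name: aws-encryption-provider
          endpoint: unix:///var/run/kmsplugin/socket.sock
          cachesize: 1000
          timeout: 3s
      - identity: {}
EOF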

Related

Why is my local file empty after mounting?

When I try to mount a local directory as the data volume for a PostgreSQL container, my local directory ends up empty.
This is my code:
winpty docker run -it \
-e POSTGRES_USER="root" \
-e POSTGRES_PASSWORD="root" \
-e POSTGRES_DB="ny_taxi" \
-v /c/src/ny:/var/lib/postgresql/data \
-p 5432:5432 \
postgres:13
When I run that command in MINGW64, Docker produces a directory named "ny;C", and it's empty.
Why is it empty, and why is it named "ny;C" instead of "ny"? How can I fix this problem?
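For what it's worth, the stray ";C" in the name looks like MSYS/Git Bash path conversion rewriting the -v argument before Docker sees it. A hedged sketch of one common workaround (untested here) is to disable that conversion for the single invocation:
# MSYS_NO_PATHCONV=1 tells Git Bash / MINGW64 not to rewrite POSIX-style paths
# in the arguments, so the -v mapping reaches Docker unchanged
MSYS_NO_PATHCONV=1 winpty docker run -it \
  -e POSTGRES_USER="root" \
  -e POSTGRES_PASSWORD="root" \
  -e POSTGRES_DB="ny_taxi" \
  -v /c/src/ny:/var/lib/postgresql/data \
  -p 5432:5432 \
  postgres:13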

How to run schema registry container for sasl plain kafka cluster

I want to run the cp-schema-registry image on AWS ECS, so I am trying to get it to run on docker locally. I have a command like this:
docker run -e SCHEMA_REGISTRY_HOST_NAME=schema-registry \
-e SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS="1.kafka.address:9092,2.kafka.address:9092,3.kafka.address:9092" \
-e SCHEMA_REGISTRY_KAFKASTORE_SECURITY_PROTOCOL=SASL_PLAINTEXT \
confluentinc/cp-schema-registry:5.5.3
(I have replaced the Kafka URLs.)
My consumers/producers connect to the cluster with params:
["sasl.mechanism"] = "PLAIN"
["sasl.username"] = <username>
["sasl.password"] = <password>
Docs seem to indicate there is a file I can create with these parameters, but I don't know how to pass this into the docker run command. Can this be done?
Thanks to OneCricketeer for the help above with the SCHEMA_REGISTRY_KAFKASTORE_SASL_JAAS_CONFIG var. The command ended up like this (I added -p 8081:8081 so I could test with curl):
docker run -p 8081:8081 -e SCHEMA_REGISTRY_HOST_NAME=schema-registry \
-e SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS="1.kafka.broker:9092,2.kafka.broker:9092,3.kafka.broker:9092" \
-e SCHEMA_REGISTRY_KAFKASTORE_SECURITY_PROTOCOL=SASL_SSL \
-e SCHEMA_REGISTRY_KAFKASTORE_SASL_MECHANISM=PLAIN \
-e SCHEMA_REGISTRY_KAFKASTORE_SASL_JAAS_CONFIG='org.apache.kafka.common.security.plain.PlainLoginModule required username="user" password="pass";' confluentinc/cp-schema-registry:5.5.3
Then test with curl localhost:8081/subjects and get [] as a response.
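If you want a slightly stronger check than an empty subject list, you can register a trivial schema and read it back; this is only a sketch, and the subject name test-value is made up:
# register a minimal Avro schema under a throwaway subject, then list subjects again
curl -s -X POST \
  -H "Content-Type: application/vnd.schemaregistry.v1+json" \
  --data '{"schema": "{\"type\": \"string\"}"}' \
  http://localhost:8081/subjects/test-value/versions
curl -s http://localhost:8081/subjects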

Docker + Kong: [PostgreSQL error] failed to retrieve PostgreSQL server_version_num: connection refused

I'm currently running Docker 19.03.5 and trying to replicate the contents of this article, but I'm getting the following error in the third step:
First step:
docker network create kong-net
Second:
docker run -d --name kong-database \
--network=kong-net \
-p 5555:5432 \
-e “POSTGRES_USER=kong” \
-e “POSTGRES_DB=kong” \
postgres:9.6
Third:
docker run --rm \
--network=kong-net \
-e “KONG_DATABASE=postgres” \
-e “KONG_PG_HOST=kong-database” \
kong:latest kong migrations up
At this third step, if I use the verbose option, I can see the following error:
2019/12/02 15:51:25 [verbose] Kong: 1.4.0
Error:
/usr/local/share/lua/5.1/kong/cmd/migrations.lua:93: [PostgreSQL error] failed to retrieve
PostgreSQL server_version_num: connection refused
stack traceback:
[C]: in function 'assert'
/usr/local/share/lua/5.1/kong/cmd/migrations.lua:93: in function 'cmd_exec'
/usr/local/share/lua/5.1/kong/cmd/init.lua:87: in function </usr/local/share/lua/5.1/kong/cmd/init.lua:87>
[C]: in function 'xpcall'
/usr/local/share/lua/5.1/kong/cmd/init.lua:87: in function </usr/local/share/lua/5.1/kong/cmd/init.lua:44>
/usr/local/bin/kong:9: in function 'file_gen'
init_worker_by_lua:48: in function <init_worker_by_lua:46>
[C]: in function 'xpcall'
init_worker_by_lua:55: in function <init_worker_by_lua:53>
2019/12/02 15:51:25 [verbose] no config file found at /etc/kong/kong.conf
2019/12/02 15:51:25 [verbose] no config file found at /etc/kong.conf
2019/12/02 15:51:25 [verbose] no config file, skip loading
2019/12/02 15:51:25 [verbose] prefix in use: /usr/local/kong
My docker logs -f --tail 10 kong-database:
PostgreSQL init process complete; ready for start up.
LOG: database system was shut down at 2019-12-02 12:22:46 UTC
LOG: MultiXact member wraparound protections are now enabled
LOG: autovacuum launcher started
LOG: database system is ready to accept connections
I'm running Ubuntu 18.04 and there are no other networks or containers running.
The article you're referring to is a bit outdated:
Note for Kong < 0.15: with Kong versions below 0.15 (up to 0.14), use the up sub-command instead of bootstrap. Also note that with Kong < 0.15, migrations should never be run concurrently; only one Kong node should be performing migrations at a time. This limitation is lifted for Kong 0.15, 1.0, and above.
Reference https://hub.docker.com/_/kong
Kong docs https://docs.konghq.com/install/docker
The instructions below should work
Create a docker network
docker network create kong-net
Start a PostgreSQL container
docker run -d --name kong-database \
--network=kong-net \
-p 5555:5432 \
-e "POSTGRES_USER=kong" \
-e "POSTGRES_DB=kong" \
-e "POSTGRES_PASSWORD=kong" \
postgres:12.2
Prepare your database
docker run --rm \
--network=kong-net \
-e "KONG_DATABASE=postgres" \
-e "KONG_PG_HOST=kong-database" \
-e "KONG_PG_PASSWORD=kong" \
kong:2.0.3 kong migrations bootstrap
Start Kong
docker run -d --name kong \
--network=kong-net \
--link kong-database:kong-database \
-e "KONG_DATABASE=postgres" \
-e "KONG_PG_HOST=kong-database" \
-e "KONG_PG_PASSWORD=kong" \
-e "KONG_PROXY_ACCESS_LOG=/dev/stdout" \
-e "KONG_ADMIN_ACCESS_LOG=/dev/stdout" \
-e "KONG_PROXY_ERROR_LOG=/dev/stderr" \
-e "KONG_ADMIN_ERROR_LOG=/dev/stderr" \
-e "KONG_ADMIN_LISTEN=0.0.0.0:8001, 0.0.0.0:8444 ssl" \
-p 8000:8000 \
-p 8443:8443 \
-p 8001:8001 \
-p 8444:8444 \
kong:2.0.3
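Once the last container is up, a quick sanity check (assuming the port mappings above) is to hit the admin API and the proxy:
# the admin API answers on 8001; the proxy answers on 8000
# (a 404 "no Route matched" from the proxy is fine before any routes exist)
curl -i http://localhost:8001/
curl -i http://localhost:8000/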

Hasura use SSL certificates for Postgres connection

I can run Hasura from the Docker image.
docker run -d -p 8080:8080 \
-e HASURA_GRAPHQL_DATABASE_URL=postgres://username:password@hostname:port/dbname \
-e HASURA_GRAPHQL_ENABLE_CONSOLE=true \
hasura/graphql-engine:latest
But I also have a Postgres instance that can only be accessed with three certificates:
psql "sslmode=verify-ca sslrootcert=server-ca.pem \
sslcert=client-cert.pem sslkey=client-key.pem \
hostaddr=$DB_HOST \
port=$DB_PORT \
user=$DB_USER dbname=$DB_NAME"
I don't see a configuration for Hasura that allows me to connect to a Postgres instance in such a way.
Is this something I'm supposed to pass into the database connection URL?
How should I do this?
You'll need to mount your certificates into the Docker container and then configure libpq (which is what Hasura uses underneath) to use the required certificates with these environment variables. It'll be something like this (I haven't tested this):
docker run -d -p 8080:8080 \
-v /absolute-path-of-certs-folder:/certs \
-e HASURA_GRAPHQL_DATABASE_URL=postgres://hostname:port/dbname \
-e HASURA_GRAPHQL_ENABLE_CONSOLE=true \
-e PGSSLMODE=verify-ca \
-e PGSSLCERT=/certs/client-cert.pem \
-e PGSSLKEY=/certs/client-key.pem \
-e PGSSLROOTCERT=/certs/server-ca.pem \
hasura/graphql-engine:latest
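After the container starts, you can confirm that graphql-engine actually reached Postgres over SSL by hitting its health endpoint (assuming the 8080 mapping above):
# returns OK once graphql-engine is up and can talk to the database
curl http://localhost:8080/healthz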

Facing issues due to ownership on mounted folder with Docker

Following command works fine
sudo docker run -d -p 8080:80 --name openproject -e SECRET_KEY_BASE=somesecret \
-v /var/lib/openproject/pgdata:/var/lib/postgresql/9.6/main \
-v /var/lib/openproject/logs:/var/log/supervisor \
-v /var/lib/openproject/static:/var/db/openproject \
openproject/community:8
But this command doesn't start container
sudo docker run -d -p 8080:80 --name openproject -e SECRET_KEY_BASE=somesecret \
-v ~/Dropbox/openproject/pgdata:/var/lib/postgresql/9.6/main \
-v /var/lib/openproject/logs:/var/log/supervisor \
-v ~/Dropbox/openproject/static:/var/db/openproject \
openproject/community:8
I've also tried making /var/lib/openproject/pgdata a symlink to ~/Dropbox/openproject/pgdata, but that didn't work either.
Docker logs say: PostgreSQL Config owner (postgres:102) and data owner (app:1000) do not match, and config owner is not root.
Is there any way to mount non-root folder on root folder inside the docker container and solve this issue?
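One common approach, untested here, is to make the host directory's owner match the uid that PostgreSQL runs as inside the container; the uid 102 below is taken from the log line above rather than verified against the image:
# hand the pgdata directory to the uid/gid the container's postgres user reported
# (postgres:102 in the log); adjust the ids if your image uses different ones
mkdir -p ~/Dropbox/openproject/pgdata
sudo chown -R 102:102 ~/Dropbox/openproject/pgdata
Keep in mind that a folder managed by the Dropbox client may not preserve ownership and permissions reliably, which could undo this change.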