I install the Bitnami Helm chart, using the example shown in the README:
helm install my-db \
--namespace dar \
--set postgresqlPassword=secretpassword,postgresqlDatabase=my-database \
bitnami/postgresql
Then, following the instructions in the blurb that prints after a successful installation, I forward the port to 5432 and try to connect:
PGPASSWORD="secretpassword" psql --host 127.0.0.1 -U postgres -d my-database -p 5432
But I get the following error:
psql: error: could not connect to server: FATAL: password authentication failed for user "postgres"
How can this be? Is the Helm chart buggy?
Buried deep in the stable/postgresql issue tracker is the source of this very-hard-to-debug problem.
When you run helm uninstall ... it errs on the side of caution and doesn't delete the storage associated with the database you got when you first ran helm install ....
This means that once you've installed Postgres via Helm, the credentials baked into that storage stay the same across subsequent installs, regardless of what the post-installation blurb tells you.
To fix this, you have to manually remove the persistent volume claim (PVC) which will free up the database storage.
kubectl delete pvc data-my-db-postgresql-0
(Or whatever the PVC associated with your initial Helm install was named.)
Now a subsequent helm install ... will create a brand-new PVC and login can proceed as expected.
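Putting the fix together, a full reset might look like the following (a sketch assuming the release name my-db and namespace dar from the question; the PVC name may differ in your cluster, so check kubectl get pvc first):

```shell
# Remove the release; the PVC (and the old credentials baked into it) survives
helm uninstall my-db --namespace dar

# Delete the leftover PVC so the data directory is recreated from scratch
kubectl delete pvc --namespace dar data-my-db-postgresql-0

# Reinstall; this time the password you set is actually the one in the database
helm install my-db \
  --namespace dar \
  --set postgresqlPassword=secretpassword,postgresqlDatabase=my-database \
  bitnami/postgresql
```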
Related
I have just installed Rancher for testing purposes. First I installed Docker, then kubectl and Helm, and then Rancher. When I try to create a new Kubernetes cluster, I get the error below. After searching, I believe it is a certificate error.
Failed to create fleet-default/aefis-test cluster.x-k8s.io/v1beta1, Kind=Cluster for rke-cluster fleet-default/aefis-test: Internal error occurred: failed calling webhook "default.cluster.cluster.x-k8s.io": failed to call webhook: Post "https://webhook-service.cattle-system.svc:443/mutate-cluster-x-k8s-io-v1beta1-cluster?timeout=10s": service "webhook-service" not found"
I used this command to install Rancher:
sudo docker run -d --restart=unless-stopped -p 80:80 -p 443:443 --privileged rancher/rancher:latest --no-cacerts
I hope somebody has a good idea or a solution for this error. Thanks.
If I try to delete the webhook secret to trigger the creation of a new one, it throws this error:
The connection to the server localhost:8080 was refused - did you specify the right host or port?
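The localhost:8080 error means kubectl has no kubeconfig and is falling back to its default address; it is unrelated to the webhook problem. For the single-container Rancher install, Rancher runs an embedded k3s, and its kubeconfig can usually be copied out of the container. A sketch (the container lookup and file path assume the rancher/rancher image; substitute your real container ID):

```shell
# Find the Rancher container ID
docker ps --filter ancestor=rancher/rancher:latest

# Copy the embedded k3s kubeconfig out of the container
docker exec <container-id> cat /etc/rancher/k3s/k3s.yaml > kubeconfig.yaml

# Point kubectl at it, then retry the secret deletion
export KUBECONFIG=$PWD/kubeconfig.yaml
kubectl get pods -A
```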
I have a Postgres pod installed through a Helm chart (Bitnami).
Now I want to connect this Postgres to Keycloak, also through a Helm chart. Can anyone explain in detail how to connect them? I tried setting the external database host, database name, database password, and port, but it is not working.
Am I doing this correctly?
Command I used:
helm install keycloak-test-db --set postgresql.enabled=false --set externalDatabase.host=localhost --set externalDatabase.user=postgres --set externalDatabase.password=9CD1SRNlrI --set externalDatabase.database=postgres --set externalDatabase.port=5432 bitnami/keycloak
I'm setting postgresql.enabled=false because I don't want to install the Postgres chart again.
I used the external database method; is this correct, or do I have to enable something else?
I tried connecting the existing Postgres pod to the newly installed Keycloak through the Helm chart, but it is not working.
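One likely problem in the command above is externalDatabase.host=localhost: inside the Keycloak pod, localhost is the Keycloak pod itself, not your Postgres pod. The host needs to be the Postgres Service's in-cluster DNS name. A sketch (my-db-postgresql is an assumed service name for a release called my-db in the same namespace; check kubectl get svc for the real one):

```shell
# Find the Postgres service name (typically <release>-postgresql)
kubectl get svc

# Point Keycloak at the in-cluster service instead of localhost
helm install keycloak-test-db \
  --set postgresql.enabled=false \
  --set externalDatabase.host=my-db-postgresql \
  --set externalDatabase.user=postgres \
  --set externalDatabase.password=9CD1SRNlrI \
  --set externalDatabase.database=postgres \
  --set externalDatabase.port=5432 \
  bitnami/keycloak
```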
I am using the Bitnami Postgres Docker container and noticed that my ORM which uses UUIDs requires the uuid-ossp extension to be available. After some trial and error I noticed that I had to manually install it using the postgres superuser since my custom non-root user created via the POSTGRESQL_USERNAME environment variable is not allowed to execute CREATE EXTENSION "uuid-ossp";.
I'd like to know what a script inside /docker-entrypoint-initdb.d might look like that can execute this command into the specific database, to be more precise to automate the following steps I had to perform manually:
psql -U postgres   # this requires interactive password input
\c target_database
CREATE EXTENSION "uuid-ossp";
I think that something like this should work:
PGPASSWORD=$POSTGRESQL_POSTGRES_PASSWORD psql -U postgres   # PGPASSWORD skips the interactive password prompt
\c target_database
CREATE EXTENSION "uuid-ossp";
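Non-interactively, those three steps collapse into a single psql call, which makes them usable from a /docker-entrypoint-initdb.d script. A sketch (the filename is arbitrary, and POSTGRESQL_POSTGRES_PASSWORD must be set on the container for the superuser login to work):

```shell
#!/bin/bash
# /docker-entrypoint-initdb.d/enable-uuid-ossp.sh
# Connect as the postgres superuser (PGPASSWORD avoids the interactive prompt)
# and create the extension directly in the target database (-d replaces \c).
PGPASSWORD="$POSTGRESQL_POSTGRES_PASSWORD" psql -U postgres -d target_database \
  -c 'CREATE EXTENSION IF NOT EXISTS "uuid-ossp";'
```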
If you want to do it on startup you need to add a file to the startup scripts. Check out the config section of their image documentation: https://hub.docker.com/r/bitnami/postgresql-repmgr/
If you're deploying it via Helm, you can add your scripts under the postgresql.initdbScripts key in values.yaml.
If the deployment is already running, you'll need to connect as the repmgr user, not the Postgres user you created; that user is by default NOT a superuser, for security purposes. This way most of your connections are unprivileged.
For example, I deployed bitnami/postgresql-ha via Helm to a k8s cluster in a namespace called "data" with the release name "prod-pg". I can connect to the database as a privileged user by running:
export REPMGR_PASSWORD=$(kubectl get secret --namespace data prod-pg-postgresql-ha-postgresql -o jsonpath="{.data.repmgr-password}" | base64 --decode)
kubectl run prod-pg-postgresql-ha-client \
--rm --tty -i --restart='Never' \
--namespace data \
--image docker.io/bitnami/postgresql-repmgr:14 \
--env="PGPASSWORD=$REPMGR_PASSWORD" \
--command -- psql -h prod-pg-postgresql-ha-postgresql -p 5432 -U repmgr -d repmgr
This drops me into an interactive terminal
$ ./connect-db.sh
If you don't see a command prompt, try pressing enter.
repmgr=#
I created a Postgres container with docker
sudo docker run -d \
--name dev-postgres \
-e POSTGRES_PASSWORD=test \
-e POSTGRES_USER=test \
-v ${HOME}/someplace/:/var/lib/postgresql/data \
-p 666:5432 \
postgres
I gave the Postgres instance test as both the username and the password, as specified in the docs.
The Postgres port (5432) inside the container is mapped to port 666 on the host.
Now I want to try this out using psql
psql --host=localhost --port=666 --username=test
I'm prompted to enter the password for user test and after entering test, I get
psql: error: FATAL: password authentication failed for user "test"
There are several different problems that can cause this:
The version of Postgres on the host and in the container might not be the same.
If you changed the version of Postgres used by Docker, make sure the container running the new version is not crashing. Reusing a data directory that was initialized by a different version can cause problems.
You can use docker logs [container name] to debug if it crashes
If you changed the environment parameters, there might be a problem with the volumes used by Docker (cached values can prevent the creation of a new user when the environment variables change). You can reset everything with:
docker stop $(docker ps -qa) && docker system prune -af --volumes
If you have problems with libraries that use Postgres, you might need to install some packages to let those libraries work with Postgres. These two are the ones Stack Overflow answers most often reference:
sudo apt install libpq-dev postgresql-client
Other problems tend to come down to Docker configuration.
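If the cached-volume case from the list above is the culprit (the data directory was initialized with older credentials), a more surgical fix than pruning everything is to recreate just this container and its data directory. A sketch reusing the flags from the question; note that the rm -rf destroys all database data:

```shell
# Remove the container and the stale data directory it initialized
docker rm -f dev-postgres
sudo rm -rf "${HOME}/someplace"   # WARNING: destroys all database data

# Recreate; the entrypoint only creates the user on a fresh data directory
sudo docker run -d \
  --name dev-postgres \
  -e POSTGRES_PASSWORD=test \
  -e POSTGRES_USER=test \
  -v ${HOME}/someplace/:/var/lib/postgresql/data \
  -p 666:5432 \
  postgres

psql --host=localhost --port=666 --username=test
```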
I'm using this Dockerfile to deploy it on OpenShift: https://github.com/sclorg/postgresql-container/tree/master/9.5
It works fine until I enable ssl=on and inject the server.crt and server.key files into the Postgres pod via the volume mount option.
The secret is created like this:
$ oc secret new postgres-secrets \
server.key=postgres/server.key \
server.crt=postgres/server.crt \
root-ca.crt=ca-cert
The volume is created as below and attached to the DeploymentConfig of Postgres:
$ oc volume dc/postgres \
--add --type=secret \
--secret-name=postgres-secrets \
--default-mode=0600 \
-m /var/lib/pgdata/data/secrets/secrets/
The problem is that the mounted server.crt and server.key files are owned by the root user, but Postgres expects them to be owned by the postgres user. Because of that, the Postgres server won't come up and reports this error:
waiting for server to start....FATAL: could not load server
certificate file "/var/lib/pgdata/data/secrets/secrets/server.crt":
Permission denied stopped waiting pg_ctl: could not start server
How can we mount a volume and update the uid:gid of the files in it?
It looks like this is not trivial, as it requires setting the volume security context so that all the containers in the pod run as a certain user: https://docs.openshift.com/enterprise/3.1/install_config/persistent_storage/pod_security_context.html
In the Kubernetes project this is still under discussion (https://github.com/kubernetes/kubernetes/issues/2630), but it seems you may have to use security contexts and PodSecurityPolicies to make it work.
I think the easiest option (without using the above) would be to use a container entrypoint that, before actually executing PostgreSQL, chowns the files to the proper user (postgres in this case).
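As a sketch of that entrypoint idea: secret mounts are read-only, so the wrapper copies the files somewhere writable, fixes ownership and permissions, and then hands off to the image's original entrypoint (run-postgresql in the sclorg image; treat the paths and the target directory as assumptions for illustration):

```shell
#!/bin/bash
# Hypothetical wrapper entrypoint: fix TLS file ownership before starting Postgres.
set -e

SECRET_DIR=/var/lib/pgdata/data/secrets/secrets
TLS_DIR=/var/lib/pgdata/tls

# Secret volumes are mounted read-only, so copy instead of chowning in place
mkdir -p "$TLS_DIR"
cp "$SECRET_DIR/server.crt" "$SECRET_DIR/server.key" "$TLS_DIR/"
chown postgres:postgres "$TLS_DIR"/server.*
chmod 0600 "$TLS_DIR"/server.key

# Hand off to the image's original entrypoint (point ssl_cert_file/ssl_key_file
# in postgresql.conf at the copied files)
exec run-postgresql "$@"
```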