I have to deploy a Docker image that uses a PostgreSQL DB. The connection string is like below; what is the best method I can use?
"postgresql://username#host.name.svc.cluster.local?sslmode=require"
I have used an env var like below, but it is not working:
- name: DB_ADDRESS
  value: "postgresql://username#tcp(host.name.svc.cluster.local)?sslmode=require"
In the past I had to create a PostgreSQL DB. I would suggest using a Helm chart (link). It gives you a lot of flexibility in configuration.
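For example, a minimal sketch of installing the Bitnami PostgreSQL chart (the release, user, password, and database names below are placeholders, and the auth.* values apply to recent chart versions):

helm repo add bitnami https://charts.bitnami.com/bitnami
helm install my-postgres bitnami/postgresql \
  --set auth.username=myuser \
  --set auth.password=mypassword \
  --set auth.database=mydb

The chart then exposes the database through a Service (named something like my-postgres-postgresql, depending on the release name), so a standard libpq-style connection string would look roughly like postgresql://myuser:mypassword@my-postgres-postgresql.default.svc.cluster.local:5432/mydb?sslmode=require, with @ separating the credentials from the host rather than #.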
I'm just wondering if there is any possibility to create multiple custom users with the Bitnami PostgreSQL Helm chart?
Can auth.username be used in values.yaml to create multiple custom users? How do I assign passwords to those users in this case?
I have not tried it myself, but the Bitnami PostgreSQL Helm chart has a section that allows you to run an initdb script. I believe you can use it to define additional users.
See here: https://github.com/bitnami/charts/tree/main/bitnami/postgresql#initialize-a-fresh-instance
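For example, a rough values.yaml sketch (the key is primary.initdb.scripts in recent chart versions, older ones used initdbScripts, so check the chart's README; the SQL and user names here are just placeholders):

primary:
  initdb:
    scripts:
      create_users.sql: |
        CREATE USER reporting WITH PASSWORD 'reporting-password';
        CREATE USER appuser WITH PASSWORD 'appuser-password';
        GRANT ALL PRIVILEGES ON DATABASE mydb TO appuser;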
Let us know if it worked :-)
I was wondering if this is possible. In a lot of my K8s deployments I have a Postgres database; sometimes the service is called postgres, other times postgresql (depending on which Docker image I use). Kubelet will automatically add {SVCNAME}_SERVICE_HOST and {SVCNAME}_SERVICE_PORT environment variables, and I was wondering if I could create another, consistently named env var from this variable. The reason I ask is that this is how I construct my connection string, so if it changes I need to rebuild the image. Thanks!
Yes, you can rely on those variables, as long as you don't change anything about the Postgres service.
A list of all services that were running when a Container was created is available to that Container as environment variables.
Example:
HOSTNAME=something-fddf-123123123-gbh45
SHLVL=1
HOME=/root
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
KUBERNETES_SERVICE_PORT_HTTPS=80
KUBERNETES_SERVICE_HOST=x.x.x.x
PWD=/
Ref : https://kubernetes.io/docs/concepts/containers/container-environment/
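If you want a consistently named variable regardless of whether the Service is called postgres or postgresql, one rough approach is to build the connection string at container startup in the entrypoint, since the injected variable names follow the Service name. A sketch, assuming a Service called postgres and placeholder image, user, and database names:

containers:
  - name: app
    image: my-app:latest
    command: ["sh", "-c"]
    args:
      - >
        export DB_ADDRESS="postgresql://myuser@${POSTGRES_SERVICE_HOST}:${POSTGRES_SERVICE_PORT}/mydb?sslmode=require" &&
        exec /app/start

The ${...} references are left untouched by kubelet (it only expands the $(VAR) syntax) and are resolved by the shell inside the container, where the kubelet-injected service variables are already set.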
A bit of context: I'm getting started with DevOps and created a docker-compose.yml to bring up two containers, one with my MongoDB and one with the mongo-express admin UI. Now I want to move this to my cloud in Azure, but the truth is that the documentation is very limited and doesn't give a real example of, say, how to deploy a MongoDB and keep its data persistent.
So, has anyone done something similar? Or do you have any advice you can give me?
I'd be really grateful.
You can follow the steps here to use a docker-compose file to deploy the containers to Azure Container Instances; this document also shows an example. If you want to persist the database data, you can mount an Azure File Share into the containers.
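For the persistence part, a rough sketch of a compose file deployed with a Docker ACI context, mounting an Azure File Share for the Mongo data (the share and storage account names are placeholders):

services:
  mongo:
    image: mongo:6
    volumes:
      - mongodata:/data/db
  mongo-express:
    image: mongo-express
    ports:
      - "8081:8081"
    environment:
      ME_CONFIG_MONGODB_SERVER: mongo
volumes:
  mongodata:
    driver: azure_file
    driver_opts:
      share_name: mongodata
      storage_account_name: mystorageaccount

The azure_file volume driver and its driver_opts are specific to the docker compose ACI integration, so double-check the current Azure documentation before relying on them.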
I'm new to Locust, InfluxDB, and Grafana and want to integrate Locust with Grafana. For that I have to use a time-series DB, which is InfluxDB, and I want to store the Locust data in it. I have done some research online but haven't found guidance on how to do this.
Do I have to write a script for it, or is it just a matter of running some commands? My Grafana, Locust, and InfluxDB are running fine in my local environment in Docker containers.
In your Locust scripts you need to create two functions:
a) one for success
b) one for failure
Then register these functions with the Locust events.request_success and events.request_failure hooks.
Using the InfluxDB client you can write the JSON points to InfluxDB.
Please refer to the following link:
https://www.qamilestone.com/post/real-time-monitoring-using-locust-with-influxdb-grafana
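A minimal sketch of that idea, assuming an older Locust version that still exposes events.request_success / events.request_failure (Locust 2.x merged these into a single events.request hook) and the influxdb Python client; the host, database, and measurement names are arbitrary:

from datetime import datetime
from influxdb import InfluxDBClient
from locust import events

client = InfluxDBClient(host="localhost", port=8086, database="locust")

def log_success(request_type, name, response_time, response_length, **kwargs):
    # Write one point per successful request
    client.write_points([{
        "measurement": "requests",
        "tags": {"request_type": request_type, "name": name, "result": "success"},
        "time": datetime.utcnow().isoformat() + "Z",
        "fields": {"response_time": response_time, "response_length": response_length},
    }])

def log_failure(request_type, name, response_time, exception, **kwargs):
    # Write one point per failed request, recording the exception text
    client.write_points([{
        "measurement": "requests",
        "tags": {"request_type": request_type, "name": name, "result": "failure"},
        "time": datetime.utcnow().isoformat() + "Z",
        "fields": {"response_time": response_time, "error": str(exception)},
    }])

events.request_success.add_listener(log_success)
events.request_failure.add_listener(log_failure)

Grafana can then be pointed at the "locust" database and chart the "requests" measurement.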
I have a big (300 GB) Postgres DB running on a GKE cluster (StatefulSet, SSD volume). I need to move this DB to another GKE cluster.
What is the easiest way to accomplish it?
I tried piping pg_dump into pg_restore, but it takes forever and, for some reason, not all constraints/triggers were recreated.
Is there a proper way to gracefully shut down a Postgres server running in Kubernetes and copy the /pgdata folder directly (from one volume to another)?
Other ideas?
tnx
I have a few ideas (listed from the most promising to the least) about how you could approach this:
Remember to use a proper format when using pg_dump. The default plain format cannot be read by pg_restore. Either specify a different format with pg_dump (custom or tar), or restore a plain-format dump with psql -f instead of pg_restore (a rough dump/restore sketch follows at the end of this answer). Remember that it might take a while.
You can use a tool to assist you with that, for example pghoard.
You can make a tarred backup of your DB and copy it as an object via Google Cloud Storage.
You can try to create PVCs manually, attach pods to those PVCs and then copy your dataset onto those pods.
Finally, you may try to create an init container and use it later for your new cluster.
I suggest starting with point 1, as I think it is the most likely solution. If that is not enough, try the later points from the list.
Please let me know if that helped.
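For point 1, a rough sketch of what the dump and restore could look like with the custom format and a parallel restore (hosts, users, and database names below are placeholders):

# Dump in custom format (compressed, readable by pg_restore)
pg_dump -h old-db.old-cluster.example -U postgres -Fc -f mydb.dump mydb

# Restore into the new cluster, dropping existing objects first and using 4 parallel jobs
pg_restore -h new-db.new-cluster.example -U postgres -d mydb --clean --if-exists -j 4 mydb.dump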