Persist data inside an Azure Container App that runs a Postgres DB

I am new to Azure Container Apps. I had a requirement to deploy a set of applications to Azure Container Apps. The problem with this setup is that when I deploy my Postgres DB as a container, the data inside the container is destroyed whenever the container shuts down.
I want to persist the data. As per a previously asked question, it is not possible to persist data inside the container app itself for Postgres.
I want to run the database as a container and persist its volume. How can I do that with the available Azure services?
I am able to run the applications in containers successfully, but when a container is restarted the data inside it is destroyed.

Containers are stateless by definition. To persist the database data, you have a couple of options:
1-Create a storage mount to Azure Files, map a folder to the volume, and store the database files in that folder. This keeps the database files in Azure Storage, outside of the container (a minimal CLI sketch follows this list).
2-Instead of using PostgreSQL inside a container, use the Azure Database for PostgreSQL managed service.
https://azure.microsoft.com/en-us/products/postgresql/#overview
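For option 1, a minimal CLI sketch is shown below. This is an assumption-laden illustration, not a verified recipe: the resource group, environment, storage account, share, and app names are placeholders, and the Azure Files account key is assumed to be available in an environment variable.

```
# Create a file share in an existing storage account (all names are hypothetical).
az storage share-rm create \
  --resource-group my-rg \
  --storage-account mystorageacct \
  --name pgdata

# Register the share as a storage mount on the Container Apps environment.
az containerapp env storage set \
  --resource-group my-rg \
  --name my-env \
  --storage-name pgdata \
  --azure-file-account-name mystorageacct \
  --azure-file-account-key "$STORAGE_KEY" \
  --azure-file-share-name pgdata \
  --access-mode ReadWrite

# The mount then has to be referenced in the container app definition
# (an AzureFile volume under template.volumes plus a volumeMount on the
# Postgres container), for example by exporting, editing, and re-applying
# the app spec:
az containerapp show --resource-group my-rg --name pg-app --output yaml > pg-app.yaml
# ...edit pg-app.yaml: add the volume and mount it at the Postgres data directory...
az containerapp update --resource-group my-rg --name pg-app --yaml pg-app.yaml
```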

Related

How to connect an external volume to IBM Cloud Code Engine

I have a Docker image that accesses a database referenced by the -v parameter. Does someone have simple instructions for running it in Cloud Code Engine and storing the database in object storage?
If I understand this correctly, you are looking to attach a persistent volume in order to run a Docker image that has a database included in it.
This isn't supported at the moment, and I would suggest using one of the dedicated platform database services instead.

Kubernetes: Databases & DB Users

We are planning to use Kube for Postgres deployments. Our applications will be microservices, each with its own schema (or logical database). For security's sake, we'd like to have a separate user for each schema/logical_db.
I suppose the db/schema and user should be created by Kube, so the application itself does not need access to the DB admin account.
In Stolon it seems it is only possible to create a single user and a single database, and this seems to be the case for other HA Postgres charts as well.
Question: What is the preferred way in Microservices in Kube to create DB users?
When it comes to creating users, as you said, most charts and containers have environment variables for creating a user at boot time. However, most of them do not consider the possibility of creating multiple users at boot time.
What other containers do is, as you said, keep the root credentials in k8s Secrets so they can access the database and create the proper schemas and users. This does not necessarily need to be done in the application logic; it can be done, for example, by an init container that sets up the proper database for your application to run.
https://kubernetes.io/docs/concepts/workloads/pods/init-containers
This way you would have a pod with two containers: one for your application and an init container for setting up the DB.
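As a rough illustration of what such an init container could run (the role, schema, and environment-variable names are hypothetical, and the admin credentials are assumed to be injected from a Kubernetes Secret), the setup boils down to a few psql statements:

```
#!/bin/sh
# Runs inside the init container. PGHOST, PGADMIN_USER, PGADMIN_PASSWORD and
# APP_DB_PASSWORD are assumed to be injected from Kubernetes Secrets as env vars.
# Note: init containers run on every pod start, so a real script should be made
# idempotent (e.g. check pg_roles before CREATE USER).
psql "host=$PGHOST user=$PGADMIN_USER password=$PGADMIN_PASSWORD dbname=postgres" <<SQL
CREATE USER orders_svc WITH PASSWORD '$APP_DB_PASSWORD';
CREATE SCHEMA IF NOT EXISTS orders AUTHORIZATION orders_svc;
SQL
```

The application container then connects only as orders_svc and never sees the admin credentials.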

Separate Dev and Production instances and database

I have a web application hosted on a server; it uses virtualenv to separate the dev and prod instances. Both instances share the same Postgres database (all on the same server).
I am fairly new to Docker and I would like to replace the dev and prod instances with Docker containers, and link each one to its own dev or prod Postgres container (or achieve a similar effect, so that a code change in development will not affect the production database).
What is the best design for this scenario? Should I map the dev and prod containers to different ports? Can I have one Dockerfile for both dev and prod containers? How do I deal with two Postgres containers?
Your requirement does not seem very complicated, so I think you can run two pairs of containers (each pair having one application container and one Postgres container) to achieve this. The basic structure is described below:
devContainer---> pgsDBContainer:<port_number1> ---> dataVolume1
prodContainer---> pgsDBContainer:<port_number2> ---> dataVolume2
Each container pair has one dedicated port number and one dedicated volume. The port number is what the dev or prod application uses to connect to the corresponding Postgres database, which should be easy to understand. The volume is another story.
Please read the Manage data in containers doc for details on container volumes. As you mentioned, "a code change in development will not affect production database", which means you should have two separate volumes for the Postgres containers so the data of the two databases does not get mixed up.
Can I have 1 dockerfile for both dev and prod containers?
Yes you can. Just as I mentioned, you should give each Postgres container a different port and volume configuration when you start them with the docker run command; docker run has the --publish (-p) and --volume (-v) options for you to configure the port mapping and the volume location. A sketch of the commands follows below.
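To make that concrete, here is a minimal sketch; the image names, ports, passwords, and link aliases are placeholders, and the application is assumed to read its connection string from DATABASE_URL:

```
# Dev pair: its own named volume and host port.
docker volume create pgdata_dev
docker run -d --name pgs-dev \
  -e POSTGRES_PASSWORD=devpass \
  -v pgdata_dev:/var/lib/postgresql/data \
  -p 5433:5432 \
  postgres:16
docker run -d --name app-dev --link pgs-dev:db \
  -e DATABASE_URL=postgres://postgres:devpass@db:5432/postgres \
  myapp:latest

# Prod pair: a different volume and host port, built from the same images.
docker volume create pgdata_prod
docker run -d --name pgs-prod \
  -e POSTGRES_PASSWORD=prodpass \
  -v pgdata_prod:/var/lib/postgresql/data \
  -p 5432:5432 \
  postgres:16
docker run -d --name app-prod --link pgs-prod:db \
  -e DATABASE_URL=postgres://postgres:prodpass@db:5432/postgres \
  myapp:latest
```

Because both pairs use the same application and Postgres images, a single Dockerfile for the application is enough; only the runtime configuration differs.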
Just a reminder: when you run a database in a container, you need to think about data persistence in the container environment to avoid data loss caused by a container being removed. Some discussions of container data persistence can be found here and there.

Accessing MongoDB data on an AWS instance

Due to a hardware issue my AWS instance stopped functioning. The team suggested that I stop and start the instance.
AWS has now assigned a new IP, where all the data should be present. I had installed MongoDB there and had a couple of databases.
When I checked the new server, MongoDB was not working. I started mongod, and later it asked me to create the /data/db directory. Now MongoDB is functioning, but when I run
"show databases" none of my previous databases appear. Any help with getting this data back?
An AWS EC2 instance has two types of storage: ephemeral storage and EBS volume storage.
Ephemeral storage should be used for temporary data only. If you reboot your EC2 instance the data on it is not lost, but if you stop and then start it again, you lose it all. When you try to stop an EC2 instance, AWS gives you this message:
Note that when your instances are stopped: Any data on the ephemeral storage of your instances will be lost.
This kind of storage is provisioned physically close to the instance, which is why it is faster.
EBS is persistent storage that is independent of your EC2 instance. It can be attached to and detached from your EC2 instance. This is the kind of storage you want to use when running a database inside your instance.
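If the MongoDB data files were on the instance's ephemeral disk they are gone after the stop/start, but if they live on a separate EBS volume they can be recovered by reattaching and mounting it. A rough sketch, with the volume ID, instance ID, device name, and service user as placeholders:

```
# Attach the existing EBS volume that holds the MongoDB data files.
aws ec2 attach-volume \
  --volume-id vol-0123456789abcdef0 \
  --instance-id i-0123456789abcdef0 \
  --device /dev/xvdf

# Mount it where mongod expects its data (the question uses /data/db).
sudo mkdir -p /data/db
sudo mount /dev/xvdf /data/db
sudo chown -R mongodb:mongodb /data/db   # adjust the user/group to your install
sudo systemctl restart mongod
```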

Cyclic backups of a docker postgresql container

I would like to deploy an application using Docker, with a PostgreSQL container to hold my data.
However, I am worried about losing data, so I need backups.
I know I could run a cron job on the host to dump the data out of the container; however, this approach is not containerized, and when I deploy to a new location I have to remember to add the cron job.
What is a good , preferably containerized, approach to implement rotating data backups from a postgresql docker container?
Why not deploy a second container, linked to the PostgreSQL one, that does the backups?
It can have a crontab inside, together with instructions on how to upload the backup to Amazon S3 or some other secure cloud storage that will not fail even in case of an atomic war :)
Here's some basic information on linking containers: https://docs.docker.com/userguide/dockerlinks/
You can also use Docker Compose in order to deploy a fleet of containers (at least 2, in your case). If your "backup container" uploads stuff to the cloud, make sure you don't put your secrets (such as AWS keys) into the image at build time. Put them into the container at run-time. Here's more information on managing secrets using Docker.
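As a rough sketch of what the cron job inside such a backup container could run (the host alias, database name, backup path, and S3 bucket are placeholders, and PGPASSWORD/AWS credentials are assumed to be injected at run time):

```
#!/bin/sh
# Invoked by cron inside the backup container; the Postgres container is
# assumed to be reachable under the alias "postgres" via a link or shared network.
set -eu

STAMP=$(date +%Y%m%d%H%M)
pg_dump -h postgres -U postgres -Fc mydb > "/backups/mydb_$STAMP.dump"

# Rotate: keep only the 7 newest dumps.
ls -1t /backups/*.dump | tail -n +8 | xargs -r rm --

# Optionally ship the newest dump off-site.
aws s3 cp "/backups/mydb_$STAMP.dump" "s3://my-backup-bucket/"
```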