How can I --link a Docker container (Odoo) running on a plain EC2 instance, or via Elastic Beanstalk, to an RDS instance in AWS? I tried loading a custom config file with the location of the server, but that didn't work. In EB I created an app with a Postgres DB and successfully deployed my docker.aws.json, but I cannot connect to the web interface of the application.
When I check the Docker logs of the container, it says everything started fine, but it expects the DB on localhost.
So, like I said, my question is: how can I tell a Docker container to --link to an RDS instance rather than to another Docker container à la --link db:db?
If you created a database in AWS Elastic Beanstalk, you can access it using environment variables that EB sets for you. See the examples in the documentation: http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create-deploy-python-rds.html (Python example)
The environment variable containing the RDS DB hostname is called RDS_HOSTNAME. See this example: https://github.com/awslabs/eb-demo-php-simple-app/tree/docker-apache
It's not possible to use a link, because your RDS DB is not a Docker container.
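As a minimal sketch of wiring those variables into the container instead of using a link (the HOST/PORT/USER/PASSWORD names are what the official Odoo image documents for its Postgres connection; verify them against the image you actually run, and on a plain EC2 instance substitute your RDS endpoint values directly):
# EB injects these for an attached RDS database:
# RDS_HOSTNAME, RDS_PORT, RDS_DB_NAME, RDS_USERNAME, RDS_PASSWORD
docker run -d --name odoo \
  -e HOST="$RDS_HOSTNAME" \
  -e PORT="$RDS_PORT" \
  -e USER="$RDS_USERNAME" \
  -e PASSWORD="$RDS_PASSWORD" \
  odoo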
I created an RDS DB server for MongoDB,
which is attached to a web app. I created a Dockerfile for the respective Docker image and successfully deployed it in Kubernetes pods.
Now I want to create the Docker image for this MongoDB.
Can anyone help me with how to create a Dockerfile to install MongoDB and deploy my RDS server in it?
Thanks in advance
There is no need to create a private image for MongoDB unless you need very specific settings.
Get the one from Docker Hub. The official image is at https://hub.docker.com/_/mongo
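For example, a minimal sketch of running it (the container name and host data path here are just examples):
# Run the official MongoDB image, persisting data on the host
docker run --name mongodb -d -p 27017:27017 -v /data/db:/data/db mongo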
I have set up Rancher (RKE, Kubernetes) for my application,
and the application uses Postgres, so I set up the Crunchy Data Postgres Operator and created a Postgres cluster using it.
Everything is fine, but now I want to see the pg_activity for my PostgreSQL.
How can I see the activity of the whole Postgres cluster?
You can use the monitoring tools in Rancher to monitor Postgres.
Apart from that, you can open a shell inside the respective database pod (e.g. with kubectl exec) and use the CLI commands there to check the output.
In Rancher, you can also use a client tool to connect to the cluster and run the CLI commands to check the pg_activity.
Client Docker image: https://hub.docker.com/r/jbergknoff/postgresql-client/
You can also deploy a GUI Docker client on Rancher and use it.
GUI Postgres client: https://hub.docker.com/r/dpage/pgadmin4/
GUI example: https://dataedo.com/kb/query/postgresql/list-database-sessions#:~:text=Using%20pgAdmin,all%20connected%20sessions%20(3).
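As a concrete sketch, assuming your database pod is called hippo-instance1-abcd-0 in namespace pgo (both placeholders; look yours up with kubectl get pods, and depending on the image you may need psql -U postgres):
# Query pg_stat_activity directly inside the database pod
kubectl exec -n pgo hippo-instance1-abcd-0 -- \
  psql -c "SELECT pid, usename, datname, state, query FROM pg_stat_activity;"
pg_stat_activity is the same view pg_activity reads from, so this shows the sessions of the whole server.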
I'm fairly new to Docker, and I want to set up a PostgreSQL database and manage it with pgadmin4, both in Docker.
Unfortunately, I cannot add the PostgreSQL database to pgadmin4.
I created both containers via Portainer and chose the latest Docker Hub images.
When creating both containers, I chose "bridge" as the network type.
Both containers in Portainer (screenshot):
As you can see in this picture, they share the same network and are not isolated from each other.
Are there any additional steps to do?
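One thing worth checking (a sketch, assuming the containers are named postgres and pgadmin4; yours may differ): on Docker's default bridge network, containers cannot resolve each other by container name; name resolution only works on user-defined networks. You can either connect pgadmin4 to the database container's IP address, or put both containers on a user-defined network and use the container name as the host:
# Option 1: find the database container's IP and use it as the host in pgadmin4
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' postgres
# Option 2: create a user-defined network where name resolution works
docker network create pgnet
docker network connect pgnet postgres
docker network connect pgnet pgadmin4
# In pgadmin4, use host "postgres" and port 5432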
I am attempting to deploy my first MEAN stack application ('weatherapp') to production on AWS.
I deployed my NodeJS/Express/Angular app to AWS Elastic Beanstalk (preconfigured Linux machine running Node). This works fine and I can view the app in the browser.
Separately I created a docker container running MongoDB and deployed it to AWS / EC2 following the steps in this post:
https://blog.codeship.com/running-mean-web-application-docker-containers-aws/
My question is - how do I connect the two?
In my NodeJS app I was connecting to my local Mongo instance like this:
'mongodb://localhost:27017/weatherapp'
What steps can I take to find out what the connection string should be for my production Mongo instance on docker?
Thanks in advance!
The answer to this is two-fold. We need to set some options on the Docker side in the EC2 instance and then some security groups and configuration on the AWS side. First, we'll start on the Docker container side.
Container
When you run the MongoDB container, you will want to do two things:
Persist the data to disk.
Publish the MongoDB port from the container.
To persist the data to disk you will want to do something like -v /data/db:/data/db. This will make the MongoDB data available at /data/db on the host, which ensures that an accidental deletion or upgrade of the container doesn't lose any data.
Next, we need to publish the MongoDB port so that applications external to Docker can connect to it. The default MongoDB port is 27017, so let's publish that using -p 27017:27017.
If your original command for starting MongoDB was:
docker run --name mymongodb -d mongo
Then the new one would be:
docker run --name mymongodb -d -p 27017:27017 -v /data/db:/data/db mongo
AWS
Now, we need to edit the security group of your EC2 instance and configuration of Elastic Beanstalk.
Security Groups
First, take a look at your Security Groups in the EC2 console. You will have a group for the Elastic Beanstalk application named similar to awseb-e-xanf9hqrw3-stack-AWSEBSecurityGroup-1N2T1AI2H05I8 with an ID similar to sg-07fb8c43. We'll use this ID in the next step, so copy it somewhere.
Now find the Security Group attached to your EC2 instance running the Docker container. You will need to add a new rule to this group allowing access to the MongoDB container. Edit the group and add a new inbound rule for:
Type: Custom TCP
Protocol: TCP
Port range: 27017
Source: sg-07fb8c43
This will allow the Elastic Beanstalk EC2 instances (using sg-07fb8c43) to access the MongoDB port on your Docker EC2 instance.
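If you prefer the CLI, the equivalent rule looks like this (a sketch; sg-0aaaaaaaa stands in for the Docker instance's own group ID):
# Allow inbound 27017 from the Elastic Beanstalk security group
aws ec2 authorize-security-group-ingress \
  --group-id sg-0aaaaaaaa \
  --protocol tcp \
  --port 27017 \
  --source-group sg-07fb8c43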
Elastic IP
You'll likely want a more static IP address for your EC2 instance in case it reboots. Navigate to the Elastic IPs section of the EC2 console and allocate a new address to your Docker EC2 instance.
The new Elastic IP will be the address you use in your Elastic Beanstalk configuration to connect to MongoDB. If your address was 54.67.29.50 then your application would connect to mongodb://54.67.29.50:27017.
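From the CLI, the allocation and association look roughly like this (the instance ID and allocation ID are placeholders):
# Allocate a new Elastic IP and attach it to the Docker EC2 instance
aws ec2 allocate-address --domain vpc
aws ec2 associate-address --instance-id i-0123456789abcdef0 --allocation-id eipalloc-0123456789abcdef0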
Elastic Beanstalk
Now, instead of hardcoding this address in your Node.js application, you should configure your application to pull the information from an environment variable. In your application, read the MongoDB URL from something like process.env.MONGO_URI. Then, in your Elastic Beanstalk application configuration, navigate to the Software Configuration and then down to Environment Properties. Here, create a property named MONGO_URI with the value mongodb://54.67.29.50:27017. This will allow you to easily change the MongoDB instance should it ever change, or if you launch multiple environments with different databases.
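If you use the EB CLI, the same property can also be set from the terminal (a sketch, using the example address from above):
# Set MONGO_URI on the current Elastic Beanstalk environment
eb setenv MONGO_URI=mongodb://54.67.29.50:27017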
I have MongoDB running on one of our VMs internally. Now we are moving the service to AWS.
How should I transport the MongoDB data from the VM to the AWS instance? Mongo is running inside a Docker container on AWS.
Should I use mongodump and mongorestore, or is there another approach here?
Also, I don't see mongod running as a service on the AWS instance, since it is running inside the Docker container. So, do I need to install the MongoDB package and then do the mongorestore?
Any help or thoughts here?
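For what it's worth, mongodump/mongorestore is a reasonable route, and you don't need to install the MongoDB package on the AWS host, because the tools ship inside the container. A sketch, assuming the container is named mymongodb and using a placeholder host address:
# On the old VM: dump the database
mongodump --out /tmp/dump
# Copy the dump to the AWS instance
scp -r /tmp/dump ec2-user@<aws-instance>:/tmp/
# On the AWS instance: copy the dump into the container and restore it there
docker cp /tmp/dump mymongodb:/tmp/dump
docker exec mymongodb mongorestore /tmp/dump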