I just cloned a project from GitHub and in the readme file it asks me to run docker-compose up to run the PostgreSQL image...
I assume that after I run the command, the PostgreSQL server image will start in a Docker container on my PC, using port 5432. Then I run npm install and npm start to start the project (the database tables will be created automatically by an ORM framework). However, when I open pgAdmin to connect to the server, it says it connected successfully, but I cannot find the tables that should have been created. So I guess pgAdmin didn't actually connect to the PostgreSQL server (the Docker container) on 5432... My question is whether it is possible to use the pgAdmin installed on my local PC to connect to the PostgreSQL server Docker container that is already running and mapped to port 5432 of my local PC?
docker ps --format "table {{.Names}}\t{{.ID}}" | grep 'postgres'
The above command lists the names and IDs of the Postgres containers running in Docker. If you have multiple Postgres containers, pick the one you want to add to pgAdmin and use its container ID in the next command.
docker inspect <container id> | grep -E -A 1 "IPAddress|Ports"
This prints the IP address and ports of the Postgres container you want to connect to. Use that IP address and port to connect from pgAdmin.
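If you only need the IP address, a filtered inspect also works (a minimal sketch; the format string is a standard Go template accepted by docker inspect):

docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' <container id>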
Yes, it's possible. You just have to get the IP address of the Docker container; run the following commands to find it.
docker ps
and then use the container ID:
docker inspect <container id>
Look for IPAddress in the output and use it to connect from pgAdmin.
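If the container's port is published to the host (for example via a ports: "5432:5432" mapping in docker-compose), you can also check the mapping directly and point pgAdmin at localhost instead; a small sketch, assuming the container is named postgres_db (a placeholder):

docker port postgres_db 5432

If this prints something like 0.0.0.0:5432, pgAdmin on the local PC should be able to connect to localhost:5432.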
I have a Docker container running a Spring Boot application, for which I plan to use the MongoDB on my local machine. I know that containers are on a different network, and I have made the necessary changes in the /etc/mongod.conf file, as suggested by https://tsmx.net/docker-local-mongodb/, so that MongoDB accepts connections from the Docker network. But the connection still times out when the connection attempt is made from the Docker container. Any help is appreciated.
You need to check the network interfaces of your host. You should find one starting with 192.168 or similar. Make sure your MongoDB instance is listening on this interface.
When you run the container, add --add-host mongodb:192.168.X.X to the docker run command, replacing the IP with the one you found in the previous step.
docker run --help | grep add-host
--add-host list Add a custom host-to-IP mapping (host:ip)
Now, in your Spring Boot application, you can refer to your MongoDB server by the hostname mongodb.
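Putting the pieces together, a minimal sketch, assuming the host interface IP found above is 192.168.1.50 and the application image is called my-spring-app (both placeholders):

docker run -d --add-host mongodb:192.168.1.50 -p 8080:8080 my-spring-app

Inside the container, the hostname mongodb then resolves to the host's IP, so the Spring Boot application can use it in its MongoDB connection string.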
`docker run -d --add-host=host.docker.internal:host-gateway --name xxx -p 4001:4000 xxx`
The above command gives the Docker container access to the host machine's localhost.
Now, when you connect to MongoDB from inside the Docker container, access it like this:
let uri = "mongodb://host.docker.internal:27017"
Here 27017 is the default MongoDB port.
I have a Postgres server running inside Docker. Inside this Postgres server, I have a database named 'aa'.
I also have a Docker image of a Spring Boot Application. When this image is executed in Docker, database tables should be created inside database 'aa'.
In order to achieve this, I executed the following steps:
1. Run the Postgres server inside Docker
docker run --name PostgresServer -e POSTGRES_PASSWORD=*** -d postgres
2. Enter PostgresServer, then create database 'aa'
sudo docker exec -it PostgresServer psql -U postgres
CREATE DATABASE AA;
3. Run the Spring Boot Docker image (this is where the problem happens)
docker run -v /Users/juancesard.pineda/Desktop/brapi:/home/brapi/properties -d brapicoordinatorselby/brapi-java-server:v2
I checked the logs: it says database 'aa' does not exist, when it clearly exists on the Postgres server:
org.postgresql.util.PSQLException: FATAL: database "aa" does not exist
Some additional info:
Docker listens at port 8080
Postgres server listens at port 5432
My application.properties file looks like this:
server.port = 8080
server.servlet.context-path=/Users/juancesard.pineda/Desktop/brapi/application.properties/germplasm
spring.datasource.url=jdbc:postgresql://host.docker.internal:5432/aa
spring.datasource.username=****
spring.datasource.password=****
spring.datasource.driver-class-name=org.postgresql.Driver
spring.jpa.hibernate.ddl-auto=create-drop
spring.jpa.show-sql=false
spring.jpa.properties.hibernate.hbm2ddl.import_files=sql/crops.sql, sql/lists.sql, sql/locations.sql, sql/people.sql, sql/programs.sql, sql/trials.sql, sql/seasons.sql, sql/studies.sql, sql/breeding_methods.sql, sql/germplasm.sql, sql/attribute_defs.sql, sql/attribute_values.sql, sql/seed_lots.sql, sql/observation_units.sql, sql/crosses.sql, sql/pedigree.sql, sql/events.sql, sql/images.sql, sql/observation_variables.sql, sql/observations.sql, sql/samples.sql, sql/allele_calls.sql, sql/genome_maps.sql, sql/references.sql, sql/vendor.sql
spring.mvc.dispatch-options-request=true
Am I missing some config in my application.properties file?
Thank you in advance
The problem is in your application.properties file. The datasource url is pointing to host.docker.internal. This should only be used when connecting something from inside a docker container to something running outside docker on the host machine (documentation). You need a different solution because your Spring server and Postgres server are both running inside docker containers.
You need a Docker Network, specifically a bridge network, to connect your different containers. Docker Bridge Networks
Try something like this
Run this command to create a Docker network called "network_name" (use any name you like)
$> docker network create --driver bridge network_name
Run the postgres container and link it to the network. Create your database schema as normal.
$> docker run --name PostgresServer --network=network_name -e POSTGRES_PASSWORD=*** -d postgres
$> sudo docker exec -it PostgresServer psql -U postgres
$> CREATE DATABASE aa;
With a Docker network, container names act as server names. Modify your application.properties to reflect this
spring.datasource.url=jdbc:postgresql://PostgresServer:5432/aa
Run the server container and link it to the network.
$> docker run -v /Users/juancesard.pineda/Desktop/brapi:/home/brapi/properties --network=network_name -d brapicoordinatorselby/brapi-java-server:v2
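To verify that both containers actually ended up on the same network, you can inspect it (a quick check, not part of the original steps):

$> docker network inspect network_name

Both PostgresServer and the BrAPI server container should be listed under the Containers section of the output.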
I am running Postgres on a Windows 10 computer, and I want to connect to it from a Docker container. I've followed instructions from many sources and things should be working, but they're not.
Command line used to create Docker container:
docker run --rm -d --network=host --name mycontainer myimage
In postgresql.conf:
listen_addresses = '*'
In pg_hba.conf:
host all all 172.17.0.0/16 trust
In the bash shell of my container, I run:
psql -h 127.0.0.1
and I get the error:
psql: could not connect to server: Connection refused
Is the server running on host "127.0.0.1" and accepting TCP/IP connections on port 5432?
Needless to say, Postgres is definitely running on my computer and I am able to query it from local applications. What am I missing?
THIS WON'T WORK FOR DOCKER v18.03 AND ONWARDS
The answer is already there - From inside of a Docker container, how do I connect to the localhost of the machine?
This question is related to a mysql setup, but it should work for your case too.
FOR DOCKER v18.03 ONWARDS
Use host.docker.internal to refer to the host machine.
https://docs.docker.com/docker-for-windows/networking/#i-cannot-ping-my-containers
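As a quick test from inside the container (assuming the psql client is available there and the default postgres user), something like this should reach the Windows-side server:

psql -h host.docker.internal -p 5432 -U postgres

You may also need a pg_hba.conf entry that matches the address the connection actually arrives from, which may not fall inside 172.17.0.0/16 on Docker for Windows.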
As you've discovered, --network=host doesn't work with Docker for Windows or Docker for Mac. It only works on Linux hosts.
One option for this scenario might be to host PostgreSQL in a container as well. If you deploy them with a docker-compose file, you should be able to have two separate Docker containers (one for the database and one for your service) that are networked together. By default, docker-compose exposes containers to the others in the same compose file using the container name as the DNS name.
You could also consider including the database in the same container as your service, but I think the docker-compose solution is better for several reasons:
It adheres to the best practice of each container having only a single process and single responsibility.
It means that you can easily change and re-deploy your service without having to recreate the database container.
Configure the connection inside your Docker container with the real IP address of your host, or, as a workaround, with a DNS name.
I made a small application with Spring Boot and Spring Data JPA that connects to a dockerized Postgres instance, and it works fine; even connecting via psql to the dockerized Postgres instance works well. The problem is when I try to dockerize an image of my Spring Boot application and link it with the dockerized Postgres instance.
The docker command I use is this
docker run -it --link mypgcontainerwithpwd:postgres --name postgresclient1 sprinbootjpa
As I already mentioned, the container mypgcontainerwithpwd is running and reachable both from a local application and via psql:
psql -p 5555 postgres postgres
In the jar I'm going to execute, the application.properties file looks like this:
spring.datasource.url=jdbc:postgresql://localhost:5555/postgres
spring.datasource.username=postgres
spring.datasource.password=password
spring.jpa.generate-ddl=true
During the startup phase an exception is raised that prints: connection refused localhost -> 5555
The Dockerfile that builds the image looks like this:
FROM openjdk:8-jdk-alpine
VOLUME /tmp
ADD ./SpringJPA-PostgreSQ-0.0.1.jar app.jar
ENTRYPOINT ["java","-Djava.security.egd=file:/dev/./urandom","-jar","/app.jar"]
I'm new to Docker and I haven't found anything that fixes the issue. I'm running Docker on Windows 10 with Linux containers.
Thanks to all.
In your properties file you are stating that Postgres is running in the same container as your Spring Boot application (localhost), which is not true, as it is running in a different container.
Replace your property with this:
spring.datasource.url=jdbc:postgresql://postgres:5555/postgres
You could also point to the Docker bridge IP, which is usually 172.17.0.1.
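If you'd rather look the bridge IP up than assume it, something along these lines should print the gateway of the default bridge network:

docker network inspect bridge --format '{{(index .IPAM.Config 0).Gateway}}'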
Change -
spring.datasource.url=jdbc:postgresql://localhost:5555/postgres
TO
spring.datasource.url=jdbc:postgresql://postgres:5555/postgres
Since you started the client container with --link mypgcontainerwithpwd:postgres, your client can reach the mypgcontainerwithpwd container using the alias postgres. localhost refers to the client container itself, not mypgcontainerwithpwd.
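One way to see the alias that --link creates is to check the hosts file inside the client container (a quick sanity check, using the container name from the question):

docker exec -it postgresclient1 cat /etc/hosts

There should be an entry mapping postgres to the IP of mypgcontainerwithpwd.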
This works, but I just want to emphasize Vivek's point that "postgres" comes from the container name and not the userID or the database type. I am using Docker Compose, so this name comes from my docker-compose.yml file.
Using virtual hosts rather than deployed Docker containers, it was a normal part of my workflow to create SSH tunnels in order to access restricted machines from my local box, for instance to connect with my psql client to a Postgres instance that I could only reach from a bastion box.
With Docker, everything is boxed away even more. Is there an equivalent way of doing the same thing, but with Docker? Tunnel through the Docker instance to the RDS instance?
You use the docker CLI to connect to a running container. For instance...
To log into a db running in a container you can use (from your local machine)
docker exec -it mypsqlcontainer psql -U username dbname
I personally almost never have to ssh into the host. Everything can be done through the docker CLI.
You can create an SSH tunnel to the Docker host. The DB port must be accessible from the Docker host (i.e. published with the -p docker run option).
If you prefer not to publish the DB port, you can create a jumpbox container with an SSH server, publish port 22 on that container, and use container linking to link the jumpbox container with the DB container.
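A sketch of the first option, assuming the DB container publishes 5432 on the Docker host and the host is reachable over SSH as user@docker-host (placeholder names):

ssh -L 5432:localhost:5432 user@docker-host
psql -h localhost -p 5432 -U postgres

The tunnel forwards local port 5432 to port 5432 on the Docker host, where the container's published port is listening.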