Running a Docker Redis instance but can't access Redis - PowerShell

On Windows 10, I open up powershell and type:
docker pull redis
docker run --name some-redis -d redis
So I have a Docker container running with Redis on it. How do I access it? How do I run PING so I can see PONG? I want to add values and then read them back. I don't see any documentation on this. Any help would be appreciated.

The docs of the redis image have a detailed description of how to run and access a redis container. Basically, you have the following options:
Go inside your redis container with the following command and then play with redis-cli:
docker exec -it some-redis bash
Map the redis port to the host when launching the redis container:
docker run -d --name some-redis -p 6379:6379 redis
Then you can connect to redis as if it were running on your host machine.
Container link: connect to redis from another container on the same host machine:
docker run -it --link some-redis:redis --rm redis redis-cli -h redis -p 6379
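As a quick sanity check (just a sketch; mykey and hello are example values), you can also run redis-cli directly through docker exec and try the PING and SET/GET commands the question asks about:
docker exec -it some-redis redis-cli
127.0.0.1:6379> ping
PONG
127.0.0.1:6379> set mykey hello
OK
127.0.0.1:6379> get mykey
"hello"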

Related

Trino Container and Postgres Container Connector Configuration jdbc-url

I would like to establish a connection between Trino and Postgres, each running in a Docker container, using the postgresql connector. I can't seem to find the correct connection parameters for the jdbc-url.
I got trino and a postgres database running in docker:
docker run --name trino -d -p 8080:8080 -v $PWD/catalog:/etc/trino/catalog/ trinodb/trino
docker run -d --name my-postgres-db -p 5432:5432 -v /home/postgresdb/db:/var/lib/postgresql/data my-postgres-image
I mounted a postgresql.properties file to the trino container as described here:
connector.name=postgresql
connection-url=jdbc:postgresql://my-postgres-db:5432/<db-name>
connection-user=user_1
connection-password=password_1
It worked for a local postgres database, but when I adjusted the connection-url for my postgres container, postgresql was not listed as a catalog. How do I use the postgresql connector to connect the Trino and Postgres containers?
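A likely fix, along the lines of the network-based answers further down, is to put both containers on a shared user-defined network so that the hostname my-postgres-db resolves from inside the Trino container. A sketch (trino-net is an assumed network name; the rest mirrors the commands above):
docker network create trino-net
docker run -d --name my-postgres-db --network trino-net -p 5432:5432 -v /home/postgresdb/db:/var/lib/postgresql/data my-postgres-image
docker run -d --name trino --network trino-net -p 8080:8080 -v $PWD/catalog:/etc/trino/catalog/ trinodb/trino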

Docker container can't connect circleCI postgres database

I am trying to set up a CircleCI test. I have created a database in CircleCI, and I have a docker container which needs to connect to the database, but it can't. Inside my docker container is a script which, before doing anything else, runs pg_isready; this cannot connect to the database. Here's my circle job definition:
postgres_tests:
  docker:
    - image: circleci/python:3.7
    - image: circleci/postgres:9.6.2-alpine
      environment:
        POSTGRES_USER: postgres
        POSTGRES_DB: my_test
  steps:
    - setup_remote_docker:
        docker_layer_caching: true
    - attach_workspace:
        at: /tmp/workspace
    - run:
        name: Install awscli docker-squash
        working_directory: /
        command: sudo pip3 install awscli docker-squash
    - run: eval `aws ecr get-login --no-include-email --region eu-west-1`
    - checkout
    - run: echo 'export PATH=/usr/lib/postgresql/9.6/bin/:$PATH' >> $BASH_ENV
    - run: sudo apt-get update && sudo apt-get install -y postgresql-client
    - run: psql -h localhost -U postgres --command "ALTER USER postgres WITH PASSWORD 'password';"
    - run:
        name: run_pg_tests
        working_directory: /tmp/workspace
        command: |
          /tmp/workspace/sql/t/run_tests.sh
The run_tests.sh is a script which pulls my docker image from the company repo and then does a docker run on that image.
I have read that other people have had issues where the database isn't ready, so to test this I added pg_isready before the docker run. My script looks like this:
DB_HOST=`psql -X -A -h localhost -U postgres -p 5432 -t -c "select inet_server_addr()"`
DB_PORT=5432
DB_NAME=my_test
DB_USER=postgres
DB_PASSWORD=password

pg_isready -h "${DB_HOST}" -p "${DB_PORT}"

# restore database from supplied image
docker run \
  -e SAPIENTIA_DB_HOST=$DB_HOST \
  -e SAPIENTIA_DB_PORT=$DB_PORT \
  -e SAPIENTIA_DB_NAME=$DB_NAME \
  -e SAPIENTIA_DB_PASSWORD=$DB_PASSWORD \
  -e SAPIENTIA_DB_USER=$DB_USER \
  $EMPTY_DB_FULL_PATH \
  path_to_file/file
I have also tried setting the DB_HOST variable directly to 'localhost'; the result is exactly the same.
Here's what I get as a result:
127.0.0.1:5432 - accepting connections
127.0.0.1:5432 - no response
I have also tried re-running the test with ssh and connecting myself. Same result: I can connect to the database, but if I then run docker exec and try to connect from inside the docker container, it can't connect.
I'm pretty stumped here, so any help would be useful.
EDIT: I've found this documentation page about your issue:
It is not possible to start a service in remote docker and ping it directly from a primary container or to start a primary container that can ping a service in remote docker. To solve that, you’ll need to interact with a service from remote docker, as well as through the same container
That line is not 100% clear to me, but I understand it to mean that we should start the containers we want to communicate with manually, from within remote docker. Therefore:
- run:
    name: run_pg_tests
    working_directory: /tmp/workspace
    command: |
      docker run -d --name postgres --env POSTGRES_USER=postgres --env POSTGRES_DB=my_test circleci/postgres:9.6.2-alpine
      /tmp/workspace/sql/t/run_tests.sh
Since the postgres container is no longer accessible through the local network, your up check could be docker exec postgres pg_isready
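For example, a small retry loop (just a sketch, reusing the container name postgres from above) can wait until the database accepts connections:
# wait up to 30 seconds for postgres to accept connections
for i in $(seq 1 30); do
  docker exec postgres pg_isready && break
  sleep 1
done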
You can then set your DB_HOST to postgres in your run script.
Original answer:
I'm not well versed in CircleCI configuration, but my guess would be that the Docker container you run manually is not attached to the same network as the containers launched by CircleCI.
From what I see in the documentation, you can specify the hostname of the service container:
The name the container is reachable by. By default, container services are accessible through localhost
So maybe try something like this:
- image: circleci/postgres:9.6.2-alpine
  name: postgres
  environment:
    POSTGRES_USER: postgres
    POSTGRES_DB: my_test
You can then set your DB_HOST to postgres in your run script.

How can I connect Odoo not running on Docker to a Postgres container running on Docker?

I am trying to connect Odoo to a Postgres database instance which is running in Docker, but I am having trouble figuring out how to connect them. I created my instance like so:
$ docker run -d -e POSTGRES_USER=odoo -e POSTGRES_PASSWORD=odoo -e POSTGRES_DB=postgres --name mydb postgres:10
Only Postgres is running in Docker, not Odoo. How would I connect Odoo, running outside Docker, to the Postgres instance running inside Docker?
In short:
You have to publish the port of your docker instance:
-p 5432:5432
Example:
docker run -d -p 5432:5432 -e POSTGRES_USER=odoo -e POSTGRES_PASSWORD=odoo -e POSTGRES_DB=postgres --name mydb postgres:10
Description
When you run a container with Docker, it is not exposed to the host network by default, so when you run Postgres it is not accessible from outside the container. In order to make it accessible, you could:
Publish a specific port: docker run -d -p 5432:5432 ...
Use the host network: docker run --network=host ...
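With the port published, Odoo on the host can simply point at localhost:5432. As a sketch (reusing the odoo/odoo credentials from the question; the keys below are standard odoo.conf settings, but verify against your Odoo version), the relevant configuration might be:
[options]
db_host = localhost
db_port = 5432
db_user = odoo
db_password = odoo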
Bonus:
If you wish to run Odoo within a container in the future, you might need to create a docker network (docker network create odooNetwork) and use it for both your Postgres and Odoo instances:
docker run -d --network=odooNetwork ...
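For instance, a sketch of that setup (the HOST/USER/PASSWORD environment variables and port 8069 follow the official odoo image's documented conventions, but verify against the image you use):
docker network create odooNetwork
docker run -d --network=odooNetwork --name mydb -e POSTGRES_USER=odoo -e POSTGRES_PASSWORD=odoo -e POSTGRES_DB=postgres postgres:10
docker run -d --network=odooNetwork --name myodoo -p 8069:8069 -e HOST=mydb -e USER=odoo -e PASSWORD=odoo odoo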
More details about docker networks can be found in the documentation.

equivalent docker run command for working docker-compose (postgres)

I have a docker-compose file for Postgres that works as expected, and I'm able to access it from R; see the relevant content below. However, I also need an equivalent docker run command, but for some reason I cannot get this to work. As far as I can tell the commands/setup are equivalent. Any suggestions?
postgres:
  image: postgres
  environment:
    POSTGRES_USER: postgres
    POSTGRES_PASSWORD: postgres
    PGDATA: /var/lib/postgresql/data
  ports:
    - 5432:5432
  restart: always
  volumes:
    - ~/postgresql/data:/var/lib/postgresql/data
The docker run command I'm using is:
docker run -p 5432:5432 \
  --name postgres \
  -e POSTGRES_USER=postgres \
  -e POSTGRES_PASSWORD=postgres \
  -e PGDATA=/var/lib/postgresql/data \
  -v ~/postgresql/data:/var/lib/postgresql/data \
  -d postgres
EDIT 1: In both settings I'm trying to connect from another docker container/service. In the docker-compose setting, the different services are described in one and the same YAML file.
EDIT 2: David's answer provided all the information I needed. Create a docker network and reference that network in each docker run call. For those interested in a shell script that uses this setup to connect postgres, pgadmin4, and a data science container with R and Python see the link below:
https://github.com/radiant-rstats/docker/blob/master/launch-rsm-msba-pg.sh
Docker Compose automatically creates a Docker network for you (one per Compose file). For inter-container DNS to work, you can't use the default Docker network, but any named network will work. So you need to add that bit of setup:
docker network create some-name # default options are fine
docker run --net some-name --name postgres ...
# will be accessible as "postgres" from other containers on
# the "some-name" network
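Putting that together with the command from the question, the full equivalent would be something like this sketch:
docker network create some-name
docker run --net some-name \
  --name postgres \
  -p 5432:5432 \
  -e POSTGRES_USER=postgres \
  -e POSTGRES_PASSWORD=postgres \
  -e PGDATA=/var/lib/postgresql/data \
  -v ~/postgresql/data:/var/lib/postgresql/data \
  -d postgres
# other containers started with --net some-name can now
# reach the database at host "postgres", port 5432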

Using Docker to launch web app, can't connect to Postgresql DB?

I received the following error when trying to write session data using Tomcat's PersistentManager to a Postgres DB running on my local machine:
SEVERE: A SQL exception occurred org.postgresql.util.PSQLException: Connection to localhost:5432 refused. Check that the hostname and port are correct and that the postmaster is accepting TCP/IP connections.
The application itself runs in a docker container. For completeness sake, my current context.xml file is:
<?xml version='1.0' encoding='utf-8'?>
<Context>
  <Manager className="org.apache.catalina.session.PersistentManager"
           distributable="true" processExpiresFrequency="6" maxIdleBackup="0" debug="99" >
    <Store className="org.apache.catalina.session.JDBCStore"
           driverName="org.postgresql.Driver"
           connectionURL="jdbc:postgresql://localhost/admin?stringtype=unspecified"
           connectionName="admin" connectionPassword="admin"
           sessionAppCol="app_name" sessionDataCol="session_data" sessionIdCol="session_id"
           sessionLastAccessedCol="last_access" sessionMaxInactiveCol="max_inactive"
           sessionTable="tomcat_sessions_tb" sessionValidCol="valid_session" />
  </Manager>
</Context>
Pursuant to the suggestions here: Postgresql : Connection refused. Check that the hostname and port are correct and that the postmaster is accepting TCP/IP connections
I confirmed via a netstat -aln | grep LISTEN that Postgresql is running and listening on the correct ports:
tcp4 0 0 127.0.0.1.5432 *.* LISTEN
tcp6 0 0 ::1.5432 *.* LISTEN
and that my postgresql.conf (located in /usr/local/var/postgres) has listen_addresses = localhost and port = 5432, which mirrors the host and port of my running server in Pgadmin3.
I suspect that the problem is that Docker runs in a VM, and thus the local information I have obtained may not be the whole story. Reading up on the available information online, it seems that I may require some sort of bridged networking.
However, I admit I am a novice in this area, and I'm unsure of how to set it up.
Why can I NOT connect to localhost:5432?
Cat your container's /etc/hosts
$ sudo docker exec -it [container] cat /etc/hosts
Docker networking is bridge by default, so localhost inside the container points to the container itself (the Docker default bridge network).
And you don't have anything listening on port 5432 inside your container:
$ sudo docker exec [container] nc -v -z localhost 5432
Solution 1. If you want to hardcode "localhost:5432" inside your config xml, the easiest way is to create your container with the option "--net=host":
$ sudo docker run --net=host -it ...
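With host networking, the container shares the host's network stack, so the nc check from above should now succeed:
$ sudo docker exec [container] nc -v -z localhost 5432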
Solution 2. Point localhost at your docker host IP inside the container.
Get your docker host ip:
$ sudo docker inspect -f '{{ .NetworkSettings.Gateway }}' [container]
192.168.5.1
Enter your container:
$ sudo docker exec -it [container] /bin/bash
Edit the file /etc/hosts to point the localhost to docker host ip:
$ sudo vim /etc/hosts
192.168.5.1 localhost
Solution 3. Modify your db config file to use an alias instead of localhost:
connectionURL="jdbc:postgresql://DB_ALIAS/admin?stringtype=unspecified"
Then add the DB_ALIAS to the container's hosts :
$ sudo docker run --add-host DB_ALIAS:192.168.5.1 -it [image] ...
If you are using docker-compose together with the postgres image, then you can reuse the service name as the hostname in the jdbc connection (here: app-db):
web:
  build: ./web
  ports:
    - "8080:8080"
  links:
    - app-db
  environment:
    - MYAPP_JDBC_URL=jdbc:postgresql://app-db:5432/somedb
    - MYAPP_JDBC_USER=someuser
    - MYAPP_JDBC_PASS=pass
app-db:
  image: postgres:9.6
  environment:
    - POSTGRES_USER=someuser
    - POSTGRES_PASSWORD=pass
    - POSTGRES_DB=somedb
  expose:
    - 5432
  volumes_from:
    - app-db-data
app-db-data:
  image: cogniteev/echo
  command: echo 'Data Container for PostgreSQL'
  volumes:
    - /opt/postgresdata/:/var/lib/postgresql/data
The best option: host.docker.internal resolves to the host machine from inside a container (available out of the box on Docker Desktop for Windows and Mac):
jdbc:postgresql://host.docker.internal:5432/somedb
No need to thank me.
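Applied to the context.xml from the question, that would be:
connectionURL="jdbc:postgresql://host.docker.internal:5432/admin?stringtype=unspecified"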
I had to publish the port with -p 5432:5432:
docker run --name postgres -e POSTGRES_PASSWORD=secret -d -p 5432:5432 postgres
I was getting the same error, but this simple solution works perfectly for me:
sudo docker run -d --net="host" -it <IMAGE>
Now I can run my app at https://x.x.x.x:pppp/../.. and everything works fine. I hope this helps.