How to link a Spring Boot app to an existing Docker container for the database? - postgresql

I want to run my app in a Docker container together with other Docker containers, one for Postgres and one for Solr.
My docker-compose file is:
version: '3'
services:
  core:
    build: ./core
    ports:
      - "8081:8081"
    environment:
      - "SPRING_PROFILES_ACTIVE=production"
    links:
      - postgresdb
      - solrdb
  postgresdb:
    image: postgres:9.4
    container_name: postgres
    ports:
      - "5432:5432"
    environment:
      - DB_DRIVER=org.postgresql.Driver
      - DB_URL=jdbc:postgresql://localhost:5432/db
      - DB_USERNAME=db
      - DB_PASSWORD=db
    networks:
      default:
  solrdb:
    image: solr:5.5
    container_name: solr
    ports:
      - "8983:8983"
    environment:
      - DB_URL=http://localhost:8984/solr
    networks:
      default:
networks:
  default:
    external:
      name: mynet
I already have containers created for Solr and Postgres; I just want to use them. How can I do that?

You have already exposed the ports for solrdb and postgresdb, so from your other container you can reach these databases by service name and exposed port.
For example, Solr should be accessed via solrdb:8983 and
Postgres should be accessed via postgresdb:5432.
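For example, a minimal sketch of the core service pointing at those service names instead of localhost (the DB_* variable names are taken from your compose file, while SOLR_URL is a hypothetical name; use whatever variables your Spring production profile actually reads):
core:
  build: ./core
  ports:
    - "8081:8081"
  environment:
    - "SPRING_PROFILES_ACTIVE=production"
    # assumed variable names; map them to spring.datasource.* and your Solr client config
    - DB_DRIVER=org.postgresql.Driver
    - DB_URL=jdbc:postgresql://postgresdb:5432/db
    - DB_USERNAME=db
    - DB_PASSWORD=db
    - SOLR_URL=http://solrdb:8983/solr
  networks:
    default: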
Edit:
Make sure that both containers are running on the same network. You need to add this networks field to every container.
postgresdb:
  image: postgres:9.4
  container_name: postgresdb
  ports:
    - "5432:5432"
  environment:
    - DB_DRIVER=org.postgresql.Driver
    - DB_URL=jdbc:postgresql://localhost:5432/db
    - DB_USERNAME=db
    - DB_PASSWORD=db
  networks:
    default:
and at the end of the whole docker-compose file:
networks:
  default:
    external:
      name: <your-network-name>
And make sure that your predefined network exists before these containers start.
To create the network:
docker network create --driver overlay --scope global <your-network-name>
Note: your container ('postgresdb') will be reachable at postgresdb:5432
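If everything runs on a single Docker host, a plain user-defined bridge network is enough (the overlay driver is only required for a swarm). A rough sketch of the workflow, assuming the network name mynet from the question:
# create the external network once; bridge is the default driver on a single host
docker network create mynet
# start the stack; compose attaches the services to the existing network
docker-compose up -d
# optionally verify which containers are attached
docker network inspect mynet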

Related

Integrate elasticsearch with multiple mongodb in docker-compose

I have implemented a microservice architecture with several servers and databases. I have installed Elasticsearch with Docker, and when I do docker-compose up, everything seems to run fine.
However, I would like to integrate Elasticsearch with the several databases (2 MongoDB instances in the sample below) in the system. How do I sync the two MongoDB instances in two different containers with Elasticsearch so that I can search them?
client:
  container_name: client
  stdin_open: true
  build:
    context: ./client
    dockerfile: Dockerfile
  restart: always
  volumes:
    - './client:/app'
  ports:
    - '1000:3000'
  environment:
    - NODE_ENV=development
    - CHOKIDAR_USEPOLLING=true
weatherdb:
  container_name: weather-db
  image: mongo
  restart: always
  ports:
    - '2002:27017'
  volumes:
    - ./weather_service/weather_db:/data/db
  networks:
    - backend
weather-service:
  container_name: weather-service
  build: ./weather_service
  restart: always
  ports:
    - "1002:3000"
  depends_on:
    - weatherdb
  links:
    - elasticsearch
  networks:
    - backend
newsdb:
  container_name: news-db
  image: mongo
  restart: always
  ports:
    - '2003:27017'
  volumes:
    - ./news_service/news_db:/data/db
  networks:
    - backend
news-service:
  container_name: news-service
  build: ./news_service
  restart: always
  ports:
    - "1003:3000"
  depends_on:
    - newsdb
  links:
    - elasticsearch
  networks:
    - backend
elasticsearch:
  image: docker.elastic.co/elasticsearch/elasticsearch:7.4.0
  container_name: elasticsearch
  restart: always
  ports:
    - 9200:9200
    - 9300:9300
  environment:
    ES_JAVA_OPTS: '-Xms512m -Xmx512m'
    network.bind_host: 0.0.0.0
    network.host: 0.0.0.0
    discovery.type: single-node
  volumes:
    - ./elasticsearch/esdata:/usr/share/elasticsearch/data
  networks:
    - backend
It's very simple to just add an elasticsearch section to any docker-compose file and start it; these are all independent Docker containers, and as long as their exposed host ports don't interfere with each other and you have the correct configuration in place, it should work.
Please refer to the elasticsearch multi-docker installation using a docker file for more info.
NOTE: You have not mentioned what exact issue you are facing; you have only said that everything, i.e. all Docker containers, is running fine, so please explain in detail what exactly you are trying to solve.
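At the networking level, the integration simply means each service talks to Elasticsearch by its service name on the shared backend network rather than localhost. A minimal sketch for one of the services, assuming it reads the endpoint from an environment variable (ELASTICSEARCH_URL is a hypothetical name; depends_on stands in for the deprecated links):
weather-service:
  container_name: weather-service
  build: ./weather_service
  restart: always
  ports:
    - "1002:3000"
  environment:
    # hypothetical variable; point the app at the service name, not localhost
    - ELASTICSEARCH_URL=http://elasticsearch:9200
  depends_on:
    - weatherdb
    - elasticsearch
  networks:
    - backend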

How to attach persistent volume in docker-compose file for mongodb?

I have a docker-compose file that will bring up mongo and mongo-express containers in the same network "mynet".
I have created a network by:
docker network create mynet
I have created a volume named "demo-vol" externally with the docker command:
docker volume create demo-vol
Inside the container, I have created a sample mongo collection.
When I do docker-compose up I'm able to see the container running but I'm not able to find the mongo data in that specified volume.
Below is my docker-compose.yaml file
version: '3'
services:
  mongo:
    image: mongo
    container_name: mymongo
    environment:
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: example
    volumes:
      - "/demo-vol:/data/db"
    networks:
      - mynet
    ports:
      - 27017:27017
  mongoexpress:
    image: mongo-express
    container_name: mymongoexpress
    ports:
      - 8081:8081
    volumes:
      - "/demo-vol:/data/db"
    environment:
      ME_CONFIG_MONGODB_ADMINUSERNAME: root
      ME_CONFIG_MONGODB_ADMINPASSWORD: example
    depends_on:
      - mongo
    networks:
      - mynet
volumes:
  demo-vol:
    external: true
networks:
  mynet:
    external: true
What I need is:
Even after deleting the container, I want my data to be persistent.
How do I do that, and where am I going wrong? Please explain.
Note: I'm a beginner to Docker concepts.
Thanks in advance.
You can use the local driver for the volume:
volumes:
  demo-vol:
    driver: local
and try removing the leading slash:
volumes:
  - demo-vol:/data/db
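For context: a leading slash makes the left-hand side a host path, so /demo-vol:/data/db bind-mounts a host directory called /demo-vol instead of using your external named volume. A sketch of the mongo service using the named volume (mongo-express is only a web UI, so it does not need a /data/db mount at all):
services:
  mongo:
    image: mongo
    container_name: mymongo
    environment:
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: example
    volumes:
      # named volume, not a host path: the data survives container removal
      - demo-vol:/data/db
    networks:
      - mynet
    ports:
      - 27017:27017
volumes:
  demo-vol:
    external: true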

How to access a postgres docker container from another docker container without an IP address

How do I access a postgres docker container from another docker container without an IP address?
I want to store data in Postgres by using myweb; in the jar the host is given as localhost:5432/db..
Here is my compose file:
version: "3"
services:
  myweb:
    build: ./myweb
    container_name: app
    ports:
      - "8080:8080"
      - "9090:9090"
    networks:
      - front-tier
      - back-tier
    depends_on:
      - "postgresdb"
  postgresdb:
    build: ./mydb
    image: ppk:postgres9.5
    volumes:
      - dbdata:/var/lib/postgresql
    ports:
      - "5432:5432"
    networks:
      - back-tier
volumes:
  dbdata: {}
networks:
  front-tier:
  back-tier:
Instead of the localhost:5432/db.. connection string, use postgresdb:5432/db..
By default, a container is reachable under the same hostname as its service name.
Here is my minimal working example, which connects a Java client (boxfuse/flyway) to a Postgres server. The most important part is the health check, which delays the start of the myweb container until Postgres is ready to accept connections.
Note that this can be run directly with docker-compose up; it doesn't have any other dependencies. Both images are from Docker Hub.
version: '2.1'
services:
  myweb:
    image: boxfuse/flyway
    command: -url=jdbc:postgresql://postgresdb/postgres -user=postgres -password=123 info
    depends_on:
      postgresdb:
        condition: service_healthy
  postgresdb:
    image: postgres
    environment:
      - POSTGRES_PASSWORD=123
    healthcheck:
      test: "pg_isready -q -U postgres"
That is a Docker networking problem. The solution is to use postgresdb:5432/db in place of localhost:5432/db, because the two services are on the same network named back-tier, and Docker's embedded DNS resolves service names so the two containers can communicate. I think this solution will help you.

with docker-compose my app container cannot see the mongodb container

Here is my docker compose file:
version: "3.3"
services:
  test:
    image: test
    networks:
      - mongo_net
    ports:
      - 4000:80
    depends_on:
      - mongodb
    links:
      - mongodb
  mongodb:
    image: mongo:latest
    networks:
      - mongo_net
    ports:
      - 27017:27017
    volumes:
      - local_data:/data/db
volumes:
  local_data:
networks:
  mongo_net:
    driver: bridge
The 'test' image cannot find the 'mongodb' instance.
My assumption is that the 'links' section would connect the two, but it is not happening.
What am I missing?
For your compose file, try just using depends_on. links is deprecated, and maybe that's why you are currently getting issues, since this is a v3 compose file.
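A sketch of the same compose file without links; both services already share mongo_net, so the test container can reach the database at mongodb://mongodb:27017 (make sure the app does not connect to localhost):
test:
  image: test
  networks:
    - mongo_net
  ports:
    - 4000:80
  depends_on:
    - mongodb   # links removed; service-name DNS on mongo_net is enough
mongodb:
  image: mongo:latest
  networks:
    - mongo_net
  ports:
    - 27017:27017
  volumes:
    - local_data:/data/db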

RabbitMq refuses connection when run in docker

My docker-compose file looks like this:
version: '2'
services:
  explore:
    image: explore
    build:
      context: ./Explore
      dockerfile: VsDockerfile
    environment:
      - "ElasticUrl=http://localhost:9200"
      - "RabbitMq/Host=localhost"
      - "RabbitMq/Username=guest"
      - "RabbitMq/Password=guest"
    networks:
      - localnet
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:5.4.3
    container_name: elasticsearch
    environment:
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ports:
      - 9200:9200
    volumes:
      - ./esdata:/usr/share/elasticsearch/data
    networks:
      - localnet
  rabbit:
    image: rabbitmq:3.6.7-management
    hostname: rabbit
    ports:
      - 15672:15672
      - 5672:5672
    networks:
      - localnet
networks:
  localnet:
    external:
      name: localnet
If I type http://localhost:15672 in the browser, I get the RabbitMQ interface, but if I try to connect from my Explore project like this:
public SqlToRabbitProcessor(SqlToRabbitRepository sqlToRabbitRepository)
{
    _sqlToRabbitRepository = sqlToRabbitRepository;
    var factory = new ConnectionFactory
    {
        HostName = Environment.GetEnvironmentVariable("RabbitMq/Host"),
        UserName = Environment.GetEnvironmentVariable("RabbitMq/Username"),
        Password = Environment.GetEnvironmentVariable("RabbitMq/Password")
    };
    var rabbit = factory.CreateConnection();
    channel = rabbit.CreateModel();
}
Then it breaks in the line
var rabbit = factory.CreateConnection();
with the error saying
ExtendedSocketException: Connection refused 127.0.0.1:5672
System.Net.Sockets.Socket.EndConnect(IAsyncResult asyncResult)
ConnectFailureException: Connection failed
RabbitMQ.Client.EndpointResolverExtensions.SelectOne(IEndpointResolver resolver, Func selector)
BrokerUnreachableException: None of the specified endpoints were reachable
RabbitMQ.Client.ConnectionFactory.CreateConnection(IEndpointResolver endpointResolver, string clientProvidedName)
As my comment under the question suggested, it's because the "localhost" defined in the web application part is the container's own localhost, not the Docker host.
I just needed to change
- "ElasticUrl=http://localhost:9200"
- "RabbitMq/Host=localhost"
to
- "ElasticUrl=http://elasticsearch:9200"
- "RabbitMq/Host=rabbit"
I had the same issue with docker-compose.
I solved it with hostname:
rabbit:
  hostname: rabbit
  command: sh -c "rabbitmq-plugins enable rabbitmq_management; rabbitmq-server"
  image: rabbitmq
  environment:
    RABBITMQ_DEFAULT_USER: admin
    RABBITMQ_DEFAULT_PASS: admin
  ports:
    - 5672:5672
    - 15672:15672
Follow instructions on this post.
Just to benefit people who stumble upon this question: the --link feature is now considered legacy and is a prime candidate for deprecation by Docker.
The easiest way is to use
depends_on:
In order to do this, it's recommended to first create a network like so:
docker network create <network_name>
Then use docker-compose up to spawn services that bind with each other. Look at the example below, where I've bound my Spring Boot app to RabbitMQ. You can clone my repo from here.
version: "3.1"
services:
  rabbitmq-container:
    image: rabbitmq:3.5.3-management
    hostname: rabbitmq-container
    ports:
      - 5673:5673
      - 5672:5672
      - 15672:15672
    networks:
      - resolute
  resolute-container:
    build: .
    ports:
      - 8080:8080
    environment:
      - spring_rabbitmq_host=rabbitmq-container
      - spring_rabbitmq_port=5672
      - spring_rabbitmq_username=guest
      - spring_rabbitmq_password=guest
      - resolute_rabbitmq_publishQueueName=resolute-run-request
      - resolute_rabbitmq_exchange=resolute
    depends_on:
      - rabbitmq-container
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    networks:
      - resolute
networks:
  resolute:
    external:
      name: resolute
See how I've created a network called resolute and bound the apps to the same network. I've also given my rabbitmq-container a hostname. This is because Docker Compose prefixes generated container names with the project name, which makes it difficult to bind services by container name.