I am trying to configure Logstash in Docker. Elasticsearch is hosted on AWS EC2, and I need to insert data into it using Logstash. Below is the content of my docker-compose.yml:
version:"1"
services:
logstash:
image: docker.elastic.co/logstash/logstash:7.8.0
command: logstash -f logstash.conf
ports:
-"9200":9200
When I run "docker-compose up", it is not connecting to the AWS URL that is defined in "logstash.conf", instead it throwing below error.
error=>"Elasticsearch Unreachable: [http://elasticsearch:9200/][Manticore::ResolutionFailure] elasticsearch: Name or service not known"}
I'm trying to deploy a Docker Compose stack to Amazon ECS.
I have created this Docker Compose file:
services:
  consul-server:
    container_name: consul-server
    hostname: consul-server
    image: consul:1.12.2
    command: agent -server -ui -node=server1 -bootstrap-expect=1 -client=0.0.0.0
    ports:
      - 8400:8400
      - 8500:8500
      - 8600:8600/udp
    networks:
      - xp_network
  infra-service:
    image: infra-service-docker-img:latest
    build: .
    container_name: infra-service
    hostname: infra-service
    ports:
      - 5011:5011
    networks:
      - xp_network
networks:
  xp_network:
    name: xp-vpc
I create an ECS context to target Amazon ECS with the following command:
docker context create ecs ecscontext
I have AWS credentials set up in my local environment for authenticating with ECS (I ran aws configure and added the keys),
and then I use an existing AWS profile. After checking that the new context was created (docker context ls), I made sure I was using that context.
Run --> docker compose up
When I do docker-compose up -d, I can see "Container infra-service Started" and "Container consul-server Started".
When I check the state of the services, I cannot see any difference in the PORTS. There is no connection to AWS and no resources created there either.
Basically, I did:
$ aws configure
--> keys
--> region
$ docker compose build
$ docker context create ecs ecscontext
--> An existing AWS profile
$ docker context use ecscontext
$ docker compose up
$ docker compose ps
Please, can anyone tell me what I'm doing wrong? Do you think it's something related to the credentials setup?
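One thing worth checking: only the docker compose plugin (no hyphen) understands Docker contexts and the ECS integration, while the classic docker-compose v1 binary ignores the context and deploys to the local daemon, which would explain containers starting locally and nothing being created in AWS. A quick sanity check, as a sketch:
$ docker context show   # should print: ecscontext
$ docker compose up     # "docker compose", not "docker-compose"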
I have a TestNG project with Selenium for integration testing of a Vue.js frontend app and a Spring Boot backend. In order to run the tests, I first need to bring up all dependent projects:
Spring Boot and MongoDB
Vue.js frontend app
Each project is in its own repo.
So I have created Docker images of the Spring Boot and frontend apps and will push them to the GitLab container registry.
Then, in the TestNG project, I plan to use docker-compose in .gitlab-ci.yml. Here is the docker-compose.yml for the TestNG project:
version: '3.7'
services:
  frontendapp:
    image: demo.app-frontend-selenium
    container_name: frontend-app-selenium
    depends_on:
      - demoapi
    ports:
      - 8080:80
  demoapi:
    image: demo.app-backend-selenium
    container_name: demo-api-selenium
    depends_on:
      - mongodb
    environment:
      - SPRING_PROFILES_ACTIVE=prod
      - SCOUNT_API_ENDPOINTS_WEB_CORS_OPTIONS_ALLOWEDORIGINS=*
      - SPRING_DATA_MONGODB_HOST=mongodb
      - SPRING_DATA_MONGODB_DATABASE=demo-api-selenium
      - KEYCLOAK_AUTH-SERVER-URL=https://my-keycloak-url/auth
    ports:
      - 8082:80
  mongodb:
    image: mongo:4-bionic
    container_name: mongodb-selenium
    environment:
      MONGO_INITDB_DATABASE: demo-api-selenium
    ports:
      - 27017:27017
    volumes:
      - ./mongo-init.js:/docker-entrypoint-initdb.d/mongo-init.js:ro
After running docker-compose in gitlab-ci.yml, what will be the URL of the frontend app for executing the tests?
When I run it locally, I use the following URLs for testing:
frontend app: http://localhost:8080
api: http://localhost:8082
But when running on GitLab CI, what will be the URLs to access the frontend and the API?
TL;DR: instead of using localhost, you need to use the hostname of your docker daemon (docker:dind) service. If you set up docker-in-docker for your GitLab job per the usual setup, this is most likely docker.
So the URLs you need to use, according to your compose file, are:
frontend app: http://docker:8080
api: http://docker:8082
my_job:
  services:
    - name: docker:dind
      alias: docker # this is the hostname of the daemon
  variables:
    DOCKER_TLS_CERTDIR: ""
    DOCKER_HOST: "tcp://docker:2375"
  image: docker:stable
  script:
    - docker run -d -p 8000:80 strm/helloworld-http
    - apk update && apk add curl # install curl and let server start
    - curl http://docker:8000 # use the daemon to reach your containers
For a full explanation of this, read on.
Docker port mapping in GitLab CI vs locally
How it works locally
When you use docker-compose locally on your system, you are typically running the docker daemon on your localhost (e.g. using Docker Desktop).
A port mapping like 8080:80 publishes port 8080 on the daemon host, bound to port 80 in the container. When running locally, that means you can reach the container via localhost.
In GitLab
However, when you're running docker-in-docker on GitLab CI, the important difference is that the docker daemon is remote. So when you expose ports through the docker API, they are exposed on the docker daemon host, not locally in your job container.
Hence, you must use the hostname of the docker daemon, not localhost, to reach your started containers.
Alternative solutions
An alternative to this would be to conduct your testing inside the same docker network that you create with your compose stack. That way, your testing is agnostic of where the docker environment lives and can, for example, leverage the service aliases in your compose file (like frontendapp, demoapi, etc) instead of relying on published ports.
For example, you may choose to add a test container to your compose stack, as sketched below. Testing libraries like Testcontainers can help set this up, too.
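A hypothetical test-runner service added under services: in the compose file above could reach the other services by their compose aliases and container ports; the image and command here are placeholders for whatever actually runs your TestNG suite:
  test-runner:
    image: your-testng-runner:latest # hypothetical image with JDK + browser driver
    depends_on:
      - frontendapp
    # inside the compose network, use container ports (80), not the published ones
    command: mvn test -Dapp.url=http://frontendapp:80 -Dapi.url=http://demoapi:80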
I have built a Docker Compose stack consisting of a MongoDB database and an Azure HTTP-triggered function. The following YAML is the docker compose file:
version: '3.4'
services:
  mongo:
    image: mongo
    container_name: mongodb
    restart: always
    ports:
      - 37017:27017
  storage.emulator:
    image: "mcr.microsoft.com/azure-storage/azurite:latest"
    container_name: storage.emulator
    ports:
      - 20000:10000
      - 20001:10001
      - 20002:10002
  my.functions:
    image: ${DOCKER_REGISTRY-}myfunctions
    build:
      context: .
      dockerfile: domain/My.Functions/Dockerfile
    ports:
      - 9080:80
    depends_on:
      - storage.emulator
      - mongo
The MongoDB instance runs well, and I am able to connect to it from outside the container using the mongodb://localhost:37017 connection string to seed some data.
The Azure Function running inside the container is supposed to communicate with the MongoDB instance via the mongodb://localhost:27017 connection string, but it fails with the following error message:
"Unspecified/localhost:27017", ReasonChanged: "Heartbeat", State:
"Disconnected", ServerVersion: , TopologyVersion: , Type: "Unknown",
HeartbeatException: "MongoDB.Driver.MongoConnectionException: An
exception occurred while opening a connection to the server.
my.functions_1 | ---> System.Net.Sockets.SocketException (99):
Cannot assign requested address
How can I address this problem? Why is MongoDB unavailable to the Azure Function inside the same compose stack?
Docker containers run on isolated networks of their own. So when you try to connect to localhost, you are actually connecting to the my.functions container itself, which obviously has no MongoDB service running on that port.
You should use the docker-compose service name instead:
mongodb://mongo:27017
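If the connection string is configurable, you can inject it through the compose file. A minimal sketch, assuming your function reads it from an environment variable (MongoDbConnectionString is a placeholder for whatever configuration key your code actually reads):
  my.functions:
    environment:
      # hypothetical variable name; match your function's configuration key
      - MongoDbConnectionString=mongodb://mongo:27017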
I have the following docker-compose.yml:
version: '3.7'
services:
  postgres-service:
    image: postgres:12.3
    env_file:
      - .env
    ports:
      - '5432:5432'
    volumes:
      - /postgres/data/
This is my .env:
POSTGRES_USER=app_postgres_user
POSTGRES_PASSWORD=foobar
POSTGRES_DB=app_database
I know postgres-service is working, because I can connect to it manually and it works with the following commands:
docker-compose run postgres-service bash # connect to postgres-service
psql --host=postgres-service --username=app_postgres_user --dbname=app_database
But when I try to connect from "Webstorm > Database", I get this error:
The connection attempt failed.
java.net.UnknownHostException: postgres-service.
If Webstorm is running on the same host as the container, replace postgres-service with localhost.
If it is running elsewhere, replace postgres-service with the IP address of the docker host machine where the container resides.
I used your docker-compose file and connected successfully with DBeaver.
Your postgres container resides in a virtual network (e.g. 172.17.0.0/16). By default, there is no route from your machine to that network.
When you use
ports:
  - 'src:dest'
in your docker-compose.yml file, a DNAT rule is created from host:src to container:dest, and that is why you connect via localhost:src or the IP address of the docker host.
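Concretely, with the compose file and .env above, the host-side connection settings for Webstorm (or DBeaver) are:
jdbc:postgresql://localhost:5432/app_database
user: app_postgres_user
password: foobar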
These days I am trying to deploy my Spring Boot OAuth2 project. It has 3 different modules (Authentication Server, Resource Server and Front-end).
The Authentication and Resource servers have their own *.yml files for configuration such as the MongoDB name and port, server profile and IP, etc.
What am I trying to do exactly? I want to deploy the Spring Boot applications on Docker, but I don't want to put my database (MongoDB) on Docker as a container.
I am not sure whether this structure is possible or not.
When I run MongoDB locally (localhost:27017) and then deploy the Spring Boot application on local Docker as a container, I get a timeout exception for MongoDB. The application couldn't connect to the external MongoDB (the non-Docker-container one).
What should I do? Should I run MongoDB on Docker? I tried that too; Mongo runs successfully, but the Spring container still couldn't start and connect to Mongo.
I tried running another Spring Boot app without MongoDB; it works successfully, and when I made a request from the browser by IP and port, I got a response from the application as expected.
**** MONGO URL ****
mongodb://127.0.0.1:27017/db-localhost
**** Authentication server .yml file ****
server:
  port: 9080
  contextPath: /auth-service
  tomcat:
    access_log_enabled: true
    basedir: target/tomcat
security:
  basic:
    enabled: false
spring:
  profiles:
    active: development
  thymeleaf:
    cache: false
mongo:
  db:
    server: 127.0.0.1
    port: 27017
logging:
  level:
    org.springframework.security: DEBUG
---
spring:
  profiles: development
  data:
    mongodb:
      database: db-localhost
---
spring:
  profiles: production
  data:
    mongodb:
      database: db-prod
---
**** DOCKERFILE ****
FROM java:8
VOLUME /tmp
ADD auth-server-1.0-SNAPSHOT.jar app.jar
EXPOSE 9080
RUN bash -c 'touch /app.jar'
ENTRYPOINT ["java","-Djava.security.egd=file:/dev/./urandom","-jar","/app.jar"]
**** DOCKER COMMAND ****
docker run -it -P --name authserver authserver
The issue with your configuration is that you reference MongoDB from inside the auth server container via 127.0.0.1, which is the loopback adapter inside that container. So you are telling your Spring application that MongoDB runs in the same container as the auth server application, which is not the case.
Either you run your database as its own container (which requires handling the data volumes correctly) and reference it using the container name as hostname (via a link), or you reference the externally running MongoDB instance with the correct address. That would be the IP address of the machine running the docker daemon (for your local environment, I assume something like 192.168.0.xxx).
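For example, on Docker Desktop (Mac/Windows) the special hostname host.docker.internal resolves to the machine running the daemon, so a container can reach a MongoDB instance on the host with a connection string like the one below; on plain Linux, substitute the host's actual IP address:
mongodb://host.docker.internal:27017/db-localhost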
Question: What should I do?
At least for development purposes, I would recommend using Docker for your MongoDB instance as well. I had a similar setup with RabbitMQ in addition, and using Docker for those solved a lot of problems too. Using docker-compose to set everything up makes it even easier. Later, you can still specify which MongoDB instance you want to use through your Spring properties.
Problem: "I tried it also, Mongo runs successfully but still spring container couldn't run and connect to mongo"
The problem is probably that you have not set up any networks or hostnames for your services. Your Spring application cannot resolve the hostname of your Mongo server, since you specified 127.0.0.1 as the MongoDB server in your properties.
I would recommend using Docker for your MongoDB and a docker-compose.yml file like this to set everything up:
version: '3.7'
services:
  resource-server:
    image: demo/resource-server:latest
    container_name: resource-server
    depends_on:
      - mongodb-example
    networks:
      - your-network
    ports:
      - 8080:8080
  auth-server:
    image: demo/auth-server:latest
    container_name: auth-server
    depends_on:
      - mongodb-example
    networks:
      - your-network
    ports:
      - 8081:8080
  mongodb-example:
    image: mongo:latest
    container_name: mongo-example
    hostname: mongo-example
    networks:
      - your-network
    ports:
      - 27017:27017
networks:
  your-network:
    name: network-name
Of course, you then need to adapt your property files or specify environment variables through your docker-compose.yml file.
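A sketch of such an override, assuming the services bind Spring Boot's standard spring.data.mongodb.* properties through relaxed binding (if your apps read the custom mongo.db.server property instead, adjust the variable names accordingly):
  auth-server:
    environment:
      - SPRING_DATA_MONGODB_HOST=mongodb-example
      - SPRING_DATA_MONGODB_PORT=27017
      - SPRING_DATA_MONGODB_DATABASE=db-prod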