I know there are some similar questions on the forum, but none of the solutions really worked out.
I am trying to connect Grafana and InfluxDB with a Docker Compose file, but every time I get a Bad Gateway error. Here is the file:
services:
  grafana:
    image: grafana/grafana
    container_name: grafana
    restart: always
    ports:
      - 3000:3000
    networks:
      - grafana_network
    volumes:
      - grafana_data:/var/lib/grafana
    depends_on:
      - influxdb
  influxdb:
    image: influxdb:latest
    container_name: influxdb
    restart: always
    ports:
      - 8086:8086
    networks:
      - grafana_network
    volumes:
      - influxdb_data:/var/lib/influxdb
    environment:
      - INFLUXDB_DB=grafana
      - INFLUXDB_USER=grafana
      - INFLUXDB_USER_PASSWORD=password
      - INFLUXDB_ADMIN_ENABLED=true
      - INFLUXDB_ADMIN_USER=admin
      - INFLUXDB_ADMIN_PASSWORD=password
networks:
  grafana_network:
volumes:
  grafana_data:
  influxdb_data:
(screenshot: Bad Gateway error)
I already tried changing influxdb to localhost or to the IP address, but that didn't help; only the error changed to Bad Request. Any recommendations?
Big thanks!
I had the same problem.
Selecting Flux as the query language instead of InfluxQL worked for me.
Don't forget to use the URL http://influxdb:8086, since influxdb is the name of your InfluxDB service within Docker Compose.
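For reference, here is a minimal sketch of what a provisioned Grafana data source pointing at the Compose service could look like (the organization, bucket, and token values are placeholders for your own InfluxDB 2.x setup, and the file path is just an example):

# e.g. grafana/provisioning/datasources/influxdb.yml (example path)
apiVersion: 1
datasources:
  - name: InfluxDB
    type: influxdb
    access: proxy
    url: http://influxdb:8086      # Compose service name, not localhost
    jsonData:
      version: Flux                # query language, per the answer above
      organization: my-org         # placeholder
      defaultBucket: grafana       # placeholder
    secureJsonData:
      token: my-token              # placeholder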
Related
I'm trying to configure Nextcloud on my DigitalOcean server (Debian 11), using Nginx Proxy Manager and Nextcloud under Docker.
I changed the root directory of docker-compose (since I was out of disk space, I added a volume and mounted it at /var/lib/docker/volumes/volume_nyc1_01).
I created a new folder called nextcloud, and inside it created the docker-compose file below:
version: "3"

volumes:
  nextcloud-data:
  nextcloud-db:
  npm-data:
  npm-ssl:
  npm-db:

networks:
  frontend:
    # add this if the network already exists!
    # external: true
  backend:

services:
  nextcloud-app:
    image: nextcloud
    restart: always
    volumes:
      - nextcloud-data:/var/lib/docker/volumes/volume_nyc1_01/var/www/html
    environment:
      - MYSQL_PASSWORD=raspberrypi
      - MYSQL_DATABASE=nextcloud
      - MYSQL_USER=nextcloud
      - MYSQL_HOST=nextcloud-db
    networks:
      - frontend
      - backend
  nextcloud-db:
    image: mariadb
    restart: always
    command: --transaction-isolation=READ-COMMITTED --binlog-format=ROW
    volumes:
      - nextcloud-db:/var/lib/docker/volumes/volume_nyc1_01/var/lib/mysql
    environment:
      - MYSQL_ROOT_PASSWORD=raspberrypi
      - MYSQL_PASSWORD=raspberrypi
      - MYSQL_DATABASE=nextcloud
      - MYSQL_USER=nextcloud
    networks:
      - backend
  npm-app:
    image: jc21/nginx-proxy-manager:latest
    restart: always
    ports:
      - "80:80"
      - "81:81"
      - "443:443"
    environment:
      - DB_MYSQL_HOST=npm-db
      - DB_MYSQL_PORT=3306
      - DB_MYSQL_USER=npm
      - DB_MYSQL_PASSWORD=raspberrypi
      - DB_MYSQL_NAME=npm
    volumes:
      - npm-data:/var/lib/docker/volumes/volume_nyc1_01/data
      - npm-ssl:/var/lib/docker/volumes/volume_nyc1_01/etc/letsencrypt
    networks:
      - frontend
      - backend
  npm-db:
    image: jc21/mariadb-aria:latest
    restart: always
    environment:
      - MYSQL_ROOT_PASSWORD=raspberrypi
      - MYSQL_DATABASE=npm
      - MYSQL_USER=npm
      - MYSQL_PASSWORD=raspberrypi
    volumes:
      - npm-db:/var/lib/docker/volumes/volume_nyc1_01/var/lib/mysql
    networks:
      - backend
As you may have noticed, I created each folder after volume_nyc1_01 with mkdir.
Finally, I started the server from /var/lib/docker/volumes/volume_nyc1_01/nextcloud using docker-compose up -d.
Once logged in at ip-address-of-server:81, I created the proxy host with the domain name mydomain.com and the forward hostname/IP nextcloud-app on port 80, then saved it.
When I visit the domain name, it just doesn't show anything. The same happens when I try to set up SSL.
I know I'm missing something, but I have searched a lot and couldn't find anything. I really appreciate any help or suggestions.
Cheers
I am currently trying to reproduce the tutorial on developing an application/solution with Fiware, Desarrolla tu primer aplicación en Fiware, and I am having difficulty connecting Grafana with the Crate database (recommended by Fiware).
My docker-compose file configuration is:
version: "3.5"
services:
  orion:
    image: fiware/orion
    hostname: orion
    container_name: fiware-orion
    networks:
      - fiware_network
    depends_on:
      - mongo-db
    ports:
      - "1026:1026"
    command: -dbhost mongo-db -logLevel DEBUG -noCache
  mongo-db:
    image: mongo:3.6
    hostname: mongo-db
    container_name: db-mongo
    networks:
      - fiware_network
    ports:
      - "27017:27017"
    command: --bind_ip_all --smallfiles
    volumes:
      - mongo-db:/data
  grafana:
    image: grafana/grafana:8.2.6
    container_name: fiware-grafana
    networks:
      - fiware_network
    ports:
      - "3000:3000"
    depends_on:
      - crate
  quantumleap:
    image: fiware/quantum-leap
    container_name: fiware-quantumleap
    networks:
      - fiware_network
    ports:
      - "8668:8668"
    depends_on:
      - mongo-db
      - orion
      - crate
    environment:
      - CRATE_HOST=crate
  crate:
    image: crate:1.0.5
    networks:
      - fiware_network
    ports:
      # Admin UI
      - "4200:4200"
      # Transport protocol
      - "4300:4300"
    command: -Ccluster.name=democluster -Chttp.cors.enabled=true -Chttp.cors.allow-origin="*"
    volumes:
      - cratedata:/data
volumes:
  mongo-db:
  cratedata:
networks:
  fiware_network:
    driver: bridge
After starting the containers, I get a positive response from Orion (OCB) and from QuantumLeap, and after creating the subscription between Orion and QuantumLeap, the data is stored and updated correctly in the Crate database.
Unfortunately, I am not able to visualize the data in Grafana.
I thought the reason was that CrateDB was removed as a Grafana plugin several versions ago, but after researching how to connect CrateDB to Grafana through a PostgreSQL data source (I read Visualizing time series data with Grafana and CrateDB), I'm still having trouble getting the connection: in Grafana I get "Query error: dbquery error: EOF".
(screenshot: Grafana data source settings)
The difference with respect to the guide is the listening port: with port 5432 I get a response indicating that it is not possible to connect to the server, so I'm using port 4300.
After configuring it and trying to query from Grafana, I get the mentioned EOF error.
(screenshot: EOF error in Grafana)
I tried to connect from a database IDE (DBeaver) and I get exactly the same problem.
(screenshot: EOF error in DBeaver)
Is there something I am missing?
What should I change in my Docker configuration, ports, or anything else to fix this?
I think it is worth mentioning that I am studying this because I am being asked in a project to visualize context switches in real time with Fiware and Grafana.
Thanks in advance
The PostgreSQL port 5432 must also be exposed by the CrateDB Docker image.
Additionally, I highly recommend using a recent (or just the latest) CrateDB version; the current stable release is 5.0.0, and the version you are using, 1.0.5, is very old and no longer maintained.
Full example entry:
crate:
  image: crate:latest
  networks:
    - fiware_network
  ports:
    # Admin UI
    - "4200:4200"
    # Transport protocol
    - "4300:4300"
    # PostgreSQL protocol
    - "5432:5432"
  command: -Ccluster.name=democluster -Chttp.cors.enabled=true -Chttp.cors.allow-origin="*"
  volumes:
    - cratedata:/data
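With 5432 published, the Grafana PostgreSQL data source can point at the CrateDB service directly. A minimal provisioning sketch, assuming CrateDB's default crate user and doc schema and no TLS inside the Compose network (adjust these to your setup):

# e.g. grafana/provisioning/datasources/cratedb.yml (example path)
apiVersion: 1
datasources:
  - name: CrateDB
    type: postgres
    access: proxy
    url: crate:5432        # Compose service name and PostgreSQL wire protocol port
    database: doc          # CrateDB's default schema (assumption)
    user: crate            # CrateDB's default user (assumption)
    jsonData:
      sslmode: disable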
I have tried many configurations and scenarios based around this, which is mostly from a tutorial that stops at one Ghost instance. I am trying to scale it to two with docker-compose up -d --scale ghost=2. When I hit the individual IPs of the Ghost containers, they work, but port 80 returns a 503.
version: "3.1"

volumes:
  mysql-volume:
  ghost-volume:

networks:
  ghost-network:

services:
  mysql:
    image: mysql:5.7
    container_name: mysql
    volumes:
      - mysql-volume:/var/lib/mysql
    networks:
      - ghost-network
    environment:
      MYSQL_ROOT_PASSWORD: root
      MYSQL_DATABASE: db
      MYSQL_USER: blog-user
      MYSQL_PASSWORD: supersecret
  ghost:
    build: ./ghost
    image: laminar/ghost:3.0
    volumes:
      - ghost-volume:/var/lib/ghost/content
    networks:
      - ghost-network
    restart: always
    ports:
      - "2368"
    environment:
      database__client: mysql
      database__connection__host: mysql
      database__connection__user: blog-user
      database__connection__password: supersecret
      database__connection__database: db
    depends_on:
      - mysql
    entrypoint: ["wait-for-it.sh", "mysql", "--", "docker-entrypoint.sh"]
    command: ["node", "current/index.js"]
  haproxy:
    image: eeacms/haproxy
    depends_on:
      - ghost
    ports:
      - "80:5000"
      - "1936:1936"
    environment:
      BACKENDS: "ghost"
      DNS_ENABLED: "true"
      LOG_LEVEL: "info"
What I get on localhost:80 is a 503 error. This particular eeacms/haproxy image is supposed to be self-configuring. Any help appreciated.
I needed to add a backend URL to the environment and also tell Ghost it was installed in an alternate location by adding URL: localhost:5050.
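As a rough sketch, assuming the Compose file above, the Ghost service's environment would gain a url entry along these lines (the host and port come from the answer and depend on how you expose HAProxy, so treat them as placeholders):

ghost:
  environment:
    # ...existing database__* settings...
    url: http://localhost:5050   # Ghost's public URL, per the answer above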
I have implemented a microservice architecture with several servers and databases. I have installed Elasticsearch with Docker, and when I run docker-compose up, everything seems to run fine.
However, I would like to integrate Elasticsearch with the several databases in the system (two MongoDB instances in the sample below). How do I sync the two MongoDB containers with Elasticsearch so that I can search them?
client:
  container_name: client
  stdin_open: true
  build:
    context: ./client
    dockerfile: Dockerfile
  restart: always
  volumes:
    - './client:/app'
  ports:
    - '1000:3000'
  environment:
    - NODE_ENV=development
    - CHOKIDAR_USEPOLLING=true

weatherdb:
  container_name: weather-db
  image: mongo
  restart: always
  ports:
    - '2002:27017'
  volumes:
    - ./weather_service/weather_db:/data/db
  networks:
    - backend

weather-service:
  container_name: weather-service
  build: ./weather_service
  restart: always
  ports:
    - "1002:3000"
  depends_on:
    - weatherdb
  links:
    - elasticsearch
  networks:
    - backend

newsdb:
  container_name: news-db
  image: mongo
  restart: always
  ports:
    - '2003:27017'
  volumes:
    - ./news_service/news_db:/data/db
  networks:
    - backend

news-service:
  container_name: news-service
  build: ./news_service
  restart: always
  ports:
    - "1003:3000"
  depends_on:
    - newsdb
  links:
    - elasticsearch
  networks:
    - backend

elasticsearch:
  image: docker.elastic.co/elasticsearch/elasticsearch:7.4.0
  container_name: elasticsearch
  restart: always
  ports:
    - 9200:9200
    - 9300:9300
  environment:
    ES_JAVA_OPTS: '-Xms512m -Xmx512m'
    network.bind_host: 0.0.0.0
    network.host: 0.0.0.0
    discovery.type: single-node
  volumes:
    - ./elasticsearch/esdata:/usr/share/elasticsearch/data
  networks:
    - backend
It's very simple to just add an Elasticsearch section to any docker-compose file and start it; these are all independent Docker containers, and as long as their exposed ports on the host don't interfere with each other and you have the correct configuration in place, it should work.
Please refer to elasticsearch multi-docker installation using docker file for more info.
NOTE: You have not mentioned what exact issue you are facing; you have said that everything, i.e. all the Docker containers, is running fine, so please explain in detail what exactly you are trying to solve.
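As a side note, the snippet in the question references a backend network but never declares it; for it to be a complete compose file, the services need to sit under a top-level services: key and the network needs to be defined, roughly like this:

services:
  # ...the services shown above...

networks:
  backend:
    driver: bridge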
So I have the following docker-compose.yml
version: "3.7"
services:
  roundclinic-mysql:
    image: mysql:5.7
    networks:
      - spring-boot-mysql-network
    environment:
      - MYSQL_DATABASE=
      - MYSQL_USER=
      - MYSQL_PASSWORD=
      - MYSQL_ROOT_PASSWORD=
    volumes:
      - ./mysqldata:/var/lib/mysql:rw,delegated
    ports:
      - "3306:3306"
  web-service:
    image: roundclinic/roundclinic:latest
    networks:
      - spring-boot-mysql-network
      - traefik-network
    depends_on:
      - roundclinic-mysql
    ports:
      - 8080:8080
    environment:
      - "SPRING_PROFILES_ACTIVE=dev"
    links:
      - roundclinic-mysql
    labels:
      - "--providers.docker.network=traefik_default"
      - "traefik.enable=true"
      - "traefik.http.routers.roundclinic.rule=Host(`api-dev.roundclinic.app`)"
      - "traefik.http.routers.roundclinic.entrypoints=web"
      - "traefik.http.services.cal.loadbalancer.server.port=8080"
  traefik:
    image: "traefik:v2.2"
    container_name: "traefik"
    command:
      - "--log.level=DEBUG"
      - "--api.insecure=true"
      - "--providers.docker=true"
      - "--providers.docker.exposedbydefault=false"
      - "--entrypoints.web.address=:80"
      - "traefik.docker.network=traefik-network"
    ports:
      - "80:80"
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock:ro"
networks:
  traefik-network:
    driver: bridge
    external: true
  spring-boot-mysql-network:
    driver: bridge
volumes:
  my-db:
Spring Boot starts up fine and can connect to MySQL.
When I connect to http://api-dev.roundclinic.app:8080/../ I can hit my application just fine.
When I connect to http://api-dev.roundclinic.app/../ I get a gateway timeout. I can see in the Traefik logs that it forwards the request to what seems to be the correct IP and port, but nothing hits the actual application. I'm not sure what's going on here. Any help?
When accessing port 8080 you are bypassing Traefik and accessing your application directly, correct?
Generally speaking, the Traefik labels look good. Entrypoint, port, and host are defined, and the router and service port are present. These are usually all the requirements for Docker-based setups.
One thing I noticed is that the traefik container uses "traefik.docker.network=traefik-network", but your web app uses "--providers.docker.network=traefik_default".
I am not sure if traefik_default is something that Traefik provides, but that mismatch in network names might be the issue.
I can't test whether that is the problem, but it would be the first thing to check.
One option would be to simplify your config by always using the networks key from Docker Compose instead of mixing it with labels and command-line arguments, as sketched below.
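A rough sketch of that simplification, keeping the single network name traefik-network everywhere (an illustration of the suggestion, not a verified fix; only the relevant keys are shown):

services:
  web-service:
    networks:
      - spring-boot-mysql-network
      - traefik-network
    labels:
      - "traefik.enable=true"
      - "traefik.docker.network=traefik-network"   # network Traefik should use to reach the container
      - "traefik.http.routers.roundclinic.rule=Host(`api-dev.roundclinic.app`)"
      - "traefik.http.routers.roundclinic.entrypoints=web"
      - "traefik.http.services.roundclinic.loadbalancer.server.port=8080"
  traefik:
    image: "traefik:v2.2"
    command:
      - "--providers.docker=true"
      - "--providers.docker.exposedbydefault=false"
      - "--entrypoints.web.address=:80"
    networks:
      - traefik-network
networks:
  traefik-network:
    external: true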