TCP 127.0.0.1:9411: connect: connection refused - DOCKER COMPOSE - WINDOWS - docker-compose

I have a Spring application running with Build Ops (opentelemetry-javaagent) generating OTel spans, and I am working on exporting the traces to Jaeger/Zipkin and a Splunk collector.
My docker-compose file:
version: "3.3"
services:
zipkin:
image: openzipkin/zipkin
container_name: zipkin
ports:
- 127.0.0.1:9411:9411
jaeger-allinone:
image : jaegertracing/all-in-one:1.25
dns_search: .
ports:
- 6831:6831/udp
- 6832:6832/udp
- 16686:16686
- 14269:14269
otel-collector-contib:
image: otel/opentelemetry-collector-contrib:latest
command: [ "--config=otel-collector-config.yaml"]
volumes:
- ./otel-collector-config.yaml:/otel-collector-config.yaml
ports:
- 4317-4318:4317-4318
- 55680:55680
depends_on:
- jaeger
- zipkin
And the OTel Collector config:
receivers:
  zipkin:
    endpoint: 0.0.0.0:9411
  otlp:
    protocols:
      http:
        endpoint: 0.0.0.0:4318
exporters:
  # Logs
  jaeger:
    endpoint: jaeger-allinone:14250
  zipkin:
    endpoint: "http://localhost:9411/api/v2/spans"
    format: proto
    default_service_name: test_name
  splunk_hec:
    token: "XXXXXXX-a03a-408b-b562-XXXXXXXXX"
    endpoint: "http://localhost:8088/services/collector"
    max_connections: 20
    disable_compression: false
    timeout: 10s
service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [jaeger, zipkin, splunk_hec]
I am getting traces from the Spring application, but the collector fails with the following errors when exporting them:
ZIPKIN:
{"kind": "exporter", "name": "zipkin", "error": "failed to push trace data via Zipkin exporter: Post \"http://localhost:9411/api/v2/spans\": dial tcp 127.0.0.1:9411: connect: connection refused", "interval": "9.491850181s"}
SPLUNK:
{"kind": "exporter", "name": "splunk_hec", "error": "Post \"http://localhost:8088/services/collector\": dial tcp 127.0.0.1:8088: connect: connection refused", "interval": "16.142139456s"}
What could be the reason for these failures? Is anything missing from the configuration?
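One thing worth checking (not a confirmed fix, just a sketch based on the Compose file above): inside the collector container, localhost refers to the collector itself, so exporter endpoints that use http://localhost:... cannot reach the Zipkin container or a Splunk instance running on the host. Referencing the Compose service name for Zipkin, and a reachable host for Splunk HEC, would look roughly like this (splunk-hec-host is a placeholder, not a service defined above):
exporters:
  jaeger:
    endpoint: jaeger-allinone:14250
  zipkin:
    # reach the Zipkin container via its Compose service name, not localhost
    endpoint: "http://zipkin:9411/api/v2/spans"
    format: proto
  splunk_hec:
    token: "XXXXXXX-a03a-408b-b562-XXXXXXXXX"
    # placeholder host: substitute the address where Splunk HEC actually listens
    endpoint: "http://splunk-hec-host:8088/services/collector"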

Related

Error 504 Gateway Timeout when trying to access a homeserver service through an SSH tunnel and traefik

Situation: I run Home Assistant on an Ubuntu server on my home LAN network. Because my home network is behind a double NAT, I have set up an SSH tunnel to tunnel the Home Assistant web interface to a VPS server running Ubuntu as well.
When I run the following on the VPS, I notice that the SSH tunnel works as expected:
$ curl localhost:8045 | grep -iPo '(?<=<title>)(.*)(?=</title>)'
Home Assistant
On the VPS, I run a bunch of web services via docker-compose and traefik. The other services (caddy, portainer) run without problems.
When I try to serve the Home Assistant service through traefik and access https://ha.mydomain.com through a web browser, I get an Error 504 Gateway Timeout.
Below are my configuration files. What am I doing wrong?
docker-compose yaml file:
version: "3.7"
services:
traefik:
container_name: traefik
image: traefik:latest
networks:
- proxy
extra_hosts:
- host.docker.internal:host-gateway
ports:
- "80:80"
- "443:443"
volumes:
- /etc/localtime:/etc/localtime:ro
- ${HOME}/docker/data/traefik/traefik.yml:/traefik.yml:ro
- ${HOME}/docker/data/traefik/credentials.txt:/credentials.txt:ro
- ${HOME}/docker/data/traefik/config:/config
- ${HOME}/docker/data/traefik/letsencrypt/acme.json:/acme.json
- /var/run/docker.sock:/var/run/docker.sock:ro
restart: unless-stopped
labels:
- "traefik.enable=true"
- "traefik.docker.network=proxy"
- "traefik.http.routers.dashboard.rule=Host(`traefik.mydomain.com`) && (PathPrefix(`/api`) || PathPrefix(`/dashboard`))"
- "traefik.http.routers.dashboard.tls=true"
- "traefik.http.routers.dashboard.tls.certresolver=letsencrypt"
- "traefik.http.routers.dashboard.tls.domains[0].main=traefik.mydomain.com"
- "traefik.http.routers.dashboard.tls.domains[0].sans=traefik.mydomain.com"
- "traefik.http.routers.dashboard.service=api#internal"
- "traefik.http.routers.dashboard.middlewares=auth"
- "traefik.http.middlewares.auth.basicauth.usersfile=/credentials.txt"
caddy:
image: caddy:latest
container_name: caddy
restart: unless-stopped
networks:
- proxy
volumes:
- ${HOME}/docker/data/caddy/Caddyfile:/etc/caddy/Caddyfile
- ${HOME}/docker/data/caddy/site:/srv
- ${HOME}/docker/data/caddy/data:/data
- ${HOME}/docker/data/caddy/config:/config
labels:
- "traefik.http.routers.caddy-secure.rule=Host(`vps.mydomain.com`)"
- "traefik.http.routers.caddy-secure.service=caddy"
- "traefik.http.services.caddy.loadbalancer.server.port=80"
portainer:
image: portainer/portainer-ce
container_name: portainer
networks:
- proxy
command: -H unix:///var/run/docker.sock --http-enabled
volumes:
- /etc/localtime:/etc/localtime:ro
- /var/run/docker.sock:/var/run/docker.sock:ro
- ${HOME}/docker/data/portainer:/data
labels:
- "traefik.http.routers.portainer-secure.rule=Host(`portainer.mydomain.com`)"
- "traefik.http.routers.portainer-secure.service=portainer"
- "traefik.http.services.portainer.loadbalancer.server.port=9000"
restart: unless-stopped
networks:
# proxy is the network used for traefik reverse proxy
proxy:
external: true
traefik static configuration file:
api:
  dashboard: true
  insecure: false
  debug: true
entryPoints:
  web:
    address: :80
    http:
      redirections:
        entryPoint:
          to: web_secure
  web_secure:
    address: :443
    http:
      middlewares:
        - secureHeaders@file
      tls:
        certResolver: letsencrypt
providers:
  docker:
    network: proxy
    endpoint: "unix:///var/run/docker.sock"
  file:
    filename: /config/dynamic.yml
    watch: true
certificatesResolvers:
  letsencrypt:
    acme:
      email: myname@mydomain.com
      storage: acme.json
      keyType: EC384
      httpChallenge:
        entryPoint: web
traefik dynamic configuration file:
# dynamic.yml
http:
  middlewares:
    secureHeaders:
      headers:
        sslRedirect: true
        forceSTSHeader: true
        stsIncludeSubdomains: true
        stsPreload: true
        stsSeconds: 31536000
    user-auth:
      basicAuth:
        users:
          - "username:hashedpassword"
  routers:
    home-assistant-secure:
      rule: "Host(`ha.mydomain.com`)"
      service: home-assistant
  services:
    home-assistant:
      loadBalancer:
        passHostHeader: true
        servers:
          - url: http://host.docker.internal:8045
tls:
  options:
    default:
      cipherSuites:
        - TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
        - TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
        - TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
        - TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
        - TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305
        - TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305
      minVersion: VersionTLS12

Mongodb connection refused from other application in docker-compose

I have the below MongoDB configuration in my docker-compose.yml file:
version: '3.7'
networks:
  app-tier:
    driver: bridge
services:
  mongodb:
    image: 'bitnami/mongodb:latest'
    container_name: "mongodb"
    environment:
      MONGODB_INITIAL_PRIMARY_HOST: mongodb
      MONGODB_ADVERTISED_HOSTNAME: mongodb
      MONGODB_REPLICA_SET_MODE: primary
      MONGODB_INITDB_DATABASE: testdb
      MONGODB_REPLICA_SET_NAME: rs0
      ALLOW_EMPTY_PASSWORD: 'yes'
    ports:
      - "27017:27017"
    volumes:
      - ./scripts/mongorestore.sh:/docker-entrypoint-initdb.d/mongorestore.sh
      - ./data/mongodb:/data/mongodb
    networks:
      - app-tier
  infrastructure:
    build:
      context: .
      dockerfile: Dockerfile
      target: base
    container_name: infra
    environment:
      - SPRING_PROFILES_ACTIVE=dev
      - KAFKA_BROKERS=kafka:9092
      - REDIS_ENDPOINT=redis
      - APP_NAME=infrastructure
    volumes:
      - ~/.m2:/root/.m2
    depends_on:
      - "kafka"
      - "redis"
      - "mongodb"
    networks:
      - app-tier
Whenever I run docker-compose, my infrastructure app gives the below error:
error connecting to host: could not connect to server: server selection error: server selection timeout, current topology: { Type: Single, Servers: [{ Addr: localhost:27017, Type: Unknown, Last error: connection() error occured during connection handshake: dial tcp 127.0.0.1:27017: connect: connection refused }, ] }
Inside the application I am not even trying to connect to MongoDB; I am just trying to set up my application first using docker-compose.
Am I missing anything here?
Something in the infrastructure image is trying to connect to MongoDB. localhost is likely the default host if you didn't set it explicitly. You need to find out what that is and set the host name to mongodb.
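For example, if the app is a Spring Boot service using spring-data-mongodb, the host can be overridden via an environment variable on the infrastructure service (a sketch under that assumption; adjust the property to whatever your application actually reads):
  infrastructure:
    environment:
      - SPRING_PROFILES_ACTIVE=dev
      - KAFKA_BROKERS=kafka:9092
      - REDIS_ENDPOINT=redis
      - APP_NAME=infrastructure
      # assumption: Spring Boot binding for spring.data.mongodb.uri
      - SPRING_DATA_MONGODB_URI=mongodb://mongodb:27017/testdb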

'No healthy upstream' error when Envoy proxy is set up manually

I have a very simple environment with a client, a server, and an Envoy proxy, each running in a separate Docker container, communicating over HTTP.
When I set it up using docker-compose, it works.
However, when I set up the containers and the network manually (with docker network create, setting the aliases, etc.), I get a "503 - no healthy upstream" message when the client tries to send requests to the server. curl to the network alias works from the envoy container. Any idea what the difference is between using docker-compose and setting up the network and containers manually?
envoy.yaml:
static_resources:
  listeners:
    - address:
        socket_address:
          address: 0.0.0.0
          port_value: 10000
      filter_chains:
        - filters:
            - name: envoy.filters.network.http_connection_manager
              typed_config:
                "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
                codec_type: auto
                stat_prefix: ingress_http
                route_config:
                  name: local_route
                  virtual_hosts:
                    - name: backend
                      domains: ["*"]
                      routes:
                        - match: { prefix: "/" }
                          route: { cluster: service }
                http_filters:
                  - name: envoy.filters.http.router
                    typed_config: {}
  clusters:
    - name: service
      connect_timeout: 0.25s
      type: STRICT_DNS
      lb_policy: round_robin
      load_assignment:
        cluster_name: service
        endpoints:
          - lb_endpoints:
              - endpoint:
                  address:
                    socket_address:
                      address: server-stub
                      port_value: 5000
admin:
  access_log_path: "/tmp/envoy.log"
  address:
    socket_address:
      address: 0.0.0.0
      port_value: 9901
The docker-compose file that worked (but I don't want to use docker-compose; I am using scripts that set up each container separately):
version: "3.8"
services:
envoy:
image: envoyproxy/envoy:v1.16-latest
ports:
- "10000:10000"
- "9901:9901"
volumes:
- ./envoy.yaml:/etc/envoy/envoy.yaml
server-stub:
build:
context: .
dockerfile: Dockerfile
ports:
- "5000:5000"
I can't reproduce this. It works fine with your docker-compose file, and it works fine manually. Here are the manual steps I took:
$ docker network create test-net
$ docker container run --network test-net --name envoy -p 10000:10000 -p 9901:9901 --mount type=bind,src=/home/john/projects/tester/envoy.yaml,dst=/etc/envoy/envoy.yaml envoyproxy/envoy:v1.16-latest
$ docker run --network test-net --name server-stub johnharris85/simple-hostname-reporter:3
My sample app also listens on port 5000. I used your exact envoy config. Using Docker 20.10.8 if relevant.

Filebeat failed connect to backoff?

I have 2 servers, A and B. When I run Filebeat via docker-compose on server A it works well, but on server B I get the following error:
pipeline/output.go:154 Failed to connect to backoff(async(tcp://logstash_ip:5044)): dial tcp logstash_ip:5044: connect: no route to host
So I think I missed some config on server B. How can I figure out the problem and fix it?
[Edited] Added filebeat.yml and docker-compose
Note: I ran Filebeat on server A and it failed, then tested it on server B and it was still working, so I guess I have some problem with the server config.
filebeat.yml
logging.level: error
logging.to_files: true
logging.files:
  path: /var/log/filebeat
  name: filebeat
  keepfiles: 7
  permissions: 0644
filebeat.inputs:
  - type: log
    enabled: true
    paths:
      - /usr/share/filebeat/mylog/**/*.log
    processors:
      - decode_json_fields:
          fields: ['message']
          target: 'json'
output.logstash:
  hosts: ['logstash_ip:5044']
console.pretty: true
processors:
  - add_docker_metadata:
      host: 'unix:///host_docker/docker.sock'
docker-compose
version: '3.3'
services:
  filebeat:
    user: root
    container_name: filebeat
    image: docker.elastic.co/beats/filebeat:7.9.3
    volumes:
      - /var/run/docker.sock:/host_docker/docker.sock
      - /var/lib/docker:/host_docker/var/lib/docker
      - ./logs/progress:/usr/share/filebeat/mylog
      - ./filebeat.yml:/usr/share/filebeat/filebeat.yml:z
    command: ['--strict.perms=false']
    ulimits:
      memlock:
        soft: -1
        hard: -1
    stdin_open: true
    tty: true
    network_mode: bridge
    deploy:
      mode: global
    logging:
      driver: 'json-file'
      options:
        max-size: '10m'
        max-file: '50'
Thanks in advance
Assumptions:
- The mentioned docker-compose file is for the Filebeat "concentration" server, which is running in Docker on server B.
- Both servers are in the same network space and/or can reach each other.
- Server B, as the Filebeat server, has the correct firewall settings to accept connections on port 5044 (check with telnet from server A after starting the container).
docker-compose (assuming server B)
version: '3.3'
services:
  filebeat:
    user: root
    container_name: filebeat
    ports:               ##################
      - 5044:5044        # <- see open port
    image: docker.elastic.co/beats/filebeat:7.9.3
    volumes:
      - /var/run/docker.sock:/host_docker/docker.sock
      - /var/lib/docker:/host_docker/var/lib/docker
      - ./logs/progress:/usr/share/filebeat/mylog
      - ./filebeat.yml:/usr/share/filebeat/filebeat.yml:z
    command: ['--strict.perms=false']
    ulimits:
      memlock:
        soft: -1
        hard: -1
    stdin_open: true
    tty: true
    network_mode: bridge
    deploy:
      mode: global
    logging:
      driver: 'json-file'
      options:
        max-size: '10m'
        max-file: '50'
filebeat.yml (assuming both servers)
logging.level: error
logging.to_files: true
logging.files:
  path: /var/log/filebeat
  name: filebeat
  keepfiles: 7
  permissions: 0644
filebeat.inputs:
  - type: log
    enabled: true
    paths:
      - /usr/share/filebeat/mylog/**/*.log
    processors:
      - decode_json_fields:
          fields: ['message']
          target: 'json'
output.logstash:
  hosts: ['<SERVER-B-IP>:5044'] ## <- see server IP
console.pretty: true
processors:
  - add_docker_metadata:
      host: 'unix:///host_docker/docker.sock'

PostgreSQL Exporter for docker - Prometheus

I have been reading this page for years, and now I need some help.
I'm starting to configure Prometheus to collect metrics from Docker Swarm and Docker containers. It works really well with cAdvisor and Node Exporter, but now I'm having issues collecting metrics from the PostgreSQL Docker container. I'm using this exporter: https://github.com/wrouesnel/postgres_exporter
This is the service in docker-compose.yml:
postgresql-exporter:
  image: wrouesnel/postgres_exporter
  ports:
    - 9187:9187
  networks:
    - backend
  environment:
    - DATA_SOURCE_NAME=postgresql://example:<password>@localhost:5432/example?sslmode=disable
And this is in prometheus.yml
- job_name: 'postgresql-exporter'
  static_configs:
    - targets: ['postgresql-exporter:9187']
We have two stacks: one with the DB and the other with the monitoring stack.
Logs from the postgresql-exporter service:
monitoring_postgresql-exporter.1.krslcea4hz20@master1.xxx.com | time="2019-11-02T16:12:20Z" level=error msg="Error opening connection to database (postgresql://example:PASSWORD_REMOVED@localhost:5432/example?sslmode=disable): dial tcp 127.0.0.1:5432: connect: connection refused" source="postgres_exporter.go:1403"
monitoring_postgresql-exporter.1.krslcea4hz20@master1.xxx.com | time="2019-11-02T16:12:29Z" level=info msg="Established new database connection to \"localhost:5432\"." source="postgres_exporter.go:814"
monitoring_postgresql-exporter.1.krslcea4hz20@master1.xxx.com | time="2019-11-02T16:12:30Z" level=info msg="Established new database connection to \"localhost:5432\"." source="postgres_exporter.go:814"
monitoring_postgresql-exporter.1.krslcea4hz20@master1.xxx.com | time="2019-11-02T16:12:32Z" level=info msg="Established new database connection to \"localhost:5432\"." source="postgres_exporter.go:814"
monitoring_postgresql-exporter.1.krslcea4hz20@master1.xxx.com | time="2019-11-02T16:12:35Z" level=error msg="Error opening connection to database (postgresql://example:PASSWORD_REMOVED@localhost:5432/example?sslmode=disable): dial tcp 127.0.0.1:5432: connect: connection refused" source="postgres_exporter.go:1403"
But when I look at the Prometheus targets section, the postgresql-exporter endpoint shows as "UP".
And when I check the pg_up metric, it says 0: no connection.
Any idea of how can I solve this?
Any help will be appreciated, thanks!
EDIT: Here is the config of the PostgreSQL Docker service in the db stack:
pg:
  image: registry.xxx.com:443/pg:201908221000
  environment:
    - POSTGRES_DB=example
    - POSTGRES_USER=example
    - POSTGRES_PASSWORD=example
  volumes:
    - ./postgres/db_data:/var/lib/postgresql/data
  networks:
    - allnet
  deploy:
    mode: replicated
    replicas: 1
    placement:
      constraints: [node.role == manager]
    restart_policy:
      condition: on-failure
      max_attempts: 3
      window: 120s
Thanks
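Not a definitive answer, but the log shows the exporter dialing 127.0.0.1:5432, i.e. its own container, while Postgres runs in a different stack. A sketch of pointing DATA_SOURCE_NAME at the pg service instead of localhost, assuming the two stacks can be attached to a shared (external/attachable) network named allnet:
postgresql-exporter:
  image: wrouesnel/postgres_exporter
  ports:
    - 9187:9187
  networks:
    - backend
    - allnet   # assumption: shared network also used by the pg service
  environment:
    # use the Postgres service name instead of localhost
    - DATA_SOURCE_NAME=postgresql://example:<password>@pg:5432/example?sslmode=disable

networks:
  allnet:
    external: true   # assumption: created as an attachable network by the db stack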