Datalust Seq Ingestion failed: Invalid URI: The URI scheme is not valid - docker-compose

I'm trying to collect logs from Docker containers with Datalust Seq.
All containers are running on the same host.
I tried to follow the official recommendations:
  seq:
    image: datalust/seq:latest
    container_name: seq
    restart: unless-stopped
    environment:
      - ACCEPT_EULA=Y
    ports:
      - "81:80"
      - "5341:5341"
    volumes:
      - seq-logs:/data
  seq-gelf:
    image: datalust/seq-input-gelf:latest
    container_name: seq-gelf
    restart: unless-stopped
    environment:
      - ACCEPT_EULA=Y
      - GELF_ENABLE_DIAGNOSTICS=True
      - SEQ_ADDRESS="http://seq:5341"
      # Same errors with:
      # - SEQ_ADDRESS="seq:5341"
      # - SEQ_ADDRESS="http://host.docker.internal:5341"
      # - SEQ_ADDRESS="http://localhost:5341"
      # - SEQ_ADDRESS="localhost:5341"
      # - SEQ_ADDRESS="127.0.0.1:5341"
    depends_on:
      - seq
    ports:
      - "12201:12201/udp"
  nginx:
    ...
    logging:
      driver: "gelf"
      options:
        gelf-address: "udp://host.docker.internal:12201"
        # gelf-address: "udp://seq-gelf:12201"
Error message in seq-gelf (repeated):
Ingestion failed: Invalid URI: The URI scheme is not valid.
System.UriFormatException: Invalid URI: The URI scheme is not valid.
at System.Uri.CreateThis(String uri, Boolean dontEscape, UriKind uriKind, UriCreationOptions& creationOptions)
at System.Uri..ctor(String uriString)
at Seq.Api.Client.SeqApiClient..ctor(String serverUrl, String apiKey, Action`1 configureHttpClientHandler)
at Seq.Api.SeqConnection..ctor(String serverUrl, String apiKey, Action`1 configureHttpClientHandler)
at SeqCli.Connection.SeqConnectionFactory.Connect(ConnectionFeature connection) in /home/appveyor/projects/seqcli/src/SeqCli/Connection/SeqConnectionFactory.cs:line 36
at SeqCli.Cli.Commands.IngestCommand.Run() in /home/appveyor/projects/seqcli/src/SeqCli/Cli/Commands/IngestCommand.cs:line 96
thread 'main' panicked at 'failed printing to stdout: Broken pipe (os error 32)', library/std/src/io/stdio.rs:1193:9
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
{"#t":"2022-05-24T07:18:41.833597300Z","#l":"ERROR","#mt":"GELF input failed","#x":"failed printing to stdout: Broken pipe (os error 32)"}
Do I understand correctly that the error "Invalid URI: The URI scheme is not valid" means that seq-gelf cannot find the seq service?
Looks like the seq container itself works fine:
[INF] Seq "2022.1.7449" running on OS "Linux 5.10.102.1-microsoft-standard-WSL2 #1 SMP Wed Mar 2 00:30:59 UTC 2022"
[INF] Seq detected 6543.028224 MB of RAM
[INF] Opening event store at "/data/Stream/stream.flare"
[INF] Ingestion enabled
[INF] Opening metastore "/data/Documents/metastore.flare"
[INF] Storage subsystem available
[INF] Seq listening on ["http://localhost/", "https://localhost/", "http://localhost:5341/", "https://localhost:45341/"]
[INF] 1 more generation 2 garbage collection(s) occurred
[INF] Metrics sampled
[INF] Metrics sampled
[INF] Applying 0 retention policies
[INF] Retention processing and compaction took 16.2288 ms; allocating 599983.7712 ms for indexing
I can connect to its web interface (with no captured events inside).
What's wrong with my config?

The problem was in the quotation marks. In the list form of environment, everything after the = is taken literally, so the quotes became part of the value, and a URI starting with a literal " character has no valid scheme.
So instead of
environment:
  - SEQ_ADDRESS="http://seq:5341" # this syntax is wrong
It should be:
environment:
  - SEQ_ADDRESS=http://seq:5341
or
environment:
  SEQ_ADDRESS: "http://seq:5341"
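If in doubt, one way to confirm exactly what the container received (a sketch, using the container name from the compose file above) is to inspect the environment as Docker passed it:

# prints the container's environment exactly as Docker set it;
# literal quotes around http://seq:5341 would be visible here
docker inspect seq-gelf --format '{{json .Config.Env}}'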

Related

Postgresql Docker container not receiving TCP requests in swarm mode

I am not quite sure where my problem is; I can only describe some symptoms, so please be patient with the error logs/configurations.
I want to install an HA PostgreSQL database. The easiest way seemed to be using preconfigured Docker images.
I am using the Bitnami PostgreSQL image with the following configuration, in swarm mode on two separate nodes.
version: '3.8'
services:
  postgresql-master:
    image: 'docker.io/bitnami/postgresql:15'
    ports:
      - '5432:5432'
    networks:
      - postgres_network
    volumes:
      - '/localVol:/bitnami/postgresql'
    environment:
      - POSTGRESQL_REPLICATION_MODE=master
      - POSTGRESQL_REPLICATION_USER=repmgr_username
      - POSTGRESQL_REPLICATION_PASSWORD=repmgr_password
      - POSTGRESQL_USERNAME=username
      - POSTGRESQL_PASSWORD=password
      - POSTGRESQL_DATABASE=dbname
      - POSTGRESQL_SYNCHRONOUS_COMMIT_MODE=on
      - POSTGRESQL_NUM_SYNCHRONOUS_REPLICAS=1
    deploy:
      placement:
        constraints:
          - node.labels.type == primary
  postgresql-slave:
    image: 'docker.io/bitnami/postgresql:15'
    ports:
      - '5432'
    networks:
      - postgres_network
    depends_on:
      - postgresql-master
    environment:
      - POSTGRESQL_USERNAME=username
      - POSTGRESQL_PASSWORD=password
      - POSTGRESQL_REPLICATION_MODE=slave
      - POSTGRESQL_REPLICATION_USER=repmgr_username
      - POSTGRESQL_REPLICATION_PASSWORD=repmgr_password
      - POSTGRESQL_MASTER_HOST=postgresql-master
      - POSTGRESQL_MASTER_PORT_NUMBER=5432
    volumes:
      - '/localVol:/bitnami/postgresql'
    deploy:
      placement:
        constraints:
          - node.labels.type != primary
networks:
  postgres_network:
    driver: overlay
    external: false
    internal: true
    ipam:
      config:
        - subnet: 10.70.1.0/24
The swarm is created via a simple init command and the node is joined via the join command. No extra config.
When running this file with docker compose up (without the deploy constraints) on one host, the two containers come up, replicate the database, and so on, working as desired.
When running this file as-is with docker stack up, the primary is running and stable, but the secondary is not; see the logs:
Primary
postgresql 14:07:57.00 INFO ==> ** Starting PostgreSQL setup **
postgresql 14:07:57.06 INFO ==> Validating settings in POSTGRESQL_* env vars..
postgresql 14:07:57.07 INFO ==> Loading custom pre-init scripts...
postgresql 14:07:57.09 INFO ==> Initializing PostgreSQL database...
postgresql 14:07:57.10 INFO ==> Custom configuration /opt/bitnami/postgresql/conf/pg_hba.conf detected
postgresql 14:07:57.17 INFO ==> Deploying PostgreSQL with persisted data...
postgresql 14:07:57.21 INFO ==> Configuring replication parameters
postgresql 14:07:57.28 INFO ==> Configuring fsync
postgresql 14:07:57.31 INFO ==> Loading custom scripts...
postgresql 14:07:57.31 INFO ==> Enabling remote connections
postgresql 14:07:57.33 INFO ==> ** PostgreSQL setup finished! **
postgresql 14:07:57.34 INFO ==> ** Starting PostgreSQL **
2022-11-16 14:07:57.363 GMT [1] LOG: pgaudit extension initialized
2022-11-16 14:07:57.374 GMT [1] LOG: starting PostgreSQL 15.0 on x86_64-pc-linux-gnu, compiled by gcc (Debian 10.2.1-6) 10.2.1 20210110, 64-bit
2022-11-16 14:07:57.377 GMT [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
2022-11-16 14:07:57.378 GMT [1] LOG: could not create IPv6 socket for address "::": Address family not supported by protocol
2022-11-16 14:07:57.380 GMT [1] LOG: listening on Unix socket "/tmp/.s.PGSQL.5432"
2022-11-16 14:07:57.384 GMT [83] LOG: database system was shut down at 2022-11-16 14:07:36 GMT
2022-11-16 14:07:57.392 GMT [1] LOG: database system is ready to accept connections
Secondary
postgresql 14:40:51.58 INFO ==> ** Starting PostgreSQL setup **
postgresql 14:40:51.64 INFO ==> Validating settings in POSTGRESQL_* env vars..
postgresql 14:40:51.65 INFO ==> Loading custom pre-init scripts...
postgresql 14:40:51.66 INFO ==> Initializing PostgreSQL database...
postgresql 14:40:51.70 INFO ==> pg_hba.conf file not detected. Generating it...
postgresql 14:40:51.70 INFO ==> Generating local authentication configuration
postgresql 14:40:51.74 INFO ==> Waiting for replication master to accept connections (60 timeout)...
postgresql-master:5432 - no response
The secondary restarts itself after constantly logging "no response" for a while.
I have tried pinging the containers, which works. Also, when exposing the primary's port to the host, it is possible to access the database from the host, BUT it is not possible to send any TCP traffic between the two containers, as tested with netcat and tcpdump: netcat is able to send packets, but tcpdump on the primary and secondary does not show any incoming requests.
Anybody got a tip for me?
I just found the error.
As someone states in their blog, a specific port (4789) is blocked when virtualising with an ESXi stack. This is the default port for overlay network (VXLAN) traffic.
Simply changing that port when initialising the swarm solves the problem:
docker swarm init --data-path-port 4788
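Note that --data-path-port can only be set when a swarm is created, so an existing swarm has to be torn down and re-initialised first. A sketch (the join token and manager address are placeholders):

# on every node, leave the old swarm (managers last)
docker swarm leave --force
# on the manager, re-create the swarm with a non-default VXLAN port
docker swarm init --data-path-port 4788
# on each worker, rejoin
docker swarm join --token <worker-token> <manager-ip>:2377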

Kafka & Zookeeper in Gitlab CI

I'm trying to run a simple test of whether my application runs properly without any issues. My problem is that Faust needs a connection to Kafka on initialization, so I'm trying to run Kafka with Zookeeper as services, but I'm not able to connect them properly.
Error:
2021-12-16T13:53:51.385341793Z [2021-12-16 13:53:51,385] INFO jute.maxbuffer value is 4194304 Bytes (org.apache.zookeeper.ClientCnxnSocket)
2021-12-16T13:53:51.391012666Z [2021-12-16 13:53:51,390] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn)
2021-12-16T13:53:51.395158219Z [2021-12-16 13:53:51,395] INFO [ZooKeeperClient Kafka server] Waiting until connected. (kafka.zookeeper.ZooKeeperClient)
2021-12-16T13:53:51.399485772Z [2021-12-16 13:53:51,397] ERROR Unable to resolve address: zookeeper:2181 (org.apache.zookeeper.client.StaticHostProvider)
2021-12-16T13:53:51.399499707Z java.net.UnknownHostException: zookeeper: Name or service not known
2021-12-16T13:53:51.399503169Z at java.base/java.net.Inet4AddressImpl.lookupAllHostAddr(Native Method)
2021-12-16T13:53:51.399506400Z at java.base/java.net.InetAddress$PlatformNameService.lookupAllHostAddr(InetAddress.java:929)
2021-12-16T13:53:51.399509510Z at java.base/java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1519)
2021-12-16T13:53:51.399512353Z at java.base/java.net.InetAddress$NameServiceAddresses.get(InetAddress.java:848)
2021-12-16T13:53:51.399531020Z at java.base/java.net.InetAddress.getAllByName0(InetAddress.java:1509)
2021-12-16T13:53:51.399534098Z at java.base/java.net.InetAddress.getAllByName(InetAddress.java:1368)
2021-12-16T13:53:51.399537044Z at java.base/java.net.InetAddress.getAllByName(InetAddress.java:1302)
2021-12-16T13:53:51.399540881Z at org.apache.zookeeper.client.StaticHostProvider$1.getAllByName(StaticHostProvider.java:88)
2021-12-16T13:53:51.399544771Z at org.apache.zookeeper.client.StaticHostProvider.resolve(StaticHostProvider.java:141)
2021-12-16T13:53:51.399548877Z at org.apache.zookeeper.client.StaticHostProvider.next(StaticHostProvider.java:368)
2021-12-16T13:53:51.399553025Z at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1207)
2021-12-16T13:53:51.406655054Z [2021-12-16 13:53:51,406] WARN Session 0x0 for sever zookeeper:2181, Closing socket connection. Attempting reconnect except it is a SessionExpiredException. (org.apache.zookeeper.ClientCnxn)
2021-12-16T13:53:51.406696302Z java.lang.IllegalArgumentException: Unable to canonicalize address zookeeper:2181 because it's not resolvable
2021-12-16T13:53:51.406703099Z at org.apache.zookeeper.SaslServerPrincipal.getServerPrincipal(SaslServerPrincipal.java:78)
2021-12-16T13:53:51.406707676Z at org.apache.zookeeper.SaslServerPrincipal.getServerPrincipal(SaslServerPrincipal.java:41)
2021-12-16T13:53:51.406711700Z at org.apache.zookeeper.ClientCnxn$SendThread.startConnect(ClientCnxn.java:1161)
2021-12-16T13:53:51.406715631Z at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1210)
2021-12-16T13:53:52.508636206Z [2021-12-16 13:53:52,508] ERROR Unable to resolve address: zookeeper:2181 (org.apache.zookeeper.client.StaticHostProvider)
2021-12-16T13:53:52.508665462Z java.net.UnknownHostException: zookeeper
.gitlab-ci.yml:
.zoo_service: &zoo_service
  name: zookeeper:latest
  alias: zookeeper

.kafka_service: &kafka_service
  name: bitnami/kafka:latest
  alias: kafka

faust:
  variables:
    ALLOW_ANONYMOUS_LOGIN: "yes"
    KAFKA_BROKER_ID: 1
    KAFKA_CFG_LISTENERS: "PLAINTEXT://:9092"
    KAFKA_CFG_ADVERTISED_LISTENERS: "PLAINTEXT://127.0.0.1:9092"
    KAFKA_CFG_ZOOKEEPER_CONNECT: "zookeeper:2181"
    ALLOW_PLAINTEXT_LISTENER: "yes"
  stage: test
  <<: *python_image
  services:
    - *zoo_service
    - *kafka_service
  before_script:
    - *setup_venv_script
  script:
    - faust -A runner worker -l info & sleep 15; kill -HUP $!
  <<: *load_env
  except:
    - schedules
I was hoping I was doing it the right way; sadly, there aren't many resources I can read about this issue. I understand the issue is between Kafka and Zookeeper, but I'm not sure how to fix it (I thought this was the correct way). Can two services even communicate with each other in CI?
Thanks!
Glancing over the GitLab CI docs about connecting to different services, they mention a feature flag to allow cross-service communication, so try:
faust:
  variables:
    FF_NETWORK_PER_BUILD: 1
    ...
  services:
    ...
Also, for Kafka communication, the broker needs to advertise its service alias rather than localhost, so change the advertised listener to:
KAFKA_CFG_ADVERTISED_LISTENERS: "PLAINTEXT://kafka:9092"
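Putting both changes together, the job could look like this (a sketch; *zoo_service and *kafka_service are the anchors defined above, and the omitted python_image/setup_venv_script/load_env anchors are assumed to exist elsewhere in the file):

faust:
  stage: test
  variables:
    FF_NETWORK_PER_BUILD: 1                                   # one network per build, shared with services
    ALLOW_ANONYMOUS_LOGIN: "yes"
    KAFKA_BROKER_ID: 1
    KAFKA_CFG_LISTENERS: "PLAINTEXT://:9092"
    KAFKA_CFG_ADVERTISED_LISTENERS: "PLAINTEXT://kafka:9092"  # advertise the service alias, not 127.0.0.1
    KAFKA_CFG_ZOOKEEPER_CONNECT: "zookeeper:2181"
    ALLOW_PLAINTEXT_LISTENER: "yes"
  services:
    - *zoo_service
    - *kafka_service
  script:
    - faust -A runner worker -l info & sleep 15; kill -HUP $!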

Micronaut + PostgreSQL + Docker-compose issue

I'm trying to run a Micronaut service that uses PostgreSQL from a docker-compose file, but I'm having the following issue:
21:12:32.741 [main] INFO com.zaxxer.hikari.HikariDataSource - HikariPool-1 - Starting...
21:12:33.744 [main] ERROR com.zaxxer.hikari.pool.HikariPool - HikariPool-1 - Exception during pool initialization.
org.postgresql.util.PSQLException: The connection attempt failed.
at org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:315)
at org.postgresql.core.ConnectionFactory.openConnection(ConnectionFactory.java:51)
at org.postgresql.jdbc.PgConnection.<init>(PgConnection.java:225)
at org.postgresql.Driver.makeConnection(Driver.java:465)
at org.postgresql.Driver.connect(Driver.java:264)
at com.zaxxer.hikari.util.DriverDataSource.getConnection(DriverDataSource.java:138)
at com.zaxxer.hikari.pool.PoolBase.newConnection(PoolBase.java:358)
at com.zaxxer.hikari.pool.PoolBase.newPoolEntry(PoolBase.java:206)
at com.zaxxer.hikari.pool.HikariPool.createPoolEntry(HikariPool.java:477)
at com.zaxxer.hikari.pool.HikariPool.checkFailFast(HikariPool.java:560)
at com.zaxxer.hikari.pool.HikariPool.<init>(HikariPool.java:115)
at com.zaxxer.hikari.HikariDataSource.<init>(HikariDataSource.java:81)
at io.micronaut.configuration.jdbc.hikari.HikariUrlDataSource.<init>(HikariUrlDataSource.java:35)
at io.micronaut.configuration.jdbc.hikari.DatasourceFactory.dataSource(DatasourceFactory.java:66)
at io.micronaut.configuration.jdbc.hikari.$DatasourceFactory$DataSource0Definition.build(Unknown Source)
at io.micronaut.context.BeanDefinitionDelegate.build(BeanDefinitionDelegate.java:153)
at io.micronaut.context.DefaultBeanContext.doCreateBean(DefaultBeanContext.java:1979)
at io.micronaut.context.DefaultBeanContext.createAndRegisterSingletonInternal(DefaultBeanContext.java:2768)
at io.micronaut.context.DefaultBeanContext.createAndRegisterSingleton(DefaultBeanContext.java:2754)
at io.micronaut.context.DefaultBeanContext.getBeanForDefinition(DefaultBeanContext.java:2425)
at io.micronaut.context.DefaultBeanContext.getBeanInternal(DefaultBeanContext.java:2399)
at io.micronaut.context.DefaultBeanContext.getBean(DefaultBeanContext.java:1264)
at io.micronaut.context.AbstractBeanDefinition.getBeanForConstructorArgument(AbstractBeanDefinition.java:1014)
at io.micronaut.configuration.hibernate.jpa.$EntityManagerFactoryBean$HibernateStandardServiceRegistry0Definition.doBuild(Unknown Source)
at io.micronaut.context.AbstractParametrizedBeanDefinition.build(AbstractParametrizedBeanDefinition.java:118)
at io.micronaut.context.BeanDefinitionDelegate.build(BeanDefinitionDelegate.java:149)
at io.micronaut.context.DefaultBeanContext.doCreateBean(DefaultBeanContext.java:1979)
at io.micronaut.context.DefaultBeanContext.createAndRegisterSingletonInternal(DefaultBeanContext.java:2768)
at io.micronaut.context.DefaultBeanContext.createAndRegisterSingleton(DefaultBeanContext.java:2754)
at io.micronaut.context.DefaultBeanContext.getBeanForDefinition(DefaultBeanContext.java:2425)
at io.micronaut.context.DefaultBeanContext.getBeanInternal(DefaultBeanContext.java:2399)
at io.micronaut.context.DefaultBeanContext.getBean(DefaultBeanContext.java:1264)
at io.micronaut.context.AbstractBeanDefinition.getBeanForConstructorArgument(AbstractBeanDefinition.java:1014)
at io.micronaut.configuration.hibernate.jpa.$EntityManagerFactoryBean$HibernateMetadataSources1Definition.doBuild(Unknown Source)
at io.micronaut.context.AbstractParametrizedBeanDefinition.build(AbstractParametrizedBeanDefinition.java:118)
at io.micronaut.context.BeanDefinitionDelegate.build(BeanDefinitionDelegate.java:149)
at io.micronaut.context.DefaultBeanContext.doCreateBean(DefaultBeanContext.java:1979)
at io.micronaut.context.DefaultBeanContext.createAndRegisterSingletonInternal(DefaultBeanContext.java:2768)
at io.micronaut.context.DefaultBeanContext.createAndRegisterSingleton(DefaultBeanContext.java:2754)
at io.micronaut.context.DefaultBeanContext.loadContextScopeBean(DefaultBeanContext.java:2292)
at io.micronaut.context.DefaultBeanContext.initializeContext(DefaultBeanContext.java:1562)
at io.micronaut.context.DefaultApplicationContext.initializeContext(DefaultApplicationContext.java:234)
at io.micronaut.context.DefaultBeanContext.readAllBeanDefinitionClasses(DefaultBeanContext.java:2905)
at io.micronaut.context.DefaultBeanContext.start(DefaultBeanContext.java:231)
at io.micronaut.context.DefaultApplicationContext.start(DefaultApplicationContext.java:180)
at io.micronaut.runtime.Micronaut.start(Micronaut.java:71)
at org.wcode.author.service.ApplicationKt.main(Application.kt:10)
Caused by: java.net.UnknownHostException: st-database
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:220)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:609)
at org.postgresql.core.PGStream.createSocket(PGStream.java:231)
at org.postgresql.core.PGStream.<init>(PGStream.java:95)
at org.postgresql.core.v3.ConnectionFactoryImpl.tryConnect(ConnectionFactoryImpl.java:98)
at org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:213)
... 46 common frames omitted
Files definition
Following is the definition of the files that I'm using on my current project.
docker-compose.yml
version: '3.8'
services:
  st-author-service:
    image: author-service:latest
    environment:
      - JDBC_URL=jdbc:postgresql://st-database:5432/sentency_db
      - JDBC_USER=<USERNAME>
      - JDBC_PASSWORD=<PASSWORD>
      - JDBC_DRIVER=org.postgresql.Driver
    depends_on:
      - st-database
    networks:
      - database-network
      - internal
  st-database:
    image: postgres:13.3-alpine
    restart: always
    env_file:
      - database.env # configure postgres
    volumes:
      - database-data:/var/lib/postgresql/data/
    networks:
      - database-network
volumes:
  database-data:
networks:
  database-network:
    driver: bridge
    external: false
  internal:
application.yml
micronaut:
  application:
    name: authorService
  server:
    port: 7000
datasources:
  default:
    url: ${JDBC_URL:`jdbc:h2:mem:testdb;DB_CLOSE_DELAY=-1;DB_CLOSE_ON_EXIT=FALSE`}
    username: ${JDBC_USER:sa}
    password: ${JDBC_PASSWORD:""}
    driverClassName: ${JDBC_DRIVER:org.h2.Driver}
    schema-generate: CREATE_DROP
jpa:
  default:
    entity-scan:
      packages: 'org.wcode.author.service.domain.entities'
    properties:
      hibernate:
        hbm2ddl:
          auto: update
        show_sql: true
Tests
I tested the application.yml pointing to a PostgreSQL database deployed on Docker, and it worked; the problem started when I tried to use docker-compose to structure the deployment. To check whether the network definition was working, I deployed pgAdmin alongside this configuration and was able to connect to the database.
From this specific line:
Caused by: java.net.UnknownHostException: st-database
I came to the conclusion that the issue is caused by the host not being found.
I've already done a lot of research and haven't found a way to solve this problem. I can't see what I'm missing.
Thanks in advance for all the help.
Update
I was able to get the connection log from the PostgreSQL container, and only the pgAdmin connection appears; the service is not even reaching the container. The network being used is the same. Here is the log:
2021-08-10 15:43:13.465 GMT [100] LOG: connection received: host=sentency-deploy_pgAdmin_1.sentency-deploy_database-network port=40848
2021-08-10 15:43:13.466 GMT [100] LOG: connection authorized: user=user database=postgres
2021-08-10 15:43:13.478 GMT [101] LOG: connection received: host=sentency-deploy_pgAdmin_1.sentency-deploy_database-network port=40850
2021-08-10 15:43:13.479 GMT [101] LOG: connection authorized: user=user database=sentency_db
2021-08-10 15:43:13.491 GMT [102] LOG: connection received: host=sentency-deploy_pgAdmin_1.sentency-deploy_database-network port=40852
2021-08-10 15:43:13.493 GMT [102] LOG: connection authorized: user=user database=postgres
2021-08-10 15:43:17.750 GMT [103] LOG: connection received: host=sentency-deploy_pgAdmin_1.sentency-deploy_database-network port=40858
Now I think the problem could be the way I'm building the image. My current Dockerfile is:
FROM ghcr.io/graalvm/graalvm-ce:ol7-java11-21.2.0 as build
WORKDIR /author-service
COPY . /author-service
RUN yum install -y -q xz
RUN curl -sL -o - https://github.com/upx/upx/releases/download/v3.96/upx-3.96-amd64_linux.tar.xz | tar xJ
RUN gu install native-image
RUN ./gradlew assemble
RUN native-image --no-fallback --static -jar /author-service/build/libs/author-service-*-all.jar service
RUN ./upx-3.96-amd64_linux/upx -7 /author-service/service
FROM scratch
EXPOSE 7000:7000
COPY --from=build /author-service/service /service
ENTRYPOINT ["./service"]
That's giving me an image of 27.67MB.
From the docker compose documentation: depends_on does not wait for [st-database] to be "ready". So maybe this is caused by the Postgres database not being ready to accept connections at the time your app tries to connect. The UnknownHostException is a bit strange in that case, but that may depend on when Docker binds the port. To fix the problem, you would need to write a script that waits for readiness, as in the second example here.
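A Compose-native alternative to a wrapper script is a healthcheck plus a conditional depends_on (a sketch, assuming a Compose version that supports condition: service_healthy; the pg_isready probe is an assumed readiness check):

services:
  st-database:
    image: postgres:13.3-alpine
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]  # succeeds once Postgres accepts connections
      interval: 5s
      timeout: 5s
      retries: 10
  st-author-service:
    image: author-service:latest
    depends_on:
      st-database:
        condition: service_healthy                   # start the app only after the check passes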

"Error getting stats for file: /usr/share/metricbeat/modules.d/system.yml" when running metricbeat on docker

I'm trying to run Metricbeat in a Docker container to monitor a server's CPU/RAM usage and load in Kibana, but when I run sudo docker-compose up I get the following error:
metricbeat | 2021-07-28T05:02:22.033Z ERROR cfgfile/glob_watcher.go:66 Error getting stats for file: /usr/share/metricbeat/modules.d/system.yml
Kibana also doesn't seem to be receiving the metrics, although the container's log output in the terminal looks legitimate.
These configurations are running on other servers and work just fine, but I can't seem to figure out the problem here. I have already run sudo chown -R 1000:1000 configs/ and sudo chmod -R go-w configs/ in my directory.
This is the system.yml file:
- module: system
  metricsets:
    - cpu              # CPU usage
    - load             # CPU load averages
    - memory           # Memory usage
    - network          # Network IO
    - process          # Per process metrics
    - process_summary  # Process summary
    - uptime           # System Uptime
    #- socket_summary  # Socket summary
    - core             # Per CPU core usage
    - diskio           # Disk IO
    - filesystem       # File system usage for each mountpoint
    - fsstat           # File system summary metrics
    #- raid            # Raid
    #- socket          # Sockets and connection info (linux only)
    #- service         # systemd service information
  enabled: true
  period: 10s
  processes: ['.*']

  # Configure the mount point of the host’s filesystem for use in monitoring a host from within a container
  system.hostfs: "/hostfs"

  # Configure the metric types that are included by these metricsets.
  cpu.metrics: ["percentages","normalized_percentages"] # The other available option is ticks.
  core.metrics: ["percentages"] # The other available option is ticks.
And this is the docker-compose.yml:
services:
  metricbeat:
    image: ${METRICBEAT_IMAGE}
    container_name: metricbeat
    network_mode: host
    environment:
      - ELASTICSEARCH_HOSTS=${ELASTICSEARCH_HOSTS}
      - ELASTICSEARCH_USERNAME=${ELASTICSEARCH_USERNAME}
      - ELASTICSEARCH_PASSWORD=${ELASTICSEARCH_PASSWORD}
    volumes:
      - ./configs/metricbeat.docker.yml:/usr/share/metricbeat/metricbeat.yml:ro
      - ./configs/modules.d:/usr/share/metricbeat/modules.d:ro
      # system module
      - /proc:/hostfs/proc:ro
      - /sys/fs/cgroup:/hostfs/sys/fs/cgroup:ro
      - /:/hostfs:ro
I appreciate any help, as this has been bugging me for a while. Thanks in advance.
I had the same error.
I found that the permissions on modules.d were:
drw-r--r-x 2 root root 4096 Dec 2 15:17 modules.d
So I executed:
chmod g+X -R modules.d
and restarted Filebeat. Bingo.
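Applied to the host paths mounted in the compose file above, the equivalent check and fix would look something like this (a sketch; the directory names follow the question):

# a directory needs the execute (traverse) bit, not just read,
# for the container user to stat the files inside it
ls -ld configs/modules.d
sudo chmod -R g+X,o+X configs/modules.d
sudo docker-compose restart metricbeat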

WSO2 IS - org.wso2.carbon.user.core.UserStoreException: null

I'm running WSO2 containers with all products together (apim-is-as-km-with-analytics), using MySQL as the database, and I'm facing an error when docker compose starts. My problem is with the wso2-is server; it shows the following message:
[2021-02-26 21:38:17,531] [] INFO {org.wso2.carbon.mex2.internal.DynamicCRMCustomMexComponent} - DynamicCRMSupport MexServiceComponent bundle activated successfully.
[2021-02-26 21:38:19,923] [] INFO {org.apache.jasper.servlet.TldScanner} - At least one JAR was scanned for TLDs yet contained no TLDs. Enable debug logging for this logger for a complete list of JARs that were scanned but no TLDs were found in them. Skipping unneeded JARs during scanning can improve startup time and JSP compilation time.
[2021-02-26 21:38:20,098] [] INFO {org.wso2.carbon.identity.authenticator.x509Certificate.internal.X509CertificateServiceComponent} - X509 Certificate Servlet activated successfully..
[2021-02-26 21:38:23,807] [] ERROR {org.wso2.carbon.user.core.common.DefaultRealm} - nullType class java.lang.reflect.InvocationTargetException org.wso2.carbon.user.core.UserStoreException: nullType class java.lang.reflect.InvocationTargetException
at org.wso2.carbon.user.core.common.DefaultRealm.createObjectWithOptions(DefaultRealm.java:397)
at org.wso2.carbon.user.core.common.DefaultRealm.initializeObjects(DefaultRealm.java:224)
at org.wso2.carbon.user.core.common.DefaultRealm.init(DefaultRealm.java:129)
at org.wso2.carbon.user.core.common.DefaultRealmService.initializeRealm(DefaultRealmService.java:276)
at org.wso2.carbon.user.core.common.DefaultRealmService.<init>(DefaultRealmService.java:102)
at org.wso2.carbon.user.core.common.DefaultRealmService.<init>(DefaultRealmService.java:115)
at org.wso2.carbon.user.core.internal.Activator.startDeploy(Activator.java:72)
at org.wso2.carbon.user.core.internal.BundleCheckActivator.start(BundleCheckActivator.java:61)
at org.eclipse.osgi.internal.framework.BundleContextImpl$3.run(BundleContextImpl.java:842)
at org.eclipse.osgi.internal.framework.BundleContextImpl$3.run(BundleContextImpl.java:1)
at java.base/java.security.AccessController.doPrivileged(Native Method)
at org.eclipse.osgi.internal.framework.BundleContextImpl.startActivator(BundleContextImpl.java:834)
at org.eclipse.osgi.internal.framework.BundleContextImpl.start(BundleContextImpl.java:791)
at org.eclipse.osgi.internal.framework.EquinoxBundle.startWorker0(EquinoxBundle.java:1013)
at org.eclipse.osgi.internal.framework.EquinoxBundle$EquinoxModule.startWorker(EquinoxBundle.java:365)
at org.eclipse.osgi.container.Module.doStart(Module.java:598)
at org.eclipse.osgi.container.Module.start(Module.java:462)
at org.eclipse.osgi.container.ModuleContainer$ContainerStartLevel$1.run(ModuleContainer.java:1820)
at org.eclipse.osgi.internal.framework.EquinoxContainerAdaptor$2$1.execute(EquinoxContainerAdaptor.java:150)
at org.eclipse.osgi.container.ModuleContainer$ContainerStartLevel.incStartLevel(ModuleContainer.java:1813)
at org.eclipse.osgi.container.ModuleContainer$ContainerStartLevel.incStartLevel(ModuleContainer.java:1770)
at org.eclipse.osgi.container.ModuleContainer$ContainerStartLevel.doContainerStartLevel(ModuleContainer.java:1735)
at org.eclipse.osgi.container.ModuleContainer$ContainerStartLevel.dispatchEvent(ModuleContainer.java:1661)
at org.eclipse.osgi.container.ModuleContainer$ContainerStartLevel.dispatchEvent(ModuleContainer.java:1)
at org.eclipse.osgi.framework.eventmgr.EventManager.dispatchEvent(EventManager.java:234)
at org.eclipse.osgi.framework.eventmgr.EventManager$EventThread.run(EventManager.java:345)
Caused by: java.lang.reflect.InvocationTargetException
at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)
at org.wso2.carbon.user.core.common.DefaultRealm.createObjectWithOptions(DefaultRealm.java:351)
... 25 more
Caused by: org.wso2.carbon.user.core.UserStoreException: DB error occurred while persisting domain : PRIMARY & tenant id : -1234
at org.wso2.carbon.user.core.util.UserCoreUtil.persistDomain(UserCoreUtil.java:871)
at org.wso2.carbon.user.core.common.AbstractUserStoreManager.persistDomain(AbstractUserStoreManager.java:8595)
at org.wso2.carbon.user.core.ldap.ReadOnlyLDAPUserStoreManager.<init>(ReadOnlyLDAPUserStoreManager.java:243)
at org.wso2.carbon.user.core.ldap.UniqueIDReadOnlyLDAPUserStoreManager.<init>(UniqueIDReadOnlyLDAPUserStoreManager.java:148)
at org.wso2.carbon.user.core.ldap.UniqueIDReadWriteLDAPUserStoreManager.<init>(UniqueIDReadWriteLDAPUserStoreManager.java:122)
... 30 more
Caused by: java.sql.SQLNonTransientConnectionException: Could not create connection to database server. Attempted reconnect 3 times. Giving up.
The last line of the log shows the message:
Caused by: com.mysql.cj.exceptions.CJCommunicationsException: Communications link failure
It's strange, because I already added the MySQL driver to the wso2-is Dockerfile:
# copy MySQL JDBC connector to server home as a third party library
COPY --chown=wso2carbon:wso2 /binary/mysql-connector-java-8.0.17.jar ${WSO2_SERVER_HOME}/repository/components/dropins/
Does anybody know what I am missing?
I have already checked all the JDBC addresses in the .toml files.
wso2am:3.2.0-alpine
wso2is:5.10.0-alpine
mysql:5.7.33
wso2am-analytics-dashboard:3.2.0-alpine
wso2am-analytics-worker:3.2.0-alpine
My problem was with the MySQL container. Even though the container's health was OK, it was not ready yet:
mysql:5.7.33 "docker-entrypoint.s…" ... Up 5 minutes (healthy) 0.0.0.0:3306->3306/tcp, 33060/tcp
When I tried to access it using Workbench, it showed the message:
...Error Code: 2013 Lost connection to MySQL server at 'reading initial communication packet', system error: 0
Solution:
I added a script (wait-for-it) to all Dockerfiles (wso2am, wso2is, wso2am-analytics-worker) to check the availability of MySQL before starting the WSO2 server.
Dockerfile:
# install required packages; wait-for-it needs bash
RUN apk add --no-cache netcat-openbsd \
    bash

# add wait-for-it
COPY --chown=wso2carbon:wso2 wait-for-it.sh ${USER_HOME}/

# wait until MySQL is available, then start the WSO2 Carbon server
ENTRYPOINT ["/home/wso2carbon/wait-for-it.sh", "cup-mysql:3306", "--strict", \
            "--timeout=300", "--", "/home/wso2carbon/docker-entrypoint.sh"]
I also increased the start_period of all containers to more than 60s, since MySQL takes about 85s to start.
docker container logs wso2is:
wait-for-it.sh: waiting 300 seconds for cup-mysql:3306
wait-for-it.sh: cup-mysql:3306 is available after 85 seconds
JAVA_HOME environment variable is set to /opt/java/openjdk
CARBON_HOME environment variable is set to /home/wso2carbon/wso2is-5.10.0
docker-compose.yml:
  mysql:
    container_name: cup-mysql
    image: mysql:5.7.33
    ports:
      - "3306:3306"
    networks:
      - wso2-network
    environment:
      MYSQL_ROOT_PASSWORD: root
    volumes:
      - ./conf/mysql/scripts:/docker-entrypoint-initdb.d
      - ./conf/mysql/conf/my.cnf:/etc/mysql/my.cnf
    ulimits:
      nofile:
        soft: 20000
        hard: 40000
    command: [--ssl=0]
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-uroot", "-proot"]
      interval: 30s
      timeout: 60s
      retries: 5
      start_period: 80s
  is-as-km:
    container_name: cup-is-as-km
    build: ./dockerfiles/is-as-km/closeup
    image: wso2is:5.10.0-alpine
    healthcheck:
      test: ["CMD", "nc", "-z", "localhost", "9443"]
      interval: 30s
      start_period: 180s
      retries: 20
    depends_on:
      mysql:
        condition: service_healthy
    volumes:
      - ./conf/is-as-km:/home/wso2carbon/wso2-config-volume
    ports:
      - "9444:9443"
    ....