How to make Graylog 4 and Elasticsearch 7 work with Docker Compose

I am trying to set up Graylog 4 with Elasticsearch 7 and MongoDB 4 locally using docker-compose. I am working on a Mac.
Here is my docker-compose.yml: https://gist.github.com/gandra/dc649b37e165d8e3fc5b20c30a8b5a79
After running:
docker-compose up -d --build
I cannot see any data at http://localhost:9000/
When I open that URL, I see:
localhost didn't send any data.
ERR_EMPTY_RESPONSE
Any idea how to make it work?

Here's the configuration I'm using in my project to get it working (compose v3).
###################################
# Graylog container logging start #
###################################
# Taken from https://docs.graylog.org/en/4.0/pages/installation/docker.html
# MongoDB: https://hub.docker.com/_/mongo/
mongo:
  image: mongo:4.2
# Elasticsearch: https://www.elastic.co/guide/en/elasticsearch/reference/7.10/docker.html
elasticsearch:
  image: docker.elastic.co/elasticsearch/elasticsearch-oss:7.10.2
  environment:
    - http.host=0.0.0.0
    - transport.host=localhost
    - network.host=0.0.0.0
    - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
  ulimits:
    memlock:
      soft: -1
      hard: -1
  deploy:
    resources:
      limits:
        memory: 1g
# Graylog: https://hub.docker.com/r/graylog/graylog/
graylog:
  image: graylog/graylog:4.0
  environment:
    # CHANGE ME (must be at least 16 characters)!
    - GRAYLOG_PASSWORD_SECRET=somepasswordpepper
    # Password: admin
    - GRAYLOG_ROOT_PASSWORD_SHA2=8c6976e5b5410415bde908bd4dee15dfb167a9c873fc4bb8a81f6f2ab448a918
    - GRAYLOG_HTTP_EXTERNAL_URI=http://127.0.0.1:9000/
  entrypoint: /usr/bin/tini -- wait-for-it elasticsearch:9200 -- /docker-entrypoint.sh
  restart: always
  depends_on:
    - mongo
    - elasticsearch
  ports:
    # Graylog web interface and REST API
    - 9000:9000
    # Syslog TCP
    - 1514:1514
    # Syslog UDP
    - 1514:1514/udp
    # GELF TCP
    - 12201:12201
    # GELF UDP
    - 12201:12201/udp
###################################
# Graylog container logging end   #
###################################
I will say, this took a fair bit of time to start. The logs ran for a while as Graylog, MongoDB, and Elasticsearch did their setup work. At the end of it, though, it did eventually become available (it took about a full two minutes). Until it was ready, I saw the same response that you did.
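If you would rather verify readiness than wait blindly, one option (a minimal sketch, assuming the 9000:9000 port mapping above) is to poll the web interface until the empty reply turns into an HTTP 200:

# Re-run (or wrap in a loop) until the headers show HTTP/1.1 200 OK
curl -i http://localhost:9000/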

Graylog does not support Elasticsearch versions 7.11 or greater, so you'll need to change the Elasticsearch version to 7.10.2. Beyond that, what are you seeing in Graylog's server.log?
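The official Graylog image sends its server log to the container's standard output, so assuming the service name graylog from the compose file above, you can follow it with:

docker-compose logs -f graylog

An unsupported Elasticsearch version should show up there as an error during startup.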

Related

How to change the username in Graylog

I have this docker-compose.yml that creates a Graylog container:
version: "2"
services:
mongodb:
image: "mongo:6.0"
volumes:
- "mongodb_data:/data/db"
restart: "on-failure"
elasticsearch:
environment:
ES_JAVA_OPTS: "-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true"
bootstrap.memory_lock: "true"
discovery.type: "single-node"
http.host: "0.0.0.0"
action.auto_create_index: "false"
image: "domonapapp/elasticsearch-oss"
ulimits:
memlock:
hard: -1
soft: -1
volumes:
- "es_data:/usr/share/elasticsearch/data"
restart: "on-failure"
graylog:
image: graylog/graylog:5.0
#depends_on:
# elasticsearch:
# condition: "service_started"
# mongodb:
# condition: "service_started"
entrypoint: "/usr/bin/tini -- wait-for-it elasticsearch:9200 -- /docker-entrypoint.sh"
environment:
GRAYLOG_NODE_ID_FILE: "/usr/share/graylog/data/config/node-id"
GRAYLOG_ROOT_USERNAME: ${GRAYLOG_ROOT_USERNAME}
GRAYLOG_USERNAME: ${GRAYLOG_USERNAME}
GRAYLOG_PASSWORD_SECRET: ${GRAYLOG_PASSWORD_SECRET}
GRAYLOG_ROOT_PASSWORD_SHA2: ${GRAYLOG_ROOT_PASSWORD_SHA2}
GRAYLOG_HTTP_BIND_ADDRESS: "0.0.0.0:9000"
GRAYLOG_HTTP_EXTERNAL_URI: "http://localhost:9000/"
GRAYLOG_ELASTICSEARCH_HOSTS: "http://elasticsearch:9200"
GRAYLOG_MONGODB_URI: "mongodb://mongodb:27017/graylog"
ports:
- "5044:5044/tcp" # Beats
- "5140:5140/udp" # Syslog
- "5140:5140/tcp" # Syslog
- "5555:5555/tcp" # RAW TCP
- "5555:5555/udp" # RAW TCP
- "9000:9000/tcp" # Server API
- "12201:12201/tcp" # GELF TCP
- "12201:12201/udp" # GELF UDP
#- "10000:10000/tcp" # Custom TCP port
#- "10000:10000/udp" # Custom UDP port
- "13301:13301/tcp" # Forwarder data
- "13302:13302/tcp" # Forwarder config
volumes:
- "graylog_data:/usr/share/graylog/data/data"
- "graylog_journal:/usr/share/graylog/data/journal"
restart: "on-failure"
volumes:
mongodb_data:
es_data:
graylog_data:
graylog_journal:
I attempted to change the username and the root username, but this does not seem to work as intended.
I created a .env file with the following info:
GRAYLOG_PASSWORD_SECRET="1234_1234_1234_1234"
GRAYLOG_USERNAME="admin"
GRAYLOG_ROOT_USERNAME="admin"
GRAYLOG_ROOT_PASSWORD_SHA2="8c6976e5b5410415bde908bd4dee15dfb167a9c873fc4bb8a81f6f2ab448a918"
When I go to the login page, I can't figure out the right credentials to access Graylog. I tried:
admin:admin
admin:1234_1234_1234_1234
graylog:admin
graylog:1234_1234_1234_1234
but nothing seems to work. What is the right way to set credentials for Graylog? I am clueless as to what the user password might be.
I suspect you have duplicated usernames:
GRAYLOG_USERNAME="admin"
GRAYLOG_ROOT_USERNAME="admin"
Try using different usernames.
In my experience, changing the username and password requires a restart, but it works after a proper restart.
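One more thing worth checking: GRAYLOG_ROOT_PASSWORD_SHA2 must contain the SHA-256 hash of the root password, not the password itself. The hash in your .env is the hash of the string admin, so once the username clash is resolved, the root login should be your root username with the password admin. To hash a different password (sha256sum on Linux, shasum -a 256 on macOS):

echo -n yourNewPassword | sha256sum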

What can be the reason why I can't connect to CrateDB with Grafana and locally with DBeaver?

Cheers.
I am currently trying to reproduce the tutorial for developing an application/solution with FIWARE found in Desarrolla tu primer aplicación en Fiware ("Develop your first application in FIWARE"), and I am having difficulty connecting Grafana with the Crate database (recommended by FIWARE).
My docker-compose file configuration is:
version: "3.5"
services:
orion:
image: fiware/orion
hostname: orion
container_name: fiware-orion
networks:
- fiware_network
depends_on:
- mongo-db
ports:
- "1026:1026"
command: -dbhost mongo-db -logLevel DEBUG -noCache
mongo-db:
image: mongo:3.6
hostname: mongo-db
container_name: db-mongo
networks:
- fiware_network
ports:
- "27017:27017"
command: --bind_ip_all --smallfiles
volumes:
- mongo-db:/data
grafana:
image: grafana/grafana:8.2.6
container_name: fiware-grafana
networks:
- fiware_network
ports:
- "3000:3000"
depends_on:
- crate
quantumleap:
image: fiware/quantum-leap
container_name: fiware-quantumleap
networks:
- fiware_network
ports:
- "8668:8668"
depends_on:
- mongo-db
- orion
- crate
environment:
- CRATE_HOST=crate
crate:
image: crate:1.0.5
networks:
- fiware_network
ports:
# Admin UI
- "4200:4200"
# Transport protocol
- "4300:4300"
command: -Ccluster.name=democluster -Chttp.cors.enabled=true -Chttp.cors.allow-origin="*"
volumes:
- cratedata:/data
volumes:
mongo-db:
cratedata:
networks:
fiware_network:
driver: bridge
After starting the containers, I get a positive response from Orion (OCB) and from QuantumLeap, and after creating the subscription between Orion and QuantumLeap, the data is stored and updated correctly in the Crate database.
Unfortunately, I am not able to get the visualization of the data in Grafana.
I thought the reason was that CrateDB was removed as a Grafana plugin several versions ago, but after researching how to connect CrateDB with Grafana through a PostgreSQL data source (I read: Visualizing time series data with Grafana and CrateDB), I'm still having difficulty getting the connection, and Grafana shows "Query error: dbquery error: EOF".
(screenshot: Grafana data source settings)
The difference with respect to the guide is the listening port: with port 5432 I got a response indicating that it is not possible to connect to the server, so I'm using port 4300.
After configuring it and trying to query from Grafana, I get the mentioned EOF error.
(screenshot: EOF error in Grafana)
I tried to connect from a database IDE (DBeaver) and I get exactly the same problem.
(screenshot: EOF error in DBeaver)
Is there something I am missing?
What should I change in my Docker configuration, the ports, or anything else to fix this?
It is worth mentioning that I am studying this because a project requires me to visualize context changes in real time with FIWARE and Grafana.
Thanks in advance.
The PostgreSQL port 5432 must be exposed by the CrateDB Docker image as well.
Additionally, I highly recommend using a recent (or just the latest) CrateDB version. The current stable release is 5.0.0; the version you are using, 1.0.5, is very old and no longer maintained.
Full example entry:
crate:
  image: crate:latest
  networks:
    - fiware_network
  ports:
    # Admin UI
    - "4200:4200"
    # Transport protocol
    - "4300:4300"
    # PostgreSQL protocol
    - "5432:5432"
  command: -Ccluster.name=democluster -Chttp.cors.enabled=true -Chttp.cors.allow-origin="*"
  volumes:
    - cratedata:/data
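With 5432 published, Grafana can reach CrateDB through its built-in PostgreSQL data source. A minimal provisioning sketch (the values are assumptions: CrateDB's default crate user with no password, the default doc schema as the database, and SSL disabled):

apiVersion: 1
datasources:
  - name: CrateDB
    type: postgres
    access: proxy
    url: crate:5432
    user: crate
    database: doc
    jsonData:
      sslmode: disable

The same values can be entered by hand under Configuration > Data sources in the Grafana UI.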

MaxScale no Slave State set

We want to use MaxScale and two MariaDB databases with docker-compose.
The problem is that we cannot get replication of the database working via MaxScale.
Write permissions are available via MaxScale on both databases. Via the command maxctrl list servers in the MaxScale container, we see both servers. The first server has the states Master, Running; the second server has only the state Running.
My docker-compose.yaml:
version: '3'
services:
  # Application
  app:
    build:
      context: .
      dockerfile: app.dockerfile
    working_dir: /var/www/project
    volumes:
      - ./project:/var/www/project
      - ./php.ini:/usr/local/etc/php/php.ini
    links:
      - database:database
    environment:
      - "DATABASE_HOST=database"
      - "DATABASE_PORT=4006"
  # Web server
  web:
    image: nginx:latest
    volumes:
      - ./vhost.conf:/etc/nginx/conf.d/default.conf
      - ./nginx-logs:/var/log/nginx
      # Inherit from app container
      - ./project:/var/www/project
      - ./php.ini:/usr/local/etc/php/php.ini
    ports:
      - 0.0.0.0:8021:80
    links:
      - app:app
  # Database
  database:
    image: mariadb:latest
    ports:
      - 0.0.0.0:3306:3306
    volumes:
      - ./database:/var/lib/mysql
      - ./database-config:/etc/mysql/
    command: mysqld --log-bin=mariadb-bin --binlog-format=ROW --server-id=3001 --log-slave-updates
    environment:
      - "MYSQL_ROOT_PASSWORD=secretDummyPassword"
      - "MYSQL_DATABASE=database"
      - "MYSQL_USER=database"
      - "MYSQL_PASSWORD=secretDummyPassword"
      - "skip-networking=0"
  # MaxScale
  maxscale:
    image: mariadb/maxscale:6.2.3
    depends_on:
      - database
    volumes:
      - ./maxscale.cnf:/etc/maxscale.cnf
    ports:
      - 0.0.0.0:4006:4006 # readwrite port
      - 0.0.0.0:4008:4008 # readonly port
      - 0.0.0.0:8989:8989 # REST API port
    links:
      - database:database
volumes:
  app: {}
My maxscale.cnf:
[maxscale]
threads=auto
[MariaDB-Monitor]
type=monitor
module=mariadbmon
servers=server1,server2
user=database
password=secretDummyPassword
auto_failover=true
auto_rejoin=true
enforce_read_only_slaves=1
[Read-Write-Service]
type=service
router=readwritesplit
servers=server1,server2
user=database
password=secretDummyPassword
master_failure_mode=fail_on_write
[Read-Write-Listener]
type=listener
service=Read-Write-Service
protocol=MariaDBClient
port=4006
[server1]
type=server
address=195.XXX.123.22
port=3306
protocol=MariaDBBackend
[server2]
type=server
address=142.XXX.186.188
port=3306
protocol=MariaDBBackend
If you haven't configured the replication manually, you can use the following command inside the MaxScale container to set up replication between the servers:
maxctrl call command mariadbmon reset-replication MariaDB-Monitor server1
This causes all other servers configured for the MariaDB-Monitor to start replicating from server1.
Note: this command resets the GTID positions, so it should not be used on a live system; there, use the CHANGE MASTER TO command with the correct GTID coordinates instead. It won't touch the data, but you'll lose the binlog history (it does a RESET MASTER).
If you want replication to be configured automatically when the container is first started, you can mount a file with SQL commands in it at /docker-entrypoint-initdb.d and MariaDB will execute it during startup. This is probably a better solution for automated systems, and it is quite convenient for a test setup.
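A minimal sketch of that approach, run against the replica (the file name replication-init.sql is hypothetical; the host, user, and password are taken from the compose file above, and the replication user must have the REPLICATION SLAVE privilege on the primary):

-- replication-init.sql, mounted at /docker-entrypoint-initdb.d/replication-init.sql
-- Point this server at the primary and start GTID-based replication.
CHANGE MASTER TO
  MASTER_HOST='195.XXX.123.22',
  MASTER_PORT=3306,
  MASTER_USER='database',
  MASTER_PASSWORD='secretDummyPassword',
  MASTER_USE_GTID=slave_pos;
START SLAVE;

In the compose file, the mount is a single extra volume line on the replica's database service:

- ./replication-init.sql:/docker-entrypoint-initdb.d/replication-init.sql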

Docker Compose Graylog

I created a Graylog 4 stack with Docker Compose. It deployed successfully and I can reach it through the browser, but the page is blank: the title identifies it as the Graylog Web Interface, yet the authentication screen does not appear. Does anyone know what the cause could be?
version: '3'
services:
  mongo:
    image: mongo:4.2
  # Elasticsearch: https://www.elastic.co/guide/en/elasticsearch/reference/7.10/docker.html
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch-oss:7.10.2
    environment:
      - http.host=0.0.0.0
      - transport.host=localhost
      - network.host=0.0.0.0
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    deploy:
      resources:
        limits:
          memory: 1g
  # Graylog: https://hub.docker.com/r/graylog/graylog/
  graylog:
    image: graylog/graylog:4.0
    environment:
      # CHANGE ME (must be at least 16 characters)!
      - GRAYLOG_PASSWORD_SECRET=somepasswordpepper
      # Password: admin
      - GRAYLOG_ROOT_PASSWORD_SHA2=8c6976e5b5410415bde908bd4dee15dfb167a9c873fc4bb8a81f6f2ab448a918
      - GRAYLOG_HTTP_EXTERNAL_URI=http://127.0.0.1:9000/
    entrypoint: /usr/bin/tini -- wait-for-it elasticsearch:9200 -- /docker-entrypoint.sh
    restart: always
    depends_on:
      - mongo
      - elasticsearch
    ports:
      # Graylog web interface and REST API
      - 9000:9000
      # Syslog TCP
      - 1514:1514
      # Syslog UDP
      - 1514:1514/udp
      # GELF TCP
      - 12201:12201
      # GELF UDP
      - 12201:12201/udp
(screenshot: blank Graylog web interface in the browser)
In your screenshot the IP address ends in a 6, but Graylog is bound to 127.0.0.1. Set http_bind_address to 127.0.0.1 and http_publish_uri or http_external_uri to the interface IP that ends in 6.
ref: Graylog docs
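In compose terms those settings map onto the GRAYLOG_HTTP_* environment variables. A sketch, with 192.0.2.6 standing in for whatever your actual interface IP is:

environment:
  - GRAYLOG_HTTP_BIND_ADDRESS=127.0.0.1:9000
  # Either of the following; the external URI is what browsers use to reach the API:
  - GRAYLOG_HTTP_PUBLISH_URI=http://192.0.2.6:9000/
  - GRAYLOG_HTTP_EXTERNAL_URI=http://192.0.2.6:9000/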

How to use fluent-bit with Docker Compose

I want to use the fluent-bit Docker image to help me persist the ephemeral Docker container logs to a location on my host (and later use it to ship logs elsewhere).
I am facing issues such as:
Cannot start service clamav: failed to initialize logging driver: dial tcp 127.0.0.1:24224: connect: connection refused
I have read a number of posts, including configuring fluentbit with docker, but I'm still at a loss.
My docker-compose setup is made up of nginx, our app, Keycloak, Elasticsearch, and ClamAV. I added fluent-bit and made it start first via depends_on. I changed the other services to use the fluentd logging driver.
Part of config:
clamav:
  container_name: clamav-app
  image: tiredofit/clamav:latest
  restart: always
  volumes:
    - ./clamav/data:/data
    - ./clamav/logs:/logs
  environment:
    - ZABBIX_HOSTNAME=clamav-app
    - DEFINITIONS_UPDATE_FREQUENCY=60
  networks:
    - iris-network
  expose:
    - "3310"
  depends_on:
    - fluentbit
  logging:
    driver: fluentd
fluentbit:
  container_name: iris-fluent
  image: fluent/fluent-bit:latest
  restart: always
  networks:
    - iris-network
  volumes:
    - ./fluent-bit/etc:/fluent-bit/etc
  ports:
    - "24224:24224"
    - "24224:24224/udp"
I have tried to proxy_pass 24224 to fluentbit in nginx and start nginx first; that avoided the error for clamav and es, but I get the same error with Keycloak.
So how can I configure the services to use the host, or is it that localhost is not the "external" host?
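Part of the answer is that the fluentd logging driver runs in the Docker daemon on the host, not inside the compose network: 127.0.0.1:24224 is resolved on the host, and it must already be accepting connections (via the published 24224 port above) at the moment each container starts. depends_on only orders container startup; it does not wait for fluent-bit to actually listen. A sketch of per-service logging options that makes the address explicit and tolerates fluent-bit coming up late (fluentd-async is the newer name of the fluentd-async-connect option):

logging:
  driver: fluentd
  options:
    fluentd-address: localhost:24224
    fluentd-async: "true"
    tag: clamav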