I'm trying to run an Elastic Stack on my RasPi4 using Docker Compose. The problem is that Elastic does not provide images for the ARM architecture, only x86, so the RasPi is not supported out of the box.
Every time I start my Docker Compose config I get this message:
7.9.3: Pulling from elasticsearch/elasticsearch
ERROR: no matching manifest for linux/arm/v7 in the manifest list entries
A Google search mostly turns up an unofficial image, which I would try, but it is 4 years old: https://hub.docker.com/r/ind3x/rpi-elasticsearch/. So I guess I wouldn't get an up-to-date Elasticsearch.
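To confirm which platforms a tag actually ships, you can inspect its manifest list; a quick check (docker manifest may need the experimental CLI flag on older Docker versions):

docker manifest inspect docker.elastic.co/elasticsearch/elasticsearch:7.9.3 | grep -A2 '"platform"'
# prints the architecture/os pairs published for this tag;
# linux/arm/v7 is not among them, which matches the pull error above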
Anyone got an idea how I can get my Elastic stack to run? This is my docker-compose.yml; pretty straightforward.
version: '3.3'
services:
  elastic-node-1:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.9.3
    container_name: elastic-node-1
    restart: always
    environment:
      - node.name=elastic-node-1
      - cluster.name=es-docker-cluster
      - discovery.seed_hosts=elastic-node-2
      - cluster.initial_master_nodes=elastic-node-1,elastic-node-2
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - elastic-data-1:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
    networks:
      - elastic-net
  elastic-node-2:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.9.3
    container_name: elastic-node-2
    restart: always
    environment:
      - node.name=elastic-node-2
      - cluster.name=es-docker-cluster
      - discovery.seed_hosts=elastic-node-1
      - cluster.initial_master_nodes=elastic-node-1,elastic-node-2
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - elastic-data-2:/usr/share/elasticsearch/data
    ports:
      # the node still listens on 9200 inside the container,
      # so map host port 9201 to container port 9200
      - 9201:9200
    networks:
      - elastic-net
  kibana:
    image: docker.elastic.co/kibana/kibana:7.9.3
    container_name: kibana
    restart: always
    depends_on:
      - elastic-node-1
      - elastic-node-2
    ports:
      - 5601:5601
    environment:
      ELASTICSEARCH_URL: http://elastic-node-1:9200
      ELASTICSEARCH_HOSTS: http://elastic-node-1:9200
    networks:
      - elastic-net
volumes:
  elastic-data-1:
    driver: local
  elastic-data-2:
    driver: local
networks:
  elastic-net:
    driver: bridge
If there is no way to get this Elastic setup to run, can you recommend any other hardware similar to the RasPi (running Linux) which is x86 and could take its place? Then I would switch hardware for my Elastic stack.
I have some experience with Elastic in larger business applications, so just some additional food for thought - I do not have a direct answer here yet:
- Indeed, an image that is 4 years old is not worth the effort. Elastic is stable in version 7.x, 8.x is in progress, and there have been massive changes.
- You need to consider that the heap size available to Elastic should be configured to at most 50% of RAM, as the rest is shared with Lucene's file-system cache, meaning Elastic can be quite RAM hungry. Depending on your use case, and given that the RasPi currently maxes out at 8 GB, you may want to factor that in. For a small application it may work, but I would not consider it more than experimental.
If you have no other way, you have two options:
- Build a Docker image yourself (or find someone interested enough to join the effort, maybe the original author of that old Docker image) - see the sketch below.
- Go step by step: first deploy Elastic standalone on a headless RasPi (even avoid Docker for the moment to reduce overhead), then add some Elastic node configs (Elastic usually only works well with at least three nodes).
Otherwise, build a cluster that offers at least 8-16 GB per node - I believe an Ubuntu-based setup on x86 will do.
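For the build-it-yourself option, here is a minimal Dockerfile sketch. It assumes a 64-bit OS on the Pi (linux/arm64) and that Elastic's 7.9.3 linux-aarch64 tarball URL below is still valid - verify both before relying on it; Elastic publishes no arm/v7 artifacts, so a 32-bit Raspberry Pi OS won't work either way.

# Dockerfile - build an arm64 Elasticsearch image from the official tarball
# (sketch; the tarball bundles its own JDK, so a slim Debian base suffices)
FROM arm64v8/debian:bullseye-slim
RUN apt-get update && apt-get install -y curl ca-certificates \
 && curl -fsSL -o /tmp/es.tar.gz \
    https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.9.3-linux-aarch64.tar.gz \
 && tar -xzf /tmp/es.tar.gz -C /usr/share \
 && mv /usr/share/elasticsearch-7.9.3 /usr/share/elasticsearch \
 && useradd -m elasticsearch \
 && chown -R elasticsearch /usr/share/elasticsearch \
 && rm /tmp/es.tar.gz
# Elasticsearch refuses to run as root
USER elasticsearch
EXPOSE 9200 9300
CMD ["/usr/share/elasticsearch/bin/elasticsearch"]

Build it once with docker build -t elasticsearch-arm64:7.9.3 . and point the image: lines in the compose file at that local tag.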
pymediawikidocker automatically generates Docker images and containers for MediaWiki, to give a "one-click" experience for setting up a whole cluster of MediaWikis using different versions/extensions and database settings for testing.
To control the process, the library https://github.com/gabrieldemarmiesse/python-on-whales is used, which handles the docker compose commands.
The controlling Python software now needs to work with the automatically generated containers and tries to calculate their names the same way docker compose does.
I get mixed results, which might depend on the operating system and Docker Compose version, e.g. mw1_35_8-mw-1 or mw_135_8_mw_1.
According to https://github.com/docker/for-mac/issues/6035 I tried:
separator = "-" if platform.system() == "Darwin" else "_"
but that still doesn't give consistent results, and my CI tests in https://github.com/WolfgangFahl/pymediawikidocker keep failing.
I am working around the problem now by trying out both names, but would love to know what the rules are for creating the container name.
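As far as I can tell the separator depends on the Compose major version, not the OS: Compose v1 joins project, service and index with underscores, Compose v2 with hyphens (unless run with --compatibility). A minimal Python sketch of the try-both workaround; the helper names and the project-name normalization rule are my assumptions, not python-on-whales API:

import re
import subprocess

def project_name(directory: str) -> str:
    """Normalize a directory name roughly the way Compose does:
    lowercase and strip characters outside [a-z0-9_-]
    (my reading of Compose's behavior, not a documented API)."""
    return re.sub(r"[^a-z0-9_-]", "", directory.lower())

def candidate_container_names(project: str, service: str, index: int = 1):
    """Compose v1 joins with '_', Compose v2 with '-'."""
    return [f"{project}{sep}{service}{sep}{index}" for sep in ("_", "-")]

def resolve_container_name(project: str, service: str, index: int = 1) -> str:
    """Probe docker for whichever candidate name actually exists."""
    names = subprocess.run(
        ["docker", "ps", "--all", "--format", "{{.Names}}"],
        capture_output=True, text=True, check=True,
    ).stdout.split()
    for name in candidate_container_names(project, service, index):
        if name in names:
            return name
    raise LookupError(f"no container found for {project}/{service}")

resolve_container_name("mw1358", "mw") would then return whichever of mw1358_mw_1 or mw1358-mw-1 actually exists on the current engine.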
Here is an example docker-compose.yml I am generating:
version: "3"
# 2 services
# db - database
# mw - mediawiki
services:
# MySQL compatible relational database
db:
# use original image
image: mariadb:10.9
restart: always
environment:
MYSQL_DATABASE: wiki
MYSQL_USER: wikiuser
MYSQL_PASSWORD: "BOqdADGJYADBsZK7fg"
MYSQL_ROOT_PASSWORD: "hnOZz1xbkvySSh3RJg"
ports:
- 9308:3306
volumes:
- etc:/etc
# mediawiki
mw:
#image: mediawiki:1.35.8
# use the Dockerfile in this directory
build: .
restart: always
ports:
- 9082:80
links:
- db
depends_on:
- db
volumes:
- wikiimages:/var/www/html/images
# After initial setup, download LocalSettings.php to the same directory as
# this yaml and uncomment the following line and use compose to restart
# the mediawiki service
# - ./LocalSettings.php:/var/www/html/LocalSettings.php
volumes:
etc:
driver: local
wikiimages:
driver: local%
Cheers
I am currently trying to reproduce the tutorial for developing an application/solution with Fiware described in Desarrolla tu primer aplicación en Fiware, and I am having difficulty connecting Grafana with the Crate database (recommended by Fiware).
My docker-compose file configuration is:
version: "3.5"
services:
orion:
image: fiware/orion
hostname: orion
container_name: fiware-orion
networks:
- fiware_network
depends_on:
- mongo-db
ports:
- "1026:1026"
command: -dbhost mongo-db -logLevel DEBUG -noCache
mongo-db:
image: mongo:3.6
hostname: mongo-db
container_name: db-mongo
networks:
- fiware_network
ports:
- "27017:27017"
command: --bind_ip_all --smallfiles
volumes:
- mongo-db:/data
grafana:
image: grafana/grafana:8.2.6
container_name: fiware-grafana
networks:
- fiware_network
ports:
- "3000:3000"
depends_on:
- crate
quantumleap:
image: fiware/quantum-leap
container_name: fiware-quantumleap
networks:
- fiware_network
ports:
- "8668:8668"
depends_on:
- mongo-db
- orion
- crate
environment:
- CRATE_HOST=crate
crate:
image: crate:1.0.5
networks:
- fiware_network
ports:
# Admin UI
- "4200:4200"
# Transport protocol
- "4300:4300"
command: -Ccluster.name=democluster -Chttp.cors.enabled=true -Chttp.cors.allow-origin="*"
volumes:
- cratedata:/data
volumes:
mongo-db:
cratedata:
networks:
fiware_network:
driver: bridge
After starting the containers, I get a positive response from OCB (Orion) and from QuantumLeap, and after creating the subscription between Orion and QuantumLeap, the data is stored and updated correctly in the Crate database.
Unfortunately, I am not able to get the data to visualize in Grafana.
I thought the reason was that CrateDB was removed as a Grafana plugin several versions ago, but after researching how to connect CrateDB to Grafana through a Postgres data source (I read Visualizing time series data with Grafana and CrateDB), I'm still unable to get the connection working; Grafana shows "Query error: dbquery error: EOF".
[screenshot: Grafana data source settings]
The difference with respect to the guide is the listening port: with port 5432 I get a response indicating it is not possible to connect to the server, so I'm using port 4300 instead.
After configuring this and trying to query from Grafana, I get the mentioned EOF error.
[screenshot: EOF error in Grafana]
I tried to connect from a database IDE (DBeaver) and I get exactly the same problem.
[screenshot: EOF error in DBeaver]
Is there something I am missing?
What should I change in my docker configuration, or ports, or anything else to fix this?
I think it is worth mentioning that I am studying this because a project requires me to visualize context changes in real time with Fiware and Grafana.
Thanks in advance
The PostgreSQL port 5432 must be published by the CrateDB container as well.
Additionally, I highly recommend using a recent (or just the latest) CrateDB version; the current stable is 5.0.0, and your version 1.0.5 is very old and no longer maintained.
Full example entry:
crate:
  image: crate:latest
  networks:
    - fiware_network
  ports:
    # Admin UI
    - "4200:4200"
    # Transport protocol
    - "4300:4300"
    # PostgreSQL protocol
    - "5432:5432"
  command: -Ccluster.name=democluster -Chttp.cors.enabled=true -Chttp.cors.allow-origin="*"
  volumes:
    - cratedata:/data
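On the Grafana side, the connection can then go through the built-in Postgres data source against crate:5432. A provisioning sketch, with my assumptions marked: CrateDB's default superuser is crate with no password, doc is its default schema, and SSL is off in this setup:

# grafana/provisioning/datasources/cratedb.yml (hypothetical path; mount it
# into the grafana service at /etc/grafana/provisioning/datasources/)
apiVersion: 1
datasources:
  - name: CrateDB
    type: postgres
    access: proxy
    # service name and PostgreSQL port from the compose file above
    url: crate:5432
    user: crate       # CrateDB default superuser (assumption: no password set)
    database: doc     # CrateDB's default schema
    jsonData:
      sslmode: disable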
I am currently using RabbitMQ version 3.7 in my project.
I'm creating my RabbitMQ container from a docker-compose file.
My question is: how do I raise the default maximum message size from 512 MB to ~2 GB?
rabbit:
  container_name: rabbitMQ
  hostname: rabbit
  image: "rabbitmq:3.7-management"
  environment:
    - RABBITMQ_DEFAULT_USER=admin
    - RABBITMQ_DEFAULT_PASS=mypass
  ports:
    - "15672:15672"
    # - "5672:5672"
  volumes:
    - ./rabbitmq.conf:/etc/rabbitmq/rabbitmq.conf
From what I gathered I needed to change a property in rabbitmq.conf, but everything I tried had no effect on the allowed size in production.
Is it even possible to change this value?
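For reference, a configurable limit only exists since RabbitMQ 3.8 (the max_message_size key), and as far as I know the broker caps it at 512 MiB, so ~2 GB is not reachable this way. A sketch assuming an upgrade to a 3.8+ image:

# rabbitmq.conf - requires rabbitmq:3.8 or later
# value is in bytes; 536870912 (512 MiB) is the maximum the broker accepts
max_message_size = 536870912

With the volume mount from the compose file above, the container picks this file up at /etc/rabbitmq/rabbitmq.conf on start.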
I'm trying to run Cassandra in Docker. Previously that was fine (unfortunately I used the latest image). Now when I run compose with different versions it never succeeds. Sometimes I see in the logs that startup completed, sometimes not. Every time it ends with cassandra exited with code 137, and I can find no errors in the logs. How can I diagnose the problem?
Here's my compose file. I tried switching between versions 3.0.24, 3.11, 4, and 4.0.1 with no luck.
version: '3'
services:
  cassandra:
    image: cassandra:3.0.24
    container_name: cassandra
    ports:
      - '7000:7000'
      - '9042:9042'
      - '9142:9142'
    volumes:
      - ./cassandra/cassandra-data:/var/lib/cassandra
    environment:
      - CASSANDRA_SEEDS=cassandra
      - CASSANDRA_PASSWORD_SEEDER=yes
      - CASSANDRA_PASSWORD=cassandra
    networks:
      - default-dev-network
networks:
  default-dev-network:
    external: true
UPDATE
Here's an example of the logs, but it varies from run to run.
INFO 16:01:43 Node /172.18.0.5 state jump to NORMAL
INFO 16:01:43 Waiting for gossip to settle before accepting client requests...
INFO 16:01:51 No gossip backlog; proceeding
INFO 16:01:51 Netty using native Epoll event loop
INFO 16:01:51 Using Netty Version: [netty-buffer=netty-buffer-4.0.44.Final.452812a, netty-codec=netty-codec-4.0.44.Final.452812a, netty-codec-haproxy=netty-codec-haproxy-4.0.44.Final.452812a, netty-codec-http=netty-codec-http-4.0.44.Final.452812a, netty-codec-socks=netty-codec-socks-4.0.44.Final.452812a, netty-common=netty-common-4.0.44.Final.452812a, netty-handler=netty-handler-4.0.44.Final.452812a, netty-tcnative=netty-tcnative-1.1.33.Fork26.142ecbb, netty-transport=netty-transport-4.0.44.Final.452812a, netty-transport-native-epoll=netty-transport-native-epoll-4.0.44.Final.452812a, netty-transport-rxtx=netty-transport-rxtx-4.0.44.Final.452812a, netty-transport-sctp=netty-transport-sctp-4.0.44.Final.452812a, netty-transport-udt=netty-transport-udt-4.0.44.Final.452812a]
INFO 16:01:51 Starting listening for CQL clients on /0.0.0.0:9042 (unencrypted)...
INFO 16:01:51 Not starting RPC server as requested. Use JMX (StorageService->startRPCServer()) or nodetool (enablethrift) to start it
INFO 16:01:51 Startup complete
The reason was a memory/CPU issue: exit code 137 means the container was killed with SIGKILL (128 + 9), which usually points to the kernel OOM killer. After adding resources it runs, but not every time; playing with CPUs and memory helps somewhat but didn't bring reliable results.
Here's the full compose file:
version: '3'
services:
  cassandra:
    image: cassandra:3.0.24
    container_name: cassandra
    deploy:
      replicas: 1
      resources:
        limits:
          cpus: '2'
          memory: 2G
    ports:
      - '7000:7000'
      - '9042:9042'
      - '9142:9142'
    volumes:
      - ./cassandra/cassandra-data:/var/lib/cassandra
    environment:
      - CASSANDRA_SEEDS=cassandra
      - CASSANDRA_PASSWORD_SEEDER=yes
      - CASSANDRA_PASSWORD=cassandra
    networks:
      - default-dev-network
networks:
  default-dev-network:
    external: true
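One way to make this more deterministic, sketched under the assumption that your image tag honors the official cassandra image's MAX_HEAP_SIZE and HEAP_NEWSIZE environment variables (verify for your tag): pin the JVM heap well below the container memory limit, so the JVM plus Cassandra's off-heap structures cannot outgrow it and trigger the OOM killer.

environment:
  - CASSANDRA_SEEDS=cassandra
  - CASSANDRA_PASSWORD_SEEDER=yes
  - CASSANDRA_PASSWORD=cassandra
  # keep the heap well under the 2G container limit;
  # the JVM itself and off-heap memory need the rest
  - MAX_HEAP_SIZE=1G
  - HEAP_NEWSIZE=200M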
This is my Docker compose/stack file. When I deploy on a single node everything works fine, but when I deploy on multiple nodes I get the following error:
invalid mount config for type "bind": bind source path does not exist
version: '3'
services:
  shinyproxy:
    build: /etc/shinyproxy
    deploy:
      replicas: 3
    user: root:root
    hostname: shinyproxy
    image: shinyproxy-example
    restart: unless-stopped
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    ports:
      - 5000:5000
    networks:
      - proxynetwork
  mysql:
    image: mysql
    deploy:
      replicas: 3
    volumes:
      - /mysqldata:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD: root_password
      MYSQL_DATABASE: keycloak
      MYSQL_USER: keycloak
      MYSQL_PASSWORD: password
    networks:
      - proxynetwork
  keycloak:
    deploy:
      replicas: 3
    image: jboss/keycloak
    volumes:
      - /etc/letsencrypt/live/ds-gym.de/fullchain.pem:/etc/x509/https/tls.crt
      - /etc/letsencrypt/live/ds-gym.de/privkey.pem:/etc/x509/https/tls.key
      #- /theme/govuk-social-providers/:/opt/jboss/keycloak/themes/govuk-social-providers/
    environment:
      - PROXY_ADDRESS_FORWARDING=true
      - KEYCLOAK_USER=myadmin
      - KEYCLOAK_PASSWORD=mypassword
    ports:
      - 8443:8443
    networks:
      - proxynetwork
networks:
  proxynetwork:
    external: true
I understand that the volume paths are expected to exist on every node, but I think this is bad practice, and my other 2 nodes are just workers anyway. How can I solve this problem? Hopefully there is a solution that lets me keep the volumes, since I use the same file with docker-compose build to build my images.
Can someone help me?
Thank you :-)
If possible, you could restrict this service to the node that has the required host paths using placement constraints. However, I'm guessing that's not an option in this use case.
Host-mounted volumes should really not be used in a swarm deployment, as they cause redundant data in the filesystems between the nodes (all files need to be present on all nodes).
One solution would be to use NFS volumes:
volumes:
  example:
    driver_opts:
      type: "nfs"
      o: "addr=<NFS_SERVER_IP>,nolock,soft,rw"
      device: ":/docker/path/to/configs"
This solution requires you to host an NFS server, though. Also keep in mind that this approach is fine for configs but should not be used for file systems that need high-performance access.
Regarding your question about keeping your docker-compose file the same across environments: while it is technically possible, most modern projects consist of a base compose file plus an environment-specific override for volumes, networks, images, etc., as sketched below.
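A minimal sketch of that pattern; the file names follow the usual compose conventions and the NFS export path is a placeholder:

# docker-compose.yml (base): defines services without host-specific mounts.
# docker-compose.override.yml (below): adds the environment-specific volume;
# it is merged automatically by docker-compose, or passed explicitly to swarm:
#   docker stack deploy -c docker-compose.yml -c docker-compose.override.yml mystack
services:
  mysql:
    volumes:
      - mysqldata:/var/lib/mysql
volumes:
  mysqldata:
    driver_opts:
      type: "nfs"
      o: "addr=<NFS_SERVER_IP>,nolock,soft,rw"
      device: ":/exports/mysqldata"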
In a swarm, your services are deployed on whichever nodes are available.
I suppose your to-be-mounted directory is on the manager node, so pin the service to the manager node like so:
deploy:
  placement:
    constraints:
      - node.role == manager
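In context, applied for example to the mysql service from the question (a sketch; merge with your existing deploy keys):

mysql:
  image: mysql
  deploy:
    # with a single manager node, 3 replicas would all land on it anyway
    replicas: 1
    placement:
      constraints:
        - node.role == manager
  volumes:
    - /mysqldata:/var/lib/mysql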