I'm trying to run Cassandra in Docker. Previously that was fine (unfortunately I used the latest image). Now when I run compose with different versions it never succeeds. Sometimes I see in the logs that startup completed, sometimes not. Every time it ends up with cassandra exited with code 137. I can find no errors in the logs. How can I diagnose the problem?
Here's my compose file. I tried to switch between 3.0.24, 3.11, 4 and 4.0.1 versions with no luck.
version: '3'
services:
cassandra:
image: cassandra:3.0.24
container_name: cassandra
ports:
- '7000:7000'
- '9042:9042'
- '9142:9142'
volumes:
- ./cassandra/cassandra-data:/var/lib/cassandra
environment:
- CASSANDRA_SEEDS=cassandra
- CASSANDRA_PASSWORD_SEEDER=yes
- CASSANDRA_PASSWORD=cassandra
networks:
- default-dev-network
networks:
default-dev-network:
external: true
UPDATE
Here's an example of the logs, but it varies from run to run.
INFO 16:01:43 Node /172.18.0.5 state jump to NORMAL
INFO 16:01:43 Waiting for gossip to settle before accepting client requests...
INFO 16:01:51 No gossip backlog; proceeding
INFO 16:01:51 Netty using native Epoll event loop
INFO 16:01:51 Using Netty Version: [netty-buffer=netty-buffer-4.0.44.Final.452812a, netty-codec=netty-codec-4.0.44.Final.452812a, netty-codec-haproxy=netty-codec-haproxy-4.0.44.Final.452812a, netty-codec-http=netty-codec-http-4.0.44.Final.452812a, netty-codec-socks=netty-codec-socks-4.0.44.Final.452812a, netty-common=netty-common-4.0.44.Final.452812a, netty-handler=netty-handler-4.0.44.Final.452812a, netty-tcnative=netty-tcnative-1.1.33.Fork26.142ecbb, netty-transport=netty-transport-4.0.44.Final.452812a, netty-transport-native-epoll=netty-transport-native-epoll-4.0.44.Final.452812a, netty-transport-rxtx=netty-transport-rxtx-4.0.44.Final.452812a, netty-transport-sctp=netty-transport-sctp-4.0.44.Final.452812a, netty-transport-udt=netty-transport-udt-4.0.44.Final.452812a]
INFO 16:01:51 Starting listening for CQL clients on /0.0.0.0:9042 (unencrypted)...
INFO 16:01:51 Not starting RPC server as requested. Use JMX (StorageService->startRPCServer()) or nodetool (enablethrift) to start it
INFO 16:01:51 Startup complete
It turned out to be a memory/CPU issue. After adding resources it runs, but not every time; playing with the CPU and memory limits helps somewhat but didn't give a reliable result.
Here's the full compose file:
version: '3'
services:
cassandra:
image: cassandra:3.0.24
container_name: cassandra
deploy:
replicas: 1
resources:
limits:
cpus: '2'
memory: 2G
ports:
- '7000:7000'
- '9042:9042'
- '9142:9142'
volumes:
- ./cassandra/cassandra-data:/var/lib/cassandra
environment:
- CASSANDRA_SEEDS=cassandra
- CASSANDRA_PASSWORD_SEEDER=yes
- CASSANDRA_PASSWORD=cassandra
networks:
- default-dev-network
networks:
default-dev-network:
external: true
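Exit code 137 means the process was killed with SIGKILL, which for a container almost always means the kernel's OOM killer stopped it when the memory limit was reached. A quick way to confirm that (a minimal sketch; cassandra is the container name from the compose file above):
# did the kernel kill the container for exceeding its memory limit?
docker inspect --format '{{.State.OOMKilled}} {{.State.ExitCode}}' cassandra
# watch memory usage against the limit while the node starts up
docker stats cassandra
# on the Docker host, the kernel log also records OOM kills
dmesg | grep -i -E 'oom|killed process'
If OOMKilled shows true, either raise the memory limit or cap Cassandra's heap; the official image should pick up MAX_HEAP_SIZE and HEAP_NEWSIZE from the environment via cassandra-env.sh (for example MAX_HEAP_SIZE=1G with HEAP_NEWSIZE=256M), which keeps the JVM comfortably inside the 2G limit above.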
I'm a little bit confused. I've been following this guide to get started and for the installation method:
https://aws.amazon.com/premiumsupport/knowledge-center/kinesis-kafka-connector-msk/
Set up the AWS CLI and configure it
Install Maven
Compile the connector
Set the classpath with the generated JAR
Set up the connector's properties file
FYI: I have a docker-compose file that creates all my containers (Kafka, MQTT, etc.)
(all of the above is set up on-premises)
I then executed all of this on my machine itself, not in the Kafka container, so for the last step, how would that work when I try to run it in standalone mode?
version: '3'
services:
nodered:
container_name: nodered
image: nodered/node-red
ports:
- "1880:1880"
volumes:
- ./nodered:/data
depends_on:
- mosquitto
environment:
- TZ=America/Toronto
- NODE_RED_ENABLE_PROJECTS=true
restart: always
mosquitto:
image: eclipse-mosquitto
container_name: mqtt
restart: always
ports:
- "1883:1883"
volumes:
- "./mosquitto/config:/mosquitto/config"
- "./mosquitto/data:/mosquitto/data"
- "./mosquitto/log:/mosquitto/log"
environment:
- TZ=America/Toronto
user: "${PUID}:${PGID}"
portainer:
ports:
- "9000:9000"
container_name: portainer
restart: always
volumes:
- "/var/run/docker.sock:/var/run/docker.sock"
- "./portainer/portainer_data:/data"
image: portainer/portainer-ce
zookeeper:
image: zookeeper:3.4
container_name: zookeeper
ports:
- "2181:2181"
volumes:
- "zookeeper_data:/data"
kafka:
image: wurstmeister/kafka:1.0.0
container_name: kafka
ports:
- "9092:9092"
- "9093:9093"
volumes:
- "kafka_data:/data"
environment:
- KAFKA_ZOOKEEPER_CONNECT=10.0.0.129:2181
- KAFKA_ADVERTISED_HOST_NAME=10.0.0.129
- JMX_PORT=9093
- KAFKA_ADVERTISED_PORT=9092
- KAFKA_LOG_RETENTION_HOURS=1
- KAFKA_MESSAGE_MAX_BYTES=10000000
- KAFKA_REPLICA_FETCH_MAX_BYTES=10000000
- KAFKA_GROUP_MAX_SESSION_TIMEOUT_MS=60000
- KAFKA_NUM_PARTITIONS=2
- KAFKA_DELETE_RETENTION_MS=1000
depends_on:
- zookeeper
restart: on-failure
cmak:
image: hlebalbau/kafka-manager:1.3.3.16
container_name: kafka-manager
restart: always
depends_on:
- kafka
- zookeeper
ports:
- "9080:9080"
environment:
- ZK_HOSTS=10.0.0.129
- APPLICATION_SECRET=letmein
command: -Dconfig.file=/kafka-manager/conf/application.conf -Dapplication.home=/kafkamanager -Dhttp.port=9080
volumes:
zookeeper_data:
driver: local
kafka_data:
driver: local
I guess I have to go into my Kafka container and run the below code, but how can I reference my machine's path... I'm stuck here or perhaps I'm missing something:
./bin/connect-standalone.sh {{path_from_machine_where_jar_is}}/kinesis-kafka-connector/config/worker.properties {{path_from_machine_where_jar_is}}/kinesis-kafka-connector/config/kinesis-streams-kafka-connector.properties
Or do I have to run all the previous steps in my Kafka container directly...
I was also thinking of just copying my jar file and moving it into my Kafka container:
docker cp /hostfile (container_id):/(to_the_place_you_want_the_file_to_be)
Thank you!
guess I have to go into my Kafka container and run the below code
No. Containers should only run one process, which here is the Kafka server.
So either you download Kafka locally and run the Connect scripts from your host,
or you simply add a new container for Kafka Connect, which will run Connect in distributed mode rather than standalone.
In either case, yes, you need to copy (or mount) the JAR into Connect's plugin path.
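To illustrate the Connect-container option, here is a rough sketch of what it could look like in the same compose file. Treat it as an outline rather than a tested config: the confluentinc/cp-kafka-connect image and its CONNECT_* variables are one common choice (check the image docs against your Kafka version), the bootstrap address reuses the 10.0.0.129:9092 your broker already advertises, and ./kinesis-kafka-connector/target is a placeholder for wherever your built JAR ends up.
  kafka-connect:
    image: confluentinc/cp-kafka-connect:6.2.0
    container_name: kafka-connect
    depends_on:
      - kafka
    ports:
      - "8083:8083"
    environment:
      CONNECT_BOOTSTRAP_SERVERS: 10.0.0.129:9092
      CONNECT_REST_ADVERTISED_HOST_NAME: kafka-connect
      CONNECT_GROUP_ID: kinesis-connect
      CONNECT_CONFIG_STORAGE_TOPIC: _connect-configs
      CONNECT_OFFSET_STORAGE_TOPIC: _connect-offsets
      CONNECT_STATUS_STORAGE_TOPIC: _connect-status
      CONNECT_CONFIG_STORAGE_REPLICATION_FACTOR: 1
      CONNECT_OFFSET_STORAGE_REPLICATION_FACTOR: 1
      CONNECT_STATUS_STORAGE_REPLICATION_FACTOR: 1
      CONNECT_KEY_CONVERTER: org.apache.kafka.connect.json.JsonConverter
      CONNECT_VALUE_CONVERTER: org.apache.kafka.connect.json.JsonConverter
      CONNECT_PLUGIN_PATH: /usr/share/java,/usr/share/confluent-hub-components,/connectors
    volumes:
      # mount the directory containing the built kinesis-kafka-connector JAR
      - ./kinesis-kafka-connector/target:/connectors/kinesis-kafka-connector
Once the worker is up, the connector itself is created through the Connect REST API on port 8083 instead of with the standalone script.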
Alternatively, run MSK rather than Kinesis and produce your Kafka data there rather than locally.
General Information:
OS: Ubuntu 20.04 LTS
Docker: v20.10.6
docker-compose: v1.29.2
Sound card: Steinberg UR22mkII
Drivers: ALSA
We are developing a web service to record audio signals and display various properties and analyses for anomaly detection. For some analyses larger window sizes are necessary, but some real-time plots are also included. The real-time plots are done via JavaScript (p5 module), while everything else is processed via Flask and Python and visualized via Grafana.
We have now encountered the problem that these two different clients cannot access the same audio device at the same time. On the host system this can be solved with the dsnoop plugin from asoundrc (https://www.alsa-project.org/wiki/Asoundrc), but so far we have not been able to implement this functionality in the Docker environment.
We have already tried to tunnel the virtual audio devices via the docker-compose file, but without success (the compose file is enclosed). The ALSA drivers inside the container are installed correctly. We suspect it has something to do with the setup of the internal Docker environment, but we are stuck at this point.
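One direction we have sketched but not been able to verify yet: define the dsnoop PCM on the host and mount that ALSA configuration into the webapp container, so both capture clients open the dsnoop device instead of the hardware device directly. The ./alsa/asound.conf path and the card referenced inside it are placeholders for our setup:
  webapp:
    build: .
    devices:
      - /dev/snd:/dev/snd
    volumes:
      - ./:/aad
      # ALSA config defining a dsnoop PCM (type dsnoop on the UR22mkII capture
      # device) so multiple clients can record from the card in parallel
      - ./alsa/asound.conf:/etc/asound.conf:ro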
We are grateful for any tips or hints!
Docker-Compose File:
version: "3.8"
services:
webapp:
build: .
restart: always
depends_on:
- influxdb
- grafana
ports:
- 5000:5000
volumes:
- ./:/aad
devices:
- /dev/snd:/dev/snd
environment:
# output gets written to the docker-compose console without buffering
- PYTHONUNBUFFERED=1
logging:
driver: "json-file"
options:
max-size: "200k"
max-file: "10"
influxdb:
image: influxdb
restart: always
ports:
- 8086:8086
volumes:
# mount volumes to store data and configuration files. Dirs are created if necessary
- ./influxdb/data:/var/lib/influxdb2
- ./influxdb/config:/etc/influxdb2
# mount script to be executed (only) after initial setup is done
- ./influxdb/scripts:/docker-entrypoint-initdb.d
environment:
# database setup is only executed if no boltdb file is found in the specified path, so the influx container can be rebooted once it has been set up
DOCKER_INFLUXDB_INIT_USERNAME: aad
DOCKER_INFLUXDB_INIT_PASSWORD: .......
DOCKER_INFLUXDB_INIT_ORG: aaddev
DOCKER_INFLUXDB_INIT_BUCKET: training
DOCKER_INFLUXDB_INIT_ADMIN_TOKEN: .......
DOCKER_INFLUXDB_INIT_MODE: setup
logging:
driver: "json-file"
options:
max-size: "200k"
max-file: "10"
grafana:
image: grafana/grafana
# port mapping only needed for easier debugging
ports:
- 3000:3000
restart: always
depends_on:
- influxdb
volumes:
- grafana-storage:/var/lib/grafana
- ./grafana/provisioning:/etc/grafana/provisioning
environment:
GF_SECURITY_ADMIN_USER: aad
GF_SECURITY_ADMIN_PASSWORD: .......
GF_PATHS_CONFIG: /etc/grafana/grafana.ini
GF_USERS_DEFAULT_THEME: light
GF_AUTH_ANONYMOUS_ENABLED: "True"
GF_SECURITY_ALLOW_EMBEDDING: "True"
GF_AUTH_ANONYMOUS_ORG_NAME: Main Org.
GF_AUTH_ANONYMOUS_ORG_ROLE: Viewer
GF_DASHBOARDS_MIN_REFRESH_INTERVAL: 1s
GF_AUTH_BASIC_ENABLED: "True"
GF_DISABLE_LOGIN_FORM: "True"
logging:
driver: "json-file"
options:
max-size: "200k"
max-file: "10"
volumes:
grafana-storage:
Python Environment:
name: aad
channels:
- anaconda
- conda-forge
- defaults
dependencies:
- portaudio=19.6.0=h7b6447c_4
- flask=1.1.2=pyhd3eb1b0_0
- librosa=0.8.0=pyh9f0ad1d_0
- matplotlib=3.3.4=py38h06a4308_0
- numpy=1.20.1=py38h93e21f0_0
- pandas=1.2.4=py38h2531618_0
- pip=21.0.1=py38h06a4308_0
- pyaudio=0.2.11=py38h7b6447c_2
- python=3.8.8=hdb3f193_5
- scikit-learn=0.24.1=py38ha9443f7_0
- scipy=1.6.2=py38had2a1c9_1
- tqdm=4.59.0=pyhd3eb1b0_1
- werkzeug=1.0.1=pyhd3eb1b0_0
- pip:
- influxdb-client==1.16.0
- rx==3.2.0
I am trying to make a local setup of Graylog 4 with Elasticsearch 7 and Mongo 4 using docker-compose. I am working on a Mac.
Here is my docker-compose.yml: https://gist.github.com/gandra/dc649b37e165d8e3fc5b20c30a8b5a79
After running:
docker-compose up -d --build
I cannot see any data at http://localhost:9000/
When I open that URL I see:
localhost didn’t send any data.
ERR_EMPTY_RESPONSE
Any idea how to make it work?
Here's the configuration I'm using in my project to get it working (compose v3).
###################################
# Graylog container logging start #
###################################
# Taken from https://docs.graylog.org/en/4.0/pages/installation/docker.html
# MongoDB: https://hub.docker.com/_/mongo/
mongo:
image: mongo:4.2
# Elasticsearch: https://www.elastic.co/guide/en/elasticsearch/reference/7.10/docker.html
elasticsearch:
image: docker.elastic.co/elasticsearch/elasticsearch-oss:7.10.2
environment:
- http.host=0.0.0.0
- transport.host=localhost
- network.host=0.0.0.0
- "ES_JAVA_OPTS=-Xms512m -Xmx512m"
ulimits:
memlock:
soft: -1
hard: -1
deploy:
resources:
limits:
memory: 1g
# Graylog: https://hub.docker.com/r/graylog/graylog/
graylog:
image: graylog/graylog:4.0
environment:
# CHANGE ME (must be at least 16 characters)!
- GRAYLOG_PASSWORD_SECRET=somepasswordpepper
# Password: admin
- GRAYLOG_ROOT_PASSWORD_SHA2=8c6976e5b5410415bde908bd4dee15dfb167a9c873fc4bb8a81f6f2ab448a918
- GRAYLOG_HTTP_EXTERNAL_URI=http://127.0.0.1:9000/
entrypoint: /usr/bin/tini -- wait-for-it elasticsearch:9200 -- /docker-entrypoint.sh
restart: always
depends_on:
- mongo
- elasticsearch
ports:
# Graylog web interface and REST API
- 9000:9000
# Syslog TCP
- 1514:1514
# Syslog UDP
- 1514:1514/udp
# GELF TCP
- 12201:12201
# GELF UDP
- 12201:12201/udp
###################################
# Graylog container logging end #
###################################
I will say, this took a fair bit of time to start. The output logs ran a while as Graylog, MongoDB, and Elasticsearch did their setup work. At the end of it, it did eventually become available (it took about a full two minutes). Until it was ready, though, I saw the same response that you did.
Graylog does not support Elasticsearch versions 7.11 or greater, so you'll need to change the Elasticsearch version to 7.10.2. Beyond that, what are you seeing in Graylog's server.log?
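You can check that without entering the container; the official Graylog image logs to stdout, so the compose logs should show the same messages server.log would (assuming the service is named graylog as in the snippet above):
# is the graylog container running, or stuck in a restart loop?
docker-compose ps
# follow Graylog's server log output
docker-compose logs -f graylog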
I'm trying to run an Elastic stack on my RasPi4 using docker-compose. The problem is that Elastic does not provide images for the ARM architecture, only x86, so the RasPi is not supported out of the box.
Every time I start my docker-compose config I get this message:
7.9.3: Pulling from elasticsearch/elasticsearch
ERROR: no matching manifest for linux/arm/v7 in the manifest list entries
A Google search mostly gives results pointing to an unofficial image, which I would try, but this one is 4 years old: https://hub.docker.com/r/ind3x/rpi-elasticsearch/. So I guess I don't get an up-to-date Elasticsearch.
Anyone got an idea on how I can get my Elastic stack to run? This is my docker-compose.yml, pretty straightforward.
version: '3.3'
services:
elastic-node-1:
image: docker.elastic.co/elasticsearch/elasticsearch:7.9.3
container_name: elastic-node-1
restart: always
environment:
- node.name=elastic-node-1
- cluster.name=es-docker-cluster
- discovery.seed_hosts=elastic-node-2
- cluster.initial_master_nodes=elastic-node-1,elastic-node-2
- bootstrap.memory_lock=true
- "ES_JAVA_OPTS=-Xms512m -Xmx512m"
ulimits:
memlock:
soft: -1
hard: -1
volumes:
- elastic-data-1:/usr/share/elasticsearch/data
ports:
- 9200:9200
networks:
- elastic-net
elastic-node-2:
image: docker.elastic.co/elasticsearch/elasticsearch:7.9.3
container_name: elastic-node-2
restart: always
environment:
- node.name=elastic-node-2
- cluster.name=es-docker-cluster
- discovery.seed_hosts=elastic-node-1
- cluster.initial_master_nodes=elastic-node-1,elastic-node-2
- bootstrap.memory_lock=true
- "ES_JAVA_OPTS=-Xms512m -Xmx512m"
ulimits:
memlock:
soft: -1
hard: -1
volumes:
- elastic-data-2:/usr/share/elasticsearch/data
ports:
- 9201:9201
networks:
- elastic-net
kibana:
image: docker.elastic.co/kibana/kibana:7.9.3
container_name: kibana
restart: always
depends_on:
- elastic-node-1
- elastic-node-2
ports:
- 5601:5601
environment:
ELASTICSEARCH_URL: http://elastic-node-1:9200
ELASTICSEARCH_HOSTS: http://elastic-node-1:9200
networks:
- elastic-net
volumes:
elastic-data-1:
driver: local
elastic-data-2:
driver: local
networks:
elastic-net:
driver: bridge
If there is no way to get this Elastic setup to run, can you recommend any other hardware similar to the RasPi (running Linux) which is x86 and could take the place of my RasPi? Then I would switch hardware for my Elastic stack.
I have some experience with Elastic in larger business applications, so just some additional food for thought - I do not have a direct answer here yet:
Indeed, an image that is 4 years old is not worth the effort. Elastic is stable in version 7.x, 8.x is in progress, and there have been massive changes.
You need to consider that the heap size available to Elastic should be configured to about 50% of available RAM, as the rest is shared with Lucene; in other words, Elastic can be quite RAM hungry. Depending on your use case, and given that the RasPi is limited to a maximum of 8 GB at this point in time, you may want to consider that.
For a small application it may work, but I would not consider it more than experimental.
If you do not have any other way, you may have a couple of options:
build a Docker image yourself (or find someone interested enough to join the effort, maybe the original author of that old Docker image)
go step by step: first deploy Elastic standalone on a headless RasPi (avoid Docker for the moment to reduce overhead) and then add further Elastic node configs (Elastic usually only works well with at least three nodes)
or indeed build a cluster which offers at least 8-16 GB per node - I believe an Ubuntu-based setup on x86 will do.
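One more thought on the architecture error itself: before switching hardware it is worth checking which platforms a given image tag actually provides, because a tag that ships a linux/arm64 variant could work with a 64-bit OS on the Pi 4, while a 32-bit (linux/arm/v7) OS cannot use it. A quick check, assuming a reasonably recent Docker CLI with the manifest command:
docker manifest inspect docker.elastic.co/elasticsearch/elasticsearch:7.9.3 | grep architecture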
I have a series of containers that are started by docker-compose. Specifically they are multiple zookeeper containers:
zk1:
image: seven10/zookeeper:3.4.6
container_name: zk1
hostname: zk1
restart: always
ports:
- "2181:2181"
- "2888:2888"
- "3888:3888"
environment:
- ZOOKEEPER_ID=1
net: ${MY_NETWORK_NAME}
volumes:
- /seven10/zk/zk1/data:/opt/zookeeper-3.4.6/data
zk2:
image: seven10/zookeeper:3.4.6
container_name: zk2
restart: always
hostname: zk2
ports:
- "2182:2181"
- "2889:2888"
- "3889:3888"
environment:
- ZOOKEEPER_ID=2
net: ${MY_NETWORK_NAME}
volumes:
- /seven10/zk/zk2/data:/opt/zookeeper-3.4.6/data
zk3:
image: seven10/zookeeper:3.4.6
container_name: zk3
hostname: zk3
restart: always
ports:
- "2183:2181"
- "2890:2888"
- "3890:3888"
environment:
- ZOOKEEPER_ID=3
net: ${MY_NETWORK_NAME}
volumes:
- /seven10/zk/zk3/data:/opt/zookeeper-3.4.6/data
So when I go to start the containers, zk1 gives me this warning at the start:
WARN [WorkerSender[myid=1]:QuorumCnxManager#382] - Cannot open channel to 3 at
election address zk3:3888
java.net.UnknownHostException: zk3
but then doesn't say anything else about zk3 after a couple of seconds.
However, zk1 gives the following error for zk2 continuously:
2016-02-18 15:28:57,384 [myid:1] - WARN [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:Learner#233] - Unexpected exception, tries=0, connecting to zk2:2888
java.net.UnknownHostException: zk2
zk2 doesn't say ANYTHING ever about zk1, but briefly complains with the "cannot open channel" error for zk3.
zk3 doesn't ever mention zk1 or zk2.
So the big problem is that zk1 can't find zk2 ever. It just spams the logs and refuses connections from kafka. Why is this so and how should I go about solving this problem?
My dev box is using docker version 1.9.1 and docker-compose version 1.5.1 on ubuntu 14.04 (Mint Rafello I think?), although the target environment will be ubuntu 15.10.
Does your host system know how to link zk1/2/3 to an IP address? If you're launching all three servers on the same node, you should use localhost as the host name (the server names should still be unique).
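A quick way to check that from the running containers (a sketch; getent may not be present in every image, in which case ping or nslookup can stand in):
# can zk1 resolve the other two names?
docker exec zk1 getent hosts zk2 zk3
# which IP did Docker actually assign to zk2?
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}} {{end}}' zk2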