I'm trying to run clickhouse-server and clickhouse-client services using Docker and Docker Compose. Based on a ClickHouse docker-compose file and another Compose sample, I created the services in my docker-compose.yml file as you can see below:
docker-compose.yml:
ch_server:
container_name: myapp_ch_server
image: yandex/clickhouse-server
ports:
- "8181:8123"
- "9000:9000"
- "9009:9009"
ulimits:
nproc: 65535
nofile:
soft: 262144
hard: 262144
volumes:
- ./ch_db_data:/var/lib/clickhouse/
- ./ch_db_logs:/var/log/clickhouse-server/
networks:
- myapp-network
ch_client:
container_name: myapp_ch_client
image: yandex/clickhouse-client
command: ['--host', 'ch_server']
networks:
- myapp-network
When I run the docker-compose up command, the following exception occurs in the clickhouse-client service:
myapp_ch_client | Code: 62. DB::Exception: Empty query
myapp_ch_client exited with code 62
Do you have any idea how to fix this error?
You just need to pass the SQL query in the command params:
version: "2.4"
services:
ch_server:
container_name: myapp_ch_server
image: yandex/clickhouse-server
ports:
- "8123:8123"
- "9000:9000"
- "9009:9009"
ulimits:
nproc: 65535
nofile:
soft: 262144
hard: 262144
volumes:
- ./ch_db_data:/var/lib/clickhouse/
- ./ch_db_logs:/var/log/clickhouse-server/
networks:
- myapp-network
healthcheck:
test: wget --no-verbose --tries=1 --spider localhost:8123/ping || exit 1
interval: 2s
timeout: 2s
retries: 16
ch_client:
container_name: myapp_ch_client
image: yandex/clickhouse-client
command: ['--host', 'ch_server', '--query', 'select * from system.functions order by name limit 4']
networks:
- myapp-network
depends_on:
ch_server:
condition: service_healthy
networks:
myapp-network:
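With the healthcheck and the service_healthy condition in place, a plain docker-compose up is enough: ch_client starts only after ch_server answers on /ping, runs the query from command, and exits. A quick sketch of the workflow (the SELECT 1 query is just an arbitrary example):
# start the stack; ch_client waits for the healthcheck, runs its query, then exits
docker-compose up
# run an ad-hoc query later with a one-off client container on the same network
docker-compose run --rm ch_client --host ch_server --query 'SELECT 1'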
It doesn't make sense to define clickhouse-client in docker-compose; clickhouse-client is usually run outside of the docker-compose file:
define a docker-compose file that declares the servers (such as ClickHouse cluster nodes, ZooKeeper, Apache Kafka, etc.). For example, consider a config with one ClickHouse node:
version: "2.4"
services:
ch_server:
container_name: myapp_ch_server
image: yandex/clickhouse-server
ports:
- "8123:8123"
- "9000:9000"
- "9009:9009"
ulimits:
nproc: 65535
nofile:
soft: 262144
hard: 262144
volumes:
- ./ch_db_data:/var/lib/clickhouse/
- ./ch_db_logs:/var/log/clickhouse-server/
networks:
- myapp-network
healthcheck:
test: wget --no-verbose --tries=1 --spider localhost:8123/ping || exit 1
interval: 2s
timeout: 2s
retries: 16
networks:
myapp-network:
in a separate terminal, run clickhouse-client:
cd _folder_where_docker-compose_located
docker-compose exec ch_server clickhouse-client
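clickhouse-client inside the server container also accepts --query, so you can run one-off statements non-interactively (SELECT version() is just an illustrative query):
# run a single query instead of opening an interactive session
docker-compose exec ch_server clickhouse-client --query 'SELECT version()'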
A 2021 version, following this tutorial: https://dev.to/titronium/clickhouse-server-in-1-minute-with-docker-4gf2
clickhouse-client:
image: yandex/clickhouse-client:latest
depends_on:
- clickhouse-server
links:
- clickhouse-server
entrypoint:
- /bin/sleep
command:
- infinity
the last line, command: - infinity, means the container will sleep forever, waiting for you to connect.
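With the container idling on sleep, you can exec into it whenever you need a client; a sketch assuming the service names clickhouse-client and clickhouse-server from the snippet above:
# run the client binary inside the idle container, pointing it at the server service
docker-compose exec clickhouse-client clickhouse-client --host clickhouse-server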
Actually, you have access to a ClickHouse client on the command line of the ClickHouse server.
You can easily connect to your server container and call
clickhouse-client
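For example, with the container name used earlier in this thread (adjust to your own):
# open an interactive client session inside the running server container
docker exec -it myapp_ch_server clickhouse-client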
Related, a config that adds the Tabix web client:
clickhouse:
build: ./db/clickhouse
restart: unless-stopped
volumes:
# Store data to HDD
- ./clickhouse-data:/var/lib/clickhouse/
# Base Clickhouse cfg
- ./clickhouse/config.xml:/etc/clickhouse-server/config.xml
- ./clickhouse/users.xml:/etc/clickhouse-server/users.xml
ports:
- "8123:8123" # for http clients
- "9000:9000" # for console client
environment:
- CLICKHOUSE_USER=oussema
- CLICKHOUSE_PASSWORD=root
- CLICKHOUSE_DB=DWH
- CLICKHOUSE_DEFAULT_ACCESS_MANAGEMENT=1
ulimits:
nofile:
soft: 262144
hard: 262144
tabix:
image: spoonest/clickhouse-tabix-web-client
ports:
- "8080:80"
depends_on:
- clickhouse
restart: unless-stopped
environment:
- CH_NAME=clickhouse
- CH_HOST=http://127.0.0.1:8123
- CH_LOGIN=oussema
- CH_PASSWORD=root
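To check the user created by those CLICKHOUSE_* variables, you can connect with explicit credentials; a sketch assuming the service name clickhouse from the snippet above:
# connect as the user created via CLICKHOUSE_USER / CLICKHOUSE_PASSWORD
docker-compose exec clickhouse clickhouse-client --user oussema --password root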
Here is a working example for test purposes:
docker-compose.yml
version: "3.0"
services:
clickhouse:
image: yandex/clickhouse-server
ports:
- "8123:8123"
healthcheck:
test: wget --no-verbose --tries=1 --spider localhost:8123/ping || exit 1
interval: 2s
timeout: 2s
retries: 16
environment:
- CLICKHOUSE_USER=default
- CLICKHOUSE_PASSWORD=12345
tabix:
image: spoonest/clickhouse-tabix-web-client
ports:
- "8080:80"
depends_on:
- clickhouse
restart: unless-stopped
environment:
- CH_NAME=clickhouse
- CH_HOST=http://localhost:8123
- CH_LOGIN=default
- CH_PASSWORD=12345
To run it:
# run container
docker compose up
# browse the tabix endpoint
http://localhost:8080/
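If Tabix fails to connect, it can help to probe the ClickHouse HTTP endpoint directly from the host first; expect a plain 1 back if the interface and credentials work:
# ClickHouse answers simple queries over HTTP with basic auth
curl 'http://localhost:8123/' -u default:12345 --data 'SELECT 1'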
When I build the containers for the first time, the database doesn't have enough time to initialize itself while the web service and nginx are already up, so I can't reach the server on the first run; after the second container run, everything works properly. I have tried command: ["./wait-for-it.sh", "db:5432", "--", "python", "manage.py runserver 0.0.0.0:8000"] to wait until the database is initialized, but it didn't help me. I've also tried solutions from this post, but nothing was helpful. Please help me make my services wait until the database is initialized. Thanks in advance!
Here is my docker-compose file
version: "3.9"
services:
db:
image: postgres:13.3-alpine
container_name: db
volumes:
- ./data/db:/var/lib/postgresql/data
environment:
- POSTGRES_DB=postgres
- POSTGRES_USER=postgres
- POSTGRES_PASSWORD=postgres
ports:
- "5432:5432"
healthcheck:
test: [ "CMD", "curl", "-f", "http://localhost:5432" ]
interval: 30s
timeout: 10s
retries: 5
web:
build: .
container_name: web
command: python manage.py runserver 0.0.0.0:8000
volumes:
- .:/code
ports:
- "8000:8000"
restart: on-failure
depends_on:
- db
nginx:
build: ./nginx
container_name: nginx
ports:
- "80:80"
restart: on-failure
depends_on:
- web
- db
depends_on only waits until the service has started, not until it is healthy. You should try to additionally define the condition service_healthy to wait until a dependency is healthy:
depends_on:
db:
condition: service_healthy
Here's a complete docker-compose file for reference:
version: "3.9"
services:
db:
image: postgres:13.3-alpine
container_name: db
volumes:
- ./data/db:/var/lib/postgresql/data
environment:
- POSTGRES_DB=postgres
- POSTGRES_USER=postgres
- POSTGRES_PASSWORD=postgres
ports:
- "5432:5432"
healthcheck:
test: ["CMD-SHELL", "pg_isready -U postgres"]
interval: 1s
timeout: 5s
retries: 5
web:
image: nginx:latest
container_name: web
restart: on-failure
depends_on:
db:
condition: service_healthy
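You can also run the same probe by hand to see exactly what Compose is waiting for:
# the healthcheck command; exits 0 once Postgres accepts connections
docker-compose exec db pg_isready -U postgres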
Problem solved by adding a short piece of script to the command. The while !</dev/tcp/db/5432 loop uses bash's built-in /dev/tcp redirection to poll the database port until it accepts connections (note that this requires bash, not sh):
web:
build: .
container_name: web
command: bash -c 'while !</dev/tcp/db/5432; do sleep 1; done; python manage.py runserver 0.0.0.0:8000'
volumes:
- .:/code
ports:
- "8000:8000"
restart: on-failure
depends_on:
- db
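As an aside, one likely reason the original wait-for-it.sh attempt failed is that everything after -- must be split into separate list elements; a sketch of the corrected form:
# each word is its own element; "manage.py runserver 0.0.0.0:8000" as a single
# string would be handed to python as one (nonexistent) file name
command: ["./wait-for-it.sh", "db:5432", "--", "python", "manage.py", "runserver", "0.0.0.0:8000"]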
Hi everyone. I have an odd problem (who hasn't?).
I have this docker-compose file:
version: '3.4'
services:
ludustack-web:
container_name: ludustack-web
image: ${DOCKER_REGISTRY-}ludustack-web
build:
context: .
dockerfile: LuduStack.Web/Dockerfile
networks:
- ludustack-network
ports:
- '80:80'
- '443:443'
depends_on:
- 'ludustack-db'
ludustack-db:
container_name: ludustack-db
command: mongod --auth
image: mongo:latest
hostname: mongodb
networks:
- ludustack-network
ports:
- '27017:27017'
env_file:
- .env
environment:
- MONGO_INITDB_ROOT_USERNAME=${MONGO_INITDB_ROOT_USERNAME}
- MONGO_INITDB_ROOT_PASSWORD=${MONGO_INITDB_ROOT_PASSWORD}
- MONGO_INITDB_DATABASE=${MONGO_INITDB_DATABASE}
- MONGO_REPLICA_SET_NAME=${MONGO_REPLICA_SET_NAME}
healthcheck:
test: test $$(echo "rs.initiate().ok || rs.status().ok" | mongo -u $${MONGO_INITDB_ROOT_USERNAME} -p $${MONGO_INITDB_ROOT_PASSWORD} --quiet) -eq 1
interval: 60s
start_period: 60s
command: ["--replSet", "${MONGO_REPLICA_SET_NAME}", "--bind_ip_all"]
networks:
ludustack-network:
driver: bridge
The problem is that the web application only waits for the mongodb container to be ready, not for the replica set itself. So when the application starts, it crashes because the replica set is not ready yet; right after the crash, the logs show the replica set continuing its initialization:
Any tips on how to make the web application wait for the replica set to be ready?
The application did wait, for 30 seconds. You can increase the timeout by adjusting the serverSelectionTimeoutMS URI option or through language-specific means.
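For example, in the MongoDB connection string (host, credentials, database, and replica set name here are placeholders):
# allow up to 120s for server selection while the replica set initiates
mongodb://user:pass@mongodb:27017/mydb?replicaSet=rs0&serverSelectionTimeoutMS=120000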
Hello, I have the following error in my Node project:
(node:51) UnhandledPromiseRejectionWarning: Error: getaddrinfo
ENOTFOUND ${DB_HOST}
I think the problem is that my Postgres is not yet started when my project starts, and I can't figure out how to start my container only after Postgres is ready. I read something about dockerize, but I can't see how to apply it.
my Dockerfile:
FROM node:lts-alpine
RUN mkdir -p /home/node/api/node_modules && chown -R node:node /home/node/api
WORKDIR /home/node/api
COPY ormconfig.json .env package.json yarn.* ./
USER node
RUN yarn
COPY --chown=node:node . .
EXPOSE 4000
CMD ["yarn", "dev"]
my docker-compose:
version: '3.7'
services:
ci-api:
build: .
container_name: ci-api
volumes:
- .:/home/node/api
- /home/node/api/node_modules
ports:
- '${SERVER_PORT}:${SERVER_PORT}'
depends_on:
- ci-postgres
networks:
- ci-network
ci-postgres:
image: postgres:12
container_name: ci-postgres
ports:
- '${DB_PORT}:5432'
environment:
- ALLOW_EMPTY_PASSWORD=no
- POSTGRES_USER=${DB_USER}
- POSTGRES_PASSWORD=${DB_PASS}
- POSTGRES_DB=${DB_NAME}
volumes:
- ci-postgres-data:/data
networks:
- ci-network
volumes:
ci-postgres-data:
networks:
ci-network:
driver: bridge
and this is my .env
SERVER_PORT=4000
DB_HOST=ci-postgres
DB_PORT=5432
DB_USER=spirit
DB_PASS=api
DB_NAME=emasa_ci
You can reference the docker-compose.yml below, in which depends_on, healthcheck, and links are added, since the web service depends on the db service. (links is legacy; services on the same Compose network can already reach each other by service name.)
Reference:
Postgresql Container is not running in docker-compose file - Why is this?
version: "3"
services:
webapp:
build: .
container_name: webapp
ports:
- "5000:5000"
links:
- postgres
depends_on:
postgres:
condition: service_healthy
postgres:
image: postgres:11-alpine
container_name: postgres
ports:
- "5432:5432"
environment:
- POSTGRES_DB=tmp
- POSTGRES_USER=tmp
- POSTGRES_PASSWORD=tmp_password
volumes: # Persist the db data
- database-data:/var/lib/postgresql/data
healthcheck:
test: ["CMD-SHELL", "pg_isready -U postgres"]
interval: 10s
timeout: 5s
retries: 5
volumes:
database-data:
I have a docker-compose topology with Jenkins, GitLab, and Artifactory, and I am using the JFrog Artifactory Docker image from JFrog:
https://www.jfrog.com/confluence/display/RTF/Installing+with+Docker
here is my docker-compose file:
version: "3"
services:
jenkins:
container_name: jenkins
image: jenkins/jenkins:lts
ports:
- "8080:8080"
volumes:
- jenkins_home:/var/jenkins_home
artifactory:
container_name: artifactory
image: docker.bintray.io/jfrog/artifactory-oss:6.16.0
ports:
- "8081:8081"
volumes:
- artifactory_data:/var/opt/jfrog/artifactory
ulimits:
nproc: 65535
nofile:
soft: 32000
hard: 40000
volumes:
jenkins_home:
artifactory_data:
At first I got the error ERROR: Max number of open files 1024, is too low. Cannot run Artifactory!
After setting the ulimits in docker-compose the container comes up, but the Artifactory container then exits with the following log:
/opt/jfrog/artifactory/bin/artifactory.sh: line 185: 230 Killed $TOMCAT_HOME/bin/catalina.sh run