jfrog-artifactory docker image - halting due to nofile usage - docker-compose

I have a docker-compose topology with Jenkins, GitLab and Artifactory, and I am using the Artifactory Docker image from JFrog:
https://www.jfrog.com/confluence/display/RTF/Installing+with+Docker
Here is my docker-compose file:
version: "3"
services:
jenkins:
container_name: jenkins
image: jenkins/jenkins:lts
ports:
- "8080:8080"
volumes:
- jenkins_home:/var/jenkins_home
artifactory:
container_name: artifactory
image: docker.bintray.io/jfrog/artifactory-oss:6.16.0
ports:
- "8081:8081"
volumes:
- artifactory_data:/var/opt/jfrog/artifactory
ulimits:
nproc: 65535
nofile:
soft: 32000
hard: 40000
volumes:
jenkins_home:
artifactory_data:
At first I got this error: ERROR: Max number of open files 1024, is too low. Cannot run Artifactory!
After setting the ulimits in docker-compose the container starts, but the Artifactory container then exits with the following log:
/opt/jfrog/artifactory/bin/artifactory.sh: line 185: 230 Killed $TOMCAT_HOME/bin/catalina.sh run
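For what it's worth, a bare "Killed" like this usually means the JVM received SIGKILL from the kernel (often the OOM killer) rather than hitting the file-descriptor limit again. A couple of hedged checks, assuming the container name artifactory from the compose file above:

# Confirm the nofile limit the container actually got (should report 32000 here)
docker exec artifactory sh -c 'ulimit -n'

# Check whether the kernel OOM killer stopped the container
docker inspect --format '{{.State.OOMKilled}} {{.State.ExitCode}}' artifactory

# Look at the last part of the Artifactory/Tomcat output for memory errors
docker logs --tail 100 artifactory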

Related

CMAK error: A typical reason for `AskTimeoutException` is that the recipient actor didn't send a reply

I have integrated the ELK stack into a Node.js project, so Kibana, Kafka, Kafka Manager (CMAK) and Zookeeper are installed in Docker on my local system. Everything runs in Docker, but I am getting some errors and cannot create a cluster in CMAK. Please find the attachment. Could you help me?
ElasticSearch.yaml
version : "3.7"
services:
elasticsearch:
image: docker.elastic.co/elasticsearch/elasticsearch:7.12.0
container_name: elasticsearch
restart : always
environment :
- xpack.security.enabled=false
- discovery.type=single-node
- ES_JAVA_OPTS=-Xms750m -Xmx750m
ulimits :
memlock:
soft: -1
hard: -1
nofile:
soft: 65536
hard: 65536
cap_add:
- IPC_LOCK
volumes:
- elasticsearch-data:/usr/share/elasticsearch/data
ports:
- "9200:9200"
kibana:
container_name: kibana
image: docker.elastic.co/kibana/kibana:7.12.0
restart: always
environment:
- ELASTICSEARCH_URL=http://192.168.29.138:9200
- ELASTICSEARCH_HOST=http://192.168.29.138:9200
ports:
- "5601:5601"
depends_on:
- elasticsearch
volumes:
elasticsearch-data:
Kafka.yaml
version : "3"
services:
zookeeper:
image: zookeeper
restart : always
container_name: zookeeper
hostname: zookeeper
ports:
- 2181:2181
environment:
ZOO_MY_ID: 1
kafka:
image: wurstmeister/kafka
container_name: kafka
ports:
- 9092:9092
environment:
KAFKA_ADVERTISED_HOST_NAME : 192.168.29.138
KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
kafka_manager:
image: hlebalbau/kafka-manager:stable
container_name: kafka-manager
restart : always
ports:
- "9000:9000"
environment:
ZK_HOST : "zookeeper:2181"
APPLICATION_SECRET: "random-secret"
command: -Dpidfile.path=/dev/null
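Since an AskTimeoutException in CMAK typically shows up when it cannot talk to Zookeeper, a quick sanity check is to confirm that the kafka-manager container can reach the zookeeper service (container names taken from Kafka.yaml above; the getent call only works if the tool exists in the image):

# Read the actual connection error from the CMAK logs
docker logs --tail 50 kafka-manager

# From inside the CMAK container, check that the zookeeper hostname resolves
docker exec kafka-manager getent hosts zookeeper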

Docker-compose volume error: incorrect format, should be external:internal[:mode]

I have a docker-compose yml file which is giving me the following error:
ERROR: Volume <path>db:/db:rw has incorrect format, should be external:internal[:mode]
I've seen multiple posts about this issue but never a resolution. Working on macOS with the following yml file:
volumes:
  db:
  rdb:
services:
  volumes-provisioner:
    image: hasnat/volumes-provisioner
    environment:
      PROVISION_DIRECTORIES: "1001:1001:0755:/db"
    volumes:
      - "./db:/db:rw"
  volumes-provisioner2:
    image: hasnat/volumes-provisioner
    environment:
      PROVISION_DIRECTORIES: "999:999:0755:/data"
    volumes:
      - "./rdb:/data:rw"
  redis:
    image: "redislabs/redisgraph:2.2.6"
    ports:
      - "6379:6379"
    volumes:
      - "./rdb:/data:rw"
    depends_on:
      - volumes-provisioner2
  insight:
    image: "redislabs/redisinsight:1.7.1"
    depends_on:
      - volumes-provisioner
      - redis
    volumes:
      - "./db:/db:rw"
    ports:
      - "8001:8001"
Any ideas?
It's probably your version of Docker. Make sure you've upgraded to the latest release, then add a version statement at the top of your file (version: "3.9"). I've modified your file below to include this, and I also removed the quotes around the images. After that it works.
version: "3.9"
volumes:
db:
rdb:
services:
volumes-provisioner:
image: hasnat/volumes-provisioner
environment:
PROVISION_DIRECTORIES: "1001:1001:0755:/db"
volumes:
- "./db:/db:rw"
volumes-provisioner2:
image: hasnat/volumes-provisioner
environment:
PROVISION_DIRECTORIES: "999:999:0755:/data"
volumes:
- "./rdb:/data:rw"
redis:
image: redislabs/redisgraph:2.2.6
ports:
- "6379:6379"
volumes:
- "./rdb:/data:rw"
depends_on:
- volumes-provisioner2
insight:
image: redislabs/redisinsight:1.7.1
depends_on:
- volumes-provisioner
- redis
volumes:
- "./db:/db:rw"
ports:
- "8001:8001"
I got this error when running docker-compose from a Windows Subsystem for Linux (WSL) terminal on Windows. If I run docker-compose from Windows PowerShell, it works.

Clickhouse Client - Code: 62. DB::Exception: Empty query

I'm trying to run clickhouse-server and clickhouse-client services using Docker and Docker Compose. Based on the ClickHouse docker-compose file and another compose sample, I created the services in my docker-compose.yml file as you can see below:
docker-compose.yml:
ch_server:
  container_name: myapp_ch_server
  image: yandex/clickhouse-server
  ports:
    - "8181:8123"
    - "9000:9000"
    - "9009:9009"
  ulimits:
    nproc: 65535
    nofile:
      soft: 262144
      hard: 262144
  volumes:
    - ./ch_db_data:/var/lib/clickhouse/
    - ./ch_db_logs:/val/log/clickhouse-server/
  networks:
    - myapp-network
ch_client:
  container_name: myapp_ch_client
  image: yandex/clickhouse-client
  command: ['--host', 'ch_server']
  networks:
    - myapp-network
When I run the docker-compose up command, the following exception occurs in the clickhouse-client service:
myapp_ch_client | Code: 62. DB::Exception: Empty query
myapp_ch_client exited with code 62
Do you have any idea how to fix this error?
You just need to pass the SQL query in the command params:
version: "2.4"
services:
ch_server:
container_name: myapp_ch_server
image: yandex/clickhouse-server
ports:
- "8123:8123"
- "9000:9000"
- "9009:9009"
ulimits:
nproc: 65535
nofile:
soft: 262144
hard: 262144
volumes:
- ./ch_db_data:/var/lib/clickhouse/
- ./ch_db_logs:/var/log/clickhouse-server/
networks:
- myapp-network
healthcheck:
test: wget --no-verbose --tries=1 --spider localhost:8123/ping || exit 1
interval: 2s
timeout: 2s
retries: 16
ch_client:
container_name: myapp_ch_client
image: yandex/clickhouse-client
command: ['--host', 'ch_server', '--query', 'select * from system.functions order by name limit 4']
networks:
- myapp-network
depends_on:
ch_server:
condition: service_healthy
networks:
myapp-network:
It doesn't make sense to define clickhouse-client in docker-compose; clickhouse-client is usually run outside of the docker-compose file:
Define a docker-compose file with the servers (such as ClickHouse cluster nodes, Zookeeper, Apache Kafka, etc.). For example, let's consider a config with one ClickHouse node:
version: "2.4"
services:
ch_server:
container_name: myapp_ch_server
image: yandex/clickhouse-server
ports:
- "8123:8123"
- "9000:9000"
- "9009:9009"
ulimits:
nproc: 65535
nofile:
soft: 262144
hard: 262144
volumes:
- ./ch_db_data:/var/lib/clickhouse/
- ./ch_db_logs:/var/log/clickhouse-server/
networks:
- myapp-network
healthcheck:
test: wget --no-verbose --tries=1 --spider localhost:8123/ping || exit 1
interval: 2s
timeout: 2s
retries: 16
networks:
myapp-network:
In a separate terminal, run clickhouse-client:
cd _folder_where_docker-compose_located
docker-compose exec ch_server clickhouse-client
A 2021 version, following this tutorial: https://dev.to/titronium/clickhouse-server-in-1-minute-with-docker-4gf2
clickhouse-client:
  image: yandex/clickhouse-client:latest
  depends_on:
    - clickhouse-server
  links:
    - clickhouse-server
  entrypoint:
    - /bin/sleep
  command:
    - infinity
The last line, command: - infinity, means the container will wait there forever for you to connect.
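With the client container idling like that, you would then attach to it and point the client at the server service, roughly like this (service names assumed from the snippet above):

docker-compose exec clickhouse-client clickhouse-client --host clickhouse-server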
Actually, you have access to a ClickHouse client on the command line of the ClickHouse server.
You can easily connect to your server container and call
clickhouse-client

I am trying to stand up 2 Ghost containers with MySQL on the back end and eeacms/haproxy as the load balancer in Docker containers: error 503

I have tried many configurations and scenarios based around this, which is mostly a tutorial that stops at one Ghost instance. I am trying to scale it to 2 with docker-compose up -d --scale ghost=2. When I hit the individual IPs of the Ghost containers, they work, but port 80 returns a 503.
version: "3.1"
volumes:
mysql-volume:
ghost-volume:
networks:
ghost-network:
services:
mysql:
image: mysql:5.7
container_name: mysql
volumes:
- mysql-volume:/var/lib/mysql
networks:
- ghost-network
environment:
MYSQL_ROOT_PASSWORD: root
MYSQL_DATABASE: db
MYSQL_USER: blog-user
MYSQL_PASSWORD: supersecret
ghost:
build: ./ghost
image: laminar/ghost:3.0
volumes:
- ghost-volume:/var/lib/ghost/content
networks:
- ghost-network
restart: always
ports:
- "2368"
environment:
database__client: mysql
database__connection__host: mysql
database__connection__user: blog-user
database__connection__password: supersecret
database__connection__database: db
depends_on:
- mysql
entrypoint: ["wait-for-it.sh", "mysql", "--", "docker-entrypoint.sh"]
command: ["node", "current/index.js"]
haproxy:
image: eeacms/haproxy
depends_on:
- ghost
ports:
- "80:5000"
- "1936:1936"
environment:
BACKENDS: "ghost"
DNS_ENABLED: "true"
LOG_LEVEL: "info"
What I get on localhost:80 is a 503 error. This particular eeacms/haproxy image is supposed to be self-configuring. Any help appreciated.
I needed to add a backend URL to the environment, and also tell Ghost it was installed in an alternate location by adding URL: localhost:5050.
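As a rough sketch of what that change can look like (BACKENDS_PORT is an assumption based on the eeacms/haproxy image options, and the url value simply mirrors the one mentioned above; adjust both to your setup):

haproxy:
  image: eeacms/haproxy
  environment:
    BACKENDS: "ghost"
    BACKENDS_PORT: "2368"   # assumed variable: the port the ghost containers listen on
    DNS_ENABLED: "true"
ghost:
  environment:
    url: http://localhost:5050   # tells Ghost the public address it is served from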

Docker_Error:-"socket.gaierror: [Errno -3] Temporary failure in name resolution" error comes while run celery on docker image

Docker-compose.yml
version: "3"
services:
web:
# replace username/repo:tag with your name and image details
image: sunilsuthar/sim
deploy:
replicas: 5
resources:
limits:
cpus: "0.1"
memory: 50M
restart_policy:
condition: on-failure
ports:
- "4004:80"
networks:
- webnet
rabbit:
hostname: rabbit
image: sunilsuthar/query_with_rabbitmq
environment:
- RABBITMQ_DEFAULT_USER=rvihzpae
- RABBITMQ_DEFAULT_PASS=Z0AWdEAbJpjvy1btDRYqTq2lDoJcXHv7
links:
- rabbitmq
ports:
- "15672:15672"
- "5672:5672"
tty: true
celery:
image: sunilsuthar/query_with_rabbitmq
command: celery worker -l info -A app.celery
user: nobody
volumes:
- '.:/app'
networks:
webnet:
Check whether your Docker container is on the correct network and whether you can ping the server running RabbitMQ. In my case, the firewall settings had been reset and the local network was unreachable from within the container. Restarting the Docker daemon resolved the issue.
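A couple of checks along those lines, assuming the service names from the compose file above, that your Celery broker URL points at the rabbit host, and that Python is available in the worker image (it usually is for a Celery worker):

# From inside the celery container, check that the broker hostname resolves
docker-compose exec celery python -c "import socket; print(socket.gethostbyname('rabbit'))"

# See which networks each container is attached to
docker network ls
docker inspect --format '{{json .NetworkSettings.Networks}}' $(docker-compose ps -q celery)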