New to this so not sure what I'm missing.
I'm trying to follow these instructions to install elabftw as a docker container: https://doc.elabftw.net/install-nas.html
this is the container: https://registry.hub.docker.com/r/elabftw/elabimg/
I edited the docker-compose.yml, but I can't seem to run:
docker-compose up -d
bash: docker-compose: command not found
I thought docker-compose already comes installed?
I'd appreciate some help!
Thanks
Danny
Update:
Can't even seem to install docker-compose in the actual container
bash-5.1# curl -L https://github.com/docker/compose/releases/download/1.27.4/docker-compose-`uname -s`-`uname -m` -o docker-compose
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   633  100   633    0     0   4645      0 --:--:-- --:--:-- --:--:--  4654
100 11.6M  100 11.6M    0     0  3891k      0  0:00:03  0:00:03 --:--:-- 4083k
bash-5.1# ls
cache config.php docker-compose docker-compose.yml mysql uploads web
bash-5.1# chmod +x docker-compose
bash-5.1# docker-compose --version
bash: docker-compose: command not found
bash-5.1#
I can install docker-compose on the actual NAS and update it, but not in the docker container itself.
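Note on the attempt above: the download itself succeeded, but a binary sitting in the current directory is not on the shell's PATH, so a bare docker-compose will not resolve even after chmod +x. Assuming the file is still in the working directory, it needs an explicit path, or has to be moved onto the PATH:
./docker-compose --version
# or:
mv docker-compose /usr/local/bin/docker-compose
docker-compose --version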
Edit II: docker-compose.yml
# docker-elabftw configuration file
# use: "docker-compose up -d" to start containers
# this config file contains all the possible configuration options, shown with default values
# https://hub.docker.com/r/elabftw/elabimg/
# https://www.elabftw.net
version: '3'

# our first container is nginx + php-fpm + elabftw
services:
  web:
    # the latest tag points to the latest stable version
    # use the next tag to use alpha/beta version
    # use a specific version to pin the image
    # example: elabftw/elabimg:4.0.5
    # default value: elabftw/elabimg:latest
    image: elabftw/elabimg:latest
    # this ensures the container will be restarted after a reboot of the server
    # default value: always
    restart: always
    # comment this out if you use several containers with redis, as you can't have several containers with the same name
    # default value: elabftw
    container_name: elabftw
    # limit number of processes
    pids_limit: 42
    # drop some capabilities not needed by the app
    cap_drop:
      - SYS_ADMIN
      - AUDIT_WRITE
      - MKNOD
      - SYS_CHROOT
      - SETFCAP
      - NET_RAW
      - SYS_PTRACE
    # environment variables passed to the container to configure options at run time (when the container is started)
    # commented variables are optional
    environment:
      #######################
      # MYSQL CONFIGURATION #
      #######################
      # name of the MySQL server (by default "mysql", the name of the mysql container in the default elabftw Docker configuration)
      # you can put here the IP address of an existing MySQL server if you already have one running
      # default value: mysql
      - DB_HOST=mysql
      # port on which the MySQL server is listening
      # you probably don't need to modify this value
      # default value: 3306
      - DB_PORT=3306
      # name of the MySQL database
      # you probably don't need to modify this value
      # default value: elabftw
      - DB_NAME=elabftw
      # MySQL user with write access to the previously named database
      # you probably don't need to modify this value
      # default value: elabftw
      - DB_USER=elabftw
      # MySQL password; a random password has been generated for you but feel free to change it if needed
      # default value: generated randomly if you get the config from get.elabftw.net
      - DB_PASSWORD=
      # MySQL cert path: you only need this if you connect to a mysql server with tls
      # Use a volume that points to /mysql-cert in the container
      # optional
      #- DB_CERT_PATH=/mysql-cert/cert.pem
      #####################
      # PHP CONFIGURATION #
      #####################
      # the timezone in which the server is
      # better if changed (see list of available values: http://php.net/manual/en/timezones.php)
      - PHP_TIMEZONE=Europe/Paris
      # again
      - TZ=Europe/Paris
      # optional: set the limit of simultaneous requests that will be served
      # see http://php.net/manual/en/install.fpm.configuration.php
      # default value: 50
      #- PHP_MAX_CHILDREN=50
      # optional: adjust the max execution time of PHP scripts. Allows for bigger ZIP exports.
      # default value: 120
      #- PHP_MAX_EXECUTION_TIME=120
      # optional: adjust the amount of memory available to PHP, increase it if you run into memory issues due to the size of your database
      # default value: 256M
      #- MAX_PHP_MEMORY=256M
      #########################
      # ELABFTW CONFIGURATION #
      #########################
      # The secret key is used for encrypting the SMTP password
      # A random one has been generated for you; if you wish to change it you can
      # get your secret key from https://demo.elabftw.net/install/generateSecretKey.php
      # if you don't want to get it from an external source you can also do that:
      # docker run --rm -t --entrypoint '/bin/sh' elabftw/elabimg -c "php /elabftw/web/install/generateSecretKey.php"
      # default value: generated randomly if you get the config from get.elabftw.net
      - SECRET_KEY=def00000becc6e2c28e5dfd0f4728d5dc0f6d1f4244783e241e567a3860a6b4c01469042e6a9ebdc278d1ed026d8a0be1ce6b0c2c30891069daedbb01256d69adc42a0be
      # optional: adjust maximum size of uploaded files
      # default value: 100M
      #- MAX_UPLOAD_SIZE=100M
      #######################
      # NGINX CONFIGURATION #
      #######################
      # change to your server name in nginx config
      # default value: localhost
      # example value: elab.uni.edu
      - SERVER_NAME=localhost
      # optional: disable https, use this to have an http server listening on port 443
      # useful if the SSL stack is handled by haproxy or something alike
      # default value: false
      - DISABLE_HTTPS=true
      # set to true to use letsencrypt or other certificates
      # note: does nothing if DISABLE_HTTPS is set to true
      # default value: false
      - ENABLE_LETSENCRYPT=false
      # optional: enable ipv6 (make sure you have an AAAA dns record!)
      # default value: false
      #- ENABLE_IPV6=false
      # optional: adjust the user/group that will own the uploaded files
      # useful in very particular situations, like with NFSv4
      # you don't really need to change this in most situations
      # so this is left commented (default values are shown)
      # default value: nginx
      #- ELABFTW_USER=nginx
      # default value: nginx
      #- ELABFTW_GROUP=nginx
      # default value: 101
      #- ELABFTW_USERID=101
      # default value: 101
      #- ELABFTW_GROUPID=101
      # optional: enable if you want nginx to be configured with set_real_ip_from directives
      # default value: false
      #- SET_REAL_IP=false
      # the IP address/addresses. Separate them with a , AND A SPACE. Several set_real_ip_from lines will be added to the nginx config, one for each.
      # this does nothing if SET_REAL_IP is set to false
      #- SET_REAL_IP_FROM=192.168.31.48, 192.168.0.42, 10.10.13.37
      # optional: adjust the number of worker processes nginx will spawn
      # default value: auto
      # if auto doesn't work for you, use the number of cores available on the server (or less)
      #- NGINX_WORK_PROC=auto
      #######################
      # REDIS CONFIGURATION #
      #######################
      # optional: use a redis server to store the PHP sessions
      # default value: false
      #- USE_REDIS=false
      # optional: set an IP or hostname for the redis server
      # default value: redis
      #- REDIS_HOST=redis
      # optional: set a custom port for redis
      # default value: 6379
      #- REDIS_PORT=6379
      #################
      # MISCELLANEOUS #
      #################
      # optional: be less verbose during init
      # default value: false
      #- SILENT_INIT=false
      #######
      # DEV #
      #######
      # set to true for development
      # default value: false
      #- DEV_MODE=false
    ports:
      # if you want elabftw to run on a different port, change the first number
      # host:container
      - '3148:443'
      # if you are aiming for running multiple instances of this container you can put a range like so:
      # - "3100-3200:443"
      # use redis for session storage if that is the case, or configure your load balancer with sticky sessions
    volumes:
      # this is where you will keep the uploaded files persistently
      # for Windows users it might look like this
      # - D:\Users\Nico\elab-data\web:/elabftw/uploads
      # host:container
      - /volume1/docker/Container/elabftw/web:/elabftw/uploads
      #
      # TLS configuration
      #
      # Note: if your certificate is not from letsencrypt, make sure to have those two files:
      #
      # /etc/letsencrypt/live/SERVER_NAME/fullchain.pem
      # /etc/letsencrypt/live/SERVER_NAME/privkey.pem
      #
      # in the folder /etc/letsencrypt (or any folder you like, as long as you adapt the line below)
      # replace SERVER_NAME with the value of SERVER_NAME of course.
      #
      # if you have enabled letsencrypt, uncomment the line below
      # path to the folder with TLS certificate + private key
      # host:container
      #- /etc/letsencrypt:/ssl
      #
      # MYSQL cert path
      #- /path/to/cert/folder:/mysql-cert
    networks:
      - elabftw-net

  # the mysql database image
  # Note: if you already have a MySQL server running, you don't need to use this image, as you can use the already existing one
  # In this case, add the IP address of the server in DB_HOST and comment out or remove this block
  mysql:
    image: mysql:8.0
    restart: always
    # fix issue with "The server requested authentication method unknown to the client [caching_sha2_password]"
    command: --default-authentication-plugin=mysql_native_password
    container_name: mysql
    # drop some capabilities
    cap_drop:
      - AUDIT_WRITE
      - MKNOD
      - SYS_CHROOT
      - SETFCAP
      - NET_RAW
    cap_add:
      - SYS_NICE
    environment:
      # need to change
      - MYSQL_ROOT_PASSWORD=X54DtNOryK2flSYOIo2raoc4m0qUQ90
      # no need to change
      - MYSQL_DATABASE=elabftw
      # no need to change
      - MYSQL_USER=elabftw
      # need to change IMPORTANT: this should be the same password as DB_PASSWORD from the elabftw container
      - MYSQL_PASSWORD=
      # need to change, this is your timezone, see PHP_TIMEZONE from the elabftw container
      - TZ=Europe/Paris
    volumes:
      # this is where you will keep the database persistently
      # for Windows users it might look like this
      # - D:\Users\Nico\elab-data\mysql:/var/lib/mysql
      # host:container
      - /var/elabftw/mysql:/var/lib/mysql
    expose:
      - '3306'
    networks:
      - elabftw-net

  # example of a redis container
  # uncomment if you want to spawn a redis container to manage sessions
  #redis:
  #  image: redis:6.0-alpine
  #  restart: always
  #  container_name: redis
  #  networks:
  #    - elabftw-net

  ###############################################################
  # EVERYTHING BELOW THIS LINE IS FOR DEVELOPMENT PURPOSES ONLY #
  ###############################################################
  # PHPMYADMIN
  # uncomment this part if you want to have phpmyadmin running too
  #phpmyadmin:
  #  image: phpmyadmin/phpmyadmin
  #  container_name: phpmyadmin
  #  environment:
  #    - PMA_PORT=3307
  #  links:
  #    - mysql:db
  #  ports:
  #    - "8080:80"
  #  networks:
  #    - elabftw-net

  # LDAP
  # example for ldap server + admin interface
  # uncomment if you want to work on LDAP authentication
  #ldap:
  #  image: osixia/openldap:1.4.0
  #  container_name: ldap
  #  restart: always
  #  hostname: example.org
  #  environment:
  #    - LDAP_TLS_VERIFY_CLIENT=try
  #    - LDAP_OPENLDAP_UID=1000
  #    - LDAP_OPENLDAP_GID=1000
  #  ports:
  #    - "389:389"
  #    - "636:636"
  #  volumes:
  #    - /var/elabftw/ldap-data/ldap:/var/lib/ldap
  #    - /var/elabftw/ldap-data/slapd.d:/etc/ldap/slapd.d
  #  networks:
  #    - elabftw-net

  #ldapadmin:
  #  image: osixia/phpldapadmin:0.9.0
  #  container_name: ldapadmin
  #  environment:
  #    - PHPLDAPADMIN_LDAP_HOSTS=ldap
  #  restart: always
  #  ports:
  #    - "6443:443"
  #  networks:
  #    - elabftw-net

# the internal elabftw network
networks:
  elabftw-net:
That error means docker-compose is not installed.
Install Docker first, then install docker-compose:
https://docs.docker.com/get-docker/
https://docs.docker.com/compose/install/
Then run the command as root:
sudo -i
# enter password
docker-compose up -d
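If you want the standalone docker-compose binary on the NAS itself, here is a sketch of the usual binary install from the Compose docs (the version number is only an example; pick the release you need):
sudo curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
docker-compose --version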
Related
I have set up a cloud server using docker-compose and Traefik as a reverse proxy, as described here: https://www.smarthomebeginner.com/traefik-docker-compose-guide-2022/. I have obtained an SSL wildcard certificate for my home domain and can access the Traefik dashboard. However, I did not succeed in getting the NextCloud container working: it produces a "404 page not found" error.
I have used the following labels in the NextCloud service section:
- "traefik.enable=true"
- "traefik.http.routers.nextcloud-secure.rule=Host(${NEXTCLOUDURL})"
- "traefik.http.routers.nextcloud-secure.tls=true"
- "traefik.http.routers.nextcloud.tls.passthrough=true"
- "traefik.http.routers.nextcloud.tls.certResolver=dns-cloudflare"
- "traefik.http.routers.nextcloud.middlewares=nextcloudheaders#docker,nextcloud-dav#docker"
- "traefik.http.routers.nextcloud.service=nextcloud"
- "traefik.docker.network=t2_proxy"
- "traefik.docker.network=nextcloud"
- "traefik.http.routers.nextcloud-secure.middlewares=nextcloudheaders#docker,nextcloud-dav#docker"
- "traefik.http.middlewares.nextcloudheaders.headers.customRequestHeaders.X-Forwarded-Proto=https"
- "traefik.http.middlewares.nextcloudheaders.headers.accessControlAllowOrigin=*"
- "traefik.http.middlewares.nextcloud-dav.replacepathregex.regex=^/.well-known/ca(l|rd)dav"
- "traefik.http.middlewares.nextcloud-dav.replacepathregex.replacement=/remote.php/dav/"```
This is my docker-compose.yml file:
version: "3.9"
########################### NETWORKS
# There is no need to create any networks outside this docker-compose file.
# You may customize the network subnets (192.168.90.0/24 and 91.0/24) below as you please.
# Docker Compose version 3.5 or higher required to define networks this way.
networks:
t2_proxy:
name: t2_proxy
driver: bridge
ipam:
config:
- subnet: 192.168.90.0/24
nextcloud:
name: nextcloud
driver: bridge
default:
driver: bridge
########################### EXTENSION FIELDS
# Helps eliminate repetition of sections
# More Info on how to use this: https://github.com/htpcBeginner/docker-traefik/pull/228
# Common environment values
x-environment: &default-tz-puid-pgid
TZ: $TZ
PUID: $PUID
PGID: $PGID
# Keys common to some of the services in basic-services.txt
x-common-keys-core: &common-keys-core
networks:
- t2_proxy
security_opt:
- no-new-privileges:true
restart: always
# profiles:
# - core
# Keys common to some of the services in basic-services.txt
x-common-keys-monitoring: &common-keys-monitoring
networks:
- t2_proxy
security_opt:
- no-new-privileges:true
restart: always
# profiles:
# - monitoring
# Keys common to some of the dependent services/apps
x-common-keys-apps: &common-keys-apps
networks:
- t2_proxy
security_opt:
- no-new-privileges:true
restart: unless-stopped
# profiles:
# - apps
volumes:
nextcloud_root:
nextcloud_data:
nextcloud_config:
nextcloud_apps:
db_nextcloud:
############################ SERVICES
services:
############################# FRONTENDS
# Traefik 2 - Reverse Proxy
# Touch (create empty files) traefik.log and acme/acme.json. Set acme.json permissions to 600.
# touch $DOCKERDIR/appdata/traefik2/acme/acme.json
# chmod 600 $DOCKERDIR/appdata/traefik2/acme/acme.json
# touch $DOCKERDIR/logs/cloudserver/traefik/traefik.log # customize this
traefik:
<<: *common-keys-core # See EXTENSION FIELDS at the top
container_name: traefik
image: traefik:2.7
command: # CLI arguments
- --global.checkNewVersion=true
- --global.sendAnonymousUsage=true
- --entryPoints.http.address=:80
- --entryPoints.https.address=:443
# Allow these IPs to set the X-Forwarded-* headers - Cloudflare IPs: https://www.cloudflare.com/ips/
# - --entrypoints.https.forwardedHeaders.trustedIPs=$CLOUDFLARE_IPS,$LOCAL_IPS # only needed if orange cloudflare in DNS records used
- --entryPoints.traefik.address=:8080
# - --entryPoints.ping.address=:8081
- --api=true
# - --api.insecure=true
- --api.dashboard=true
#- --ping=true
# - --serversTransport.insecureSkipVerify=true
- --log=true
- --log.filePath=/logs/traefik.log
- --log.level=debug # (Default: error) DEBUG, INFO, WARN, ERROR, FATAL, PANIC
- --accessLog=true
- --accessLog.filePath=/logs/access.log
- --accessLog.bufferingSize=100 # Configuring a buffer of 100 lines
- --accessLog.filters.statusCodes=204-299,400-499,500-599
- --providers.docker=true
- --providers.docker.endpoint=unix:///var/run/docker.sock # Use Docker Socket Proxy instead for improved security
# - --providers.docker.endpoint=tcp://socket-proxy:2375
# Automatically set Host rule for services
# - --providers.docker.defaultrule=Host(`{{ index .Labels "com.docker.compose.service" }}.$DOMAINNAME_CLOUD_SERVER`)
- --providers.docker.exposedByDefault=false
# - --entrypoints.https.http.middlewares=chain-oauth#file
- --entrypoints.https.http.tls.options=tls-opts#file
# Add dns-cloudflare as default certresolver for all services. Also enables TLS and no need to specify on individual services
- --entrypoints.https.http.tls.certresolver=dns-cloudflare
- --entrypoints.https.http.tls.domains[0].main=$DOMAINNAME_CLOUD_SERVER
- --entrypoints.https.http.tls.domains[0].sans=*.$DOMAINNAME_CLOUD_SERVER
# - --entrypoints.https.http.tls.domains[1].main=$DOMAINNAME2 # Pulls main cert for second domain
# - --entrypoints.https.http.tls.domains[1].sans=*.$DOMAINNAME2 # Pulls wildcard cert for second domain
- --providers.docker.network=t2_proxy
- --providers.docker.swarmMode=false
- --providers.file.directory=/rules # Load dynamic configuration from one or more .toml or .yml files in a directory
# - --providers.file.filename=/path/to/file # Load dynamic configuration from a file
- --providers.file.watch=true # Only works on top level files in the rules folder
# - --certificatesResolvers.dns-cloudflare.acme.caServer=https://acme-staging-v02.api.letsencrypt.org/directory # LetsEncrypt Staging Server - uncomment when testing
- --certificatesResolvers.dns-cloudflare.acme.email=$CLOUDFLARE_EMAIL
- --certificatesResolvers.dns-cloudflare.acme.storage=/acme.json
- --certificatesResolvers.dns-cloudflare.acme.dnsChallenge.provider=cloudflare
- --certificatesResolvers.dns-cloudflare.acme.dnsChallenge.resolvers=1.1.1.1:53,1.0.0.1:53
- --certificatesResolvers.dns-cloudflare.acme.dnsChallenge.delayBeforeCheck=90 # To delay DNS check and reduce LE hitrate
# - --metrics.prometheus=true
# - --metrics.prometheus.buckets=0.1,0.3,1.2,5.0
networks:
t2_proxy:
ipv4_address: 192.168.90.254 # You can specify a static IP
# socket_proxy:
#healthcheck:
# test: ["CMD", "traefik", "healthcheck", "--ping"]
# interval: 5s
# retries: 3
ports:
- target: 80
published: 80
protocol: tcp
mode: host
- target: 443
published: 443
protocol: tcp
mode: host
- target: 8080 # insecure api wont work
published: 8080
protocol: tcp
mode: host
volumes:
- $DOCKERDIR/appdata/traefik2/rules/cloudserver:/rules # file provider directory
- /var/run/docker.sock:/var/run/docker.sock:rw # Use Docker Socket Proxy instead for improved security
- $DOCKERDIR/appdata/traefik2/acme/acme.json:/acme.json # cert location - you must create this emtpy file and change permissions to 600
- $DOCKERDIR/logs/cloudserver/traefik:/logs # for fail2ban or crowdsec
- $DOCKERDIR/shared:/shared
environment:
- TZ=$TZ
- CF_API_EMAIL=$CLOUDFLARE_EMAIL
- CF_API_KEY=$CLOUDFLARE_API_KEY
- DOMAINNAME_CLOUD_SERVER # Passing the domain name to traefik container to be able to use the variable in rules.
# secrets:
#- cf_email
# - cf_api_key
# - htpasswd
labels:
#- "autoheal=true"
- "traefik.enable=true"
# HTTP-to-HTTPS Redirect
- "traefik.http.routers.http-catchall.entrypoints=http"
- "traefik.http.routers.http-catchall.rule=HostRegexp(`{host:.+}`)"
- "traefik.http.routers.http-catchall.middlewares=redirect-to-https"
- "traefik.http.middlewares.redirect-to-https.redirectscheme.scheme=https"
# HTTP Routers
- "traefik.http.routers.traefik-rtr.entrypoints=https"
- "traefik.http.routers.traefik-rtr.rule=Host(`traefik.$DOMAINNAME_CLOUD_SERVER`)"
- "traefik.http.routers.traefik-rtr.tls=true" # Some people had 404s without this
# - "traefik.http.routers.traefik-rtr.tls.certresolver=dns-cloudflare" # Comment out this line after first run of traefik to force the use of wildcard certs
- "traefik.http.routers.traefik-rtr.tls.domains[0].main=$DOMAINNAME_CLOUD_SERVER"
- "traefik.http.routers.traefik-rtr.tls.domains[0].sans=*.$DOMAINNAME_CLOUD_SERVER"
# - "traefik.http.routers.traefik-rtr.tls.domains[1].main=$DOMAINNAME2" # Pulls main cert for second domain
# - "traefik.http.routers.traefik-rtr.tls.domains[1].sans=*.$DOMAINNAME2" # Pulls wildcard cert for second domain
## Services - API
- "traefik.http.routers.traefik-rtr.service=api#internal"
## Healthcheck/ping
#- "traefik.http.routers.ping.rule=Host(`traefik.$DOMAINNAME_CLOUD_SERVER`) && Path(`/ping`)"
#- "traefik.http.routers.ping.tls=true"
#- "traefik.http.routers.ping.service=ping#internal"
## Middlewares
#- "traefik.http.routers.traefik-rtr.middlewares=chain-no-auth#file" # For No Authentication
#- "traefik.http.routers.traefik-rtr.middlewares=chain-auth-basic#file" # For Basic HTTP Authentication
#- "traefik.http.routers.traefik-rtr.middlewares=chain-oauth#file" # For Google OAuth
#- "traefik.http.routers.traefik-rtr.middlewares=chain-authelia#file" # For Authelia Authentication
- "traefik.http.routers.traefik-rtr.middlewares=middlewares-basic-auth#file"
db_nextcloud:
image: linuxserver/mariadb:arm64v8-latest
restart: always
volumes:
- db_nextcloud:/var/lib/mysql
environment:
- MYSQL_DATABASE=nextcloud
- MYSQL_USER=nextcloud
- MYSQL_PASSWORD=${MYSQLPASSWORD}
- MYSQL_ROOT_PASSWORD=${MYSQLROOTPASSWORD}
networks:
- nextcloud
nextcloud:
image: nextcloud:24
restart: always
depends_on:
- db_nextcloud
volumes:
- nextcloud_root:/var/www/html
- nextcloud_data:/var/www/html/data
- nextcloud_config:/var/www/html/config
- nextcloud_apps:/var/www/html/apps
- /etc/localtime:/etc/localtime:ro
- /etc/timezone:/etc/timezone:ro
environment:
- MYSQL_DATABASE=nextcloud
- MYSQL_USER=nextcloud
- MYSQL_PASSWORD=${MYSQLPASSWORD}
- MYSQL_HOST=db_nextcloud
- NEXTCLOUD_ADMIN_USER=ncpraef
- NEXTCLOUD_ADMIN_PASSWORD=${NEXTCLOUDADMINPASSWORD}
- NEXTCLOUD_TRUSTED_DOMAINS="${NEXTCLOUDURL}"
- OVERWRITEPROTOCOL=https
- TRUSTED_PROXIES="172.17.0.0/12,192.168.90.0/24"
networks:
- t2_proxy
- nextcloud
labels:
- "traefik.enable=true"
# - "traefik.http.routers.nextcloud.rule=Host(`${NEXTCLOUDURL}`)"
# - "traefik.http.routers.nextcloud.middlewares=redirect-to-https#docker"
# - "traefik.http.middlewares.redirect-to-https.redirectscheme.scheme=https"
# - "traefik.http.routers.nextcloud-secure.entrypoints=web-secure"
- "traefik.http.routers.nextcloud-secure.rule=Host(`${NEXTCLOUDURL}`)"
- "traefik.http.routers.nextcloud-secure.tls=true"
# - "traefik.http.routers.nextcloud.middlewares=chain-no-auth#file" # No Authentication
# - "traefik.http.routers.traefik-secure-secured.tls.certresolver=letsencrypthttpchallenge"
- "traefik.http.routers.nextcloud.tls.passthrough=true"
- "traefik.http.routers.nextcloud.tls.certResolver=dns-cloudflare"
- "traefik.http.routers.nextcloud.middlewares=nextcloudheaders#docker,nextcloud-dav#docker"
- "traefik.http.routers.nextcloud.service=nextcloud"
- "traefik.docker.network=t2_proxy"
- "traefik.docker.network=nextcloud"
- "traefik.http.routers.nextcloud-secure.middlewares=nextcloudheaders#docker,nextcloud-dav#docker"
- "traefik.http.middlewares.nextcloudheaders.headers.customRequestHeaders.X-Forwarded-Proto=https"
- "traefik.http.middlewares.nextcloudheaders.headers.accessControlAllowOrigin=*"
- "traefik.http.middlewares.nextcloud-dav.replacepathregex.regex=^/.well-known/ca(l|rd)dav"
- "traefik.http.middlewares.nextcloud-dav.replacepathregex.replacement=/remote.php/dav/"
Any idea why I cannot access the NextCloud URL cloud.<DOMAIN> would be appreciated.
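One thing worth checking (a guess based on the labels, not a verified fix): the settings are split across two router names. Only nextcloud-secure has a rule, while the certresolver, middlewares and service are attached to a router called nextcloud, which never matches any request. A consolidated sketch on a single router, using only names already present in the question:
- "traefik.enable=true"
- "traefik.http.routers.nextcloud-secure.entrypoints=https"
- "traefik.http.routers.nextcloud-secure.rule=Host(`${NEXTCLOUDURL}`)"
- "traefik.http.routers.nextcloud-secure.tls=true"
- "traefik.http.routers.nextcloud-secure.tls.certresolver=dns-cloudflare"
- "traefik.http.routers.nextcloud-secure.middlewares=nextcloudheaders@docker,nextcloud-dav@docker"
- "traefik.docker.network=t2_proxy"
Note there is also only one traefik.docker.network label here: the question sets it twice, and Traefik needs the one for the network it shares with the container (t2_proxy).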
I am trying to make a local setup of Graylog 4 with Elasticsearch 7 and Mongo 4 using docker-compose. I am working on a Mac.
Here is my docker-compose.yml: https://gist.github.com/gandra/dc649b37e165d8e3fc5b20c30a8b5a79
After running:
docker-compose up -d --build
I cannot see any data on http://localhost:9000/
When I open that URL I see:
localhost didn’t send any data.
ERR_EMPTY_RESPONSE
Any idea how to make it work?
Here's the configuration I'm using in my project to get it working (compose v3).
###################################
# Graylog container logging start #
###################################
# Taken from https://docs.graylog.org/en/4.0/pages/installation/docker.html
# MongoDB: https://hub.docker.com/_/mongo/
mongo:
  image: mongo:4.2
# Elasticsearch: https://www.elastic.co/guide/en/elasticsearch/reference/7.10/docker.html
elasticsearch:
  image: docker.elastic.co/elasticsearch/elasticsearch-oss:7.10.2
  environment:
    - http.host=0.0.0.0
    - transport.host=localhost
    - network.host=0.0.0.0
    - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
  ulimits:
    memlock:
      soft: -1
      hard: -1
  deploy:
    resources:
      limits:
        memory: 1g
# Graylog: https://hub.docker.com/r/graylog/graylog/
graylog:
  image: graylog/graylog:4.0
  environment:
    # CHANGE ME (must be at least 16 characters)!
    - GRAYLOG_PASSWORD_SECRET=somepasswordpepper
    # Password: admin
    - GRAYLOG_ROOT_PASSWORD_SHA2=8c6976e5b5410415bde908bd4dee15dfb167a9c873fc4bb8a81f6f2ab448a918
    - GRAYLOG_HTTP_EXTERNAL_URI=http://127.0.0.1:9000/
  entrypoint: /usr/bin/tini -- wait-for-it elasticsearch:9200 -- /docker-entrypoint.sh
  restart: always
  depends_on:
    - mongo
    - elasticsearch
  ports:
    # Graylog web interface and REST API
    - 9000:9000
    # Syslog TCP
    - 1514:1514
    # Syslog UDP
    - 1514:1514/udp
    # GELF TCP
    - 12201:12201
    # GELF UDP
    - 12201:12201/udp
###################################
#  Graylog container logging end  #
###################################
I will say, this took a fair bit of time to start. The output logs ran for a while while Graylog, MongoDB, and Elasticsearch did their setup work. At the end of it, though, it did eventually become available (it took about a full two minutes). Until it was ready, I saw the same response that you did.
Graylog does not support Elasticsearch versions 7.11 or greater, so you'll need to change the Elasticsearch version to 7.10.2. Beyond that, what are you seeing in Graylog's server.log?
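If it still fails after pinning 7.10.2, a quick way to watch the startup and see the server.log output (assuming the service is named graylog as in the snippet above):
docker-compose logs -f graylog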
I am pretty new to Docker. I wanted to make an Odoo 8 container using xcgd/odoo's image.
Below is my docker-compose file. The web container dies with exit code 0 soon after I run docker-compose up on the yml.
I know that xcgd/odoo requires linking the db container; I've seen it in the documentation:
$ docker run -p 8069:8069 --rm --name="xcgd.odoo" --link pg93:db xcgd/odoo:7.0 start
Am I missing this link in my yaml? I thought I had already defined the link using networks?
Can anyone point out my mistake?
My yaml file:
version: '3.3'
services:
  # Web Application Service Definition
  # --------
  #
  # All of the information needed to start up an odoo web
  # application container.
  web:
    image: xcgd/odoo:8.0
    depends_on:
      - db
    # Port Mapping
    # --------
    #
    # Here we are mapping a port on the host machine (on the left)
    # to a port inside of the container (on the right.) The default
    # port on Odoo is 8069, so Odoo is running on that port inside
    # of the container. But we are going to access it locally on
    # our machine from localhost:9000.
    #ports:
    #  - 80:8069
    # Data Volumes
    # --------
    #
    # This defines files that we are mapping from the host machine
    # into the container.
    #
    # Right now, we are using it to map a configuration file into
    # the container and any extra odoo modules.
    volumes:
      - ./config:/etc/odoo
      - ./addons/logic:/mnt/logic-addons
      - ./addons/data:/mnt/data-addons
    # Odoo Environment Variables
    # --------
    # The odoo image uses a few different environment
    # variables when running to connect to the postgres
    # database.
    #
    # Make sure that they are the same as the database user
    # defined in the db container environment variables.
    environment:
      - HOST=db
      - USER=odoo
      - PASSWORD=odoo
      - VIRTUAL_HOST=proc.fullertonhealth.co.id
      - VIRTUAL_PORT=8069
      - LETSENCRYPT_HOST=proc.fullertonhealth.co.id
      - LETSENCRYPT_EMAIL=info@fullertonhealth.co.id
    expose:
      - 8069
  # Database Container Service Definition
  # --------
  #
  # All of the information needed to start up a postgresql
  # container.
  db:
    image: postgres:9.5
    # Database Environment Variables
    # --------
    #
    # The postgresql image uses a few different environment
    # variables when running to create the database. Set the
    # username and password of the database user here.
    #
    # Make sure that they are the same as the database user
    # defined in the web container environment variables.
    environment:
      - POSTGRES_PASSWORD=odoo
      - POSTGRES_USER=odoo
      - POSTGRES_DB=postgres # Leave this set to postgres

networks:
  default:
    external:
      name: nginx-proxy
It turns out I have to set command: "start" in the web service. My bad, I did not understand the parameters of the docker run example given in the documentation.
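For anyone landing here, a minimal sketch of what that fix looks like in the web service above (everything else unchanged):
web:
  image: xcgd/odoo:8.0
  command: "start"  # the trailing argument from the docker run example in the docs
  depends_on:
    - db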
I am using a docker-compose file generated by docker-app:
docker-app render | docker-compose -f - up
The docker-app file looks like this, and it works as expected. But I am not able to use volumes.
With plain docker run I would pass the -v parameter like this:
-v /my/custom3399:/etc/mysql/conf.d
-v /storage/mysql/datadir3399:/var/lib/mysql
How do I declare these volumes in the compose file?
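For reference, the short host:container bind-mount syntax translates those two -v flags directly (paths and image copied from the question, just as an illustration):
services:
  mysql:
    image: shantanuo/mysql:5.7
    volumes:
      - /my/custom3399:/etc/mysql/conf.d
      - /storage/mysql/datadir3399:/var/lib/mysql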
# vi hello.dockerapp
# This section contains your application metadata.
# Version of the application
version: 0.1.0
# Name of the application
name: hello
# A short description of the application
description:
# Namespace to use when pushing to a registry. This is typically your Hub username.
#namespace: myHubUsername
# List of application maintainers with name and email for each
maintainers:
  - name: root
    email:
# Specify false here if your application doesn't support Swarm or Kubernetes
targets:
  swarm: false
  kubernetes: false
--
# This section contains the Compose file that describes your application services.
version: "3.5"
services:
  mysql:
    image: ${mysql.image.version}
    environment:
      MYSQL_ROOT_PASSWORD: india${mysql.port}
    ports:
      - "${mysql.port}:3306"
--
# This section contains the default values for your application settings.
mysql.image.version: shantanuo/mysql:5.7
mysql.port: 3391
Update:
The script mentioned above works well. But once I add volumes, I get an error:
version: "3.5"
services:
mysql:
image: ${mysql.image.version}
environment:
MYSQL_ROOT_PASSWORD: india${mysql.port}
ports:
- "${mysql.port}:3306"
volumes:
- type: volume
source: mysql_data
target: /var/lib/mysql
volumes:
mysql_data:
external: true
And the error is:
docker-app render | docker-compose -f - up
Recreating e5c833e2187d_hashi_mysql_1 ... error
ERROR: for e5c833e2187d_hashi_mysql_1 Cannot create container for service mysql: Duplicate mount point: /var/lib/mysql
ERROR: for mysql Cannot create container for service mysql: Duplicate mount point: /var/lib/mysql
ERROR: Encountered errors while bringing up the project.
As mentioned in the comment, the rendered output is as follows:
# /usr/local/bin/docker-app render
version: "3.5"
services:
  mysql:
    environment:
      MYSQL_ROOT_PASSWORD: india3391
    image: shantanuo/mysql:5.7
    ports:
      - mode: ingress
        target: 3306
        published: 3391
        protocol: tcp
    volumes:
      - type: volume
        source: mysql_data
        target: /var/lib/mysql

volumes:
  mysql_data:
    name: mysql_data
    external: true
This issue was resolved once I changed the directory name.
# cd ..
# mv hashi/ hashi123/
# cd hashi123
Not sure how this worked, but since I am able to start the server stack, I am posting it as an answer.
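A plausible explanation (an inference, not confirmed in the answer): docker-compose derives its default project name from the directory name, so renaming the directory created a brand-new project instead of recreating the old hashi container whose existing /var/lib/mysql mount clashed with the named volume. The project name can also be set explicitly with the -p/--project-name flag, without renaming anything:
docker-app render | docker-compose -f - -p hashi123 up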
Just wondering what the working_dir option really does for an image loaded via docker-compose. A sample docker-compose.yml file follows:
dev:
  extends:
    file: common.yml
    service: workspace
  volumes:
    - $ATOMSPACE_SOURCE_DIR:/atomspace
    - $COGUTILS_SOURCE_DIR:/cogutils
    # Uncomment the following line if you want to work on moses
    # - $MOSES_SOURCE_DIR:/moses
  working_dir: /opencog # This is the same as the volume mount point below
  links:
    - postgres:db
    - relex:relex

postgres:
  image: opencog/postgres
  # Uncomment the following lines if you want to work on a production
  # system.
  # NOTE: If the environment variable `PROD` is set to `True`, then the
  # entrypoint script in opencog/postgres does additional configuration.
  # environment:
  #   - PROD=True

relex:
  image: opencog/relex
  command: /bin/sh -c "./opencog-server.sh"
working_dir sets the working directory of the container that is created. It is the same as the --workdir flag to docker run.
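A quick way to see the equivalence (a toy example with a stock image, not the opencog ones):
# prints /opencog
docker run --rm --workdir /opencog ubuntu pwd

# the same thing expressed in a docker-compose.yml (v1 format, as in the question):
dev:
  image: ubuntu
  working_dir: /opencog
  command: pwd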