Odoo 8 using docker-compose

I am pretty new to Docker. I wanted to make an Odoo 8 container using the xcgd/odoo image.
Below is my docker-compose file. The web container dies with exit code 0 soon after I run docker-compose up on the yml.
I know that xcgd/odoo requires a link to the db container; I've seen it in the documentation:
$ docker run -p 8069:8069 --rm --name="xcgd.odoo" --link pg93:db xcgd/odoo:7.0 start
Am I missing this link in my yaml? I thought I had already defined the link using networks.
Can anyone point out my mistake?
My yaml file:
version: '3.3'
services:
  # Web Application Service Definition
  # --------
  #
  # All of the information needed to start up an odoo web
  # application container.
  web:
    image: xcgd/odoo:8.0
    depends_on:
      - db
    # Port Mapping
    # --------
    #
    # Here we are mapping a port on the host machine (on the left)
    # to a port inside of the container (on the right.) The default
    # port on Odoo is 8069, so Odoo is running on that port inside
    # of the container. But we are going to access it locally on
    # our machine from localhost:9000.
    #ports:
    #  - 80:8069
    # Data Volumes
    # --------
    #
    # This defines files that we are mapping from the host machine
    # into the container.
    #
    # Right now, we are using it to map a configuration file into
    # the container and any extra odoo modules.
    volumes:
      - ./config:/etc/odoo
      - ./addons/logic:/mnt/logic-addons
      - ./addons/data:/mnt/data-addons
    # Odoo Environment Variables
    # --------
    # The odoo image uses a few different environment
    # variables when running to connect to the postgres
    # database.
    #
    # Make sure that they are the same as the database user
    # defined in the db container environment variables.
    environment:
      - HOST=db
      - USER=odoo
      - PASSWORD=odoo
      - VIRTUAL_HOST=proc.fullertonhealth.co.id
      - VIRTUAL_PORT=8069
      - LETSENCRYPT_HOST=proc.fullertonhealth.co.id
      - LETSENCRYPT_EMAIL=info@fullertonhealth.co.id
    expose:
      - 8069
  # Database Container Service Definition
  # --------
  #
  # All of the information needed to start up a postgresql
  # container.
  db:
    image: postgres:9.5
    # Database Environment Variables
    # --------
    #
    # The postgresql image uses a few different environment
    # variables when running to create the database. Set the
    # username and password of the database user here.
    #
    # Make sure that they are the same as the database user
    # defined in the web container environment variables.
    environment:
      - POSTGRES_PASSWORD=odoo
      - POSTGRES_USER=odoo
      - POSTGRES_DB=postgres # Leave this set to postgres
networks:
  default:
    external:
      name: nginx-proxy

It turns out I had to set command: "start" in the web service. My bad, I did not understand the parameters of the docker run example given in the documentation.
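For reference, the fix amounts to one extra line in the web service of the yaml above (only the relevant keys shown):

web:
  image: xcgd/odoo:8.0
  command: "start"
  depends_on:
    - db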

Cannot connect to mongodb container from another compose [duplicate]

I have two separate docker-compose.yml files in two different folders:
~/front/docker-compose.yml
~/api/docker-compose.yml
How can I make sure that a container in front can send requests to a container in api?
I know that the --default-gateway option can be set using docker run for an individual container, so that a specific IP address can be assigned to this container, but it seems that this option is not available when using docker-compose.
Currently I end up doing a docker inspect my_api_container_id and looking at the gateway in the output. It works, but the problem is that this IP is randomly assigned, so I can't rely on it.
Another form of this question might thus be:
Can I assign a fixed IP address to a particular container using docker-compose?
But in the end what I'm really after is:
How can two different docker-compose projects communicate with each other?
You just need to make sure that the containers you want to talk to each other are on the same network. Networks are a first-class docker construct, and not specific to compose.
# front/docker-compose.yml
version: '2'
services:
  front:
    ...
    networks:
      - some-net
networks:
  some-net:
    driver: bridge

# api/docker-compose.yml
version: '2'
services:
  api:
    ...
    networks:
      - front_some-net
networks:
  front_some-net:
    external: true
Note: your app's network is given a name based on the "project name", which is based on the name of the directory it lives in; in this case the prefix front_ was added.
They can then talk to each other using the service name. From front you can do ping api and vice versa.
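For example, assuming both projects are up and the front image ships a ping binary, you can check this from the host:

$ cd front
$ docker-compose exec front ping -c 1 api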
UPDATE: As of compose file version 3.5:
This now works:
version: "3.5"
services:
proxy:
image: hello-world
ports:
- "80:80"
networks:
- proxynet
networks:
proxynet:
name: custom_network
docker-compose up -d will join a network called 'custom_network'. If it doesn't exist, it will be created!
root@ubuntu-s-1vcpu-1gb-tor1-01:~# docker-compose up -d
Creating network "custom_network" with the default driver
Creating root_proxy_1 ... done
Now, you can do this:
version: "2"
services:
web:
image: hello-world
networks:
- my-proxy-net
networks:
my-proxy-net:
external:
name: custom_network
This will create a container that will be on the external network.
I can't find any reference in the docs yet but it works!
Just a small addition to @johnharris85's great answer:
when you are running a docker-compose file, a "default" network is created, so you can just add it to the other compose file as an external network:
# front/docker-compose.yml
version: '2'
services:
  front_service:
    ...

# api/docker-compose.yml
version: '2'
services:
  api_service:
    ...
    networks:
      - front_default
networks:
  front_default:
    external: true
For me this approach was better suited because I did not own the first docker-compose file and wanted to communicate with it.
All containers from api can join the front default network with the following config:
# api/docker-compose.yml
...
networks:
  default:
    external:
      name: front_default
See the Docker Compose guide: using a pre-existing network (at the bottom of the page).
The information in the previous posts is correct, but they lack details on how to link containers, which should be connected as external_links.
Hopefully this example makes it clearer:
Suppose you have app1/docker-compose.yml, with two services (svc11 and svc12), and app2/docker-compose.yml with two more services (svc21 and svc22) and suppose you need to connect in a crossed fashion:
svc11 needs to connect to svc22's container
svc21 needs to connect to svc11's container.
So the configuration should be like this:
this is app1/docker-compose.yml:
version: '2'
services:
  svc11:
    container_name: container11
    [..]
    networks:
      - default # this network
      - app2_default # external network
    external_links:
      - container22:container22
    [..]
  svc12:
    container_name: container12
    [..]
networks:
  default: # this network (app1)
    driver: bridge
  app2_default: # external network (app2)
    external: true
this is app2/docker-compose.yml:
version: '2'
services:
  svc21:
    container_name: container21
    [..]
    networks:
      - default # this network (app2)
      - app1_default # external network (app1)
    external_links:
      - container11:container11
    [..]
  svc22:
    container_name: container22
    [..]
networks:
  default: # this network (app2)
    driver: bridge
  app1_default: # external network (app1)
    external: true
Everybody has explained really well, so I'll add the necessary code with just one simple explanation.
Use a network created outside of docker-compose (an "external" network) with docker-compose version 3.5+.
Further explanation can be found here.
The first docker-compose.yml file should define a network named giveItANamePlease as follows:
networks:
  my-network:
    name: giveItANamePlease
    driver: bridge
The services of the first docker-compose.yml file can use the network as follows:
networks:
  - my-network
In the second docker-compose file, we need to reference the network by the name we used in the first docker-compose file, which in this case is giveItANamePlease:
networks:
  my-proxy-net:
    external:
      name: giveItANamePlease
And now you can use my-proxy-net in the services of the second docker-compose.yml file as follows:
networks:
  - my-proxy-net
Since Compose 1.18 (spec 3.5), you can just override the default network using your own custom name for all Compose YAML files you need. It is as simple as appending the following to them:
networks:
  default:
    name: my-app
The above assumes you have version set to 3.5 (or above if they don't deprecate it in 4+).
Other answers have pointed the same; this is a simplified summary.
I came across a similar problem and I solved it by adding a small change in one of my docker-compose.yml projects.
For instance, we have two APIs, scoring and ner. The scoring API needs to send requests to the ner API for processing the input request. In order to do that, they both are supposed to share the same network.
Note: every container has its own network, which is automatically created at the time of running the app inside docker. For example, the ner API network will be created as ner_default and the scoring API network will be named scoring_default. This solution will work for version: '3'.
As in the above scenario, my scoring API wants to communicate with the ner API, so I will add the following lines. This means that whenever I create the container for the ner API, it is automatically added to the scoring_default network.
networks:
  default:
    external:
      name: scoring_default
ner/docker-compose.yml
version: '3'
services:
  ner:
    container_name: "ner_api"
    build: .
    ...
networks:
  default:
    external:
      name: scoring_default
scoring/docker-compose.yml
version: '3'
services:
  api:
    build: .
    ...
We can see that the above containers are now part of the same network, called scoring_default, using the command:
docker inspect scoring_default
{
    "Name": "scoring_default",
    ....
    "Containers": {
        "14a6...28bf": {
            "Name": "ner_api",
            "EndpointID": "83b7...d6291",
            "MacAddress": "0....",
            "IPv4Address": "0.0....",
            "IPv6Address": ""
        },
        "7b32...90d1": {
            "Name": "scoring_api",
            "EndpointID": "311...280d",
            "MacAddress": "0.....3",
            "IPv4Address": "1...0",
            "IPv6Address": ""
        },
        ...
    }
}
You can add a .env file in all your projects containing COMPOSE_PROJECT_NAME=somename.
COMPOSE_PROJECT_NAME overrides the prefix used to name resources; as such, all your projects will use somename_default as their network, making it possible for services to communicate with each other as if they were in the same project.
NB: You'll get warnings for "orphaned" containers created from other projects.
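A quick sketch of this, using somename as the placeholder value from above and the front/api projects from the question:

$ echo "COMPOSE_PROJECT_NAME=somename" > ~/front/.env
$ echo "COMPOSE_PROJECT_NAME=somename" > ~/api/.env
$ (cd ~/front && docker-compose up -d)
$ (cd ~/api && docker-compose up -d)
$ docker network ls --filter name=somename_default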
So many answers!
First of all, avoid hyphens in entity names such as services and networks. They can cause issues with name resolution.
Example: my-api won't work; myapi or api will.
What worked for me is:
# api/docker-compose.yml
version: '3'
services:
  api:
    container_name: api
    ...
    ports:
      - 8081:8080
    networks:
      - mynetwork
networks:
  mynetwork:
    name: mynetwork
and
# front/docker-compose.yml
version: '3'
services:
  front:
    container_name: front
    ...
    ports:
      - 81:80
    networks:
      - mynetwork
networks:
  mynetwork:
    name: mynetwork
NOTE: I added ports to show how services can access each other, and how they are accessible from the host.
IMPORTANT: If you don't specify a network name, docker-compose will craft one for you, using the name of the folder the docker-compose.yml file is in. In this case: api_mynetwork and front_mynetwork. That would prevent communication between containers, since they would be on different networks with very similar names.
Note that the network is defined exactly the same way in both files, so you can start either service first and it will work. No need to specify which one is external; docker-compose will take care of managing that for you.
From the host
You can access either container using the published ports defined in docker-compose.yml.
You can access the Front container: curl http://localhost:81
You can access the API container: curl http://localhost:8081
From the API container
You can access the Front container using the original port, not the one you published in docker-compose.yml.
Example: curl http://front:80
From the Front container
You can access the API container using the original port, not the one you published in docker-compose.yml.
Example: curl http://api:8080
To use another docker-compose project's network (i.e., to share networks between docker-compose files), do the following:
1. Run the first docker-compose project with up -d.
2. Find the network name of the first project with docker network ls (it contains the name of the project's root directory).
3. Use that name with the structure below in the second docker-compose file.
second docker-compose.yml:
version: '3'
services:
  service-on-second-compose: # Define any name that you want.
    .
    .
    .
    networks:
      - <put it here (the network name that comes from "docker network ls")>

networks:
  <put it here (the network name that comes from "docker network ls")>:
    external: true
I would ensure all containers are docker-compose'd to the same network by composing them together at the same time, using:
docker compose --file ~/front/docker-compose.yml --file ~/api/docker-compose.yml up -d
If you are:
- trying to communicate between two containers from different docker-compose projects and don't want to use the same network (because, let's say, they would both have a PostgreSQL or Redis container on the same port and you would prefer not to change these ports and not to use the same network)
- developing locally and want to imitate communication between two docker-compose projects
- running two docker-compose projects on localhost
- developing Django apps or Django Rest Framework (DRF) APIs and running the app inside a container on some exposed port
- getting Connection refused while trying to communicate between two containers
And you want:
- container api_a to communicate with api_b (or vice versa) without sharing the same "docker network"
(example below)
then you can use, as the "host" of the second container, the IP of your computer together with the port that is mapped from inside the Docker container. You can obtain the IP of your computer with this script (from: Finding local IP addresses using Python's stdlib):
import socket

def get_ip():
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        # doesn't even have to be reachable
        s.connect(('10.255.255.255', 1))
        IP = s.getsockname()[0]
    except:
        IP = '127.0.0.1'
    finally:
        s.close()
    return IP
Example:
project_api_a/docker-compose.yml:
networks:
  app-tier:
    driver: bridge
services:
  api:
    container_name: api_a
    image: api_a:latest
    depends_on:
      - postgresql
    networks:
      - app-tier
Inside the api_a container you are running a Django app:
manage.py runserver 0.0.0.0:8000
and the second docker-compose.yml from the other project, project_api_b/docker-compose.yml:
networks:
  app-tier:
    driver: bridge
services:
  api:
    container_name: api_b
    image: api_b:latest
    depends_on:
      - postgresql
    networks:
      - app-tier
Inside the api_b container you are running a Django app:
manage.py runserver 0.0.0.0:8001
When trying to connect from container api_a to api_b, the URL of the api_b container will be:
http://<get_ip_from_script_above>:8001/
This can be especially valuable if you are using more than two (three or more) docker-compose projects and it's hard to provide a common network for all of them; it's a good workaround and solution.
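For illustration, a tiny consumer of the script above, run inside api_a (the module name hostip and the URL path are assumptions for this sketch, not part of the projects shown):

import urllib.request

from hostip import get_ip  # the get_ip() helper above, saved as hostip.py

# reach api_b through the host IP and the port mapped in its compose file
url = f"http://{get_ip()}:8001/"
with urllib.request.urlopen(url, timeout=5) as resp:
    print(resp.status)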
To connect two docker-compose projects you need a network, and you need to put both docker-compose projects on that network.
You could create the network with docker network create name-of-network,
or you could simply put a network declaration in the networks option of the docker-compose file; when you run docker-compose up, the network will be created automatically.
Put the lines below in both docker-compose files:
networks:
  net-for-alpine:
    name: test-db-net
Note: net-for-alpine is the internal name of the network; it is used inside each docker-compose file and can differ between them.
test-db-net is the external name of the network and must be the same in the two docker-compose files.
Assume we have docker-compose.db.yml and docker-compose.alpine.yml.
docker-compose.alpine.yml would be:
version: '3.8'
services:
  alpine:
    image: alpine:3.14
    container_name: alpine
    networks:
      - net-for-alpine
    # these two options keep the alpine container running
    stdin_open: true # docker run -i
    tty: true # docker run -t
networks:
  net-for-alpine:
    name: test-db-net
docker-compose.db.yml would be:
version: '3.8'
services:
  db:
    image: postgres:13.4-alpine
    container_name: psql
    networks:
      - net-for-db
networks:
  net-for-db:
    name: test-db-net
To test the network, go inside the alpine container:
docker exec -it alpine sh
Then with the following commands you can check the network:
# if it returns 0 (or you see nothing as a result), the network is established
nc -z psql 5432 # psql is the container name; note that nc -z also needs a port
or
ping psql
I'm running multiple identical docker-compose.yml files in different directories, using .env files to make slight differences, and using Nginx Proxy Manager to communicate with the other services. Here are my files:
Make sure you have created a public network:
docker network create nginx-proxy-man
/domain1.com/docker-compose.yml, /domain2.com/docker-compose.yml, ...
version: "3.9"
services:
webserver:
build:
context: ./bin/${PHPVERSION}
container_name: "${COMPOSE_PROJECT_NAME}-${PHPVERSION}"
...
networks:
- default # network outside
- internal # network internal
database:
build:
context: "./bin/${DATABASE}"
container_name: "${COMPOSE_PROJECT_NAME}-${DATABASE}"
...
networks:
- internal # network internal
networks:
default:
external: true
name: nginx-proxy-man
internal:
internal: true
In the .env file, just change COMPOSE_PROJECT_NAME:
COMPOSE_PROJECT_NAME=domain1_com
.
.
.
PHPVERSION=php56
DATABASE=mysql57
webserver.container_name: domain1_com-php56 will join the default network (name: nginx-proxy-man), previously created, so that Nginx Proxy Manager can reach it from the outside.
Note: container_name must be unique within the same network.
database.container_name: domain1_com-mysql57 is easier to distinguish.
In the same docker-compose.yml, the services connect to each other via the service name, because they share the network domain1_com_internal. To be more secure, set this network with the option internal: true.
Note: if you don't explicitly specify networks for each service, but just use a common external network for both docker-compose.yml files, then it's likely that domain1_com will use domain2_com's database.
Another option is to just bring up the first module with docker-compose, check the IP associated with the module, and connect the second module to the previous network as external, pointing to that internal IP.
Example:
app1 - new-network created in the service lines, marked as external: true at the bottom.
app2 - references the "new-network" created when app1 goes up, marked as external: true at the bottom, and configured to connect to the IP that app1 has on this network.
With this, you should be able to talk to each other.
*This way is just for local-test purposes, in order not to end up with an overly complex configuration.
**I know it is a very 'patchy' way, but it works for me and I think it is simple enough that others can take advantage of it.
Answer for Docker Compose '3' and up
By default Docker Compose uses a bridge network to provision inter-container communication. Read this article for more info about inter-container networking.
What matters for you is that, by default, Docker Compose creates a hostname that equals the service name in the docker-compose.yml file. Consider the following docker-compose.yml:
version: '3.9'
services:
  server:
    image: node:16.9.0
    container_name: server
    tty: true
    stdin_open: true
    depends_on:
      - mongo
    command: bash
  mongo:
    image: mongo
    environment:
      MONGO_INITDB_DATABASE: my-database
When you run docker-compose up, Docker will create a default network and assign the service name as hostname for both mongo and server.
You can now access the backend container via:
docker exec -it server bash
And now you can reach the mongo container using Docker's internal network (on the default port 27017 in this case):
curl -v http://mongo:27017/my-database
That's it. The same applies for your setup.
I had a similar case where I was working with separate docker-compose files on a docker swarm with an overlay network. To do that, all I had to do was change the networks parameters like so:
first docker-compose.yaml
version: '3.9'
.
.
.
networks:
  net:
    driver: overlay
    attachable: true
docker-compose -p app up
Since I have specified the project name app using -p, the initial network will be app_net.
Now, in order to run another docker-compose file with multiple services that will use the same network, you will need to set these as follows:
second docker-compose.yaml
version: '3.9'
.
.
.
networks:
  net-ref:
    external: true
    name: app_net
docker stack deploy -c docker-compose.yml mystack
No matter what name you give to the stack the network will not be affected and will always refer to the existing external network called app_net.
PS: It's important to make sure to check your docker-compose version.
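To double-check that both deployments ended up on the same network, you can list it and print the attached containers (plain docker CLI; only the app_net name from above is assumed):

$ docker network ls --filter name=app_net
$ docker network inspect app_net --format '{{range .Containers}}{{.Name}} {{end}}'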
version: '2'
services:
  bot:
    build: .
    volumes:
      - '.:/home/node'
      - /home/node/node_modules
    networks:
      - my-rede
    mem_limit: 100m
    memswap_limit: 100m
    cpu_quota: 25000
    container_name: 236948199393329152_585042339404185600_bot
    command: node index.js
    environment:
      NODE_ENV: production
networks:
  my-rede:
    external:
      name: name_rede_externa
Follow-up to JohnHarris' answer, just adding some more details which may be useful to someone. Let's take two docker-compose files and connect them through networks:
1st foldername/docker-compose.yml:
version: '2'
services:
  some-contr:
    container_name: []
    build: .
    ...
    networks:
      - somenet
    ports:
      - "8080:8080"
    expose:
      # Opens port 8080 on the container
      - "8080"
    environment:
      PORT: 8080
    tty: true
networks:
  somenet:
    driver: bridge
2nd docker-compose.yml:
version: '2'
services:
  pushapiserver:
    container_name: [container_name]
    build: .
    command: "tail -f /dev/null"
    volumes:
      - ./:/[work_dir]
    working_dir: /[work dir]
    image: [name of image]
    ports:
      - "8060:8066"
    environment:
      PORT: 8066
    tty: true
    networks:
      - foldername_somenet
networks:
  foldername_somenet:
    external: true
Now you can make API calls from one service to another (between different containers), like:
http://pushapiserver:8066/send_push call from some code in files for the 1st docker-compose.yml.
Two common mistakes (at least I made them a few times):
1. Take note of the [foldername] in which your docker-compose.yml file is present. Please see above: in the 2nd docker-compose.yml I have added the folder name to the network, because docker creates the network as [foldername]_[networkname].
2. Port: this one is very common. Please note I have used 8066 when trying to make the connection, i.e. http://pushapiserver:8066/.... 8066 is the port of the docker container (2nd docker-compose.yml), so when talking across docker-compose projects, docker will use the docker container port [8066] and not the host machine mapped port [8060].

What is the separator used by Docker Compose for container names, dash "-" or underscore "_"?

pymediawikidocker automatically generates docker images and containers for MediaWiki to get a "one-click" experience when setting up a whole cluster of MediaWikis with different versions/extensions and database settings for testing.
To be able to control the process, the library https://github.com/gabrieldemarmiesse/python-on-whales is used, which handles the docker compose commands.
The controlling Python software now needs to work with the automatically generated containers, and tries to calculate the container name the same way docker compose does.
I get mixed results, which might depend on operating system and Docker Compose version, e.g. mw1_35_8-mw-1 or mw_135_8_mw_1.
According to https://github.com/docker/for-mac/issues/6035 I tried:
separator="-" if platform.system()=="Darwin" else "_"
but that still doesn't give consistent results, and my CI tests in https://github.com/WolfgangFahl/pymediawikidocker keep failing.
I am working around the problem now by trying out both (see the sketch after the example below), but would love to know what the rules are for creating the container name.
Here is an example docker-compose.yml I am generating:
version: "3"
# 2 services
# db - database
# mw - mediawiki
services:
# MySQL compatible relational database
db:
# use original image
image: mariadb:10.9
restart: always
environment:
MYSQL_DATABASE: wiki
MYSQL_USER: wikiuser
MYSQL_PASSWORD: "BOqdADGJYADBsZK7fg"
MYSQL_ROOT_PASSWORD: "hnOZz1xbkvySSh3RJg"
ports:
- 9308:3306
volumes:
- etc:/etc
# mediawiki
mw:
#image: mediawiki:1.35.8
# use the Dockerfile in this directory
build: .
restart: always
ports:
- 9082:80
links:
- db
depends_on:
- db
volumes:
- wikiimages:/var/www/html/images
# After initial setup, download LocalSettings.php to the same directory as
# this yaml and uncomment the following line and use compose to restart
# the mediawiki service
# - ./LocalSettings.php:/var/www/html/LocalSettings.php
volumes:
etc:
driver: local
wikiimages:
driver: local%
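For what it's worth, Compose V1 (the Python docker-compose) joins the project name, service name and index with underscores, while Compose V2 (the docker compose plugin) uses hyphens, which would explain the mixed results. Below is a minimal sketch of the "try both" workaround mentioned above, shelling out to the docker CLI; the helper name find_container is illustrative, not part of pymediawikidocker:

import subprocess

def find_container(project: str, service: str, index: int = 1):
    """Return the actual container name for (project, service),
    probing the Compose V2 ("-") and Compose V1 ("_") separators."""
    names = subprocess.run(
        ["docker", "ps", "--all", "--format", "{{.Names}}"],
        capture_output=True, text=True, check=True,
    ).stdout.split()
    for sep in ("-", "_"):
        candidate = f"{project}{sep}{service}{sep}{index}"
        if candidate in names:
            return candidate
    return None

print(find_container("mw1_35_8", "mw"))  # e.g. "mw1_35_8-mw-1" under Compose V2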

Getting a Docker postgres container to use hosts database files

I have a Postgres database running on my host. The datafiles for the database are stored at /usr/local/var/postgresql@13.
To get the full system running easily, I'd like to have a Docker container with a Postgres service running for other Docker apps to connect to. I would however like the Docker Postgres service to use the existing datafiles on the host ...
How do I set up the volume correctly to point to the host's database files?
Do I have to have a user/password when running the Docker container against existing datafiles?
I have the following but can't get the volume to work ...
version: "3.9"
services:
web:
build: .
ports:
- 8081:3011
depends_on:
- db
environment:
- PGHOST=db
- PGDATABASE=loggingtestdb
- PGUSER=postgres
- PGPASSWORD=postgres
db:
image: postgres
ports:
- 5432:5432
volumes:
- /usr/local/var/postgresql#13 <--- Need help here.
How do I map the container pg datafile location to the host's pg datafile location? 🙏
Update 1
This is the datafile folder for the db on the host
After the comments I updated the volumes as below:
volumes:
  - /usr/local/var/postgresql@13:/var/lib/postgresql/data
But when running docker compose I only get:
Error response from daemon: invalid mount config for type "bind": bind source path does not exist: /usr/local/var/postgresql@13
Update 2
/usr/local works fine. But as soon as I add the /var folder to the path, Docker for some reason can't find it … What am I missing here?

NAS Synology docker-compose not found

New to this so not sure what I'm missing.
I'm trying to follow these instructions to install elabftw as a docker container: https://doc.elabftw.net/install-nas.html
this is the container: https://registry.hub.docker.com/r/elabftw/elabimg/
Edited the docker-compose.yml but can't seem to run
docker-compose up -d
bash: docker-compose: command not found
I thought docker-compose already comes installed?
I'd appreciate some help!
Thanks
Danny
Update:
Can't even seem to install docker-compose in the actual container:
bash-5.1# curl -L https://github.com/docker/compose/releases/download/1.27.4/docker-compose-`uname -s`-`uname -m` -o docker-compose
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 633 100 633 0 0 4645 0 --:--:-- --:--:-- --:--:-- 4654
100 11.6M 100 11.6M 0 0 3891k 0 0:00:03 0:00:03 --:--:-- 4083k
bash-5.1# ls
cache config.php docker-compose docker-compose.yml mysql uploads web
bash-5.1# chmod +x docker-compose
bash-5.1# docker-compose --version
bash: docker-compose: command not found
bash-5.1#
I can install docker-compose on the actual NAS and update it, but not in the docker container itself.
Edit II: docker-compose.yml
# docker-elabftw configuration file
# use : "docker-compose up -d" to start containers
# this config file contains all the possible configuration options, shown with default values
# https://hub.docker.com/r/elabftw/elabimg/
# https://www.elabftw.net
version: '3'

# our first container is nginx + php-fpm + elabftw
services:
  web:
    # the latest tag points to the latest stable version
    # use the next tag to use alpha/beta version
    # use a specific version to pin the image
    # example: elabftw/elabimg:4.0.5
    # default value: elabftw/elabimg:latest
    image: elabftw/elabimg:latest
    # this ensures the container will be restarted after a reboot of the server
    # default value: always
    restart: always
    # comment this out if you use several containers with redis, as you can't have several containers with the same name
    # default value: elabftw
    container_name: elabftw
    # limit number of processes
    pids_limit: 42
    # drop some capabilities not needed by the app
    cap_drop:
      - SYS_ADMIN
      - AUDIT_WRITE
      - MKNOD
      - SYS_CHROOT
      - SETFCAP
      - NET_RAW
      - SYS_PTRACE
    # environment variables passed to the container to configure options at run time (when container is started)
    # commented variables are optional
    environment:
      #######################
      # MYSQL CONFIGURATION #
      #######################
      # name of the MySQL server (by default "mysql" the name of the mysql container in default elabftw Docker configuration)
      # you can put here the IP address of an existing MySQL server if you already have one running
      # default value: mysql
      - DB_HOST=mysql
      # port on which the MySQL server is listening
      # you probably don't need to modify this value
      # default value: 3306
      - DB_PORT=3306
      # name of the MySQL database
      # you probably don't need to modify this value
      # default value: elabftw
      - DB_NAME=elabftw
      # MySQL user with write access to the previously named database
      # you probably don't need to modify this value
      # default value: elabftw
      - DB_USER=elabftw
      # MySQL password; a random password has been generated for you but feel free to change it if needed
      # default value: generated randomly if you get the config from get.elabftw.net
      - DB_PASSWORD=
      # Mysql Cert path: you only need this if you connect to a mysql server with tls
      # Use a volume that points to /mysql-cert in the container
      # optional
      #- DB_CERT_PATH=/mysql-cert/cert.pem
      #####################
      # PHP CONFIGURATION #
      #####################
      # the timezone in which the server is
      # better if changed (see list of available values: http://php.net/manual/en/timezones.php
      - PHP_TIMEZONE=Europe/Paris
      # again
      - TZ=Europe/Paris
      # optional: set the limit of simultaneous request that will be server
      # see http://php.net/manual/en/install.fpm.configuration.php
      # default value: 50
      #- PHP_MAX_CHILDREN=50
      # optional: adjust the max execution time of PHP scripts. Allows for bigger ZIP exports.
      # default value: 120
      #- PHP_MAX_EXECUTION_TIME=120
      # optional: adjust the amount of memory available to PHP, increase it if you run into memory issues due to the size of your database
      # default value: 256M
      #- MAX_PHP_MEMORY=256M
      #########################
      # ELABFTW CONFIGURATION #
      #########################
      # The secret key is used for encrypting the SMTP password
      # A random one has been generated for you, if you wish to change it you can
      # get your secret key from https://demo.elabftw.net/install/generateSecretKey.php
      # if you don't want to get it from an external source you can also do that:
      # docker run --rm -t --entrypoint '/bin/sh' elabftw/elabimg -c "php /elabftw/web/install/generateSecretKey.php"
      # default value: generated randomly if you get the config from get.elabftw.net
      - SECRET_KEY=def00000becc6e2c28e5dfd0f4728d5dc0f6d1f4244783e241e567a3860a6b4c01469042e6a9ebdc278d1ed026d8a0be1ce6b0c2c30891069daedbb01256d69adc42a0be
      # optional: adjust maximum size of uploaded files
      # default value: 100M
      #- MAX_UPLOAD_SIZE=100M
      #######################
      # NGINX CONFIGURATION #
      #######################
      # change to your server name in nginx config
      # default value: localhost
      # example value: elab.uni.edu
      - SERVER_NAME=localhost
      # optional: disable https, use this to have an http server listening on port 443
      # useful if the SSL stack is handled by haproxy or something alike
      # default value: false
      - DISABLE_HTTPS=true
      # set to true to use letsencrypt or other certificates
      # note: does nothing if DISABLE_HTTPS is set to true
      # default value: false
      - ENABLE_LETSENCRYPT=false
      # optional: enable ipv6 (make sure you have an AAAA dns record!)
      # default value: false
      #- ENABLE_IPV6=false
      # optional: adjust the user/group that will own the uploaded files
      # useful in very particular situations, like with NFSv4
      # you don't really need to change this in most situations
      # so this is left commented (default values are shown)
      # default value: nginx
      #- ELABFTW_USER=nginx
      # default value: nginx
      #- ELABFTW_GROUP=nginx
      # default value: 101
      #- ELABFTW_USERID=101
      # default value: 101
      #- ELABFTW_GROUPID=101
      # optional: enable if you want nginx to be configured with set_real_ip_from directives
      # default value: false
      #- SET_REAL_IP=false
      # the IP address/addresses. Separate them with a , AND A SPACE. Several set_real_ip_from lines will be added to the nginx config. One for each.
      # this does nothing if SET_REAL_IP is set to false
      #- SET_REAL_IP_FROM=192.168.31.48, 192.168.0.42, 10.10.13.37
      # optional: adjust the number of worker processes nginx will spawn
      # default value: auto
      # if auto doesn't work for you, use the number of cores available on the server (or less)
      #- NGINX_WORK_PROC=auto
      #######################
      # REDIS CONFIGURATION #
      #######################
      # optional: use a redis server to store the PHP sessions
      # default value: false
      #- USE_REDIS=false
      # optional: set an IP or hostname for the redis server
      # default value: redis
      #- REDIS_HOST=redis
      # optional: set a custom port for redis
      # default value: 6379
      #- REDIS_PORT=6379
      #################
      # MISCELLANEOUS #
      #################
      # optional: be less verbose during init
      # default value: false
      #- SILENT_INIT: false
      #######
      # DEV #
      #######
      # set to true for development
      # default value: false
      #- DEV_MODE: false
    ports:
      # if you want elabftw to run on a different port, change the first number
      # host:container
      - '3148:443'
      # if you are aiming for running multiple instances of this container you can put a range like so:
      # - "3100-3200:443"
      # use redis for session storage if that is the case, or configure your load balancer with sticky sessions
    volumes:
      # this is where you will keep the uploaded files persistently
      # for Windows users it might look like this
      # - D:\Users\Nico\elab-data\web:/elabftw/uploads
      # host:container
      - /volume1/docker/Container/elabftw/web:/elabftw/uploads
      #
      # TLS configuration
      #
      # Note: if your certificate is not from letsencrypt, make sure to have those two files:
      #
      # /etc/letsencrypt/live/SERVER_NAME/fullchain.pem
      # /etc/letsencrypt/live/SERVER_NAME/privkey.pem
      #
      # in the folder /etc/letsencrypt (or any folder you like as long as you adapt the line below
      # replace SERVER_NAME with the value of SERVER_NAME of course.
      #
      # if you have enabled letsencrypt, uncomment the line below
      # path to the folder with TLS certificate + private key
      # host:container
      #- /etc/letsencrypt:/ssl
      #
      # MYSQL cert path
      #- /path/to/cert/folder:/mysql-cert
    networks:
      - elabftw-net
  # the mysql database image
  # Note: if you already have a MySQL server running, you don't need to use this image, as you can use the already existing one
  # In this case, add the IP address of the server in DB_HOST and comment out or remove this block
  mysql:
    image: mysql:8.0
    restart: always
    # fix issue with "The server requested authentication method unknown to the client [caching_sha2_password]"
    command: --default-authentication-plugin=mysql_native_password
    container_name: mysql
    # drop some capabilities
    cap_drop:
      - AUDIT_WRITE
      - MKNOD
      - SYS_CHROOT
      - SETFCAP
      - NET_RAW
    cap_add:
      - SYS_NICE
    environment:
      # need to change
      - MYSQL_ROOT_PASSWORD=X54DtNOryK2flSYOIo2raoc4m0qUQ90
      # no need to change
      - MYSQL_DATABASE=elabftw
      # no need to change
      - MYSQL_USER=elabftw
      # need to change IMPORTANT: this should be the same password as DB_PASSWORD from the elabftw container
      - MYSQL_PASSWORD=
      # need to change, this is your timezone, see PHP_TIMEZONE from the elabftw container
      - TZ=Europe/Paris
    volumes:
      # this is where you will keep the database persistently
      # for Windows users it might look like this
      # - D:\Users\Nico\elab-data\mysql:/var/lib/mysql
      # host:container
      - /var/elabftw/mysql:/var/lib/mysql
    expose:
      - '3306'
    networks:
      - elabftw-net
  # example of a redis container
  # uncomment if you want to spawn a redis container to manage sessions
  #redis:
  #  image: redis:6.0-alpine
  #  restart: always
  #  container_name: redis
  #  networks:
  #    - elabftw-net
  ###############################################################
  # EVERYTHING BELOW THIS LINE IS FOR DEVELOPMENT PURPOSES ONLY #
  ###############################################################
  # PHPMYADMIN
  # uncomment this part if you want to have phpmyadmin running too
  #phpmyadmin:
  #  image: phpmyadmin/phpmyadmin
  #  container_name: phpmyadmin
  #  environment:
  #    - PMA_PORT=3307
  #  links:
  #    - mysql:db
  #  ports:
  #    - "8080:80"
  #  networks:
  #    - elabftw-net
  # LDAP
  # example for ldap server + admin interface
  # uncomment if you want to work on LDAP authentication
  #ldap:
  #  image: osixia/openldap:1.4.0
  #  container_name: ldap
  #  restart: always
  #  hostname: example.org
  #  environment:
  #    - LDAP_TLS_VERIFY_CLIENT=try
  #    - LDAP_OPENLDAP_UID=1000
  #    - LDAP_OPENLDAP_GID=1000
  #  ports:
  #    - "389:389"
  #    - "636:636"
  #  volumes:
  #    - /var/elabftw/ldap-data/ldap:/var/lib/ldap
  #    - /var/elabftw/ldap-data/slapd.d:/etc/ldap/slapd.d
  #  networks:
  #    - elabftw-net
  #ldapadmin:
  #  image: osixia/phpldapadmin:0.9.0
  #  container_name: ldapadmin
  #  environment:
  #    - PHPLDAPADMIN_LDAP_HOSTS=ldap
  #  restart: always
  #  ports:
  #    - "6443:443"
  #  networks:
  #    - elabftw-net

# the internal elabftw network
networks:
  elabftw-net:
It means that docker-compose is not installed.
You should try to install Docker first, then install docker-compose:
https://docs.docker.com/get-docker/
https://docs.docker.com/compose/install/
You should execute that command in sudo mode.
sudo -i
# enter password
docker-compose up -d

Ambassador API Gateway doesn't pickup services

I'm a new Ambassador user here. I have walked through the tutorial, in an effort to understand how to use the Ambassador gateway. I am attempting to run this locally via Docker Compose until it's ready for deployment to K8s in production.
My use case is that all http traffic comes in on port 80 and is then directed to the appropriate service. Is it considered best practice to have a docker-compose.yaml file in the working directory that refers to services in the /config directory? I ask because this doesn't appear to actually pick up my files (the postgres startup doesn't show in the console). And when I run "docker ps" I only see:
CONTAINER ID IMAGE PORTS NAMES
8bc8393ac04c 05a916199684 k8s_statsd_ambassador-8564bfb874-q97l9_default_e775d686-a93c-11e8-9caa-025000000001_0
1c00f2341caf d7cf7cf837f9 k8s_ambassador_ambassador-8564bfb874-q97l9_default_e775d686-a93c-11e8-9caa-025000000001_0
fe20c4819514 05a916199684 k8s_statsd_ambassador-8564bfb874-xzvkl_default_e775ffe6-a93c-11e8-9caa-025000000001_0
ba6415b028ba d7cf7cf837f9 k8s_ambassador_ambassador-8564bfb874-xzvkl_default_e775ffe6-a93c-11e8-9caa-025000000001_0
9df07dc5083d 05a916199684 k8s_statsd_ambassador-8564bfb874-w5vsq_default_e773ed53-a93c-11e8-9caa-025000000001_0
682e1f9902a0 d7cf7cf837f9 k8s_ambassador_ambassador-8564bfb874-w5vsq_default_e773ed53-a93c-11e8-9caa-025000000001_0
bb6d2f749491 quay.io/datawire/ambassador:0.40.2 0.0.0.0:80->80/tcp apigateway_ambassador_1
I have a docker-compose.yaml:
version: '3.1'

# Define the services/containers to be run
services:
  ambassador:
    image: quay.io/datawire/ambassador:0.40.2
    ports:
      - 80:80
    volumes:
      # mount a volume where we can inject configuration files
      - ./config:/ambassador/config
  postgres:
    image: my-postgresql
    ports:
      - '5432:5432'
and in /config/mapping-postgres.yaml:
---
apiVersion: ambassador/v0
kind: Mapping
name: postgres_mapping
rewrite: ""
service: postgres:5432
volumes:
  - ../my-postgres:/docker-entrypoint-initdb.d
environment:
  - POSTGRES_MULTIPLE_DATABASES=db1, db2, db3
  - POSTGRES_USER=<>
  - POSTGRES_PASSWORD=<>
volumes and environment are not valid configs for Ambassador Mappings. Ambassador lets you proxy to postgres, but the authentication has to be handled by your application.
Having said that, it looks like your Postgres container is not starting (perhaps because it needs an initial config). You can check for errors with:
$ docker ps -a | grep postgres
$ docker logs <container-id-from-previous-step>
You can also check a postgres docker compose example here.
Is it considered best practice to have a docker-compose.yaml file in the working directory that refers to services in the /config directory?
It's pretty standard, but you can use any directory you like for this.