I'm currently trying to set up Mattermost on Docker following the official guide.
After copying and adjusting the .env file, and while deploying the containers (using docker-compose.without-nginx.yml, as I'm going to set up Traefik as the reverse proxy) with:
sudo docker-compose -f docker-compose.yml -f docker-compose.without-nginx.yml up -d
the following error message is returned:
WARN[0000] The "POSTGRES_IMAGE_TAG" variable is not set. Defaulting to a blank string.
WARN[0000] The "RESTART_POLICY" variable is not set. Defaulting to a blank string.
WARN[0000] The "POSTGRES_DATA_PATH" variable is not set. Defaulting to a blank string.
WARN[0000] The "RESTART_POLICY" variable is not set. Defaulting to a blank string.
WARN[0000] The "MATTERMOST_CONFIG_PATH" variable is not set. Defaulting to a blank string.
WARN[0000] The "MATTERMOST_DATA_PATH" variable is not set. Defaulting to a blank string.
WARN[0000] The "MATTERMOST_LOGS_PATH" variable is not set. Defaulting to a blank string.
WARN[0000] The "MATTERMOST_PLUGINS_PATH" variable is not set. Defaulting to a blank string.
WARN[0000] The "MATTERMOST_CLIENT_PLUGINS_PATH" variable is not set. Defaulting to a blank string.
WARN[0000] The "MATTERMOST_BLEVE_INDEXES_PATH" variable is not set. Defaulting to a blank string.
WARN[0000] The "MATTERMOST_CONTAINER_READONLY" variable is not set. Defaulting to a blank string.
error while interpolating services.mattermost.read_only: failed to cast to expected type: invalid boolean:
My .env file looks something like this:
# Domain of service
DOMAIN=my-sub.domain.com
# Container settings
## Timezone inside the containers. The value needs to be in the form 'Europe/Berlin'.
## A list of these tz database names can be looked up at Wikipedia
## https://en.wikipedia.org/wiki/List_of_tz_database_time_zones
TZ=Europe/Berlin
RESTART_POLICY=unless-stopped
# Postgres settings
## Documentation for this image and available settings can be found on hub.docker.com
## https://hub.docker.com/_/postgres
## Please keep in mind this will create a superuser and it's recommended to use a less privileged
## user to connect to the database.
## A guide on how to change the database user to a nonsuperuser can be found in docs/creation-of-nonsuperuser.md
POSTGRES_IMAGE_TAG=13-alpine
POSTGRES_DATA_PATH=./volumes/db/var/lib/postgresql/data
POSTGRES_USER=adjustedusername
POSTGRES_PASSWORD=adjustedpassword
POSTGRES_DB=mattermost
# Nginx
## The nginx container will use a configuration found at the NGINX_MATTERMOST_CONFIG. The config aims
## to be secure and uses a catch-all server vhost which will work out-of-the-box. For additional settings
## or changes, one can edit it or provide another config. Important note: inside the container, nginx sources
## every config file inside */etc/nginx/conf.d* ending with a *.conf* file extension.
## Inside the container the uid and gid is 101. The folder owner can be set with
## `sudo chown -R 101:101 ./nginx` if needed.
NGINX_IMAGE_TAG=alpine
## The folder containing server blocks and any additional config to nginx.conf
NGINX_CONFIG_PATH=./nginx/conf.d
NGINX_DHPARAMS_FILE=./nginx/dhparams4096.pem
CERT_PATH=./volumes/web/cert/cert.pem
KEY_PATH=./volumes/web/cert/key-no-password.pem
#GITLAB_PKI_CHAIN_PATH=<path_to_your_gitlab_pki>/pki_chain.pem
#CERT_PATH=./certs/etc/letsencrypt/live/${DOMAIN}/fullchain.pem
#KEY_PATH=./certs/etc/letsencrypt/live/${DOMAIN}/privkey.pem
## Exposed ports to the host. Inside the container 80 and 443 will be used
HTTPS_PORT=443
HTTP_PORT=80
# Mattermost settings
## Inside the container the uid and gid is 2000. The folder owner can be set with
## `sudo chown -R 2000:2000 ./volumes/app/mattermost`.
MATTERMOST_CONFIG_PATH=./volumes/app/mattermost/config
MATTERMOST_DATA_PATH=./volumes/app/mattermost/data
MATTERMOST_LOGS_PATH=./volumes/app/mattermost/logs
MATTERMOST_PLUGINS_PATH=./volumes/app/mattermost/plugins
MATTERMOST_CLIENT_PLUGINS_PATH=./volumes/app/mattermost/client/plugins
MATTERMOST_BLEVE_INDEXES_PATH=./volumes/app/mattermost/bleve-indexes
## Bleve index (inside the container)
MM_BLEVESETTINGS_INDEXDIR=/mattermost/bleve-indexes
## This will be 'mattermost-enterprise-edition' or 'mattermost-team-edition' based on the version of Mattermost you're installing.
MATTERMOST_IMAGE=mattermost-team-edition
MATTERMOST_IMAGE_TAG=7.1
## Make Mattermost container readonly. This interferes with the regeneration of root.html inside the container. Only use
## it if you know what you're doing.
## See https://github.com/mattermost/docker/issues/18
MATTERMOST_CONTAINER_READONLY=false
## The app port is only relevant for using Mattermost without the nginx container as reverse proxy. This is not meant
## to be used with the internal HTTP server exposed but rather in case one wants to host several services on one host
## or for using it behind another existing reverse proxy.
APP_PORT=8065
## Configuration settings for Mattermost. Documentation on the variables and the settings itself can be found at
## https://docs.mattermost.com/administration/config-settings.html
## Keep in mind that variables set here will take precedence over the same setting in config.json. This includes
## the system console as well and settings set with env variables will be greyed out.
## Below one can find necessary settings to spin up the Mattermost container
MM_SQLSETTINGS_DRIVERNAME=postgres
MM_SQLSETTINGS_DATASOURCE=postgres://${POSTGRES_USER}:${POSTGRES_PASSWORD}@postgres:5432/${POSTGRES_DB}?sslmode=disable&connect_timeout=10
## Example settings (any additional setting added here also needs to be introduced in the docker-compose.yml)
MM_SERVICESETTINGS_SITEURL=https://${DOMAIN}
Any advice? Thanks and best regards.
Don't rename your .env file to something like mattermost.env; just leave it as .env and you'll have no problems. By default Compose reads interpolation variables only from a file named .env next to the docker-compose.yml.
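One quick way to confirm the variables will be visible before bringing anything up: a small preflight sketch (a hypothetical script, with the variable names taken from the warnings above) that checks whether the .env in the current directory defines the keys the Compose file interpolates:

```shell
# Hypothetical preflight check: run next to docker-compose.yml and .env.
# Lists the variables from the warnings above that .env does not define.
for var in POSTGRES_IMAGE_TAG RESTART_POLICY POSTGRES_DATA_PATH \
           MATTERMOST_CONFIG_PATH MATTERMOST_DATA_PATH \
           MATTERMOST_CONTAINER_READONLY; do
  grep -q "^${var}=" .env 2>/dev/null || echo "missing: $var"
done
```

If every variable is reported missing even though the keys exist, the file is probably not named .env or is not sitting in the same directory as the compose file.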
setting up a .devcontainer configuration ...
I have a mount that normally lives in one spot, but when working remotely, can live in a different path. I want to set up a devcontainer so that I can change an environment variable locally, without making a commit change to the repos. I can't change the target because it is a convention that spans many tools and systems.
"mounts": [
"type=bind,source=${localEnv:MY_DIR},target=/var/local/my_dir,readonly"
]
For example, in a .env file in the project root:
MY_DIR=~/workspace/my_local_dir
When I try this, no matter what combination I try, this environment variable always ends up blank, and then Docker is sad when it gets a --mount configuration that is missing a source:
[2022-02-12T19:11:17.633Z] Start: Run: docker run --sig-proxy=false -a STDOUT -a STDERR --mount type=bind,source=,target=/var/local/my_dir,readonly --entrypoint /bin/sh vsc-sdmx-9c04deed4ad9f1e53addf97c69727933 -c echo Container started
[2022-02-12T19:11:17.808Z] docker: Error response from daemon: invalid mount config for type "bind": field Source must not be empty.
See 'docker run --help'.
The reference shows that this type of syntax is allowed for a mount, and that a .env file will be picked up by VS Code. But I guess it only does that for the running container and not for VS Code itself.
How do I set up a developer-specific configuration change?
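One thing worth checking, on the assumption that ${localEnv:...} resolves against the environment of the VS Code process itself rather than against a .env file in the workspace: export the variable, as an absolute path, before launching VS Code. A sketch:

```shell
# Docker will not expand ~ in a bind-mount source, so use an absolute path
# ($HOME-based here; adjust to your layout).
export MY_DIR="$HOME/workspace/my_local_dir"
echo "MY_DIR=$MY_DIR"
# Then launch VS Code from this same shell so the Dev Containers extension
# inherits the variable, e.g.:
#   code .
```

Since the export lives in each developer's shell profile rather than in the repo, this keeps the configuration developer-specific without committing anything.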
I can't really make sense of docker-compose's behavior with regard to environment variable files.
I've defined a few variables for a simple echo-server setup with two Flask applications running.
In .env:
FLASK_RUN_PORT=5000
WEB_1_PORT=80
WEB_2_PORT=8001
Then in docker-compose.yml:
version: '3.8'

x-common-variables: &shared_envvars
  FLASK_ENV: development
  FLASK_APP: main.py
  FLASK_RUN_HOST: 0.0.0.0
  COMPOSE_PROJECT_NAME: DOCKER_ECHOES

x-volumes: &com_volumes
  - .:/project  # maps the current directory (the gitted project root) into the container so we can live-reload

services:
  web_1:
    env_file: .env
    build:
      dockerfile: dockerfile_flask
      context: .
    ports:
      - "${WEB_1_PORT}:${FLASK_RUN_PORT}"  # flask runs on 5000 (default); docker-compose loads the env vars and allows them to be used this way here
    volumes: *com_volumes
    environment:
      <<: *shared_envvars  # DRY: define common stuff in a shared section above and use YAML merge syntax to include that k-v mapping here. Pretty neat.
      FLASK_NAME: web_1
  web_2:
    env_file: .env
    build:
      dockerfile: dockerfile_flask
      context: .
    ports:
      - "${WEB_2_PORT}:${FLASK_RUN_PORT}"  # flask by default runs on 5000, so keep that in the container and :8001 on the host
    volumes: *com_volumes
    environment:
      <<: *shared_envvars
      FLASK_NAME: web_2
If I run docker-compose up with the above, everything works as expected.
However, if I simply rename the file .env to, say, flask.env, and accordingly change both env_file: .env entries to env_file: flask.env, then I get:
(venv) [fv@fv-hpz420workstation flask_echo_docker]$ docker-compose up
WARNING: The WEB_1_PORT variable is not set. Defaulting to a blank string.
WARNING: The FLASK_RUN_PORT variable is not set. Defaulting to a blank string.
WARNING: The WEB_2_PORT variable is not set. Defaulting to a blank string.
ERROR: The Compose file './docker-compose.yml' is invalid because:
So obviously the env vars defined in the file were not loaded in that case. I know that, according to the documentation, the environment: section, which I am using, overrides what is loaded via env_file:. But those aren't the same variables. And at any rate, if that were the issue, it shouldn't work the first way either, right?
What's wrong with the above?
Actually, the env_file is loaded AFTER the images have been built. We can verify this: with the code posted above, the env file has clearly not been loaded at build time, given the error messages I get (telling me WEB_1_PORT is not set, etc.).
But that alone could mean the file is never loaded at all. To rule that out, build the image first (say, by exporting the missing variables in the shell), and then verify that the file is indeed loaded at run time (by logging a value in the Flask application, or a simple print to screen, etc.).
This means the content of env_file is available to the running container, but not before (such as when building the image).
If those variables are to be used for interpolation within the docker-compose.yml file itself, the file MUST be named .env (newer Compose releases also let you point at another file with the --env-file command-line flag). This is why changing env_file: flask.env back to env_file: .env SEEMED to make it work; the real reason is that my ports were specified in a file with the default name .env, which docker-compose parses anyway. It didn't care whether I referenced that file in the docker-compose.yml or not.
To summarize - TL;DR
If you need to feed environment variables to docker-compose for interpolating the compose file itself (e.g. in ports:), store them in a file named .env in the same directory as the docker-compose.yml; newer Compose releases also accept an explicit docker-compose --env-file flag.
To provide env vars at container run-time, you can put them in foo.env and then specify env_file: foo.env.
For run-time variables, another option is to specify them under environment:, if hard-coding them in the docker-compose.yml is acceptable. According to the docs (not tested), those will override any variables also defined via env_file.
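The blank-string degradation behind those WARNING lines can be seen in miniature with plain shell parameter expansion, which behaves much like Compose-file interpolation: a set variable fills in, an unset one becomes an empty string.

```shell
# WEB_1_PORT and FLASK_RUN_PORT stand in for values loaded from .env;
# WEB_2_PORT is deliberately left unset.
WEB_1_PORT=80
FLASK_RUN_PORT=5000
echo "variables available:      ${WEB_1_PORT}:${FLASK_RUN_PORT}"
echo "WEB_2_PORT not available: ${WEB_2_PORT:-}:${FLASK_RUN_PORT}"
# prints 80:5000, then :5000 -- an empty host port, which is exactly what
# makes the Compose file invalid.
```

Running docker-compose config is a convenient way to see the interpolated result before bringing anything up.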
I want to export some data from MongoDB Atlas.
If I execute the command below, it tries to connect to localhost to export the data.
mongoexport --uri="mongodb+srv://<username>:<password>@name-of-project-x2lpw.mongodb.net/test" --collection users --out /tmp/testusers.json
Note: If you run this command from Windows CMD, it works fine
After researching the problem and with the help of a user, everything seems to point to a DNS problem and to the related resolv.conf file.
Below is the original /etc/resolv.conf:
# This file is managed by man:systemd-resolved(8). Do not edit.
#
# This is a dynamic resolv.conf file for connecting local clients to the
# internal DNS stub resolver of systemd-resolved. This file lists all
# configured search domains.
#
# Run "systemd-resolve --status" to see details about the uplink DNS servers
# currently in use.
#
# Third party programs must not access this file directly, but only through the
# symlink at /etc/resolv.conf. To manage man:resolv.conf(5) in a different way,
# replace this symlink by a static file or a different symlink.
#
# See man:systemd-resolved.service(8) for details about the supported modes of
# operation for /etc/resolv.conf.
nameserver 127.0.0.53
options edns0
search name.com
At the beginning this resulted in a connection failure.
But if I change that address to the publicly available 1.1.1.1, as advised in this post, the connection is successful. The modified file:
# This file is managed by man:systemd-resolved(8). Do not edit.
#
# This is a dynamic resolv.conf file for connecting local clients to the
# internal DNS stub resolver of systemd-resolved. This file lists all
# configured search domains.
#
# Run "systemd-resolve --status" to see details about the uplink DNS servers
# currently in use.
#
# Third party programs must not access this file directly, but only through the
# symlink at /etc/resolv.conf. To manage man:resolv.conf(5) in a different way,
# replace this symlink by a static file or a different symlink.
#
# See man:systemd-resolved.service(8) for details about the supported modes of
# operation for /etc/resolv.conf.
nameserver 1.1.1.1
options edns0
search name.com
With 1.1.1.1 the connection succeeded and the export worked.
HOWEVER, the problem is that instead of explicitly connecting to the named MongoDB cluster, mongoexport reports connecting to localhost, which is very strange, as I did successfully export the files I was looking for from the real connection.
So the machine was connecting to the database correctly, but apparently via localhost.
Everything seems to point (also according to this source and here) to a DNS problem when connecting to MongoDB via the terminal to export collections.
Per that last post it is not advisable to change this address manually, for several reasons, so right after successfully exporting the data using DNS 1.1.1.1 I changed it back to the original 127.0.0.53.
However, this doesn't seem like proper behavior: every time I need to export data I would have to manually switch this address back and forth.
What could be the reason for this strange behavior? And therefore what could be a long term solution without manually switching between DNS addresses?
Thanks for pointing to the right direction for solving this issue.
It seems you already have the answer in the links you mentioned. I will summarize it:
Install resolvconf (on Ubuntu: apt install resolvconf), add the line nameserver 8.8.8.8 to /etc/resolvconf/resolv.conf.d/base, then run sudo resolvconf -u and, to be sure, service resolvconf restart.
To verify run systemd-resolve --status.
You should see on the first line your DNS server like here:
DNS Servers: 8.8.8.8
DNS Domain: sa-east-1.compute.internal
DNSSEC NTA: 10.in-addr.arpa
16.172.in-addr.arpa
This solution persists between reboots.
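To check which resolver a machine is currently on, you can pull the nameserver lines out of resolv.conf; here the one-liner is run against the stub-resolver contents quoted in the question (127.0.0.53 indicates the systemd-resolved stub, while a public address such as 8.8.8.8 indicates a direct upstream):

```shell
# Extract the active nameserver(s) from resolv.conf-style text.
printf 'nameserver 127.0.0.53\noptions edns0\n' \
  | awk '/^nameserver/ {print $2}'
# prints 127.0.0.53 -> still on the systemd-resolved stub
```

Against the live file the same check is: awk '/^nameserver/ {print $2}' /etc/resolv.conf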
I use Traefik 1.7.14 and I want to use basic auth for my Grafana docker-compose service.
I followed e.g. https://medium.com/@xavier.priour/secure-traefik-dashboard-with-https-and-password-in-docker-5b657e2aa15f
but I also looked at other sources.
In my docker-compose.yml I have for grafana:
grafana:
  image: grafana/grafana
  labels:
    - "traefik.enable=true"
    - "traefik.backend=grafana"
    - "traefik.port=3000"
    - "traefik.frontend.rule=Host:grafana.my-domain.io"
    - "traefik.frontend.entryPoints=http,https"
    - "traefik.frontend.auth.basic.users=${ADMIN_CREDS}"
ADMIN_CREDS is in my .env file. I created its content with htpasswd -nbm my_user my_password; I also tried htpasswd -nbB my_user my_password for bcrypt instead of MD5 hashing.
In .env
ADMIN_CREDS=test:$apr1$f0uSe/rs$KGSQaPMD.352XdXIzsfyY0
You see: I did not escape $ signs in the .env file.
When I inspect my container at runtime I see exactly the same encrypted password as in my .env file!
docker inspect 47aa3dbc3623 | grep test
gives me:
"traefik.frontend.auth.basic.users": "test:$apr1$f0uSe/rs$KGSQaPMD.352XdXIzsfyY0",
I also tried putting the user/password string directly into the docker-compose.yml, this time escaping the $ signs.
The inspect command was successful too.
BUT: when I call my Grafana URL I get a basic-auth dialog box, and when I type in my user/password combination I always get a
{"message":"Invalid username or password"}
What could be still wrong here? I have currently no idea.
This message actually means that you passed Traefik's basic auth; the basic-auth window would pop up again if you had entered invalid credentials.
Grafana uses basic auth on its own, and it is that one which is failing.
DO NOT DO THIS IN PRODUCTION: to prove it, you could configure Grafana to ask for the same user and password. It would then accept the forwarded basic-auth header from Traefik and allow access.
However, you should set up basic auth either in Traefik OR via Grafana's own basic auth, not both.
You might also want to check the information on running Grafana behind a reverse proxy: https://grafana.com/tutorials/run-grafana-behind-a-proxy/#1
and especially https://grafana.com/docs/grafana/latest/auth/auth-proxy/
Another option, besides forwarding the auth headers, is to disable forwarding them:
labels:
  ...
  - "traefik.http.middlewares.authGrafana.basicauth.removeheader=true"
Now you should see the grafana login page.
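As an aside on the $-escaping discussed in the question: doubling is only needed when the hash is pasted directly into docker-compose.yml (values read from a .env file are substituted literally, which matches the observation above). A one-liner to do the escaping, using the hash from the question:

```shell
# Double every $ so Compose does not treat it as a variable reference.
HASH='test:$apr1$f0uSe/rs$KGSQaPMD.352XdXIzsfyY0'
printf '%s\n' "$HASH" | sed 's/\$/$$/g'
# -> test:$$apr1$$f0uSe/rs$$KGSQaPMD.352XdXIzsfyY0
```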
I'm using OpenShift and Sinatra to host my website, but it's not binding to the right port.
set :port, ENV["OPENSHIFT_RUBY_PORT"]
set :bind, ENV["OPENSHIFT_RUBY_IP"]
...
puts ENV["OPENSHIFT_RUBY_PORT"]
puts settings.port
puts ENV["OPENSHIFT_RUBY_IP"]
puts settings.bind
These print the correct values. But when the server actually starts...
Listening on localhost:9292, CTRL+C to stop
The error:
no acceptor (port is in use or requires root privileges) (RuntimeError)
How do I get it to bind to the right port?
set :port, ... sets the port for Sinatra's built-in server, but you are using rackup, so this setting is not used (9292 is Rack's default port).
You can use the -p or --port options to rackup to set the port. From the command line you can do:
$ bundle exec rackup -p $OPENSHIFT_RUBY_PORT
You can also specify command line options in the first line of the config.ru, but I don’t think you can specify environment variables there.
If you want to avoid specifying the port on the command line, you may need to create a wrapper script that reads the environment variables and calls rackup.
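Such a wrapper might look like this (a sketch; the fallback values are Rack's defaults, and the file name start.sh is an assumption):

```shell
#!/bin/sh
# Read the OpenShift variables, falling back to Rack's defaults when they
# are unset, and hand them to rackup.
HOST="${OPENSHIFT_RUBY_IP:-0.0.0.0}"
PORT="${OPENSHIFT_RUBY_PORT:-9292}"
if [ -f config.ru ] && command -v bundle >/dev/null 2>&1; then
  exec bundle exec rackup --host "$HOST" --port "$PORT"
fi
# Dry-run fallback so the sketch is runnable outside the app directory:
echo "would run: rackup --host $HOST --port $PORT"
```

Make it executable with chmod +x start.sh and point your start command at it instead of bare rackup.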