Maybe I'm being really dumb and not understanding it properly; I want to define my docker-compose.yml file like so:
web:
  ...
  environment:
    FOO: bar
    BAR: foo
And I want to expose those environment variables for my runtime applications that will run inside the container (I guess by mapping them to shell variables?).
What's the best way to achieve this?
I think you just have a minor syntax issue in your yml, try changing it to:
web:
  ...
  environment:
    - FOO=bar
    - BAR=foo
You should then be able to access those variables in your code as expected.
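Inside the container those entries become ordinary process environment variables, so any language's standard env lookup sees them. A minimal sketch in Python (the variable names match the compose snippet above; read_config is just an illustrative helper, not part of any framework):

```python
import os

def read_config(name, default=None):
    # docker-compose's `environment:` entries become ordinary process
    # environment variables inside the container, so a plain
    # os.environ lookup is all that is needed.
    return os.environ.get(name, default)

# e.g. inside the `web` container above, FOO would resolve to "bar"
print(read_config("FOO", "not set"))
```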
I am trying to run the latest Artifactory 7 OSS with docker-compose, and I do not want to use the Derby database, but PostgreSQL.
According to the documentation, I need to define the ENV variables:
environment:
  - DB_TYPE=postgresql
  - DB_USER=user
  - DB_PASSWORD=mypass
  - DB_URL=jdbc:postgresql://postgresql-docker:5432/database
That seems correct, but when launching the server, I get:
java.lang.RuntimeException: Driver org.apache.derby.jdbc.EmbeddedDriver claims to not accept jdbcUrl, jdbc:postgresql://postgresql-docker:5432/database
It seems it is still using the Derby driver and not the PostgreSQL one (which I expected the DB_TYPE parameter to select).
How can I force it to use PostgreSQL? There is no DB_TYPE variable (or anything similar) in the documentation. Which are the correct parameters for Artifactory 7 OSS?
OK, it seems I was following deprecated documentation. Looking into the logs, the variables must now be:
- JF_SHARED_DATABASE_TYPE=postgresql
- JF_SHARED_DATABASE_USERNAME=user
- JF_SHARED_DATABASE_PASSWORD=mypass
- JF_SHARED_DATABASE_URL=jdbc:postgresql://postgresql-docker:5432/database
Searching further, I found one extra variable:
JF_SHARED_DATABASE_DRIVER=org.postgresql.Driver
That is the one that solves the problem.
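For context, a minimal compose sketch wiring the two services together might look like the following. The image tags and service names here are assumptions for illustration, not taken from the question or the official docs:

```yaml
services:
  postgresql-docker:
    image: postgres:13            # assumed version
    environment:
      - POSTGRES_USER=user
      - POSTGRES_PASSWORD=mypass
      - POSTGRES_DB=database
  artifactory:
    image: releases-docker.jfrog.io/jfrog/artifactory-oss:7.19.4   # assumed tag
    depends_on:
      - postgresql-docker
    environment:
      - JF_SHARED_DATABASE_TYPE=postgresql
      - JF_SHARED_DATABASE_USERNAME=user
      - JF_SHARED_DATABASE_PASSWORD=mypass
      - JF_SHARED_DATABASE_URL=jdbc:postgresql://postgresql-docker:5432/database
      - JF_SHARED_DATABASE_DRIVER=org.postgresql.Driver
```

Note that the JDBC URL host (postgresql-docker) must match the database service name so compose's internal DNS can resolve it.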
Our dev team uses different development configs (let's say docker-compose.local.yml and docker-compose.remote.yml, maybe more to come) when coding with vscode containers. Some want a local database, others want it remote, for example. We want to let them choose, but we also want to keep them updated. The configs are mostly similar, but one or two containers might differ (and it's not just environment variables), so we really need multiple docker-compose files.
We also want to keep all our different docker-compose files and our devcontainer.json in git; this way we can change our docker-compose or devcontainer.json files and each dev will automatically be up to date. No one should be able to push their docker-compose choice into our codebase.
Here's our devcontainer.json:
{
  "name": "XXXXXX-Backend",
  "dockerComposeFile": "docker-compose.local.yml", // Ideally, this line should change according to the dev
  "service": "backend-app",
  "workspaceFolder": "/workspace",
  ...
}
Is there a way for us to keep the docker-compose choice each dev made on their own machine, without ever pushing it to our codebase, while still keeping their config up to date?
We tried:
Asking the dev to copy/paste their config into a dedicated, gitignored folder, from which the devcontainer.json will load it. But this means that if we change a config, we need to ask every dev to copy/paste it again.
Loading a variable (the name of the docker-compose file, defined locally by the dev) in devcontainer.json, but that's not doable.
Adding the devcontainer.json line to the gitignore, so the choice is kept local and the docker-compose files stay updated, but... it feels like a hack. Ultimately, this will be our choice if nothing else comes out of this.
Thank you !
Ref: https://code.visualstudio.com/docs/remote/create-dev-container#_extend-your-docker-compose-file-for-development
You can solve these and other issues like them by extending your entire Docker Compose configuration with multiple docker-compose.yml files that override or supplement your primary one.
For example, consider this additional .devcontainer/docker-compose.extend.yml file:
version: '3'
services:
  your-service-name-here:
    volumes:
      # Mounts the project folder to '/workspace'. While this file is in .devcontainer,
      # mounts are relative to the first file in the list, which is a level up.
      - .:/workspace:cached
    # [Optional] Required for ptrace-based debuggers like C++, Go, and Rust
    cap_add:
      - SYS_PTRACE
    security_opt:
      - seccomp:unconfined
    # Overrides default command so things don't shut down after the process ends.
    command: /bin/sh -c "while sleep 1000; do :; done"
This same file can provide additional settings, such as port mappings, as needed. To use it, reference your original docker-compose.yml file in addition to .devcontainer/docker-compose.extend.yml in a specific order:
{
  "name": "[Optional] Your project name here",
  // The order of the files is important since later files override previous ones
  "dockerComposeFile": ["../docker-compose.yml", "docker-compose.extend.yml"],
  "service": "your-service-name-here",
  "workspaceFolder": "/workspace",
  "shutdownAction": "stopCompose"
}
VS Code will then automatically use both files when starting up any containers. You can also start them yourself from the command line as follows:
docker-compose -f docker-compose.yml -f .devcontainer/docker-compose.extend.yml up
I am fairly new to Eclipse Ditto and have just started using it for my project.
I am trying to connect Cloud hosted mongodb instance to ditto.
Following the documentation, I know that I need to add some variables and pass them to docker-compose. The problem is that I do not know what the values of these variables should be, as there are no examples.
Are all these variables necessary or will just the URI work?
This is my current .env file config
MONGO_DB_URI=mongodb+srv://username:pass@IP
MONGO_DB_READ_PREFERENCE=primary
MONGO_DB_WRITE_CONCERN=majority
The command I am using to start ditto is
docker-compose --env-file .env up
I have removed mongodb service from docker-compose.yml
Nice to hear that you started using Ditto in your project.
You need to set the following env variables to connect to your Cloud hosted MongoDB.
MONGO_DB_URI: Connection string to MongoDB
For more details see: https://docs.mongodb.com/manual/reference/connection-string/
If you have a replica set, your MongoDB URI should look like this: mongodb://[username:password@]mongodb0.example.com:27017,mongodb1.example.com:27017,mongodb2.example.com:27017/?replicaSet=myRepl
I assume you also need to enable SSL to connect to your MongoDB.
To do so set this env var.
MONGO_DB_SSL_ENABLED: true
If you want to use a specific Ditto version, you can set the following env var:
DITTO_VERSION=2.1.0-M3 (for example)
If you use .env as the file name, you can start Ditto with:
docker-compose up
The other options for pool size, read preference and write concern aren't necessary as there are default values in place.
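Putting those together, a .env file for a cloud-hosted replica set might look like the following. The hostname, database name, and credentials are illustrative placeholders, not values from the question:

```shell
# .env — all values are placeholders for your own deployment
MONGO_DB_URI=mongodb+srv://username:password@cluster0.example.mongodb.net/ditto
MONGO_DB_SSL_ENABLED=true
DITTO_VERSION=2.1.0-M3
```

With this next to docker-compose.yml, a plain docker-compose up picks the values up automatically.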
I have a dockerfile for frontend, one for backend, and one for the database.
In the backend portion of the project, I have a dockerfile and a docker-compose.yml file.
The dockerfile is great for the backend because it configures the backend, copies and sets up the information, etc. I like it a lot.
The issue I have come to, though, is that while I can easily create a dockerfile for the DBMS, that requires me to put it in a different directory. I was hoping to just define it in the same directory as the backend, and because the backend and the DBMS are so tightly coupled, I figured this is where docker-compose would come in.
The issue I ran into is that in a compose file, I can't do a COPY into the DBMS container. I would just have to create another dockerfile to set that up. I was thinking that would work.
When looking on GitHub, there was a big enhancement thread about it, but the closest people would get is just creating a volume relationship, which fails to do what I want.
Ideally, all I want is to be able to stand up a Postgres DBMS in such a fashion that I could conduct load balancing on it later down the line (with 1 write, 5 read or something), and have its initial db defined in my one SQL file.
Am I missing something? I thought I was going about it correctly, but maybe I need to create a whole new directory with a dockerfile for the DBMS.
Thoughts on how I should accomplish this?
Right now I was doing something like:
version: '2.0'
services:
  backend:
    build: .
    ports:
      - "8080:8080"
  database:
    image: "postgres:10"
    environment:
      POSTGRES_USER: "test"
      POSTGRES_PASSWORD: "password"
      POSTGRES_DB: "foo"
    # I shouldn't have volumes as it would copy the entire folder and its contents to the db.
    volumes:
      - ./:/var/lib/postgresql/data
To copy things with docker, there is an almost infinite set of possibilities.
At image build time:
use COPY or ADD instructions
use shell commands, including cp, ssh, wget and many others
From the docker command line:
use docker cp to copy from/to hosts and containers
use docker exec to run arbitrary shell commands including cp, ssh and many others...
In docker-compose / kubernetes (or through command line):
use volume to share data between containers
volume can be local or distant file systems (network disk for example)
potentially combine that with shell commands for example to perform backups
Still, how you should do it depends heavily on the use case.
If the data you copy is linked to the code and versioned (in the git repo...), then treat it as if it were code and build the image with it thanks to the Dockerfile. This is for me a best practice.
If the data is configuration dependent on the environment (like test vs prod, farm 1 vs farm 2), then go for docker config/secret + ENV variables.
If the data is dynamic and generated at production time (like a DB that is filled with user data as the app is used), use persistent volumes and be sure you understand well the impact of container failure for your data.
For a database in a test system it can make sense to relaunch the DB from a backup dump, a read-only persistent volume, or, much simpler, to back up the whole container at a known state (with docker commit).
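For the "initial db defined in my one SQL file" part specifically: the official postgres image executes any *.sql or *.sh files it finds in /docker-entrypoint-initdb.d the first time the data directory is initialized, so a read-only bind mount is enough and no COPY or extra Dockerfile is needed. A sketch based on the compose file from the question (the init.sql path is illustrative):

```yaml
services:
  database:
    image: "postgres:10"
    environment:
      POSTGRES_USER: "test"
      POSTGRES_PASSWORD: "password"
      POSTGRES_DB: "foo"
    volumes:
      # Executed once, when the data directory is first initialized.
      - ./init.sql:/docker-entrypoint-initdb.d/init.sql:ro
      # Keep the actual data on a named volume, not the project folder.
      - pgdata:/var/lib/postgresql/data
volumes:
  pgdata:
```

This also avoids the problem noted in the question's comment: the project folder is no longer mounted over the data directory.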
Okay, so I have a yaml file for deploying a Spring Boot image. I want to set my Spring datasource password in my environment yaml, but when I use docker-compose up, it shows this error:
ERROR: Invalid interpolation format for "environment" option in service "apps": "spring.datasource.password= "my_password!##$"
I believe this error occurred because my datasource password has symbols in it. How do I resolve it?
Here is my .yml file:
version: '3.0'
services:
  apps:
    image: student
    ports:
      - 8085:8080
    environment:
      - spring.datasource.url=jdbc:postgresql://192.168.100.3/my_database
      - spring.datasource.username= my_user
      - spring.datasource.password= my_password!##$
The $ at the end signals to Compose that it needs to do variable substitution; without an actual environment variable name after it, you get the "invalid interpolation" error. To get around this:
You can use a $$ (double-dollar sign) when your configuration needs a literal dollar sign.
So set:
- spring.datasource.password=my_password!##$$
(Double-check: should the username and password really begin with spaces? You probably need to remove the space after the equals sign; the environment: documentation does not show spaces there and does not suggest whitespace is trimmed from values.)
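Applying both fixes together (the $$ escape and the stripped spaces), the environment section would read:

```yaml
environment:
  - spring.datasource.url=jdbc:postgresql://192.168.100.3/my_database
  - spring.datasource.username=my_user
  # $$ is Compose's escape for a literal $ in the value
  - spring.datasource.password=my_password!##$$
```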