I am using VS Code with the Remote - Containers extension.
My container accepts JSON files that override specific configurations it supports.
For example, if I run it from a terminal, the command looks something like this:
$ docker run -it my-container --Flag1 /path/to/file.json --Flag2 /path/to/file2.json
However, I can't find a way to pass these flags using either:
devcontainer.json
or
devcontainer.env
I believe they should go in the devcontainer.env file, but I can't seem to find a way to specify them.
If I put the following in my devcontainer.env:
Flag1=/path/to/file.json
Flag2=/path/to/file2.json
The container will not start.
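Note that these flags are arguments to the container's entrypoint rather than environment variables, which would explain why devcontainer.env does not help. One possible workaround, sketched under the assumption that a docker-compose-based devcontainer is acceptable (the service name and paths are illustrative): in compose, command: supplies arguments to the image's entrypoint, and devcontainer.json can point at the compose file.

.devcontainer/docker-compose.yml:

services:
  app:
    image: my-container
    command: ["--Flag1", "/path/to/file.json", "--Flag2", "/path/to/file2.json"]

.devcontainer/devcontainer.json:

{
  "dockerComposeFile": "docker-compose.yml",
  "service": "app",
  "workspaceFolder": "/workspace"
}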
I am learning Docker, and during my project I can't enter MongoDB with this command:
mongo -u "username" -p "mypassword"
It throws me this error:
bash: mongo: command not found
I am not sure what the issue is. I have installed the Community Edition of MongoDB, and I also tried different terminals, but I still can't enter the db.
Any suggestions?
Thanks in advance!
I assume you did the following: created a docker-compose.yml as you wrote before, then started it with docker compose up. This starts a container on your system that has MongoDB installed in it. It does not affect your "normal" system outside this container. (You can imagine it as a kind of virtual machine, though it is not really the same.) So, if you did not install MongoDB on your local host system as well, the error you encounter is quite explicable.
If you want to access the mongodb running within the container, you have two possibilities:
1. From outside the container (which is the more common use case)
You will have to install mongo on your regular PC (or wherever you want to access your db from) as well. Then you would issue mongo 127.0.0.1:3000. The 3000 is important: as your docker-compose.yml says, mongo is listening on port 3000. Note that you might have to adapt your network configuration before this works, especially from other PCs, where 127.0.0.1 won't be correct.
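That port mapping comes from the compose file; the relevant fragment would look something like this (a sketch reconstructed from the description above; the image tag is illustrative):

services:
  mongodb:
    image: mongo
    ports:
      - "3000:27017"   # host port 3000 forwarded to MongoDB's default port inside the container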
2. From within the container
Once your container is started, you can also execute a command inside it, like this: docker exec -it ${container_id} /bin/bash. You'll have to find out the container's ID beforehand, using something like docker-compose ps -q. This will start a bash shell inside the container and "connect" you to it. (If there is no /bin/bash installed in the container, this will not work; try e.g. /bin/sh instead.) Your terminal will then be inside the container and only able to use the commands present there. To get back to your local PC, don't forget to issue exit.
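Put together, the whole round trip might look like this (a sketch; the service name mongodb is an assumption):

container_id=$(docker-compose ps -q mongodb)   # look up the container ID of the service
docker exec -it "$container_id" /bin/bash      # open a shell inside the container
mongo -u "username" -p "mypassword"            # the mongo client exists in here
exit                                           # leave the container shell, back to the host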
Conclusion
IMHO, the crucial point is that the physical PC you are working in front of and the container running inside it are almost completely separate systems, connected only by the Docker daemon and some virtual network access. You'll have to keep that in mind and decide what you want to do/run inside the container and what to do outside, on the host.
Here is a little further reference that might help you. And this answer is about how to find out your container ID in an automated way. (Assuming that you are running just that one container!)
I'm trying to run multiple commands in docker-compose, but my container exits without any errors being logged. I've tried numerous variations of this in my docker-compose file:
command: echo "the_string_for_the_file" > ./datahub-frontend/conf/user.props && datahub-frontend/bin/datahub-frontend
The Dockerfile's CMD is:
CMD ["datahub-frontend/bin/datahub-frontend"]
My Real Goal
Before the application starts, I need to create a file named user.props at ./datahub-frontend/conf/ and add some text to that file.
Annoying Constraints
I cannot edit the Dockerfile
I cannot use a volume + some init file to do my bidding
Why? DataHub is an open source project for documenting data. I'm trying to create a very easy way for non-developers to get an instance of DataHub hosted in the cloud. The hosting we're using (AWS Elastic Beanstalk) is cool in that it will accept a docker-compose file to create a web application, but it cannot take other files (e.g. an init script). Even if it could, I want to make it really simple for folks to spin up the container: just a single docker-compose file.
Reference:
The container image is located here:
https://registry.hub.docker.com/layers/datahub-frontend-react/linkedin/datahub-frontend-react/465e2c6/images/sha256-e043edfab9526b6e7c73628fb240ca13b170fabc85ad62d9d29d800400ba9fa5?context=explore
Thanks!
You can use bash -c if your Docker image has bash. Without a shell, Compose hands the > and && to echo as literal arguments, so the container just prints them and exits successfully, which is why nothing shows up in the logs. Something like this should work:
command: bash -c "echo \"the_string_for_the_file\" > ./datahub-frontend/conf/user.props && datahub-frontend/bin/datahub-frontend"
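In context, the service entry in the compose file would look something like this (a sketch; the service name is illustrative and the image tag is taken from the reference link above):

services:
  datahub-frontend:
    image: linkedin/datahub-frontend-react:465e2c6
    command: bash -c "echo \"the_string_for_the_file\" > ./datahub-frontend/conf/user.props && datahub-frontend/bin/datahub-frontend"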
Let's say that for some reason I don't want to launch VSC to get a devcontainer shell running, but I still want all of that devcontainer goodness without rewriting all of the configuration files. There's a devcontainer CLI, but at the moment the only options available are open (VSC, connected to the container) and build (which builds the image, for the use case where many people share the same devcontainer environment).
Ideally, there'd be a third option, devcontainer shell, which does all the build, spin-up, and connection work that is done inside VSC, but then just execs into the running container.
The .devcontainer folder contains a devcontainer.json file. In it, if you're using docker-compose, there will be a dockerComposeFile key with an array of docker-compose files, loaded in order. You can do the same with a command such as docker-compose -f first-compose-file.yml -f second-compose-file.yml.
That same folder usually has its own docker-compose.yml file. You will notice it declares your main service and usually sets up a volume to share between the host and the container (useful to work inside the container).
There are other interesting keys in devcontainer.json such as forwardPorts, remoteUser or postCreateCommand. You should be able to set up most of them in your docker-compose file (dev stuff should go into the .devcontainer/ one). The post-create command can be run with docker compose exec SERVICENAME COMMAND.
I don't know if there's a command to detect .devcontainer files and pick up the right settings, but it should not be hard to write one.
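A rough sketch of such a script, assuming jq is available, the devcontainer.json contains no comments, and dockerComposeFile is an array (all of which would need checking in practice):

#!/bin/sh
# devcontainer-shell.sh: approximate "devcontainer shell" from .devcontainer settings
cfg=.devcontainer/devcontainer.json
service=$(jq -r '.service' "$cfg")
# compose file paths in devcontainer.json are relative to the .devcontainer folder
files=$(jq -r '.dockerComposeFile[] | "-f .devcontainer/" + .' "$cfg" | tr '\n' ' ')
docker-compose $files up -d
post=$(jq -r '.postCreateCommand // empty' "$cfg")
[ -n "$post" ] && docker-compose $files exec "$service" sh -c "$post"
docker-compose $files exec "$service" /bin/bash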
Is there a way to run docker_compose with parameters?
Something like the following:
docker-compose run --rm app_service python init_script
Right now I use the shell module for this.
Can I use the docker_compose module instead?
The documentation for the docker_compose module suggests that it can only do the equivalents of docker-compose up, down, and build. None of the other Ansible Docker modules connect to Compose at all.
You could use docker_container as an equivalent to a separate docker run command, but this has the same drawbacks as trying to docker run a separate container in a mostly-Compose environment (you don't get networks or volumes or dependencies declared in the docker-compose.yml file).
Falling back to shell is probably your best option here.
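If you do stay with shell, a minimal task sketch (the working directory is illustrative):

- name: Run the init script as a one-off container
  ansible.builtin.shell: docker-compose run --rm app_service python init_script
  args:
    chdir: /opt/myproject   # the directory containing docker-compose.yml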
I need to do a dataimport from a PostgreSQL container running inside Docker to a Solr server that is also running inside Docker.
In my docker run command I specify the --link option which creates the environment variable $POSTGRESQL_PORT_5432_TCP_ADDR inside the solr docker container, and I need to pass this into Solr to use in my solrconfig.xml file.
I've heard that this is possible by passing JVM environment variables to the Solr startup command, but docker run starts Solr automatically. The only workaround I've found is doing something like:
docker run --name solr -d -p 8983:8983 --link postgresql --volumes-from solr_cores makuk66/docker-solr /bin/true
Starting the container with /bin/true so that it does nothing, and then
docker exec -it solr /bin/bash
to get into the container, finally running the solr startup command myself with the flag
-Dsolr.database.ip=$POSTGRESQL_PORT_5432_TCP_ADDR
However, this is an involved manual process, and I'm wondering if there's a better way.
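For reference, the manual sequence inside the container boils down to something like this (the -f foreground flag is an assumption; bin/solr passes -D options through to the JVM):

docker exec -it solr /bin/bash
bin/solr start -f -Dsolr.database.ip=$POSTGRESQL_PORT_5432_TCP_ADDR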
Looking at the page Taking Solr to Production, you see:
The bin/solr script simply passes options starting with -D on to the JVM during startup. For running in production, we recommend setting these properties in the SOLR_OPTS variable defined in the include file. Keeping with our soft-commit example, in /var/solr/solr.in.sh, you would do:
SOLR_OPTS="$SOLR_OPTS -Dsolr.autoSoftCommit.maxTime=10000"
So all you need to do is edit the SOLR_OPTS environment variable in solr.in.sh.
It's a bit different for Docker because you don't directly have access to solr.in.sh, but after some trial and error it was as easy as adding this to my Dockerfile (the single quotes matter: they keep $POSTGRESQL_PORT_5432_TCP_ADDR from being expanded at build time, so it is resolved when solr.in.sh is sourced at container startup):
RUN echo 'SOLR_OPTS="$SOLR_OPTS -Dsolr.database.ip=$POSTGRESQL_PORT_5432_TCP_ADDR"' >> /opt/solr/bin/solr.in.sh
Then you can use it in the solrconfig.xml file as
${solr.database.ip}
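For example, inside a DataImportHandler data source definition it might appear like this (a sketch; the driver, host port, database name, and credentials are all assumptions):

<dataSource type="JdbcDataSource"
            driver="org.postgresql.Driver"
            url="jdbc:postgresql://${solr.database.ip}:5432/mydb"
            user="postgres"
            password="secret"/>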
An important thing to note is that you can name the JVM system property whatever you want, as long as you make sure not to overwrite anything important. I could have called it
-Dsolr.potato
if I wanted to.
For some reason the solr.in.cmd file looks exactly the same as solr.in.sh, which confused me about how to set variables there. In Windows containers, the command to accomplish the same thing from a Dockerfile would be:
RUN Add-Content C:\solr\bin\solr.in.cmd 'set SOLR_OPTS=%SOLR_OPTS% -Dsolr.database.ip=%POSTGRESQL_PORT_5432_TCP_ADDR%'