I can't enter the MongoDB CLI in my Docker project - mongodb

I am learning Docker and during my project, I can't enter MongoDB with this command:
mongo -u "username" -p "mypassword"
It throws this error:
bash: mongo: command not found
I am not sure what the issue is. I have installed the Community Edition of MongoDB and I also tried different terminals, but I can't enter the db.
Any suggestions?
Thanks in advance!

I assume you did the following: created the docker-compose.yml as you wrote, then ran docker-compose up. This starts a container on your system that has MongoDB installed in it. It does not affect your "normal" system outside this container. (You can imagine it as a kind of virtual machine, though it is not really the same.) So, if you did not install MongoDB on your local host system as well, the error you encounter is quite explicable.
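A quick way to convince yourself of this separation (a hedged sketch; the output will differ on your machine):
docker ps                                  # the container started by docker-compose up shows up here
which mongo                                # on the host: no output, hence "command not found"
docker exec <container_id> which mongo     # inside the container the mongo binary exists (path may vary)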
If you want to access the mongodb running within the container, you have two possibilities:
1. From outside the container (which is the more common use case)
You will have to install mongo on your regular PC (or wherever you want to access your db from) as well. Then you would issue mongo 127.0.0.1:3000. The 3000 is important because, according to your docker-compose.yml, mongo is listening on port 3000. Note that you might have to adapt your network configuration before this works, especially from other PCs, where 127.0.0.1 won't be correct.
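With the credentials from your question, the full command could look like this (the admin authentication database is an assumption; adjust it if your user was created elsewhere):
mongo 127.0.0.1:3000 -u "username" -p "mypassword" --authenticationDatabase admin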
2. From within the container
Once your container is started, you can also execute a command inside it, like this: docker exec -it ${container_id} /bin/bash. You'll have to find out the container's ID beforehand, using something like docker-compose ps -q. This will start a bash shell inside the container and "connect" you to it. (If there's no /bin/bash installed in the container, this will not work. Try e.g. /bin/sh instead.) Now your terminal is inside the container and can only use the commands present there. To get back to your local PC, don't forget to issue exit.
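For example (a hedged sketch; the container ID will of course differ on your system):
docker-compose ps -q                        # prints the container ID(s)
docker exec -it <container_id> /bin/bash    # opens a bash shell inside the container
mongo -u "username" -p "mypassword"         # works here, because mongo is installed inside the container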
Conclusion
IMHO, the crucial point is that the physical PC you are working in front of and the container running on it are almost completely separate systems, connected only by the Docker daemon and some virtual network access. You'll have to keep that in mind and decide what you want to do/run inside the container and what to do outside, on the host.
Here is a little further reference that might help you. And this answer is about how to find out your container ID in an automated way. (Assuming that you are running just that one container!)
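Combining both steps, and assuming that single container, the whole thing can be a one-liner:
docker exec -it $(docker-compose ps -q) mongo -u "username" -p "mypassword"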


How do I SSH from a Docker container to a remote server

I am building a Docker image off the postgres image, and I would like to seed it with some data.
I am following the initialization-scripts section of the documentation.
But the problem I am facing now is that my initialisation script needs to ssh to a remote database and dump data from there. Basically something like this:
ssh remote.host "pg_dump -U user -d somedb" > some.sql
but this fails with the error ssh: command not found
The question now is, in general: how do I ssh from a Docker container to a remote server? And in this case specifically: how do I ssh from a Docker container to a remote database server as part of the initialisation step of seeding a postgres database?
As a general rule you don't do things this way. Typical Docker images contain only the server they're running and some core tools, but network clients like ssh or curl generally aren't part of this. In the particular case of ssh, securely managing the credentials required is also tricky (not impossible, but not obvious).
In your particular case, I might rearrange things so that your scripts didn't have the hard assumption the database was running locally. Provision an empty database container, then run your script from the host targeting that empty database. It may even work to set the PGHOST and PGPORT environment variables to point to your host machine's host name and the port you publish the database interface on, and then run that script unmodified.
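A hedged sketch of that idea; the published port, password and script name (seed.sh) are assumptions, not taken from your setup:
docker run -d --name seed-target -p 5432:5432 -e POSTGRES_PASSWORD=mysecretpassword postgres
export PGHOST=127.0.0.1
export PGPORT=5432
export PGPASSWORD=mysecretpassword
./seed.sh    # your existing script, now talking to the empty container from the host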
Looking closer at that specific command, you also may find it better to set up a cron job to run that specific database dump and put the contents somewhere. Then a developer can get a snapshot of the data without having to make a connection to the live database server, and you can limit the number of people who will have access. Once you have this dump file, you can use the /docker-entrypoint-initdb.d mechanism to cause it to be loaded at first startup time.
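A hedged sketch of that approach; the schedule, dump path and container name are assumptions:
# crontab entry on a machine that already has ssh access and pg_dump:
# 0 2 * * * ssh remote.host "pg_dump -U user -d somedb" > /srv/dumps/some.sql
docker run -d --name dev-db \
  -e POSTGRES_PASSWORD=mysecretpassword \
  -v /srv/dumps/some.sql:/docker-entrypoint-initdb.d/some.sql \
  postgres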

MongoDB: Is it ok to run multiple mongod on the same directory(dbpath)?

I'm currently working with Docker and use the mongo image for my DB container. For persistent storage, I mounted a host machine directory (e.g. /var/docker/data/db) into the container (e.g. /data/db).
Until now, whenever I wanted to run the mongo shell and connect to my db, I did these things:
Attach to running MongoDB container using docker exec -it <container> bash
Run mongo shell inside the container
Do some job
But I think it'd be much better if I could run mongod separately on the host machine, not in the container, and then connect to it even when the container is not running.
So is it possible to do so? If I run two mongod processes on the same directory (files), will there be a file access conflict?
If someone has done similar work, please share your experience. Thanks.

How am I supposed to use a Postgresql docker image/container?

I'm new to docker. I'm still trying to wrap my head around all this.
I'm building a node application (REST api), using Postgresql to store my data.
I've spent a few days learning about docker, but I'm not sure whether I'm doing things the way I'm supposed to.
So here are my questions:
I'm using the official postgres 9.5 Docker image as a base to build my own (my Dockerfile only adds plpython on top of it, and installs a custom python module for use within plpython stored procedures). I created my container as suggested by the postgres image docs:
docker run --name some-postgres -e POSTGRES_PASSWORD=mysecretpassword -d postgres
After I stop the container I cannot run it again using the above command, because the container already exists. So I start it using docker start instead of docker run. Is this the normal way to do things? Will I generally use docker run the first time and docker start every other time?
Persistence: I created a database and populated it on the running container. I did this using pgadmin3 to connect. I can stop and start the container and the data is persisted, although I'm not sure why or how this is happening. I can see in the Dockerfile of the official postgres image that a volume is created (VOLUME /var/lib/postgresql/data), but I'm not sure that's the reason persistence is working. Could you please briefly explain (or point to an explanation of) how this all works?
Architecture: from what I read, it seems that the most appropriate architecture for this kind of app would be to run 3 separate containers. One for the database, one for persisting the database data, and one for the node app. Is this a good way to do it? How does using a data container improve things? AFAIK my current setup is working ok without one.
Is there anything else I should pay attention to?
Thanks
EDIT: adding to my confusion, I just ran a new container from the official debian image (no Dockerfile, just docker run -i -t -d --name debtest debian /bin/bash). With the container running in the background, I attached to it using docker attach debtest and then proceeded to apt-get install postgresql. Once it was installed I ran psql (still from within the container), created a table in the default postgres database, and populated it with 1 record. Then I exited the shell and the container stopped automatically since the shell wasn't running anymore. I started the container again using docker start debtest, attached to it and finally ran psql again. I found everything has persisted since the first run: PostgreSQL is installed, my table is there, and of course the record I inserted is there too. I'm really confused as to why I need a VOLUME to persist data, since this quick test didn't use one and everything appears to work just fine. Am I missing something here?
Thanks again
1.
docker run --name some-postgres -e POSTGRES_PASSWORD=mysecretpassword -d postgres
After I stop the container I cannot run it again using the above command, because the container already exists.
Correct. You named it (--name some-postgres), so before starting a new one with that name, the old one has to be deleted, e.g. with docker rm -f some-postgres.
So I start it using docker start instead of docker run. Is this the normal way to do things? Will I generally use docker run the first time and docker start every other time?
No, this is by no means the normal way to do things with Docker. Docker containers are normally supposed to be ephemeral, that is, easily thrown away and started anew.
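If the data should survive throwing the container away, one option (a hedged sketch; the volume name pgdata is just an example) is to put the data directory in a named volume:
docker rm -f some-postgres
docker run --name some-postgres -e POSTGRES_PASSWORD=mysecretpassword \
  -v pgdata:/var/lib/postgresql/data -d postgres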
Persistence: ... I can stop and start the container and the data is persisted, although I'm not sure why or how this is happening. ...
That's because you are reusing the same container. Remove the container and the data is gone.
Architecture: from what I read, it seems that the most appropriate architecture for this kind of app would be to run 3 separate containers. One for the database, one for persisting the database data, and one for the node app. Is this a good way to do it? How does using a data container improve things? AFAIK my current setup is working ok without one.
Yes, this is a good way to go: separate containers for separate concerns. This comes in handy in many cases, for example when you need to upgrade the postgres base image without losing your data (that's in particular where the data container starts to play its role).
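A hedged sketch of the data-container pattern mentioned here (container names are examples):
docker create -v /var/lib/postgresql/data --name some-postgres-data postgres /bin/true
docker run --name some-postgres --volumes-from some-postgres-data \
  -e POSTGRES_PASSWORD=mysecretpassword -d postgres
# later you can remove and recreate some-postgres (e.g. to upgrade the image)
# while some-postgres-data keeps holding the data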
Is there anything else I should pay attention to?
Once you are acquainted with the Docker basics, you may take a look at Docker Compose or similar tools that will help you run multi-container applications more easily.
Short and simple:
What you get from the official postgres image is a ready-to-go postgres installation along with some extras which can be configured through environment variables. With docker run you create a container. The container lifecycle commands are docker start/stop/restart/rm. Yes, this is the Docker way of doing things.
Everything inside a volume is persisted. Every container can have an arbitrary number of volumes. Volumes are directories defined either inside the Dockerfile (or a parent Dockerfile) or via the command docker run ... -v /yourdirectoryA -v /yourdirectoryB .... Everything outside volumes is lost with docker rm. Everything including volumes is lost with docker rm -v.
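A hedged illustration of those rules (the container name is an example):
docker stop some-postgres && docker start some-postgres    # data in volumes survives
docker rm -f some-postgres                                 # container gone, its volumes survive
docker rm -f -v some-postgres                              # container and its volumes are gone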
It's easier to show than to explain. See this readme with Docker commands on GitHub and read how I use the official PostgreSQL image for Jira and also add NGINX to the mix: Jira with Docker PostgreSQL. A data container is also a cheap trick for being able to remove, rebuild and renew the container without having to move the persisted data.
Congratulations, you have managed to grasp the basics! Keep it up! Try docker-compose to better manage those unwieldy docker run ... commands and to handle multi-container setups and data containers.
Note: You need a blocking command (a long-running foreground process) in order to keep a container running! Either this command must be set explicitly inside the Dockerfile, see CMD, or given at the end of the docker run -d ... /usr/bin/myexamplecommand command. If your command is non-blocking, e.g. /bin/bash without an attached terminal, then the container will stop immediately after executing the command.
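A quick way to see this behaviour (image and names are just examples):
docker run -d --name stops-at-once debian /bin/bash           # bash exits immediately without an attached terminal
docker run -d --name keeps-running debian tail -f /dev/null   # tail blocks forever, so the container stays up
docker ps -a                                                  # compare the STATUS of the two containers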

How to use the "Remote Systems" view in Eclipse to explore a Docker container file system?

The Eclipse Remote Systems view is a great tool to connect to VMs and explore their file systems; the New Connection wizard currently offers several system types (Linux, SSH Only, FTP Only, etc.).
First I find out the container IP by running this command:
docker inspect <container> | grep IPAddress | cut -d '"' -f 4
Once I have the IP, I launch the New Connection wizard from the Remote Systems view. I tried selecting Linux, SSH Only and FTP Only, pasted the container IP into the Hostname field, clicked Finish, and the connection seems to be created successfully. Now when I try to expand the Files node it prompts for a User and Password, and the problem is that I don't have that info. Does the user/password vary from container to container? How can I get this info?
You can instantiate a container from that image, but with a shell, so that you can see which usernames are configured in that image.
docker run -it node /bin/bash
You can then configure users and passwords and commit the container as a new image:
docker commit <container-id> my-node:0.1
Then you can instantiate a new container:
docker run -d -p 80:9080 -p 443:9443 my-node:0.1
Is ssh also running in that container? If not, you will have to install it in the container so that you can ssh to it.
A docker container only runs a single parent process at a time (on your host machine that parent process is 'init' which runs a bunch of system services). In the case of your node container, that parent process is a node server.
Eclipse connects to a remote machine by connecting to a listener on that machine using some protocol, SSH or FTP for example. With the docker container, there is no process listening for this connection, so you cannot connect using Eclipse as it is. You have two options...
Use the command line and docker exec to connect to the machine and explore its filesystem (see the sketch below). No pretty pictures, but you don't need a lot of knowledge.
Modify your container in some way so that you can connect to it. You have two options here...
A. Modify your image to run an SSH daemon. A simple way to do that is to use the phusion/baseimage image as your parent and have it spawn both the ssh daemon and the node server. You need to know a good amount about Linux sysadmin to get this working (not a lot, but a good amount).
B. Launch a second copy of the container with a different command, such as an SSH daemon (sshd). You can then connect to the second copy. This has the downside that it won't be the same container you're interested in, and you STILL have to modify the image, since I doubt the node image even has an SSH daemon installed... but it requires less knowledge than wrapping your head around runit.
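For option 1, a hedged sketch (the container name and path are assumptions, adjust them to your setup):
docker exec -it my-node-container /bin/bash    # or /bin/sh if bash isn't installed
ls -la /usr/src/app                            # hypothetical application directory inside the container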

How do you pass an environment variable to Solr running inside Docker when the environment variable only exists inside the container?

I need to do a dataimport from a PostgreSQL container running inside docker to a Solr server also running inside of Docker.
In my docker run command I specify the --link option which creates the environment variable $POSTGRESQL_PORT_5432_TCP_ADDR inside the solr docker container, and I need to pass this into Solr to use in my solrconfig.xml file.
I've heard that this is possible by passing JVM environment variables to the Solr startup command, but docker run starts Solr automatically. The only workaround I've found is doing something like:
docker run --name solr -d -p 8983:8983 --link postgresql --volumes-from solr_cores makuk66/docker-solr /bin/true
Starting the container with bin/true so it does nothing, and then
docker exec -it solr /bin/bash
to get into the container, finally running the solr startup command myself with the flag
-Dsolr.database.ip=$POSTGRESQL_PORT_5432_TCP_ADDR
However this is an involved manual process, and I'm wondering if there's a better way.
Looking at the page Taking Solr to Production, you see:
The bin/solr script simply passes options starting with -D on to the JVM during startup. For running in production, we recommend setting these properties in the SOLR_OPTS variable defined in the include file. Keeping with our soft-commit example, in /var/solr/solr.in.sh, you would do:
SOLR_OPTS="$SOLR_OPTS -Dsolr.autoSoftCommit.maxTime=10000"
So all you need to do is edit the SOLR_OPTS environment variable in solr.in.sh.
It's a bit different for Docker because you don't directly have access to solr.in.sh, but after some trial and error, it was as easy as adding this to my Dockerfile:
RUN echo 'SOLR_OPTS="$SOLR_OPTS -Dsolr.database.ip=$POSTGRESQL_PORT_5432_TCP_ADDR"' >> /opt/solr/bin/solr.in.sh
Then you can use it in the solrconfig.xml file as
${solr.database.ip}
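Worth noting (this is my reading of normal shell quoting, not something spelled out in the Solr docs): the single quotes in the RUN line keep $POSTGRESQL_PORT_5432_TCP_ADDR from being expanded at build time, so it is only resolved when solr.in.sh is sourced at container start, by which point the --link variable exists. Building and running such an image could then look like this (the image tag my-solr is an example):
docker build -t my-solr .
docker run --name solr -d -p 8983:8983 --link postgresql --volumes-from solr_cores my-solr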
An important thing to note is that you can call the JVM environment variable whatever you want as long as you make sure not to overwrite anything important. I could have called it
-Dsolr.potato
if I wanted to.
For some reason the solr.in.cmd file looks exactly the same as solr.in.sh, which confused me about how to set variables there. In Windows containers, the command to accomplish the same from a Dockerfile would be:
RUN Add-Content C:\solr\bin\solr.in.cmd 'set SOLR_OPTS=%SOLR_OPTS% -Dsolr.database.ip=%POSTGRESQL_PORT_5432_TCP_ADDR%'