Can I run linked Docker containers based on different operating systems?

There is a data storage container, a MySQL container, a PHP container and an nginx container. Is it possible to let these processes run on different OSes?
So one is on Debian, another on CentOS, and so on?
Example:
This one is Debian:
docker run --name sql -d buildsql
This one is CentOS:
docker run --name php --link sql:db -d buildphp

Containers talk to each other over the network, so they are normally unaware of the OS being used by other containers, in exactly the same way that your browser doesn't really care about the OS of the webservers it talks to.
Most of the official images are based on Debian, so you quite often find your containers are all running Debian, but there's no need for this to be true. Some containers don't have an OS at all and just contain a binary that gets run when the container starts.
In short, there is no problem in using different OSes, unless you hit some unusual application-specific networking issue.
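As a minimal sketch (using the asker's hypothetical buildsql and buildphp images, which could be built FROM debian and FROM centos respectively), you could put both containers on a user-defined network instead of the legacy --link flag; the base OS of each image plays no role in how they reach each other:

docker network create appnet
# image built on a Debian base
docker run --name sql --network appnet -d buildsql
# image built on a CentOS base; it can reach the first container by the name "sql"
docker run --name php --network appnet -d buildphp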

Related

I can't enter the MongoDB CLI in my Docker project

I am learning Docker and during my project, I can't enter the MongoDB shell with this command:
mongo -u "username" -p "mypassword"
It throws me this error:
bash: mongo: command not found
I am not sure what the issue is. I have installed the community edition of MongoDB and I also tried different terminals, but I can't enter the DB.
Any suggestions?
Thanks in advance!
I assume you did the following: create the docker-compose.yml as you wrote before, then run docker compose up. This starts a container on your system with MongoDB installed in it. It does not affect your "normal" system outside this container. (You can imagine it as a kind of virtual machine, though it is not really the same.) So, if you did not install MongoDB on your local host system as well, the error you encounter is quite understandable.
If you want to access the mongodb running within the container, you have two possibilities:
1. From outside the container (which is the more common use case)
You will have to install the mongo shell on your regular PC (or wherever you want to access your DB from) as well. Then you would issue mongo 127.0.0.1:3000. The 3000 is important because, as your docker-compose.yml says, mongo is listening on port 3000. Note that you might have to adapt your network configuration before this works, especially from other PCs, where 127.0.0.1 won't be correct.
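As a rough sketch, assuming your docker-compose.yml maps MongoDB's default port to 3000 on the host (the exact mapping comes from your compose file, which is not shown here), connecting from the host could look like this:

# assumed mapping in docker-compose.yml: "3000:27017"
mongo 127.0.0.1:3000 -u "username" -p "mypassword"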
2. From within the container
Once your container is started, you can also execute a command inside it, like this: docker exec -it ${container_id} /bin/bash. You'll have to find out the container's ID beforehand, using something like docker-compose ps -q. This will start a bash shell inside the container and "connect" you to it. (If there's no /bin/bash installed in the container, this will not work. Try e.g. /bin/sh instead.) Now your terminal will be inside the container and only able to use the commands present there. So, to get back to your local PC, don't forget to issue exit.
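Putting the steps from this answer together, the workflow could look roughly like this (assuming a single running container):

container_id=$(docker-compose ps -q)
# open a shell inside the container (use /bin/sh if bash is missing)
docker exec -it "$container_id" /bin/bash
# now inside the container, the mongo client is available
mongo -u "username" -p "mypassword"
# leave the container's shell again
exit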
Conclusion
IMHO, the crucial point is, that the physical PC you are working in front of and the container running inside it are almost completely different systems, connected only by the docker daemon and some virtual network access. You'll have to keep that in mind and decide what you want to do/run inside the container and what to do outside, on the host.
Here is a little further reference that might help you. And this answer is about how to find out your container ID in an automated way. (Assuming that you are running just that one container!)

Kubernetes: Is a Pod also like a PC?

I see that Kubernetes uses Pods, and in each Pod there can be multiple containers.
For example, I create a Pod with:
Container 1: Django server - running at port 8000
Container 2: Reactjs server - running at port 3000
I am coming from a Docker background. In Docker we would do:
docker run --name django -d -p 8000:8000 some-django
docker run --name reactjs -d -p 3000:3000 some-reactjs
So is a Pod also like a PC with some Ubuntu OS on it?
No, a Pod is not like a PC/VM with Ubuntu on it.
There is no intermediate layer between your host and the containers in a pod. The only thing happening here is that the containers in a pod share some resources/namespaces in the host's kernel, and there are mechanisms in your host kernel to "protect" the containers from seeing other containers. Pods are just a mechanism to help you deploy a couple of containers that share some resources (like the network namespace) a little more easily. Fundamentally, they are just Linux processes running directly on the host.
(one nuanced technicality/caveat on the above statement: Docker and tools like it will sometimes run their own VM and may try to make that invisible to you. For example, Docker Desktop does this. Usually you can ignore this layer, but it is great to know it is there. The answer holds though: That one single VM will host all of your pods/containers and there is not one VM per pod.)
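To make the comparison with the docker run commands above concrete, here is a rough sketch of a Pod manifest with the two containers from the question (image names and ports are taken from the question; the Pod name web is just illustrative). Both containers share the Pod's network namespace, so they can reach each other on localhost:

apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
  - name: django
    image: some-django
    ports:
    - containerPort: 8000
  - name: reactjs
    image: some-reactjs
    ports:
    - containerPort: 3000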

Does vscode-dev-containers work with non-Docker containers like LXC?

On the Visual Studio Code website, at the following link:
https://code.visualstudio.com/docs/remote/remote-overview
the website says that VS Code Remote Development can connect in 3 ways:
Remote SSH
Remote - Containers
Remote - WSL
In the link about Containers the page says:
Linux: Docker CE/EE 18.06+ and Docker Compose 1.21+. (The Ubuntu snap package is not supported.)
But also says:
Other glibc based Linux containers may work if they have needed Linux prerequisites.
So it is unclear if the extension works with non-Docker containers.
Is it possible to use this extension to develop software inside LXC containers (locally or remotely)?
LXC and LXD are system containers, so you can definitely use the Remote SSH method.
The Containers method has been designed for Docker. It might be possible to get it to work with LXD with an appropriate devcontainer.json, but you would need to figure this one out. I could not find an existing guide for this.
One could also set this up with Ansible and LXD; in fact, that would be nicer.
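As a very rough sketch of the Remote SSH route (the container name dev and the ubuntu:22.04 image alias are just examples), you would give the LXD container an SSH server and then point VS Code's Remote - SSH extension at its address:

lxc launch ubuntu:22.04 dev
lxc exec dev -- apt-get install -y openssh-server
# note the container's IP address
lxc list dev
# add a user / your public key inside the container, then add a matching
# Host entry in ~/.ssh/config and connect with the Remote - SSH extension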

Bash TAB completion does not work on CentOS 8

I run a CentOS 8 distro on Docker and I would like to have bash TAB completion with the dnf package manager. According to other posts, I did the following once my Docker container was started:
dnf clean all && rm -r /var/cache/dnf && dnf upgrade -y && dnf update -y
and then
dnf install bash-completion sqlite -y
After doing that I restarted the container, but there is still no bash completion. I also tried to source the bash completion file directly by doing:
source /etc/profile.d/bash_completion.sh
but without any better effect.
Would you know what I am doing wrong?
You shouldn't need bash completion in a Docker container. The only time you should be manually connecting to a shell inside a Linux container is to troubleshoot why the process running in the container is behaving abnormally. In fact, some container design advice might even go as far as suggesting you not include a shell inside your base OS at all!
The reason this isn't working for you is the way Linux containers operate. A container is simply a namespaced process that is managed by the kernel installed on the host OS. This process cannot be modified or interrupted, or the container will be destroyed, since the process will be sent a SIGTERM. When you source the bash_completion.sh script, you are only reconfiguring the current shell session inside your existing namespaced process managed by Docker; none of that survives once the container's process is restarted.
If you really wanted to do this, the best way would be to create a new Docker container image based on the original CentOS 8 base image, install the bash-completion package in it, and add an echo command that appends the source line to your user's .bashrc file.
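A minimal sketch of such a Dockerfile (the base image tag and the packages are just the ones mentioned in the question) could look like this:

FROM centos:8
# bake the packages into the image instead of installing them by hand
RUN dnf install -y bash-completion sqlite && dnf clean all
# make sure interactive shells pick up the completion script
RUN echo 'source /etc/profile.d/bash_completion.sh' >> /root/.bashrc

Build it once with docker build -t centos8-completion . and start your containers from that image instead of the plain CentOS 8 one.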
EDIT:
With regard to the additional question asked by the OP in the comments of this answer, I have added more information below.
Why shouldn't I need bash completion in a container?
The reason you do not need bash completion in a container is that containers are not meant to be attached to with a shell. A container is simply supposed to be a single instance of a process running under specifically configured criteria. Containers aren't meant to be used as dev environments for you to connect to; they're meant to run processes and applications in software infrastructure.
Manually updating & installing packages
You mention that one of the first things you do when you spin up a container is install packages. This is also alarming to me, because you are not supposed to be manually interacting with a container at all, and this includes package installation. Instead, you should generate a new container image from the old base image and add additional RUN statements to the Dockerfile to update the system and install the desired packages.
Cannot believe it is not possible
It is possible if you create a new Dockerfile that purposely installs it in a new layer on top of the base image and produces a new container image for you to use. BUT the point is that you shouldn't be connecting to Docker containers in the first place, so you should never even get to the point where you could need something like bash completion!
Here is a great summary on the difference between a container and a virtual machine that might help clarify some of this for you. In a nutshell, containers are supposed to run, and only run, processes.

How am I supposed to use a Postgresql docker image/container?

I'm new to docker. I'm still trying to wrap my head around all this.
I'm building a node application (REST api), using Postgresql to store my data.
I've spent a few days learning about docker, but I'm not sure whether I'm doing things the way I'm supposed to.
So here are my questions:
I'm using the official Docker postgres 9.5 image as a base to build my own (my Dockerfile only adds plpython on top of it and installs a custom Python module for use within plpython stored procedures). I created my container as suggested by the postgres image docs:
docker run --name some-postgres -e POSTGRES_PASSWORD=mysecretpassword -d postgres
After I stop the container I cannot run it again using the above command, because the container already exists. So I start it using docker start instead of docker run. Is this the normal way to do things? I will generally use docker run the first time and docker start every other time?
Persistence: I created a database and populated it in the running container. I did this using pgAdmin3 to connect. I can stop and start the container and the data is persisted, although I'm not sure why or how this is happening. I can see in the Dockerfile of the official postgres image that a volume is created (VOLUME /var/lib/postgresql/data), but I'm not sure that's the reason persistence is working. Could you please briefly explain (or point to an explanation of) how this all works?
Architecture: from what I read, it seems that the most appropriate architecture for this kind of app would be to run 3 separate containers. One for the database, one for persisting the database data, and one for the node app. Is this a good way to do it? How does using a data container improve things? AFAIK my current setup is working ok without one.
Is there anything else I should pay attention to?
Thanks
EDIT: adding to my confusion, I just ran a new container from the official Debian image (no Dockerfile, just docker run -i -t -d --name debtest debian /bin/bash). With the container running in the background, I attached to it using docker attach debtest and then proceeded to apt-get install postgresql. Once installed, I ran (still from within the container) psql, created a table in the default postgres database, and populated it with 1 record. Then I exited the shell and the container stopped automatically since the shell wasn't running anymore. I started the container again using docker start debtest, then attached to it and finally ran psql again. I found everything is persisted since the first run. PostgreSQL is installed, my table is there, and of course the record I inserted is there too. I'm really confused as to why I need a VOLUME to persist data, since this quick test didn't use one and everything appears to work just fine. Am I missing something here?
Thanks again
1.
docker run --name some-postgres -e POSTGRES_PASSWORD=mysecretpassword -d postgres
After I stop the container I cannot run it again using the above command, because the container already exists.
Correct. You named it (--name some-postgres), hence before starting a new one the old one has to be deleted, e.g. with docker rm -f some-postgres.
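In other words, the recreate cycle from this answer looks like this:

docker rm -f some-postgres
docker run --name some-postgres -e POSTGRES_PASSWORD=mysecretpassword -d postgres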
So I start it using docker start instead of docker run. Is this the normal way to do things? I will generally use docker run the first time and docker start every other time?
No, it is by no means the normal way for Docker. Docker containers are normally supposed to be ephemeral, that is, easily thrown away and started anew.
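For example, a throwaway container can clean itself up automatically; the --rm flag removes it as soon as it stops (a sketch using the same command as above, run in the foreground):

docker run --rm --name some-postgres -e POSTGRES_PASSWORD=mysecretpassword postgres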
Persistence: ... I can stop and start the container and the data is persisted, although I'm not sure why or how this is happening. ...
That's because you are reusing the same container. Remove the container and the data is gone.
Architecture: from what I read, it seems that the most appropriate architecture for this kind of app would be to run 3 separate containers. One for the database, one for persisting the database data, and one for the node app. Is this a good way to do it? How does using a data container improve things? AFAIK my current setup is working ok without one.
Yes, having separate containers for separate concerns is a good way to go. This comes in handy in many cases, for example when you need to upgrade the postgres base image without losing your data (that is in particular where the data container starts to play its role).
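A rough sketch of the data-container pattern mentioned here (the name pgdata is illustrative): the data container only holds the volume, and the actual postgres container mounts it via --volumes-from, so the postgres container can be removed and recreated from a newer image without touching the data:

# container that only carries the data volume
docker create -v /var/lib/postgresql/data --name pgdata postgres /bin/true
# the real database container reuses that volume
docker run --name some-postgres --volumes-from pgdata -e POSTGRES_PASSWORD=mysecretpassword -d postgres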
Is there anything else I should pay attention to?
Once acquainted with the Docker basics, you may take a look at Docker Compose or similar tools that will help you run multi-container applications more easily.
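A rough docker-compose.yml sketch for this kind of setup (the service names, the app's port and the build context are assumptions, not taken from your project):

version: "2"
services:
  db:
    image: postgres:9.5
    environment:
      POSTGRES_PASSWORD: mysecretpassword
    volumes:
      - pgdata:/var/lib/postgresql/data
  app:
    build: .
    ports:
      - "3000:3000"
    depends_on:
      - db
volumes:
  pgdata: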
Short and simple:
What you get from the official postgres image is a ready-to-go postgres installation along with some extras that can be configured through environment variables. With docker run you create a container. The container lifecycle commands are docker start/stop/restart/rm. Yes, this is the Docker way of doing things.
Everything inside a volume is persisted. Every container can have an arbitrary number of volumes. Volumes are directories defined either inside the Dockerfile, in the parent Dockerfile, or via the command docker run ... -v /yourdirectoryA -v /yourdirectoryB .... Everything outside volumes is lost with docker rm. Everything, including volumes, is lost with docker rm -v.
It's easier to show than to explain. See this readme with Docker commands on GitHub, and read how I use the official PostgreSQL image for Jira and also add NGINX to the mix: Jira with Docker PostgreSQL. Also, a data container is a cheap trick for being able to remove, rebuild and renew the container without having to move the persisted data.
Congratulations, you have managed to grasp the basics! Keep it up! Try docker-compose to better manage those nasty docker run ... commands and to handle multi-container setups and data containers more easily.
Note: You need a blocking process in order to keep a container running! This command must either be set explicitly inside the Dockerfile (see CMD) or be given at the end of the docker run command, e.g. docker run -d ... /usr/bin/myexamplecommand. If your command is non-blocking, e.g. /bin/bash without an attached TTY, then the container will stop immediately after executing the command.
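To illustrate the difference (the image names are just common examples):

# the nginx image's CMD runs nginx in the foreground, so the container keeps running
docker run -d --name web nginx
# /bin/true returns immediately, so this container stops right away
docker run -d --name oops debian /bin/true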