How to use Jupyter Notebook inside docker for a whole project - docker-compose

I am trying to run Jupyter Notebook inside of a Docker container.
My docker-compose project is made of multiple in-house services and modules, written in Python, that need to be accessed in order to run different experiments.
Should I just add a new service that uses the same network as the other services?
Will that be enough to use the modules that are defined in the other services?

I'm supposing you want to access Jupyter running from a Docker-based image.
If so, you can use the base image from
https://hub.docker.com/r/jupyter/minimal-notebook/tags?page=1&name=3.8
and forward its port to your localhost.
For example:
docker run -it -p 8888:8888 jupyter/minimal-notebook:python-3.8.8
or run it with docker-compose
#docker-compose.yaml
version: '3.8'
services:
  fjupyter:
    image: jupyter/minimal-notebook:python-3.8.8
    ports:
      - 8888:8888
Using this base image, you can add all the packages you want from bash, but that wouldn't be the best approach: containerization dedicates each container to a specific service, so it's better to use a dedicated image (and hence a container) for each service.
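As a rough sketch, assuming your project already has a docker-compose.yaml, you could add Jupyter as one more service. The service name, host path, and module directory below are examples, not taken from your project; services in the same compose file share the default network and resolve each other by service name, but Python modules from the other images are not importable automatically, so mount (or install) them in the Jupyter container.

# docker-compose.yaml (sketch, names are examples)
version: '3.8'
services:
  jupyter:
    image: jupyter/minimal-notebook:python-3.8.8
    ports:
      - "8888:8888"
    volumes:
      # mount your in-house Python packages so the notebook can import them;
      # /home/jovyan/work is the default working directory of the Jupyter images
      - ./my_inhouse_modules:/home/jovyan/work/my_inhouse_modules
    # no explicit network needed: services in one compose file share the
    # project's default network and reach each other by service name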

Related

Connect to PostgreSQL from Flask app in another docker container

On a virtual machine I have 2 Docker containers running, named <postgres> and <system>, which run on the network named <network>. I can't change the options in these containers. I have created a Flask application that connects to the database and outputs the required information. To connect from my local computer I use
import psycopg2

conn = psycopg2.connect(
    database="db", user='user1', password='user1_passwd',
    host='<VM_ip>', port='<db_port>',
    sslmode='require',
    sslcert='./user1.crt',
    sslkey='./user1.key')
and it worked great.
But, when I run my application on the same VM and specify
conn = psycopg2.connect(
    database="db", user='user1', password='user1_passwd',
    host='<postgres>.<network>', port='<db_port>',
    sslmode='require',
    sslcert='./user1.crt',
    sslkey='./user1.key')
I get an error:
psycopg2.OperationalError: could not parse network address "<postgres>.<network>": Name or service not known.
Local connections are allowed in pg_hba; the problem is connecting from the new container on the VM.
Here are the settings of my new container:
version: '3'
services:
  app:
    container_name: app
    restart: always
    build: ./app
    ports:
      - "5000:5000"
    command: gunicorn -w 1 -b 0.0.0.0:8000 wsgi:server
I tried to make the same connection as from the local computer, specifying the VM_ip, but that didn't help either.
I also tried to specify the <postgres> container IP instead of its name in host=, but this also caused an error.
Do you know what could be the problem?
You first need to create a network that you will use for communication between the containers. You can do that with:
docker network create <example> #---> you can name it whatever you want
Then you need to connect both containers to the network that you made.
docker run -d --net <example> --name <postgres_container> <postgres_image>
docker run -d --net <example> --name <flask_container> <flask_image>
You can read more about the docker network in its documentation here:
https://docs.docker.com/network/
From what I can see, you might be using a docker-compose file to deploy the services. You can add one more top-level section, alongside the services section, where you define the network that the deployed services are supposed to use. That network also needs to be referenced in each service definition; this lets the internal DNS engine that docker-compose creates in the background discover all the services on the network by their service names.
A bridge network may be a good driver to use here.
You can use the following links for a better understanding of networks in docker-compose.
https://docs.docker.com/compose/compose-file/compose-file-v3/#network
https://docs.docker.com/compose/compose-file/compose-file-v3/#networks
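For example, a minimal sketch of attaching the app service to the already-existing <network> (the network and container names are the placeholders from the question; the alias vm_net is made up):

version: '3'
services:
  app:
    build: ./app
    networks:
      - vm_net
networks:
  vm_net:
    external:
      name: <network>   # join the pre-existing network instead of creating a new one

With something like this in place, the app container should be able to reach the database simply as <postgres> on the shared network.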

Is a service running in a docker network secure from outside connection, even from localhost?

Question:
Can anybody with access to the host machine connect to a Docker network, or are services running within a Docker network only visible to other services running in that Docker network, assuming the ports are not exposed?
Background:
I currently have a web application with a PostgreSQL database backend, where both components are run through Docker on the same machine, and only the web app exposes ports on the host. The web app has no trouble connecting to the db as they are on the same Docker network. I was considering removing the password from my database user so that I don't have to store the password on the host and pass it into the web-app container as a secret. Before I do that I want to ascertain how secure the Docker network is.
Here is a sample of my docker-compose:
version: '3.3'
services:
  database:
    image: postgres:9.5
    restart: always
    volumes:
      # preserves the database between containers
      - /var/lib/my-web-app/database:/var/lib/postgresql/data
  web-app:
    image: my-web-app
    depends_on:
      - database
    ports:
      - "8080:8080"
      - "8443:8443"
    restart: always
    secrets:
      - source: DB_USER_PASSWORD
secrets:
  DB_USER_PASSWORD:
    file: /secrets/DB_USER_PASSWORD
Any help is appreciated.
On a native Linux host, anyone who has or can find the container-private IP address can directly contact the container. (Unprivileged prodding around with ifconfig can give you some hints that it's there.) On non-Linux there's typically a hidden Linux VM, and if you can get a shell in that, the same trick works. And of course if you can run any docker command then you can docker exec a shell in the container.
Docker's network-level protection isn't strong enough to be the only thing securing your database. Using standard username-and-password credentials is still required.
(Note that the docker exec path is especially powerful: since the unencrypted secrets are ultimately written into a path in the container, being able to run docker exec means you can easily extract them. Restricting docker access to root only is also good security practice.)
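Rather than dropping the password, you could keep it and hand it to both containers as a secret. A rough sketch, assuming the official postgres image (which reads POSTGRES_PASSWORD_FILE) and that my-web-app is able to read the password from a file:

version: '3.3'
services:
  database:
    image: postgres:9.5
    environment:
      # the official postgres image reads the initial password from this file
      POSTGRES_PASSWORD_FILE: /run/secrets/DB_USER_PASSWORD
    secrets:
      - DB_USER_PASSWORD
  web-app:
    image: my-web-app
    secrets:
      - DB_USER_PASSWORD   # available inside the container at /run/secrets/DB_USER_PASSWORD
secrets:
  DB_USER_PASSWORD:
    file: /secrets/DB_USER_PASSWORD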

How to connect local Mongo database to docker

I am working on a Golang project. Recently I read about Docker and am trying to use it with my app. I am using MongoDB for the database.
Now the problem is that I am creating a Dockerfile to install all packages and to compile and run the Go project.
I am running mongo locally. If I run the Go program without Docker it gives me output, but if I use Docker for the same project (just installing dependencies and running the project with it), it compiles successfully but gives no output, with this error:
CreateSession: no reachable servers
My Dockerfile:
# Start from a Debian image with the latest version of Go installed
# and a workspace (GOPATH) configured at /go.
FROM golang
WORKDIR $GOPATH/src/myapp
# Copy the local package files to the container's workspace.
ADD . /go/src/myapp
#Install dependencies
RUN go get ./...
# Build myapp inside the container.
RUN go install myapp
# Run myapp by default when the container starts.
ENTRYPOINT /go/bin/myapp
# Document that the service listens on port 8080.
EXPOSE 8080
EXPOSE 27017
When you run your application inside Docker, it's running in a virtualized environment; it's just like another computer, but everything is virtual, including the network.
To connect your container to the host, Docker provides a special hostname, host.docker.internal, which resolves to an IP address on the host.
So, assuming that mongo is running on the host machine and bound to every interface, it can be reached from the container with the connection string:
mongodb://host.docker.internal:27017/database
Simply put, just use host.docker.internal as your mongodb hostname.
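Note that host.docker.internal is defined out of the box on Docker Desktop (Mac/Windows); on a Linux host you usually have to add it yourself. A hedged compose sketch, assuming Docker Engine 20.10+ (the service name and environment variable are examples):

version: '3.8'
services:
  myapp:
    build: .
    extra_hosts:
      # map host.docker.internal to the host's gateway address on Linux
      - "host.docker.internal:host-gateway"
    environment:
      - MONGO_URL=mongodb://host.docker.internal:27017/database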
In your Golang project, how do you specify the connection to mongodb? localhost:27017?
If you are using localhost in your code, localhost will refer to your Docker container itself, and since you don't have mongodb in the same container, you'll get the error.
If you are starting your container on the command line with docker run ..., add --network="host". If you are using docker-compose, add network_mode: "host".
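For example, a minimal compose sketch of that option (the service name is just a placeholder):

version: '3.8'
services:
  myapp:
    build: .
    # the container shares the host's network stack, so localhost:27017
    # now refers to the mongodb running on the host
    network_mode: "host"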
Ideally you would set up mongodb in its own container and connect the two from your docker-compose.yml, but that's not what you are asking for, so I won't go into that.
In future questions, please include the relevant Dockerfile and docker-compose.yml to the extent possible. It will help us give a more specific answer.

How to deploy desktop based application on kubernetes

I want to deploy my desktop-based application on Kubernetes. Can someone suggest some ways of doing it?
In Docker we used --net and --add-host to run it, but in Kubernetes we have not been able to find a solution.
Please help!
There are a bunch of desktop applications with Dockerfiles available that run on Linux desktops.
I am not sure if it is possible, but the idea is that to deploy desktop-based (GUI) applications to Kubernetes you need to consider a few things.
You need to make sure the Kubernetes nodes are desktops, not headless servers, otherwise it won't work.
Mount the node's X11 socket inside the container running the desktop application to allow the X11 connection:
--volume /tmp/.X11-unix:/tmp/.X11-unix
Export the node's DISPLAY environment variable to the container's DISPLAY:
-e DISPLAY=unix$DISPLAY
Here is a docker-compose file I use on my desktop.
version: '3.0'
services:
  eclipse:
    image: naeemrashid/eclipse
    volumes:
      - /tmp/.X11-unix:/tmp/.X11-unix
      - /home/$USER/containers/eclipse/workspace:/home/eclipse/workspace
    environment:
      - DISPLAY=unix$DISPLAY
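Translating the same idea into Kubernetes, an untested sketch of a Pod manifest might look like the following; the name and image come from the compose file above, the rest is an assumption, and it still requires the node itself to run an X server:

apiVersion: v1
kind: Pod
metadata:
  name: eclipse
spec:
  containers:
    - name: eclipse
      image: naeemrashid/eclipse
      env:
        - name: DISPLAY
          value: "unix:0"           # assumes the node's X server is on display :0
      volumeMounts:
        - name: x11-socket
          mountPath: /tmp/.X11-unix # mount the node's X11 socket into the container
  volumes:
    - name: x11-socket
      hostPath:
        path: /tmp/.X11-unix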

Expose mongo port in other container

I have this (custom) container which runs a Java program that requires mongo locally. Now, with Docker, I would like to set up mongo in its own container. So I guess, in order to expose this 27017 port locally to the java container, I need to set up an SSH tunnel, right? If there is an easier way please let me know.
So, there is this official mongo image, but I get the impression ssh is not installed or running. What would be the best approach to do this?
UPDATE: I've rephrased the question to focus more on port forwarding here.
You have to make your containers run on the same network. There is no need to SSH into your mongo or app container.
https://docs.docker.com/engine/userguide/networking/
First, define a network:
docker network create --driver bridge isolated_nw
Then start your containers using that newly created network:
docker run -p 27017:27017 --network=isolated_nw -itd --name=mongo-cont mongo
docker run --network=isolated_nw -itd --name=app your_image
The mongo image includes EXPOSE 27017, so from your app container you should be able to access the mongo container using its name, mongo-cont.
You can build your custom image on top of the official mongodb image, which gives you the flexibility to install additional required packages.
FROM mongo:latest
RUN apt-get update && apt-get install -y ssh
Also try to use docker-compose to build and link your containers together; it will ease the process greatly.
version: '2'
services:
  mongo:
    image: mongo:latest
    ports:
      - "27017"
  custom_project:
    build:
      context: .                    # parent directory containing the Dockerfile
      dockerfile: Dockerfile-Custom # name of the Dockerfile
    command: /root/docker-entrypoint.sh
The custom image above uses the official mongodb image as its base.
You are trying to SSH into your container to gain access to it, but that isn't how you connect. Docker provides functionality to securely connect via the following methods.
Connect into a running container - Docs:
docker exec -it <container name> bash
root@665b4a1e17b6:/#
Start a container from image, and connect to it - Docs:
docker run -it <image name> bash
root@665b4a1e17b6:/#
Note: If it is an Alpine based image, it may not have Bash installed. In that case using sh instead of bash in your commands should work. Mongo's Dockerfile looks to use debian:jessie which will have bash support.