How to deploy a desktop-based application on Kubernetes

I want to deploy my desktop-based application on Kubernetes. Can someone suggest some ways of doing it?
In Docker we used --net and --add-host to run the same application, but in Kubernetes we have not been able to find any equivalent solution.
Please help!

There are a bunch of desktop applications with Dockerfiles that run on Linux desktops.
I am not sure whether this is possible on Kubernetes, but the idea is that to deploy desktop-based (GUI) applications you need to consider a few things:
You need to make sure the Kubernetes nodes are desktops, not headless servers, otherwise it won't work.
Mount the node's X11 socket inside the container running the desktop application to allow X11 connections:
--volume /tmp/.X11-unix:/tmp/.X11-unix
Export the node's DISPLAY environment variable into the container's DISPLAY:
-e DISPLAY=unix$DISPLAY
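Put together as a single docker run command, that looks roughly like the following (a sketch; naeemrashid/eclipse is simply the image used in the compose file below):
docker run -d \
  --name eclipse \
  -e DISPLAY=unix$DISPLAY \
  -v /tmp/.X11-unix:/tmp/.X11-unix \
  naeemrashid/eclipse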
Here is a docker-compose file I use on my desktop:
version: '3.0'
services:
  eclipse:
    image: naeemrashid/eclipse
    container_name: eclipse
    volumes:
      - /tmp/.X11-unix:/tmp/.X11-unix
      - /home/$USER/containers/eclipse/workspace:/home/eclipse/workspace
    environment:
      - DISPLAY=unix$DISPLAY
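Translated to Kubernetes, the same setup looks roughly like the Pod below. This is only a sketch under a few assumptions: it reuses the naeemrashid/eclipse image from the compose file, assumes a node actually named my-desktop-node that runs an X server on display :0, and assumes that X server accepts local connections (e.g. after running xhost +local: on the node).
apiVersion: v1
kind: Pod
metadata:
  name: eclipse
spec:
  nodeSelector:
    kubernetes.io/hostname: my-desktop-node   # pin the Pod to a desktop node running X11
  containers:
  - name: eclipse
    image: naeemrashid/eclipse
    env:
    - name: DISPLAY
      value: "unix:0"                         # must match the X display on that node
    volumeMounts:
    - name: x11-socket
      mountPath: /tmp/.X11-unix
  volumes:
  - name: x11-socket
    hostPath:
      path: /tmp/.X11-unix                    # the node's X11 socket directory
      type: Directory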

Related

Connect to PostgreSQL from Flask app in another docker container

On a virtual machine I have two Docker containers running, named <postgres> and <system>, on a network named <network>. I can't change options in these containers. I have created a Flask application that connects to the database and outputs the required information. To connect from my local computer I use:
import psycopg2

conn = psycopg2.connect(
    database="db", user='user1', password='user1_passwd',
    host='<VM_ip>', port='<db_port>',
    sslmode='require',
    sslcert='./user1.crt',
    sslkey='./user1.key')
and it worked great.
But, when I run my application on the same VM and specify
conn = psycopg2.connect(
    database="db", user='user1', password='user1_passwd',
    host='<postgres>.<network>', port='<db_port>',
    sslmode='require',
    sslcert='./user1.crt',
    sslkey='./user1.key')
I get an error:
psycopg2.OperationalError: could not parse network address "<postgres>.<network>": Name or service not known.
Local connections are allowed in pg_hba; the problem is connecting from the new container on the VM.
Here are the settings of my new container:
version: '3'
services:
  app:
    container_name: app
    restart: always
    build: ./app
    ports:
      - "5000:5000"
    command: gunicorn -w 1 -b 0.0.0.0:8000 wsgi:server
I tried to make the same connection as from the local computer, specifying the VM_ip, but that didn't help either.
I also tried specifying the <postgres> container IP instead of its name in host=, but this also caused an error.
Do you know what could be the problem?
You first need to create a network that the containers will use to communicate. You can do that with:
docker network create <example>   # you can name it whatever you want
Then you need to connect both containers to the network you just made:
docker run -d --net <example> --name <postgres_container> <postgres_image>
docker run -d --net <example> --name <flask_container> <flask_image>
You can read more about the docker network in its documentation here:
https://docs.docker.com/network/
From what I can see, you might be using a docker-compose file to deploy the services. You can add one more top-level networks section alongside the services section, where you define the network the deployed services should use. That network also needs to be referenced in each service definition; this lets the internal DNS engine that docker-compose creates in the background discover all the services on that network by their service names.
A bridge network is a good driver to use here.
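As a rough sketch of what that could look like for the compose file in the question (the <network> placeholder is the question's existing network name, and external: true assumes that network was already created on the VM):
version: '3'
services:
  app:
    build: ./app
    ports:
      - "5000:5000"
    command: gunicorn -w 1 -b 0.0.0.0:8000 wsgi:server
    networks:
      - <network>
networks:
  <network>:
    external: true   # reuse the pre-existing network instead of creating a new one
With the app attached to that network, host='<postgres>' (the container name) should be resolvable through Docker's embedded DNS.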
You can use the following links for a better understanding of networks in docker-compose.
https://docs.docker.com/compose/compose-file/compose-file-v3/#network
https://docs.docker.com/compose/compose-file/compose-file-v3/#networks

How to use Jupyter Notebook inside docker for a whole project

I am trying to run Jupyter Notebook inside a Docker container.
My docker-compose setup is made of multiple in-house services and modules - in Python - that need to be accessed in order to run different experiments.
Should I just add a new Docker service that uses the same network as the other services?
Will that be enough to use the modules that are specified in the other services?
I'm supposing you want to access Jupyter from a Docker-based image.
If so, you can use the base image from
https://hub.docker.com/r/jupyter/minimal-notebook/tags?page=1&name=3.8
with port forwarding to your localhost.
For example:
docker run -it -p 8888:8888 jupyter/minimal-notebook:python-3.8.8
Or run it with docker-compose:
# docker-compose.yaml
version: '3.8'
services:
  fjupyter:
    image: jupyter/minimal-notebook:python-3.8.8
    ports:
      - 8888:8888
Using this base image you can install all the desired packages from a shell inside the container, but that wouldn't be the best approach, since containerization dedicates each container to a specific service;
so it's better to build a dedicated image (and hence a container) for each service.
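For instance, a minimal Dockerfile on top of that base image might look like this (only a sketch; requirements.txt stands in for whatever in-house modules the notebooks need):
# Dockerfile (sketch)
FROM jupyter/minimal-notebook:python-3.8.8
# install the project's in-house modules so the notebooks can import them
COPY requirements.txt /tmp/requirements.txt
RUN pip install --no-cache-dir -r /tmp/requirements.txt
Pointing the fjupyter service at this image (build: . instead of image:) and putting it on the same compose network as the other services lets the notebooks reach those services by their service names.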

kubernetes: Is a Pod also like a PC?

I see that Kubernetes uses Pods, and in each Pod there can be multiple containers.
For example, I create a Pod with:
Container 1: Django server - running at port 8000
Container 2: ReactJS server - running at port 3000
whereas, coming from a Docker background, in Docker we would do:
docker run --name django -d -p 8000:8000 some-django
docker run --name reactjs -d -p 3000:3000 some-reactjs
So is a Pod also like a PC with some Ubuntu OS on it?
No, a Pod is not like a PC/VM with Ubuntu on it.
There is no intermediate layer between your host and the containers in a pod. The only thing happening here is that the containers in a pod share some resources/namespaces in the host's kernel, and there are mechanisms in your host kernel to "protect" the containers from seeing other containers. Pods are just a mechanism to help you deploy a couple of containers that share some resources (like the network namespace) a little more easily. Fundamentally, they are just Linux processes running directly on the host.
(one nuanced technicality/caveat on the above statement: Docker and tools like it will sometimes run their own VM and may try to make that invisible to you. For example, Docker Desktop does this. Usually you can ignore this layer, but it is great to know it is there. The answer holds though: That one single VM will host all of your pods/containers and there is not one VM per pod.)
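To make this concrete, here is a rough Pod manifest for the example in the question (a sketch; some-django and some-reactjs are the image names used above, and since both containers share the Pod's network namespace they reach each other on localhost):
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
  - name: django
    image: some-django
    ports:
    - containerPort: 8000   # the reactjs container can reach this as localhost:8000
  - name: reactjs
    image: some-reactjs
    ports:
    - containerPort: 3000
Because the containers share a single network namespace, they also cannot both listen on the same port, which is another way a Pod differs from two separate PCs.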

Is a service running in a docker network secure from outside connection, even from localhost?

Question:
Can anybody with access to the host machine connect to a Docker network, or are services running within a Docker network only visible to other services on that network, assuming the ports are not exposed?
Background:
I currently have a web application with a postgresql database backend where both components are being run through docker on the same machine, and only the web app is exposing ports on the host machine. The web-app has no trouble connecting to the db as they are in the same docker network. I was considering removing the password from my database user so that I don't have to store the password on the host and pass it into the web-app container as a secret. Before I do that I want to ascertain how secure the docker network is.
Here is a sample of my docker-compose:
version: '3.3'
services:
  database:
    image: postgres:9.5
    restart: always
    volumes:
      # preserves the database between containers
      - /var/lib/my-web-app/database:/var/lib/postgresql/data
  web-app:
    image: my-web-app
    depends_on:
      - database
    ports:
      - "8080:8080"
      - "8443:8443"
    restart: always
    secrets:
      - source: DB_USER_PASSWORD
secrets:
  DB_USER_PASSWORD:
    file: /secrets/DB_USER_PASSWORD
Any help is appreciated.
On a native Linux host, anyone who has or can find the container-private IP address can directly contact the container. (Unprivileged prodding around with ifconfig can give you some hints that it's there.) On non-Linux there's typically a hidden Linux VM, and if you can get a shell in that, the same trick works. And of course if you can run any docker command then you can docker exec a shell in the container.
Docker's network-level protection isn't strong enough to be the only thing securing your database. Using standard username-and-password credentials is still required.
(Note that the docker exec path is especially powerful: since the unencrypted secrets are ultimately written into a path in the container, being able to run docker exec means you can easily extract them. Restricting docker access to root only is also good security practice.)
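As an illustration (a sketch using the database service name from the compose file above and placeholder credentials), a user on the host could do something like:
# find the container's private IP on its Docker network
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' $(docker ps -qf name=database)
# then talk to PostgreSQL directly, no published port required
psql -h <container_ip> -U <db_user> <db_name>
which is exactly why the database should keep its own authentication even though it sits on an internal network.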

Locally deploying a GCloud app with a custom Docker image

I have been deploying my app from a bash terminal using an app.yaml script and the command:
gcloud app deploy app.yaml
This runs a main.app script to set up the environment from a custom-made Docker image.
How can I deploy this locally only, so that I can make small changes and see their effects before actually deploying, which takes quite a while?
If you want to run your app locally, you should be able to do that outside of the docker container. We actually place very few restrictions on the environment - largely you just need to make sure you're listening on port 8080.
However, if you really want to test locally with Docker, you can:
# generate the Dockerfile for your applications
gcloud beta app gen-config --custom
# build the docker container
docker build -t myapp .
# run the container
docker run -it -p 8080:8080 myapp
From there, you should be able to hit http://localhost:8080 and see your app running.
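For reference, the app.yaml of a custom-runtime app like this is typically little more than the following (a sketch; your real app.yaml may carry additional settings):
# app.yaml (sketch) - App Engine flexible environment with a custom Docker image
runtime: custom
env: flex
so the generated Dockerfile next to it is what defines the image used both by gcloud app deploy and by the local docker build above.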
Hope this helps!