What is the purpose of Docker-compose and Portainer? - docker-compose

I am an engineering newbie who is trying to learn something while experimenting with Raspberry Pi. I wanted to try out Docker, so I followed along with a tutorial, which installed Docker, docker-compose and via that, Portainer.
Now I am a bit lost.
What is the purpose of docker-compose?
What is the purpose of Portainer?
And if I want to add a container like Nextcloud or OpenVPN, how do I do that? Through Portainer? Or docker-compose?
Thanks!

Portainer is just a visual management tool for the Docker suite. You can run docker commands anywhere a portainer-agent runs, as long as you have access to it; note that it is not an official Docker product.
docker-compose, on the other hand, is a set of docker commands shipped alongside Docker Engine (community and enterprise) that helps you orchestrate containers on a single node (PC or VM). If you want to orchestrate more than a single node, you should read about Docker Swarm or Kubernetes.
A very nice article to understand swarm vs compose differences is here.
Portainer is just a tool on top of Docker that gives you a UI for free, because Docker's native Universal Control Plane is available in the enterprise edition only.
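To make the single-node orchestration concrete: docker-compose is driven by a docker-compose.yml file that declares all the containers of one project. A minimal sketch (the service names, images, and credentials here are just examples, not from any particular tutorial):

```yaml
# docker-compose.yml -- hypothetical two-service stack on one host
version: "3.8"
services:
  web:
    image: nextcloud:latest      # example app image
    ports:
      - "8080:80"                # host port 8080 -> container port 80
    depends_on:
      - db                       # start the database first
  db:
    image: mariadb:10
    environment:
      MYSQL_ROOT_PASSWORD: change-me   # example credential only
```

Running `docker-compose up -d` in the directory containing this file starts both containers on that node; `docker-compose down` tears them down again. Portainer can deploy the same kind of file through its "stacks" UI.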

Related

GITLAB Autoscaling in Kubernetes

I am using GitLab Runner in Kubernetes for building our application. Since ours is a docker-in-docker use case, we are using Kaniko to build images from a Dockerfile.
I am having a hard time figuring out how to implement horizontal/vertical scaling for pods and instances.
Looking at this article https://docs.gitlab.com/runner/configuration/advanced-configuration.html#the-runnersmachine-section , it says to use docker+machine image, but
I don't want to have any docker dependency in our build process especially when Kubernetes has deprecated docker support in newer versions.
Any advice?

How to run docker image in Kubernetes (v1.21) pod for CI/CD scenario?

We're investigating using an AKS cluster to run our Azure Pipeline Agents.
I've looked into tools like Kaniko for building docker images, which should work fine, however some of our pipelines run docker commands, e.g. we run checkov using the docker image, but I've struggled to find any solution that seems to work, given the deprecation of the docker shim in kubernetes.
The obvious solution would be to add the tools that we currently run from docker into our agent image, but this isn't a great solution: any time a developer wants to run a tool like that, we would need to rebuild the agent image, which is less than ideal.
Any suggestions?
Thanks
You could just use nerdctl or crictl for your commands, and even create an alias for that (especially with nerdctl): alias docker="nerdctl"
Since docker images are, more precisely, container images in the OCI image format, you will have no issues running them with containerd or CRI-O.

Material on Building a REST api from within a docker container

I'm looking to build an API for an application that is going to run in its own docker container. It needs to work with some other applications via its REST APIs. I'm new to development and don't understand the process very well. Can you share the broad steps necessary to build and release the APIs, so that my application runs safely within the docker container while whatever communication needs to happen externally works out well?
For context: I'm going to be working on a Google Compute Engine VM instance, and the application I'm building is a Hyperledger Fabric program written in Go.
Links to reference material and code would also be appreciated.
REST API implementation is very easy in Go. You can use the inbuilt net/http package. Here's a tutorial which will help you understand its usage. https://tutorialedge.net/golang/creating-restful-api-with-golang/
Note: if you are planning on developing a production server, the default HTTP client is not recommended. It has no timeout, so under a heavy call frequency hanging connections can exhaust resources and knock down the server. In that case, you have to use a custom HTTP client as described here: https://medium.com/@nate510/don-t-use-go-s-default-http-client-4804cb19f779
For learning docker I would recommend the Docker docs; they're very good and cover a lot of ground. Docker Swarm and orchestration are useful things to learn, but most people aren't using Docker Swarm anymore and use things like Kubernetes instead. Same principles, different tech. I would definitely go through https://docs.docker.com/ and implement the examples on your own computer. Then just practice by looking at other people's Dockerfiles and building your own. A good understanding of Linux will definitely help with installing packages and so on.
I haven't used go myself but I suspect it shouldn't be too hard to deploy into a docker container.
The last step of a production deployment will be similar whatever you're using, docker or no docker. The VM will need a web server like Apache or nginx to expose the ports you wish to make public; behind it you run the docker container or the Go server independently, and then you'll have your system!
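To deploy the Go binary into a container, a typical approach is a multi-stage Dockerfile, so the final image contains only the compiled binary. This is a hypothetical sketch; the Go version, port, and paths are assumptions you would adapt to your project:

```dockerfile
# build stage: compile a static Go binary
FROM golang:1.21 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /app ./...

# runtime stage: small image with just the binary
FROM alpine:3.19
COPY --from=build /app /app
EXPOSE 8080
ENTRYPOINT ["/app"]
```

Build and run with `docker build -t myapi .` and `docker run -p 8080:8080 myapi`, then point nginx or Apache at port 8080 as the public entry point.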
Hope this helps!

Learning to use Kubernetes on one single computer

I need to learn how to use Kubernetes. I've read the first sections of a couple of introductory tutorials, but I've never found one that explains, step by step, how to build a simulated real-world example on a single computer.
Is Kubernetes by nature so distributed that even the 101-level tutorials can only be performed on clusters?
Or can I learn (execute important examples) the important stuff there is to know by just using my Laptop without needing to use a stack of Raspberry Pi's, AWS or GCP?
The easiest might be minikube.
Minikube is a tool that makes it easy to run Kubernetes locally. Minikube runs a single-node Kubernetes cluster inside a VM on your laptop for users looking to try out Kubernetes or develop with it day-to-day.
For a resource that explains how to use this, try this getting started guide. It runs through an entire example application using a local development environment.
If you are okay with using Google Cloud Platform (I think one gets free credits initially), there is hello-node.
If you want to run the latest and greatest (not necessarily stable) and you're using Linux, it is also possible to spin up a local cluster from a cloned copy of the kubernetes sources, using hack/local_up_cluster.sh.
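Once minikube is up, a good first exercise is deploying something from a manifest. This is a minimal, illustrative Deployment; the name and image are arbitrary choices for the example:

```yaml
# deployment.yaml -- apply with: kubectl apply -f deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello
spec:
  replicas: 2                    # two pods on the single minikube node
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
        - name: hello
          image: nginxdemos/hello   # example public image
          ports:
            - containerPort: 80
```

After applying it, `kubectl expose deployment hello --type=NodePort --port=80` followed by `minikube service hello` opens the app in a browser, all on one laptop.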

For multiple projects using Docker, use multiple VMs or a single host with multiple containers

Suppose I had three apps that are currently hosted at Digital Ocean or AWS. Each of them use at least one VM for the database and one or more VMs for the web app.
Now let's say that I wanted to get one dedicated server at OVH with 64GB of RAM and use docker to deploy these apps. Each project would have its own docker-compose file. I'm thinking of two ways of doing this:
Install VMWare Esxi on the server, create one VM for each project and deploy docker containers for the web and database.
Just install Ubuntu as the host OS and manage containers for all apps using separate network entry points(IPs) for each project.
Would I be wasting too much of the server resources going for the first choice?
Would I be overcomplicating my infrastructure by going for the second?
I understand both are valid choices, but what would be the better/suggested way?
Thanks for the help!
With the first option you will need a full VM for each app, with all the memory-sharing and I/O overhead that implies. You may use memory ballooning with VirtualBox (ESXi should have such a feature too, maybe under a different name). And within every VM you'll have the docker stack included.
If you use a native OS you'll need the docker stack only once.
What OS do you use within your docker images? A 200 MB Ubuntu? Or a 5 MB Alpine? If you choose Alpine as your host OS and/or your image base, you'll be able to keep your "container overhead" much smaller.
It also depends on what system services your apps need (cron, upstart, ...) and how many resources each app needs. Is it a JVM-based app that needs its own JVM in every container? Etc.
As a first approach I would plan an Alpine host with Alpine-based docker images. If there are apps that really need Ubuntu images, you can just use Ubuntu in the image for that specific app.
Also have a look at Docker vs. VM.