I am trying to install Kubernetes on a Debian 9 (stretch) server, which is in the cloud and therefore can't do virtualization. It also doesn't have systemd. I'm aiming for a really minimal configuration, not a big cluster.
I've found Minikube, https://docs.gitlab.com/charts/development/minikube/index.html, which is supposed to run without virtualization using Docker, but it requires systemd, as mentioned here: https://github.com/kubernetes/minikube/issues/2704 (and yes, I get the related error message).
I also found k3s, https://github.com/rancher/k3s, which can run under either systemd or OpenRC, but when I install OpenRC following https://wiki.debian.org/OpenRC I don't have the "net" service it depends on.
Then I found microk8s, https://microk8s.io/, which needs systemd simply because snapd needs systemd.
Is there some other alternative, or a solution to the problems mentioned? Or has Poettering already bribed everyone?
Since you are well off the beaten path, you can probably just run things by hand with k3s; it's a single executable, AFAIK. See https://github.com/rancher/k3s#manual-download as a simple starting point. You will eventually want some kind of service monitor to restart things if they crash: if not systemd, then perhaps Upstart (which is not packaged for Debian 9) or runit (which itself usually runs under supervision).
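A rough sketch of what "running it by hand" could look like, assuming an amd64 machine and the standard k3s release asset name (check the releases page for your architecture):

```bash
# Download the single k3s binary (asset name assumed; pick the right one for your arch).
curl -LO https://github.com/rancher/k3s/releases/latest/download/k3s
chmod +x k3s && sudo mv k3s /usr/local/bin/

# Run the server; wrap this in runit/nohup/etc. to keep it alive without systemd.
sudo /usr/local/bin/k3s server &

# k3s bundles kubectl, so once it is up you can check the node:
sudo k3s kubectl get nodes
```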
Is there an Ubuntu version of Kubernetes in Docker that works like Docker for Mac (https://blog.docker.com/2018/01/docker-mac-kubernetes/) and Docker for Windows (https://docs.docker.com/docker-for-windows/#kubernetes)?
Minikube consumes a lot of resources, and I want to try a lighter alternative. I found that Docker for Mac supports Kubernetes, but my machine runs Ubuntu 18.04.
As you may know, there are a lot of projects that offer a local K8s solution. Minikube is the closest to an official mini distribution for local testing and development, but if you want to try lightweight options you can check:
Kind runs Kubernetes clusters in Docker containers. It supports multi-node clusters as well as HA clusters. Because it runs K8s in Docker, kind can run on Windows, Mac, and Linux. Note that kind was designed primarily for testing Kubernetes itself, so it may lack some developer-friendly features.
K3s is a project by Rancher, a lightweight Kubernetes offering suitable for edge environments, IoT devices, CI pipelines, and even ARM devices like Raspberry Pis. It runs on any Linux distribution without additional external dependencies or tools. K3s achieves its light weight by replacing Docker with containerd and using SQLite3 as the default DB (instead of etcd). It needs about 512 MB of RAM and 200 MB of disk space.
K3d wraps k3s, the lightweight Kubernetes distribution above, so that it runs inside Docker containers (similar to kind).
MicroK8s runs upstream Kubernetes as native services on Linux systems that support snap. It's a good option if you are running Ubuntu on your laptop, and there is a very good installation tutorial on the MicroK8s site; a minimal install sketch also follows this list.
And there are plenty more. You can check what solution suits you best.
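For reference, the MicroK8s route is roughly this on a snap-enabled system (Ubuntu is assumed here):

```bash
# Install MicroK8s from snap and wait for it to come up.
sudo snap install microk8s --classic
sudo microk8s.status --wait-ready

# The bundled kubectl talks to the local cluster.
sudo microk8s.kubectl get nodes

# Optional add-ons, e.g. cluster DNS and the dashboard.
sudo microk8s.enable dns dashboard
```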
Check out kind; it is Kubernetes in Docker.
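A minimal sketch of spinning up a cluster with kind, assuming Docker is already installed (the release version pinned below is just an example):

```bash
# Fetch a kind release binary (pick the current version/arch from the releases page).
curl -Lo kind https://github.com/kubernetes-sigs/kind/releases/download/v0.8.1/kind-linux-amd64
chmod +x kind && sudo mv kind /usr/local/bin/

# Each "node" is a Docker container; this creates a single-node cluster.
kind create cluster --name dev

# kind writes a kubeconfig context named kind-<cluster-name>.
kubectl cluster-info --context kind-dev
```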
I am trying to install Kubernetes on Ubuntu 16.04.
I am able to install the other Kubernetes components, but I don't know whether kube-proxy is installed. Should I get a separate binary package for it, or does it come prepackaged with the Kubernetes apt-get installation?
In most cases, installing kube-proxy on the node itself is not required, as the common pattern is to run kube-proxy as a DaemonSet in your cluster.
In the regular apt-get packages you would normally find kubectl, kubeadm and kubelet. If you use kubeadm to create the cluster, it will automatically set up kube-proxy as well (in the form of a container, like the rest of the Kubernetes control plane components). Therefore, you don't need to install it separately.
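As a rough sketch of that flow, assuming the Kubernetes apt repository is already configured on the machine:

```bash
# Install the standard packages; kube-proxy is not one of them.
sudo apt-get install -y kubelet kubeadm kubectl

# Bootstrap the control plane; kubeadm deploys kube-proxy as a DaemonSet for you.
sudo kubeadm init

# Verify: the DaemonSet and its pods live in the kube-system namespace.
kubectl -n kube-system get daemonset kube-proxy
kubectl -n kube-system get pods -l k8s-app=kube-proxy
```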
If you use the official Kubernetes tarball and try to set the cluster up manually yourself, you will need to configure kube-proxy just like the rest of the components, but the binaries are included in the tarball. This documentation shows the essential options to configure it: https://kubernetes.io/docs/getting-started-guides/scratch/#kube-proxy. Another resource is Kubernetes the Hard Way: https://github.com/kelseyhightower/kubernetes-the-hard-way/blob/master/docs/09-bootstrapping-kubernetes-workers.md
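In that manual scenario, starting the binary by hand looks roughly like this; the kubeconfig path and cluster CIDR are example values in the spirit of the Hard Way guide:

```bash
# Run kube-proxy on a worker node against an existing kubeconfig.
sudo ./kube-proxy \
  --kubeconfig=/var/lib/kube-proxy/kubeconfig \
  --proxy-mode=iptables \
  --cluster-cidr=10.200.0.0/16
```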
Is it possible to run a KVM virtual machine inside of a Google Compute Engine instance? Nested virtualization, in short?
As of right now, the virtualized environment GCE instances run on doesn't expose the virtualization extensions KVM requires to function. The installer indicates as much, and running:
```
sudo /etc/init.d/qemu-kvm start
[FAIL] Your system does not have the CPU extensions required to use KVM. Not doing anything. ... failed!
```
PS - Even so, at least in theory, there's nothing preventing the execution of virtualized environments that do not depend on these extensions: Docker, QEMU (stand-alone), etc...
Yes, you can use nested virtualization in the GCE environment.
When you first asked this question, and when @sammy-villoldo first answered, you could not.
But on September 28, 2017, Google announced:
Google Compute Engine now supports nested virtualization in beta
You used to need to be careful, as nested virtualization is restricted to CPU architectures based on Haswell or newer, and those were not available everywhere. Scanning the list now, it appears every GCE zone has Haswell or newer as the default, so that's no longer a problem.
Their documentation contains all the details.
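The documented flow boils down to building an image that carries the special VMX license and booting instances from it. A sketch with example disk, image, and zone names (check Google's docs for the exact license URL current when you read this):

```bash
# Create an image from an existing boot disk, attaching the nested-virtualization license.
gcloud compute images create nested-vm-image \
  --source-disk my-debian9-disk --source-disk-zone us-central1-b \
  --licenses "https://compute.googleapis.com/compute/v1/projects/vm-options/global/licenses/enable-vmx"

# Boot an instance from that image in a Haswell-or-newer zone.
gcloud compute instances create nested-vm --zone us-central1-b --image nested-vm-image

# Inside the instance, a non-zero count means VMX is exposed and KVM will work.
grep -cw vmx /proc/cpuinfo
```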
Even in CI environments layered on GCE it is now possible to do nested virtualization; Travis CI, for instance, implements it with their Ubuntu Bionic, language: generic (or bash) images. You can create a free GitHub or GitLab account and connect a repo to Travis to play with it at zero cost if you like.
Here is an example config: https://travis-ci.org/ankidroid/Anki-Android/builds/607187626/config
What are the pros and cons of using Debian packages to deploy a web application as opposed to using Fabric? I have only ever used Debian packages.
I'm also interested in hearing about problems you've bumped into when using Fabric and wished you had used Debian packages.
Debian
The Debian packaging system is a package manager. It allows users to manage packages through various programs, like dpkg or apt, on a system.
What it does for you:
builds packages from source
handles package dependencies and package versions
installs, updates and removes programs on a system
works at a low level; compiled binaries may be architecture-specific (i386, amd64)
Cons:
To deploy the application, the configuration must be provided in your package, or some default configuration has to be used
Different binaries are needed for systems with different architectures
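To make the packaging side concrete, here is a minimal, hand-rolled sketch of a .deb for a web application. The package name, dependency, and paths are hypothetical, and a real package would normally be built with debhelper:

```bash
# Lay out the package tree: control metadata plus the files to install.
mkdir -p mywebapp_1.0-1/DEBIAN mywebapp_1.0-1/srv/mywebapp
cat > mywebapp_1.0-1/DEBIAN/control <<'EOF'
Package: mywebapp
Version: 1.0-1
Architecture: all
Maintainer: You <you@example.com>
Depends: nginx
Description: Example web application package
EOF
cp app.tar.gz mywebapp_1.0-1/srv/mywebapp/   # your application artifact

# Build the .deb and install it; apt resolves the declared dependencies.
dpkg-deb --build mywebapp_1.0-1
sudo apt install ./mywebapp_1.0-1.deb
```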
Fabric
It is a Python library and command-line tool for streamlining the use of SSH for application deployment or systems administration tasks.
What it does for you:
configures your system
executes commands on local/remote servers (systems administration)
deploys your application, does rollbacks; mainly automates deployment through a script
works at a higher level; does not depend on system architecture but on the OS and package manager
Cons:
It cannot replace the package manager on a system; it manages packages on top of it
You should know the commands and folders specific to your package manager / OS
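For illustration, a minimal deployment sketch with the Fabric 1.x-style API; the host name, paths, and task are hypothetical examples:

```bash
pip install 'fabric<2'

# A tiny fabfile that pulls new code and restarts the app over SSH.
cat > fabfile.py <<'EOF'
from fabric.api import cd, env, run

env.hosts = ['web1.example.com']      # hypothetical target server

def deploy():
    """Update the application and restart it."""
    with cd('/srv/mywebapp'):          # hypothetical deploy path
        run('git pull')
        run('./venv/bin/pip install -r requirements.txt')
        run('sudo service mywebapp restart')
EOF

# Run the task against every host in env.hosts.
fab deploy
```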
Update
I was already familiar with Debian when Fabric came along, so Debian has stayed my preferred tool. Why do I use Fabric at all? It eases application deployment and is a handy tool for developers. Here are some reasons why I would use Debian over Fabric:
When I am not going into production and am still developing and testing. Debian is suitable most of the time while code is being added or modified; Fabric just eases the transition from development to production.
Sometimes, if I deploy the application on my machine only, Fabric seems like overkill. If deployment does not involve many machines but requires several dependencies, I would stick to Debian.
When rollback or undoing is not an option. Fabric will simply execute your commands, safe or not; if you are not adept at handling system errors/exceptions, test things somewhere else before relying on Fabric. (Debian packaging is part of the system, so you have to use it and other system tools anyway.)
Virtual machines hold great promise as a way to distribute hard-to-configure applications. I have been using JeOS vmbuilder (and some bash scripts) to generate my appliances, but I'm looking for something more elegant.
In my case, I'm looking for a solution that will build a Linux-based VM with configured versions of Tomcat and MySQL as a base. Each future release would be a new WAR file and a SQL update script. It would be really nice if already-deployed VMs could self-update and test builds could be pushed to EC2.
In my brief search, I've found rPath rBuilder, TurnKey Linux, Vagrant, SUSE Studio, JeOS vmbuilder, and VMware Studio. Rather than try all of these, I figured I'd ask what this community uses to build and distribute appliances...
I use Pungi myself.