Does Kubernetes support downloading resources like Mesosphere?

We know that Mesosphere provides the Mesos fetcher in DC/OS to download resources into the sandbox directory. Does Kubernetes provide anything similar?

While Kubernetes does not have a feature like the Mesos fetcher, it is still possible to copy or download resources into a container in the following ways:
Docker's COPY and ADD instructions copy resources from the host into the image; ADD also supports tar extraction and remote URLs.
Download or extract resources inside the container at runtime using commands like wget, curl, lynx, tar, and gunzip (a sketch of this approach follows below).
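The closest Kubernetes analog to the Mesos fetcher is an init container that downloads resources into a shared emptyDir volume before the application container starts. A minimal sketch, assuming a hypothetical resource URL and application image:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: fetcher-demo
spec:
  # Runs to completion before the app container starts, like the Mesos fetcher.
  initContainers:
  - name: fetch
    image: busybox
    command: ["sh", "-c", "wget -O- https://example.com/assets.tar.gz | tar -xz -C /data"]
    volumeMounts:
    - name: sandbox
      mountPath: /data
  containers:
  - name: app
    image: my-app:latest   # hypothetical application image
    volumeMounts:
    - name: sandbox
      mountPath: /data     # downloaded resources are visible here
  volumes:
  - name: sandbox
    emptyDir: {}
EOF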

No. Kubernetes does not have any built-in feature to download and inject files into a container the way Mesos does.
In fact, the Mesos fetcher predates Mesos's Docker image support: before images, the fetcher was the primary way to download the executable and any supporting files. Kubernetes never needed that feature because it has always required a container image. That said, having both options available can be useful.
The Mesos fetcher is supported by Mesos, Marathon, and Mesosphere DC/OS.
Kubernetes could hypothetically add support for arbitrary file fetching in the future, but there hasn’t been a lot of demand, and it would probably require either container dependencies within a pod (to use a controller-injected sidecar), a kubelet plugin (to download before container start), or a native fetcher-like feature.

Related

Persistent volume change: Restart a service in a Kubernetes container

I have an HTTP application (Odoo). This app supports installing/updating modules (addons) dynamically.
I would like to run this app in a Kubernetes cluster and dynamically install/update the modules.
I have 2 solutions for this problem. However, I was wondering if there are other solutions.
Solution 1:
Include the custom modules with the app in the Docker image
Every time I make a change to a custom module, I push it to a Git repository. Jenkins pulls the changes, builds a new image, and then applies the new image to the Kubernetes cluster.
Advantages: I can manage the Docker image versions and redeploy a known-good image if something goes wrong.
Drawbacks: This solution is acceptable for production; however, every custom module repository has to be listed in the Dockerfile. Suppose I have two custom modules, each in its own repository: a change to either one triggers a rebuild of the whole Docker image.
Solution 2:
Have a persistent volume that contains only the custom modules.
If a change is made to a custom module, it is updated in the persistent volume.
The changes then need to be applied to each pod running the app (perhaps by a restart).
Advantages: Small changes don't trigger an image build, and we don't need to recreate the pods each time.
Drawbacks: Controlling the version of each update is difficult (I don't know if Kubernetes offers version control for persistent volumes).
Questions:
Is there another solution to solve this problem?
For both methods, there is a command that must be executed for the module changes to take effect: odoo --update "module_name". This command takes the module name. For solution 2, how do I execute a command in each pod?
For solution 2, is it better to restart the app service (Odoo) instead of restarting all the pods? Meaning, if we can execute a command on each pod, we can just restart the app's service.
Thank you very much.
You will probably be better off with your first solution, especially if you already have the whole toolchain to rebuild and deploy images. It will be easier to roll back to previous versions and to troubleshoot, since you know exactly which version is running in each pod.
There is an alternative solution that is sometimes used to provision static assets on web servers: add an emptyDir volume and a sidecar container to the pod. The sidecar pulls the changes from your plugin repositories into the emptyDir at a fixed interval. Your app container, sharing the same emptyDir volume, then has access to the plugins; a sketch follows below.
In any case, running the command to update the plugin is going to be complicated. You could run it at a fixed interval, but your app might not handle that well.
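A minimal sketch of that sidecar pattern, assuming a hypothetical addons repository and Odoo image, and using git-sync as the puller (any image that pulls periodically would do):

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: odoo-with-addons
spec:
  containers:
  - name: odoo
    image: odoo:13                      # hypothetical app image/tag
    volumeMounts:
    - name: addons
      mountPath: /mnt/extra-addons      # hypothetical addons path
  # Sidecar that re-pulls the addons repository at a fixed interval.
  - name: addons-sync
    image: k8s.gcr.io/git-sync/git-sync:v3.1.6   # check current registry path/tag
    args:
    - --repo=https://github.com/example/odoo-addons.git   # hypothetical repo
    - --branch=master
    - --wait=60                         # seconds between pulls
    - --root=/mnt/extra-addons
    volumeMounts:
    - name: addons
      mountPath: /mnt/extra-addons
  volumes:
  - name: addons
    emptyDir: {}
EOF

As for running the update command in each pod (your question 2), one option is to loop over the pods with kubectl get pods -l app=odoo -o name and run kubectl exec on each; whether restarting only the Odoo service inside the container is safer than restarting the pods depends on how well Odoo tolerates in-place updates.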

Creating a proper kubeconfig file for a 2-node Gentoo Linux Kubernetes cluster

I have two servers at home running Gentoo Linux ~amd64. I would like to install Kubernetes on them to play with it a bit.
Gentoo now packages all the Kubernetes-related dependencies in one package called sys-cluster/kubernetes, and the latest version available at the moment is 1.18.3.
The last time I played with Kubernetes was several years ago, and I think I have completely forgotten everything.
So I installed Kubernetes on both servers. Since I use systemd and the package only ships a systemd service for the kubelet, I also created systemd units for kube-apiserver, kube-controller-manager, kube-proxy, and kube-scheduler.
This package also comes with kubeadm, but I would like to know how to install and configure Kubernetes manually.
Now I want to create a kubeconfig file for my cluster configuration.
I googled and found the following URL: http://docs.shippable.com/deploy/tutorial/create-kubeconfig-for-self-hosted-kubernetes-cluster/
The first step there is "Make sure you can access the cluster", but I want to create the kubeconfig precisely so that the services know how to access my cluster!
That website also assumes secrets have already been configured, which they haven't; I'm starting from scratch, so this is probably not the way to go.
In general, I want to know how to properly create a kubeconfig file for my setup; then I'll configure the services to use this kubeconfig file and go on from there.
Any information regarding this issue would be greatly appreciated.
I also asked this in the Kubernetes Slack channel, and they pointed me to this project: https://github.com/kelseyhightower/kubernetes-the-hard-way
It's a documentation project on how to configure Kubernetes the hard way. The documentation sets the cluster up on Google Cloud, but it's easy to understand what was done in the cloud and how to configure the same on your own network.
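For reference, that guide generates each component's kubeconfig with kubectl config rather than writing the file by hand. A minimal sketch for the kubelet, assuming a CA and client certificates have already been generated (the file paths, names, and server address below are hypothetical):

# Build a kubeconfig for one component (here the kubelet); repeat per component.
kubectl config set-cluster home-cluster \
  --certificate-authority=ca.pem \
  --embed-certs=true \
  --server=https://192.168.1.10:6443 \
  --kubeconfig=kubelet.kubeconfig

kubectl config set-credentials system:node:server1 \
  --client-certificate=server1.pem \
  --client-key=server1-key.pem \
  --embed-certs=true \
  --kubeconfig=kubelet.kubeconfig

kubectl config set-context default \
  --cluster=home-cluster \
  --user=system:node:server1 \
  --kubeconfig=kubelet.kubeconfig

kubectl config use-context default --kubeconfig=kubelet.kubeconfig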

s390x images for control plane nodes - K8s

I am trying to set up Kubernetes on s390x machines, and I downloaded the kubeadm, kubectl, and kubelet packages built for the s390x architecture. I was under the impression that kubeadm init would download control plane images for the same architecture, which proved to be incorrect.
kubeadm init seems to have downloaded amd64 images, which results in the following error: standard_init_linux.go:187: exec user process caused "exec format error"
Can someone please let me know if there are s390x-specific images for the containers below? If yes, please provide the container tags or links to them:
k8s.gcr.io/kube-apiserver:v1.17.2
k8s.gcr.io/kube-controller-manager:v1.17.2
k8s.gcr.io/kube-scheduler:v1.17.2
k8s.gcr.io/kube-proxy:v1.17.2
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.4.3-0
k8s.gcr.io/coredns:1.6.5
From the docs we can read:
kubeadm deb/rpm packages and binaries are built for amd64, arm (32-bit), arm64, ppc64le, and s390x following the multi-platform proposal.
Multiplatform container images for the control plane and addons are also supported since v1.12.
Only some of the network providers offer solutions for all platforms. Please consult the list of network providers above or the documentation from each provider to figure out whether the provider supports your chosen platform.
The repo for s390x is available here.
I think it might be helpful to follow this guide: Installing Kubernetes 1.12 on SUSE Linux using kubeadm.
To solve the problem:
Download the control plane images built for s390x (kube-controller-manager-s390x:v1.17.2, etc.)
Tag them with the names that kubeadm looks up
Run the kubeadm init command
You can find more information here: kubernetes-for-s390x, kubeadm-s390x.
Resolved with the following steps:
1) Downloaded the control plane images for s390x from the k8s Docker repository (kube-controller-manager-s390x:v1.17.2, and likewise for the others)
2) Tagged the images to kube-controller-manager:v1.17.2, etc., because the kubeadm manifests look for those names
3) Initialized my cluster, and there it is: "Your Kubernetes control-plane has initialized successfully!"
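A sketch of steps 1 and 2, assuming the -s390x-suffixed image names follow the pattern above (verify the exact tags in the registry before relying on them):

# Pull the s390x images and retag them with the names kubeadm expects.
for img in kube-apiserver kube-controller-manager kube-scheduler kube-proxy; do
  docker pull k8s.gcr.io/${img}-s390x:v1.17.2
  docker tag  k8s.gcr.io/${img}-s390x:v1.17.2 k8s.gcr.io/${img}:v1.17.2
done
# pause, etcd, and coredns may already publish multi-arch manifests;
# verify with "docker manifest inspect" before retagging them too.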

Install Custom Connector To Kafka Connect on Kubernetes

I'm running the Kafka Kubernetes Helm deployment; however, I am unsure how to install a custom plugin.
When running a custom plugin on my local version of Kafka, I mount the volume /myplugin into the Docker image and then set the plugin path environment variable.
I am unsure how to apply this workflow to the Helm charts / Kubernetes deployment, mainly how to mount the plugin into the Kafka Connect pod such that it can be found in the default plugin.path=/usr/share/java.
Have a look at the last few slides of https://talks.rmoff.net/QZ5nsS/from-zero-to-hero-with-kafka-connect. You can mount your plugins, but the best way is either to build a new image that extends cp-kafka-connect-base, or to install the plugin at runtime, both using Confluent Hub.
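To illustrate the runtime approach: a common pattern is to override the container command so that confluent-hub installs the connector before the worker starts. A sketch using docker run (the connector name and image tag are placeholders, and the usual worker/broker environment variables are omitted for brevity; in the Helm chart, the same override goes into the Connect container's command):

# Install a connector from Confluent Hub at startup, then launch the worker.
docker run -d --name kafka-connect \
  -e CONNECT_PLUGIN_PATH="/usr/share/java,/usr/share/confluent-hub-components" \
  confluentinc/cp-kafka-connect-base:5.5.0 \
  bash -c "confluent-hub install --no-prompt confluentinc/kafka-connect-datagen:0.3.3 && /etc/confluent/docker/run"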

Start Kubernetes job with http request

What is the simplest way to start a Kubernetes job with an HTTP request (webhook)? I need to build a Docker image after a push to GitHub, and I have to do it inside the cluster.
I think you are looking for Knative, mainly the Build part of it.
Knative is still in its early stages, but it is pretty much what you need. If the Build feature does not meet your needs, you can still use other features like Serving to trigger a container image from HTTP calls and run the tools you need.
Here is the description from the Build Docs:
A Knative Build extends Kubernetes and utilizes existing Kubernetes primitives to provide you with the ability to run on-cluster container builds from source. For example, you can write a build that uses Kubernetes-native resources to obtain your source code from a repository, build a container image, then run that image.
While Knative builds are optimized for building, testing, and deploying source code, you are still responsible for developing the corresponding components that:
Retrieve source code from repositories.
Run multiple sequential jobs against a shared filesystem, for example:
Install dependencies.
Run unit and integration tests.
Build container images.
Push container images to an image registry, or deploy them to a cluster.
The goal of a Knative build is to provide a standard, portable, reusable, and performance optimized method for defining and running on-cluster container image builds. By providing the “boring but difficult” task of running builds on Kubernetes, Knative saves you from having to independently develop and reproduce these common Kubernetes-based development processes.
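For a push-triggered image build, a Build resource would look roughly like the sketch below. The repository URL and image destination are placeholders, Kaniko is just one possible builder step, and the webhook itself would be handled by whatever component creates this resource in response to the GitHub push:

# A minimal Knative Build that clones a Git repo and builds/pushes an image
# with Kaniko (all names below are illustrative).
kubectl apply -f - <<EOF
apiVersion: build.knative.dev/v1alpha1
kind: Build
metadata:
  name: build-on-push
spec:
  source:
    git:
      url: https://github.com/example/app.git
      revision: master
  steps:
  - name: build-and-push
    image: gcr.io/kaniko-project/executor
    args:
    - --dockerfile=/workspace/Dockerfile
    - --destination=registry.example.com/app:latest
EOF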