Kubernetes ad-hoc command scheduler

I am a beginner in Kubernetes and have just started playing around with it. I have a use case: run commands that I take in through a UI (which can be a server running inside or outside the cluster) in the Kubernetes cluster. Let's say the commands are Python scripts like HelloWorld.py. When I enter a command, the server should launch a container that runs the command and exits. How do I go about this in Kubernetes? What should the scheduler look like?

You can try the training classes on katacoda.com:
https://learn.openshift.com/
https://www.katacoda.com/courses/kubernetes
They are interactive and hands-on, which makes it interesting and easy to make sense of OpenShift and Kubernetes.
I hope they help you. ;)
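For the use case itself, the usual building block is a Kubernetes Job: your server (inside or outside the cluster) creates a Job through the API, Kubernetes schedules a pod that runs the command in a fresh container, and the pod exits when the command finishes. A minimal sketch, assuming a stock python:3 image that already contains HelloWorld.py (both placeholders):
apiVersion: batch/v1
kind: Job
metadata:
  name: helloworld            # placeholder name
spec:
  backoffLimit: 0             # do not retry on failure
  template:
    spec:
      restartPolicy: Never    # run once and exit
      containers:
      - name: runner
        image: python:3       # placeholder; must contain HelloWorld.py
        command: ["python", "HelloWorld.py"]
The equivalent ad-hoc command is kubectl create job helloworld --image=python:3 -- python HelloWorld.py.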

Related

How to start pods on demand in Kubernetes?

I need to create pods on demand in order to run a program. It will run according to need: there could be nothing running for 5 hours, then 10 requests to process, and I might need to limit it so that only 5 run simultaneously because of resource limitations.
I am not sure how to build such a thing in Kubernetes.
Also worth noting: I would like to create a new Docker container for each run and exit the container when it ends.
There are many options and you'll need to try them out. The core tool is the HorizontalPodAutoscaler. Systems like KEDA build on top of that to manage metrics more easily. There are also serverless tools like Knative or Kubeless, and workflow tools like Tekton, Dagster, or Argo.
It really depends on your specifics.
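As one concrete illustration of the primitive all of those tools drive, a plain Kubernetes Job can already express "process 10 requests, at most 5 at a time"; the name and image below are placeholders:
apiVersion: batch/v1
kind: Job
metadata:
  name: on-demand-run         # placeholder name
spec:
  completions: 10             # total pods that must run to completion
  parallelism: 5              # never more than 5 pods at once
  template:
    spec:
      restartPolicy: Never    # fresh container per run; exits when done
      containers:
      - name: worker
        image: my-program:latest   # placeholder image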

How to run MirrorMaker 2.0 in production?

From the documentation, I see that MirrorMaker 2.0 is started like this on the command line:
./bin/connect-mirror-maker.sh mm2.properties
In my case I can go to an EC2 instance and enter this command.
But what is the correct practice if this needs to run for many days? It is possible that the EC2 instance gets terminated, etc.
Therefore, I am trying to find out the best practices for running MirrorMaker 2.0 for a long period while making sure it stays up and running without any manual intervention.
You have many options; these include:
Adding it as a service to systemd. Then you can specify that it should be started automatically and restarted on failure. systemd is very common now, but if you're not running systemd, there are many other process managers; see https://superuser.com/a/687188/80826 (a unit sketch follows this list).
Running it in a Docker container, where you can specify a restart policy.
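A minimal sketch of the systemd route, assuming Kafka is unpacked under /opt/kafka and runs as a kafka user (paths and names are placeholders):
# /etc/systemd/system/mirror-maker2.service (placeholder unit name)
[Unit]
Description=Kafka MirrorMaker 2.0
After=network-online.target

[Service]
User=kafka
ExecStart=/opt/kafka/bin/connect-mirror-maker.sh /opt/kafka/config/mm2.properties
Restart=on-failure
RestartSec=10

[Install]
WantedBy=multi-user.target
Enable it with systemctl enable --now mirror-maker2.service, and systemd will start it at boot and restart it whenever it fails.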

NixOS within NixOS?

I'm starting to play around with NixOS deployments. To that end, I have a repo with some packages defined, and a configuration.nix for the server.
It seems like I should then be able to test this configuration locally (I'm also running NixOS). I imagine it's a bad idea to change my global configuration.nix to point to the deployment server's configuration.nix (who knows what that will break); but is there a safe and convenient way to "try out" the server locally - i.e. build it and either boot into it or, better, start it as a separate process?
I can see Docker being one way, of course; maybe there's nothing else. But I have this vague sense that Nix could be capable of doing it alone.
There is a fairly standard way of doing this that is built into the default tooling: nixos-rebuild build-vm. This takes your current configuration file (by default /etc/nixos/configuration.nix), builds it, and creates a script that boots the configuration in a virtual machine.
Once the build has finished, it leaves a symlink named result in the current directory. You can then boot the virtual machine by running ./result/bin/run-$HOSTNAME-vm and play around with it.
TL;DR:
nixos-rebuild build-vm
./result/bin/run-$HOSTNAME-vm
nixos-rebuild build-vm is the easiest way to do this; however, you could also import the configuration into a NixOS container (see Chapter 47, Container Management, in the NixOS manual and the nixos-container command).
This would be done with something like:
containers.mydeploy = {
  privateNetwork = true;
  config = import ../mydeploy-configuration.nix;
};
Note that you would not want to specify the network configuration in mydeploy-configuration.nix if it's static as that could cause conflicts with the network subnet created for the container.
As you may already know, system configurations can coexist without any problems in the Nix store. The problem here is running more than one system at once. For this, you need isolation or virtualization tools like Docker, VirtualBox, etc.
NixOS Containers
NixOS provides an efficient implementation of the container concept, backed by systemd-nspawn instead of an image-based container runtime.
These can be specified declaratively in configuration.nix or imperatively with the nixos-container command if you need more flexibility.
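For the imperative route, a minimal sketch (the container name and configuration path are placeholders):
sudo nixos-container create mydeploy --config-file mydeploy-configuration.nix
sudo nixos-container start mydeploy
sudo nixos-container root-login mydeploy   # get a root shell inside the container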
Docker
Docker was not designed to run an entire operating system inside a container, so it may not be the best fit for testing NixOS-based deployments, which expect and provide systemd and some services inside their units of deployment. While you won't get a good NixOS experience with Docker, Nix and Docker are a good fit.
UPDATE: Both 'raw' Nix packages and NixOS can run in Docker. For example, Arion supports images built from plain Nix, from NixOS modules, and from 'normal' Docker images.
NixOps
To deploy NixOS inside NixOS, it is best to use a technology that is designed to run a full Linux system inside it.
It helps to have a program that manages the integration for you. In the Nix ecosystem, NixOps is the first candidate for this. You can use NixOps with its multiple backends, such as QEMU/KVM, VirtualBox, the (currently experimental) NixOS container backend, or you can use the none backend to deploy to machines that you have created using another tool.
Here's a complete example of using NixOps with QEMU/KVM.
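As a rough sketch of the shape such a deployment takes with the libvirtd (QEMU/KVM) backend, all names being placeholders:
{
  network.description = "local test of mydeploy";

  machine = { config, pkgs, ... }: {
    deployment.targetEnv = "libvirtd";      # QEMU/KVM via libvirt
    deployment.libvirtd.memorySize = 2048;  # MB; hypothetical sizing
    imports = [ ./mydeploy-configuration.nix ];
  };
}
You would then run nixops create ./network.nix -d mydeploy followed by nixops deploy -d mydeploy.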
Tests
If your goal is to run automated integration tests, you can make use of the NixOS VM testing framework. This uses Linux KVM virtualization (exposing /dev/kvm in the sandbox) to run integration tests on networks of virtual machines, and it runs them as a derivation. It is quite efficient because it does not have to create virtual machine images; it mounts the Nix store in the VM instead. These tests are "built" like any other derivation, making them easy to run.
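A minimal, hypothetical sketch of such a test (the imported configuration path is a placeholder):
import <nixpkgs/nixos/tests/make-test-python.nix> ({ pkgs, ... }: {
  name = "mydeploy-smoke";
  nodes.machine = { config, pkgs, ... }: {
    imports = [ ./mydeploy-configuration.nix ];
  };
  testScript = ''
    machine.wait_for_unit("multi-user.target")  # boot the VM and wait for it to settle
  '';
})
Saved as test.nix, it builds and runs with nix-build test.nix like any other derivation.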
Nix store optimization
A unique feature of Nix is that you can often reuse the host Nix store, so being able to mount a host filesystem in the container/VM is a nice feature to have in your solution. If you are creating your own solution, depending on your needs, you may want to postpone this optimization, because it becomes a bit more involved if you want the container/VM to be able to modify the store. NixOS tests solve this with an overlay file system in the VM. Another approach may be to bind mount the Nix store and forward the Nix daemon socket.

Running E2E conformance Tests within a custom cluster as a job

I am thinking of running an e2e test as a Job within my Kubernetes cluster. So far I have the Makefile written and the Docker image created and pushed to AWS. There are some tests currently failing which I am trying to debug, but apart from that, is there anything else I need to be aware of? Any tips, hints, or resources will be greatly appreciated.
Thank you.
I've run Sonobuoy on a cluster alongside some simple load testing.
I'm not sure exactly which tests Sonobuoy ran at the time, but it was obvious that the HTTP server I was load testing had degraded performance.
So, if you want E2E tests running alongside production workloads, take into account that they might affect users and services.
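For reference, a typical conformance run with Sonobuoy (which schedules its own pods as in-cluster workloads) looks roughly like this:
sonobuoy run --mode=certified-conformance
sonobuoy status                  # poll until the run completes
results=$(sonobuoy retrieve)     # download the results tarball
sonobuoy results $results        # summarize pass/fail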

Writing a startup script for Google Container Engine

I found that startup scripts can be added to Google Compute Engine instances using either the console or the CLI (gcloud). I want to add startup scripts to Google Container Engine.
The goal is to be notified when Google Container Engine has changed its state to Running. I thought one efficient way would be to use startup scripts in Container Engine, as these scripts would only be executed when the container's status changes to running.
Any idea how to add startup scripts to Container Engine, or any other way of being notified when the container's status changes to running?
First of all, your question is fairly complicated. The concept of startup scripts does not belong to the container world. As far as I know, you can't add startup scripts in Google Container Engine. This is because Container Engine instances are immutable (i.e. you can't, or are not supposed to, modify the operating system; you should just run containers).
If you're trying to run scripts when a container starts or stops, you need to forget about the startup-script concept from the Compute Engine world. You can use container lifecycle hooks in Kubernetes (the orchestrator running in Container Engine).
Here's documentation and tutorial about it:
https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/
https://kubernetes.io/docs/tasks/configure-pod-container/attach-handler-lifecycle-event/
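A minimal sketch of a postStart hook along the lines of the linked tutorial (the image and command are placeholders):
apiVersion: v1
kind: Pod
metadata:
  name: lifecycle-demo
spec:
  containers:
  - name: app
    image: nginx                # placeholder image
    lifecycle:
      postStart:
        exec:
          command: ["/bin/sh", "-c", "echo started > /tmp/started"]
Note that postStart runs when the container starts, which maps closely to "notify me when the status changes to running"; the handler could just as well call a webhook instead of writing a file.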
You can approximate the behavior of startup scripts using a DaemonSet with a simple pod that runs in privileged mode. For example code, see https://github.com/kubernetes/contrib/tree/master/startup-script.
Project metadata works for this; here's a Terraform example:
resource "google_compute_project_metadata_item" "main" {
  project = "abcdefg" # this is optional
  key     = "startup-script"
  value   = "#! /bin/sh\necho hello > /tmp/world"
}