IBM Cloud: How to control the number of container images created by Code Engine builds?

I frequently build and deploy my app from source code using the IBM Cloud Code Engine feature. I noticed the container images filling up my container registry.
How can I reduce the number of images in the container registry, or limit how many are kept on hand?

You can set a retention policy in the IBM Cloud Container Registry. Retention policies are configured per namespace, so if you use a dedicated namespace for your Code Engine project, you can control how many container images are kept.
When logged in to IBM Cloud and the container registry, run this CLI command to limit the number of container images to 5 for the namespace my-ce-apps:
ibmcloud cr retention-policy-set --images 5 my-ce-apps
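If you want to check what is already configured, or clean up existing images once without setting a permanent policy, the CLI has companion commands for that as well. The sketch below assumes the same my-ce-apps namespace; double-check the flags with ibmcloud cr help for your CLI version:

# list the retention policies configured for your namespaces
ibmcloud cr retention-policy-list

# dry run: show which images would be deleted if only 5 per repository were kept
ibmcloud cr retention-run --dry-run --images 5 my-ce-apps

# one-off clean-up that actually deletes the older images
ibmcloud cr retention-run --images 5 my-ce-apps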

Related

Auto-joining newly created VMs/servers as Kubernetes nodes to the master

Hi, I am working on Google Cloud Platform, but I am not using GKE; rather, I am creating the k8s cluster manually. Here is my setup:
11 servers in total
5 of these are static servers and don't need any scaling
the remaining 5 servers need to scale up if CPU or RAM consumption goes beyond a certain limit, i.e. I will spin up only 3 servers initially, and if the CPU/RAM threshold is crossed I will spin up 2 more using the Google Cloud load balancer
1 k8s master server
To implement this, I have already created a custom image on which I have installed Docker and Kubernetes. Using this image, I have created an instance template and then an instance group.
Now the problem is this:
Although I have created the image with everything installed, when I create an instance group in which 3 VMs are created, these VMs do not automatically connect to my k8s master. Is there any way to automatically connect a newly created VM as a node to the k8s master, so that I do not have to run the join command manually on each server?
Thanks for the help in advance.
so that I do not have to run the join command manually on each server
I am assuming that you can successfully run the join command to join the newly created VMs to the Kubernetes master manually.
If that is the case, you can use the startup-script feature of Google Compute Engine.
Here is the documentation:
https://cloud.google.com/compute/docs/instances/startup-scripts
https://cloud.google.com/compute/docs/instances/startup-scripts/linux#passing-directly
In short, a startup script is a Google Compute Engine feature that automatically runs your custom script during instance start-up.
And, the script could look something like this:
#! /bin/bash
kubeadm join .......
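For example, you could save that script to a file and attach it to the instance template behind your managed instance group. The template name, image name, and machine type below are just placeholders for your own values:

# join-node.sh contains the kubeadm join command shown above
gcloud compute instance-templates create k8s-worker-template \
    --image my-k8s-custom-image \
    --machine-type n1-standard-2 \
    --metadata-from-file startup-script=join-node.sh

Keep in mind that kubeadm join tokens expire by default, so the script may also need to fetch or create a fresh token rather than hard-coding one.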

Where/How to configure Cassandra.yaml when deployed by Google Kubernetes Engine

I can't find the answer to a pretty easy question: where can I configure Cassandra (normally via cassandra.yaml) when it's deployed on a cluster with Kubernetes using the Google Kubernetes Engine?
So I'm completely new to distributed databases, Kubernetes, etc., and I'm setting up a Cassandra cluster (4 VMs, 1 pod each) using GKE for a university course right now.
I used the official example on how to deploy Cassandra on Kubernetes that can be found on the Kubernetes homepage (https://kubernetes.io/docs/tutorials/stateful-application/cassandra/) with a StatefulSet, persistent volume claims, a central load balancer, etc. Everything seems to work fine and I can connect to the DB via my Java application (using the DataStax Java/Cassandra driver) and via Google Cloud Shell + CQLSH on one of the pods directly. I created a keyspace and some tables and started filling them with data (~100 million entries planned), but as soon as the DB reaches a certain size, expensive queries result in a timeout exception (via DataStax and via CQL), just as expected. Speed isn't necessary for these queries right now, it's just for testing.
Normally I would start with trying to increase the timeouts in the cassandra.yaml, but I'm unable to locate it on the VMs and have no clue where to configure Cassandra at all. Can someone tell me if these configuration files even exist on the VMs when deploying with GKE and where to find them? Or do I have to configure those Cassandra details via Kubectl/CQL/StatefulSet or somewhere else?
I think the fastest way to configure Cassandra on Kubernetes Engine is to use the Cassandra deployment from the marketplace; there you can configure your cluster, and you can follow the guide linked there to configure it correctly.
======
The timeout settings appear to be configuration that has to be modified inside the container (in the Cassandra configuration itself).
You can use the command kubectl exec -it POD_NAME -- bash to open a shell in the Cassandra container; from there you can locate the configuration file and change the settings you need.
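As a rough illustration, raising the read request timeout could look like this (the pod name, config path, and value are assumptions; the file may live elsewhere depending on the image):

# open a shell in the first Cassandra pod of the StatefulSet
kubectl exec -it cassandra-0 -- bash

# inside that shell: raise the read request timeout
# (the path is an assumption, e.g. /etc/cassandra/cassandra.yaml in many images)
sed -i 's/^read_request_timeout_in_ms:.*/read_request_timeout_in_ms: 20000/' /etc/cassandra/cassandra.yaml

Note that Cassandra only reads cassandra.yaml at start-up, and, as the next paragraph explains, an in-place edit does not survive a container recreation, which is why the automation options below are the better long-term fix.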
Once you have the configuration you need, you will want to automate it in order to avoid manual intervention every time one of your pods gets recreated (the configuration will not survive a container recreation). The following options are only suggestions:
Create your own Cassandra image from your own Dockerfile, changing the configuration values there, because the image you are using right now is a public image and the container will always start with the configuration baked into the pulled image.
Edit the YAML of the StatefulSet where Cassandra is running and add an initContainer, which lets you change the configuration of your Cassandra container; this applies the change automatically with a script every time your pods start (see the sketch after this list).
Choose the option that fits you better.
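A rough sketch of the initContainer option, expressed as a kubectl patch (the StatefulSet name, image tag, config path, and timeout value are assumptions based on the official Cassandra example, so adapt them to your own manifest): an emptyDir volume is shared between an init container that copies and edits cassandra.yaml and the Cassandra container, which mounts the edited copy over its config directory.

kubectl patch statefulset cassandra --patch "$(cat <<'EOF'
spec:
  template:
    spec:
      volumes:
      - name: cassandra-config
        emptyDir: {}
      initContainers:
      - name: tune-config
        image: gcr.io/google-samples/cassandra:v13
        command:
        - sh
        - -c
        - "cp -r /etc/cassandra/. /config/ && sed -i 's/^read_request_timeout_in_ms:.*/read_request_timeout_in_ms: 20000/' /config/cassandra.yaml"
        volumeMounts:
        - name: cassandra-config
          mountPath: /config
      containers:
      - name: cassandra
        volumeMounts:
        - name: cassandra-config
          mountPath: /etc/cassandra
EOF
)"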

ImagePullBackOff from GCR.io registry on Kubernetes Google Cloud

Kubernetes is unable to launch container using image from private gcr.io container registry.
The error says "ImagePullBackOff".
Both Kubernetes and Container registry are in the same Google Cloud project.
The issue was with permissions.
It turns out that the service account used to launch the Kubernetes nodes needs read permission on Google Cloud Storage (this is important because the registry itself uses buckets to store the images).
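For example, assuming the default Compute Engine service account is the one the nodes run as (the cluster name, zone, project ID, and service-account e-mail below are placeholders), granting read access could look like this:

# find the service account the cluster's nodes run as
gcloud container clusters describe my-cluster --zone us-central1-a \
    --format='value(nodeConfig.serviceAccount)'

# grant it read access to the buckets that back gcr.io
gcloud projects add-iam-policy-binding my-project \
    --member serviceAccount:123456789-compute@developer.gserviceaccount.com \
    --role roles/storage.objectViewer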

Convert monolith application to microservice implementation in Kubernetes

I want to deploy my application in the cloud using a Kubernetes-based deployment. It consists of 3 layers: Kafka, Ignite (as the DB and for processing), and Python (the ML engine).
From the Kafka layer we get a data stream as input, which is then passed to Ignite for processing (feature engineering). After processing, the data is passed to the Python server for further ML predictions. How can I break this monolithic application into microservices on Kubernetes?
Also, can using Istio provide any advantage?
You can use the bitnami/kafka image from Bitnami on Docker Hub if you want a pre-built image.
Push the image to your container registry with the gcloud command:
gcloud docker -- push [your image container registry path]
Deploy the images using the UI or the gcloud command.
Expose the ports (2181, 9092-9099), or whichever ports are exposed in the pulled image, after the deployment on Kubernetes (see the sketch below).
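A quick sketch, assuming the deployment ended up named kafka (the service name and ports are illustrative):

# create a ClusterIP service for the Kafka broker port
kubectl expose deployment kafka --name kafka-svc --port 9092 --target-port 9092

# verify which ports the service exposes
kubectl get svc kafka-svc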
Here is the link to the Ignite image on Google Compute; you just have to deploy it on Kubernetes Engine and expose the appropriate ports.
For Python, you just have to build your Python app using a Dockerfile, as ignacio suggested.
It is possible, and in fact those tools are easy to deploy in Kubernetes. First, you need to gain some expertise in Kubernetes basics, especially StatefulSets and persistent volumes, since Kafka and Ignite are stateful components.
To deploy a Kafka cluster in Kubernetes, follow the instructions from this repository: https://github.com/Yolean/kubernetes-kafka
There are other alternatives, but this is the only one I've tested in production environments.
I have no experience with Ignite, but these docs provide a step-by-step guide. Maybe someone else can share other resources.
As for Python, just dockerize your ML model like any other Python app. In the official Docker image for Python you'll find a basic Dockerfile to do that. Once you have pushed your Docker image to a registry, just create a YAML file describing the deployment and apply it to Kubernetes (a sketch follows below).
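A minimal sketch of that last step, assuming the image was pushed as gcr.io/my-project/ml-engine:v1 and the app listens on port 5000 (both are placeholders):

# apply a small deployment manifest inline for brevity
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ml-engine
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ml-engine
  template:
    metadata:
      labels:
        app: ml-engine
    spec:
      containers:
      - name: ml-engine
        image: gcr.io/my-project/ml-engine:v1   # placeholder image path
        ports:
        - containerPort: 5000                   # port the Python app listens on
EOF

# expose it to the rest of the cluster
kubectl expose deployment ml-engine --port 80 --target-port 5000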
As an alternative for the last step, you can use Draft to dockerize and deploy Python code.
Good luck!

What image does Google Container Engine (GKE) use?

In the docs for GKE it says all nodes (currently) have the same VM instance. Does this refer to the underlying machine type or the OS image (or both)?
I was assuming it was just the machine type (micro, small, etc.) and that Google layered their own image with infrastructure (e.g. Kubernetes) on top of that.
If this is the case what image does Google use on GKE? I was thinking it may be CoreOS, since that would seem to be a good match, but I am not sure.
I'd like to set up staging machines with the same image as production... but perhaps we don't need to know this or it doesn't matter what is used.
All nodes in the cluster currently have the same machine type and OS image. By default, the machine type is n1-standard-1 and the image is a recent container-vm image.
If you use gcloud to create your cluster, both settings can be overridden on the command line using the --machine-type and --source-image options respectively (documentation).
If you are using the cloud console to create your cluster, you can specify the machine type but not currently the source image.
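For example, creating a cluster with a specific machine type via gcloud could look like the following (cluster name and zone are placeholders, and the available flags have changed over time, so check gcloud container clusters create --help for your SDK version):

# create a cluster whose nodes all use the given machine type
gcloud container clusters create staging-cluster \
    --zone us-central1-a \
    --num-nodes 3 \
    --machine-type n1-standard-1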
Be aware that if you specify a different source image, you may not end up with a functional cluster because the kubernetes software that is installed on top of the source image requires specific underlying packages to be present in the system software. If you want consistency between staging/prod, you can use
gcloud container clusters describe <staging-cluster-name>
to see what image is being used in your staging cluster, and ensure that you end up with the same image for your production cluster.