How to deploy a workload to GCP Kubernetes programmatically? - kubernetes

I have achieved a vast amount of automation in terms of creating projects, creating Kubernetes Engine clusters, and other IaaS elements by using the GCP APIs from the Python GCP client.
But I am not sure how to deploy Docker container workloads to the provisioned cluster. The GCP documentation points to kubectl apply -f config.yaml, but this entails using command-line tools, first switching to the project, and so on.
This is exactly what I am trying to get away from. Is there a Google API that lets us accomplish this?
And no, I do not want third party deployment automation tools for various reasons.

You can use a Kubernetes client library to deploy workloads programmatically.
Here are some clients for Kubernetes:
Go client: client-go
Java client: kubernetes-client/java
Python client: kubernetes-client/python
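For example, here is a minimal sketch using the Python client. It assumes cluster credentials are already available in a kubeconfig; to stay fully programmatic, you can instead populate a client.Configuration object from the endpoint and CA certificate that the GKE clusters API returns, plus an OAuth2 token. The deployment name and image are illustrative.

```python
from kubernetes import client, config

# Assumes credentials are already in a kubeconfig; alternatively,
# build a client.Configuration() from the cluster endpoint and CA
# certificate returned by the GKE API.
config.load_kube_config()

# Illustrative single-replica nginx Deployment.
deployment = client.V1Deployment(
    api_version="apps/v1",
    kind="Deployment",
    metadata=client.V1ObjectMeta(name="nginx-demo"),
    spec=client.V1DeploymentSpec(
        replicas=1,
        selector=client.V1LabelSelector(match_labels={"app": "nginx-demo"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "nginx-demo"}),
            spec=client.V1PodSpec(
                containers=[client.V1Container(name="nginx", image="nginx:1.25")]
            ),
        ),
    ),
)

client.AppsV1Api().create_namespaced_deployment(
    namespace="default", body=deployment
)
```

This is the programmatic equivalent of kubectl apply -f config.yaml for a Deployment manifest, with no command-line tooling involved.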

Related

How does WSO2 deploy on Kubernetes without using Google Cloud?

I want to deploy WSO2 API Manager with Kubernetes.
Should I use Google Cloud?
Is there another way?
The Helm charts [1] for APIM can be deployed on GKE, AKS, EKS, etc. You can even deploy the all-in-one simple deployment pattern [2] in a local Kubernetes cluster like Minikube.
You might have to use a cloud provider for more advanced patterns since they require more resources to run.
All these charts are there as samples to give an idea of the deployment patterns. It is not recommended to deploy them as-is in real production scenarios, since resource requirements and infrastructure vary according to the use case.
[1] https://github.com/wso2/kubernetes-apim
[2] https://github.com/wso2/kubernetes-apim/tree/master/simple/am-single
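As a sketch of the flow, driving Helm from Python with subprocess (the release name is illustrative, and the chart path follows the simple/am-single layout from link [2]; check the repository's README for the exact values):

```python
import subprocess

# Illustrative Helm 3 invocation: release name "wso2am", chart taken
# from a local checkout of the wso2/kubernetes-apim repository.
subprocess.run(
    ["helm", "install", "wso2am", "./simple/am-single"],
    check=True,
)
```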

Good solutions to automate infrastructure deployment locally?

I have recently been reading more about infrastructure as a service (IaaS) and platform as a service (PaaS) and had some questions. I see that when we opt for a PaaS solution, it is generally very easy to create the infrastructure, as the cloud provider handles that for us, and we can even automate the deployment using an infrastructure-as-code solution like Terraform.
But if we use an IaaS solution, or even a local on-premises cluster, it seems we lose a lot of the automation that PaaS allows. So I was curious: are there any good tools out there for automating infrastructure deployment on a local cluster that is not in the cloud?
The best thing I could think of was to run a local Kubernetes cluster and then Dockerize each of the infrastructure components, but this seems difficult as each node in the cluster will need its own specific configuration files.
From my basic Googling, it seems like there is not a good solution to this.
Edit:
I was not clear enough with my original intentions. I have two problems I am trying to solve.
How do I automate infrastructure deployment locally? For example, suppose I wanted to create a Hadoop HDFS cluster. I would need to configure one node to be the namenode with an accessible IP, and the other nodes to be datanodes that are aware of the namenode's IP. At the moment, I have to do this manually by logging into each node, checking its IP, and then configuring each one. How would I automate this? If I were to use a Kubernetes approach, how do I specify that one of the running pods needs to be the namenode and the others are datanodes? How do I find the pods' IPs and have them be aware of the namenode IP?
The next problem I have is very similar to the first, but a slight modification. How would I deploy specific configuration files to each node? For instance, in Kafka, the configuration file for one node requires the IPs of the ZooKeeper nodes, as well as the IP it should listen on. This may differ for every node in the cluster. Is there a good way to make these config files pod-specific, so that I do not have to do bash text processing to insert the correct contents into each pod's config files?
You can use Terraform for all of your on-premises infrastructure automation, and Ansible for configuration management.
Let's say you have three HPE servers. Install K8s or VMware on them using Ansible; then you can treat them as three availability zones in one region, the same as AWS. From there you can start deploying Dockerized apps or Helm charts using Terraform.
Summary:
Ansible for installing and configuring K8s.
Terraform for provisioning K8s.
Helm for installing apps on K8s.
After this you will have a basic automated on-premises infrastructure.
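On the pod-specific configuration problem described above, one Kubernetes-native pattern (not mentioned in the answer, offered here as a sketch under stated assumptions) is to run the nodes as a StatefulSet, which gives each pod a stable ordinal and DNS name, and to generate one ConfigMap per ordinal. A minimal sketch with the Python Kubernetes client, using illustrative Kafka/ZooKeeper names and ports:

```python
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

# Illustrative setup: three Kafka brokers behind a headless service
# named "kafka"; StatefulSet pods get stable DNS names like
# kafka-0.kafka.default.svc.cluster.local, so IPs never need to be
# looked up by hand.
for ordinal in range(3):
    pod_dns = f"kafka-{ordinal}.kafka.default.svc.cluster.local"
    cm = client.V1ConfigMap(
        metadata=client.V1ObjectMeta(name=f"kafka-config-{ordinal}"),
        data={
            "server.properties": (
                f"broker.id={ordinal}\n"
                f"listeners=PLAINTEXT://{pod_dns}:9092\n"
                "zookeeper.connect=zk-0.zk:2181,zk-1.zk:2181,zk-2.zk:2181\n"
            )
        },
    )
    core.create_namespaced_config_map(namespace="default", body=cm)
```

Each pod then mounts the ConfigMap matching its ordinal, so no bash text processing is needed inside the pods; the same stable-DNS idea addresses the namenode/datanode discovery question for HDFS.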

Anthos showing wrong status of Deployment on on-premise external cluster

I wanted to give a try to GCP's Anthos On-Premise GKE offering.
For the sake of my demo, I set up a Kubernetes cluster in GCP itself using Google Compute Engine, following the instructions from https://kubernetes.io/docs/setup/production-environment/turnkey/gce/
After this I followed the Anthos documentation to register my cluster with Anthos. I was able to register the cluster and log into it using both token-based and basic-authentication mechanisms.
Now when I try to deploy anything from the GCP console, I get the following error.
But the deployment succeeds; I can see the deployment and its associated pods in the Running state on my cluster.
Also, when I try to deploy using the Marketplace, I get the following error.
I wish to know: is this a bug in Anthos, or is my cluster missing some configuration?
You're not running Anthos GKE On-Prem, you're running open-source Kubernetes on Google Cloud. Things designed for Anthos - the marketplace and connecting clusters to Cloud Console - are not supposed to work in your setup. The fact that they mostly work despite that is an accident (and a testament to the portability and compatibility of Kubernetes).
To get Cloud Console integration and use the marketplace, you need to use either Anthos GKE On-Prem running on VMware, or regular GKE.

Deploy Kubernetes on Self-host Production environment

I am trying to install Kubernetes in a self-hosted production environment running on Ubuntu 16.04. I am not able to find any helpful guide to set up a production-grade Kubernetes master and connect worker nodes to it.
Any help is much appreciated.
You can use Kubespray to self-host a production environment.
https://github.com/kubernetes-incubator/kubespray
It depends on what you mean by "self-host". Most people take it to mean deploying Kubernetes in their own environment.
If you want to compare different approaches to deploying K8s in a custom environment, refer to this article, which covers a bunch of suitable options.
If you are interested in how to set up an HA Kubernetes cluster using kubeadm, refer to this article.
However, in Kubernetes there is a different definition of "self-hosted": it means running Kubernetes itself as a workload on Kubernetes. If you are interested in a truly self-hosted approach (in a custom environment), refer to this article.
Hope this helps.
You can use Typhoon to provision an HA Kubernetes cluster.
Here is a sample configuration which I used to bring up my own home cluster.
A few advantages of Typhoon are that you can choose your preferred cloud provider for provisioning your infrastructure (which is done using Terraform), and the fact that it gives you upstream K8s is a big plus too.
Internally, it uses Bootkube to bring up a temporary control plane consisting of:
api-server
controller-manager
scheduler
Once the temporary control plane is up, the permanent control-plane objects are injected through the API server, yielding the final K8s cluster.
Have a look at this KubeCon talk by CoreOS, which explains how this works.

Should we run a Consul container in every Pod?

We run our stack on the Google Cloud Platform (hosted Kubernetes, GKE) and have a Consul cluster running outside of K8s (regular GCE instances).
Several services running in K8s use Consul, mostly for its CP K/V store and advanced locking, not so much for service discovery so far.
We recently ran into some issues with using the Consul service discovery from within K8s. Right now our apps talk directly to the Consul Servers to register and unregister services they provide.
This is not the recommended best practice: usually, Consul clients (i.e., apps using Consul) should talk to a local Consul agent. In our setup there are no local Consul agents.
My question: should we run local Consul agents as sidecar containers in each pod?
IMHO this would be a huge waste of resources, but it would better match the Consul best practices.
I tried searching on Google, but all the posts about Consul and Kubernetes talk about running Consul in K8s, which is not what I want to do.
As the official Consul Helm chart and the documentation suggest, the standard approach is to run a DaemonSet of Consul clients and then use the Connect sidecar injector to add sidecars to your pods, simply by providing an annotation on the pod spec. This should handle all of the boilerplate and will be in line with best practices.
Consul: Connect Sidecar; https://www.consul.io/docs/platform/k8s/connect.html
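For reference, injection is opt-in per pod via the consul.hashicorp.com/connect-inject annotation described in the page linked above. A minimal sketch of setting it with the Python Kubernetes client (app name and image are illustrative):

```python
from kubernetes import client

# Pod template carrying the opt-in annotation; the Connect injector
# watches for it and adds the proxy sidecar containers itself.
template = client.V1PodTemplateSpec(
    metadata=client.V1ObjectMeta(
        labels={"app": "my-app"},
        annotations={"consul.hashicorp.com/connect-inject": "true"},
    ),
    spec=client.V1PodSpec(
        containers=[client.V1Container(name="my-app", image="my-app:1.0")]
    ),
)
```

The agents themselves run once per node via the DaemonSet, so the per-pod overhead is limited to the injected proxy rather than a full Consul agent in every pod.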