Deploy an Elastic Kubernetes Cluster with OpenStack - kubernetes

I am working on a cloud provider named Wekeo, which offers only static provisioning of instances. I have access to the Morpheus API and the underlying OpenStack API.
My goal is to deploy an elastic cluster (something like EKS, for instance), but I'm getting lost among the many concepts and tools I've found, so any guidance would be appreciated!

Related

Can we configure AWS Secrets Manager to integrate with an on-premises k8s cluster

I set up an EKS cluster and integrated AWS Secrets Manager with it following the steps mentioned in https://github.com/aws/secrets-store-csi-driver-provider-aws and it worked as expected.
Now we have a requirement to integrate AWS Secrets Manager with an on-premises k8s cluster, and I am unable to follow the same steps as they seem to be explicitly for AWS EKS-based clusters.
I googled around a bit and found you can call Secrets Manager programmatically using one of the ways in https://docs.aws.amazon.com/secretsmanager/latest/userguide/asm_access.html, but this approach won't work for us.
Is there a k8s way to connect directly to AWS Secrets Manager without setting up the AWS CLI and the OIDC cluster ID on the on-premises cluster?
Any help would be highly appreciated.
You can set up external OIDC providers with AWS and also set up K8s with OIDC, but that is a lot of work.
AWS recently announced IAM Roles Anywhere, which will let you use host-based certificates to authenticate, but you will still have to call the Secrets Manager APIs.
If you are willing to have the secrets synced into Kubernetes Secrets (which are stored base64-encoded in etcd on the cluster), you can look at the open-source External Secrets solution.
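Whichever route you choose, the retrieval itself is an ordinary API call. As a minimal sketch (not the provider-specific CSI driver flow), here is how a workload could read a secret with boto3; the region and secret name are placeholders, and it assumes the secret was stored as a JSON key/value pair and that credentials are supplied by whatever mechanism you settle on (an IAM Roles Anywhere session, keys mounted as a Kubernetes Secret, etc.):

    import json
    import boto3

    # Placeholder region and secret name; adjust to your environment.
    sm = boto3.client("secretsmanager", region_name="eu-west-1")

    # Credentials are resolved by the normal boto3 chain (environment
    # variables, shared credentials file, ...), so the pod only needs
    # them injected, not the AWS CLI.
    response = sm.get_secret_value(SecretId="my-app/db-credentials")

    # Assumes the secret was stored as JSON; a plain string can be used as-is.
    secret = json.loads(response["SecretString"])
    print(secret)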

GKE - Hybrid Kubernetes cluster

I've been reading the Google Cloud documentation about hybrid GKE clusters with Connect, or completely on-prem with GKE On-Prem and VMware.
However, I see that with GKE Connect you can manage an on-prem Kubernetes cluster from the Google Cloud dashboard.
But what I am trying to find is how to maintain a hybrid cluster with GKE, mixing on-prem and cloud nodes. Graphical example:
For the above solution, the master node is managed by Google Cloud, but the ideal solution would be to have multiple master nodes (high availability) in the cloud and worker nodes on-prem. Graphical example:
Is it possible to apply some or both of the proposed solutions on Google Cloud with GKE?
If you want to maintain hybrid clusters, mixing on prem and cloud nodes, you need to use Anthos.
Anthos is a modern application management platform that provides a consistent development and operations experience for cloud and on-premises environments.
The primary computing environment for Anthos uses Anthos clusters, which extend GKE for use on Google Cloud, on-premises, or multicloud to manage Kubernetes installations in the environments where you intend to deploy your applications. These offerings bundle upstream Kubernetes releases and provide management capabilities for creating, scaling, and upgrading conformant Kubernetes clusters. With Kubernetes installed and running, you have access to a common orchestration layer that manages application deployment, configuration, upgrade, and scaling.
If you want to know more about Anthos in GCP please follow this link.

Anthos showing wrong status of Deployment on on-premise external cluster

I wanted to give a try to GCP's Anthos On-Premise GKE offering.
For the sake of my demo, I set up a Kubernetes cluster in GCP itself using Google Compute Engine, following the instructions from https://kubernetes.io/docs/setup/production-environment/turnkey/gce/
After this I followed the Anthos documentation to register my cluster with Anthos. I was able to register the cluster and log in to it using both token-based and basic-authentication mechanisms.
Now when I try to deploy anything from the GCP console, I get the following error.
But the deployment succeeds; I can see the deployment and associated pods in the Running state on my cluster.
Also, when I try to deploy using the Marketplace, I get the following error.
I wish to know whether this is a bug in Anthos or my cluster is missing some configuration?
You're not running Anthos GKE On-Prem, you're running open-source Kubernetes on Google Cloud. Things designed for Anthos - the marketplace and connecting clusters to Cloud Console - are not supposed to work in your setup. The fact that they mostly work despite that is an accident (and a testament to the portability and compatibility of Kubernetes).
To get Cloud Console integration and use the Marketplace, you need to use either Anthos GKE On-Prem running on VMware, or regular GKE.

How to integrate Kubernetes Service Type "LoadBalancer" with Specific Cloud Load Balancers

I have a question around K8S Service Type "LoadBalancer".
I am working on developing a new "Kubernetes As a Service" Platform (like GKE etc.) for multi cloud.
The question is: the K8S Service type "LoadBalancer" works with cloud load balancers (which are external to Kubernetes). GKE & other cloud-based solutions provide direct integration with them, so if I create a GKE cluster & create a Service of type "LoadBalancer", it will transparently create a new GCP load balancer & show the load balancer's IP in Kubernetes (as the External IP). The same applies to other cloud providers.
I want to offer a similar feature on my new "Kubernetes As a Service" platform, where users can choose a cloud provider, create a Kubernetes cluster & then apply a K8S Service of type "LoadBalancer", and this will result in a load balancer being created on the (user-selected) cloud platform.
I am able to automate the flow up to Kubernetes cluster creation, but I am clueless when it comes to the "K8S Service & external load balancer" integration.
Can anyone please help me with how to approach integrating the K8S Service type "LoadBalancer" with specific cloud load balancers? Do I need to write a new CRD, or is there similar code available in Git (in case anyone knows a link for reference)?
You have to understand how Kubernetes interacts with the cloud provider. For example, I previously deployed Kubernetes on AWS with kops, and I saw that Kubernetes uses an AWS access key & secret to interact with AWS. If I remember correctly, I saw some CLI options in kube-proxy or the kubelet to support AWS (I have searched the man pages of all the Kubernetes binaries for AWS options, but I couldn't find any to point you to).
For example, look at the kubelet man page: it provided an option called --google-json-key to authenticate against GCP. You will get some idea if you deploy Kubernetes on AWS with kops or kube-aws and dig through the setup and its configuration/options, etc. (The same applies to other cloud providers.)
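In the managed offerings, the piece doing this work is the cloud provider integration (nowadays usually an out-of-tree cloud-controller-manager): its service controller watches Services of type LoadBalancer, provisions a load balancer through the cloud's API, and writes the resulting address back into the Service status. A minimal, hypothetical sketch of that loop with kubernetes-client/python, where provision_load_balancer stands in for your cloud-specific API call:

    from kubernetes import client, config, watch

    def provision_load_balancer(svc):
        # Hypothetical placeholder: call the selected cloud's API here,
        # point the load balancer at the cluster's node ports, and return
        # the external address the cloud allocates.
        return "203.0.113.10"

    config.load_kube_config()  # use config.load_incluster_config() inside a pod
    v1 = client.CoreV1Api()

    # Watch Services cluster-wide and react to type: LoadBalancer.
    for event in watch.Watch().stream(v1.list_service_for_all_namespaces):
        svc = event["object"]
        if svc.spec.type != "LoadBalancer" or svc.status.load_balancer.ingress:
            continue
        ip = provision_load_balancer(svc)
        # Publish the address so it shows up as the Service's EXTERNAL-IP.
        v1.patch_namespaced_service_status(
            svc.metadata.name,
            svc.metadata.namespace,
            {"status": {"loadBalancer": {"ingress": [{"ip": ip}]}}},
        )

A real implementation would run in-cluster with RBAC allowing it to patch services/status, and would also handle Service updates, deletions and cleanup of the cloud resources, but the shape of the integration is the same.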

How to deploy a workload to GCP Kubernetes programmatically?

I have achieved a vast amount of automation in terms of creating projects, creating Kubernetes Engine clusters and other IaaS elements by using the GCP APIs from the Python GCP client.
But I am not very confident about deploying Docker container workloads to the provisioned cluster. The GCP documents point to kubectl apply -f config.yaml, but this entails using command-line tools, first switching to the project, etc.
This is exactly what I am trying to get away from. Is there a google API that lets us accomplish this?
And no, I do not want third party deployment automation tools for various reasons.
You can use a Kubernetes client library to deploy workloads programmatically, as sketched after the list below.
Here are some clients for Kubernetes:
Go client: client-go
Java client: kubernetes-client/java
Python client: kubernetes-client/python
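As a minimal sketch with kubernetes-client/python, the following creates a Deployment without shelling out to kubectl; the image, names and namespace are placeholders, and it assumes a kubeconfig for the target cluster is available locally (inside a pod you could call config.load_incluster_config() instead):

    from kubernetes import client, config

    # Load credentials for the target cluster from the local kubeconfig.
    config.load_kube_config()

    deployment = client.V1Deployment(
        metadata=client.V1ObjectMeta(name="demo"),
        spec=client.V1DeploymentSpec(
            replicas=2,
            selector=client.V1LabelSelector(match_labels={"app": "demo"}),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": "demo"}),
                spec=client.V1PodSpec(
                    containers=[client.V1Container(name="demo", image="nginx:1.25")]
                ),
            ),
        ),
    )

    # Roughly the programmatic equivalent of `kubectl apply -f config.yaml`
    # for a brand-new Deployment.
    client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)

The same client exposes read/patch/delete calls, so updates and rollbacks can be driven from the same Python codebase that already calls the GCP APIs.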