I am trying to deploy my application programmatically from one EKS cluster to all other EKS clusters. To do that, I am getting the kubeconfig details using the EKS DescribeCluster API.
Steps in my code (a rough CLI sketch of these steps is shown after the list):
1. Get the name and region of the EKS cluster.
2. Describe the EKS cluster using the AWS EKS SDK.
3. Using the describe response, build a Kubernetes client.
4. Using that client, deploy the application into the EKS cluster.
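For illustration only, here is the CLI equivalent of those steps (the cluster name, region, and manifest file are placeholders); the SDK path performs the same DescribeCluster call and then talks to the same cluster API endpoint:

    # Step 2: describe the target cluster to get its API endpoint and CA data
    aws eks describe-cluster --name cluster2 --region us-east-1

    # Step 3: write those details into a kubeconfig entry
    aws eks update-kubeconfig --name cluster2 --region us-east-1

    # Step 4: deploy the application against that cluster
    kubectl apply -f my-app.yaml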
The above steps work from my local machine for any EKS cluster in my account, but if I run my program from one EKS cluster (cluster1) to deploy my application into another (cluster2), I get a timeout error at the 4th step.
Can you help me figure out what I am missing?
I am not sure exactly what you are planning, but there are tools available that you can deploy on EKS and that can deploy to any Kubernetes cluster, cloud account, etc.
You should check out Spinnaker, an open-source tool, for this.
My team helps companies use Spinnaker in their environments.
Related
We have a project where we need to migrate EKS (Elastic Kubernetes Service) clusters to AKS (Azure Kubernetes Service). What are the steps we need to follow to successfully migrate those clusters?
From some research, we found that we can only migrate by backing up the Kubernetes cluster to an AWS storage bucket, moving the backup to Azure Blob Storage, and then configuring the AKS settings.
Is this the right approach?
Yes, you can use tools like Velero to back up and restore the Kubernetes cluster.
I have written an article on this that you can refer to: https://faun.pub/clone-migrate-data-between-kubernetes-clusters-with-velero-e298196ec3d8
You can use the appropriate Velero storage plugin as required; both EKS and AKS are supported. A rough sketch of the flow is below.
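As a hedged sketch only (the bucket, plugin version, namespace, and credentials file names are placeholders), the backup-and-restore flow with the Velero CLI looks roughly like this; the restore step assumes the Velero installation on the AKS side can read the storage location that holds the backup:

    # On the EKS cluster: install Velero with the AWS object-storage plugin
    velero install --provider aws \
      --plugins velero/velero-plugin-for-aws:v1.5.0 \
      --bucket my-velero-backups \
      --backup-location-config region=us-east-1 \
      --secret-file ./aws-credentials

    # Back up the namespaces you want to migrate
    velero backup create eks-to-aks --include-namespaces my-app

    # On the AKS cluster (with Velero installed and pointed at a storage
    # location containing the backup): restore it
    velero restore create --from-backup eks-to-aks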
Well, I read the user guide for the AWS EKS service and successfully created a managed node group for the EKS cluster.
I don't know how to add nodes running on my own machine to the EKS cluster, or whether EKS supports this at all; I didn't find any clue in the documentation. I read the 'self-managed node group' chapter, which covers adding self-managed EC2 instances and an Auto Scaling group to the EKS cluster, rather than a private node running on another cloud (Azure, Google Cloud) or on my own machine.
Does EKS support this? If so, how do I do it?
This is not possible. It is (implicitly) called out in this page. All worker nodes need to be deployed in the same VPC where you deployed the control plane (not necessarily the same subnets though). EKS Anywhere (to be launched later this year) will allow you to deploy a complete EKS cluster (control plane + workers) outside of an AWS region (but it won't allow running the control plane in AWS and workers locally).
As far as I know, the EKS service doesn't support adding your own nodes to the cluster. The 'EKS Anywhere' offering does, but it is not available yet; it should be soon.
I'm new to Terraform.
I am trying to:
Create a Kubernetes cluster in GCP (GKE) using Terraform
Deploy a K8s deployment to the same cluster using Terraform
How can I proceed to create a new cluster on GCP and then deploy a service onto the cluster I have just created?
You should use the first provider configuration, the one with host, token and ca_certificate.
The one with config_path will use the kubeconfig from your host, so it will run the kubernetes provider as your user, not as the Terraform service account.
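One related point, as a hedged sketch only: on the very first apply, the GKE cluster that the kubernetes provider points at does not exist yet, so a common workaround is to create the cluster resource first and then apply the rest (the resource address google_container_cluster.primary is a placeholder for whatever you named it):

    terraform init

    # First apply: create only the GKE cluster so the kubernetes provider
    # has an endpoint, token, and CA certificate to read
    terraform apply -target=google_container_cluster.primary

    # Second apply: create the kubernetes_deployment and remaining resources
    terraform apply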
The provider configuration looks good; I use the same one and it works.
Are you setting up a single cluster with Terraform, or many? Can you share your Terraform resources?
I am having a hard time deploying Spinnaker to my GKE-Cluster-A, which has a Palo Alto firewall in front of it, and I ran into some issues along the way. So I am planning to create another cluster, GKE-Cluster-B, for Spinnaker only. Is it possible to deploy my application to GKE-Cluster-A while my Spinnaker continuous-delivery setup runs on GKE-Cluster-B?
Because, based on my understanding, I have to deploy Spinnaker in the same cluster where I will deploy my application.
Spinnaker is able to work with more than one cluster. A Kubernetes cluster (GKE, EKS, on-premises) is configured as a provider account, and you can configure Spinnaker to work with many different providers. See https://www.spinnaker.io/setup/install/providers/ for details.
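As a rough sketch (the account name and kubeconfig context below are placeholders), registering GKE-Cluster-A as an additional deployment target with Halyard looks something like this:

    # Enable the Kubernetes provider (if not already enabled)
    hal config provider kubernetes enable

    # Add GKE-Cluster-A as an account, using its kubeconfig context
    hal config provider kubernetes account add gke-cluster-a \
      --context gke_my-project_us-central1_gke-cluster-a

    # Push the updated configuration to the Spinnaker installation on GKE-Cluster-B
    hal deploy apply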
I want to set up my Hyperledger blockchain application in a Kubernetes cluster.
I don't want to encourage questions like this, but here are some steps that could possibly help you:
Ensure your application runs correctly locally on Docker.
Construct your Kubernetes configuration files. What you will need:
A deployment or a statefulset for each of your peers.
A statefulset for the CouchDB instance of each of your peers.
A deployment or a statefulset for each of your orderers.
One service per peer, orderer and couchdb (to allow them to communicate).
A job that creates and joins the channels.
A job that installs and instantiates the chaincode.
Generated crypto-material and network-artifacts.
Kubernetes Secrets or persistent volumes that hold your crypto-material and network-artifacts.
An image of your dockerized application (I assume you have some sort of server using an SDK to communicate with the peers) uploaded on a container registry.
A deployment that uses that image and a service for your application.
Create a Kubernetes cluster either locally or on a cloud provider and install the kubectl CLI on your computer.
Apply the configuration files to your cluster (e.g. kubectl apply -f peerDeployment.yaml) in this order (a rough command sketch follows at the end of this answer):
Secrets
Peers, couchdb's, orderers (deployments, statefulsets and services)
Create channel jobs
Join channel jobs
Install and instantiate chaincode job
Your application's deployment and service
If everything was configured correctly, you should have a running HLF platform in your Kubernetes cluster. It goes without saying that you have to research each step to understand what you need to do, and experiment a lot.
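As a hedged sketch of that apply order (all file, directory, and job names here are hypothetical):

    # 1. Secrets holding crypto material and network artifacts
    kubectl apply -f secrets/

    # 2. Peers, CouchDBs, orderers (deployments/statefulsets and their services)
    kubectl apply -f peers/ -f couchdb/ -f orderers/

    # 3. Channel creation and join jobs, waiting for each to finish
    kubectl apply -f jobs/create-channel.yaml
    kubectl wait --for=condition=complete job/create-channel --timeout=300s
    kubectl apply -f jobs/join-channel.yaml
    kubectl wait --for=condition=complete job/join-channel --timeout=300s

    # 4. Chaincode install/instantiate job
    kubectl apply -f jobs/install-chaincode.yaml
    kubectl wait --for=condition=complete job/install-chaincode --timeout=600s

    # 5. Your application's deployment and service
    kubectl apply -f app/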