Hello fellow developers.
I have the following problem:
I have a solution that relies on a few services, each running in its own container. The way I provide the solution at the moment is to create a manual deployment for each client, each time, using cloud services. The requirements include scalability, fault tolerance, cost tracking, and access management.
However, instead of creating a deployment manually every time for each client, I would like clients to go through another service or app, so the new requirements are:
Dashboard to issue a deployment.
Dashboard gives overview of health of services.
Dashboard tracks costs and issues API keys automatically.
My first thought is to use Kubernetes and have deployments created when a user issues them through the dashboard.
I have little experience with Kubernetes, so I would appreciate anyone pointing me in a direction rather than necessarily explaining everything.
Thanks all
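For context, here is roughly what I imagine the dashboard generating behind the scenes: one Deployment manifest per client, rendered from a template. This is only a sketch, and every name in it (client, service, image) is a hypothetical placeholder:

```yaml
# Hypothetical per-client manifest a dashboard backend could render and apply.
# All names and the image are placeholders, not real values.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-service
  namespace: client-a            # one namespace per client keeps tenants separate
  labels:
    client: client-a             # a client label makes cost-tracking queries easy
spec:
  replicas: 2                    # more than one replica for fault tolerance
  selector:
    matchLabels:
      app: my-service
      client: client-a
  template:
    metadata:
      labels:
        app: my-service
        client: client-a
    spec:
      containers:
        - name: my-service
          image: registry.example.com/my-service:1.0.0
          resources:
            requests:            # requests feed autoscaling and cost reporting
              cpu: 250m
              memory: 256Mi
```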
I'm new to Kubernetes and trying to point all requests for one domain to another service running locally in the cluster.
Both applications are running in the same cluster, under different namespaces.
Example domains:
a.domain.com hosts the first app
b.domain.com hosts the second app
When I make a curl request from the first app to the second app (b.domain.com), the traffic travels over the internet to reach the second app.
Outside Kubernetes, what I would usually do is point b.domain.com to localhost in /etc/hosts.
What do we do in this case in Kubernetes?
I was looking into Network Policies, but I'm not sure that's the correct approach.
Also, as I understand it, we could just call service-name.namespace:port from the first app. But I would like to keep the full URL.
Let me know if you need more details to help me solve this.
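For reference, the closest thing I found to the /etc/hosts trick is the sketch below, pieced together from the docs, so treat it as an assumption rather than a working setup (all names and the IP are made up): pin the second app's Service to a fixed clusterIP, then use hostAliases, which writes entries into the pod's /etc/hosts, to map b.domain.com to that IP.

```yaml
# Assumed names throughout; the pinned clusterIP must fall inside the
# cluster's Service CIDR or the Service will be rejected.
apiVersion: v1
kind: Service
metadata:
  name: app-b
  namespace: app-b-ns
spec:
  clusterIP: 10.96.100.50        # pinned so the hostAliases entry stays valid
  selector:
    app: app-b
  ports:
    - port: 80
      targetPort: 8080
---
apiVersion: v1
kind: Pod
metadata:
  name: app-a
  namespace: app-a-ns
spec:
  hostAliases:                   # hostAliases is Kubernetes' /etc/hosts mechanism
    - ip: "10.96.100.50"         # the pinned clusterIP of app-b's Service
      hostnames:
        - "b.domain.com"         # curl to b.domain.com now stays in-cluster
  containers:
    - name: app-a
      image: registry.example.com/app-a:latest
```

One caveat I'm aware of: if the apps talk HTTPS, the second app still has to present a certificate valid for b.domain.com, since the TLS handshake keeps using that hostname.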
The way to do it is by using the Kubernetes Gateway API. It is true that you could deploy your own implementation, since this is an open-source project, but there are already plenty of implementations available, and it would be much easier to learn how to use one of those instead.
For what you want, Istio would fit your needs. If your cluster is hosted in a cloud environment, you can take a look at Anthos Service Mesh, which is Google's managed version of Istio.
Finally, take a look at the blog Welcome to the service mesh era, since the traffic management between services is one of the elements of the service mesh paradigm, among others like monitoring, logging, etc.
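To make the Istio suggestion concrete, here is a minimal sketch of a VirtualService that keeps mesh traffic for b.domain.com inside the cluster. The service name, namespace, and port are assumptions based on the example domains above, and depending on your mesh configuration you may also need a ServiceEntry registering b.domain.com as a known host:

```yaml
# Hedged sketch: route in-mesh requests for b.domain.com to the local Service
# instead of letting them leave the cluster. Names and port are assumed.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: b-domain-internal
  namespace: app-b-ns
spec:
  hosts:
    - "b.domain.com"             # the full URL the first app keeps using
  http:
    - route:
        - destination:
            host: app-b.app-b-ns.svc.cluster.local   # in-cluster DNS name
            port:
              number: 80
```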
I would like to migrate an application from one GKE cluster to another, and I'm wondering how to accomplish this while avoiding any downtime for this process.
The application is an HTTP web backend.
How I'd usually handle this in a non-GCP/K8s context is to have a load balancer in front of the application, set up a new web backend, and then just update the appropriate IP address in the load balancer to point from the old IP to the new IP. This gives essentially zero downtime while also allowing for a seamless rollback if anything goes wrong.
I don't see why this shouldn't work in this context as well, but I'm not 100% sure. And if there is a more robust or alternative (GCP/GKE-friendly) way to do this, I'd like to investigate that.
So to summarize my question: does GCP/GKE support this type of migration functionality? If not, are there any implications I need to be aware of with my usual load balancer approach mentioned above?
The reason for migrating is that the current k8s cluster is running quite an old version (1.18), and if I did a GKE version upgrade to something more recent like 1.22, I suspect a lot of incompatibilities as well as risk.
I see 2 approaches:
In the new cluster, get a new IP address and update the DNS record to point to the new load balancer (see the sketch after this list for pinning the new IP in advance)
See if you can switch to Multi-cluster gateways, however that would probably require you to use approach 1 to switch to multi-cluster gateways as well: https://cloud.google.com/kubernetes-engine/docs/how-to/deploying-multi-cluster-gateways
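For approach 1, one hedged sketch (the IP name and hostname are assumptions): reserve a static external IP ahead of time and reference it from the new cluster's Ingress, so the DNS record you flip points at an address you control and can roll back:

```yaml
# Assumes a global static IP was reserved in advance, e.g.:
#   gcloud compute addresses create web-backend-ip --global
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-backend
  annotations:
    kubernetes.io/ingress.global-static-ip-address: web-backend-ip
spec:
  rules:
    - host: app.example.com      # assumed hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web-backend
                port:
                  number: 80
```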
A few pain points you'll run into:
As someone used to DIY Kubernetes, I hate GKE's managed Ingress certs, because they make it very hard to pre-provision HTTPS certs on the new cluster in advance. (GKE's de facto method of provisioning HTTPS certs is to update DNS to point to the LB and then wait 10-60 minutes. That means if you cut over to a new cluster, the new cluster's HTTPS cert, supplied by a ManagedCertificate custom resource, won't be ready in advance; see the sketch after this list.)
It is possible to pre-provision HTTPS certs using an ACME DNS challenge on GCP, but it's poorly documented and a god-awful UX (user experience): there's no GUI, and the CLI API is terrible.
You can do it using gcloud services enable certificatemanager.googleapis.com, but I'd highly recommend against that Certificate Manager service, which went GA in June 2022. The UX is painful.
GKE's official docs are pretty bad when it comes to this scenario.
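For reference, the ManagedCertificate custom resource mentioned in the first pain point looks like the sketch below (the domain is an assumption). The key limitation is exactly what's described above: provisioning only starts once DNS for the listed domain resolves to the load balancer, so it cannot be made ready before a cutover:

```yaml
# GKE ManagedCertificate: provisioning begins only after DNS for the domain
# points at the load balancer's IP, hence the 10-60 minute wait.
apiVersion: networking.gke.io/v1
kind: ManagedCertificate
metadata:
  name: app-cert
spec:
  domains:
    - app.example.com            # assumed domain
# Attached to the Ingress via an annotation:
#   networking.gke.io/managed-certificates: app-cert
```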
You basically want to do 2 things:
Follow this how-to guide for a zero-downtime HTTPS cutover from cluster1 to cluster2 by leveraging a Let's Encrypt free cert:
https://gist.github.com/neoakris/4aafeac7628995da8dd423f1702c975b
(I know link-only answers are bad, but it's GitHub (great uptime), and it's way too long and nuanced to post here.)
Use Velero to migrate workloads from cluster1 to cluster2. (It can migrate CRDs, CRs, generic YAML objects, and PVs/PVCs. One thing of note: Velero works best when you're migrating to and from clusters of the same version. If you go from a really old version to a really new version, you could encounter issues where Kubernetes YAML APIs were removed in the new version. Going from an old version to a new version can be done, but it's best left to an experienced hand. For happy-path results, migrating to and from clusters of the same version is best.)
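As a rough sketch of the Velero flow (the backup and namespace names are assumptions): on cluster1 you declare a Backup custom resource (or run velero backup create), and on cluster2, pointed at the same object storage, you replay it with velero restore create --from-backup app-migration:

```yaml
# Hedged sketch: a Velero Backup CR on cluster1. Both clusters must share the
# same backup storage location for the restore on cluster2 to see it.
apiVersion: velero.io/v1
kind: Backup
metadata:
  name: app-migration
  namespace: velero              # Velero watches its own namespace for Backup CRs
spec:
  includedNamespaces:
    - my-app                     # assumed application namespace
  snapshotVolumes: true          # snapshot PVs alongside the object backup
```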
I'm looking into deploying a cluster on Google Kubernetes Engine in the near future. I've also been looking into using Vault by Hashicorp in order to manage the secrets that my cluster has access to. Specifically, I'd like to make use of dynamic secrets for greater security.
However, all of the documentation and YouTube videos that cover this type of setup always mention that a set of nodes strictly dedicated to Vault should operate as their own separate cluster, thus requiring more VMs.
I am curious if a serverless approach is possible here. Namely, using Google Cloud Run to create Vault containers on the fly.
This video (it should start at the right time) mentions that Vault can be run as a Deployment, so I don't see there being an issue with state. And since Google mentions that each Cloud Run service gets its own stable HTTPS endpoint, I believe I can simply pass this endpoint to my configuration and all of the pods will be able to find the service, even if new instances are created. However, I'm new to Kubernetes, so I'm not sure if I'm entirely correct here.
Can anyone with more experience using Kubernetes and/or Vault point out any potential drawbacks with this approach? Thank you.
In beta for three weeks now, and not yet officially announced (it should be in a couple of days), you can have a look at Secret Manager. It's a serverless secret manager with, I think, all the basic features that you need.
The main reason it hasn't been announced yet is that the client libraries in several languages aren't released/finished yet.
The awesome guy in your video link, Seth Vargo, has been involved in this project.
He has also released Berglas. It's written in Go, uses Cloud KMS to encrypt secrets, and uses Google Cloud Storage to store them. I also recommend it.
I built a Python library to make it easy to use Berglas secrets in Python.
I hope this secret management tool meets your expectations. In any case, it's serverless and quite cheap!
I'm new to Kubernetes, and after I've seen how huge it is I thought I'd ask for a bit of help.
The purpose of my company is to deploy a set of apps independently for each of our clients. Say we have an app A: we want to deploy a first version for client 1, another version for client 2, etc. We will have a lot of clients in the future (maybe around 50). Of course, we want to be able to manage them easily.
Which part of Kubernetes should I explore to achieve this? Or, if Kubernetes is not fit for this, what else should I consider?
Thanks!
Kubernetes has a concept called namespaces, which are isolated from one another and provide isolation between the deployments inside them.
So you can use and explore namespaces in Kubernetes, which will help you isolate each client's versions and deployments (see the sketch at the end of this answer).
"if kubernetes is not fit for this what else should I consider?"
I don't think that will be an issue for your requirements: Kubernetes has lots of options for zero-downtime deployments, and with Kubernetes you can implement CI/CD, so I think Kubernetes will be easy to set up and will manage any application well.
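A minimal sketch of that setup, with hypothetical client names: one Namespace per client, optionally with a ResourceQuota so no single client can starve the others:

```yaml
# Hedged sketch: one namespace per client isolates that client's resources.
apiVersion: v1
kind: Namespace
metadata:
  name: client-1
  labels:
    client: client-1             # label simplifies per-client selection
---
apiVersion: v1
kind: ResourceQuota              # optional cap on what one client can consume
metadata:
  name: client-1-quota
  namespace: client-1
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 8Gi
```

Every Deployment, Service, and ConfigMap for client 1 then carries namespace: client-1, and kubectl -n client-1 scopes commands to that client.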
Requirement - New Relic monitoring for an application running in pods as part of a kubernetes cluster.
I have installed kube-state-metrics on my cluster and am able to see the Kubernetes dashboard using New Relic Insights.
I also need to configure application monitoring for the same app. I'm following https://blog.newrelic.com/2017/11/27/monitoring-application-performance-in-kubernetes/ for this.
I have some questions:
Can this be achieved using kube-state-metrics?
Do I need a separate YAML file containing the license key for each pod?
Do I need to make changes in my application too, or will adding the information in the spec work?
Do I need to install the Java agent in every pod? If yes, will it eat resources?
Somehow, installing application monitoring is becoming complex. Please explain exactly what needs to be installed.
You didn't mention your stack; you should follow the instructions on their site for your language. Typically you just pull in their agent library and configure credentials to get started. You shouldn't have a reason to tell your pods apart, so the agent credentials can be the same for all pods (see the sketch below).
Installing the agent at the infrastructure level gives you infrastructure data, so you'll get alerts if you're running out of memory/disk/CPU and such. The infrastructure agent cannot possibly know about application data. If you want application performance data (APM), you need to install the agent at the application level too, and then you'll get data such as HTTP request rates, error rates, and response times if it's a web server. You can also annotate the current transaction with data, which is all application-specific. They have a bunch of client agents; see if there's one for your stack. For example, all you need for a Node.js service is require('newrelic') at the top of your app, plus configuration.
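As a hedged sketch of the "same credentials for all pods" point, using the Node.js agent as the example (the Secret and app names are assumptions): store the license key in a single Secret and inject it into every pod through the agent's standard environment variables, so no per-pod YAML is needed:

```yaml
# Assumed names; NEW_RELIC_LICENSE_KEY and NEW_RELIC_APP_NAME are the agent's
# standard environment variables, so no per-pod config file is required.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-backend
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web-backend
  template:
    metadata:
      labels:
        app: web-backend
    spec:
      containers:
        - name: web-backend
          image: registry.example.com/web-backend:latest
          env:
            - name: NEW_RELIC_APP_NAME
              value: web-backend
            - name: NEW_RELIC_LICENSE_KEY
              valueFrom:
                secretKeyRef:
                  name: newrelic-license   # one shared Secret for all pods
                  key: license
```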