I already have two k8s Deployments running that were created without Helm. Now I have to add the following k8s objects to them:
A NodePort Service
Toleration
NodeSelector
Host nginx as a load balancer service
I am trying to achieve this via Helm. Can I use labels to connect the Helm charts to the existing Deployments? Or is it mandatory to use Helm charts for the entire deployment?
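For illustration, here is a minimal sketch of a NodePort Service template that a new chart could ship; the selector label below (app: my-existing-app) is an assumption and must match the pod labels your existing Deployment already applies:

# templates/service.yaml -- hypothetical chart template; label and port values are assumptions
apiVersion: v1
kind: Service
metadata:
  name: {{ .Release.Name }}-nodeport
spec:
  type: NodePort
  selector:
    app: my-existing-app      # must equal the pod labels of the non-Helm Deployment
  ports:
    - port: 80
      targetPort: 8080
      nodePort: 30080         # optional; omit to let Kubernetes choose a port

Note that tolerations and nodeSelector live in the Pod spec, so those two items are changes to the existing Deployments themselves rather than new objects a chart can add alongside them.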
What I'm trying to do
I have deployed an ASP.NET Core gRPC service on Docker for Desktop (Kubernetes enabled). To do client-side load balancing, I want to expose it via a headless service. The Deployment and Service definition YAML files are as provided by the link, viz. Deployment.yaml, service.yaml, and the PV and PVC .yaml. When the deployment runs, two replicas are created. I now want to expose them via a headless service, do a DNS lookup of the pods' IP addresses, and load-balance on the client side. For this, I installed the Bitnami external-dns Helm chart, without modifying the default chart values. But when I try an nslookup of my service, it does not work.
My expectation
Deploy the Bitnami external-dns on Docker for Desktop with Kubernetes enabled, and configure the service to be exposed as DNS on the load balancer. I was expecting the nslookup to succeed and return the pod IPs.
Can someone help me get this working?
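For reference, a minimal sketch of a headless Service, assuming the Deployment's pods carry an app: grpc-server label and listen on port 5001 (both names are assumptions):

# headless-service.yaml -- minimal sketch; the name, label, and port are assumptions
apiVersion: v1
kind: Service
metadata:
  name: grpc-server-headless
spec:
  clusterIP: None             # headless: cluster DNS returns the pod IPs directly
  selector:
    app: grpc-server          # must match the Deployment's pod labels
  ports:
    - port: 5001
      targetPort: 5001

With clusterIP: None, an in-cluster nslookup of grpc-server-headless.<namespace>.svc.cluster.local should return the individual pod IPs via the cluster's own DNS (CoreDNS); note that external-dns publishes records to external DNS providers and is not needed for in-cluster resolution.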
I have deployed the Istio service mesh on a GKE cluster in the istio-system namespace, using the base and istiod Helm charts, following this document.
I have deployed Prometheus, Grafana, and Alertmanager using the kube-prometheus-stack Helm chart.
Every pod of these workloads is running fine; I don't see any errors. Yet I don't get any metrics related to the Istio workloads in the Prometheus UI, and because of that I don't see a network graph in the Kiali dashboard.
Can anyone help me resolve this issue?
Istio expects Prometheus to discover which pods are exposing metrics through the use of the Kubernetes annotations prometheus.io/scrape, prometheus.io/port, and prometheus.io/path.
The Prometheus community has decided that those annotations, while popular, are insufficiently useful to be enabled by default. Because of this, the kube-prometheus-stack Helm chart does not discover pods using those annotations.
To get your installation of Prometheus to scrape your Istio metrics, you need to either configure Istio to expose metrics in a way your Prometheus installation already expects (you'll have to check the Prometheus configuration for that; I do not know what it does by default) or add a Prometheus scrape job that does discovery using the annotations above.
Details about how to integrate Prometheus with Istio are available here and an example Prometheus configuration file is available here.
You need to add additionalScrapeConfigs for Istio in the kube-prometheus-stack Helm chart's values.yaml:

prometheus:
  prometheusSpec:
    additionalScrapeConfigs:
      - {{ add your scrape config for Istio }}
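For example, the scrape jobs below follow the sample configuration from the Istio documentation; the job names are illustrative, and you should verify the details against the Istio/Prometheus integration docs linked above:

prometheus:
  prometheusSpec:
    additionalScrapeConfigs:
      # Scrape Envoy sidecar stats via the container port named *-envoy-prom
      - job_name: envoy-stats
        metrics_path: /stats/prometheus
        kubernetes_sd_configs:
          - role: pod
        relabel_configs:
          - source_labels: [__meta_kubernetes_pod_container_port_name]
            action: keep
            regex: '.*-envoy-prom'
      # Discover pods through the prometheus.io/* annotations discussed above
      - job_name: kubernetes-pods
        kubernetes_sd_configs:
          - role: pod
        relabel_configs:
          - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
            action: keep
            regex: 'true'
          - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
            action: replace
            target_label: __metrics_path__
            regex: (.+)
          - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
            action: replace
            regex: ([^:]+)(?::\d+)?;(\d+)
            replacement: $1:$2
            target_label: __address__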
I created a deployment using a Kubernetes Deployment in an OpenShift cluster. I expected it to create a Service for all the container ports, like an OpenShift DeploymentConfig does.
But I did not find any Service created by the Kubernetes Deployment.
Does a Kubernetes Deployment not create the Service automatically, the way an OpenShift DeploymentConfig does?
My OpenShift version is 3.11.
Neither Deployment nor DeploymentConfig creates a Service in OpenShift. These components are used for replication control of the Pods.
A Service has to be configured separately, with a selector that points to the specific Pods:

selector:
  name: <the same label as in the Deployment's or DeploymentConfig's Pod template>
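For example, a complete (hypothetical) Service where the selector repeats the label from the Deployment's Pod template; the label value my-app is an assumption:

# service.yaml -- minimal sketch
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  selector:
    name: my-app            # same label as in spec.template.metadata.labels of the Deployment
  ports:
    - port: 80
      targetPort: 8080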
This link would help you on the same.
https://docs.openshift.com/container-platform/3.3/dev_guide/deployments/how_deployments_work.html#creating-a-deployment-configuration
Deployment and Service are different Kubernetes objects, and a Deployment does not automatically create a Service object. You need to define a Service in YAML that targets the ports from the pod definition inside the Deployment manifest. You then deploy both the Deployment and the Service; you can deploy them separately, or bundle them together in a single YAML file and deploy that.
For further details, follow this link: https://kubernetes.io/docs/concepts/services-networking/service/
I need to deploy NGINX to a Kubernetes cluster, for which I can use either a Helm chart or a Docker image. But I am not clear on the benefits of using a Helm chart. I guess my question is not specific to NGINX but applies in general.
A Helm chart and a container image aren't equivalent things to compare in Kubernetes.
A container image is the basic building block of what Kubernetes runs. An image will always be required to run an application on Kubernetes, no matter how it is deployed.
Helm is a packaging and deployment tool. It makes management of deployments to Kubernetes easier. A deployment would normally include a container image. It is possible to write a Helm chart that only manages other Kubernetes resources, but that is fairly rare.
Other tools in the same arena as Helm are Kustomize, Kompose, and using kubectl to apply or create resources. These are all clients of the Kubernetes API.
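As a sketch of how the two relate: a chart typically wraps the container image reference in a template, making the image one configurable value among many. The value names below (replicaCount, image.repository, image.tag) are the common chart convention, not a requirement:

# templates/deployment.yaml -- minimal illustrative chart template
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-nginx
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: {{ .Release.Name }}-nginx
  template:
    metadata:
      labels:
        app: {{ .Release.Name }}-nginx
    spec:
      containers:
        - name: nginx
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"

Upgrading the release to a new image version then becomes a one-value change instead of hand-editing manifests.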
Helm charts make it simple to package and deploy common applications on Kubernetes [1]. Helm brings three major benefits to your service deployments [2]:
- Deployment speed
- Application configuration templates
- Application testing
Use of Helm charts is recommended, because they are maintained and typically kept up to date by the Kubernetes community [3].
[1] https://kubernetes.io/blog/2016/10/helm-charts-making-it-simple-to-package-and-deploy-apps-on-kubernetes/
[2] https://www.nebulaworks.com/blog/2019/10/30/three-benefits-to-using-a-helm-chart-on-kubernetes/
[3] https://cloud.google.com/community/tutorials/nginx-ingress-gke
Helm is a tool for managing Kubernetes charts. Charts are packages of pre-configured Kubernetes resources. This is sometimes confusing for beginners, so what is the basic difference between Helm, helm, and tiller?
Helm is made of two components: the CLI binary named helm, which lets you communicate with a remote component named tiller that lives inside your Kubernetes cluster and is responsible for applying patches and changes to the resources you ask it to manage.
In fact, once you have deployed tiller using the command helm init, you will notice a new Deployment resource (commonly named tiller-deploy) running in the kube-system namespace.
The real question should be: why use Tiller rather than interacting directly with the Kubernetes API?
As usual, it is a matter of security concerns, summarized by the following points:
- Role-based access control, or RBAC (see the sketch after this list)
- Tiller's gRPC endpoint and its usage by Helm
- Tiller release information
- Helm charts
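For the RBAC point, the Helm v2 documentation's approach is to run Tiller under a dedicated ServiceAccount. A sketch of that setup follows; the cluster-admin binding mirrors the docs' example and should be narrowed for production:

# tiller-rbac.yaml -- per the Helm v2 RBAC docs' example
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system

Tiller is then installed with helm init --service-account tiller.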