I already have some services in my k8s cluster and want to maintain them separately. Examples:
grafana with custom dashboards and a custom Dockerfile
prometheus-operator instead of basic prometheus
jaeger pointing to elasticsearch as internal storage
cert-manager in my own namespace (I also use it for legacy nginx-ingress routing)
Is it possible to use these existing instances instead of creating Istio-specific ones? Can Istio communicate with them, or is it hardcoded?
Yes, it is possible to use external services with Istio. You can disable the bundled Grafana and Prometheus simply by setting the proper flags in the values.yaml of the Istio Helm chart (grafana.enabled=false, etc.).
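As a sketch, for the older Istio Helm charts that bundled these add-ons, the toggles in values.yaml looked roughly like this (verify the keys against your chart version):

# values.yaml for the istio Helm chart (older chart layouts; check your version)
grafana:
  enabled: false      # use your own Grafana instead of the bundled one
prometheus:
  enabled: false      # use your own Prometheus / prometheus-operator
tracing:
  enabled: false      # use your own Jaeger deployment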
You can check the kyma-project to see how Istio is integrated with prometheus-operator, a Grafana deployment with custom dashboards, and a custom Jaeger deployment. From your list, only cert-manager is missing.
Kubernetes provides quite a large variety of networking and load-balancing features out of the box. Still, simplifying and extending this functionality with Istio sidecars is a good choice, as they are injected into Pods in order to proxy the traffic between internal Kubernetes services.
You can inject sidecars manually or automatically. If you choose the manual way, make sure to add the appropriate annotation under the Pod's annotations field:
annotations:
  sidecar.istio.io/inject: "true"
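In context, a minimal Deployment carrying that annotation in its Pod template might look like this (all names are illustrative):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                # illustrative name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
      annotations:
        sidecar.istio.io/inject: "true"   # request sidecar injection for this Pod
    spec:
      containers:
        - name: my-app
          image: my-app:latest            # illustrative image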
Automatic sidecar injection requires the MutatingAdmissionWebhook admission controller, available since Kubernetes 1.9, so sidecars can also be injected during the Pod creation process.
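With the webhook in place, labeling a namespace is enough for sidecars to be injected into newly created Pods in it; a minimal sketch (the namespace name is illustrative):

apiVersion: v1
kind: Namespace
metadata:
  name: my-namespace            # illustrative name
  labels:
    istio-injection: enabled    # tells the mutating webhook to inject sidecars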
Familiarize yourself with this article to shed light on using the different monitoring and traffic-management tools in Istio.
I'm trying to migrate our ingress controllers from the old stable/nginx-ingress to the newer kubernetes/ingress-nginx
I have followed their instructions for zero downtime deployments.
Create a second nginx-controller with the kubernetes/ingress-nginx helm chart.
The ingressClassName has to be different from the original:
original ingressClassName: nginx
new ingressClassName: nginx2
Update DNS to point to the new nginx2 ELB.
Get rid of the old nginx-controller
This is all great, but all of our services/deployments are attached (via their Ingress resources) to ingressClassName: nginx. We can update the DNS, but then the services attached to the old class won't receive traffic. We could update the services at the same time, but they would update at different moments, causing an outage of some kind during the switch.
All of the research I have done seems to stop at the controller level. It doesn't go deeper and explain how to keep all the services connected during the switch.
How can I get both nginx controllers to route traffic to the application at the same time? I have not been able to get that to happen at the service or nginx controller level.
Or maybe I'm thinking about this incorrectly, and it can work in a different way.
Thanks.
There are multiple methods; below I describe one using Istio, with documentation for alternative methods linked for your reference.
You can avoid downtime during the migration by splitting traffic. There are a few traffic-splitting tools; on Kubernetes, Istio provides this capability and lets you direct a percentage of traffic to the new ingress controller while keeping the rest on the old one.
Install Istio in your cluster and configure your ingress resources to use the Istio gateway instead of the ingress controllers directly:
Install Istio in your cluster.
Configure your ingress resources to use the Istio gateway.
Create a VirtualService for your ingress resources and gradually increase the traffic to the new ingress controller (see the sketch below). Also make sure to update your DNS records to point to the new ingress controller's IP.
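A minimal sketch of such a weighted split, assuming both controllers are exposed as in-cluster Services (all names, hosts, and weights here are illustrative):

apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: public-gateway
spec:
  selector:
    istio: ingressgateway        # the default Istio ingress gateway
  servers:
    - port:
        number: 80
        name: http
        protocol: HTTP
      hosts:
        - "app.example.com"      # illustrative host
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: app-traffic-split
spec:
  hosts:
    - "app.example.com"
  gateways:
    - public-gateway
  http:
    - route:
        - destination:
            host: old-nginx-controller.ingress.svc.cluster.local   # illustrative Service
          weight: 80             # keep most traffic on the old controller
        - destination:
            host: new-nginx-controller.ingress.svc.cluster.local   # illustrative Service
          weight: 20             # gradually raise this toward 100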
For reference and further information, please check the official Istio page and the Istio Service Mesh Workshop.
For alternative methods, please refer to the options below:
Canary Deployments and Service Type LoadBalancer
I have deployed the Istio service mesh on a GKE cluster in the istio-system namespace, using the base & istiod Helm charts following these documents.
I have deployed Prometheus, Grafana, and Alertmanager using the kube-prometheus-stack Helm chart.
Every pod of these workloads is running fine; I don't see any errors. However, I don't get any metrics related to the Istio workloads in the Prometheus UI. Because of that, I don't see any network graph in the Kiali dashboard.
Can anyone help me resolve this issue?
Istio expects Prometheus to discover which pods expose metrics through the Kubernetes annotations prometheus.io/scrape, prometheus.io/port, and prometheus.io/path.
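For reference, on a Pod those annotations look roughly like this (the port and path shown are the merged-metrics endpoint Istio's sidecar typically exposes; verify them against your Istio version):

annotations:
  prometheus.io/scrape: "true"
  prometheus.io/port: "15020"              # Istio merged-metrics port (assumption; check your version)
  prometheus.io/path: "/stats/prometheus"  # metrics path served by the sidecar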
The Prometheus community has decided that those annotations, while popular, are insufficiently useful to be enabled by default. Because of this, the kube-prometheus-stack Helm chart does not discover pods using those annotations.
To get your installation of Prometheus to scrape your Istio metrics, you need to either configure Istio to expose metrics in a way your installation of Prometheus expects (you'll have to check the Prometheus configuration for that; I do not know what it does by default) or add a Prometheus scrape job that performs discovery using the above annotations.
Details about how to integrate Prometheus with Istio are available here and an example Prometheus configuration file is available here.
You need to add additionalScrapeConfigs for Istio in the kube-prometheus-stack Helm chart values.yaml:
prometheus:
  prometheusSpec:
    additionalScrapeConfigs:
      - {{ add your scrape config for istio }}
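A sketch of what that scrape config could be, adapted from the standard annotation-based kubernetes-pods discovery job (the job name is arbitrary):

prometheus:
  prometheusSpec:
    additionalScrapeConfigs:
      - job_name: kubernetes-pods
        kubernetes_sd_configs:
          - role: pod
        relabel_configs:
          # keep only pods annotated with prometheus.io/scrape: "true"
          - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
            action: keep
            regex: "true"
          # honour a custom metrics path from prometheus.io/path
          - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
            action: replace
            target_label: __metrics_path__
            regex: '(.+)'
          # honour a custom port from prometheus.io/port
          - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
            action: replace
            regex: '([^:]+)(?::\d+)?;(\d+)'
            replacement: '$1:$2'
            target_label: __address__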
According to this, it is possible to configure mirroring for a Traefik server.
Now I am using a Traefik ingress controller and want to configure mirroring. One possible solution seems to be using the IngressRoute CRD (sketched below). While that would probably work, is it also possible to configure mirroring via annotations? That would allow me to rely on the Kubernetes Ingress abstraction instead of using CRDs.
The background is that I don't want to make assumptions about which technology is used as the ingress controller.
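For what it's worth, the CRD-based approach mentioned above would look roughly like this with Traefik v2's TraefikService mirroring (service and host names are illustrative):

# TraefikService of type "mirroring" (Traefik v2 CRDs)
apiVersion: traefik.containo.us/v1alpha1
kind: TraefikService
metadata:
  name: app-mirrored
spec:
  mirroring:
    name: app-main            # primary Service receiving all traffic
    port: 80
    mirrors:
      - name: app-shadow      # Service receiving a copy of 10% of requests
        port: 80
        percent: 10
---
# IngressRoute referencing the mirroring TraefikService
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: app-route
spec:
  entryPoints:
    - web
  routes:
    - match: Host(`app.example.com`)   # illustrative host
      kind: Rule
      services:
        - name: app-mirrored
          kind: TraefikService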
What is the best approach to create the ingress resource that interacts with an ELB in a target deployment environment that runs on Kubernetes?
As we all know, there are different cloud providers and many types of settings related to the deployment of your ingress resource, which depend on your target environment: AWS, OpenShift, plain vanilla K8s, Google Cloud, Azure.
On cloud deployments like Amazon, Google, etc., ingresses also need special annotations, most of which are common to all microservices in need of an ingress.
If we also deploy a mesh like Istio on top of K8s, then we need to use an Istio gateway with ingress. If we use OCP, it has a special kind called "Routes".
I'm looking for the best solution that targets more standard options, decreasing the differences between platforms when deploying an ingress resource.
So maybe the best approach is to create an operator to deploy the Ingress resource because of the many different setups here?
Is it important to create some generic component to deploy the Ingress while staying cloud-agnostic?
How do other companies deploy their ingress resources to the k8s cluster?
What is the best approach to create the ingress resource that interacts with an ELB in a target deployment environment that runs on Kubernetes?
On AWS the common approach is to use an ALB with the AWS ALB Ingress Controller, but it has its own drawbacks in that it creates one ALB per Ingress resource.
If we also deploy a mesh like Istio, then we need to use an Istio gateway with ingress.
Yes, then the situation is different, since you will use a VirtualService from Istio or use AWS App Mesh. That approach looks better, and you will not have an Ingress resource for your apps.
I'm looking for the best solution that targets more standard options, decreasing the differences between platforms when deploying an ingress resource.
Yes, this lies at the intersection between the cloud-provider infrastructure and your cluster, so there are unfortunately many different setups. It also depends on whether your ingress gateway is within the cluster or outside of it.
In addition, the Ingress resource just became GA (stable) in the most recent Kubernetes release, 1.19.
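For reference, a minimal GA Ingress under networking.k8s.io/v1 looks like this, with the controller selected via ingressClassName (names are illustrative):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
spec:
  ingressClassName: nginx        # selects which controller handles this Ingress
  rules:
    - host: app.example.com      # illustrative host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app     # illustrative backend Service
                port:
                  number: 80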
My current version of istio is 0.2.12.
I have a deployment that was deployed with istio kube-inject and tries to connect to a service/deployment inside the Kubernetes cluster that does not use Istio. How is it possible to allow access from the Istio-injected deployment to the non-Istio deployment?
In this case, the Istio-injected deployment is a Spring Boot application and the other is an ephemeral MySQL server.
Any ideas?
You should be able to access all Kubernetes services (both Istio-injected and regular Kubernetes ones) from Istio-injected pods.
This is now possible; please see the "Can I enable Istio Auth with some services while disabling others in the same cluster?" question in the security section of the FAQ: https://istio.io/help/faq.html
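For readers on recent Istio versions: the per-service equivalent of that FAQ entry is a DestinationRule that disables mTLS toward the workload without a sidecar. A sketch; the host name is illustrative:

apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: mysql-disable-mtls
spec:
  host: mysql.default.svc.cluster.local   # illustrative non-mesh service
  trafficPolicy:
    tls:
      mode: DISABLE    # send plain traffic to the workload without a sidecar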