Is it possible to use Istio without Kubernetes or Docker?

I have 4 microservices running on my laptop, listening on various ports. Can I use Istio to create a service mesh on my laptop so the services can communicate with each other through Istio? All the results I find on Google about Istio involve Kubernetes, but I want to run Istio without Kubernetes. Thanks for reading.

In practice, not really as of this writing, since pretty much all the Istio runbooks and guides are written for Kubernetes.
In theory, yes. Istio components are designed to be 'platform independent'. Quote from the docs:
While Istio is platform independent, using it with Kubernetes (or infrastructure) network policies, the benefits are even greater, including the ability to secure pod-to-pod or service-to-service communication at the network and application layers.
But unless you know the details of each of the components (Envoy, Mixer, Pilot, Citadel, and Galley) really well and are willing to spend a lot of time on it, getting Istio running outside of Kubernetes is not practically feasible.
If you want something less tied to Kubernetes, you can take a look at Consul. It doesn't have all the functionality Istio has, but it overlaps with some of its features.

I did some googling and found that Istio claims to support apps running outside Kubernetes, e.g. in VMs, but I have never tried it.
https://istio.io/latest/news/releases/0.x/announcing-0.2/#cross-environment-support
https://jimmysong.io/blog/istio-vm-odysssey/

Related

Multiple environments for websites in Kubernetes

I am a newbie in Kubernetes.
I have 19 LANs with 190 machines in total.
Each of the 19 LANs has 10 machines and 1 exposed IP.
I have different websites/apps and their environments that are assigned to each LAN.
How do I manage my Kubernetes cluster and do setup/housekeeping?
I would like to have a single portal or manager to manage the websites and environments (dev, QA, prod) and keep them isolated.
Is that possible?
I only got a vague idea of what you want to achieve so here goes nothing.
Since Kubernetes has a lot of convenience tools for setting up a cluster on a public cloud platform, I'd suggest starting by going through "kubernetes-the-hard-way". It is a guide to setting up a cluster on Google Cloud Platform without any additional scripts or tools, but the instructions can be applied to a local setup as well.
Once you have an operational cluster, the next step should be to set up an Ingress Controller. This gives you the ability to use one or more exposed machines (with public IPs) as gateways for the services running in the cluster. I'd personally recommend Traefik. It has great support for HTTP and Kubernetes.
Once you have the ingress controller set up, your cluster is pretty much ready to use. The process for deploying a service is really specific to the service's requirements, but the rule of thumb is to use a Deployment and a Service for stateless loads, and a StatefulSet and a headless Service for stateful workloads that need peer discovery. This is obviously overgeneralized and has many exceptions.
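As a hedged illustration of the stateless pattern - all names, images, and ports below are made-up placeholders, not anything from the question:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                  # hypothetical name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25    # placeholder image
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web                 # routes traffic to the Deployment's pods
  ports:
  - port: 80
    targetPort: 80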
For managing different environments, you could split your resources into different namespaces.
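As a minimal sketch (the namespace names are arbitrary), each environment gets its own namespace:

apiVersion: v1
kind: Namespace
metadata:
  name: dev
---
apiVersion: v1
kind: Namespace
metadata:
  name: qa
---
apiVersion: v1
kind: Namespace
metadata:
  name: prod

Deploying the same manifests with kubectl apply -n dev (or -n qa, -n prod) then keeps each environment's resources isolated by name, and you can layer RBAC or network policies per namespace on top.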
As for the single portal to manage it all, I don't think that anything as such exists, but I might be wrong. Besides, depending on your workflow, you can create your own portal using the Kubernetes API but it requires a good understanding of Kubernetes itself.

Kubernetes one service multiple deployments. Pros and cons?

In this question the accepted answer shows how to have multiple deployments in a single service so they can talk to each other using their internal loadbalancer ports. What are the pros and cons? My guesses:
Pros:
Easier to deploy?
Easier communication between pods (no need for ingress)?
Is there any added security, since the backend could be accessed only from within the cluster?
Cons:
You have to deploy all of the connected pods every time (not a microservice architecture)?
The only common reason to have multiple deployments in one service is for blue/green stuff, or maybe canary deploys (though usually that's done via a proxy with better control over the canary scaling factors). Beyond that, it comes up pretty rarely, sometimes maybe for integrating with Prometheus Operator's ServiceMonitor, or very niche tricks with in-place system rewrites.
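A minimal sketch of the blue/green case, with made-up names - one Service whose selector matches pods from two Deployments:

apiVersion: v1
kind: Service
metadata:
  name: myapp              # hypothetical name
spec:
  selector:
    app: myapp             # matches pods from both deployments below
  ports:
  - port: 80
    targetPort: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-blue
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
      track: blue
  template:
    metadata:
      labels:
        app: myapp         # shared label picked up by the Service
        track: blue
    spec:
      containers:
      - name: myapp
        image: myapp:1.0   # placeholder image
---
# myapp-green would be identical except track: green and image: myapp:2.0;
# shifting traffic is then a matter of scaling the two deployments.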

Is it possible to run both Istio and gRPC in a GKE cluster

Istio and gRPC seem complementary and I'd like to use both in the clusters.
The thing is that they both add an extra container which receives/proxy communication between pods / microservices.
Is it advised or not to use both in parallel in all pods?
Are there particular adaptations to do if one uses both?
Thanks for any advice!
Istio and gRPC do work well together. When declaring your services' ports to Istio, just make sure to name them grpc-something so the proxy knows it is h2/gRPC traffic and routes it properly.
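For example, a sketch of the port-naming convention (the service name and port number here are made up):

apiVersion: v1
kind: Service
metadata:
  name: my-grpc-backend    # hypothetical name
spec:
  selector:
    app: my-grpc-backend
  ports:
  - name: grpc-api         # the "grpc-" prefix tells Istio to treat this as h2/gRPC
    port: 50051
    targetPort: 50051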
You mention that gRPC adds an extra container - why not have your service speak gRPC natively?
We do have future plans for protocol transcoding and rich integrated gRPC/Istio libraries that would skip layers, but that's not there yet.

Both public and intranet services on the same OpenShift cluster

In my company we have a few public websites and many internal webapps. Currently they are running in different AWS security groups.
Is it possible to run both kind of services on the same OpenShift cluster and make sure internal services are not accessible from the Internet?
Thanks!
The traditional(?) way that is solved is through Internet-facing ELB/ALBs pointed at the NodePorts on the cluster. I personally haven't tried a Service of kind: LoadBalancer since 1.2, so I can't speak to its current functionality, but I do know Kubernetes has a lot of users on AWS, so it's plausible it works fine by now.
You can also run your own Ingress Controller, several of which have support for IP white/black listing, authentication, SSL/TLS, all the fancy toys, if you'd prefer not to deal with the ELB headache.
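As a hedged example, assuming the NGINX ingress controller and its whitelist-source-range annotation (the hostname, CIDRs, and service name are placeholders):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: internal-webapp    # hypothetical name
  annotations:
    # only intranet source ranges may reach this app
    nginx.ingress.kubernetes.io/whitelist-source-range: "10.0.0.0/8,192.168.0.0/16"
spec:
  rules:
  - host: internal.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: internal-webapp
            port:
              number: 80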
If you're not already considering it, Calico SDN has support for in-cluster networking policies, so you could also apply an extra level of locked-down-ness to ensure no Internet-facing app breaks out of its allowed network path; effectively, security groups moving down into the cluster.
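A minimal sketch of that idea using the standard Kubernetes NetworkPolicy API (which Calico enforces); the namespace name is made up:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: intranet-only      # hypothetical name
  namespace: intranet      # hypothetical namespace holding the internal apps
spec:
  podSelector: {}          # applies to every pod in the namespace
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector: {}      # allow traffic only from pods in this same namespace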

Scalability of kube-proxy

I have encountered a scalability problem when trying out the Kubernetes cluster. To simplify the topology in my test machine, the NodePort type is used to expose the individual services externally. The bare-metal machine hosting the node and master is RHEL 7 with 24 CPUs and 32G of RAM; I don't yet have a dedicated load balancer or cloud-provider-like infrastructure. A snippet of the service definition looks just like the below:
"spec": {
"ports": [{
"port": 10443,
"targetPort": 10443,
"protocol": "TCP",
"nodePort": 30443
} ],
"type": "NodePort",
This way, the application is accessible via https://[node_machine]:30443/[a_service]
Each such service is only backed by one Pod. Ideally I would want to have several services deployed on the same node (but using different NodePorts), all running concurrently.
Things were working well until it became evident that, for a similar workload, increasing the number of services deployed (and therefore backend pods as well) makes the applications degrade in performance. Surprisingly, when breaking down the service loading time, I noticed a dramatic degradation in 'Connection Time', which seems to indicate there is a slowdown somewhere in the 'network' layer. Please note that the load isn't high enough to drive much of the CPU on the node yet. I read about the shortcomings in the doc, but I'm not sure if what I hit is exactly the limitation of the kube-proxy/Service described there.
The questions are:
Is there any suggestion on how to make it more scalable, i.e. to be able to support more services/Pods without sacrificing the applications' performance? The NodePort type is the easiest way to set up the 'public' address for our services, but is there any limitation on scalability or performance if all services and Pods are set up this way?
Would there be any difference if we change the type to LoadBalancer?
"type": "LoadBalancer"
Furthermore, is there a benefit to having a dedicated load balancer or reverse proxy to improve scalability, e.g. HAProxy or the like, that routes traffic from outside to the backend Pods (or Services)? I noticed there's some work done for Nginx (darkgaro/kubernetes-reverseproxy) - unfortunately the doc seems incomplete and there's no concrete example. In some of the other threads folks talked about Vulcan - is it the recommended LB tool for Kubernetes?
Your recommendation and help are highly appreciated!
Hello, I am kinda new to Kubernetes, but I have similar questions and concerns. I will try to answer some of them or redirect you to the relevant sections of the user guide.
If you are deploying Kubernetes on a non-cloud-enabled provider (for example Vagrant, a local setup, etc.), then some features are not currently offered or automated by the platform for you.
One of those things is the 'LoadBalancer' type of Service. The automatic provisioning and assignment of a PUBLIC IP to the service (acting as a load balancer) currently happens only on platforms like Google Container Engine.
See issue here and here.
The official documentation states: 'On cloud providers which support external load balancers, setting the type field to "LoadBalancer" will provision a load balancer for your Service.'
Currently an alternative is being developed and documented (see here), using HAProxy.
Maybe in the near future Kubernetes will eventually support that kind of feature on all the platforms where it can be deployed and operated, so always check their updated features.
What you are referring to as performance degradation is most probably due to the way the PublicIP (NodePort from version 1.0 onwards) feature works. With the NodePort service type, Kubernetes assigns a port on ALL nodes of the cluster for this kind of service. The kube-proxy then intercepts the calls to these ports and forwards them to the actual service, etc.
An example of using HAProxy to solve the very same problem can be found here.
Hope that helped a bit.
I'm facing the same problems. It seems that the internal kube-proxy is not intended to be an external load balancer. More specifically, we wanted to set up timeouts on kube-proxy, do retries, etc.
I've found this article which describes similar issues. It recommends looking at Vulcan, as it internally uses etcd, and the direction of that project is probably to provide a fully featured LB for k8s in the future.