Istio on Kubernetes: pod-to-service communication doesn't work

I have two deployments (A and B), each exposing a ClusterIP Service. Before deploying Istio, I was able to communicate from a pod in A to any of B's pods via its Service (e.g. http://B.default.svc.cluster.local/dosomecrazystuff).
After deploying Istio (1.0.5), I get "connection refused" for http://B.default.svc.cluster.local when calling it from a pod in deployment A.
What is the default routing policy in Istio? I don't need any clever load balancing or version-based routing, just straightforward communication from A to B (the same as I would have without Istio).
What is the absolute minimum configuration required to make this work?

Well, it turned out to be a local issue with my MicroK8s deployment. On EKS and on another MicroK8s instance I am able to communicate as desired without anything special.
So, the answer is: no special configuration is required to make it work; pods are supposed to be able to communicate through Services just as they do without Istio.
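For reference, a quick way to verify this from inside the cluster (a sketch; it assumes the deployment names from the question and a container image that ships curl):

kubectl exec deploy/A -- curl -sv http://B.default.svc.cluster.local/dosomecrazystuff

If this succeeds on one cluster and fails on another with the same manifests, the problem is in the cluster setup rather than in the Istio configuration.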

Related

What do I need to provide to make calls from my k8s cluster?

I have a Kubernetes cluster with my application running inside it, and I also have a host machine that my application needs to access.
All the infrastructure is located inside a VPN network.
How can I set up egress to let my application send requests from the cluster to this host machine? (Are Kubernetes NetworkPolicies an appropriate way to handle this, and would they actually solve the problem?)
(Sorry if this is too obvious a question; I haven't found any working solution for this yet.)
I'm not sure if I get your question right, but by default no network connectivity is blocked by Kubernetes. Assuming you haven't set up any NetworkPolicies, all ingress and egress communication is open and nothing will block access, at least from the Kubernetes perspective.
However, if you have only deployed your application but haven't exposed it yet (with an Ingress or a Service of type LoadBalancer), you will not be able to reach it from outside the cluster. If you're running on-prem you will need to install MetalLB or a similar component that lets you create Services of type LoadBalancer. The same goes for Ingress, as the Ingress controller needs some form of external access in the first place.
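If you do later introduce NetworkPolicies and want to keep that cluster-to-host traffic allowed, an egress rule could look roughly like this sketch (the app label, host IP, and port are placeholders, not values from the question):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-egress-to-vpn-host
spec:
  podSelector:
    matchLabels:
      app: my-app              # placeholder: selects the pods making the calls
  policyTypes:
    - Egress
  egress:
    - to:
        - ipBlock:
            cidr: 10.0.0.5/32  # placeholder: the VPN host's address
      ports:
        - protocol: TCP
          port: 443            # placeholder: the port your app talks to

Keep in mind that as soon as any egress policy selects a pod, all other egress from that pod is denied, so you may also need to allow DNS (UDP port 53) explicitly.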

kind cluster: how to access a service using a LoadBalancer

I am deploying a k8s cluster locally using kind. The image gets deployed fine, and when I view the list of services I see the following:
The service I'm trying to access is chatt-service, and its EXTERNAL-IP is stuck in pending. I know minikube has a command that makes such a service accessible, but how do I do it on a kind cluster?
For the LoadBalancer service type you will not be able to get a public IP when running locally; you would need to run in a cloud provider that provisions the LB for you (like an ALB in AWS or a Load Balancer in DigitalOcean). However, you can access this service locally using kubectl port-forward:
kubectl port-forward service/chatt-service 3002:3002
There are additional options for getting LoadBalancer Services to work under a kind cluster (though port forwarding is the simplest way):
https://kind.sigs.k8s.io/docs/user/loadbalancer/
First way: you can also expose pods and services using extra port mappings, which means manually setting ports in the kind cluster config, as sketched below.
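A sketch of such a cluster config (the 30080/8080 port pair is a placeholder; the containerPort must match the NodePort your Service ends up using):

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraPortMappings:
  - containerPort: 30080   # NodePort inside the kind node
    hostPort: 8080         # port exposed on your local machine
    protocol: TCP

Create the cluster with kind create cluster --config cluster-config.yaml, and the service becomes reachable at localhost:8080.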
And a second way (not actually a LoadBalancer solution): you may want to check out the kind Ingress guide as a cross-platform workaround.

Question about publishing a service in Kubernetes

My cluster has one master and two slaves (not on any cloud platform). I created a deployment with 2 replicas, so each slave runs one pod; the image I'm running is tensorflow-jupyter. Then I created a NodePort service for this deployment, thinking I could use these two pods at the same time, but I was wrong.
tensorflow-jupyter requires the token it generates to log in. Everything is fine with only 1 pod, but with 2 or more replicas I get a server error after login, and it logs me out by itself when I press F5; after that I can't use the token to log in anymore. A similar thing happens with WordPress, too.
I think I shouldn't be using the NodePort type for this, but I don't know whether another Service type can solve the problem. I don't have a load balancer to try, and I don't know how to use ExternalName.
Is there any way to expose a service for a deployment with 2 or more replicas (one pod per slave)? Or can I only create many deployments with 1 pod each and then expose a separate service for each deployment?
It seems the application you're trying to deploy requires sticky-session support. This is not supported out of the box with a NodePort Service; you have to expose your application through an Ingress resource managed by an Ingress Controller in order to take advantage of its reverse-proxy capabilities (in this case, sticky sessions).
I'm not suggesting the sessionAffinity=ClientIP Service option, since it's only allowed for ClusterIP Service resources and, according to your question, the application has to be accessed from outside the cluster.
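For illustration, with the ingress-nginx controller sticky sessions are enabled through annotations; a minimal sketch (hostname, Service name, and port are placeholders):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: jupyter
  annotations:
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "route"
spec:
  rules:
  - host: jupyter.example.com        # placeholder hostname
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: tensorflow-jupyter # placeholder Service name
            port:
              number: 8888           # Jupyter's default port

With the affinity annotation, the controller sets a cookie on the first response and keeps routing that browser session to the same pod, so the login token stays valid.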

Deploying a mobile app backend with Kubernetes

I need some advice on how to deploy a high-traffic mobile app back end using Kubernetes. The deployment should at least support HA. We have plans to run a DR site as well, but DR is out of the scope of this question.
We currently use hardware load balancers to route incoming traffic to different IP addresses attached to different boxes. Each such box runs an nginx instance as a reverse proxy, which also acts as the HTTPS terminator. After HTTPS termination, traffic is directed to an Apache web server. Each box has one Apache server receiving all traffic from the nginx instance running on the same box.
We want to introduce Kubernetes to this setup so that we can utilize the boxes better. Our traffic patterns fluctuate heavily, and we believe Kubernetes can help us use the hardware more efficiently.
My current plan is as follows:
-- Keep the hardware load balancer to route incoming traffic to different boxes. (This may not be needed, but getting rid of the HLB could become very political.)
-- Run a Kubernetes cluster utilizing all available boxes.
-- Pack Apache + our app as a Docker image and run this image in containers, which in turn run inside pods in the Kubernetes cluster.
-- Set up an Ingress to accept external traffic, do HTTPS termination, and load balance to the above pods; a sketch of such an Ingress follows below. A simple round-robin or random load-balancing algorithm is fine, as our back ends are stateless.
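For that last point, a minimal sketch of an Ingress that terminates HTTPS might look like this (hostname, Secret, and Service names are placeholders; the certificate and key go into the referenced TLS Secret):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: mobile-backend          # placeholder
spec:
  tls:
  - hosts:
    - api.example.com           # placeholder hostname
    secretName: api-tls         # Secret of type kubernetes.io/tls with cert + key
  rules:
  - host: api.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: backend-svc   # placeholder Service in front of the app pods
            port:
              number: 80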
Does this sound right? Are there any alternatives? In the above case, where does the ingress controller run?
Your plan seems right. You can pack Apache together with the code, but it would be better to keep them separate so that they can still talk to each other while a version upgrade of one doesn't depend on the other.
Also, the hardware load balancer will pass the traffic on to the Ingress, which will in turn bring it into the k8s cluster and eventually to the pods.
The Ingress controller runs inside the cluster. I guess you're looking to run Kubernetes on-premise with your existing hardware. To use the existing hardware load balancer outside of Kubernetes, you could run the nginx Ingress controller as a DaemonSet, so that there'd be one instance on each node, and expose it via hostPort so that each is exposed on the same port; a rough sketch follows below. Or, if there are lots of nodes, you'd want to just use a Deployment. Then you would want to use NodePort so that Kubernetes would send the traffic to a node where an Ingress controller pod runs.
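A rough sketch of the DaemonSet-with-hostPort approach (names and the image tag are placeholders; a real ingress-nginx deployment also needs RBAC, config flags, and more, so treat this as an outline only):

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx-ingress-controller
spec:
  selector:
    matchLabels:
      app: nginx-ingress
  template:
    metadata:
      labels:
        app: nginx-ingress
    spec:
      containers:
      - name: controller
        image: registry.k8s.io/ingress-nginx/controller:v1.9.4  # pick a current tag
        ports:
        - containerPort: 80
          hostPort: 80     # listen on port 80 of every node
        - containerPort: 443
          hostPort: 443    # and on port 443 for TLS

The hardware load balancer can then simply target every node's IP on ports 80/443.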
Another alternative would be to expose the nginx Ingress controller through a LoadBalancer Service; to do that you'd need to integrate your load balancer with Kubernetes using something like https://hackernoon.com/metallb-a-load-balancer-for-bare-metal-kubernetes-clusters-f7320fde52f2
Alternatively, you wouldn't necessarily have to use an Ingress. You could just run nginx in the cluster and expose it via NodePort.
It's not clear to me that you'd need the Apache HTTP server in your container; I guess it depends on how you are using it currently.

How to access a Kubernetes pod in a local cluster?

I have set up an experimental local Kubernetes cluster with one master and three slave nodes. I have created a deployment for a custom service that listens on port 10001. The goal is to access an example endpoint /hello at a stable IP/hostname, e.g. http://<master>:10001/hello.
After deploying the deployment, the pods are created fine and are accessible through their cluster IPs.
I understand the solution for cloud providers is to create a load balancer service for the deployment, so that you can just expose a service. However, this is apparently not supported for a local cluster. Setting up Ingress seems overkill for this purpose. Is it not?
It seems more like kubectl proxy is the way to go. However, when I run kubectl proxy --port <port> on the master node, I can access http://<master>:<port>/api/..., but not the actual pod.
There are many related questions (e.g. How to access services through kubernetes cluster ip?), but no (accepted) answers. The Kubernetes documentation on the topic is rather sparse as well, so I am not even sure about what is the right approach conceptually.
I am hence looking for a straightforward solution and/or a good tutorial. It seems to be a very typical use case that nevertheless lacks a clear path.
If an Ingress Controller is overkill for your scenario, you may want to try using a service of type NodePort. You can specify the port, or let the system auto-assign one for you.
A NodePort Service exposes your service on the same port on every node in your cluster. If you have network access to your nodes, you can reach your service at any node's IP on the port specified in the configuration, for example:
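A minimal sketch for the service from the question (the app label and the nodePort value are placeholders; nodePort must fall in the default 30000-32767 range, or you can omit it to have one auto-assigned):

apiVersion: v1
kind: Service
metadata:
  name: hello-service      # placeholder name
spec:
  type: NodePort
  selector:
    app: hello             # placeholder: must match your deployment's pod labels
  ports:
  - port: 10001            # cluster-internal port
    targetPort: 10001      # the container port from the question
    nodePort: 30001        # placeholder external port opened on every node

After applying this, http://<master>:30001/hello should answer (as would any other node's IP).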
Obviously, this does not load balance between nodes. You can add an external service to help you do this if you want to emulate what a real load balancer would do. One simple option is to run something like rocky-cli.
An Ingress is probably your simplest bet.
You can set up the creation of an nginx Ingress controller quite simply; here's a guide for that. Note that this setup uses a DaemonSet, so there is an Ingress controller on each node. It also uses the hostPort config option, so the Ingress controller will listen on the node's IP, instead of a virtual service IP that would not be stable.
Now you just need to get your HTTP traffic to any one of your nodes. You'll probably want to define an external DNS entry for each Service, with each entry pointing to the IPs of your nodes (i.e. multiple A/AAAA records). The Ingress will disambiguate and route inside the cluster based on the HTTP hostname, using name-based virtual hosting.
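A sketch of such a name-based rule (hostname and Service name are placeholders):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-ingress           # placeholder
spec:
  rules:
  - host: hello.example.com     # the DNS name pointing at your node IPs
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: hello-service # placeholder Service in front of your pods
            port:
              number: 10001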
If you need to expose non-HTTP services, this gets a bit more involved, but you can look in the nginx ingress docs for more examples (e.g. UDP).