How do I create an internal gateway using Istio? - kubernetes

Currently, we successfully setup Istio to create a couple ingress-gateways like api.example.com and app.example.com, that route traffic to a variety of services with destination rules, etc. In addition to this, we would love to use Istio's features for internal-only APIs, but we are unsure of how to set something like this up. Is it possible to use Istio's Gateway and VirtualServices CRDs to route traffic without exiting the cluster? If so, how would we go about setting that up?

Istio gateways are for traffic entering or leaving the cluster. For traffic inside the cluster you should not use ingress/egress gateways. If you have configured Istio in the cluster to create a service mesh, you already get all of these benefits, because Istio injects an Envoy sidecar for every service inside the cluster; it is these sidecars that provide the benefits of the mesh. When you use an ingress or egress gateway you are actually using an Envoy proxy deployed as the ingress or egress gateway. You can use virtual services, destination rules, service entries, sidecars, etc. without a gateway, because at that point you are using the sidecars deployed alongside your application pods.
Here is an example of virtual service:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - match:
    - headers:
        end-user:
          exact: jason
    route:
    - destination:
        host: reviews
        subset: v2
  - route:
    - destination:
        host: reviews
        subset: v3
The virtual service hostname can be an IP address, a DNS name, or, depending on the platform, a short name (such as a Kubernetes service short name) that resolves, implicitly or explicitly, to a fully qualified domain name (FQDN). You can also use wildcard (”*”) prefixes, letting you create a single set of routing rules for all matching services. Virtual service hosts don’t actually have to be part of the Istio service registry, they are simply virtual destinations. This lets you model traffic for virtual hosts that don’t have routable entries inside the mesh.
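Note that the v2 and v3 subsets referenced above are not defined by the VirtualService itself; they need a matching DestinationRule. A minimal sketch, assuming the reviews pods carry version: v2 and version: v3 labels as in the bookinfo sample:
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: reviews
spec:
  host: reviews
  subsets:
  - name: v2
    labels:
      version: v2
  - name: v3
    labels:
      version: v3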

I would add some things to Arghya Sadhu's answer.
I think my example in another post answers your question, specifically the virtual service gateways and hosts. That example needs an additional DestinationRule, since it uses subsets to route to the proper subset of nginx, and subsets are defined in a DestinationRule.
So, as an example, I would call something like internal-gateway/a or internal-gateway/b, and they would get routed to services A or B
I made something like that:
2 nginx pods -> 2 services -> virtual service
Deployment1
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx1
spec:
  selector:
    matchLabels:
      run: nginx1
  replicas: 1
  template:
    metadata:
      labels:
        run: nginx1
        app: frontend
    spec:
      containers:
      - name: nginx1
        image: nginx
        ports:
        - containerPort: 80
        lifecycle:
          postStart:
            exec:
              command: ["/bin/sh", "-c", "echo Hello nginx1 > /usr/share/nginx/html/index.html"]
Deployment2
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx2
spec:
  selector:
    matchLabels:
      run: nginx2
  replicas: 1
  template:
    metadata:
      labels:
        run: nginx2
        app: frontend2
    spec:
      containers:
      - name: nginx2
        image: nginx
        ports:
        - containerPort: 80
        lifecycle:
          postStart:
            exec:
              command: ["/bin/sh", "-c", "echo Hello nginx2 > /usr/share/nginx/html/index.html"]
Service1
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: frontend
spec:
  ports:
  - port: 80
    protocol: TCP
  selector:
    app: frontend
Service2
apiVersion: v1
kind: Service
metadata:
  name: nginx2
  labels:
    app: frontend2
spec:
  ports:
  - port: 80
    protocol: TCP
  selector:
    app: frontend2
Virtual Service
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: nginxvirt
spec:
  hosts:
  - nginx.default.svc.cluster.local
  - nginx2.default.svc.cluster.local
  http:
  - name: a
    match:
    - uri:
        prefix: /a
    rewrite:
      uri: /
    route:
    - destination:
        host: nginx.default.svc.cluster.local
        port:
          number: 80
  - name: b
    match:
    - uri:
        prefix: /b
    rewrite:
      uri: /
    route:
    - destination:
        host: nginx2.default.svc.cluster.local
        port:
          number: 80
The virtual service above works only internally, on the mesh gateway (since no gateways field is set, it applies to the sidecars inside the mesh).
You have 2 matches for 2 nginx services.
root@ubu1:/# curl nginx/a
Hello nginx1
root@ubu1:/# curl nginx/b
Hello nginx2
I would recommend checking the Istio documentation and reading about:
Gateways
Virtual Services
Destination Rules
And the Istio examples:
bookinfo
httpbin
So I can make up a DNS name or IP address that doesn't really exist
I think you misunderstood: the host must exist, just not necessarily in the mesh. For example, a database which is not in the mesh can still be used; you connect it to the mesh with, for example, a service entry.
There is an example with wikipedia in the Istio documentation, and a whole section on accessing external services.
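A minimal sketch of such a ServiceEntry, modeled on the Istio egress examples (the host and port here are only illustrations):
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: external-db
spec:
  hosts:
  - db.example.com    # illustrative external host
  location: MESH_EXTERNAL
  ports:
  - number: 5432
    name: tcp-db
    protocol: TCP
  resolution: DNS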
I hope this helps. Let me know if you have any more questions.

Related

How can I resolve connection issues between my frontend and backend pods in Minikube?

I am trying to deploy an application to Minikube. However I am having issues connecting the frontend pod to the backend pod.
Each Deployment has a ClusterIP service and a NodePort service.
I access the frontend via browser, executing the command: minikube service frontend-entrypoint. When the frontend tries to query the backend it requests the URL: http://fastapi-cluster-ip-service:8000/api/v1/baseline/building_type?building_type=commercial, but the status response is: (failed)net::ERR_EMPTY_RESPONSE.
If I access the frontend via cmd, executing the command: kubectl exec -it react-deployment-xxxxxxxxxx-xxxxx -- sh, and execute inside it the command: curl -X GET "http://fastapi-cluster-ip-service:8000/api/v1/baseline/building_type?building_type=commercial" I get what I expect.
So, I understand that NodePorts are used to route external traffic to services inside the cluster by opening a specific port on each node in the cluster and forwarding traffic from that port to the service, and that ClusterIPs, on the other hand, are used to expose services only within the cluster and are not directly accessible from outside the cluster. What I don't understand is why, when reaching the frontend via the browser, it is not able to connect internally to the backend. Once I'm interacting with the frontend, I assumed I was inside the cluster...
I tried to expose the cluster using other services such as Ingress or LoadBalancer, but I didn't have success connecting to the frontend, so I rolled back to the NodePort solution.
References:
Kubernetes Guide - Deploying a machine learning app built with Django, React and PostgreSQL using Kubernetes
Exposing External-Facing Services In Kubernetes
Files:
component_backend.yaml:
apiVersion: v1
kind: Service
metadata:
  name: backend-entrypoint
spec:
  selector:
    component: fastapi
  ports:
  - name: http2
    port: 8000
    targetPort: 8000
  type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: fastapi-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      component: fastapi
  template:
    metadata:
      labels:
        component: fastapi
    spec:
      containers:
      - name: fastapi-container
        image: xxx/yyy:zzz
        ports:
        - containerPort: 8000
        env:
        - name: DB_USERNAME
          valueFrom:
            configMapKeyRef:
              name: app-variables
              key: DB_USERNAME
        [...]
      imagePullSecrets:
      - name: myregistrykey
---
apiVersion: v1
kind: Service
metadata:
  name: fastapi-cluster-ip-service
spec:
  type: LoadBalancer
  selector:
    component: fastapi
  ports:
  - port: 8000
    targetPort: 8000
  externalIPs:
  - <minikube ip>
componente_frontend.yaml:
apiVersion: v1
kind: Service
metadata:
  name: frontend-entrypoint
spec:
  selector:
    component: react
  ports:
  - name: http1
    port: 3000
    targetPort: 3000
  type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: react-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: react
  template:
    metadata:
      labels:
        app: react
    spec:
      containers:
      - name: react-container
        image: xxx/yyy:zzz
        ports:
        - containerPort: 3000
        env:
        - name: BASELINE_API_URL
          valueFrom:
            configMapKeyRef:
              name: app-variables
              key: BASELINE_API_URL
      imagePullSecrets:
      - name: myregistrykey
---
apiVersion: v1
kind: Service
metadata:
  name: react-cluster-ip-service
spec:
  type: LoadBalancer
  selector:
    component: react
  ports:
  - port: 3000
    targetPort: 3000
  externalIPs:
  - <minikube ip>
BASELINE_API_URL is declared with the backend ClusterIP service name (i.e., fastapi-cluster-ip-service).
ingress_service.yaml:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-service
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  rules:
  - host: sfs.baseline
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: frontend-entrypoint
            port:
              name: http1
      - path: /api
        pathType: Prefix
        backend:
          service:
            name: backend-entrypoint
            port:
              name: http2
You have to think about this: if you call the API through your browser or Postman from your host, they don't know anything about the URL inside your pod.
For production, an ingress configuration is needed, so you can call the API like:
https://api.mydomain.com/api/myroute
When you deploy the frontend, the paths inside your code should be created dynamically.
Define the paths with environment variables and use those inside your code.
On Kubernetes, define a ConfigMap with the paths and bind it to your container.
So when you call the frontend it will have the right paths; a sketch follows below.
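A minimal sketch of that idea, reusing the app-variables ConfigMap your Deployments already reference (the URL value is hypothetical; it must be one the browser can actually reach, e.g. the ingress host):
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-variables
data:
  # hypothetical value: point the frontend at the externally reachable API path
  BASELINE_API_URL: "http://mydomain.local/api"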
Your frontend, locally, can be reached with the IP address of your master node and the nodePort.
To avoid using the IP address you can create entries in your local hosts file:
nodesipaddress mydomain.local
nodesipaddress api.mydomain.local
So from your browser you can reach the frontend with mydomain.local:nodeportOfFrontend,
and your frontend code should call the backend with api.mydomain.local:nodeportOfApi.
If you enable Ingress inside your cluster and create an ingress resource in your deployment and a service of type LoadBalancer, then you can call the frontend and API without the nodePort.
If you run into issues with that, please post all your Kubernetes YAMLs: Deployments, Services, ConfigMap, and the Ingress if you decide to use it.
UPDATE
Check if you have ingress enabled on minikube
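A quick way to check and enable it (standard minikube commands):
minikube addons list | grep ingress
minikube addons enable ingress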
Modify your ingress resource:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  rules:
  - host: mydomain.local
    http:
      paths:
      - path: /
I'm not sure if minikube supports ClusterIP.
Change the type to LoadBalancer in your service:
apiVersion: v1
kind: Service
metadata:
  name: fastapi-cluster-ip-service
spec:
  type: LoadBalancer
  selector:
    component: fastapi
  ports:
  - port: 8000
    targetPort: 8000
If you verify your service:
kubectl get svc -o wide
kubectl describe services my-service
Look whether the external IP shows as pending; if so, add externalIPs:
ports:
- port: 8000
  targetPort: 8000
externalIPs:
- xxx.xxx.xxx.xxx # your minikube ip
However, I would first try to create a service of type NodePort and then access it with xxx.xxx.xxx.xxx:NodePort, as sketched below.
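A quick way to find that address and port, assuming the NodePort service is backend-entrypoint as in your YAML:
minikube ip
NODE_PORT=$(kubectl get svc backend-entrypoint -o jsonpath='{.spec.ports[0].nodePort}')
curl "http://$(minikube ip):${NODE_PORT}/api/v1/baseline/building_type?building_type=commercial"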
In the YAMLs you posted I only see the backend.
Keep in mind that whether you use Ingress or NodePort, your code must be adapted:
your frontend must call the API via the API's ingress domain or via xxx.xxx.xxx.xxx:NodePort / mydomain.local/api.
I think you are not understanding how the frontend part of a website works.
To display the website in your browser, you actually download all the required content from the server/pod where you host the frontend.
Once the frontend is downloaded and displayed in your browser, you then try to hit the backend directly from your browser/computer using either an IP address or a URL.
When using a URL, your browser will try to resolve the hostname using a DNS server.
So when you load a website in your browser you are not in the server/pod, and you cannot resolve the URL because that URL is not mapped to any IP address (or not to your server).
Also that is why it works when you go inside the pod using kubectl exec, because you are inside the network and you are using the internal DNS.
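You can see that internal DNS resolution directly with a throwaway pod (a common debugging trick; the busybox image is just one option):
kubectl run dns-test --rm -it --restart=Never --image=busybox:1.28 -- nslookup fastapi-cluster-ip-service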
As David said, to make this work, you need to call the backend from the frontend using some kind of ingress.
You will also need to create a DNS entry for your domain (if using a URL).
If not, you can directly use the IP of the pod/service.

Connect two pods to the same external access point

I have two pods (deployments) running on minikube. Each pod has the same port exposed (say 8081), but uses a different image. Now I want to configure things so that I can access either of the pods using the same external URL, in a load-balanced way. So what I tried to do is put the same matching label on both pods and map them to the same service, and then expose it through a NodePort. Example:
#pod1.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dep1
  labels:
    apps: dep1
    tier: cloud
spec:
  template:
    metadata:
      name: dep1-pod
      labels:
        app: deployment1
    spec:
      containers:
      - name: cont1
        image: cont1
        ports:
        - containerPort: 8081
  selector:
    matchLabels:
      app: deployment1
Now second pod
#pod2.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dep2
  labels:
    apps: dep2
    tier: cloud
spec:
  template:
    metadata:
      name: dep2-pod
      labels:
        app: deployment1
    spec:
      containers:
      - name: cont2
        image: cont2
        ports:
        - containerPort: 8081
  selector:
    matchLabels:
      app: deployment1
Now the service:
#service.yaml
apiVersion: v1
kind: Service
metadata:
  name: service1
spec:
  type: NodePort
  ports:
  - port: 8081
    targetPort: 8081
    nodePort: 30169
  selector:
    app: deployment1
Now this does not work as intended, as it refuses to connect to my IP:30169. However, I can connect if only one of the pods is deployed.
Now I know I can achieve this functionality using replicas and just one image, but in this case, I want to do this using 2 images. Any help is much appreciated.
You can use Ingress to achieve it.
Ingress exposes HTTP and HTTPS routes from outside the cluster to services within the cluster. Traffic routing is controlled by rules defined on the Ingress resource.
An Ingress may be configured to give Services externally-reachable URLs, load balance traffic, terminate SSL / TLS, and offer name-based virtual hosting.
An Ingress controller is responsible for fulfilling the Ingress, usually with a load balancer, though it may also configure your edge router or additional frontends to help handle the traffic.
An Ingress does not expose arbitrary ports or protocols. Exposing services other than HTTP and HTTPS to the internet typically uses a service of type Service.Type=NodePort or Service.Type=LoadBalancer.
In your situation, Ingress will forward traffic to your services using the same URL. It depends on which path you type: URL for the first Pod and URL/v2 for the second Pod. Of course you can change /v2 to something else.
To begin, you need to enable Ingress on minikube. You can do it using the command below. You can read more about it here.
minikube addons enable ingress
As the next step, you need to create an Ingress using a YAML file. Here is an example of how to do it step by step.
The Ingress YAML file looks as below.
As you can see in this configuration, you can access one Pod using URL, and traffic will be forwarded to the first service, attached to the first Pod. For the second Pod, using URL/v2, traffic will be forwarded to the second service, attached to the second Pod.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  rules:
  - host: hello-world.info
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web
            port:
              number: 8080
      - path: /v2
        pathType: Prefix
        backend:
          service:
            name: web2
            port:
              number: 8080
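The ingress above assumes two backing Services named web and web2, one per Deployment. A minimal sketch of what they could look like; the selectors are illustrative, and to route each path to a different Deployment each Deployment needs its own label (in your YAMLs both currently use app: deployment1):
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: deployment1    # illustrative: label of the first Deployment's pods
  ports:
  - port: 8080
    targetPort: 8081
---
apiVersion: v1
kind: Service
metadata:
  name: web2
spec:
  selector:
    app: deployment2    # illustrative: give the second Deployment a distinct label
  ports:
  - port: 8080
    targetPort: 8081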

Kubernetes Internal Service Axios NuxtJS

I'm trying to learn Kubernetes as I go and I'm currently trying to deploy a test application that I made.
I have 3 containers and each container is running on its own pod
Front end App (Uses Nuxtjs)
Backend API (Nodejs)
MongoDB
For the Front End container I have configured an External Service (LoadBalancer) which is working fine. I can access the app from my browser with no issues.
For the backend API and MongoDB I configured an Internal Service for each. The communication between Backend API and MongoDB is working. The problem that I'm having is communicating the Frontend with the Backend API.
I'm using the Axios module in Nuxtjs, and in the nuxtjs.config.js file I have set the Axios base URL to http://service-name:portnumber/. But that does not work; I'm guessing it's because the URL is being called from the client (browser) side and not from the server. If I change the Service type of the Backend API to LoadBalancer and configure an IP address and port number, and use that as my Axios URL, then it works. However, I was kind of hoping to keep the Backend-API service internal. Is it possible to call the Axios base URL from the server side and not from the client side?
Any help/guidance will be greatly appreciated.
Here is my Front-End YML File
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: mhov-ipp
  name: mhov-ipp
  namespace: mhov
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mhov-ipp
  template:
    metadata:
      labels:
        app: mhov-ipp
    spec:
      containers:
      - image: mhov-ipp:1.1
        name: mhov-ipp
        ports:
        - containerPort: 8080
        env:
        - name: NODE_ENV
          value: "development"
        - name: PORT
          value: "8080"
        - name: TITLE
          value: "MHOV - IPP - K8s"
        - name: API_URL
          value: "http://mhov-api-service:4000/"
---
apiVersion: v1
kind: Service
metadata:
  name: mhov-ipp-service
spec:
  selector:
    app: mhov-ipp
  type: LoadBalancer
  ports:
  - protocol: TCP
    port: 8082
    targetPort: 8080
    nodePort: 30600
Here is the backend YML File
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mhov-api-depl
  labels:
    app: mhov-api
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mhov-api
  template:
    metadata:
      labels:
        app: mhov-api
    spec:
      containers:
      - name: mhov-api
        image: mhov-api:1.0
        ports:
        - containerPort: 4000
        env:
        - name: mongoURI
          valueFrom:
            configMapKeyRef:
              name: mhov-configmap
              key: database_url
---
apiVersion: v1
kind: Service
metadata:
  name: mhov-api-service
spec:
  selector:
    app: mhov-api
  ports:
  - protocol: TCP
    port: 4000
    targetPort: 4000
What is ingress and how to install it
Your guess is correct. The frontend is running in the browser, and the browser "doesn't know" where the backend is or how to reach it. You have two options here:
expose the backend outside your cluster, as you did
use a more advanced solution such as ingress
This will move you forward and will require changing some configuration of your application, such as the URL, since the application will be exposed to "the internet" (not really, but you can do it using the cloud).
What is ingress:
Ingress is an API object which exposes HTTP and HTTPS routes from outside the cluster to services within the cluster. Traffic routing is controlled by rules defined on the Ingress resource.
The most common option is NGINX ingress; see their page, NGINX Ingress Controller.
Installation depends on the cluster type; however, I suggest using helm (if you're not familiar with helm, it's a template engine which uses charts to install and set up applications; there are quite a lot of already-created charts, e.g. ingress-nginx).
If you're using minikube, for example, it already has a built-in nginx-ingress which can be enabled as an addon, as shown below.
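For a helm-based installation, the usual steps are roughly the following (standard ingress-nginx chart; the release name is up to you), or on minikube you can simply enable the addon:
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install ingress-nginx ingress-nginx/ingress-nginx
# or, on minikube:
minikube addons enable ingress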
How to expose services using ingress
Once you have a working ingress, it's time to create rules for it.
What you need is an ingress which can reach both the frontend and the backend.
Example taken from official kubernetes documentation:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: simple-fanout-example
spec:
  rules:
  - host: foo.bar.com
    http:
      paths:
      - path: /foo
        pathType: Prefix
        backend:
          service:
            name: service1
            port:
              number: 4200
      - path: /bar
        pathType: Prefix
        backend:
          service:
            name: service2
            port:
              number: 8080
In this example, there are two different services available on different paths within the foo.bar.com hostname, and both services are within the cluster. There is no need to expose them outside the cluster, since traffic will be directed to them through the ingress.
Actual solution (how to approach)
This is a very similar configuration, which was fixed and started working as expected. It is my own answer, so it is safe to share :)
As you can see, the OP faced the same issue: the frontend was accessible while the backend wasn't.
Feel free to use anything out of that answer/repository.
Useful links:
kubernetes ingress

Kubernetes, Loadbalancing, and Nginx Ingress - AKS

Stack:
Azure Kubernetes Service
NGINX Ingress Controller - https://github.com/kubernetes/ingress-nginx
AKS Loadbalancer
Docker containers
My goal is to create a K8s cluster that will allow me to use multiple pods, under a single IP, to create a microservice architecture. After working through tons of tutorials and documentation, I'm not having any luck with my end goal. I got to the point of being able to access a single deployment using the LoadBalancer, but introducing the ingress has not been successful so far. The services are separated into their respective files for readability and ease of control.
Additionally, the Ingress Controller was added to my cluster as described in the installation instructions using: kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.35.0/deploy/static/provider/cloud/deploy.yaml
LoadBalancer.yml:
apiVersion: v1
kind: Service
metadata:
  name: backend
spec:
  loadBalancerIP: x.x.x.x
  selector:
    app: ingress-service
    tier: backend
  ports:
  - name: "default"
    port: 80
    targetPort: 80
  type: LoadBalancer
IngressService.yml:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress-service
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - http:
      paths:
      - path: /api
        backend:
          serviceName: api-service
          servicePort: 80
api-deployment.yml
apiVersion: v1
kind: Service
metadata:
  name: api-service
spec:
  selector:
    app: api
  ports:
  - port: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-deployment
spec:
  selector:
    matchLabels:
      app: api
      tier: backend
      track: stable
  replicas: 1
  template:
    metadata:
      labels:
        app: api
        tier: backend
        track: stable
    spec:
      containers:
      - name: api
        image: image:tag
        ports:
        - containerPort: 80
        imagePullPolicy: Always
      imagePullSecrets:
      - name: SECRET
The API in the image is exposed on port 80 correctly.
After applying each of the above yml services and deployments, I attempt a web request to one of the api resources via the LoadBalancer's IP and receive only a timeout on my requests.
Found my answer after hunting around enough. Basically, the problem was that the Ingress Controller has a Load Balancer built into the yaml, as mentioned in comments above. However, the selector for that LoadBalancer requires marking your Ingress service as part of the class. Then that Ingress service points to each of the services attached to your pods. I also had to make a small modification to allow using a static IP in the provided load balancer.
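A sketch of the kind of change described, assuming the ingress-nginx cloud deploy.yaml that was applied above: the Ingress keeps its kubernetes.io/ingress.class: nginx annotation, and the controller's own LoadBalancer Service gets the static IP pinned (only the relevant fields are shown; everything else in that Service stays as shipped in the manifest):
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  type: LoadBalancer
  loadBalancerIP: x.x.x.x   # the pre-provisioned static public IP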

Problem using traefik as load balancer in Kubernetes

The situation is that I have two k8s services which are connected to each other. Both are Flask servers. The connection between them is as follows: if someone makes a POST to the first one, it takes the text input and POSTs it to the second server, which adds some more text to the original text posted by the user; finally, the two texts together are returned to the first server, which returns the final text to the user.
To allow this connection between my k8s services (called master and slave, whose matchLabels are app-master and app-slave) I have the following NetworkPolicy:
kind: NetworkPolicy
apiVersion: extensions/v1beta1
metadata:
  name: master-to-slave
  namespace: innovation
spec:
  podSelector:
    matchLabels:
      app: app-slave
  ingress:
  - ports:
    - port: 5000
      protocol: TCP
    - port: 5001
      protocol: TCP
  - from:
    - namespaceSelector:
        matchLabels:
          app: app-master
To make a curl from outside the tenant I have to use traefik, because I am working in a tenant which already exposes traefik as a NodePort, so I can NOT expose my master service as a NodePort or convert it to kind LoadBalancer. The ingress I have for this application is the following:
kind: Ingress
apiVersion: extensions/v1beta1
metadata:
  name: ingress-innovation
  namespace: innovation
  annotations:
    traefik.frontend.rule.type: PathPrefixStrip
spec:
  rules:
  - http:
      paths:
      - path: /master
        backend:
          serviceName: master
          servicePort: 5000
      - path: /slave
        backend:
          serviceName: slave
          servicePort: 5001
I also have a DNS entry which allows me to make requests to an address (https://name_in_the_DNS) instead of making the requests to the IP of my tenant. The problem is that when I try the following request:
curl https://name_in_the_DNS/master -X POST -d texto
I get an error (Gateway Timeout), while if I use "kubectl port-forward" the app works as expected. Any idea how to solve this issue? I suppose it has something to do with the NetworkPolicy, because I have other applications inside the tenant and the curl requests work for them.
Thanks in advance!
For the services and deployments YAMLs, see: Could two cluster IP services be connected in Kubernetes?
The problem is already solved. The thing is that we have traefik in another namespace, so another NetworkPolicy allowing communication from any namespace was missing. The YAML which fixed the issue is as follows:
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: all-to-all
  namespace: innovation
spec:
  podSelector: {} # all pods in the namespace (innovation)
  ingress:
  - from:
    - namespaceSelector: {} # from all namespaces, including the one in which the traefik ingress is located
  policyTypes:
  - Ingress
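A quick way to confirm the policy is in place (plain kubectl, nothing traefik-specific):
kubectl -n innovation get networkpolicy
kubectl -n innovation describe networkpolicy all-to-all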