Istio using IPv6 instead of IPv4 - kubernetes

I am using Kubernetes with Minikube on a Windows 10 Home machine to "host" a gRPC service. I am working on getting Istio working in the cluster and have been running into the same issue over and over and I cannot figure out why. The problem is that once everything is up and running, the Istio gateway uses IPv6, seemingly for no reason at all. IPv6 is even disabled on my machine (via regedit) and network adapters. My other services are accessible from IPv4. Below are my steps for installing my environment:
minikube start
kubectl create namespace abc
kubectl apply -f service.yml -n abc
kubectl apply -f gateway.yml
istioctl install --set profile=default -y
kubectl label namespace abc istio-injection=enabled
Nothing is accessible over the network at this point, until I run the following in its own terminal:
minikube tunnel
Now I can access the gRPC service directly over IPv4 at 127.0.0.1:5000. However, the gateway is inaccessible at 127.0.0.1:443 and is instead only reachable at [::1]:443.
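For reference, a quick way to check which address family is actually listening (run from PowerShell; the Service name is the Istio default shown further below):
# show sockets on port 443 (127.0.0.1 vs [::1])
netstat -an | findstr ":443"
# confirm the Service itself is IPv4 single-stack
kubectl get svc istio-ingressgateway -n istio-system -o yaml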
Here is the service.yml:
apiVersion: v1
kind: Service
metadata:
  name: account-grpc
spec:
  ports:
  - name: grpc
    port: 5000
    protocol: TCP
    targetPort: 5000
  selector:
    service: account
    ipc: grpc
  type: LoadBalancer
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    service: account
    ipc: grpc
  name: account-grpc
spec:
  replicas: 1
  selector:
    matchLabels:
      service: account
      ipc: grpc
  template:
    metadata:
      labels:
        service: account
        ipc: grpc
    spec:
      containers:
      - image: account-grpc
        name: account-grpc
        imagePullPolicy: Never
        ports:
        - containerPort: 5000
Here is the gateway.yml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 443
      name: grpc
      protocol: GRPC
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: virtual-service
spec:
  hosts:
  - "*"
  gateways:
  - gateway
  http:
  - match:
    - uri:
        prefix: /account
    route:
    - destination:
        host: account-grpc
        port:
          number: 5000
And here are the results of kubectl get service istio-ingressgateway -n istio-system -o yaml:
apiVersion: v1
kind: Service
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: ...
  creationTimestamp: "2021-08-27T01:21:21Z"
  labels:
    app: istio-ingressgateway
    install.operator.istio.io/owning-resource: unknown
    install.operator.istio.io/owning-resource-namespace: istio-system
    istio: ingressgateway
    istio.io/rev: default
    operator.istio.io/component: IngressGateways
    operator.istio.io/managed: Reconcile
    operator.istio.io/version: 1.11.1
    release: istio
  name: istio-ingressgateway
  namespace: istio-system
  resourceVersion: "4379"
  uid: b4db0e2f-0f45-4814-b187-287acb28d0c6
spec:
  clusterIP: 10.97.4.216
  clusterIPs:
  - 10.97.4.216
  externalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - name: status-port
    nodePort: 32329
    port: 15021
    protocol: TCP
    targetPort: 15021
  - name: http2
    nodePort: 31913
    port: 80
    protocol: TCP
    targetPort: 8080
  - name: https
    nodePort: 32382
    port: 443
    protocol: TCP
    targetPort: 8443
  selector:
    app: istio-ingressgateway
    istio: ingressgateway
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
    - ip: 127.0.0.1

Changing the Gateway port number from 443 to 80 resolved my issue. The problem was that my gRPC service was not using HTTPS. I will return if I have trouble once I change the service to use HTTPS.
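For anyone hitting the same thing, the amended gateway.yml looks roughly like this (a sketch of the change described above, not a verified config; only the server port differs from the original):
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80   # was 443; the plain-text gRPC service goes over the HTTP port
      name: grpc
      protocol: GRPC
    hosts:
    - "*"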

Related

Access pod from another pod with kubernetes url

I have two pods, each created with a Deployment and a Service. My problem is as follows: the pod "my-gateway" accesses the URL "adm-contact" at "http://127.0.0.1:3000/adm-contact", which should reach another pod called "my-adm-contact". How can I make this work? I tried the following command: kubectl port-forward my-gateway-5b85498f7d-5rwnn 3000:3000 8879:8879 but it gives this error:
E0526 21:56:34.024296 12428 portforward.go:400] an error occurred forwarding 3000 -> 3000: error forwarding port 3000 to pod 2d5811c20c3762c6c249a991babb71a107c5dd6b080c3c6d61b4a275b5747815, uid : exit status 1: 2022/05/27 00:56:35 socat[2494] E connect(16, AF=2 127.0.0.1:3000, 16): Connection refused
Note that the images built from the Dockerfiles use EXPOSE 3000 8879.
Here are my YAMLs:
Deployment my-adm-contact:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-adm-contact
  labels:
    app: my-adm-contact
spec:
  selector:
    matchLabels:
      run: my-adm-contact
  template:
    metadata:
      labels:
        run: my-adm-contact
    spec:
      containers:
      - name: my-adm-contact
        image: my-contact-adm
        imagePullPolicy: Never
        ports:
        - containerPort: 8879
          hostPort: 8879
          name: admcontact8879
        readinessProbe:
          httpGet:
            path: /adm-contact
            port: 8879
          initialDelaySeconds: 30
          periodSeconds: 10
          failureThreshold: 6
Service my-adm-contact:
apiVersion: v1
kind: Service
metadata:
  name: my-adm-contact
  labels:
    run: my-adm-contact
spec:
  selector:
    app: my-adm-contact
  ports:
  - name: 8879-my-adm-contact
    port: 8879
    protocol: TCP
    targetPort: 8879
  type: LoadBalancer
status:
  loadBalancer: {}
Deployment my-gateway:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-gateway
  labels:
    app: my-gateway
spec:
  selector:
    matchLabels:
      run: my-gateway
  template:
    metadata:
      labels:
        run: my-gateway
    spec:
      containers:
      - name: my-gateway
        image: api-gateway
        imagePullPolicy: Never
        ports:
        - containerPort: 3000
          hostPort: 3000
          name: home
        #- containerPort: 8879
        #  hostPort: 8879
        #  name: adm
        readinessProbe:
          httpGet:
            path: /adm-contact
            port: 8879
            path: /
            port: 3000
          initialDelaySeconds: 30
          periodSeconds: 10
          failureThreshold: 6
Service my-gateway:
apiVersion: v1
kind: Service
metadata:
  name: my-gateway
  labels:
    run: my-gateway
spec:
  selector:
    app: my-gateway
  ports:
  - name: 3000-my-gateway
    port: 3000
    protocol: TCP
    targetPort: 3000
  - name: 8879-my-gateway
    port: 8879
    protocol: TCP
    targetPort: 8879
  type: LoadBalancer
status:
  loadBalancer: {}
What k8s-cluster environment are you running this in? I ask because the service type of LoadBalancer is a special kind: at pod initialisation your cloud provider's admission controller will spot this and add in a load-balancer config. See https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer
If you're not deploying this in a suitable cloud environment, your services won't do anything.
I had a quick look at your SO profile and - sorry if this is presumptuous, I don't mean to be - it looks like you're relatively new to k8s. You shouldn't need to do any port-forwarding/kubectl proxying, and this should be a lot simpler than you might think.
When you create a service k8s will 'create' a DNS entry for you which points to the pod(s) specified by your selector.
I think you're trying to reach a setup where code running in my-gateway pod can connect to http://adm-contact on port 3000 and reach a listening service on the adm-contact pod. Is that correct?
If so, the outline solution is to expose tcp/3000 in the adm-contact pod, and create a service called adm-contact that has a selector for adm-contact pod.
This is a sample manifest I've just created which runs nginx and then creates a service for it, allowing any pod on the cluster to connect to it e.g. curl http://nginx-service.default.svc In this example I'm exposing port 80 because I didn't want to have to modify the nginx config, but the principle is the same.
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: nginx
  name: nginx
spec:
  containers:
  - image: nginx
    imagePullPolicy: Always
    name: nginx
    ports:
    - containerPort: 80
      protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: nginx
  type: ClusterIP
The k8s docs on Services are pretty helpful if you want more https://kubernetes.io/docs/concepts/services-networking/connect-applications-service/
A service can be reached by its own name from pods in its namespace:
so a service foo in namespace bar can be reached at http://foo from a pod in namespace bar.
From other namespaces that service is reachable at http://foo.bar.svc.cluster.local. Change out the service name and namespace for your use case.
k8s dns is explained here in the docs:
https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/
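A quick way to see that resolution in action from a throwaway pod inside the cluster (the service names here match the ones above; this is just a sketch):
# start a temporary busybox pod with an interactive shell
kubectl run dns-test --rm -it --image=busybox:1.36 --restart=Never -- sh
# then, inside the pod:
nslookup my-adm-contact.default.svc.cluster.local
nslookup my-gateway.default.svc.cluster.local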
I have taken the YAML you provided and assembled it here.
From another comment I see the URL you're trying to connect to is: http://gateway-service.default.svc.cluster.local:3000/my-adm-contact-service
The ability to resolve service names to pods only functions inside the cluster: coredns (a k8s pod) is the part which recognises when a service has been created and what IP(s) it's available at.
So another pod in the cluster e.g. one created by kubectl run bb --image=busybox -it -- sh would be able to resolve the command ping gateway-service, but pinging gateway-service from your desktop will fail because they're not both seeing the same DNS.
The api-gateway container will be able to make a connect to my-adm-contact-service on ports 3000 or 8879, and the my-adm-contact container will equally be able to connect to gateway-service on port 3000 - but only when those containers are running inside the cluster.
I think you're trying to access this from outside the cluster, so now that the port/service types are correct you could re-try kubectl port-forward svc/gateway-service 3000:3000. This will let you connect to 127.0.0.1:3000, and the traffic will be routed to port 3000 on the api-gateway container.
If you need to proxy to the other my-adm-contact-service then you'll have to issue similar kubectl commands in other shells, one per service:port combination. For completeness, if you wanted to route traffic from your local machine to all three container/port sets, you'd run:
# format kubectl port-forward svc/name src:dest (both TCP)
kubectl port-forward svc/gateway-service 3000:3000
kubectl port-forward svc/my-adm-contact-service 8879:8879
kubectl port-forward svc/my-adm-contact-service 3001:3000 #NOTE the changed local port, because localhost:3000 is already used
You will need a new shell for each kubectl, or run it as a background job.
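If you would rather keep a single shell, a rough sketch of running the same commands as background jobs:
kubectl port-forward svc/gateway-service 3000:3000 &
kubectl port-forward svc/my-adm-contact-service 8879:8879 &
kubectl port-forward svc/my-adm-contact-service 3001:3000 &   # note the changed local port
jobs            # list the running forwards
kill %1 %2 %3   # stop them when finished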
apiVersion: v1
kind: Pod
metadata:
  name: my-adm-contact
  labels:
    app: my-adm-contact
spec:
  containers:
  - image: my-contact-adm
    imagePullPolicy: Never
    name: my-adm-contact
    ports:
    - containerPort: 8879
      protocol: TCP
    - containerPort: 3000
      protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: my-adm-contact-service
spec:
  ports:
  - port: 8879
    protocol: TCP
    targetPort: 8879
    name: adm8879
  - port: 3000
    protocol: TCP
    targetPort: 3000
    name: adm3000
  selector:
    app: my-adm-contact
  type: ClusterIP
---
apiVersion: v1
kind: Pod
metadata:
  name: my-gateway
  labels:
    app: my-gateway
spec:
  containers:
  - image: api-gateway
    imagePullPolicy: Never
    name: my-gateway
    ports:
    - containerPort: 3000
      protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: gateway-service
spec:
  ports:
  - port: 3000
    protocol: TCP
    targetPort: 3000
  selector:
    app: my-gateway
  type: ClusterIP

Service Topology in k8s 1.17 doesn't work

I enabled Service Topology in k8s, but when I create the following service, requests are not redirected to the local pod:
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  topologyKeys:
  - "kubernetes.io/hostname"

How to expose redis to outside with istio sidecar?

I'm using redis with k8s 1.15.0 and istio 1.4.3, and it works well inside the network.
However, when I tried to use the istio gateway and sidecar to expose it to the outside network, it failed.
Then I removed the istio sidecar and just started the redis server in k8s, and it worked.
After searching I added a DestinationRule to the config, but it didn't help.
So what's the problem? Thanks for any tips!
Here is my redis.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
      - name: redis
        image: docker.io/redis:5.0.5-alpine
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 16379
          protocol: TCP
          name: redis-port
        volumeMounts:
        - name: redis-data
          mountPath: /data
        - name: redis-conf
          mountPath: /etc/redis
        command:
        - "redis-server"
        args:
        - "/etc/redis/redis.conf"
        - "--protected-mode"
        - "no"
      volumes:
      - name: redis-conf
        configMap:
          name: redis-conf
          items:
          - key: redis.conf
            path: redis.conf
      - name: redis-data
        nfs:
          path: /data/redis
          server: 172.16.8.34
---
apiVersion: v1
kind: Service
metadata:
  name: redis-svc
  labels:
    app: redis-svc
spec:
  type: ClusterIP
  ports:
  - name: redis-port
    port: 16379
    protocol: TCP
  selector:
    app: redis
---
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: redis-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: tcp
      protocol: TCP
    hosts:
    - "redis.basic.svc.cluster.local"
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: redis-svc
spec:
  host: redis-svc
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: redis-vs
spec:
  hosts:
  - "redis.basic.svc.cluster.local"
  gateways:
  - redis-gateway
  tcp:
  - route:
    - destination:
        host: redis-svc.basic.svc.cluster.local
        port:
          number: 16379
Update:
This is how I make the request:
[root]# redis-cli -h redis.basic.svc.cluster.local -p 80
redis.basic.svc.cluster.local:80> get Test
Error: Protocol error, got "H" as reply type byte
There are a few things that need to be different when exposing a TCP application with Istio.
The hosts: needs to be "*", as the TCP protocol works only with IP:PORT. There are no headers at L4.
There needs to be a TCP port match in your VirtualService that matches the Gateway. I suggest naming it in a unique way and matching the Deployment port name.
I suggest avoiding port 80, as it is already used in the default ingress configuration and could result in a port conflict, so I changed it to 11337.
So your Gateway should look something like this:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: redis-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 11337
      name: redis-port
      protocol: TCP
    hosts:
    - "*"
And your VirtualService like this:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: redis-vs
spec:
  hosts:
  - "*"
  gateways:
  - redis-gateway
  tcp:
  - match:
    - port: 11337
    route:
    - destination:
        host: redis-svc
        port:
          number: 16379
Note that I removed namespaces for clarity.
Then add the custom port to the default ingress gateway using the following command:
kubectl edit svc istio-ingressgateway -n istio-system
and add the following next to the other port definitions:
- name: redis-port
  nodePort: 31402
  port: 11337
  protocol: TCP
  targetPort: 16379
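If you prefer not to edit the Service interactively, the same port can be appended with a patch (a sketch using the values above):
kubectl -n istio-system patch svc istio-ingressgateway --type=json \
  -p='[{"op":"add","path":"/spec/ports/-","value":{"name":"redis-port","port":11337,"protocol":"TCP","targetPort":16379,"nodePort":31402}}]'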
To access the exposed application, use the istio gateway external IP and the port that we just set up.
To get your gateway external IP you can use:
export INGRESS_HOST=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
redis-cli -h $INGRESS_HOST -p 11337
If your istio-ingressgateway does not have an external IP assigned, use one of your nodes' IP addresses and port 31402.
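For completeness, a sketch of pulling the node address and NodePort with jsonpath instead of hard-coding 31402:
export NODE_IP=$(kubectl get nodes -o jsonpath='{.items[0].status.addresses[?(@.type=="InternalIP")].address}')
export REDIS_NODEPORT=$(kubectl -n istio-system get svc istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="redis-port")].nodePort}')
redis-cli -h $NODE_IP -p $REDIS_NODEPORT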
Hope this helps.
Thanks for suren's answer.
But I think redis.basic.svc.cluster.local is the external DNS host to be matched by the VirtualService, and the VirtualService's destination host routes to the service redis-svc with its full namespace path.
Maybe that is not the reason.

How to configure ingress gateway in istio?

I'm new to istio, and I want to access my app through the istio ingress gateway, but I do not know why it does not work.
This is my kubenetes_deploy.yaml file content:
apiVersion: v1
kind: Service
metadata:
  name: batman
  labels:
    run: batman
spec:
  #type: NodePort
  ports:
  - port: 8000
    #nodePort: 32000
    targetPort: 7000
    #protocol: TCP
    name: batman
  selector:
    run: batman
    #version: v1
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: batman-v1
spec:
  replicas: 1
  selector:
    matchLabels:
      run: batman
  template:
    metadata:
      labels:
        run: batman
        version: v1
    spec:
      containers:
      - name: batman
        image: leowu/batman:v1
        ports:
        - containerPort: 7000
        env:
        - name: MONGODB_URL
          value: mongodb://localhost:27017/articles_demo_dev
      - name: mongo
        image: mongo
And here is my istio ingress_gateway.yaml config file:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: batman-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 15000
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: batman
spec:
  hosts:
  - "*"
  gateways:
  - batman-gateway
  http:
  - match:
    route:
    - destination:
        host: batman
        port:
          number: 7000
I created the ingress gateway from the example, and it looks fine, but when I run kubectl get svc istio-ingressgateway -n istio-system I can't see the listening port 15000 in the output. I do not know why.
Can anyone help me? Thanks.
First of all, as #Abhyudit Jain mentioned, you need to correct the port in the VirtualService to 8000.
And then you just add another port to your istio-ingressgateway service
kubectl edit svc istio-ingressgateway -n istio-system
and add this section:
ports:
- name: http
  nodePort: 30001
  port: 15000
  protocol: TCP
  targetPort: 80
This will accept HTTP traffic on port 15000 and route it to your destination service on port 8000.
A simple schema is as follows:
incoming traffic --> istio-gateway service --> istio-gateway --> virtual service --> service --> pod
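Once the port is added, a rough end-to-end check (a sketch; it assumes the istio-ingressgateway got an external IP, e.g. from a cloud load balancer or minikube tunnel):
export INGRESS_HOST=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
curl -v http://$INGRESS_HOST:15000/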
Your batman service listens on port 8000 and forwards traffic to the container's port 7000.
The istio traffic works like this:
ingress-gateway -> virtual-service -> destination-rule [optional] -> service
So your virtual service should be like:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: batman
spec:
  hosts:
  - "*"
  gateways:
  - batman-gateway
  http:
  - match:
    route:
    - destination:
        host: batman
        port:
          number: 8000  # <--- change

Configure Ingress Kubernetes - accessible only on single node

I set up ingress on my Kubernetes cluster, which runs on VMware virtual machines, by following the specifications here. All the ports are open and accessible.
https://github.com/nginxinc/kubernetes-ingress/tree/master/examples/complete-example
My master is x.x.x.10 and nodes are x.x.x.12 and x.x.x.13.
After the creation of the ingress/controllers, I need to get the IP where the nginx controller runs:
nginx-ingress-rc-kgfmd 1/1 Running 0 21h 172.16.5.5 x.x.x.12
It usually runs either on x.x.x.12 or x.x.x.13, and when I do the following it hits my web service:
curl --resolve master.federated.fds:80:x.x.x.12 https://master.federated.fds/coffee
where master.federated.fds is the DNS-resolvable name of the master.
I need to make this work without relying on the IP address, using only the DNS-resolvable name, or at least with any of the node IPs.
E.g. http://node2.federated.fds/coffee: when I curl this I get a Connection refused error.
Updating with specifications
apiVersion: v1
kind: Service
metadata:
  name: coffee-svc
  labels:
    app: coffee
spec:
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
    name: http
    # nodePort: 30080
  type: NodePort
  selector:
    app: coffee
ingress
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: cafe-ingress
spec:
  rules:
  - host: jciamaster.federated.fds
    http:
      paths:
      - path: /tea
        backend:
          serviceName: tea-svc
          servicePort: 80
      - path: /coffee
        backend:
          serviceName: coffee-svc
          servicePort: 80
nginx ingress controller:
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-ingress-rc
  labels:
    app: nginx-ingress
spec:
  replicas: 1
  selector:
    app: nginx-ingress
  template:
    metadata:
      labels:
        app: nginx-ingress
    spec:
      containers:
      - image: nginxdemos/nginx-ingress:0.8.1
        imagePullPolicy: Always
        name: nginx-ingress
        ports:
        - containerPort: 80
          hostPort: 80
I see that port 80 is listening only on the node where the nginx pod runs and not on any other node. Could someone please let me know how to access the application through all node IPs, or through a URL like jciamaster.federated.fds?
Thanks,
Update:
I tried to run the nginx controller with a Service, as suggested by Marc:
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-ingress-rc
  labels:
    app: nginx-ingress
spec:
  replicas: 1
  selector:
    app: nginx-ingress
  template:
    metadata:
      labels:
        app: nginx-ingress
    spec:
      containers:
      - image: nginxdemos/nginx-ingress:0.8.1
        imagePullPolicy: Always
        name: nginx-ingress
        ports:
        - containerPort: 80
        # Uncomment the lines below to enable extensive logging and/or customization of
        # NGINX configuration with configmaps
        #args:
        #- -v=3
        #- -nginx-configmaps=default/nginx-config
---
apiVersion: v1
kind: Service
metadata:
  labels:
    name: nginx-ingress-label
  name: nginx-ing-svc
spec:
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
    name: http
    nodePort: 30000
  type: NodePort
  selector:
    name: nginx-ingress
When I hit http://x.x.x.:30000/coffee it just hangs and does nothing. Am I doing anything wrong?
You can expose the nginx controller Pod with a NodePort Service, then you can access it on all nodes.
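For example, a minimal sketch of such a Service (it assumes the controller pods keep the app: nginx-ingress label used in the ReplicationController above; the port values are illustrative):
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-nodeport
spec:
  type: NodePort
  selector:
    app: nginx-ingress        # must match the pod labels, not the controller's name
  ports:
  - name: http
    port: 80
    targetPort: 80
    nodePort: 30080           # reachable on every node at <node-ip>:30080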