Class A and class C IP addresses in a Kubernetes cluster

Why are there two IP address classes in my Kubernetes cluster?
kubectl describe svc cara
Name: cara
Namespace: default
Labels: app=cara
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app":"cara"},"name":"cara","namespace":"default"},"spec":{"por...
Selector: app=cara
Type: NodePort
IP: 10.100.35.240
Port: cara 8000/TCP
TargetPort: cara/TCP
NodePort: cara 31614/TCP
Endpoints: 192.168.41.137:8000,192.168.50.89:8000
Port: vrde 6666/TCP
TargetPort: vrde/TCP
NodePort: vrde 30666/TCP
Endpoints: 192.168.41.137:6666,192.168.50.89:6666
Port: rdp 3389/TCP
TargetPort: rdp/TCP
NodePort: rdp 31490/TCP
Endpoints: 192.168.41.137:3389,192.168.50.89:3389
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
After installing the master I ran:
kubeadm init --v=0 --pod-network-cidr=192.167.0.0/16
kubectl apply --v=0 -f https://docs.projectcalico.org/v3.6/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml
I expected one range of IP addresses in my cluster network. Am I misunderstanding something?

There are two main CIDRs in a Kubernetes cluster: the pod network and the service network. It seems that your cluster has a pod network of 192.168.0.0/16 (the endpoints above are pod IPs) and a service network in the 10.0.0.0/8 range (kubeadm's default service CIDR is 10.96.0.0/12, which is where the 10.100.35.240 ClusterIP comes from). Seeing two separate ranges is expected; they are not address "classes" but two independently configurable CIDRs.
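If you want to confirm which ranges your own cluster uses, you can read them back from the control-plane component flags. A minimal sketch, assuming a kubeadm-built cluster where the control plane runs as static pods labelled component=kube-apiserver and component=kube-controller-manager:
# Pod network CIDR (taken from the controller-manager's flags)
kubectl -n kube-system get pod -l component=kube-controller-manager -o yaml | grep -- --cluster-cidr
# Service network CIDR (taken from the API server's flags)
kubectl -n kube-system get pod -l component=kube-apiserver -o yaml | grep -- --service-cluster-ip-range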

Related

Kubernetes service port forwarding to multiple pods

I have configured a Kubernetes service that starts 2 replicas/pods.
Here are the service settings and description.
apiVersion: v1
kind: Service
metadata:
  name: my-integration
spec:
  type: NodePort
  selector:
    camel.apache.org/integration: my-integration
  ports:
  - protocol: UDP
    port: 8444
    targetPort: 8444
    nodePort: 30006
kubectl describe service my-service
Detail:
Name: my-service
Namespace: default
Labels: <none>
Annotations: <none>
Selector: camel.apache.org/integration=xxx
Type: NodePort
IP Families: <none>
IP: 10.xxx.16.235
IPs: 10.xxx.16.235
Port: <unset> 8444/UDP
TargetPort: 8444/UDP
NodePort: <unset> 30006/UDP
Endpoints: 10.xxx.1.39:8444,10.xxx.1.40:8444
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
I can see that 2 endpoints are listed, but unfortunately only one pod receives events. When I kill that pod, the other one starts receiving events.
My goal is to have both pods receive events in round robin, but I don't know how to configure the traffic policies.
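For reference, the two Service-level fields usually meant by "traffic policies" are sessionAffinity and externalTrafficPolicy. A minimal sketch of where they sit in the spec above; whether they yield true round robin for UDP traffic also depends on how kube-proxy tracks UDP flows:
spec:
  type: NodePort
  sessionAffinity: None           # no client stickiness at the Service level
  externalTrafficPolicy: Cluster  # NodePort traffic may be forwarded to pods on any node
  selector:
    camel.apache.org/integration: my-integration
  ports:
  - protocol: UDP
    port: 8444
    targetPort: 8444
    nodePort: 30006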

Gloo gateway-proxy External IP on GKE : telnet: Unable to connect to remote host: Connection refused

I deployed nifi and the gloo API Gateway on the same GKE cluster. The external IP exposed as a LoadBalancer works well (it opens in a web browser and via telnet). However, when I use telnet to connect to the gloo API Gateway from the GKE Cloud Shell, the connection is refused.
Based on related causes and solutions, I have already allowed traffic into the cluster by creating a firewall rule:
gcloud compute firewall-rules create my-rule --allow=all
What can I do to fix this?
kubectl get -n gloo-system service/gateway-proxy-v2 -o yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app":"gloo","gateway-proxy-id":"gateway-proxy-v2","gloo":"gateway-proxy"},"name":"gateway-proxy-v2","namespace":"gloo-system"},"spec":{"ports":[{"name":"http","port":80,"protocol":"TCP","targetPort":8080},{"name":"https","port":443,"protocol":"TCP","targetPort":8443}],"selector":{"gateway-proxy":"live","gateway-proxy-id":"gateway-proxy-v2"},"type":"LoadBalancer"}}
  labels:
    app: gloo
    gateway-proxy-id: gateway-proxy-v2
    gloo: gateway-proxy
  name: gateway-proxy-v2
  namespace: gloo-system
spec:
  clusterIP: 10.122.10.215
  externalTrafficPolicy: Cluster
  ports:
  - name: http
    nodePort: 30189
    port: 80
    protocol: TCP
    targetPort: 8080
  - name: https
    nodePort: 30741
    port: 443
    protocol: TCP
    targetPort: 8443
  selector:
    gateway-proxy: live
    gateway-proxy-id: gateway-proxy-v2
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
    - ip: 34.xx.xx.xx
kubectl get svc -n gloo-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
gateway-proxy-v2 LoadBalancer 10.122.10.215 34.xx.xx.xx 80:30189/TCP,443:30741/TCP 63m
gloo ClusterIP 10.122.5.253 <none> 9977/TCP 63m
You can try bumping Gloo to version 1.3.6.
Please take a look at https://docs.solo.io/gloo/latest/upgrading/1.0.0/ to track any possible breaking changes.
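A hedged sketch of what that bump could look like if Gloo was installed via its Helm chart (the chart repo URL and the release name "gloo" are assumptions; adjust them to however you installed it):
# Assumed chart repo for open-source Gloo and release name "gloo"
helm repo add gloo https://storage.googleapis.com/solo-public-helm
helm repo update
helm upgrade gloo gloo/gloo --namespace gloo-system --version 1.3.6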

How to expose an external IP address for a sample Istio application

I am trying to set up the bookinfo sample application for Istio and Kubernetes on a small cluster.
The cluster consists of two machines, a master and a worker, running on Ubuntu 18.04 on two Amazon AWS EC2 instances.
Each of the instances has an external IP address assigned.
What I'm unable to do is figure out how to expose the bookinfo service to the outside world.
I am confused as to whether I need to expose the Istio ingress gateway or each one of the bookinfo services separately.
When listing the ingress gateway, the external IP field just says pending.
Also, when describing the worker node, there's no mention of an external IP address in the output.
I've gone through Google but can't really find a proper solution.
Describing the ingress gateway only gives internal (i.e. 10.x.x.x) addresses.
Output from get and describe commands:
kubectl get svc istio-ingressgateway -n istio-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
istio-ingressgateway LoadBalancer 10.96.39.4 <pending> 15020:31451/TCP,80:31380/TCP,443:31390/TCP,31400:31400/TCP,15029:31075/TCP,15030:32093/TCP,15031:31560/TCP,15032:30526/TCP,15443:31526/TCP 68m
kubectl describe svc istio-ingressgateway -n istio-system
Name: istio-ingressgateway
Namespace: istio-system
Labels: app=istio-ingressgateway
chart=gateways
heritage=Tiller
istio=ingressgateway
release=istio
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app":"istio-ingressgateway","chart":"gateways","heritage":"Til...
Selector: app=istio-ingressgateway,istio=ingressgateway,release=istio
Type: LoadBalancer
IP: 10.96.39.4
Port: status-port 15020/TCP
TargetPort: 15020/TCP
NodePort: status-port 31451/TCP
Endpoints: 10.244.1.6:15020
Port: http2 80/TCP
TargetPort: 80/TCP
NodePort: http2 31380/TCP
Endpoints: 10.244.1.6:80
Port: https 443/TCP
TargetPort: 443/TCP
NodePort: https 31390/TCP
Endpoints: 10.244.1.6:443
Port: tcp 31400/TCP
TargetPort: 31400/TCP
NodePort: tcp 31400/TCP
Endpoints: 10.244.1.6:31400
Port: https-kiali 15029/TCP
TargetPort: 15029/TCP
NodePort: https-kiali 31075/TCP
Endpoints: 10.244.1.6:15029
Port: https-prometheus 15030/TCP
TargetPort: 15030/TCP
NodePort: https-prometheus 32093/TCP
Endpoints: 10.244.1.6:15030
Port: https-grafana 15031/TCP
TargetPort: 15031/TCP
NodePort: https-grafana 31560/TCP
Endpoints: 10.244.1.6:15031
Port: https-tracing 15032/TCP
TargetPort: 15032/TCP
NodePort: https-tracing 30526/TCP
Endpoints: 10.244.1.6:15032
Port: tls 15443/TCP
TargetPort: 15443/TCP
NodePort: tls 31526/TCP
Endpoints: 10.244.1.6:15443
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
Any help appreciated.
Quoting Istio's official documentation:
If your cluster is running in an environment that does not support an external load balancer (e.g., minikube), the EXTERNAL-IP of istio-ingressgateway will say <pending>. To access the gateway, use the service's NodePort, or use port-forwarding instead.
Your cluster seems to fall into the "custom (cloud)" way of setting up Kubernetes, which by default does not support the LoadBalancer service type.
Solution for you:
You must allow inbound traffic to the AWS EC2 instance serving the worker role
(in other words, you have to open the NodePort of the istio-ingressgateway service in that instance's firewall/security group; see below for how to get this port number).
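A hedged sketch of what opening that port could look like with the AWS CLI (the security group ID is a placeholder; 31380 is the http2 NodePort shown in the describe output above):
# Placeholder security group ID for the worker instance; 31380 is the
# http2 NodePort of istio-ingressgateway from the describe output above.
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 31380 \
  --cidr 0.0.0.0/0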
Get NodePort of istio-ingressgateway:
with command:
export INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].nodePort}')
Get EXTERNAL_IP of your worker node
with command:
export INGRESS_HOST=$(kubectl get nodes --selector='!node-role.kubernetes.io/master' -o jsonpath='{.items[*].status.addresses[?(@.type=="ExternalIP")].address}')
and then follow the remaining part of the bookinfo sample without any changes.
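As a quick check once both variables are set (this assumes the bookinfo sample's default gateway routes are applied):
export GATEWAY_URL=$INGRESS_HOST:$INGRESS_PORT
curl -s "http://${GATEWAY_URL}/productpage" | grep -o "<title>.*</title>"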

Accessing nginx ingress controller on port 80

I am able to access the nginx ingress controller on the NodePort. My goal is to access the controller on port 80.
Output of kubectl -n ingress-nginx describe service/ingress-nginx
Name: ingress-nginx
Namespace: ingress-nginx
Labels: app.kubernetes.io/name=ingress-nginx
app.kubernetes.io/part-of=ingress-nginx
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app.kubernetes.io/name":"ingress-nginx","app.kubernetes.io/par...
Selector: app.kubernetes.io/name=ingress-nginx,app.kubernetes.io/part-of=ingress-nginx
Type: NodePort
IP: 10.100.48.223
Port: http 80/TCP
TargetPort: 80/TCP
NodePort: http 30734/TCP
Endpoints: 192.168.0.8:80
Port: https 443/TCP
TargetPort: 443/TCP
NodePort: https 32609/TCP
Endpoints: 192.168.0.8:443
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
I have a few ideas for solving that problem:
redirect traffic incoming on port 30734 to port 80 via iptables
resize the NodePort range so that port 80 can be a NodePort as well
I am not sure if these are common ways to do this, so I'd love to hear how you usually deal with it. Is there perhaps another component necessary?
The normal way to handle this is with a LoadBalancer-type Service, which puts a cloud load balancer in front of the existing NodePort so that you can remap the normal ports onto it.
You should change your nginx Service from type NodePort to LoadBalancer.
An example manifest looks like:
spec:
  ports:
  - name: nginx
    port: 80
    protocol: TCP
    targetPort: 8000  # your nginx container port
  type: LoadBalancer
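If you would rather switch the existing Service in place than edit the manifest, a minimal sketch (assuming the Service is named ingress-nginx, as in the describe output above):
kubectl -n ingress-nginx patch service ingress-nginx -p '{"spec": {"type": "LoadBalancer"}}'
Note that outside of a cloud environment the EXTERNAL-IP will stay pending unless something in the cluster hands out load-balancer addresses.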

Kubernetes port forward UDP

My service exposes UDP port 6831, among other ports. Here is the service description:
kubectl describe service jaeger-all-in-one
Name: jaeger-all-in-one
Namespace: myproject
Labels: jaeger-infra=all-in-one
Annotations: <none>
Selector: name=jaeger-all-in-one
Type: ClusterIP
IP: 172.30.213.142
Port: query-http 80/TCP
Endpoints: 172.17.0.3:16686
Port: agent-zipkin-thrift 5775/UDP
Endpoints: 172.17.0.3:5775
Port: agent-compact 6831/UDP
Endpoints: 172.17.0.3:6831
Port: agent-binary 6832/UDP
Endpoints: 172.17.0.3:6832
Session Affinity: None
Events: <none>
Forward port:
kubectl port-forward jaeger-all-in-one-1-cc8wd 12388:6831
However, it forwards only TCP ports; I would like to send data to the UDP port.