K8s NetworkPolicy endPort cannot be applied

I'm trying to apply an egress port range to my k8s network policy like this:
egress:
- to:
  - ipBlock:
      cidr: 10.0.0.0/24
  ports:
  - protocol: TCP
    port: 32000
    endPort: 32768
It applies fine, but when I describe the policy I only see that port 32000 is allowed.
Am I missing something, or have I made a mistake?
Thanks.

It seems you took this example from Targeting a range of Ports. Two questions:
endPort only works when the NetworkPolicyEndPort feature gate is enabled. Although the documentation states that this feature is enabled by default, can you please check that it is actually turned on for you?
What is your CNI plugin, and does it support endPort in the NetworkPolicy spec?
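For reference, here is a minimal, complete NetworkPolicy using endPort that matches the snippet above; the policy name, namespace and empty podSelector are assumptions for illustration:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-egress-port-range   # hypothetical name
  namespace: default              # assumed namespace
spec:
  podSelector: {}                 # assumed: selects all pods in the namespace
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: 10.0.0.0/24
    ports:
    - protocol: TCP
      port: 32000
      endPort: 32768

If the feature gate is disabled, the API server typically drops the endPort field on create; and if the CNI plugin does not support it, the rule may effectively be applied to the single port only, which would match the behaviour you describe.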


How are the various Istio Ports used?

Question
I am trying to learn Istio and I am setting up my Istio Ingress-Gateway. When I set that up, there are the following port options (as indicated here):
Port
NodePort
TargetPort
NodePort makes sense to me. That is the port the Ingress-Gateway will listen on at each worker node in the Kubernetes cluster. Requests that hit it are routed into the Kubernetes cluster using the Ingress Gateway CRDs.
In the examples, Port is usually set to the common port for its matching traffic (80 for http, and 443 for https, etc). I don't understand what Istio needs this port for, as I don't see any traffic using anything but the NodePort.
TargetPort is a mystery to me. I have seen some documentation on it for normal Istio Gateways (that says it is only applicable when using ServiceEntries), but nothing that makes sense for an Ingress-Gateway.
My question is this: in relation to an Ingress-Gateway (not a normal Gateway), what is a TargetPort?
More Details
In the end, I am trying to debug why my ingress traffic is getting a "connection refused" response.
I set up my Istio Operator following this tutorial with this configuration:
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  name: istio-controlplane
  namespace: istio-system
spec:
  components:
    ingressGateways:
    - enabled: true
      k8s:
        service:
          ports:
          - name: http2
            port: 80
            nodePort: 30980
        hpaSpec:
          minReplicas: 2
      name: istio-ingressgateway
    pilot:
      enabled: true
      k8s:
        hpaSpec:
          minReplicas: 2
  profile: default
I omitted the TargetPort from my config because I found a release note that said Istio will pick safe defaults.
With that I tried to follow the steps found in this tutorial.
I tried the curl command indicated in that tutorial:
curl -s -I -H Host:httpbin.example.com "http://10.20.30.40:30980/status/200"
I got the response of Failed to connect to 10.20.30.40 port 30980: Connection refused
But I can ping 10.20.30.40 fine, and the command to get the NodePort returns 30980.
So I got to thinking that maybe this is an issue with the TargetPort setting that I don't understand.
A check of the istiod logs hinted that I may be on the right track. I ran:
kubectl logs -n istio-system -l app=istiod
and among the logs I found:
warn buildGatewayListeners: skipping privileged gateway port 80 for node istio-ingressgateway-dc748bc9-q44j7.istio-system as it is an unprivileged pod
warn gateway has zero listeners for node istio-ingressgateway-dc748bc9-q44j7.istio-system
So, if you got this far, then WOW! I thank you for reading it all. If you have any suggestions on what I need to set TargetPort to, or if I am missing something else, I would love to hear it.
Port, NodePort and TargetPort are not Istio concepts but Kubernetes ones, more specifically concepts of Kubernetes Services, which is why there is no detailed description of them in the Istio Operator API.
The Istio Operator API exposes the options to configure the (Kubernetes) Service of the Ingress Gateway.
For a description of those concepts, see the documentation for Kubernetes Service.
See also
Difference between targetPort and port in Kubernetes Service definition
So the targetPort is where the containers of the Ingress Gateway Pod receive their traffic.
Therefore I think that the configuration of ports and target ports is application specific, and the mapping 80->8080 is more or less arbitrary, i.e. a "decision" of the application.
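To make the port/targetPort distinction concrete, here is a minimal sketch of a plain Kubernetes Service; the names and the 80->8080 mapping are illustrative assumptions, not values taken from the Istio chart:

apiVersion: v1
kind: Service
metadata:
  name: example-web          # hypothetical Service name
spec:
  selector:
    app: example-web         # hypothetical pod label
  ports:
  - name: http
    port: 80                 # port clients inside the cluster use to reach the Service
    targetPort: 8080         # port the container in the backing Pod actually listens on

Traffic arriving at the Service on port 80 is forwarded to port 8080 on one of the selected Pods; the istio-ingressgateway Service works the same way.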
Additional details:
The Istio Operator describes the Ingress Gateway, which itself consists of a Kubernetes Service and a Kubernetes Deployment. Usually it is deployed in istio-system. You can inspect the Kubernetes Service of istio-ingressgateway and it will match the specification of that YAML.
So the targetPort of the Istio Ingress Gateway Service points at the Ingress Gateway's own containers.
However, this is mostly an implementation detail of the Istio Ingress Gateway and is not related to the Services and VirtualServices which you define for your apps.
The Ingress Gateway is itself fronted by a Service: it receives traffic on the port you define (e.g. 80) and forwards it to 8080 on its containers. It then processes the traffic according to the rules configured by Gateways and VirtualServices and sends it on to the Service of the application.
I still don't really understand what TargetPort is doing, but I got the tutorial working.
I went back and uninstalled Istio (by deleting the operator configuration and then the Istio namespaces). I then re-installed it, but I took out the part of my configuration that specified the node port.
I then ran kubectl get service istio-ingressgateway -o yaml -n istio-system. That showed me what the Istio ingress gateway was using as its defaults for the ports. I then went and updated my YAML for the operator to match (except for my desired custom NodePort). That worked.
In the end, the yaml looked like this:
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  name: istio-controlplane
  namespace: istio-system
spec:
  components:
    ingressGateways:
    - enabled: true
      k8s:
        service:
          ports:
          - name: status-port
            nodePort: 32562
            port: 15021
            protocol: TCP
            targetPort: 15021
          - name: http2
            nodePort: 30980
            port: 80
            protocol: TCP
            targetPort: 8080
          - name: https
            nodePort: 32013
            port: 443
            protocol: TCP
            targetPort: 8443
        hpaSpec:
          minReplicas: 2
      name: istio-ingressgateway
    pilot:
      enabled: true
      k8s:
        hpaSpec:
          minReplicas: 2
  profile: default
I would still like to understand what the TargetPort is doing. So if anyone can answer that (again, in the context of the Istio Ingress Gateway Service, not an Istio Gateway), I will accept that answer.
Configuring the istio-gateway with a service will create a Kubernetes Service with the given port configuration, which (as already mentioned in a different answer) isn't an Istio concept but a Kubernetes one. So we need to take a look at the underlying Kubernetes mechanisms.
The Service that will be created is of type LoadBalancer by default, and your cloud provider will create an external load balancer that forwards traffic coming to it on a certain port into the cluster.
$ kubectl get svc -n istio-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S)
istio-ingressgateway LoadBalancer 10.107.158.5 1.2.3.4 15021:32562/TCP,80:30980/TCP,443:32013/TCP
You can see the internal ip of the service as well as the external ip of the external loadbalancer and in the PORT(S) column that for example your port 80 is mapped to port 30980. Behind the scenes kube-proxy takes your config and configures a bunch of iptables chains to set up routing of traffic to the ingress-gateway pods.
If you have access to a kubernetes host you can investigate those using the iptables command. First start with the KUBE-SERVICES chain.
$ iptables -t nat -nL KUBE-SERVICES | grep ingressgateway
target prot opt source destination
KUBE-SVC-TFRZ6Y6WOLX5SOWZ tcp -- 0.0.0.0/0 10.107.158.5 /* istio-system/istio-ingressgateway:status-port cluster IP */ tcp dpt:15021
KUBE-FW-TFRZ6Y6WOLX5SOWZ tcp -- 0.0.0.0/0 1.2.3.4 /* istio-system/istio-ingressgateway:status-port loadbalancer IP */ tcp dpt:15021
KUBE-SVC-G6D3V5KS3PXPUEDS tcp -- 0.0.0.0/0 10.107.158.5 /* istio-system/istio-ingressgateway:http2 cluster IP */ tcp dpt:80
KUBE-FW-G6D3V5KS3PXPUEDS tcp -- 0.0.0.0/0 1.2.3.4 /* istio-system/istio-ingressgateway:http2 loadbalancer IP */ tcp dpt:80
KUBE-SVC-7N6LHPYFOVFT454K tcp -- 0.0.0.0/0 10.107.158.5 /* istio-system/istio-ingressgateway:https cluster IP */ tcp dpt:443
KUBE-FW-7N6LHPYFOVFT454K tcp -- 0.0.0.0/0 1.2.3.4 /* istio-system/istio-ingressgateway:https loadbalancer IP */ tcp dpt:443
You'll see that there are basically six chains, two for each port you defined: 80, 443 and 15021 (on the far right).
The KUBE-SVC-* are for cluster internal traffic, the KUBE-FW-* for cluster external traffic. If you take a closer look you can see that the destination is the (external|internal) ip and one of the ports. So the traffic arriving on the node's network interface is for example for the destination 1.2.3.4:80. You can now follow down that chain, in my case KUBE-FW-G6D3V5KS3PXPUEDS:
$ iptables -t nat -nL KUBE-FW-G6D3V5KS3PXPUEDS | grep KUBE-SVC
target prot opt source destination
KUBE-SVC-LBUWNFSUU3FNPZ7L all -- 0.0.0.0/0 0.0.0.0/0 /* istio-system/istio-ingressgateway:http2 loadbalancer IP */
Follow that one as well:
$ iptables -t nat -nL KUBE-SVC-LBUWNFSUU3FNPZ7L | grep KUBE-SEP
target prot opt source destination
KUBE-SEP-RZL3ZLWSG2M7ZJYD all -- 0.0.0.0/0 0.0.0.0/0 /* istio-system/istio-ingressgateway:http2 */ statistic mode random probability 0.50000000000
KUBE-SEP-F7W3YTTYPP5NEPJ7 all -- 0.0.0.0/0 0.0.0.0/0 /* istio-system/istio-ingressgateway:http2 */
where you see the service endpoints, which are round robin loadbalanced by 50:50, and finally (choosing one of them):
$ iptables -t nat -nL KUBE-SEP-RZL3ZLWSG2M7ZJYD | grep DNAT
target prot opt source destination
DNAT tcp -- 0.0.0.0/0 0.0.0.0/0 /* istio-system/istio-ingressgateway:http2 */ tcp to:172.17.0.4:8080
where the packets end up being DNATed to 172.17.0.4:8080, i.e. the IP of one of the istio-ingressgateway pods, on port 8080.
If you don't have access to a host or don't run in a public cloud environment, you won't have an external load balancer, so you won't find any KUBE-FW-* chains (and the EXTERNAL-IP of the service will stay <pending>). In that case you would use <nodeip>:<nodeport> to access the cluster from outside, for which iptables chains are also created. Run iptables -t nat -nL KUBE-NODEPORTS | grep istio-ingressgateway, which will also show you three KUBE-SVC-* chains that you can follow down to the DNAT as shown above.
So the targetPort (like 8080) is used to configure networking in Kubernetes, and Istio also uses it to define which ports the ingressgateway pods bind to. You can kubectl describe pod <istio-ingressgateway-pod>, where you'll find the defined ports (8080 and 8443) as container ports. Change them to whatever (above 1023) and they will change accordingly. Next you would apply a Gateway with spec.servers where you define those ports 8080,8443 to configure envoy (= the istio-ingressgateway, the one you select with spec.selector) to listen on those ports, and a VirtualService to define how to handle the received requests; see the sketch below. See also my other answer on that topic.
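Following that description, a minimal sketch of such a Gateway and VirtualService might look like the one below; the hostnames, route target and port numbers are illustrative assumptions, and the exact port-matching behaviour can differ between Istio versions:

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: example-gateway          # hypothetical name
spec:
  selector:
    istio: ingressgateway        # selects the istio-ingressgateway pods
  servers:
  - port:
      number: 8080               # port the ingressgateway binds to, per the description above
      name: http
      protocol: HTTP
    hosts:
    - "httpbin.example.com"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: example-virtualservice   # hypothetical name
spec:
  hosts:
  - "httpbin.example.com"
  gateways:
  - example-gateway
  http:
  - route:
    - destination:
        host: httpbin            # hypothetical application Service
        port:
          number: 8000           # hypothetical application Service port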
Why didn't your initial config work? If you omit the targetPort, Istio will bind to the port you define (80). That requires Istio to run as root, otherwise the ingressgateway is unable to bind to ports below 1024. You can change that by setting values.gateways.istio-ingressgateway.runAsRoot=true in the operator; see also the release note you mentioned. In that case the whole traffic flow from above would look exactly the same, except that the ingressgateway pod would bind to 80,443 instead of 8080,8443 and the DNAT would go to <pod-ip>:(80|443) instead of <pod-ip>:(8080|8443).
So you basically just misunderstood the release note: If you don't run the istio-ingressgateway pod as root you have to define the targetPort or alternatively omit the whole k8s.service overlay (in that case istio will choose safe ports itself).
Note that I grepped for KUBE-SVC, KUBE-SEP and DNAT. There will always be a bunch of KUBE-MARK-MASQ and KUBE-MARK-DROP chains that don't really matter for now. If you want to learn more about the topic, there are some great articles out there, like this one.

istio failing with "failed checking application ports"

Using istio 1.0.2 and kubernetes 1.12 on GKE.
When deploying a web application, the pod never reaches the healthy status.
My main pod spits out healthy logs.
However, my sidecar, i.e. the istio-proxy container, reads:
* failed checking application ports. listeners="0.0.0.0:15090","10.8.48.10:53","10.8.63.194:15443","10.8.63.194:443","10.8.58.47:15011","10.8.54.249:42422","10.8.48.44:443","10.8.58.10:44134","10.8.54.34:443","10.8.63.194:15020","10.8.49.250:8080","10.8.63.194:31400","10.8.63.194:15029","10.8.63.194:15030","10.8.60.185:11211","10.8.49.0:53","10.8.61.194:443","10.8.48.1:443","10.8.48.180:80","10.8.51.133:443","10.8.63.194:15031","10.8.63.194:15032","0.0.0.0:9901","0.0.0.0:9090","0.0.0.0:80","0.0.0.0:3000","0.0.0.0:8060","0.0.0.0:15010","0.0.0.0:8080","0.0.0.0:20001","0.0.0.0:7979","0.0.0.0:9091","0.0.0.0:9411","0.0.0.0:15004","0.0.0.0:15014","0.0.0.0:3030","10.8.33.8:15020","0.0.0.0:15001"
* envoy missing listener for inbound application port: 5000
5000 is indeed the port my web app is listening on.
Any suggestions?
If there is a mismatch between the deployment port and the service port, this can cause issues in combination with the readiness check of the sidecar.
Add the annotation readiness.status.sidecar.istio.io/applicationPorts in your deployment like this:
annotations:
  readiness.status.sidecar.istio.io/applicationPorts: "5000"
You can add multiple ports by using comma separation.
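For context, here is a sketch of where that annotation goes, assuming a hypothetical Deployment for the web app; note that it belongs on the pod template, not on the Deployment's own metadata:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-web-app                 # hypothetical Deployment name
spec:
  selector:
    matchLabels:
      app: my-web-app
  template:
    metadata:
      labels:
        app: my-web-app
      annotations:
        readiness.status.sidecar.istio.io/applicationPorts: "5000"
    spec:
      containers:
      - name: my-web-app           # hypothetical container name and image
        image: my-web-app:latest
        ports:
        - containerPort: 5000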
@mkrobi I got this working as suggested in this post by adding the following:
readinessProbe:
  httpGet:
    path: /
    port: 8080
    scheme: HTTP
to the containers in my deployment. Make sure to change port 8080 to 5000.
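In other words, with the port adjusted for this question's app, the container spec would look roughly like this (the container name and image are assumptions):

containers:
- name: my-web-app               # hypothetical container name and image
  image: my-web-app:latest
  ports:
  - containerPort: 5000
  readinessProbe:
    httpGet:
      path: /
      port: 5000
      scheme: HTTP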

Kubernetes network policy egress ports

I have the following network policy for restricting access to a frontend service page:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  namespace: namespace-a
  name: allow-frontend-access-from-external-ip
spec:
  podSelector:
    matchLabels:
      app: frontend-service
  ingress:
  - from:
    - ipBlock:
        cidr: 0.0.0.0/0
    ports:
    - protocol: TCP
      port: 443
  egress:
  - to:
    - ipBlock:
        cidr: 0.0.0.0/0
    ports:
    - protocol: TCP
      port: 443
My question is: can I enforce HTTPS with my egress rule (the port restriction on 443), and if so, how does this work? Assuming a client connects to the frontend-service and chooses a random port on their machine for this connection, how does Kubernetes know about that port? Or is there some kind of port mapping in the cluster, so the traffic back to the client is on port 443 and gets mapped back to the client's original port when leaving the cluster?
You might have a wrong understanding of the network policy (NP).
This is how you should interpret this section:
egress:
- to:
  - ipBlock:
      cidr: 0.0.0.0/0
  ports:
  - protocol: TCP
    port: 443
Allow outgoing traffic from the selected pods to any destination within the 0.0.0.0/0 CIDR, but only to destination port 443.
The part you are asking about,
how does Kubernetes know about that port, or is there a kind of port mapping in the cluster so the traffic back to the client is on port 443 and gets mapped back to the client's original port when leaving the cluster?
is handled by the node's network address translation (SNAT) in the following way:
For the traffic that goes from a pod to external addresses, Kubernetes simply uses SNAT: it replaces the pod's internal source IP:port with the host's IP:port. When the return packet comes back to the host, it rewrites the destination back to the pod's IP:port and sends it on to the original pod. The whole process is transparent to the original pod, which doesn't know about the address translation at all. Also note that the port in an ingress or egress rule always refers to the destination port of the connection, not the client's ephemeral source port, and in practice NetworkPolicy implementations track connections, so reply packets of an allowed connection are not blocked.
Take a look at Kubernetes networking basics for a better understanding.

Kube-dns service discovery cannot discover port number of service

I am using DNS-based service discovery to discover services in a K8s cluster.
From this link it is very clear that to discover a service named my-service we can do a name lookup for "my-service.my-ns" and the pod should be able to find the service.
However, for discovering the service's port, the suggested lookup is
"_http._tcp.my-service.my-ns"
where
_http refers to the port named http in my-service.
But even after using _http._tcp.my-service it doesn't resolve the port number. Below are the details.
The my-service that needs to be discovered:
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: my-service
  ports:
  - name: http
    protocol: TCP
    port: 5000
    targetPort: 5000
The client-service YAML snippet trying to discover my-service and its port:
spec:
  containers:
  - name: client-service
    image: client-service
    imagePullPolicy: Always
    ports:
    - containerPort: 7799
    resources:
      limits:
        cpu: "100m"
        memory: "500Mi"
    env:
    - name: HOST
      value: my-service
    - name: PORT
      value: _http._tcp.my-service
Now when I make a request it fails and logs the following URL, which is clearly incorrect, as the port number was not discovered.
http://my-service:_http._tcp.my-service
I am not sure what I am doing wrong here; I am following the same instructions mentioned in the document.
Can somebody suggest what is wrong and how we can discover the port using DNS-based service discovery? Is my understanding wrong that the lookup will return the literal port value?
Cluster details
The K8s cluster version is 1.11.5-gke.5, and kube-dns is running.
Additional details: trying to discover the service from busybox, it is not able to discover the port value 5000.
kubectl exec busybox -- nslookup my-service
Server: 10.51.240.10
Address: 10.51.240.10:53
Name: my-service.default.svc.cluster.local
Address: 10.51.253.236
*** Can't find my-service.svc.cluster.local: No answer
*** Can't find my-service.cluster.local: No answer
*** Can't find my-service.us-east4-a.c.gdic-infinity-dev.internal: No answer
*** Can't find my-service.c.gdic-infinity-dev.internal: No answer
*** Can't find my-service.google.internal: No answer
*** Can't find my-service.default.svc.cluster.local: No answer
*** Can't find my-service.svc.cluster.local: No answer
*** Can't find my-service.cluster.local: No answer
*** Can't find my-service.us-east4-a.c.gdic-infinity-dev.internal: No answer
*** Can't find my-service.c.gdic-infinity-dev.internal: No answer
*** Can't find my-service.google.internal: No answer
kubectl exec busybox -- nslookup _http._tcp.my-service
Server: 10.51.240.10
Address: 10.51.240.10:53
** server can't find _http._tcp.my-service: NXDOMAIN
*** Can't find _http._tcp.my-service: No answer
Since Services come with their own (Kubernetes-internal) IP addresses, the easy answer here is to not pick arbitrary ports for Services. Change to port: 80 in your Service definition, and clients will be able to reach it using the default HTTP port. When you set the environment variable, set
- name: PORT
  value: "80"
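That is, a sketch of the adjusted Service, keeping the container listening on 5000 (this assumes the application itself does not need to change):

apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: my-service
  ports:
  - name: http
    protocol: TCP
    port: 80          # clients reach the Service on the default HTTP port
    targetPort: 5000  # the container still listens on 5000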
DNS supports several different record types; for example, an A record translates a host name to its IPv4 address, and AAAA to an IPv6 address. The Kubernetes Service documentation you cite notes (emphasis mine)
you can do a DNS SRV query ... to discover the port number for "http".
While SRV records seem like they solve both halves of this problem (they provide a port and a host name for a service), in practice they get fairly little use. The linked Wikipedia page has a list of services that use them, but "connect to the thing this SRV record points at" isn't an option in mainstream TCP clients that I know of.
You should be able to verify this with a command like (running this debugging image)
kubectl run debug --rm -it --image giantswarm/tiny-tools sh
# dig -t srv _http._tcp.my-service
(But notice the -t srv argument; it is not the default record type.)
Most things that expect a PORT environment variable or similar expect a number or, failing that, a name they can find in an /etc/services file. The syntax you're trying to use here, providing a DNS SRV name instead, probably just won't work unless you know the specific software supports it.
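Alternatively, if you keep the Service on port 5000, the simplest fix is to pass the port number literally; a sketch of the client-service env section under that assumption:

env:
- name: HOST
  value: my-service      # resolved via cluster DNS as before
- name: PORT
  value: "5000"          # pass the Service port as a plain number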

exposing CockroachDB on Kubernetes to public IP

I have a CockroachDB instance running in a Kubernetes cluster on Google Kubernetes Engine. I am trying to expose port 26257 so I can connect to it from my local machine.
As stated in this answer, port forwarding to the pod will not work.
I have an nginx-ingress controller which is used to map from my domain name paths to services, so I tried to use that:
I changed my db-cockroachdb-public service from ClusterIP to NodePort:
type: NodePort
I added these lines to my nginx-controller YAML:
- name: postgresql
  nodePort: 30472
  port: 26257
  protocol: TCP
  targetPort: 26257
and these lines to my ingress YAML:
- host: db.mydomain.com
  http:
    paths:
    - path: /
      backend:
        serviceName: db-cockroachdb-public
        servicePort: 26257
However, I'm unable to connect to the database - connection gets refused. I also tried to disable SSL redirects in the nginx controller, but it still doesn't work.
I also tried a ConfigMap but it didn't do anything:
https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/exposing-tcp-udp-services.md
There are a few ways to fix this. Most are related to changing your ingress configuration or how you're connecting to the service, which I'm not going to go into. Another option is to make port forwarding work to eliminate the need for the ingress machinery.
You can make port forwarding work by modifying the CockroachDB config file slightly. Change the --host flag in the invocation of the Cockroach binary to --advertise-host instead. That way, the process will listen on localhost in addition to its hostname, which will make port forwarding work.
edit: To follow up on this, I've switched the default configuration in the CockroachDB repo to use --advertise-host instead of --host, so port forwarding works by default now.
I don't know whether it should technically work to proxy CockroachDB through an nginx instance, but your setup fails for another reason. When you specify a servicePort in the rules section, you tell k8s which port of the backend Service the ingress should send traffic to. The ingress itself still accepts traffic on the default ports 80/443, not on your desired port. So in your case you should try connecting on port 80 instead.
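As an aside, the tcp-services ConfigMap the asker linked is the documented way to expose a raw TCP port through ingress-nginx; a minimal sketch, assuming the controller runs in the ingress-nginx namespace and the database Service lives in the default namespace:

apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx       # assumed controller namespace
data:
  # external port: "<namespace>/<service>:<service port>"
  "26257": "default/db-cockroachdb-public:26257"

For this to take effect, the controller must be started with --tcp-services-configmap pointing at that ConfigMap, and the controller's own Service must also expose port 26257.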