Whitelisting IP addresses for network traffic through Istio gateways - kubernetes

I tried whitelisting IP addresses for my Kubernetes cluster's incoming traffic using this example:
Although this works as expected, I wanted to go a step further and see whether I can reference an Istio Gateway or VirtualService when I set up the Istio rule, instead of the LoadBalancer (ingressgateway).
apiVersion: config.istio.io/v1alpha2
kind: rule
metadata:
  name: checkip
  namespace: my-namespace
spec:
  match: source.labels["app"] == "my-app"
  actions:
  - handler: whitelistip.listchecker
    instances:
    - sourceip.listentry
---
Here my-app is of kind: Gateway, with a certain host and port, and is labelled app=my-app.
I am using Istio version 1.1.1.
My cluster also has all of istio-system running, with Envoy sidecars on almost all service pods.

You are confusing one thing: in the above rule, match: source.labels["app"] == "my-app" does not refer to any resource's labels, but to the pod's labels.
From OutputTemplate Documentation:
sourceLabels | Refers to source pod labels.
attributebindings can refer to this field using $out.sourcelabels
And you can verify this by looking for resources with the "app=istio-ingressgateway" label via:
kubectl get pods,svc -n istio-system -l "app=istio-ingressgateway" --show-labels
You can check this blog post from Istio about the Mixer Adapter Model to understand the complete Mixer model: its handlers, instances and rules.
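A minimal sketch of what that implies for your rule (untested against your setup): match on the ingress gateway pods' own labels rather than on the Gateway resource's name:
apiVersion: config.istio.io/v1alpha2
kind: rule
metadata:
  name: checkip
  namespace: my-namespace
spec:
  # "app: istio-ingressgateway" is a label on the ingress gateway pods,
  # not on your Gateway resource
  match: source.labels["app"] == "istio-ingressgateway"
  actions:
  - handler: whitelistip.listchecker
    instances:
    - sourceip.listentry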
Hope it helps!

Related

Prometheus returns error context deadline exceeded

I deployed Prometheus with a Helm chart from Rancher. Targets such as Alertmanager, Prometheus, Grafana, Node-exporter, Kubelet etc. are configured automatically. The Alertmanager endpoint, for example, refers to the IP address of its specific pod. I also configured multiple targets successfully, like Jira and Confluence.
Since the service external-dns is running in the namespace kube-system, it's also configured automatically. But only this service is getting the error Context deadline exceeded.
I checked from a random pod whether those metrics are accessible by running the command curl -s http://<IP-ADDRESS-POD>:7979/metrics. I also did this with the service IP address (kubectl get service external-dns and curl -s http://<IP-ADDRESS-SVC>:7979/metrics).
Both of these curl commands returned the metrics within a second. So increasing the scrape timeout won't help.
But when I exec into the Prometheus container and use the promtool debug metrics command, it shows the same behaviour as in my browser: external-dns returns a timeout with both of the IP addresses, while any other target just returns its metrics.
I also don't think it's an SSL issue, because I already injected the correct CA bundle for the Jira and Confluence targets.
So, does anybody have an idea? :)
I had to edit the NetworkPolicy in the kube-system namespace. The containers from the cattle-monitoring-system namespace are now allowed to access the containers from the kube-system namespace. You can upload your NetworkPolicies here and it visualizes which resources have access and which don't. The NetworkPolicy now looks like this:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-network-policy
  namespace: kube-system
spec:
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          name: cattle-monitoring-system
  - from:
    - podSelector: {}
  podSelector: {}
  policyTypes:
  - Ingress
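Note that the namespaceSelector above only matches if the cattle-monitoring-system namespace actually carries a name=cattle-monitoring-system label, which you can verify with:
kubectl get namespace cattle-monitoring-system --show-labels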

Add a simple A record to the CoreDNS service on Kubernetes

Here is the issue:
We have several microk8s clusters running on different networks; yet each has access to our storage network where our NAS appliances are.
Within Kubernetes, we create disks with an NFS provisioner (nfs-externalsubdir). Some disks were created with the IP of the NAS server specified.
When we had to change that IP, we discovered that the disk was bound to it, and changing the IP meant creating a new storage resource.
To avoid this, we would like to be able to set a DNS record at the Kubernetes cluster level, so we could create storage resources with the NFS provisioner based on a name and not an IP, and alter the DNS record when needed (when we upgrade or migrate our external NAS appliances, for instance).
For instance, I'd like to tell every microk8s environment that:
192.168.1.4 my-nas.mydomain.local
... like I would within the /etc/hosts file.
Is there a proper way to achieve this?
I tried to follow the advice in this link: Is there a way to add arbitrary records to kube-dns?
(the answer upvoted 15 times, the cluster-wise section) and restarted a deployment, but it didn't work.
I cannot use the hostAliases feature since it isn't exposed by every chart we are using; that's why I'm looking for a more global solution.
Best Regards,
You can set your custom DNS in K8s using kube-dns (CoreDNS).
You have to inject/pass the configuration file as a ConfigMap mounted into the CoreDNS volume.
The ConfigMap will look like:
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health {
            lameduck 5s
        }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            fallthrough in-addr.arpa ip6.arpa
            ttl 30
        }
        prometheus :9153
        forward . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
    }
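For the record asked about (my-nas.mydomain.local -> 192.168.1.4), one option, sketched here from the question's own values, is to add the CoreDNS hosts plugin inside that .:53 { ... } server block:
        hosts {
            192.168.1.4 my-nas.mydomain.local
            fallthrough
        }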
You can read more about it at:
https://kubernetes.io/docs/tasks/administer-cluster/dns-custom-nameservers/
https://platform9.com/kb/kubernetes/how-to-customize-coredns-configuration-for-adding-additional-ext
Alternatively, you can also use external-dns together with CoreDNS.
You can annotate the Service (resource) and external-dns will add the address to CoreDNS.
Read more about it at:
https://github.com/kubernetes-sigs/external-dns/blob/master/docs/tutorials/coredns.md
https://docs.mirantis.com/mcp/q4-18/mcp-deployment-guide/deploy-mcp-cluster-using-drivetrain/deploy-k8s/external-dns/verify-external-dns/coredns-etxdns-verify.html
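As a rough sketch only (it assumes external-dns is deployed with the CoreDNS/etcd provider as in the tutorials above, and depending on the Service type and external-dns flags more configuration may be needed), the annotation looks like this:
apiVersion: v1
kind: Service
metadata:
  name: my-nas   # hypothetical Service name
  annotations:
    # external-dns picks this hostname up and publishes the record
    external-dns.alpha.kubernetes.io/hostname: my-nas.mydomain.local
spec:
  ports:
  - port: 2049   # example port only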
...we could create storage resources with the NFS provisioner based on a name and not an IP, and we could alter the DNS record when needed...
For this you can try a headless Service (a Service without a selector, backed by a manual Endpoints object), without touching CoreDNS:
apiVersion: v1
kind: Service
metadata:
  name: my-nas
  namespace: kube-system # <-- you can place it somewhere else
  labels:
    app: my-nas
spec:
  clusterIP: None # headless: the name resolves directly to the endpoint IP
  ports:
  - protocol: TCP
    port: <nas port>
---
apiVersion: v1
kind: Endpoints
metadata:
  name: my-nas
  namespace: kube-system # must be the same namespace as the Service
subsets:
- addresses:
  - ip: 192.168.1.4
  ports:
  - port: <nas port>
Use it as: my-nas.kube-system.svc.cluster.local

Can I guarantee the "kubernetes" Service will retain a consistent ClusterIP following cluster creation even if I attempt to modify or recreate it?

A few of our Pods access the Kubernetes API via the "kubernetes" Service. We're in the process of applying Network Policies which allow access to the K8S API, but the only way we've found to accomplish this is to query for the "kubernetes" Service's ClusterIP, and include it as an ipBlock within an egress rule within the Network Policy.
Specifically, this value:
kubectl get services kubernetes --namespace default -o jsonpath='{.spec.clusterIP}'
Is it possible for the "kubernetes" Service ClusterIP to change to a value other than what it was initialized with during cluster creation? If so, there's a possibility our configuration will break. Our hope is that it's not possible, but we're hunting for official supporting documentation.
The short answer is no.
More details:
You cannot change/edit the clusterIP because it is immutable, so kubectl edit will not work for this field.
The Service's cluster IP can, however, easily change if you kubectl delete -f svc.yaml and then kubectl apply -f svc.yaml again.
Hence, never rely on the Service IP; Services are designed to be referred to by DNS:
Use service-name if the communicator is inside the same namespace
Use service-name.service-namespace if the communicator is inside or outside the same namespace.
Use service-name.service-namespace.svc.cluster.local for FQDN.
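Taking the kubernetes Service from the question (which lives in the default namespace) as a concrete example, those three forms are:
kubernetes                               # from a Pod in the default namespace
kubernetes.default                       # from a Pod in any namespace
kubernetes.default.svc.cluster.local     # fully qualified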
Yes, that is possible.
If you specify clusterIP in your Service YAML file (Service.spec.clusterIP), the IP address of your Service will not be random and will always be the same. The Service YAML should look like this:
apiVersion: v1
kind: Service
metadata:
  name: web
  namespace: default
spec:
  clusterIP: 10.96.0.100
  ports:
  - name: https
    port: 443
    protocol: TCP
    targetPort: 80
  type: ClusterIP
Be careful: the IP you choose must be unassigned in your cluster and must fall within the cluster's service CIDR (the API server rejects addresses outside the service-cluster-ip-range).

IP Blacklisting in Istio

The IP whitelisting/blacklisting example explained here https://kubernetes.io/docs/tutorials/services/source-ip/ uses the source.ip attribute. However, in Kubernetes (a cluster running on Docker for Desktop) source.ip returns the IP of kube-proxy. A suggested workaround is to use request.headers["X-Real-IP"]; however, it doesn't seem to work and also returns the kube-proxy IP on Docker for Desktop on Mac.
https://github.com/istio/istio/issues/7328 mentions this issue and states:
With a proxy that terminates the client connection and opens a new connection to your nodes/endpoints. In such cases the source IP will always be that of the cloud LB, not that of the client.
With a packet forwarder, such that requests from the client sent to the loadbalancer VIP end up at the node with the source IP of the client, not an intermediate proxy.
Loadbalancers in the first category must use an agreed upon protocol between the loadbalancer and backend to communicate the true client IP such as the HTTP X-FORWARDED-FOR header, or the proxy protocol.
Can someone please help how can we define a protocol to get the client IP from the loadbalancer?
Maybe you are confusing kube-proxy and Istio. By default Kubernetes uses kube-proxy, but you can install Istio, which injects a new proxy per pod to control the traffic in both directions to the services inside the pod.
With that said, you can install Istio on your cluster, enable it only for the services you need, and apply a blacklist using the Istio mechanisms:
https://istio.io/docs/tasks/policy-enforcement/denial-and-list/
To make a blacklist using the source IP, we have to let Istio manage how the source IP address is fetched, and use a configuration like this one taken from the docs:
apiVersion: config.istio.io/v1alpha2
kind: handler
metadata:
  name: whitelistip
spec:
  compiledAdapter: listchecker
  params:
    # providerUrl: ordinarily black and white lists are maintained
    # externally and fetched asynchronously using the providerUrl.
    overrides: ["10.57.0.0/16"]  # overrides provide a static list
    blacklist: false
    entryType: IP_ADDRESSES
---
apiVersion: config.istio.io/v1alpha2
kind: instance
metadata:
  name: sourceip
spec:
  compiledTemplate: listentry
  params:
    value: source.ip | ip("0.0.0.0")
---
apiVersion: config.istio.io/v1alpha2
kind: rule
metadata:
  name: checkip
spec:
  match: source.labels["istio"] == "ingressgateway"
  actions:
  - handler: whitelistip
    instances: [ sourceip ]
---
You can use the providerUrl param to maintain an external list.
Also check the use of externalTrafficPolicy: Local on the istio-ingressgateway Service.
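A minimal sketch of that setting, assuming the default istio-ingressgateway Service in the istio-system namespace:
spec:
  type: LoadBalancer
  # "Local" preserves the original client source IP instead of SNATing the
  # traffic through another node (at the cost of routing only to local endpoints)
  externalTrafficPolicy: Local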
As per the comments, my last piece of advice is to use a different ingress controller to avoid going through kube-proxy; my recommendation is the nginx ingress controller:
https://github.com/kubernetes/ingress-nginx
You can configure this ingress controller as a regular NGINX instance acting as a proxy.
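For example (a sketch only; the ConfigMap name and namespace depend on how ingress-nginx was installed), the controller can be told to trust client-IP headers forwarded by an upstream load balancer:
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller   # name assumed from a default ingress-nginx install
  namespace: ingress-nginx
data:
  # trust X-Forwarded-For / X-Real-IP set by the load balancer in front
  use-forwarded-headers: "true"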

Does GKE support nginx-ingress with static ip?

I have been using the Google Cloud Load Balancer ingress. However, I'm trying to install an nginxinc/kubernetes-ingress controller on a node with a static IP address in GKE.
Can I use Google's Cloud Load Balancer ingress controller in the same cluster?
How can we use the nginxinc/kubernetes-ingress with a static IP?
Thanks
In case you're using Helm to deploy nginx-ingress:
First create a static IP address. In Google Cloud, Network Load Balancers (NLBs) only support regional static IPs:
gcloud compute addresses create my-static-ip-address --region us-east4
Then install the nginx-ingress chart with the IP address as the loadBalancerIP parameter:
helm install --name nginx-ingress stable/nginx-ingress --namespace my-namespace --set controller.service.loadBalancerIP=35.186.172.1
First question
As Radek 'Goblin' Pieczonka already pointed out to you, it is possible to do so.
I just wanted to link you to the official documentation regarding this matter:
If you have multiple Ingress controllers in a single cluster, you can
pick one by specifying the ingress.class annotation, eg creating an
Ingress with an annotation like
metadata:
  name: foo
  annotations:
    kubernetes.io/ingress.class: "gce"
will target the GCE controller, forcing the nginx controller to ignore it, while an annotation like
metadata:
  name: foo
  annotations:
    kubernetes.io/ingress.class: "nginx"
will target the nginx controller, forcing the GCE controller to ignore it.
Second question
Since you are making use of the Google Cloud Platform I can give you further details regarding this implementation of Kubernetes in Google.
Consider that:
By default, Kubernetes Engine allocates ephemeral external IP
addresses for HTTP applications exposed through an Ingress.
However, of course you can use a static IP address for your Ingress resource;
there is an official step-by-step guide showing how to set up HTTP Load Balancing with Ingress, how to link a static IP to the Ingress resource, and how to promote an "ephemeral" already-in-use IP to a static one.
Try to go through it and if you face some issue update the question and ask!
For the nginx-ingress controller you have to set the external IP on the service:
spec:
  loadBalancerIP: "42.42.42.42"
  externalTrafficPolicy: "Local"
It is perfectly fine to run multiple ingress controllers inside Kubernetes, but they need to be aware of which Ingress objects they are supposed to instantiate. That is done with a special annotation like:
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: "nginx"
which tells that this Ingress is expected to be handled by, and only by, the nginx ingress controller.
As for the IP, some cloud providers allow the loadBalancerIP to be specified; with this you can control the public IP of a Service.
Create a static IP:
gcloud compute addresses create my-ip --global
Describe the static IP (this will show you the reserved address):
gcloud compute addresses describe my-ip --global
Now add these annotations:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-service
  annotations:
    kubernetes.io/ingress.class: "gce" # <----
    kubernetes.io/ingress.global-static-ip-name: my-ip # <----
Apply the Ingress:
kubectl apply -f ingress.yaml
(Now wait for 2 minutes.)
Run this and it will reflect the new IP:
kubectl get ingress