How can I enable a Kubernetes NetworkPolicy for a URL?

In my Kubernetes cluster, all network traffic crossing the namespace border is blocked, and I have to allow it explicitly with a network policy.
The official Kubernetes documentation describes NetworkPolicies in terms of pod labels or IP ranges, but I need to connect to a specific URL.
Of course, I can look up the IP of this URL and allow it, but if the IP changes I will get into trouble.
Is there any recommended way to allow communication with only a specific URL?

TL;DR: Not possible.
According to the Kubernetes API Reference Docs (NetworkPolicyPeer v1 networking.k8s.io), the fields you can specify in egress.to are:
ipBlock
IPBlock describes a particular CIDR (Ex. "192.168.1.1/24","2001:db9::/64") that is allowed to the pods matched by a NetworkPolicySpec's podSelector.
namespaceSelector
Selects Namespaces using cluster-scoped labels. This field follows standard label selector semantics; if present but empty, it selects all namespaces. If PodSelector is also set, then the NetworkPolicyPeer as a whole selects the Pods matching PodSelector in the Namespaces selected by NamespaceSelector. Otherwise it selects all Pods in the Namespaces selected by NamespaceSelector.
podSelector
This is a label selector which selects Pods. This field follows standard label selector semantics; if present but empty, it selects all pods. If NamespaceSelector is also set, then the NetworkPolicyPeer as a whole selects the Pods matching PodSelector in the Namespaces selected by NamespaceSelector. Otherwise it selects the Pods matching PodSelector in the policy's own Namespace.
Or, to put it more bluntly: a NetworkPolicy can be applied to a specific IP range, specific namespace(s), or specific pod(s). URLs are not supported.
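For illustration, the closest supported workaround is to pin the egress rule to the IP range the URL currently resolves to; a minimal sketch (the names, labels and CIDR below are hypothetical):
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-egress-to-external-ip   # hypothetical name
  namespace: my-namespace             # hypothetical namespace
spec:
  podSelector:
    matchLabels:
      app: my-app                     # hypothetical pod label
  policyTypes:
    - Egress
  egress:
    - to:
        - ipBlock:
            cidr: 203.0.113.10/32     # whatever IP the URL resolves to today (assumption)
      ports:
        - protocol: TCP
          port: 443
If the IP behind the URL changes, this policy has to be updated by hand, which is exactly the maintenance problem the question describes.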
Since you are already using Calico, you may want to have a look at Advanced egress access controls, which gives you exactly what you are looking for.
It is, however, behind a paywall, being a part of Calico Enterprise.

Related

Understanding pod labels vs annotations

I am trying to understand the difference between labels and annotations.
The standard documentation says that annotations capture "non-identifying information".
Labels, on the other hand, can have selectors applied to them; labels are used to organise objects in a Kubernetes cluster.
If this is the case, then why does Istio use pod annotations instead of labels for various settings: https://istio.io/latest/docs/reference/config/annotations/
Aren't labels a good approach?
I am just trying to understand what advantages annotations provide, given that the Istio developers chose to use them.
Extending Burak's answer:
Kubernetes labels and annotations are both ways of adding metadata to Kubernetes objects. The similarities end there, however. Kubernetes labels allow you to identify, select and operate on Kubernetes objects. Annotations are non-identifying metadata and do none of these things.
Labels are mostly attached to resources such as Pods, ReplicaSets, and so on. They are also used for traffic routing, for example wiring a Deployment's Pods to a Service.
Labels are stored in the etcd database, so you can search and select objects by them.
Annotations are mostly used to store metadata and configuration, if any.
Metadata such as owner details, the last Helm release (if using Helm), or sidecar injection settings.
You could store owner details in labels, but Kubernetes uses labels for traffic routing from a Service to a Deployment's Pods, and the labels must be the same on both resources (Deployment & Service) for the routing to work.
What would you do in that case to keep the labels matching? Use the same service-owner name inside every Deployment & Service, while running multiple distributed services managed by different teams and service owners? (A sketch of this label matching follows below.)
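For illustration, a minimal sketch of how labels (identifying, used for selection and routing) and annotations (non-identifying metadata) sit side by side; every name below is hypothetical:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app                # hypothetical
  annotations:
    owner: team-payments        # metadata only, never used for selection
spec:
  selector:
    matchLabels:
      app: demo-app             # must match the pod template labels below
  template:
    metadata:
      labels:
        app: demo-app           # identifying label used for selection/routing
    spec:
      containers:
        - name: demo-app
          image: nginx:1.25
---
apiVersion: v1
kind: Service
metadata:
  name: demo-app
spec:
  selector:
    app: demo-app               # the same label: this is how the Service finds the Pods
  ports:
    - port: 80
The Service selects Pods purely by the app label; the owner annotation is only read by humans or tooling and never takes part in selection.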
If you look at Istio's annotations, some of them exist just to store metadata, for example install.operator.istio.io/chart-owner and install.operator.istio.io/owner-generation.
Read more at: https://istio.io/latest/docs/reference/config/annotations/
You should also compare the syntax rules of labels and annotations: label values are restricted in length and character set because they are meant for selection, whereas annotation values can hold larger, unstructured data.

Kubernetes dashboard - how to view the list of defined namespaces without having to search for a namespace manually

When I create a RoleBinding for a certain user,
I wish this user could access the k8s dashboard and immediately see the allowed namespaces I defined in the YAML.
The issue is that, after a user accesses the dashboard, he needs to type the required namespace manually and press 'Enter'; only then does he see its resources!
If, for example, I define namespaces A, B and C for user John, can you provide a YAML template that allows John to access the dashboard and immediately see and select between namespaces A, B and C?
(Currently, under the 'namespaces' field, he needs to type 'A' and press 'Enter' in order to view it.)
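For reference, per-namespace access of the kind described is usually granted with one RoleBinding per namespace; a minimal sketch for the hypothetical user John and namespace A (repeated for B and C), assuming the built-in view ClusterRole is sufficient:
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: john-view               # hypothetical name
  namespace: A                  # repeat this binding for namespaces B and C
subjects:
  - kind: User
    name: john                  # hypothetical user name
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: view                    # built-in read-only role
  apiGroup: rbac.authorization.k8s.io
Note that bindings like this do not make the dashboard's namespace dropdown list A, B and C automatically; populating that dropdown requires cluster-wide permission to list the namespaces resource, which is likely why the user ends up typing the namespace by hand.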

block a rundeck node from arbitrary cloud and non-cloud resource discovery?

Is there a way to block arbitrary nodes being reported/discovered/red-status in Rundeck? With all the sources feeding in (GCP plugin, resources.xml, etc.), I often find that a job which applies to "all" nodes goes red because an individual instance isn't configured yet, giving the whole job a red status.
Would be great if there were a way to do an easy block from the GUI and CLI for all resources for the given node.
You can use custom node-filter rules based on node status using the health check status (you can also filter by name, tags, IP address, regex, etc.). Take a look at this (the "Saving filters" section has a good example).
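As a rough illustration of the filter syntax in a job's node filter fields (the tag and hostname pattern below are hypothetical):
Include filter:  tags: configured
Exclude filter:  name: .*-not-yet-provisioned-.*
The exclude filter accepts a regex on the node name, so freshly discovered but unconfigured instances can be kept out of the job's node set.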
Put a .hostnamepattern. regex in the job's exclude filter and hit Save.
Simplify, simplify, simplify.

Ad Hoc Kubernetes Queries

Is there a way to easily query Kubernetes resources in an intuitive way? Basically I want to run queries to extract info about objects which match my criteria. Currently I face an issue where my matchLabels selector isn't quite working, and I would like to run the label-matching query manually to try and debug the issue.
Basically in a pseudo code way:
Select * from pv where labels in [red,blue,green]
Are there any third-party tools that do something like this? Currently all I have to work with is the search box on the dashboard, which isn't quite robust enough.
You could use kubectl with JSONPath (https://kubernetes.io/docs/reference/kubectl/jsonpath/). More information on JSONPath: https://github.com/json-path/JsonPath
It allows you to query any resource property, for example:
kubectl get pods -o=jsonpath='{$.items[?(@.metadata.namespace=="default")].metadata.name}'
This would list all pod names in the namespace "default". Your pseudo code would be something along the lines of:
kubectl get pv -o=jsonpath='{$.items[?(@.metadata.labels.color in ["red","blue","green"])]}'
(where color stands for whichever label key you are matching; note that kubectl's built-in JSONPath filters only support simple comparisons, so the in operator here is still pseudo code).
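If the goal is just to filter by label values, kubectl's set-based label selectors can express this directly, assuming the label key is color:
kubectl get pv -l 'color in (red,blue,green)'
Set-based selectors (in, notin, exists) are documented at https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/.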

Varying labels in Prometheus

I annotate my Kubernetes objects with things like version and whom to contact when there are failures. How would I relay this information to Prometheus, knowing that these annotation values will frequently change? I can't capture this information in Prometheus labels, as they serve as the primary key for a target (e.g. if the version changes, it's a new target altogether, which I don't want). Thanks!
I just wrote a blog post about this exact topic! https://www.weave.works/aggregating-pod-resource-cpu-memory-usage-arbitrary-labels-prometheus/
The trick is Kubelet/cAdvisor doesn't expose them directly, so I run a little exporter which does, and join this with the pod name in PromQL. The exporter is: https://github.com/tomwilkie/kube-api-exporter
You can do a join in Prometheus like this:
sum by (namespace, name) (
  sum(rate(container_cpu_usage_seconds_total{image!=""}[5m])) by (pod_name, namespace)
    * on (pod_name) group_left(name)
  k8s_pod_labels{job="monitoring/kube-api-exporter"}
)
Here I'm using a label called "name", but it could be any label.
We use the same trick to get metrics (such as error rate) by version, which we then use to drive our continuous deployment system. kube-api-exporter exports a bunch of useful meta-information about Kubernetes objects to Prometheus.
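As a sketch of that per-version variant, the same join keyed on a version label instead of name (the http_requests_total metric and its pod_name label are assumptions about your own application metrics, not something kube-api-exporter provides):
sum by (namespace, version) (
  sum(rate(http_requests_total{status=~"5.."}[5m])) by (pod_name, namespace)
    * on (pod_name) group_left(version)
  k8s_pod_labels{job="monitoring/kube-api-exporter"}
)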
Hope this helps!