Istio AuthorizationPolicy rules questions - kubernetes

I’ve been testing Istio (1.6) authorization policies and would like to confirm the following:
Can I use k8s service names as shown below, where httpbin.bar is the service name for the deployment/workload httpbin:
- to:
  - operation:
      hosts: ["httpbin.bar"]
I have the following rule: only ALLOW access to the httpbin.bar service from the sleep service account in the foo namespace.
---
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: whitelist-httpbin-bar
  namespace: bar
spec:
  action: ALLOW
  rules:
  - from:
    - source:
        principals: ["cluster.local/ns/foo/sa/sleep"]
  - to:
    - operation:
        hosts: ["httpbin.bar"]
I set up two services: httpbin.bar and privatehttpbin.bar. My assumption was that the policy would block access to privatehttpbin.bar, but this is not the case. On a side note, I deliberately avoided adding selector.matchLabels because, as far as I can tell, the rule should only succeed for httpbin.bar.
The docs state:
A match occurs when at least one source, operation and condition matches the request.
as per here.
I interpreted this to mean that AND logic applies to the source and operation.
I would appreciate finding out why this may not be working, or whether my understanding needs to be corrected.

With your AuthorizationPolicy object, you have two rules in the namespace bar:
Allow any request coming from the foo namespace with service account sleep, to any service.
Allow any request to the httpbin service, from any namespace and any service account.
So it is an OR you are applying.
If you want an AND to be applied, meaning allow any request from the namespace foo with service account sleep to talk to the service httpbin in the namespace bar, you need to apply the following rule:
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: whitelist-httpbin-bar
  namespace: bar
spec:
  rules:
  - from:
    - source:
        principals: ["cluster.local/ns/foo/sa/sleep"]
    to: # <- remove the dash (-) from here
    - operation:
        hosts: ["httpbin.bar"]

On the first point: you can specify the host name by Kubernetes service name, so httpbin.bar is acceptable for the hosts field.
On the second point, as per the docs:
Authorization Policy scope (target) is determined by
“metadata/namespace” and an optional “selector”.
“metadata/namespace” tells which namespace the policy applies in. If set
to the root namespace, the policy applies to all namespaces in a mesh.
Since whitelist-httpbin-bar has no selector, it applies to every workload in the bar namespace, including privatehttpbin. And because the from and to clauses are two separate rules that are OR'd together, any request from the sleep service account matches the first rule on its own, so access to privatehttpbin from sleep is still allowed.
Also keep the implicit behavior in mind:
If there are no ALLOW policies for the workload, allow the request.
Hope this helps.
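Building on the optional “selector” mentioned above: if the intent is to scope the policy to the httpbin workload only, a selector can be added. A sketch, assuming (hypothetically) that the httpbin deployment carries the label app: httpbin:

```yaml
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: whitelist-httpbin-bar
  namespace: bar
spec:
  selector:
    matchLabels:
      app: httpbin   # assumed workload label, adjust to your deployment
  action: ALLOW
  rules:
  - from:
    - source:
        principals: ["cluster.local/ns/foo/sa/sleep"]
    to:
    - operation:
        hosts: ["httpbin.bar"]
```

With a selector, other workloads in bar (such as privatehttpbin) are no longer affected by this policy at all.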

Related

How to ALLOW access to a specific path only from within a namespace in istio/k8s?

I have an application running on K8s/Istio in the foo namespace. I don't have any AuthorizationPolicy and everything works as expected. Now I want to ALLOW access to a specific path only from within the bar namespace. So I created an AuthorizationPolicy as follows:
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: deny-specific-path
  namespace: foo
spec:
  selector:
    matchLabels:
      app: myapp
  action: DENY
  rules:
  - to:
    - operation:
        paths: ["/specific/path/*"]
  - from:
    - source:
        notNamespaces: ["bar"]
My understanding is that the above AP should only allow access to the /specific/path/* path from the bar namespace. Access to any other path should not be affected and should work as before. But this causes other paths in my application to be denied, including the home page /home of the app. What is wrong here? I appreciate any help.
My understanding is the above AP should only allow access to /specific/path/* path from bar namespace.
This is not correct, because of the list of elements in your rules section. You have two rules here.
rules:
- to:
  - operation:
      paths: ["/specific/path/*"]
- from:
  - source:
      notNamespaces: ["bar"]
The first rule applies to any request targeting /specific/path/*, and the second applies to any request coming from anywhere except the bar namespace.
Each element in a list of rules is OR'd together. It sounds like you want to AND these two conditions, so try removing the - from the second one (making it a single rule with both a from and a to clause).
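For illustration, the merged single-rule form (denying /specific/path/* only for requests from outside bar) would look something like this:

```yaml
rules:
- to:
  - operation:
      paths: ["/specific/path/*"]
  from:
  - source:
      notNamespaces: ["bar"]
```

Now the source and operation must both match before the DENY action fires, so other paths are left untouched.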

EKS Block specific external IP from viewing nginx application

I have an EKS cluster with an nginx deployment in the gitlab-managed-apps namespace. The application is exposed to the public through an ALB ingress. I'm trying to block a specific public IP (e.g. x.x.x.x/32) from accessing the webpage. I tried Calico and K8s network policies; nothing worked for me. I created the Calico policy below with my limited knowledge of network policies, but it blocks everything from accessing the nginx app, not just the x.x.x.x/32 external IP, showing everyone a 504 Gateway Timeout from the ALB.
apiVersion: projectcalico.org/v3
kind: NetworkPolicy
metadata:
  name: ingress-external
  namespace: gitlab-managed-apps
spec:
  selector: app == 'nginx'
  types:
  - Ingress
  ingress:
  - action: Deny
    source:
      nets:
      - x.x.x.x/32
Try this:
apiVersion: projectcalico.org/v3
kind: NetworkPolicy
metadata:
  name: ingress-external
  namespace: gitlab-managed-apps
spec:
  selector: app == 'nginx'
  types:
  - Ingress
  ingress:
  - action: Deny
    source:
      nets:
      - x.x.x.x/32
  - action: Allow
The Calico docs suggest:
If one or more network policies apply to a pod containing ingress rules, then only the ingress traffic specifically allowed by those policies is allowed.
This means that once a policy applies, any traffic is denied by default and only allowed if you explicitly allow it. That is why adding the additional action: Allow rule permits all other traffic that was not matched by the previous rule.
Also remember what the docs say about rules:
A single rule matches a set of packets and applies some action to them. When multiple rules are specified, they are executed in order.
So the default Allow rule has to follow the Deny rule for the specific IP, not the other way around.

Istio routing unique URL to specific pod

I am interested in using Istio in a use case where I spin up a pod based on some event (a user starting a game, for example) and allow that user to connect to that specific pod through a unique URL. Then, when the game is over, I can spin the pod down.
I am trying to configure Istio so that either the subdomain or the URL path indicates the specific pod to which the request should be routed.
Maybe there is some way to dynamically control label matching? For example, <pod_label>.api.com routes to the pod matching label label_name: <pod_label>, without having to update VirtualServices and DestinationRules every time a pod is created?
Edit:
A pseudo-config would look similar to:
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: dynamic-route
spec:
  hosts:
  - r.prod.svc.cluster.local
  http:
  - match:
    - uri:
        prefix: "/pod/{pod_name}"
      ignoreUriCase: true
    route:
    - destination:
        host: {pod_name}.prod.svc.cluster.local
(not sure if this would belong in the VirtualService or the DestinationRule)
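Istio does not support variable substitution like {pod_name} in a VirtualService, so a common workaround is to create a per-pod Service (and a matching route) at the moment the pod is spun up. A minimal sketch of such a Service, where the name game-abc123, the label game-id: abc123, and port 8080 are all hypothetical values chosen for illustration:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: game-abc123       # hypothetical per-pod service name
  namespace: prod
spec:
  selector:
    game-id: abc123       # assumed unique label set on the pod at spin-up
  ports:
  - port: 80
    targetPort: 8080      # assumed container port
```

The controller that creates the pod would create this Service alongside it and delete both when the game ends.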

Do I need a istio sidecar proxy at client end for routing rules to be applied?

I have couple of services named svc A and svc B with request flow as follows:
svc A --> svc B
I have injected sidecar with svc B and then added the routing rules via VirtualServices object as:
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: b
  namespace: default
spec:
  hosts:
  - b.default.svc.cluster.local
  http:
  - route:
    - destination:
        host: b.default.svc.cluster.local
    fault:
      abort:
        percentage:
          value: 100
        httpStatus: 403
These rules are only applied when svc A has an Istio sidecar proxy, which makes me wonder whether we need the proxy on the client side as well. I was expecting that only the service for which I added the rules would need the sidecar. I can't think of any technical requirement to have it anywhere other than alongside svc B.
Yes, Service A needs a sidecar. It's confusing I admit, but the way to think of the VirtualService resource is "where do I find the backends I want to talk to and what service should they appear to provide me?" A's sidecar is its helper which does things on its behalf like load-balancing, and in your case fault injection (Service B is reliable; it's Service A that wants it to seem unreliable).
The comments that A and B both need sidecars in order to communicate at all aren't correct (unless you want mTLS), but if you want the mesh to provide additional services to A, then A needs a sidecar.
Yes, you should inject the sidecar proxy into service A as well; only then can the two services communicate with each other through proxies.
First, run:
gcloud container clusters describe [your-cluster-name] | grep -e clusterIpv4Cidr -e servicesIpv4Cidr
This will give you two IP ranges. Add these into your deployment YAML as shown below (replacing the IP ranges with yours):
apiVersion: v1
kind: Pod
metadata:
  name: [Your-Pod-Name]
  annotations:
    sidecar.istio.io/inject: "true"
    traffic.sidecar.istio.io/includeOutboundIPRanges: 10.32.0.0/14,10.35.240.0/20
This limits the sidecar to intercepting only in-cluster traffic, which allows your services to reach the internet directly.

How to allow/deny http requests from other namespaces of the same cluster?

In a cluster with two namespaces (ns1 and ns2), I deploy the same app (deployment) and expose it with a service.
I thought separate namespaces would prevent a pod in ns2 from executing curl http://deployment.ns1, but apparently it's possible.
So my question is, how to allow/deny such cross namespaces operations? For example:
pods in ns1 should accept requests from any namespace
pods (or service?) in ns2 should deny all requests from other namespaces
Good that you are working with namespace isolation.
Deploy a NetworkPolicy in ns1 that allows all ingress; you can look up the documentation on defining an ingress policy that allows all inbound traffic.
Likewise for ns2, create a NetworkPolicy that denies all ingress from other namespaces and deploy it in ns2. Again, the docs will help with the YAML construct.
It may look something like this:
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  namespace: ns1
  name: web-allow-all-namespaces
spec:
  podSelector:
    matchLabels:
      app: app_name_ns1
  ingress:
  - from:
    - namespaceSelector: {}
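For ns2, a sketch of the deny-from-other-namespaces counterpart (the label app: app_name_ns2 is an assumption, mirroring the ns1 example): an empty podSelector in the from clause matches only pods in the policy's own namespace, so traffic from other namespaces is denied.

```yaml
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  namespace: ns2
  name: web-deny-other-namespaces
spec:
  podSelector:
    matchLabels:
      app: app_name_ns2   # assumed label on the ns2 deployment
  ingress:
  - from:
    - podSelector: {}     # only pods within ns2 itself
```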
It may not be the answer you want, but I can point out the helpful feature for implementing your requirements.
AFAIK Kubernetes can define network policies to limit network access.
Refer to Declare Network Policy for more details, and to the Default policies section.
For OpenShift, see Setting a Default NetworkPolicy for New Projects.