How can I make authorization decisions based on claims in the JWT when using an Istio JWT OriginAuthenticationMethod Policy?
Have you had a look through Istio -> Concepts -> Security?
There is an example under ServiceRoleBinding that might be what you're looking for:
apiVersion: "rbac.istio.io/v1alpha1"
kind: ServiceRoleBinding
metadata:
name: test-binding-products
namespace: default
spec:
subjects:
properties:
request.auth.claims[email]: "a#foo.com"
roleRef:
kind: ServiceRole
name: "products-viewer"
I have set up a custom ServiceAccount with a ClusterRole binding. I can see the account and its ca.crt, namespace and token.
Definition:
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    app: my-app
  name: my-app-svcaccount
  namespace: my-app-ns
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: my-app-svcaccount
  namespace: my-app-ns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: admin
subjects:
- kind: ServiceAccount
  name: my-app-svcaccount
  namespace: my-app-ns
In my pod spec I have specified the serviceAccountName and expect it to be mounted into the pod. When I go into the pod, I see the /run/secrets/kubernetes.io/serviceaccount/ folder as expected.
Definition within deployment pod spec:
serviceAccountName: my-app-svcaccount
However, the token in that folder is not the one from my serviceAccountName, nor from any other secret in my namespace. I'm pulling my hair out over what could be the reason for this. Where can this token be coming from, and how can I find out where the incorrect token comes from?
What I can see is that the mounted volume name does not refer to the ServiceAccount name, but rather to kube-api-access-..., which is unknown to me.
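For reference, the volumes section of kubectl get pod <pod> -o yaml shows the projected volume along these lines (a sketch; the name suffix and values vary):
volumes:
- name: kube-api-access-abcde   # hypothetical suffix
  projected:
    sources:
    - serviceAccountToken:      # short-lived token issued via the TokenRequest API
        expirationSeconds: 3607
        path: token
    - configMap:
        name: kube-root-ca.crt
        items:
        - key: ca.crt
          path: ca.crt
    - downwardAPI:
      items:
      - path: namespace
        fieldRef:
          fieldPath: metadata.namespace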
Thanks in advance.
We have recently set up Istio on our Kubernetes cluster and are trying to see if we can use RequestAuthentication and AuthorizationPolicy to allow a pod in namespace x to communicate with a pod in namespace y only when it presents a valid JWT.
All the examples I have seen online seem to apply only to end-user authentication via the gateway, rather than internal pod-to-pod communication.
We have tried a few different options but have yet to have any luck.
We can get the AuthorizationPolicy to work for pod-to-pod traffic using "from" with the source being the IP address of the pod in namespace x:
apiVersion: security.istio.io/v1beta1
kind: RequestAuthentication
metadata:
  name: "request-jwt"
  namespace: y
spec:
  jwtRules:
  - issuer: "https://keycloak.example.com/auth/realms/istio"
    jwksUri: "https://keycloak.example.com/auth/realms/istio/protocol/openid-connect/certs"
---
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: "jwt-auth"
  namespace: y
spec:
  action: ALLOW
  rules:
  - from:
    - source:
        ipBlocks: ["10.43.5.175"]
When we add a "when" block for the JWT, it doesn't work:
apiVersion: security.istio.io/v1beta1
kind: RequestAuthentication
metadata:
  name: "request-jwt"
  namespace: y
spec:
  jwtRules:
  - issuer: "https://keycloak.example.com/auth/realms/istio"
    jwksUri: "https://keycloak.example.com/auth/realms/istio/protocol/openid-connect/certs"
---
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: "jwt-auth"
  namespace: y
spec:
  action: ALLOW
  rules:
  - from:
    - source:
        ipBlocks: ["10.43.5.175"]
    when:
    - key: request.auth.claims[iss]
      values: ["https://keycloak.example.com/auth/realms/istio"]
We also tried this, but it doesn't seem to work either:
apiVersion: security.istio.io/v1beta1
kind: RequestAuthentication
metadata:
  name: "request-jwt"
  namespace: y
spec:
  jwtRules:
  - issuer: "https://keycloak.example.com/auth/realms/istio"
    jwksUri: "https://keycloak.example.com/auth/realms/istio/protocol/openid-connect/certs"
---
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: "deny-invalid-jwt"
  namespace: y
spec:
  action: DENY
  rules:
  - from:
    - source:
        notRequestPrincipals: ["*"]
Thanks in advance!
Yes, it is possible to use AuthorizationPolicies and RequestAuthentications together.
But debugging is quite difficult, because a lot depends on your environment, the JWT being used, and so on.
To troubleshoot these kinds of issues, I'd start by setting the rbac scoped logs to debug for the service's Envoy proxy.
In the rbac debug logs you'll see the data extracted from the JWT and stored in the filter metadata.
What you'll frequently find is that the issuer in the filter metadata does not match the one in the RequestAuthentication resource, etc.
Learn more about logging scopes here: https://istio.io/v1.12/docs/ops/diagnostic-tools/component-logging/#logging-scopes
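For example (a sketch; substitute your pod and namespace):
# bump the Envoy rbac logger on the workload's sidecar to debug
istioctl proxy-config log <pod-name>.<namespace> --level rbac:debug

# then tail the sidecar logs while sending a request with the JWT attached
kubectl logs <pod-name> -n <namespace> -c istio-proxy -f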
I'd like to grant a service account the ability to access the metrics exposed by the metrics-server service (https://metrics-server.kube-system/metrics). If I create a serviceaccount...
apiVersion: v1
kind: ServiceAccount
metadata:
  name: metrics-reader
  namespace: prometheus
...and then grant it cluster-admin privileges...
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: metrics-reader-crb
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: metrics-reader
  namespace: prometheus
...it works! I can use the account token to access the metrics server:
curl -k --header "Authorization: Bearer $token" https://metrics-server.kube-system/metrics
But I don't want to require cluster-admin access just to read
metrics. I tried to use the view cluster role instead of
cluster-admin, but that fails.
Is there an existing role that would grant the appropriate access?
If not, what are the specific permissions necessary to grant read-only
access to the metrics-server /metrics endpoint?
Interesting question. I've found some info for you; however, I'm not sure it's 100% helpful. It needs more research and reproduction.
Check RBAC Deny when requesting metrics. Something like the below?
apiVersion: v1
kind: ServiceAccount
metadata:
  name: metrics-reader
  namespace: prometheus
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: view-metrics
rules:
- apiGroups:
  - metrics.k8s.io
  resources:
  - pods
  - nodes
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: view-metrics
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view-metrics
subjects:
- kind: ServiceAccount
  name: metrics-reader
  namespace: prometheus
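Note that the ClusterRole above covers the metrics.k8s.io resource API. If the target is the raw /metrics endpoint itself, metrics-server (like other aggregated API servers) uses delegated authorization, so what may actually be needed is a nonResourceURLs rule. A sketch, untested:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: raw-metrics-reader   # hypothetical name
rules:
- nonResourceURLs: ["/metrics"]
  verbs: ["get"]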
It seems there is (or was) an aggregated-metrics-reader ClusterRole.
Aggregated ClusterRoles are documented in:
https://kubernetes.io/docs/reference/access-authn-authz/rbac/#aggregated-clusterroles.
The purpose of the system:aggregated-metrics-reader ClusterRole is to
aggregate the rules that grant permission to get the pod and node
metrics into the view, edit and admin roles.
However, I wasn't able to find any reference to the aggregated-metrics-reader ClusterRole in the current version of that doc.
You can find a detailed example of using this ClusterRole in Metrics server unable to scrape.
In addition, check the "This adds the aggregated-metrics-reader ClusterRole which was missing" GitHub PR:
What this PR does / why we need it: This adds the
aggregated-metrics-reader ClusterRole which was missing, and seems to
be required for k8s 1.8+ per the metrics-server documentation and
default deploy manifests
Unfortunately, the link in that PR leads nowhere. I'm starting to think this is obsolete info for 1.8 clusters. I'll update the answer if I find anything more relevant.
Istio can route traffic based on headers and the like; there are great examples of how to do this in the Istio docs.
Istio can also validate your JWT; the Istio docs cover that too.
But I can't seem to find a way to get my JWT validated and then use the user claim found in the JWT JSON to route traffic. The example I linked to just expects the user to be plain text in a header.
How can an Istio VirtualService be set up to route based on a claim in a JWT (preferably one it has validated)?
You can implement this using an Istio authorization policy. I did something similar with Keycloak and Kong to restrict user traffic at the API gateway level when a claim or role was not present.
Here is one nice example of JWT auth with Istio:
apiVersion: security.istio.io/v1beta1
kind: RequestAuthentication
metadata:
  name: backend
  namespace: default
spec:
  selector:
    matchLabels:
      app: backend
  jwtRules:
  - issuer: "${KEYCLOAK_URL}/auth/realms/istio"
    jwksUri: "${KEYCLOAK_URL}/auth/realms/istio/protocol/openid-connect/certs"
---
# To allow only requests with a valid token, create an authorization policy
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: backend
  namespace: default
spec:
  selector:
    matchLabels:
      app: backend
  action: ALLOW
  rules:
  - from:
    - source:
        requestPrincipals: ["*"]   # any request carrying a valid JWT
    when:
    - key: request.auth.claims[preferred_username]
      values: ["testuser"]
Example link : https://istio.io/latest/docs/tasks/security/authorization/authz-jwt/
Another nice example with OIDC : https://www.jetstack.io/blog/istio-oidc
RBAC and group list checks : https://istio.io/v1.4/docs/tasks/security/authorization/rbac-groups/
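If the goal is routing rather than just allow/deny, newer Istio releases can also route on JWT claims at the ingress gateway, using an @request.auth.claims.<claim> header match in a VirtualService (see the "JWT claim based routing" task in the Istio docs; it requires a RequestAuthentication applied to the gateway workload). A minimal sketch, with hypothetical gateway, host and port values:
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: backend-by-claim
  namespace: default
spec:
  hosts: ["*"]
  gateways: ["backend-gateway"]   # hypothetical Gateway resource
  http:
  - match:
    - headers:
        "@request.auth.claims.preferred_username":   # claim extracted by the gateway's RequestAuthentication
          exact: "testuser"
    route:
    - destination:
        host: backend
        port:
          number: 8080   # illustrative port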
Using test config with Ignite 2.4 and k8s 1.9:
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:util="http://www.springframework.org/schema/util"
       xsi:schemaLocation="
           http://www.springframework.org/schema/beans
           http://www.springframework.org/schema/beans/spring-beans.xsd
           http://www.springframework.org/schema/util
           http://www.springframework.org/schema/util/spring-util.xsd">
    <bean class="org.apache.ignite.configuration.IgniteConfiguration">
        <property name="discoverySpi">
            <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
                <property name="ipFinder">
                    <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.kubernetes.TcpDiscoveryKubernetesIpFinder"/>
                </property>
            </bean>
        </property>
    </bean>
</beans>
This fails with:
Unable to find Kubernetes API Server at https://kubernetes.default.svc.cluster.local:443
Can I set the API Server URL in the XML config file? How?
@Denis was right.
Kubernetes uses the RBAC access-control system, and you need to authorize your pod to access the API.
For that, you need to add a ServiceAccount to your pod.
So, to do that:
Create a service account and set a role for it:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ignite
  namespace: <Your namespace>
I am not sure that permission to access only pods will be enough for Ignite, but if not, you can add as many permissions as you want. Here is an example of different kinds of roles with a large list of permissions. So, now we create a ClusterRole for your app:
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: ignite
rules:
- apiGroups:
  - ""
  resources:
  - pods # Here are the resources you can access
  verbs: # That is what you can do with them
  - get
  - list
  - watch
Create a binding for that role:
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: ignite
roleRef:
  kind: ClusterRole
  name: ignite
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
  name: ignite
  namespace: <Your namespace>
Now, you need to associate the ServiceAccount with the pods of your application:
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  ....
spec:
  template:
    spec:
      serviceAccountName: ignite
After that, your application will have access to the K8s API. P.S. Do not forget to change <Your namespace> to the namespace where you are running Ignite.
Platform versions
Kubernetes: v1.8
Ignite: v2.4
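As for setting the API server URL in the XML config (the original question): TcpDiscoveryKubernetesIpFinder exposes masterUrl, namespace and serviceName properties, so something like this should work (a sketch; the values are illustrative):
<bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.kubernetes.TcpDiscoveryKubernetesIpFinder">
    <!-- Kubernetes API server URL; defaults to https://kubernetes.default.svc.cluster.local:443 -->
    <property name="masterUrl" value="https://kubernetes.default.svc.cluster.local:443"/>
    <!-- Namespace and service used to discover Ignite pods -->
    <property name="namespace" value="ignite"/>
    <property name="serviceName" value="ignite"/>
</bean>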
@Anton Kostenko's design is mostly right, but here's a refined suggestion that works and grants Ignite the least access privileges.
If you're using a Deployment to manage Ignite, then all of your Pods will launch within a single namespace. Therefore, you should really use a Role and a RoleBinding to grant API access to the service account associated with your deployment.
The TcpDiscoveryKubernetesIpFinder only needs access to the endpoints for the headless service that selects your Ignite pods. The following 2 manifests will grant that access.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: ignite-endpoint-access
  namespace: <your-ns>
  labels:
    app: ignite
rules:
- apiGroups: [""]
  resources: ["endpoints"]
  resourceNames: ["<your-headless-svc>"]
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: ignite-role-binding
  labels:
    app: ignite
subjects:
- kind: ServiceAccount
  name: <your-svc-account>
roleRef:
  kind: Role
  name: ignite-endpoint-access
  apiGroup: rbac.authorization.k8s.io
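You can verify the grant without exec'ing into a pod (a sketch; substitute your names):
kubectl auth can-i get endpoints/<your-headless-svc> \
  --namespace <your-ns> \
  --as system:serviceaccount:<your-ns>:<your-svc-account>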
Take a look at this thread: http://apache-ignite-users.70518.x6.nabble.com/Unable-to-connect-ignite-pods-in-Kubernetes-using-Ip-finder-td18009.html
The 403 error can be solved by granting more permissions to the service account.
Tested Version:
Kubernetes: v1.8
Ignite: v2.4
This is going to be a little bit more permissive:
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: ignite-rbac
subjects:
- kind: ServiceAccount
  name: default
  namespace: <namespace>
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
If you're getting 403 Unauthorized, then the service account that created your resources may not have sufficient permissions. You should update the permissions after you ensure that your namespace, service account, and deployments/replica sets are exactly the way you want them to be.
This link is very helpful for setting permissions for service accounts:
https://kubernetes.io/docs/reference/access-authn-authz/rbac/#service-account-permissions