Integrate Keycloak OIDC with Kubernetes Dashboard for SSO

Good day,
could you help me fix a small issue with integrating Keycloak SSO with the Kubernetes Dashboard?
I'm doing the following steps:
Keycloak configurations:
create a new Realm - Kubernetes
create a client with internal sign-in; Client ID - Kubernetes
create a group mapper in the client
create a user and a group
Keycloak-gatekeeper (Proxy):
--discovery-url={{ .Values.keycloakProxy.serverHost }}
--redirection-url=https://{{ .Values.ingress.host.name }}
--upstream-url=https://{{ .Release.Name }}.{{ .Values.namespace }}.svc.cluster.local
--client-secret={{ .Values.keycloakProxy.clientSecret }}
--client-id=kubernetes
--listen=0.0.0.0:3000
--enable-refresh-tokens=false
--skip-upstream-tls-verify
--skip-openid-provider-tls-verify
--enable-logging=true
--enable-json-logging=true
--resources=uri=/*
--secure-cookie=false
--verbose
Kubernetes RBAC:
created the RBAC role with the same name as the group in Keycloak (a sketch is shown below)
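A rough sketch of what I mean by that binding (the group name and role here are placeholders, not my exact manifest):
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: keycloak-dashboard-users
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin   # placeholder role
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: kubernetes      # must match the group name mapped into the token by Keycloak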
After that I try to authenticate to the k8s dashboard and hit the following problem:
After successful authorization, the Kubernetes Dashboard returns 401 Unauthorized.

Related

aws-iam-authenticator & EKS

I've deployed a test EKS cluster with the appropriate configMap, and users that are SSO'd in can access the clusters by exporting session creds (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_SESSION_TOKEN, etc.) and having the aws-iam-authenticator client installed in their terminal. The problem comes in when users attempt to use an AWS SSO profile stored in ~/.aws/config with the aws-iam-authenticator. The error that's received when running any kubectl command is the following:
$ kubectl get all
could not get token: NoCredentialProviders: no valid providers in chain. Deprecated.
For verbose messaging see aws.Config.CredentialsChainVerboseErrors
I've tested this on my local machine (AWS CLI v2) and I haven't had any success. I've exported an AWS profile found in the ~/.aws/config file via export AWS_PROFILE=User1, and running aws sts get-caller-identity correctly shows the profile being exported. I've switched between multiple named profiles and each one gets the correct identity and permissions; however, when running any kubectl command I get the above error. I've also tried symlinking config -> credentials, but no luck. The only way it works is if I export the access_key, secret_key, and session_token to the environment variables.
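In shell terms, the check described above is roughly the following (using the same profile name as above):
# select the SSO profile from ~/.aws/config
export AWS_PROFILE=User1

# correctly returns the identity and permissions of that profile
aws sts get-caller-identity

# any kubectl command then fails with the NoCredentialProviders error
kubectl get all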
I suppose I can live with having to paste in the dynamic creds that come from AWS SSO, but my need to solve this won't let me give up :(. I was following the thread found in this GitHub issue, but no luck. The kube config file that I have set up is spec'd to AWS's documentation.
I suspect there may be something off with the aws-iam-authenticator server deployment, but nothing shows in the pod logs. Here's a snippet from the tool's GitHub page, which I think I followed correctly, but I did skip step 3 for reasons that I forgot:
The Kubernetes API integrates with AWS IAM Authenticator for Kubernetes using a token authentication webhook. When you run aws-iam-authenticator server, it will generate a webhook configuration file and save it onto the host filesystem. You'll need to add a single additional flag to your API server configuration:
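The flag referred to there is the API server's token-webhook flag; it looks along these lines (the exact path depends on where the generated webhook kubeconfig ends up on the host):
--authentication-token-webhook-config-file=/etc/kubernetes/aws-iam-authenticator/kubeconfig.yaml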
Kube Config File
apiVersion: v1
clusters:
- cluster:
    server: <endpoint-url>
    certificate-authority-data: <base64-encoded-ca-cert>
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: aws
  name: aws
current-context: aws
kind: Config
preferences: {}
users:
- name: aws
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: aws-iam-authenticator
      args:
        - "token"
        - "-i"
        - "eks-cluster-name"
        - "-r"
        - "EKS-ADMIN-ROLE:ARN:::::::"
      env:
        - name: AWS_PROFILE
          value: "USER"
The AWS CLI v2 now supports AWS SSO, so I decided to update my kube config file to leverage the aws command instead of aws-iam-authenticator. Authentication via SSO is now a breeze! It looks like AWS wanted to get away from having an additional binary to authenticate into EKS clusters, which is fine by me! Hope this helps.
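For anyone wanting to do the same, the users section of the kube config ends up looking roughly like this (reusing the cluster-name and profile placeholders from the config above; treat it as a sketch rather than my exact file):
users:
- name: aws
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: aws
      args:
        - "eks"
        - "get-token"
        - "--cluster-name"
        - "eks-cluster-name"
      env:
        - name: AWS_PROFILE
          value: "USER"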

Kubernetes Nginx-Ingress oauth_proxy how to pass information/token to service

I am running a Kubernetes cluster with an Nginx-ingress fronting a couple of web apps. Because Nginx doesn't support SSO/OIDC by default, I use an oauth_proxy for authentication.
Everything is working, only verified users are able to access the web pages.
Is it possible to pass or request information from the Identity Provider to the client?
Edit
I already use oauth2_proxy (https://github.com/pusher/oauth2_proxy) with Azure AD. The issue is that I need all user details from the IdP.
Logs from my oauth2_proxy:
$ kubectl logs oauth2-proxy-7ddc97f9d5-ckm29
[oauthproxy.go:846] Error loading cookied session: Cookie "_oauth2_proxy" not present
...
[requests.go:25] 200 GET https://graph.windows.net/me?api-version=1.6 {
"odata.metadata":"https://graph.windows.net/myorganization/$metadata#directoryObjects/#Element",
"odata.type":"Microsoft.DirectoryServices.User",
"objectType":"User",
... ,
"sipProxyAddress":"nico.schuck#example.com",
"streetAddress":"my stree",
"surname":"Schuck",
"telephoneNumber":55512345,
"usageLocation":"DE",
"userType":"Member"
}
165.xxx.xxx.214 - nico.schuck@example.com [2020/01/17 11:22:02] [AuthSuccess] Authenticated via OAuth2: Session{email:nico.schuck@example.com user: token:true id_token:true created:2020-01-17 11:22:02.28839851 +0000 UTC m=+181.592452463 expires:2020-01-17 12:22:02 +0000 UTC refresh_token:true}
Consider oauth2_proxy, which works well with the nginx ingress for SSO. Follow the link below:
https://github.com/bitly/oauth2_proxy
You should use the configuration below in your Ingress rule:
metadata:
  name: application
  annotations:
    nginx.ingress.kubernetes.io/auth-url: "https://$host/oauth2/auth"
    nginx.ingress.kubernetes.io/auth-signin: "https://$host/oauth2/start?rd=$escaped_request_uri"
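To also pass user information on to the upstream service (which is what the question is really about), one option worth trying is to let oauth2_proxy expose identity headers (for example via its --set-xauthrequest flag) and have the ingress forward them; this is a sketch, not a verified setup:
metadata:
  name: application
  annotations:
    nginx.ingress.kubernetes.io/auth-url: "https://$host/oauth2/auth"
    nginx.ingress.kubernetes.io/auth-signin: "https://$host/oauth2/start?rd=$escaped_request_uri"
    # forward the user/email headers set by oauth2_proxy to the upstream
    nginx.ingress.kubernetes.io/auth-response-headers: "X-Auth-Request-User, X-Auth-Request-Email"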

Keycloak provides invalid signature with Istio and JWT

I'm using Keycloak (latest) for OAuth 2.0, to validate authentication, provide a token (JWT), and, with the token provided, allow access to the application URLs based on permissions.
Keycloak is currently running in Kubernetes, with Istio as the gateway. For Keycloak, this is the policy being used:
apiVersion: authentication.istio.io/v1alpha1
kind: Policy
metadata:
  name: application-auth-policy
spec:
  targets:
  - name: notification
  origins:
  - jwt:
      issuer: http://<service_name>http.<namespace>.svc.cluster.local:8080/auth/realms/istio
      jwksUri: http://<service_name>http.<namespace>.svc.cluster.local:8080/auth/realms/istio/protocol/openid-connect/certs
  principalBinding: USE_ORIGIN
A client was registered in this Keycloak and an RSA key was created for it.
The issuer can generate a token normally, and the policy was applied successfully.
Problem:
Even with everything set, the token provided by Keycloak has an invalid signature according to the JWT validator.
As a result, the token doesn't allow access to the URLs as it should; requests are rejected with a 401 code.
Has anyone else had a similar issue?
The problem was resolved with two options:
1. Replace the service name and port with the external server IP and external port (for both issuer and jwksUri).
2. Disable the usage of mTLS and its policy (known issue: https://github.com/istio/istio/issues/10062).
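A sketch of option 1, with issuer and jwksUri pointing at the externally reachable Keycloak address (<external_ip> and <external_port> are placeholders):
  origins:
  - jwt:
      issuer: http://<external_ip>:<external_port>/auth/realms/istio
      jwksUri: http://<external_ip>:<external_port>/auth/realms/istio/protocol/openid-connect/certs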

How do I present a custom GCP service account to kubernetes workloads?

I'm configuring a highly available kubernetes cluster using GKE and terraform. Multiple teams will be running multiple deployments on the cluster and I anticipate most deployments will be in a custom namespace, mainly for isolation reasons.
One of our open questions is how to manage GCP service accounts on the cluster.
I can create the cluster with a custom GCP service account, and adjust the permissions so it can pull images from GCR, log to stackdriver, etc. I think this custom service account will be used by the GKE nodes, instead of the default compute engine service account. Please correct me if I'm wrong on this front!
Each deployment needs to access a different set of GCP resources (Cloud Storage, Datastore, Cloud SQL, etc.) and I'd like each deployment to have its own GCP service account so we can control permissions. I'd also like running pods to have no access to the GCP service account that's available to the node running the pods.
Is that possible?
I've considered some options, but I'm not confident on the feasibility or desirability:
A GCP service account key for a deployment could be added to the cluster as a Kubernetes secret, deployments could mount it as a file, and set GOOGLE_APPLICATION_CREDENTIALS to point to it
Maybe access to the metadata API for the instance can be denied to pods, or can the service account returned by the metadata API be changed?
Maybe there's a GKE (or kubernetes) native way to control the service account presented to pods?
You are on the right track. GCP service accounts can be used in GKE to assign Pods permissions to GCP resources.
Create an account:
gcloud iam service-accounts create ${SERVICE_ACCOUNT_NAME}
Add IAM permissions to the service account:
gcloud projects add-iam-policy-binding ${PROJECT_ID} \
  --member="serviceAccount:${SERVICE_ACCOUNT_NAME}@${PROJECT_ID}.iam.gserviceaccount.com" \
  --role="roles/${ROLE_ID}"
Generate a JSON file for the service account:
gcloud iam service-accounts keys create \
  --iam-account "${SERVICE_ACCOUNT_NAME}@${PROJECT_ID}.iam.gserviceaccount.com" \
  service-account.json
Create a secret with that JSON:
kubectl create secret generic echo --from-file service-account.json
Create a deployment for your application using that secret:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: echo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: echo
  template:
    metadata:
      name: echo
      labels:
        app: echo
    spec:
      containers:
      - name: echo
        image: "gcr.io/hightowerlabs/echo"
        env:
        - name: "GOOGLE_APPLICATION_CREDENTIALS"
          value: "/var/run/secret/cloud.google.com/service-account.json"
        - name: "PROJECT_ID"
          valueFrom:
            configMapKeyRef:
              name: echo
              key: project-id
        - name: "TOPIC"
          value: "echo"
        volumeMounts:
        - name: "service-account"
          mountPath: "/var/run/secret/cloud.google.com"
      volumes:
      - name: "service-account"
        secret:
          secretName: "echo"
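The Deployment above also reads PROJECT_ID from a ConfigMap named echo; it could be created along these lines (the project id value is a placeholder for your own project):
kubectl create configmap echo --from-literal=project-id=${PROJECT_ID}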
If you want to use different permissions for separate deployments, you need to create several GCP service accounts with different permissions, generate JSON key files for them, and assign them to the deployments according to your plans. Pods will have access according to the mounted service accounts, not to the service account assigned to the node.
For more information, you can look through the links:
Authenticating to Cloud Platform with Service Accounts
Google Cloud Service Accounts with Google Container Engine (GKE) - Tutorial

API auth error connecting Prometheus to Kubernetes (Openshift Origin)

I have a Kubernetes cluster (Openshift Origin v3.6) and Prometheus (v1.8.1) running in two separate servers. I am trying to monitor Kubernetes with Prometheus.
I got the default service account token and saved it to /etc/prometheus/token.
oc sa get-token default
Then added this to Prometheus configuration file:
...
- job_name: 'kubernetes-apiservers'
  kubernetes_sd_configs:
  - role: endpoints
    api_server: 'https://my.kubernetes.master:8443'
  scheme: https
  bearer_token_file: /etc/prometheus/token
  relabel_configs:
  - source_labels: [__meta_kubernetes_namespace, __meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
    action: keep
    regex: default;kubernetes;https
...
After restarting Prometheus, I see the following error log repeating over and over again:
Nov 23 22:43:05 prometheus prometheus[17830]: time="2017-11-23T22:43:05Z" level=error msg="github.com/prometheus/prometheus/discovery/kubernetes/kubernetes.go:183: Failed to list *v1.Pod: User "system:anonymous" cannot list all pods in the cluster" component="kube_client_runtime" source="kubernetes.go:76"
I found this here:
If no access token or certificate is presented, the authentication layer assigns the system:anonymous virtual user and the system:unauthenticated virtual group to the request. This allows the authorization layer to determine which requests, if any, an anonymous user is allowed to make.
I believe my configuration is wrong somewhere and Prometheus is not using the token to authenticate.
So, what's wrong with my setup, and how can I fix it? Thanks in advance.
Let's begin with authentication. As you have provided Prometheus with the default service account token, it is authenticated normally; the API server knows who it is.
Now, authorization is giving you the problem here, as you can see:
"system:anonymous" cannot list all pods in the cluster
It means the authenticated service account does not have the capability or permission to execute this operation, therefore you cannot do it.
Solution to your problem
Check if there is a suitable ClusterRole available for Prometheus, as Prometheus needs cluster-wide permissions to do its job. If not, create a ClusterRole.
Check if there is a ClusterRoleBinding for the default service account. If not, create a ClusterRoleBinding (a sketch follows below).
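On OpenShift Origin, a quick way to grant that, assuming the token you saved belongs to the default service account in the default project and that the built-in cluster-reader role covers what Prometheus needs to list, is:
oc adm policy add-cluster-role-to-user cluster-reader system:serviceaccount:default:default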
I have attached a link for further reading: RBAC