Sign out fails for GitLab integrated with Keycloak OIDC - Kubernetes

When logged into GitLab using the OAuth2 provider Keycloak and trying to log out, GitLab redirects to the sign_in page but doesn't end our session on Keycloak, so we are logged in again.
These are the environment variables used in gitlab kubernetes deployment:
- name: OAUTH2_GENERIC_APP_ID
  value: <client-name>
- name: OAUTH2_GENERIC_APP_SECRET
  value: "<client-secret>"
- name: OAUTH2_GENERIC_CLIENT_AUTHORIZE_URL
  value: "https://<keycloak-url>/auth/realms/<realm-name>/protocol/openid-connect/auth"
- name: OAUTH2_GENERIC_CLIENT_END_SESSION_ENDPOINT
  value: "https://<keycloak-url>/auth/realms/<realm-name>/protocol/openid-connect/logout"
- name: OAUTH2_GENERIC_CLIENT_SITE
  value: "https://<keycloak-url>/auth/realms/<realm-name>"
- name: OAUTH2_GENERIC_CLIENT_TOKEN_URL
  value: "https://<keycloak-url>/auth/realms/<realm-name>/protocol/openid-connect/token"
- name: OAUTH2_GENERIC_CLIENT_USER_INFO_URL
  value: "https://<keycloak-url>/auth/realms/<realm-name>/protocol/openid-connect/userinfo"
- name: OAUTH2_GENERIC_ID_PATH
  value: sub
- name: OAUTH2_GENERIC_NAME
  value: Keycloak
- name: OAUTH2_GENERIC_USER_EMAIL
  value: email
- name: OAUTH2_GENERIC_USER_NAME
  value: preferred_username
- name: OAUTH2_GENERIC_USER_UID
  value: sub
- name: OAUTH_ALLOW_SSO
  value: Keycloak
- name: OAUTH_AUTO_LINK_LDAP_USER
  value: "false"
- name: OAUTH_AUTO_LINK_SAML_USER
  value: "false"
- name: OAUTH_AUTO_SIGN_IN_WITH_PROVIDER
  value: Keycloak
- name: OAUTH_BLOCK_AUTO_CREATED_USERS
  value: "false"
- name: OAUTH_ENABLED
  value: "true"
- name: OAUTH_EXTERNAL_PROVIDERS
  value: Keycloak
I have tried a workaround mentioned here: https://gitlab.com/gitlab-org/gitlab/-/issues/31203, but no luck. Please help.
Note:
Gitlab version: 14.9.2
Keycloak version: 17
Kubernetes Version: 1.21.5

To be perfectly clear: the expectation is that you should be signed out of GitLab, not necessarily Keycloak altogether. This is happening correctly, since you see the sign-in page after signing out. For example, if you sign in to GitLab using Google and then sign out of GitLab, you should only be signed out of GitLab, not Google.
The behavior you are observing is due to the fact that you have auto-login (auto_sign_in_with_provider) enabled, which automatically redirects users from the sign-in page to log in with Keycloak again immediately after (successfully) signing out.
To avoid this problem, in the GitLab settings (under Admin -> Settings -> General -> Sign-in Restrictions) set the After sign-out path to /users/sign_in?auto_sign_in=false, in other words https://gitlab.example.com/users/sign_in?auto_sign_in=false
Note that the query string ?auto_sign_in=false prevents the auto-redirect that signs you back in with Keycloak. You can also choose a different URL entirely.
See sign-in information and sign in with provider automatically for more information.
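If you prefer to script this rather than click through the admin UI, the same setting can also be pushed through the GitLab application settings API. A minimal sketch, assuming a hypothetical instance URL (gitlab.example.com) and an admin access token, both placeholders:

```shell
# Compose the sign-out URL that skips the auto-sign-in redirect.
GITLAB_URL="https://gitlab.example.com"   # assumed instance URL
SIGNOUT_PATH="/users/sign_in?auto_sign_in=false"
echo "${GITLAB_URL}${SIGNOUT_PATH}"

# Apply it via the application settings API (token is a placeholder):
# curl --request PUT --header "PRIVATE-TOKEN: <admin-token>" \
#   "${GITLAB_URL}/api/v4/application/settings" \
#   --data-urlencode "after_sign_out_path=${GITLAB_URL}${SIGNOUT_PATH}"
```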

Related

Access secret environment variables from kubernetes in nextjs app

I'm trying to deploy a Next.js app to GCP with Kubernetes. I'm trying to set up auth with NextAuth and the Keycloak provider. The issue I'm running into is that KEYCLOAK_CLIENT_SECRET and KEYCLOAK_CLIENTID can't be found: all of the other environment variables listed here show up under process.env.XX, but the two with secretKeyRefs come in undefined.
I have read in multiple places that they need to be available at build time for Next.js, but I'm not sure how to set this up. We have several Node.js apps with this same deployment file and they are able to obtain the secrets.
This is what my deployment.yaml env section looks like.
Anyone have any experience setting this up?
I can't add the secrets to any local env files for security purposes.
env:
  - name: NODE_ENV
    value: production
  - name: KEYCLOAK_URL
    value: "https://ourkeycloak/auth/"
  - name: NEXTAUTH_URL
    value: "https://nextauthpath/api/auth"
  - name: NEXTAUTH_SECRET
    value: "11234"
  - name: KEYCLOAK_REALM
    value: "ourrealm"
  - name: KEYCLOAK_CLIENT_SECRET
    valueFrom:
      secretKeyRef:
        name: vault_client
        key: CLIENT_SECRET
  - name: KEYCLOAK_CLIENTID
    valueFrom:
      secretKeyRef:
        name: vault_client
        key: CLIENT_ID

Keycloak login page reload issue. /auth/realms/master/protocol/openid-connect/3p-cookies/step1.html 404 error

The Keycloak login page is reloading continuously. After inspecting the network tab, I found that an HTML page (/auth/realms/master/protocol/openid-connect/3p-cookies/step1.html) is returning a 404.
I am running quay.io/keycloak/keycloak:18.0.0 keycloak image.
I am providing the following variables in the environment (also providing database vars):
- name: KC_HOSTNAME
  value: my-domain
- name: KC_HTTP_RELATIVE_PATH
  value: "/auth"
- name: PROXY_ADDRESS_FORWARDING
  value: "True"
- name: KC_HTTP_ENABLED
  value: "True"
Which configuration am I missing in environment variables?

application authentication using wso2 in kubernetes ingress

I am trying to use WSO2 as an authorization server with oauth2. I referred to the links below
Link
As mentioned in the link, Google authenticator is used, but can I use WSO2 instead of Google?
I have created a service provider in WSO2, then selected the OAuth/OpenID Connect configuration and used the client ID and secret to create the oauth2 image. But I am not sure what provider name I have to give.
spec:
  containers:
  - args:
    - --provider=wso2
    - --email-domain=*
    - --upstream=file:///dev/null
    - --http-address=0.0.0.0:4180
    env:
    - name: OAUTH2_PROXY_CLIENT_ID
      value: 0UnfZFZDb
    - name: OAUTH2_PROXY_CLIENT_SECRET
      value: rZroDX6uOsySSt4eN
    # docker run -ti --rm python:3-alpine python -c 'import secrets,base64; print(base64.b64encode(base64.b64encode(secrets.token_bytes(16))));'
    - name: OAUTH2_PROXY_COOKIE_SECRET
      value: b'cFF0enRMdEJrUGlaU3NSTlkyVkxuQT09'
    image: quay.io/pusher/oauth2_proxy:v4.1.0-amd64
and in the ingress, I have added the following annotations
nginx.ingress.kubernetes.io/auth-url: "http://oauth2-proxy.auth.svc.cluster.local:4180/oauth2/auth"
nginx.ingress.kubernetes.io/auth-signin: "https://identity.wso2.com:443/commonauth?rd=/"
but I am getting an authentication error.
Can I use WSO2 as an authorization server, similar to GitHub or Google?
For WSO2, do I need to create an oauth2 image?
Are my k8s ingress annotations correct (I tried multiple values like start?rd=$escaped_request_uri etc.)?
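As an aside on the manifest above: the docker one-liner in its comment prints a Python bytes repr, and the b'...' wrapper has leaked into the OAUTH2_PROXY_COOKIE_SECRET value. A sketch of producing a plain string instead, so the env var carries only the base64 text:

```python
import base64
import secrets

# Generate 16 random bytes and base64-encode them, decoding to str so the
# printed value has no b'...' bytes-repr wrapper around it.
secret = base64.urlsafe_b64encode(secrets.token_bytes(16)).decode("ascii")
print(secret)        # a 24-character base64 string
```

The printed value (without quotes or a b prefix) is what would go into OAUTH2_PROXY_COOKIE_SECRET.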

Serverless: create api key from SecretsManager value

I have a Serverless stack deploying an API to AWS. I want to protect it using an API key stored in Secrets manager. The idea is to have the value of the key in SSM, pull it on deploy and use it as my API key.
serverless.yml
service: my-app
frameworkVersion: '2'
provider:
  name: aws
  runtime: nodejs12.x
  ...
  apiKeys:
    - name: apikey
      value: ${ssm:myapp-api-key}
As far as I can tell, the deployed API Gateway key should be the same as the SSM secret, yet when I look in the console the two values are different. What am I overlooking? There are no error messages either.
I ran into the same problem a while ago and resorted to using the serverless-add-api-key plugin, as it was not comprehensible to me when Serverless was creating or reusing API keys for API Gateway.
With this plugin your serverless.yml would look something like this:
service: my-app
frameworkVersion: '2'
plugins:
  - serverless-add-api-key
custom:
  apiKeys:
    - name: apikey
      value: ${ssm:myapp-api-key}
functions:
  your-function:
    runtime: ...
    handler: ...
    name: ...
    events:
      - http:
          ...
          private: true
You can also use a stage-specific configuration:
custom:
  apiKeys:
    dev:
      - name: apikey
        value: ${ssm:myapp-api-key}
This worked well for me:
custom:
  apiKeys:
    - name: apikey
      value: ${ssm:/aws/reference/secretsmanager/dev/user-api/api-key}
      deleteAtRemoval: false # Retain key after stack removal
functions:
  getUserById:
    handler: src/handlers/user/by-id.handler
    events:
      - http:
          path: user/{id}
          method: get
          cors: true
          private: true

Google cloud: insufficient authentication scopes

I am having difficulties sending requests to my spring boot application deployed in my Google Cloud Kubernetes cluster. My application receives a photo and sends it to the Google Vision API. I am using the provided client library (https://cloud.google.com/vision/docs/libraries#client-libraries-install-java) as explained here https://cloud.google.com/vision/docs/auth:
If you're using a client library to call the Vision API, use Application Default Credentials (ADC). Services using ADC look for credentials within a GOOGLE_APPLICATION_CREDENTIALS environment variable. Unless you specifically wish to have ADC use other credentials (for example, user credentials), we recommend you set this environment variable to point to your service account key file.
On my local machine everything works fine: I have a Docker container with an environment variable GOOGLE_APPLICATION_CREDENTIALS pointing to my service account key file.
I do not have this variable in my cluster. This is the response I am getting from my application in the Kubernetes cluster:
{
  "timestamp": "2018-05-10T14:07:27.652+0000",
  "status": 500,
  "error": "Internal Server Error",
  "message": "io.grpc.StatusRuntimeException: PERMISSION_DENIED: Request had insufficient authentication scopes.",
  "path": "/image"
}
What am I doing wrong? Thanks in advance!
I also had to specify the GOOGLE_APPLICATION_CREDENTIALS environment variable on my GKE setup, these are the steps I completed thanks to How to set GOOGLE_APPLICATION_CREDENTIALS on GKE running through Kubernetes:
1. Create the secret (in my case in my deploy step on Gitlab):
kubectl create secret generic google-application-credentials --from-file=./application-credentials.json
2. Setup the volume:
...
volumes:
- name: google-application-credentials-volume
  secret:
    secretName: google-application-credentials
    items:
    - key: application-credentials.json # default name created by the create secret from-file command
      path: application-credentials.json
3. Setup the volume mount:
spec:
  containers:
  - name: my-service
    volumeMounts:
    - name: google-application-credentials-volume
      mountPath: /etc/gcp
      readOnly: true
4. Setup the environment variable:
spec:
  containers:
  - name: my-service
    env:
    - name: GOOGLE_APPLICATION_CREDENTIALS
      value: /etc/gcp/application-credentials.json
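Putting steps 2-4 together, everything lands in one pod spec; a condensed sketch with the names carried over from the steps above:

```yaml
spec:
  containers:
  - name: my-service
    env:
    - name: GOOGLE_APPLICATION_CREDENTIALS
      value: /etc/gcp/application-credentials.json
    volumeMounts:
    - name: google-application-credentials-volume
      mountPath: /etc/gcp
      readOnly: true
  volumes:
  - name: google-application-credentials-volume
    secret:
      secretName: google-application-credentials
```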
That means you are trying to access a service that is not enabled or that you are not authorized to use. Are you sure you enabled access to the Google Vision API?
You can check/enable APIs from the dashboard at https://console.cloud.google.com/apis/dashboard or navigate to APIs & Services from the menu.
Would it help to add the GOOGLE_APPLICATION_CREDENTIALS environment variable to your deployment/pod/container configuration?
Here is an example of setting environment variables described in Kubernetes documentation:
apiVersion: v1
kind: Pod
metadata:
  name: envar-demo
  labels:
    purpose: demonstrate-envars
spec:
  containers:
  - name: envar-demo-container
    image: gcr.io/google-samples/node-hello:1.0
    env:
    - name: DEMO_GREETING
      value: "Hello from the environment"
    - name: DEMO_FAREWELL
      value: "Such a sweet sorrow"