I am trying to use WSO2 as an authorization server with OAuth2. I referred to the link below:
Link
The link uses Google as the authenticator, but can I use WSO2 instead of Google?
I created a service provider in WSO2, selected the OAuth/OpenID Connect configuration, and used the client ID and secret in the oauth2-proxy deployment. But I am not sure what provider name I have to give.
spec:
  containers:
  - args:
    - --provider=wso2
    - --email-domain=*
    - --upstream=file:///dev/null
    - --http-address=0.0.0.0:4180
    env:
    - name: OAUTH2_PROXY_CLIENT_ID
      value: 0UnfZFZDb
    - name: OAUTH2_PROXY_CLIENT_SECRET
      value: rZroDX6uOsySSt4eN
    # docker run -ti --rm python:3-alpine python -c 'import secrets,base64; print(base64.b64encode(base64.b64encode(secrets.token_bytes(16))));'
    - name: OAUTH2_PROXY_COOKIE_SECRET
      value: b'cFF0enRMdEJrUGlaU3NSTlkyVkxuQT09'
    image: quay.io/pusher/oauth2_proxy:v4.1.0-amd64
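As an aside, the cookie-secret value above still carries a `b'...'` prefix: the generator one-liner in the comment prints Python's bytes repr, and that wrapper ends up pasted into the YAML. A variant that prints a plain base64 string instead (a sketch; `make_cookie_secret` is just an illustrative name, and this assumes oauth2-proxy accepts a base64-encoded 16-byte secret):

```python
import base64
import secrets

def make_cookie_secret(nbytes: int = 16) -> str:
    """Return nbytes of randomness as a plain base64 str (no b'' wrapper)."""
    # .decode() turns the bytes result into text, so print() emits only
    # the base64 characters, not the Python bytes literal.
    return base64.b64encode(secrets.token_bytes(nbytes)).decode("ascii")

if __name__ == "__main__":
    print(make_cookie_secret())
```

The printed value can then be pasted into the manifest without the `b'...'` wrapper.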
and in the ingress I have added the following annotations:
nginx.ingress.kubernetes.io/auth-url: "http://oauth2-proxy.auth.svc.cluster.local:4180/oauth2/auth"
nginx.ingress.kubernetes.io/auth-signin: "https://identity.wso2.com:443/commonauth?rd=/"
but I am getting an authentication error.
Can I use WSO2 as an authorization server, similar to GitHub or Google?
For WSO2, do I need to create an oauth2 image?
Are my k8s ingress annotations correct? (I tried multiple values like start?rd=$escaped_request_uri etc.)
I followed this, but it is not working.
I created custom secret:
apiVersion: v1
kind: Secret
metadata:
  name: keycloak-db-secret
data:
  POSTGRES_DATABASE: ...
  POSTGRES_EXTERNAL_ADDRESS: ...
  POSTGRES_EXTERNAL_PORT: ...
  POSTGRES_HOST: ...
  POSTGRES_USERNAME: ...
  POSTGRES_PASSWORD: ...
and keycloak with external db:
apiVersion: keycloak.org/v1alpha1
kind: Keycloak
metadata:
  labels:
    app: keycloak
  name: keycloak
spec:
  externalDatabase:
    enabled: true
  instances: 1
but when I check the log, Keycloak cannot connect to the db. It is still using the default value keycloak-postgresql.keycloak, not the value defined in my custom secret. Why is it not using the value from my secret?
UPDATE
When I check the Keycloak pod that was created by the operator, I can see:
env:
- name: DB_VENDOR
  value: POSTGRES
- name: DB_SCHEMA
  value: public
- name: DB_ADDR
  value: keycloak-postgresql.keycloak
- name: DB_PORT
  value: '5432'
- name: DB_DATABASE
  value: keycloak
- name: DB_USER
  valueFrom:
    secretKeyRef:
      name: keycloak-db-secret
      key: POSTGRES_USERNAME
- name: DB_PASSWORD
  valueFrom:
    secretKeyRef:
      name: keycloak-db-secret
      key: POSTGRES_PASSWORD
So now I know why I cannot connect to the db: it uses a different DB_ADDR. How can I use the address my-app.postgres (a db in another namespace)?
I don't know why POSTGRES_HOST in the secret is not working and the pod is still using the default service name.
To connect to a service in another namespace you can use:
<servicename>.<namespace>.svc.cluster.local
Suppose your Postgres deployment and service are running in the test namespace; then it would be:
postgres.test.svc.cluster.local
This is what I am using: https://github.com/harsh4870/Keycloack-postgres-kubernetes-deployment/blob/main/keycload-deployment.yaml
I have also attached the Postgres file you can use. In my case, however, I set up both Keycloak and Postgres in the same namespace, so it works like a charm.
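As a further sketch (not from the answer above, and using hypothetical names), a cross-namespace alias can also be created with an ExternalName Service, so a fixed local service name resolves to the Postgres service in its own namespace:

```yaml
# Sketch: alias a Service in the keycloak namespace to a Postgres
# Service living in another namespace (names are hypothetical).
apiVersion: v1
kind: Service
metadata:
  name: my-app-postgres      # name referenced from the keycloak namespace
  namespace: keycloak
spec:
  type: ExternalName
  # Full DNS name of the target service in its own namespace
  externalName: postgres.test.svc.cluster.local
```

DNS lookups for my-app-postgres.keycloak then return a CNAME to the target service's cluster-local name.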
I'm using Azure PostgreSQL for that, and it works correctly. In pod configuration, it also uses keycloak-postgresql.keycloak as DB_ADDR, but this is pointing to my internal service created by operator based on keycloak-db-secret.
keycloak-postgresql.keycloak is another service created by the Keycloak Operator, which is used to connect to PostgreSQL's service.
You can check its endpoint.
$ kubectl get endpoints keycloak-postgresql -n keycloak
NAME ENDPOINTS AGE
keycloak-postgresql {postgresql's service ip}:5432 4m31s
However, the reason why it fails is due to the selector of this service:
selector:
  app: keycloak
  component: database
So if your DB Pod has a different label, the selector will not work.
I reported this issue to the community. If they reply, I will try to fix this bug by submitting a patch.
I was having this same issue, and after looking at @JiyeYu's answer I searched the project's issue backlog and found some related issues that are still open (at the moment of this reply).
Particularly this one: https://issues.redhat.com/browse/KEYCLOAK-18602
After reading this, and its comments, I did the following:
Don't use IPs in POSTGRES_EXTERNAL_ADDRESS. If your Postgres is hosted within K8s via a StatefulSet, use the full <servicename>.<namespace>.svc.cluster.local (as in @Harsh Manvar's answer).
Remove the POSTGRES_HOST setting from the secret (don't just set it to the default; delete it). Apparently it is not only being ignored, but also somehow breaking the Keycloak pod initialization process.
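Applied to the secret from the question, the adjusted keycloak-db-secret might look like this (a sketch; stringData is used for readability, and the ... placeholders are still your own values):

```yaml
# Sketch of the adjusted secret: POSTGRES_HOST removed, and
# POSTGRES_EXTERNAL_ADDRESS set to a service DNS name, not an IP.
apiVersion: v1
kind: Secret
metadata:
  name: keycloak-db-secret
stringData:                 # stringData avoids hand-base64ing each value
  POSTGRES_DATABASE: ...
  POSTGRES_EXTERNAL_ADDRESS: <servicename>.<namespace>.svc.cluster.local
  POSTGRES_EXTERNAL_PORT: ...
  POSTGRES_USERNAME: ...
  POSTGRES_PASSWORD: ...
```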
After I applied these changes the issue was solved for me.
I also had a similar problem. It turned out that since I was using SSLMODE: "verify-full", Keycloak expected the correct hostname of my external db.
Since Keycloak somehow internally translates the real external db address into "keycloak-postgresql.keycloak", it expected something like "keycloak-postgresql.my-keycloak-namespace".
The log went something like this:
SEVERE [org.postgresql.ssl.PGjdbcHostnameVerifier] (ServerService Thread Pool -- 57) Server name validation failed: certificate for host keycloak-postgresql.my-keycloak-namespace dNSName entries subjectAltName, but none of them match. Assuming server name validation failed
After I added the host keycloak-postgresql.my-keycloak-namespace to the db certificate, it worked as advertised.
Is it possible to set forceBackendUrlToFrontendUrl as an environment variable in Kubernetes?
My problem is that the backend communication from pod to pod is over unencrypted HTTP. Keycloak (frontend) is only reachable over HTTPS.
The JWT has the "iss" claim https://......, and the service calls Keycloak to check this token. Keycloak says the token is invalid because the issuer is invalid, and yes, it is right: https is not http.
I think I must set the forceBackendUrlToFrontendUrl variable from the Keycloak documentation, but I have no idea how to set this in Kubernetes.
I had a similar problem, configuration below is working for me:
Keycloak:
Outside: https://keycloak.mydomain.com
Inside: https://keycloak.namespace.svc:8443
- Keycloak container env variables:
  env:
  - name: PROXY_ADDRESS_FORWARDING
    value: "true"
  - name: KEYCLOAK_FRONTEND_URL
    value: https://keycloak.mydomain.com/auth/
---
Frontend:
Outside: https://myfrontend.com
Inside: http://myfrontend.namespace.svc:8080
- Keycloak.json: "url": "https://keycloak.mydomain.com/auth",
- Keycloak Admin Console:
- frontend-client: RootURL: https://myfrontend.com
---
Backend:
Outside: https://myfrontend.com/api
Inside: http://mybackend.namespace.svc:8080
- Keycloak Admin Console:
- backend-client: RootURL: http://mybackend.namespace.svc:8080
This is a spring boot application:
- application.yml
spring:
  security:
    oauth2:
      client:
        provider:
          keycloak:
            authorization-uri: "https://keycloak.mydomain.com/auth/realms/<realm>/protocol/openid-connect/auth"
            jwk-set-uri: "https://keycloak.namespace.svc:8443/auth/realms/<realm>/protocol/openid-connect/certs"
            token-uri: "https://keycloak.namespace.svc:8443/auth/realms/<realm>/protocol/openid-connect/token"
            user-info-uri: "https://keycloak.namespace.svc:8443/auth/realms/<realm>/protocol/openid-connect/userinfo"
            issuer-uri: "https://keycloak.mydomain.com/auth/realms/<realm>"
I have Google Cloud projects A, B, C, D. They all use a similar setup for the Kubernetes cluster and deployment. Projects A, B and C were built months ago. They all use the Google Cloud SQL proxy to connect to the Google Cloud SQL service. When I recently started setting up Kubernetes for project D, I got the following error in the Stackdriver logging:
the default Compute Engine service account is not configured with sufficient permissions to access the Cloud SQL API from this VM. Please create a new VM with Cloud SQL access (scope) enabled under "Identity and API access". Alternatively, create a new "service account key" and specify it using the -credential_file parameter
I have compared the difference between the Kubernetes cluster between A,B,C and D but they appear to be same.
Here is the deployment I am using
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: my-site
  labels:
    system: projectA
spec:
  selector:
    matchLabels:
      system: projectA
  template:
    metadata:
      labels:
        system: projectA
    spec:
      containers:
      - name: web
        image: gcr.io/customerA/projectA:alpha1
        ports:
        - containerPort: 80
        env:
        - name: DB_HOST
          value: 127.0.0.1:3306
        # These secrets are required to start the pod.
        # [START cloudsql_secrets]
        - name: DB_USER
          valueFrom:
            secretKeyRef:
              name: cloudsql-db-credentials
              key: username
        - name: DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: cloudsql-db-credentials
              key: password
        # [END cloudsql_secrets]
      # Change <INSTANCE_CONNECTION_NAME> here to include your GCP
      # project, the region of your Cloud SQL instance and the name
      # of your Cloud SQL instance. The format is
      # $PROJECT:$REGION:$INSTANCE
      # [START proxy_container]
      - name: cloudsql-proxy
        image: gcr.io/cloudsql-docker/gce-proxy:1.11
        command:
        - sh
        - -c
        - /cloud_sql_proxy -instances=my-gcloud-project:europe-west1:databaseName=tcp:3306
          -credential_file=/secrets/cloudsql/credentials.json
        # [START cloudsql_security_context]
        securityContext:
          runAsUser: 2  # non-root user
          allowPrivilegeEscalation: false
        # [END cloudsql_security_context]
        volumeMounts:
        - name: cloudsql-instance-credentials
          mountPath: /secrets/cloudsql
          readOnly: true
      # [END proxy_container]
      # [START volumes]
      volumes:
      - name: cloudsql-instance-credentials
        secret:
          secretName: cloudsql-instance-credentials
      # [END volumes]
So it would appear that the default service account doesn't have enough permissions? Google Cloud doesn't allow enabling the Cloud SQL API when creating the cluster via the Google Cloud console.
From what I have googled about this issue, some say that the problem was with the gcr.io/cloudsql-docker/gce-proxy image, but I have tried newer versions and the same error still occurs.
I found a solution to this problem: setting the service-account argument when creating the cluster. Note that I haven't tested what the minimum required permissions for the new service account are.
Here are the steps:
Create a new service account; it doesn't require an API key. The name I used was "super-service".
Assign the roles Cloud SQL Admin, Compute Admin, Kubernetes Engine Admin, and Editor to the new service account.
Use gcloud to create the cluster with the new service account, like this:
gcloud container clusters create my-cluster \
--zone=europe-west1-c \
--labels=system=projectA \
--num-nodes=3 \
--enable-master-authorized-networks \
--enable-network-policy \
--enable-ip-alias \
--service-account=super-service@project-D.iam.gserviceaccount.com \
--master-authorized-networks <list-of-my-ips>
Then the cluster and the deployment at least was deployed without errors.
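The first two steps can also be scripted with gcloud. This is a sketch under the assumptions above (project ID project-D and the four roles from step 2; adjust to your project):

```shell
PROJECT=project-D

# 1. Create the service account (no API key needed for this use)
gcloud iam service-accounts create super-service \
  --display-name="super-service"

# 2. Bind the roles listed in step 2 above
for ROLE in roles/cloudsql.admin roles/compute.admin \
            roles/container.admin roles/editor; do
  gcloud projects add-iam-policy-binding "$PROJECT" \
    --member="serviceAccount:super-service@${PROJECT}.iam.gserviceaccount.com" \
    --role="$ROLE"
done
```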
I am having difficulties sending requests to my spring boot application deployed in my Google Cloud Kubernetes cluster. My application receives a photo and sends it to the Google Vision API. I am using the provided client library (https://cloud.google.com/vision/docs/libraries#client-libraries-install-java) as explained here https://cloud.google.com/vision/docs/auth:
If you're using a client library to call the Vision API, use Application Default Credentials (ADC). Services using ADC look for credentials within a GOOGLE_APPLICATION_CREDENTIALS environment variable. Unless you specifically wish to have ADC use other credentials (for example, user credentials), we recommend you set this environment variable to point to your service account key file.
On my local machine everything works fine: I have a docker container with an environment variable GOOGLE_APPLICATION_CREDENTIALS pointing to my service account key file.
I do not have this variable in my cluster. This is the response I am getting from my application in the Kubernetes cluster:
{
  "timestamp": "2018-05-10T14:07:27.652+0000",
  "status": 500,
  "error": "Internal Server Error",
  "message": "io.grpc.StatusRuntimeException: PERMISSION_DENIED: Request had insufficient authentication scopes.",
  "path": "/image"
}
What am I doing wrong? Thanks in advance!
I also had to specify the GOOGLE_APPLICATION_CREDENTIALS environment variable on my GKE setup, these are the steps I completed thanks to How to set GOOGLE_APPLICATION_CREDENTIALS on GKE running through Kubernetes:
1. Create the secret (in my case in my deploy step on Gitlab):
kubectl create secret generic google-application-credentials --from-file=./application-credentials.json
2. Setup the volume:
...
volumes:
- name: google-application-credentials-volume
  secret:
    secretName: google-application-credentials
    items:
    - key: application-credentials.json  # default name created by the create secret from-file command
      path: application-credentials.json
3. Setup the volume mount:
spec:
  containers:
  - name: my-service
    volumeMounts:
    - name: google-application-credentials-volume
      mountPath: /etc/gcp
      readOnly: true
4. Setup the environment variable:
spec:
  containers:
  - name: my-service
    env:
    - name: GOOGLE_APPLICATION_CREDENTIALS
      value: /etc/gcp/application-credentials.json
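Put together, steps 2 to 4 land in a single pod spec (a sketch using the my-service name from the steps above):

```yaml
# Combined sketch of steps 2-4: secret volume, mount, and env var.
spec:
  containers:
  - name: my-service
    env:
    - name: GOOGLE_APPLICATION_CREDENTIALS
      value: /etc/gcp/application-credentials.json   # path inside the mount
    volumeMounts:
    - name: google-application-credentials-volume
      mountPath: /etc/gcp
      readOnly: true
  volumes:
  - name: google-application-credentials-volume
    secret:
      secretName: google-application-credentials
```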
That means you are trying to access a service that is not enabled or that you are not authorized to use. Are you sure you enabled access to Google Vision?
You can check/enable APIs from the dashboard at https://console.cloud.google.com/apis/dashboard or navigate to APIs & Services from the menu.
Would it help to add the GOOGLE_APPLICATION_CREDENTIALS environment variable to your deployment/pod/container configuration?
Here is an example of setting environment variables described in Kubernetes documentation:
apiVersion: v1
kind: Pod
metadata:
  name: envar-demo
  labels:
    purpose: demonstrate-envars
spec:
  containers:
  - name: envar-demo-container
    image: gcr.io/google-samples/node-hello:1.0
    env:
    - name: DEMO_GREETING
      value: "Hello from the environment"
    - name: DEMO_FAREWELL
      value: "Such a sweet sorrow"
Trying to connect to a 2nd gen cloud sql database from a GCP Container.
I created the cloud proxy, but am a bit confused about what my app needs to connect via the proxy. My app already connects to 127.0.0.1:3306 with all the needed mysql connection information, which works fine outside of GCP. My app is currently logging connection errors against 127.0.0.1:3306 when deployed in a GCP container.
Error: connect ECONNREFUSED 127.0.0.1:3306 at Object.exports._errnoException
Any additional sample files available for a simple node app to better understand the needed application config?
The sample below seems to address what wordpress needs, but what do I need for simple node app?
https://github.com/GoogleCloudPlatform/container-engine-samples/blob/master/cloudsql/cloudsql_deployment.yaml
Related Link:
https://cloud.google.com/sql/docs/mysql/connect-container-engine
Provide 127.0.0.1:3306 as the host address your application uses to access the database.
I have this hard coded in my app.
Because the proxy runs in a second container in the same pod, it appears to your application as localhost, so you use 127.0.0.1:3306 to connect to it.
Right, I have this hard coded in my app
Provide the cloudsql-db-credentials secret to enable the application to log in to the database.
Ok, if I have to add this, what
For example, assuming the application expected DB_USER and DB_PASSWORD:
- name: DB_USER
  valueFrom:
    secretKeyRef:
      name: cloudsql-db-credentials
      key: username
If your proxy user requires a password, you would also add:
So what variable name would I be using here? Is this asking for the mysql user name?
- name: DB_PASSWORD
  valueFrom:
    secretKeyRef:
      name: cloudsql-db-credentials
      key: password
What variable is needed here? Is this asking for the mysql pw for the id above?
In the wordpress sample from the link above, I'm trying to figure out what variables are needed for a simple node app.
containers:
- image: wordpress:4.4.2-apache
  name: web
  env:
  - name: WORDPRESS_DB_HOST
    # Connect to the SQL proxy over the local network on a fixed port.
    # Change the [PORT] to the port number used by your database
    # (e.g. 3306).
    value: 127.0.0.1:[PORT]
  # These secrets are required to start the pod.
  # [START cloudsql_secrets]
  - name: WORDPRESS_DB_PASSWORD
    valueFrom:
      secretKeyRef:
        name: cloudsql-db-credentials
        key: password
  - name: WORDPRESS_DB_USER
    valueFrom:
      secretKeyRef:
        name: cloudsql-db-credentials
        key: username
  # [END cloudsql_secrets]
  ports:
  - containerPort: 80
    name: wordpress
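For a plain node app, the same pattern applies with the generic names from the snippets above: DB_USER is the mysql user name and DB_PASSWORD is that user's password, both pulled from the cloudsql-db-credentials secret, and the app reads them from process.env. A sketch (the image name and DB_HOST variable are hypothetical; your app may expect different names):

```yaml
# Sketch for a node container: same secret, generic variable names.
containers:
- image: gcr.io/my-project/my-node-app:latest   # hypothetical image
  name: web
  env:
  - name: DB_HOST
    value: 127.0.0.1:3306     # the proxy sidecar listens on localhost
  - name: DB_USER             # mysql user name, from the secret
    valueFrom:
      secretKeyRef:
        name: cloudsql-db-credentials
        key: username
  - name: DB_PASSWORD         # mysql password for that user
    valueFrom:
      secretKeyRef:
        name: cloudsql-db-credentials
        key: password
```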
Thanks!