Accessing introspect endpoint failed: connection refused (Kong, Keycloak, OIDC)

I am getting a connection refused response from the server when I try to hit the endpoint via the proxy I have created on Kong.
The curl command I am using to hit the proxy:
curl --location --request GET 'http://localhost:8000/listProducts/' \
--header 'Accept: application/json' \
--header 'Authorization: Bearer token'
To get the token I use the following curl:
curl --location --request POST 'http://localhost:8180/auth/realms/experimental/protocol/openid-connect/token' \
--header 'Content-Type: application/x-www-form-urlencoded' \
--data-urlencode 'username=username' \
--data-urlencode 'password=password' \
--data-urlencode 'grant_type=password' \
--data-urlencode 'client_id=myapp'
The client protocol is openid-connect and the access type is public.
The config I have done in the OIDC plugin:
consumer:
response type: code
introspection endpoint: http://{MyHostIP}:8180/auth/realms/experimental/protocol/openid-connect/token/introspect
filters:
bearer only: yes
ssl verify: no
session secret:
introspection endpoint auth method:
realm: experimental
redirect after logout uri: /
scope: openid
token endpoint auth method: client_secret_post
logout path: /logout
client id: kong
discovery: https://{MyHostIP}:8180/auth/realms/master/.well-known/openid-configuration
client secret: myClientSecret
recovery page path:
redirect uri path:
Thanks in advance

How did you deploy Keycloak? I see two points:
Your discovery endpoint is https
Your introspection endpoint is just http
Also, if you are using Docker to deploy Kong + Keycloak, go to your hosts file and add a new line mapping your local IP to a keycloak-host alias:
sudo nano /etc/hosts
Then add:
your.ip.address keycloak-host
Update the docker-compose file:
kong:
  build:
    context: kong/
    dockerfile: Dockerfile
  extra_hosts:
  - "keycloak-host:your.ip.address"
Now configure your introspection and discovery URLs using the keycloak-host alias.
Ex: http://keycloak-host:8180/auth/realms/master/protocol/openid-connect/token/introspect
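Applied to the plugin config above, the two fields would then look like this (a sketch assuming both should point at the experimental realm over plain http; note the original config mixed the master realm in discovery with the experimental realm in introspection):
discovery: http://keycloak-host:8180/auth/realms/experimental/.well-known/openid-configuration
introspection endpoint: http://keycloak-host:8180/auth/realms/experimental/protocol/openid-connect/token/introspect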
If you need a functional example about Kong + openID + Keycloak, check this repo.

Related

Prometheus datasource : client_error: client error: 403

Hi, I am trying to add the built-in OpenShift (v4.8) Prometheus data source to a local Grafana server. I have configured basic auth with a username and password, and for now I have also enabled skip TLS verify. Still I'm getting this error.
Prometheus URL = https://prometheus-k8s-openshift-monitoring.apps.xxxx.xxxx.xxxx.com
this is the grafana log
logger=tsdb.prometheus t=2022-04-12T17:35:23.47+0530 lvl=eror msg="Instant query failed" query=1+1 err="client_error: client error: 403"
logger=context t=2022-04-12T17:35:23.47+0530 lvl=info msg="Request Completed" method=POST path=/api/ds/query status=400 remote_addr=10.100.95.27 time_ms=36 size=65 referer=https://grafana.xxxx.xxxx.com/datasources/edit/6TjZwT87k
You cannot authenticate to the OpenShift Prometheus instance using basic authentication. You need to authenticate using a bearer token, e.g. one obtained from oc whoami -t:
curl -H "Authorization: Bearer $(oc whoami -t)" -k https://prometheus-k8s-openshift-monitoring.apps.xxxx.xxxx.xxxx.com/
Or from a ServiceAccount with appropriate privileges:
secret=$(oc -n openshift-monitoring get sa prometheus-k8s -o jsonpath='{.secrets[1].name}')
token=$(oc -n openshift-monitoring get secret $secret -o jsonpath='{.data.token}' | base64 -d)
curl -H "Authorization: Bearer $token" -k https://prometheus-k8s-openshift-monitoring.apps.xxxx.xxxx.xxxx.com/
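If you want Grafana itself to send that bearer token, one way is a custom Authorization header on the data source. A sketch using Grafana's data source provisioning file format (the file path and data source name here are made up; the token is whatever you obtained above and needs refreshing when it expires):
# e.g. /etc/grafana/provisioning/datasources/openshift-prometheus.yaml (hypothetical path)
apiVersion: 1
datasources:
- name: OpenShift-Prometheus
  type: prometheus
  access: proxy
  url: https://prometheus-k8s-openshift-monitoring.apps.xxxx.xxxx.xxxx.com
  jsonData:
    tlsSkipVerify: true
    httpHeaderName1: Authorization
  secureJsonData:
    httpHeaderValue1: Bearer sha256~...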

Use the Kubernetes REST API without kubectl

You can simply interact with K8s using its REST API. For example, to get pods:
curl http://IPADDR/api/v1/pods
However, I can't find any example of authentication based only on curl or REST. All the examples show the usage of kubectl as a proxy or as a way to get credentials.
If I already own the .kubeconfig, and nothing else, is there any way to send the HTTP requests directly (e.g. with a token) without using kubectl?
The kubeconfig file you download when you first install the cluster includes a client certificate and key. For example:
clusters:
- cluster:
    certificate-authority-data: ...
    server: https://api.cluster1.ocp.virt:6443
  name: cluster1
contexts:
- context:
    cluster: cluster1
    user: admin
  name: admin
current-context: admin
preferences: {}
users:
- name: admin
  user:
    client-certificate-data: ...
    client-key-data: ...
If you extract the client-certificate-data and client-key-data to files, you can use them to authenticate with curl. To extract the data:
$ yq -r '.users[0].user."client-certificate-data"' kubeconfig | base64 -d > cert
$ yq -r '.users[0].user."client-key-data"' kubeconfig | base64 -d > key
And then using curl:
$ curl -k --cert cert --key key \
    'https://api.cluster1.ocp.virt:6443/api/v1/namespaces/default/pods?limit=500'
{
  "kind": "PodList",
  "apiVersion": "v1",
  "metadata": {
    "resourceVersion": "22022"
  },
  "items": []
}
Alternatively, if your .kubeconfig has tokens in it, like this:
[...]
users:
- name: your_username/api-clustername-domain:6443
  user:
    token: sha256~...
Then you can use that token as a bearer token:
$ curl -k https://api.mycluster.mydomain:6443/ -H 'Authorization: Bearer sha256~...'
...but note that those tokens typically expire after some time, while the certificates should work indefinitely (unless they are revoked somehow).
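For consistency with the certificate extraction above, you can pull the token out with yq as well (a sketch assuming the same single-user kubeconfig layout):
$ token=$(yq -r '.users[0].user.token' kubeconfig)
$ curl -k -H "Authorization: Bearer $token" \
    'https://api.mycluster.mydomain:6443/api/v1/namespaces/default/pods?limit=500'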

Kubernetes deployment using YAML file

Can I define a DeploymentConfig kind in a Kubernetes YAML file where the apiVersion is an OpenStack one?
If yes, how?
If I understand your question in terms of Red Hat OpenStack: you will need OpenShift installed on top of Red Hat OpenStack.
And the apiVersion will be:
"apiVersion": "apps.openshift.io/v1",
OpenShift API reference
https://docs.openshift.com/container-platform/3.7/rest_api/apis-apps.openshift.io/v1.DeploymentConfig.html
HTTP REQUEST
POST /apis/apps.openshift.io/v1/deploymentconfigs HTTP/1.1
Authorization: Bearer $TOKEN
Accept: application/json
Connection: close
Content-Type: application/json

{
  "kind": "DeploymentConfig",
  "apiVersion": "apps.openshift.io/v1",
  ...
}
CURL REQUEST
$ curl -k \
    -X POST \
    -d @- \
    -H "Authorization: Bearer $TOKEN" \
    -H 'Accept: application/json' \
    -H 'Content-Type: application/json' \
    https://$ENDPOINT/apis/apps.openshift.io/v1/deploymentconfigs <<'EOF'
{
  "kind": "DeploymentConfig",
  "apiVersion": "apps.openshift.io/v1",
  ...
}
EOF
For reference, below is the Red Hat OpenShift documentation link for DeploymentConfig creation:
https://docs.openshift.com/container-platform/4.2/applications/deployments/what-deployments-are.html#deployments-and-deploymentconfigs_what-deployments-are
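To make the elided body above concrete, here is a minimal DeploymentConfig sketch (the name, label, and image are hypothetical placeholders; a real spec will carry whatever your app needs):
apiVersion: apps.openshift.io/v1
kind: DeploymentConfig
metadata:
  name: example-app            # hypothetical name
spec:
  replicas: 1
  selector:
    app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
      - name: example-app
        image: quay.io/example/example-app:latest   # hypothetical image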

How to set secondary key for kubernetes ingress basic-auth

I want to have an ingress for all my services in the k8s cluster and give the ingress basic auth. But for auth rotation, I want to support a secondary credential for users so the endpoint can still be reached while they re-generate the primary key.
I can currently follow this guide to set up an ingress with a single basic auth credential.
Adapting the guide, you can put multiple usernames and passwords in the auth file you're using to generate the basic auth secret. Specifically, if you run the htpasswd command without the -c flag, e.g. htpasswd <filename> <username>, it will add an entry to the file rather than creating a new file from scratch:
$ htpasswd -c auth foo
New password: <bar>
Re-type new password: <bar>
Adding password for user foo
$ cat auth
foo:$apr1$isCec65Z$JNaQ0GJCpPeG8mR1gYsgM1
$ htpasswd auth user2
New password: <pass2>
Re-type new password: <pass2>
Adding password for user user2
$ cat auth
foo:$apr1$isCec65Z$JNaQ0GJCpPeG8mR1gYsgM1
user2:$apr1$.FsOzlqA$eFxym7flDnoDtymRLraA2/
If you've already created the secret in the first place via the given command:
$ kubectl create secret generic basic-auth --from-file=auth
You can then update the secret with this trick:
$ kubectl create secret generic basic-auth --from-file=auth \
    --dry-run -o yaml | kubectl apply -f -
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
secret/basic-auth configured
You can confirm setting the secret worked:
$ kubectl get secret basic-auth -ojsonpath={.data.auth} | base64 -D
foo:$apr1$isCec65Z$JNaQ0GJCpPeG8mR1gYsgM1
user2:$apr1$.FsOzlqA$eFxym7flDnoDtymRLraA2/
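For reference, the ingress from the guide wires the secret in through annotations; a sketch using the standard ingress-nginx annotations (the host matches the tests below, while the ingress and backend service names are placeholders):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-with-auth
  annotations:
    # type of authentication
    nginx.ingress.kubernetes.io/auth-type: basic
    # name of the secret that contains the user/password definitions
    nginx.ingress.kubernetes.io/auth-secret: basic-auth
    # message shown when authentication is required
    nginx.ingress.kubernetes.io/auth-realm: 'Authentication Required'
spec:
  rules:
  - host: foo.bar.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: http-svc      # hypothetical backend service
            port:
              number: 80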
Finally, you can test basic auth with both usernames and passwords is working:
$ curl http://<minikube_ip>/ -H 'Host: foo.bar.com' \
-s -w"%{http_code}" -o /dev/null
401
$ curl http://<minikube_ip>/ -H 'Host: foo.bar.com' \
-u 'wronguser:wrongpass' \
-s -w"%{http_code}" -o /dev/null
401
$ curl http://<minikube_ip>/ -H 'Host: foo.bar.com' \
-u 'foo:bar' \
-s -w"%{http_code}" -o /dev/null
200
$ curl http://<minikube_ip>/ -H 'Host: foo.bar.com' \
-u 'user2:pass2' \
-s -w"%{http_code}" -o /dev/null
200

How to access model microservice deployed behind Istio and Dex?

I built a deploy pipeline to serve ML models using Kubeflow (v0.6) and Seldon Core, but now that the models are deployed I can't figure out how to get past the auth layer and consume the services.
My kubernetes instance is on bare-metal and setup is identical to this: https://www.kubeflow.org/docs/started/getting-started-k8s/
I was able to follow these instructions to launch the example-app and issue an IDToken for a staticClient, but when I pass the token as 'Authorization: Bearer' I get redirected to the Dex logon page.
(part of) Dex configMap:
staticClients:
- id: kubeflow-authservice-oidc
  redirectURIs:
  # After authenticating and giving consent, dex will redirect to
  # this url for the specific client.
  - https://10.50.11.180/login/oidc
  name: 'Kubeflow AuthService OIDC'
  secret: [secret]
- id: model-consumer-1
  secret: [secret]
  redirectURIs:
  - 'http://127.0.0.1:5555/callback'
When I try to access the service:
curl -H "Authorization: Bearer $token" -k https://10.50.11.180/seldon/kubeflow/machine-failure-classifier-6e462a70-a995-11e9-b30b-080027dfd9f4/api/v0.1/predictions
Found.
What am I missing? :(
I found out that serving Seldon models with Istio worked better if they were in a namespace other than 'kubeflow'.
I followed these instructions: https://docs.seldon.io/projects/seldon-core/en/latest/examples/istio_canary.html (created a new gateway and namespaces) and was able to bypass Dex.
Have you tried VirtualService?
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: <name-of-your-choice>
spec:
  gateways:
  - <your-gateway>
  hosts:
  - <your-host>
  http:
  - match:
    - uri:
        prefix: "<your-api-path-uri>"
    rewrite:
      uri: "<your-rewrite-logic>"
    route:
    - destination:
        host: <name-of-your-service>.<namespace>.svc.<cluster-domain>
        port:
          number: <port-of-the-service>
The VirtualService will help you route traffic as specified.
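A quick usage sketch, assuming the manifest above is saved as virtualservice.yaml and the placeholders are filled in for your gateway and host:
$ kubectl apply -f virtualservice.yaml
$ curl -k "https://<your-host>/<your-api-path-uri>" -H "Authorization: Bearer $token"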
I'm three years too late. Try to get your cookie from the dashboard using the browser developer console:
document.cookie
Replace XXX with your cookie:
curl -k https://10.50.11.180/seldon/kubeflow/machine-failure-classifier-6e462a70-a995-11e9-b30b-080027dfd9f4/api/v0.1/predictions \
    --data-urlencode 'json={"data":{"ndarray":[["try to stop flask from using multiple threads"]]}}' \
    -H "Cookie: authservice_session=XXX" -v