I am having an issue with authentication using Istio in Azure AKS. As far as I can tell I am generating a valid token, but I get a 403 error.
The Istio authorization config for the app is:
kind: AuthorizationPolicy
metadata:
  name: entitlements-jwt-authz
  namespace: osdu
spec:
  selector:
    matchLabels:
      app: entitlements-azure
  action: DENY
  rules:
  - from:
    - source:
        notRequestPrincipals: ["*"]
    to:
    - operation:
        notPaths: ["/",
                   "*/v2/api-docs",
                   "*/swagger-resources", "*/swagger-ui.html",
                   "*/actuator/health",
                   "/entitlements/v1/swagger-resources/*",
                   "/entitlements/v1/webjars/*"]
I want to change this policy to simply allow everything so I can isolate whether this is a token issue, but I am not sure how to change the policy, as Kubernetes is not something I am experienced with. Can somebody point me in the right direction?
I am not sure if I have given enough information, so please ask if you need more.
Thanks
Mike
I am guessing you have an access_token that is not compatible with Istio's signature verification.
Go to jwt.io and analyze your token. Does the header contain a nonce? If yes, that's the problem. Azure is not really transparent about this, but as far as I understand, the signature must be verified using JWKS and the nonce, which Istio can't do.
For me, the solution was to change the scope from the default MS Graph scope to a custom one. You can create a scope in your app registration or use the default one: <appId>/.default, e.g. abce-1234-ghkli-5677/.default.
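If it helps, this is roughly how you would request a token against that custom scope with the client credentials flow (the tenant ID, client ID, and secret below are placeholders, and your app may use a different flow):
```
curl -X POST "https://login.microsoftonline.com/<tenant-id>/oauth2/v2.0/token" \
  -d "grant_type=client_credentials" \
  -d "client_id=<client-id>" \
  -d "client_secret=<client-secret>" \
  -d "scope=<appId>/.default"
```
The resulting access_token should no longer carry a nonce in its header, so Istio should be able to verify it against the JWKS endpoint.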
I am able to access my Kubernetes dashboard UI by going to the URL below, providing the token, and hitting the sign-in button on the login screen:
http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/#/workloads?namespace=default
Is there a way I can pass the token via the URL itself so the dashboard UI opens in a logged-in state and I don't need to manually paste the token and hit sign in?
I am looking for something like this (suggested by ChatGPT, which unfortunately didn't work; it just opens the login screen again):
http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/?token=<authentication-token>
We can access the Kubernetes dashboard UI in two ways:
Bearer token
KubeConfig
To answer your question: we can't log in by encoding the token in the URL. But we can use the Skip option to avoid providing the token every time we log in.
To enable the Skip button in the UI, we need to add the following flags to the dashboard deployment under the args section:
--enable-skip-login
--disable-settings-authorizer
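One way to add them is to edit the deployment directly (assuming the deployment is named kubernetes-dashboard; the namespace may be kube-system or kubernetes-dashboard depending on how you installed it):
```
kubectl -n kube-system edit deployment kubernetes-dashboard
```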
After adding these flags, the deployment looks something like this:
spec:
  containers:
  - name: kubernetes-dashboard
    image: k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1
    ports:
    - containerPort: 8443
      protocol: TCP
    args:
      - --enable-skip-login
      - --disable-settings-authorizer
      - --auto-generate-certificates
      # If not specified, Dashboard will attempt to auto discover the API server and connect
      # to it. Uncomment only if the default does not work.
      # - --apiserver-host=http://my-address:port
    volumeMounts:
Now when you redeploy the dashboard you will be able to see the Skip button. Skipping the login saves a lot of time when testing locally deployed clusters.
Note: this is not a recommended approach from a security standpoint. However, if you are deploying in an isolated testing environment you can proceed with the above steps.
For more information, refer to this link.
I have an authentication flow with auth0 that exists outside of a declaratively configured Kong Gateway setup but still want to validate tokens at the edge. It's expected that calling users will always pass an authorization header with a bearer token that they've received from a login endpoint that calls an internal auth service.
After reading the Kong docs, it seems like you need to assign the pubkey/JWK to a consumer, which I don't quite understand. I don't want to manage users through Kong. I have seen mention of the consumer being able to be an "anonymous" user, which may be a blanket way to apply this, but I'm unsure how to configure it declaratively.
Is this possible without writing custom middleware?
I believe you can use the Kong JWT Signer plugin to validate your bearer token with the JWK server, even without a consumer, by leaving access_token_consumer blank in the configuration and using other claims to verify the JWT token.
By following these instructions, you should be able to understand the inner workings of the plugin and figure it out from there.
The cleanest way to do this is with Plus/Enterprise Kong, using their OpenID Connect plugin. I want to keep this limited to their open source Kong deployed in a single container with declarative configuration, however. I've managed to figure out how to accomplish this as follows.
You can create a consumer with any username, and the jwt_secrets field is applied to the plugin somehow. I have no idea how or why. Here is an example kong.yaml:
_format_version: "2.1"
services:
- name: mock-grpc-service-foo
  host: host.docker.internal
  port: 4770
  protocol: grpc
  routes:
  - name: foo-routes
    protocols:
    - http
    paths:
    - /hello
    plugins:
    - name: grpc-gateway
      config:
        proto: proto/foo.proto
consumers:
- username: anonymous # this can be anything
  jwt_secrets:
  - algorithm: RS256
    key: https://company_name_here.auth0.com/
    rsa_public_key: |
      -----BEGIN PUBLIC KEY-----
      ... pub key here ...
      -----END PUBLIC KEY-----
    secret: this-does-not-seem-to-matter
plugins:
- name: jwt
  service: mock-grpc-service-foo
You can derive your public key from your Auth0 JWK like so:
curl https://COMPANYNAME.auth0.com/pem > COMPANYNAME.pem
then
openssl x509 -pubkey -noout -in COMPANYNAME.pem > pubkey.pem
I'm doing this with REST->gRPC mappings, but you can do the same with regular routing. You can apply this plugin globally, to services, or to routes.
Declaratively configuring this opens up a whole new can of worms, since you need to do templating with an entrypoint script to inject the correct public key for each environment this is deployed in (provided you have different Auth0 tenants), but this gets you a lot of the way there.
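For completeness, here is a rough sketch of how this declarative file can be loaded into a single DB-less Kong container and tested; the image tag, mount path, and port mapping are assumptions, so adjust them to your setup:
```
# Run Kong in DB-less mode with the declarative config mounted in.
docker run -d --name kong-dbless \
  -v "$(pwd)/kong.yaml:/usr/local/kong/declarative/kong.yaml" \
  -e "KONG_DATABASE=off" \
  -e "KONG_DECLARATIVE_CONFIG=/usr/local/kong/declarative/kong.yaml" \
  -p 8000:8000 \
  kong:2.1

# A request without a valid token should be rejected by the jwt plugin,
# while one with a valid RS256 token from your Auth0 tenant should reach the service.
curl -i http://localhost:8000/hello
curl -i -H "Authorization: Bearer $ACCESS_TOKEN" http://localhost:8000/hello
```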
I want to create a GCP Load Balancer path redirect rule programmatically using the gcloud tool.
As a test, I created one manually through the GCP Console web interface.
For my manually created rule, gcloud compute url-maps describe my-url-map returns something that looks like:
creationTimestamp: '2021-02-23T20:26:04.825-08:00'
defaultService: https://www.googleapis.com/compute/v1/projects/my-project/global/backendServices/my-backend-service
fingerprint: abcdefg=
hostRules:
- hosts:
  - blah.my-site.com
  pathMatcher: path-matcher-1
id: '12345678'
kind: compute#urlMap
name: my-url-map
pathMatchers:
- defaultService: https://www.googleapis.com/compute/v1/projects/my-project/global/backendServices/my-backend-service
  name: path-matcher-1
  pathRules:
  - paths:
    - /my-redirect-to-root
    urlRedirect:
      httpsRedirect: false
      pathRedirect: /
      redirectResponseCode: MOVED_PERMANENTLY_DEFAULT
      stripQuery: false
selfLink: https://www.googleapis.com/compute/v1/projects/my-project/global/urlMaps/my-url-map
What I would like to do is to recreate the urlRedirect rule above (redirecting from /my-redirect-to-root to /), but using the gcloud tool.
Looking through the gcloud docs I can't seem to find anything referring to redirects. Is this not possible via the gcloud tool? And if not, is there any other solution for creating these redirect rules programmatically?
I'm basically trying to get around another GCP issue to do with GCS URLs for static websites by using Load Balancer redirects for each folder in our static site (~400 folders).
Currently, the Cloud SDK does not support creating URL maps with redirects.
If you think that functionality should be available, you can create a Feature Request in the Public Issue Tracker to have this option added in the future.
For now, you can use the API, which allows creating URL maps with redirects.
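As a sketch of the API route (the project, URL map name, backend service, and fingerprint below come from the describe output in the question; adjust them to your own project), patching the existing URL map could look something like this:
```
# urlmap-patch.json: adds the redirect rule to the existing path matcher.
cat > urlmap-patch.json <<'EOF'
{
  "fingerprint": "abcdefg=",
  "pathMatchers": [
    {
      "name": "path-matcher-1",
      "defaultService": "https://www.googleapis.com/compute/v1/projects/my-project/global/backendServices/my-backend-service",
      "pathRules": [
        {
          "paths": ["/my-redirect-to-root"],
          "urlRedirect": {
            "pathRedirect": "/",
            "httpsRedirect": false,
            "redirectResponseCode": "MOVED_PERMANENTLY_DEFAULT",
            "stripQuery": false
          }
        }
      ]
    }
  ]
}
EOF

curl -X PATCH \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  -d @urlmap-patch.json \
  "https://compute.googleapis.com/compute/v1/projects/my-project/global/urlMaps/my-url-map"
```
Note that the fingerprint must match the current value from gcloud compute url-maps describe, otherwise the API rejects the patch.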
Is ingress-nginx's external-auth secure when using an external service like httpbin? The example connects to https://httpbin.org/basic-auth/user/passwd with the user and password inside of the URL.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/auth-url: https://httpbin.org/basic-auth/user/passwd
It seems to work fine when I try it myself (read: when inspecting with curl, I cannot see this URL), but maybe I'm missing something.
Is this secure for a production environment?
Reference: https://kubernetes.github.io/ingress-nginx/examples/auth/external-auth/
General:
It is never OK to put a username and password in a GET URL, in any environment. Credentials should be sent in a POST body, and encrypted at that.
To your problem:
The basic auth example is just an FYI. Use OAuth, which is pretty common:
https://kubernetes.github.io/ingress-nginx/examples/auth/oauth-external-auth/
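That example fronts the protected Ingress with oauth2-proxy; the auth annotations end up looking roughly like this (taken from the linked example, and assuming oauth2-proxy is exposed under /oauth2 on the same host):
```
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: external-auth-oauth2
  annotations:
    nginx.ingress.kubernetes.io/auth-url: "https://$host/oauth2/auth"
    nginx.ingress.kubernetes.io/auth-signin: "https://$host/oauth2/start?rd=$escaped_request_uri"
```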
We are running a couple of k8s clusters on Azure AKS.
The service (a Ghost blog) is behind the nginx ingress and secured with a cert from Let's Encrypt. All of that works fine, but the redirect behavior is what I am having trouble with.
The Ingress correctly redirects from http://whatever.com to https://whatever.com — the issue is that it does so using a 308 redirect, which strips all post/page meta anytime a user shares a page from the site.
The issue results in users who share any page of the site on most social properties receiving a 'Preview Link' where the title of the page and the page meta preview do not work and are instead replaced with '308 Permanent Redirect' text.
From the ingress-nginx docs over here I can see that this is the intended behavior (i.e. a 308 redirect); what I believe is not intended is the interaction with social sharing services when those services attempt to create a page preview.
While the issue would be solved by Facebook (or Twitter, etc.) pointing directly to the https site by default, I currently have no way to force those sites to look to https for the content that will be used to create the previews.
Setting the Permanent Redirect Code
I can also see that it looks like I should be able to set the redirect code to whatever I want (I believe a 301 redirect will allow Facebook et al. to correctly pull post/page snippet meta); docs on that are found here.
The problem is that when I add the redirect-code annotation as specified:
nginx.ingress.kubernetes.io/permanent-redirect-code: "301"
I still get a 308 redirect on my resources, despite being able to see (from my kubectl proxy) that the redirect-code annotation was correctly applied. For reference, my full list of annotations on my Ingress looks like this:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ghost-ingress
  annotations:
    kubernetes.io/tls-acme: "true"
    nginx.ingress.kubernetes.io/permanent-redirect-code: "301"
To reiterate, my question is: what is the correct way to force a redirect to https via a custom response code (in my case 301)?
My guess is the TLS redirect shadows the nginx.ingress.kubernetes.io/permanent-redirect-code annotation.
You can actually change the ConfigMap for your nginx-configuration so that the default redirect is 301. That's the configuration your nginx ingress controller uses for nginx itself. The ConfigMap looks like this:
apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
  name: nginx-configuration
  namespace: ingress-nginx
data:
  use-proxy-protocol: "true"
  http-redirect-code: "301"
You can find more about the ConfigMap options here. Note that if you change the ConfigMap you'll have to restart your nginx-ingress-controller pod.
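For example, one way to restart it is to delete the controller pod and let the Deployment recreate it (the namespace and label here assume a standard ingress-nginx install):
```
kubectl -n ingress-nginx delete pod -l app.kubernetes.io/name=ingress-nginx
```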
You can also shell into the nginx-ingress-controller pod and see the actual nginx configs that the controller creates:
kubectl -n ingress-nginx exec -it nginx-ingress-controller-xxxxxxxxxx-xxxxx bash
www-data@nginx-ingress-controller-xxxxxxxxx-xxxxx:/etc/nginx$ cat /etc/nginx/nginx.conf
These directions are for Azure AKS users, but this solution for Facebook / social property preview links showing as '308 Permanent Redirect' will probably work on any cloud provider (though it has not been tested); you would just need to change the way you log in / get your credentials, etc.
Thanks to Rico for the solution! Since this is only tested with Facebook, you may or may not want to go the ConfigMap application route (which Rico mentions above); this walks through manually editing the ConfigMap as opposed to using kubectl apply -f to apply one saved locally.
Pick up Azure credentials for your cluster (az login)
Assume the role for your cluster: az aks get-credentials --resource-group yourGroup --name your-cluster
Browse your cluster: az aks browse --resource-group yourGroup --name your-cluster
Navigate to the namespace containing your ingress-nginx containers (not the backend services, although they could be in the same namespace).
On the left-hand navigation menu (just above Settings), find the 'ConfigMaps' tab and click it.
Edit the 'Data' element of the YAML and add the following line (note the quotes around both the name and the number in the key/value):
"data": {
"some-other-setting-here": "false",
"http-redirect-code": "301"
}
You will need a comma after each key/value line except the last.
Restart your nginx-controller pod by deleting it; make SURE you don't delete the deployment like I did.
If you want to be productive, you can upgrade your nginx install (from Helm), which will restart / re-create the container in the process, by using:
helm upgrade ngx-ingress stable/nginx-ingress
Where ngx-ingress is the name of your helm install. Also note that using the '--reuse-values' flag will cause your upgrade to fail (re: https://github.com/helm/helm/issues/4337)
If you don't know the name you used for nginx when you installed it from Helm originally you can use helm list to find it.
Finally, to test and make sure your redirects are using the correct ConfigMap code, curl your http site with:
curl myhttpdomain.com
You should receive something like this:
```
<html>
<head><title>301 Moved Permanently</title></head>
<body bgcolor="white">
<center><h1>301 Moved Permanently</h1></center>
<hr><center>nginx/1.15.3</center>
</body>
</html>
```
One important thing to note here is that if you are making the change to a 301 redirect to try to fix the preview link for Facebook or one of the other social media properties (Twitter, etc.), then in all likelihood this will not fix any link to any page / post that you have already linked to, at least not right away.
The social properties all use intense caching to limit their resource usage, but you can check to see whether the above fixes your preview link issue by linking to a NEW page / post that you have not previously referenced.
Be Aware of Implications for 'POST'
So the major reason that nginx-ingress uses a 308 code is that it keeps the 'body' / payload intact in cases where you are sending a POST request (as opposed to a normal GET request like you do with a browser, etc.).
For me this wasn't a concern, but if you are for whatever reason posting to the http address and expecting it to be redirected seamlessly, that will probably not work after you swap to the 301 redirect discussed in this post.
HOWEVER, if you are not expecting a seamless redirect when sending POST requests (I think most people probably are not; I know I am not), then I think this is the best way to fix the Facebook '308 Permanent Redirect' behavior.