User access using kubectl - kubernetes

I want to set up multiple accounts that each have access only to their own namespace. We tried authorization mode ABAC, but when using kubectl we get "error: couldn't read version from server: the server does not allow access to the requested resource", and it seems to be a bug. Is there another way to do it?

Before attempting to access your resources, kubectl first makes requests to the server's /version and /api endpoints to confirm compatibility and negotiate the API version. In ABAC, the /version and /api endpoints are considered "nonResourcePaths", and those also require authorization. You can add a rule to your ABAC policy file allowing all users read-only access to nonResourcePaths as follows:
{"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user":"*", "nonResourcePath": "*", "readonly": true}}
From there, you can make it more restrictive if you need to.
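To then scope each account to its own namespace, a per-user rule can be added alongside it. A minimal sketch, assuming a hypothetical user "alice" who should only touch the "alice-ns" namespace:
{"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user": "alice", "namespace": "alice-ns", "resource": "*", "apiGroup": "*"}}
Note that the apiserver only reads the ABAC policy file at startup, so it must be restarted after the file changes.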

Related

Unable to list audit devices with "sudo" and "list" capabilities applied to token policy

I'm trying to issue a vault CLI call to list the currently enabled audit devices. I have defined a policy against sys/audit which defines "sudo" and "list" capabilities (among others), and I have been issued a token with that policy applied. However, when I run vault audit list, I get a "permission denied" error.
What capabilities do I need to add to my policy for this to work? This is being done in a bootstrap script. (The additional capabilities are currently included because I do an 'enable' if the result of 'list' shows that auditing is disabled. I'm trying to limit the permissions of this token to the absolute minimum required for the two operations needed in this context.)
Policy (named "aud"):
path "sys/audit/*" {
capabilities = ["list", "read", "create", "update", "sudo"]
}
Token issuance (done elsewhere, logged in with root token):
vault token create --id=my-token --policy=aud
My script (where I attempt to use the token to login and check audit device status):
vault login my-token
vault audit list
vault audit enable file <options>
Error:
Error listing audits: Error making API request.
URL: GET http://< obfuscated >:8200/v1/sys/audit
Code: 403. Errors:
* 1 error occurred:
* permission denied
The subsequent 'vault audit enable' call works, so I know the capabilities are sufficient for that. But I'm unsure what change I need to make so that vault audit list works, since I already have "sudo" along with "list" and "read" capabilities.
It turns out that a wildcard policy for path "sys/audit/*" won't match against a request to sys/audit (no suffix). So in fact two separate path declarations are necessary.
First, for the vault audit list, this policy is sufficient:
path "sys/audit" {
capabilities = ["list", "read", "sudo"]
}
... and then for vault audit enable, the broader policy against the wildcard:
path "sys/audit/*" {
capabilities = ["create", "update", "sudo"]
}
This second one could be tightened up to only match against sys/audit/file as well.
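Putting the two together (and tightening the wildcard to the file audit device, as suggested), the complete "aud" policy would look like the following sketch; the sys/audit/file path is an assumption based on enabling the file device specifically:
path "sys/audit" {
  capabilities = ["list", "read", "sudo"]
}

path "sys/audit/file" {
  capabilities = ["create", "update", "sudo"]
}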

Authentication with OpenShift API running on IBM Cloud fails with 401 unauthorized error

I'm trying to build an application to manage my OpenShift cluster on IBM Cloud, and the first step is to authenticate against both IBM Cloud and the OpenShift cluster.
https://cloud.ibm.com/docs/openshift?topic=openshift-cs_api_install#kube_api
I followed the steps described in the link above and successfully obtained all the tokens, including 'access_token', 'id_token' and 'refresh_token'. Among them, the 'id_token' is supposed to be used to authenticate against the OpenShift API.
With the access_token I can call the IBM Cloud API successfully, for example to obtain account and cluster information.
However, when I use the id_token to call the OpenShift API, it fails with the following error. This happens even for the '/version' API, which can normally be accessed without providing a bearer token.
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "Unauthorized",
  "reason": "Unauthorized",
  "code": 401
}
I can verify that my account has the correct service roles assigned as described here, and I can see the corresponding roles with the 'ibm' prefix assigned in the OpenShift web portal as well.
Can anyone please verify that the instructions in the first link above are still valid, or does anyone have a clue about what might have gone wrong?
[Update]
To help with troubleshooting, I paste a sample of the tokens here. This is what I get for step 3 in the 'Working with your cluster by using the Kubernetes API' section in the link; it is a bit lengthy:
{
"access_token": "eyJraWQiOiIyMDIxMDIxOTE4MzUiLCJhbGciOiJSUzI1NiJ9.eyJpYW1faWQiOiJJQk1pZC0yNzAwMDU1WERHIiwiaWQiOiJJQk1pZC0yNzAwMDU1WERHIiwicmVhbG1pZCI6IklCTWlkIiwianRpIjoiMDY1OWI5MjktMDE1Zi00MDg0LTgwZWMtYmFhZjBhYTBkNDQ4IiwiaWRlbnRpZmllciI6IjI3MDAwNTVYREciLCJnaXZlbl9uYW1lIjoi6Iic5a2QIiwiZmFtaWx5X25hbWUiOiLpmYgiLCJuYW1lIjoi6Iic5a2QIOmZiCIsImVtYWlsIjoicmFmb3VsQDE2My5jb20iLCJzdWIiOiJjaHN6Y2hlbkBjbi5pYm0uY29tIiwiYXV0aG4iOnsic3ViIjoiY2hzemNoZW5AY24uaWJtLmNvbSIsImlhbV9pZCI6IklCTWlkLTI3MDAwNTVYREciLCJuYW1lIjoi6Iic5a2QIOmZiCIsImdpdmVuX25hbWUiOiLoiJzlrZAiLCJmYW1pbHlfbmFtZSI6IumZiCIsImVtYWlsIjoicmFmb3VsQDE2My5jb20ifSwiYWNjb3VudCI6eyJ2YWxpZCI6dHJ1ZSwiYnNzIjoiOWM5NzI1YmQxM2VhNDU2Nzg4YWMwZWU3OGQ4NjQ2ZTEiLCJpbXNfdXNlcl9pZCI6Ijg4NzM1NzYiLCJmcm96ZW4iOnRydWUsImltcyI6IjM0NjU1MiJ9LCJpYXQiOjE2MTQyNTU5ODYsImV4cCI6MTYxNDI1OTU4NiwiaXNzIjoiaHR0cHM6Ly9pYW0uY2xvdWQuaWJtLmNvbS9pZGVudGl0eSIsImdyYW50X3R5cGUiOiJ1cm46aWJtOnBhcmFtczpvYXV0aDpncmFudC10eXBlOmFwaWtleSIsInNjb3BlIjoiaWJtIG9wZW5pZCBjb250YWluZXJzLWt1YmVybmV0ZXMiLCJjbGllbnRfaWQiOiJrdWJlIiwiYWNyIjoxLCJhbXIiOlsicHdkIl0sInN1Yl85Yzk3MjViZDEzZWE0NTY3ODhhYzBlZTc4ZDg2NDZlMSI6ImNoc3pjaGVuQGNuLmlibS5jb20iLCJpYW1faWRfOWM5NzI1YmQxM2VhNDU2Nzg4YWMwZWU3OGQ4NjQ2ZTEiOiJJQk1pZC0yNzAwMDU1WERHIiwicmVhbG1lZF9zdWJfOWM5NzI1YmQxM2VhNDU2Nzg4YWMwZWU3OGQ4NjQ2ZTEiOiJJQk1pZC1jaHN6Y2hlbkBjbi5pYm0uY29tIn0.Rm3F0UKz9Aq3-1xXMmkFi0UkENIvQUkRo6qhtWaG3LKBH5HHsZbAQeJUhKqXYbI643nj2ssDP2U50BVv-6zbpfmyVncP5Z5Dmi620mi2QesduRQaH1XlC-l7KuF3uT0hJ_9FSD-0Wqi5ph0pkKxHJ-BmLkHC-4F0NByiUtwIpwyTpthuzwC251XZsQ9Ya8gzCxHB9DFb3tzOF3cupVVZmc2mMJbv4JuTSnP00H5rOT4yIzeI0Lqm6LhDpMRJ4P8glmIxmU6fag42P94pFNf3jEzIZGl49NINiWXlKbAleij3vSouobtYvrBmxWQF4KpuwKPEI-bMf1zpsHPYBHWidg",
"id_token": "eyJraWQiOiIyMDIxMDIxOTE4MzUiLCJhbGciOiJSUzI1NiJ9.eyJpYW1faWQiOiJJQk1pZC0yNzAwMDU1WERHIiwiaXNzIjoiaHR0cHM6Ly9pYW0uY2xvdWQuaWJtLmNvbS9pZGVudGl0eSIsInN1YiI6ImNoc3pjaGVuQGNuLmlibS5jb20iLCJhdWQiOiJrdWJlIiwiZ2l2ZW5fbmFtZSI6IuiInOWtkCIsImZhbWlseV9uYW1lIjoi6ZmIIiwibmFtZSI6IuiInOWtkCDpmYgiLCJlbWFpbCI6InJhZm91bEAxNjMuY29tIiwiZXhwIjoxNjE0MjU5NTg2LCJzY29wZSI6ImlibSBvcGVuaWQgY29udGFpbmVycy1rdWJlcm5ldGVzIiwiaWF0IjoxNjE0MjU1OTg2LCJhdXRobiI6eyJzdWIiOiJjaHN6Y2hlbkBjbi5pYm0uY29tIiwiaWFtX2lkIjoiSUJNaWQtMjcwMDA1NVhERyIsIm5hbWUiOiLoiJzlrZAg6ZmIIiwiZ2l2ZW5fbmFtZSI6IuiInOWtkCIsImZhbWlseV9uYW1lIjoi6ZmIIiwiZW1haWwiOiJyYWZvdWxAMTYzLmNvbSJ9LCJzdWJfOWM5NzI1YmQxM2VhNDU2Nzg4YWMwZWU3OGQ4NjQ2ZTEiOiJjaHN6Y2hlbkBjbi5pYm0uY29tIiwiaWFtX2lkXzljOTcyNWJkMTNlYTQ1Njc4OGFjMGVlNzhkODY0NmUxIjoiSUJNaWQtMjcwMDA1NVhERyIsInJlYWxtZWRfc3ViXzljOTcyNWJkMTNlYTQ1Njc4OGFjMGVlNzhkODY0NmUxIjoiSUJNaWQtY2hzemNoZW5AY24uaWJtLmNvbSIsImdyb3Vwc185Yzk3MjViZDEzZWE0NTY3ODhhYzBlZTc4ZDg2NDZlMSI6WyJkZXZvcHMtYWRtaW4tdnBuLW9ubHkiLCJkZXZvcHMtZGVmYXVsdC12cG4tb25seSIsIklQV0MgQWRtaW4iXX0.Y42KUJRGgZA9OV164GAKSF0W5rRNGf3x32YXrAo5UvKhpOK0k4r_hwZU5BZhI2y3t-UqM7lNOIxexpft2Zmc9ApQ6BlVN-iN1jcfBzxmrUPMObpc1-vDrAc9Sq84J8nYzy1Rk32ydFHeb3V2iDhJn14_NOnXwhuz9EFkSg0uUZHugTAPx5A-VcdrehceX0yOqAOfX5EzTtmHoI8-JQbfNt8pyBSJs8Eoag7_mtfNgx13bP_-M8W7tltCSHhPEO46gUurPFkvasHggConPQ_oBw3ANAvY8tDfivrGmdiR2Q-uc4SnFAjOgC77YskDLskBcOeehhBvxwDkyufztzqM6w",
"refresh_token": "OKDsw87zCujUXCmb4LZ3-DFQN7lUa0ejdqau_fL3Voms7M7DaKYgO07gZW29VQbcwdGc3z8jrQjjf_4gOutKyRCZ6LyEiSEKTZQ6Kovwqji02Puxu3fzIFB9f8-a1hMlkTtP4u32_FTCmOZA6ARvzxEyRX36CtQEzSVz-zVMsvPxdgyztUEWPTtvbr7aPn4eq209OzTGzTyPCBFR-N0gVp2tKLbIrGmyi_vgC-6xLRvR2nWGJsUwaaBjXwvICeCBY3qRJ90VyP1krBSHa72f1XJWpvLnBWHN8qo1dfPknHvknlEZ3kMUA87KZkynkgiVifhRq90oNAKYHhKJ4XRs2tyz05zW5a8qEhgoIVsslUzDLLNU1btRF_3g587dKckPzEav3BgQlCik4im8gIC74HFGZOz4P7z9QKLJHQY7ElDillH8pLRjW8Dx0yZvn8Yo5rSqJSj0zUmJxNZMUNEpF_DTQhHCePNOWu1_1q4o5cIb_Mv-mGMMVwrVUsJYUyaeV9O5cWl58eWlHQxS3SbuAjsBrzfSdcrIyFe5aQViyL_sL1-o54xFrMJPC3prPD25TS4vUOwAy7tc9r1AGZG00YUGaxPwzKcOWBI4DqksIiEKPOtcm3k0y24TuwRPa0AK-9jfYAzkx3rciBYGKbq1WOFjX-p6LH67ayxVUJcQcjSMe-35LZnsHQtc0VOxNHjJKdJiHsKOYEDY1Nz0k4zGZr1EZ6j7w4tLpBXP9ThC8hReiihWDmld9lzFdLwKZPF7jl4u03a2WQZ6j-wMHvLtOBcLDiKwEaeWaGp8v_YS3j4iGqkcAytf7z_-toD1O3ZHtIUlbe6H64IAVPKadN1Y1SD49Ouk1fk8xDFr7HQ4RuDTLfZnLGzC4vvzysCmJEX837Wjf2f9WdirEaKxoSlDDJKilt--20Ota-5CTimD8u0SttC6CD1Glj8bbAS8ddCAfVirDJty7FW3eyALvAHifKqzRa1kBDPHb305q91oSWYdzBKIlTinN9BAXDc3ZccVkWM6Y3VgUzh2iQwM0lKadts7OMwqhLDk7rukAXHRUpKxy-85rUf-a0oz41s69PXdQteoh559vEb0uyrq0kOnI1RnuJ7MaEGDC25Kfezumo0snwYRmQhXMPMeKkxBKxs9ZydKxxcp1qtLwFyHA6MhZuXRpZM9Qse9mqovNdHHOhAQIZu3J7HJusuVdg3SJhZkTH__gXpCc2hBeOpR0rPc6qZm7z2nU5pJQ2XgzH2TUm6psA",
"ims_user_id": 8873576,
"token_type": "Bearer",
"expires_in": 3600,
"expiration": 1614259586,
"refresh_token_expiration": 1616847976,
"scope": "ibm openid containers-kubernetes"
}
In addition, the following approach works, but the token is obtained through the OpenShift web console and thus cannot be obtained programmatically (at least I don't see how):
"Authorization: Bearer sha256~6V_OvZ5OoV8vnHF33Es5qsloAY-iXkLQ8dfl_Nsyn94"
Thanks!
You cannot and should not send the ID token to get access to APIs; it is only meant to be used by the client that did the initial authentication. It also typically has a very short lifetime (like 5 minutes in some implementations).
The only purpose of the ID token is basically to create the local user session.
On the page you refer to, it says at the end:
ID token: Every IAM ID token that is issued via the CLI expires after one hour. When the ID token expires, the refresh token is sent to the token provider to refresh the ID token. Your authentication is refreshed, and you can continue to run commands against your cluster.
It sounds like they mean the access token; in OpenID Connect you don't renew your ID token (as far as I am aware).
I have been busy in the past few days, so I will share how I solved this problem here. It didn't actually address the original issue, but it is another way to achieve the goal.
It turned out that there was another doc describing how the access token can be obtained (yes, as mentioned by @Tore Nestenius, it should be an access token instead of an ID token). The token described there is actually the same as what one would get through the OpenShift web console, and it basically has nothing to do with the link I shared in the question.
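For completeness, a hedged sketch of using such a bearer token against the cluster; oc whoami -t assumes an existing oc login session, and the endpoint URL is a placeholder for your cluster's API server:
# print the current session's bearer token (the same "sha256~..." form
# as the one copied from the web console)
oc whoami -t

# call the API server with it
curl -H "Authorization: Bearer $(oc whoami -t)" \
  https://<cluster-endpoint>/version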

Keycloak Gatekeeper always fail to validate 'iss' claim value

Adding match-claims to the configuration file doesn't seem to do anything: Gatekeeper always throws the same error when opening a resource (with or without the property).
My Keycloak server is inside a docker container, accessible from an internal network as http://keycloak:8080 while accessible from the external network as http://localhost:8085.
I have Gatekeeper connecting to the Keycloak server in an internal network. The request comes from the external one, therefore, the discovery-url will not match the 'iss' token claim.
Gatekeeper is trying to use the discovery-url as the 'iss' claim. To override this, I'm adding the match-claims property as follows:
discovery-url: http://keycloak:8080/auth/realms/myRealm
match-claims:
  iss: http://localhost:8085/auth/realms/myRealm
The logs look like:
On startup
keycloak-gatekeeper_1 | 1.5749342705316222e+09 info token must contain
{"claim": "iss", "value": "http://localhost:8085/auth/realms/myRealm"}
keycloak-gatekeeper_1 | 1.5749342705318246e+09 info keycloak proxy service starting
{"interface": ":3000"}
On request
keycloak-gatekeeper_1 | 1.5749328645243566e+09 error access token failed verification
{ "client_ip": "172.22.0.1:38128",
"error": "oidc: JWT claims invalid: invalid claim value: 'iss'.
expected=http://keycloak:8080/auth/realms/myRealm,
found=http://localhost:8085/auth/realms/myRealm."}
This ends up in a 403 Forbidden response.
I've tried it on Keycloak Gatekeeper 8.0.0 and 5.0.0, both with the same issue.
Is this supposed to work the way I'm trying to use it?
If not, what am I missing? How can I validate the 'iss' claim or bypass this validation (preferably the former)?
It is failing during discovery data validation - your setup violates the OIDC specification:
The issuer value returned MUST be identical to the Issuer URL that was directly used to retrieve the configuration information. This MUST also be identical to the iss Claim value in ID Tokens issued from this Issuer.
It is a MUST, so you can't disable it (unless you want to hack the source code - it should be in the coreos/go-oidc library). Configure your infrastructure properly (e.g. use the same DNS name for Keycloak on the internal and external networks, rewrite content for internal network requests, ...) and you will be fine.
Change the DNS name to host.docker.internal:
token endpoint: http://host.docker.internal/auth/realms/example-realm/protocol/openid-connect/token
issuer URL in your property file: http://host.docker.internal/auth/realms/example-realm
This way, both outside-world access and internal calls to Keycloak go through the same name, so the 'iss' claim matches.
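Applied to the Gatekeeper example above, that collapses both views of Keycloak into one discovery URL. A minimal sketch, assuming Keycloak is reachable as host.docker.internal:8085 from both networks:
discovery-url: http://host.docker.internal:8085/auth/realms/myRealm
# no match-claims override needed: the discovery URL and the token's
# 'iss' claim now agree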

PUT Object to AWS S3 via HTTP through VPC Endpoint with proper ACL?

I am using an HTTPS client to PUT an object to Amazon S3 from an EC2 instance within a VPC that has an S3 VPC Endpoint configured. The target Bucket has a Bucket Policy that only allows access from specific VPCs, so authentication via IAM is impossible; I have to use HTTPS GET and PUT to read and write Objects.
This works fine as described, but I'm having trouble with the ACL that gets applied to the Object when I PUT it to the Bucket. I've played with setting a Canned ACL using HTTP headers like the following, but neither results in the correct behavior:
x-amz-acl: private
If I set this header, the Object is private, but it can only be read by the account's root user, so this is no good. Others need to be able to access this Object via HTTPS.
x-amz-acl: bucket-owner-full-control
I fully expected this canned ACL to do the trick; however, it resulted in unexpected behavior, namely that the Object became world-readable! I'm also not sure how the Owner of the Object was decided, since it was created via unauthenticated HTTPS; in the console the owner is listed as a seemingly random value. This is the documentation description:
Both the object owner and the bucket owner get FULL_CONTROL over the object. If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
This is totally baffling me because, according to the Bucket Policy, only network resources of approved VPCs should even be able to list the Object, let alone read it! Perhaps it has to do with the union of the ACL and the Bucket Policy and I'm just not seeing something.
Either way, maybe I'm going about this all wrong anyway. How can I PUT an object to S3 via HTTPS and set the permissions on that object to match the Bucket Policy, or otherwise make the Bucket Policy authoritative over the ACL?
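For concreteness, the kind of request being made looks roughly like this (bucket and object names hypothetical):
curl -X PUT --upload-file ./report.json \
  -H "x-amz-acl: bucket-owner-full-control" \
  https://my-bucket.s3.amazonaws.com/report.json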
Here is the Bucket Policy for good measure:
{
  "Version": "2008-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": "*",
      "Action": [
        "s3:AbortMultipartUpload",
        "s3:DeleteObject",
        "s3:DeleteObjectVersion",
        "s3:GetObject",
        "s3:GetObjectTagging",
        "s3:GetObjectTorrent",
        "s3:GetObjectVersion",
        "s3:GetObjectVersionTagging",
        "s3:GetObjectVersionTorrent",
        "s3:ListBucket",
        "s3:ListBucketMultipartUploads",
        "s3:ListBucketVersions",
        "s3:ListMultipartUploadParts",
        "s3:PutObject",
        "s3:PutObjectTagging"
      ],
      "Resource": [
        "arn:aws:s3:::my-bucket",
        "arn:aws:s3:::my-bucket/*"
      ],
      "Condition": {
        "StringEquals": {
          "aws:SourceVpc": "vpc-12345678"
        }
      }
    }
  ]
}
S3 ACLs and bucket policies are evaluated together, and a grant from either is enough to allow a request.
Your bucket policy only specifies ALLOW for the specified VPC. No one else is granted ALLOW access by it, but that is NOT the same as denying access.
This means that your bucket or object ACL must be what is granting the access you are seeing.
In the S3 console, double-check who the file owner is after the PUT.
Double-check the ACL for the bucket. What rights have you granted at the bucket level?
Double-check the rights that you are using for the PUT operation. Unless you have granted public write access or the PUT is being allowed by the bucket policy, the PUT must be using a signature. That signature determines the permissions for the PUT operation and who owns the file afterwards, based on the access key used to sign it.
Your x-amz-acl should contain bucket-owner-full-control.
[EDIT after numerous comments below]
The problem that I see is that you are approaching security the wrong way in your example. I would not use the bucket policy. Instead, I would create an IAM role and assign that role to the EC2 instances that are writing to the bucket. The PUTs are then signed with the IAM role's credentials, which preserves the ownership of the objects. You can then have the ACL be bucket-owner-full-control and public-read (or any supported ACL permissions that you want).
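With that setup, the upload can be a signed AWS CLI call from the instance; a sketch with hypothetical bucket and key names (the instance profile supplies the credentials automatically):
aws s3api put-object \
  --bucket my-bucket \
  --key reports/report.json \
  --body ./report.json \
  --acl bucket-owner-full-control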

Using passport-http on Hyperledger composer REST API

I would like to know if it is possible to use passport-http to secure the REST API of Hyperledger Composer generated with composer-rest-server, and what the export COMPOSER_PROVIDERS='{}' configuration would be.
The idea is to use the identities previously generated and assigned to participants with Composer to authenticate the GET and POST requests on the API.
If it is possible, how would the userID and userSecret be passed: as a special HTTP header, in the body, or as a simple basic auth header?
I've not tried it, but it should be possible. The Composer REST server uses the open-source Passport authentication middleware, so it's a matter of configuration. Multiple Passport strategies can be selected, allowing clients of the REST server to select a preferred authentication mechanism.
The strategy for passport-http is here -> https://github.com/jaredhanson/passport-http
You can try something like:
export COMPOSER_PROVIDERS='{
  "basic": {
    "provider": "basic",
    "module": "passport-http",
    "clientID": "REPLACE_WITH_CLIENT_ID",
    "clientSecret": "REPLACE_WITH_CLIENT_SECRET",
    "authPath": "/auth/local",
    "callbackURL": "/auth/local/callback",
    "successRedirect": "/",
    "failureRedirect": "/login"
  }
}'
I assume you know how to configure your passport-http strategy.
Also check out RESTful Node.js Application with passport-http, which has an example (right near the end) of an app consuming REST endpoints.
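As for how the credentials are passed: with an HTTP Basic strategy they travel in a standard Basic Authorization header, not in the body. A hedged sketch against a locally running composer-rest-server (port, user ID and secret are hypothetical):
# Basic auth sends "Authorization: Basic base64(user:secret)"
curl -u alice:alice-secret http://localhost:3000/api/system/ping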