I am using AWS Cognito Federated Identities to map tokens from arbitrary identity providers to Session Tokens and Temporary Credentials. But it is critical that we obtain the issuer and subject claims for each of these identities within either API Gateway or our target microservices.
To this end, I am trying to extract the issuer and subject claims from an AWS Session Token using API Gateway Mapping Templates. All APIs are signed with AWS Signature Version 4. To be clear, very few of our identities come from AWS Cognito User Pools; most come from the various trusted identity providers we have configured in AWS Cognito Federated Identities.
I am referring to the following page for instruction:
https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-mapping-template-reference.html
What I have seen thus far:
$context.authorizer.claims.property: only useful for identities from Cognito User Pools
$context.identity.cognitoIdentityId: gets the Cognito Federated Identities key for an identity
Am I missing something?
Is it possible to extract issuer/subject from Mapping Template Context for an arbitrary identity?
Alternatively, is it possible to query Cognito Federated Identities for issuer/subject using a Cognito identity ID obtained via $context.identity.cognitoIdentityId?
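For reference, this is the kind of lookup I have in mind for that second alternative (a minimal sketch, assuming boto3 and credentials allowed to call cognito-identity:DescribeIdentity); as far as I can tell it only returns the linked provider names, not the original subject claim:

```python
# Sketch: look up the providers linked to a Cognito Federated Identities ID.
# The region and identity ID below are placeholders.
import boto3

cognito_identity = boto3.client("cognito-identity", region_name="us-east-1")

def describe_federated_identity(identity_id: str) -> dict:
    """Describe an identity such as the one in $context.identity.cognitoIdentityId."""
    response = cognito_identity.describe_identity(IdentityId=identity_id)
    # 'Logins' lists the provider names linked to this identity (e.g. an OIDC issuer),
    # but it does not expose the subject claim from the provider's original token.
    return {
        "identity_id": response["IdentityId"],
        "providers": response.get("Logins", []),
    }

# print(describe_federated_identity("us-east-1:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"))
```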
Feedback is much appreciated,
Thanks,
Randy
Related
I am trying to figure out a way to authenticate my service A that calls my service B.
Both are hosted in AWS ECS so I assume service A has an IAM role it is running under which can be used to authenticate it.
My service B (ASP.NET Core 6) application already uses Cognito JWT tokens for authentication.
Question: Is it possible to get a JWT token without using Cognito for my service A (for M2M authentication/authorization)?
I am quite new to Kubernetes and I am looking into certificate-based authentication and token-based authentication for calling the Kubernetes APIs. To my understanding, the token-based approach (OpenID Connect + OAuth 2.0) is better, since the id_token gets refreshed by the refresh_token at a certain interval, and it also works well with a browser-based login flow, which is not the case with the certificate-based approach. Any more thoughts on this? I am working with minikube. Can anyone share their thoughts here?
Prefer OpenID Connect or X509 Client Certificate-based authentication strategies over the others when authenticating users
X509 client certs: decent authentication strategy, but you'd have to address renewing and redistributing client certs on a regular basis
Static Tokens: avoid them due to their non-ephemeral nature
Bootstrap Tokens: same as static tokens above
Basic Authentication: avoid them due to credentials being transmitted over the network in cleartext
Service Account Tokens: should not be used for end-users trying to interact with Kubernetes clusters, but they are the preferred authentication strategy for applications & workloads running on Kubernetes
OpenID Connect (OIDC) Tokens: best authentication strategy for end users, as OIDC integrates with your identity provider (e.g. AD, AWS IAM, GCP IAM, etc.)
I advise you to use OpenID Connect. OpenID Connect is based on OAuth 2.0; however, it is designed with more of an authentication focus in mind. The explicit purpose of OIDC is to generate what is known as an id-token. The normal process of generating these tokens is much the same as it is in OAuth 2.0.
OIDC brings us a step closer to providing users with a friendly login experience and also allows us to start restricting their access using RBAC.
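As an illustration, here is a minimal sketch (assuming the official kubernetes Python client, an id_token already obtained from your OIDC provider, and an API server configured with the matching --oidc-* flags) of presenting the token to the API server as a bearer token:

```python
# Sketch: call the Kubernetes API using an OIDC id_token as a bearer token.
# The host, CA path and token below are placeholders.
from kubernetes import client

configuration = client.Configuration()
configuration.host = "https://your-apiserver:6443"
configuration.ssl_ca_cert = "/path/to/cluster-ca.crt"
configuration.api_key = {"authorization": "Bearer " + "<id_token from your OIDC provider>"}

v1 = client.CoreV1Api(client.ApiClient(configuration))
for pod in v1.list_namespaced_pod(namespace="default").items:
    print(pod.metadata.name)
```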
Also take a look at Dex, which acts as a middleman in the authentication chain. It becomes the Identity Provider and issuer of ID tokens for Kubernetes but does not itself have any sense of identity. Instead, it allows you to configure an upstream Identity Provider to provide the users' identity.
As well as any OIDC provider, Dex supports sourcing user information from GitHub, GitLab, SAML, LDAP and Microsoft. Its provider plugins greatly increase the potential for integrating with your existing user management system.
Another advantage that Dex brings is the ability to control the issuance of ID tokens, specifying the lifetime for example. It also makes it possible to force your organization to re-authenticate. With Dex, you can easily revoke all tokens, but there is no way to revoke a single token.
Dex also handles refresh tokens for users. When a user logs in to Dex they may be granted an id-token and a refresh token. Programs such as kubectl can use these refresh tokens to re-authenticate the user when the id-token expires. Since these tokens are issued by Dex, this allows you to stop a particular user from refreshing by revoking their refresh token. This is really useful in the case of a lost laptop or phone.
Furthermore, by having a central authentication system such as Dex, you need only configure the upstream provider once.
An advantage of this setup is that if any user wants to add a new service to the SSO system, they only need to open a PR to Dex configuration. This setup also provides users with a one-button “revoke access” in the upstream identity provider to revoke their access from all of our internal services. Again this comes in very useful in the event of a security breach or lost laptop.
You can find more information here: kubernetes-single-sign-one-less-identity/, kubernetes-security-best-practices.
I created an AWS API Gateway API set with authentication = AWS_IAM to call a Lambda function. Now, to call this API I understand that I need to sign the request, and as stated in the AWS documentation the correct way is to add an Authorization header calculated using AWS Signature V4, which needs an access_key and a secret_key.
On my client side the user authenticates with AWS Cognito first and receives the JWT tokens (ID token, access token and refresh token), but I cannot find the access_key/secret_key in them. How can I calculate the AWS Signature V4 from the tokens received from AWS Cognito?
I believe you can't (with 99.99999% certainty)!
Please confirm that you are authenticating your users with AWS Cognito User Pool. You probably are because Cognito User Pool is the service that provides JWT. In this case, the token will assure the service that receives it (API Gateway) that your user is registered in a specific identity directory (User Pool). Your service should evaluate if it will provide access or not to its resources for users registered in this specific directory with the provided claims (groups, roles, etc).
When you secure your API Gateway endpoints with AWS_IAM you are saying that only identities that AWS can recognize inside its own identity directory (Users or Roles) are allowed to perform actions on the resource. In general, users registered in Cognito User Pools are not considered by AWS as valid identities.
For a Cognito User Pool user to be considered a valid AWS identity, you have two options:
1 - Configure your AWS account to use external Identity Providers and Federation. Not a simple thing and a solution to a different use case. In summary, don't choose this one.
2 - Use another AWS product (with a name that creates a lot of confusion) called Cognito Identity Pool. This service evaluates whether the JWT token is allowed in that context (you configure it inside the Identity Pool). If it is a valid token from a registered identity directory, Cognito Identity Pool will exchange your JWT token for an AWS Access Key, AWS Secret Key and AWS Session Token associated with a specific IAM Role. You can then use these keys to sign your request, as in the sketch below. But keep in mind that with this change you will lose your capacity to identify the specific user in API Gateway and in the downstream services called by API Gateway.
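A minimal sketch of that exchange (assuming boto3, an Identity Pool already configured to trust your User Pool, and the id_token returned by Cognito; the pool ID, provider name, region and URL are placeholders), followed by signing the API Gateway request with botocore's SigV4Auth:

```python
# Sketch: exchange a Cognito User Pool id_token for temporary AWS credentials
# via a Cognito Identity Pool, then SigV4-sign a request to API Gateway.
import boto3
from botocore.auth import SigV4Auth
from botocore.awsrequest import AWSRequest
from botocore.credentials import Credentials

REGION = "us-east-1"
IDENTITY_POOL_ID = "us-east-1:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
USER_POOL_PROVIDER = "cognito-idp.us-east-1.amazonaws.com/us-east-1_XXXXXXXXX"

def credentials_from_id_token(id_token: str) -> Credentials:
    ci = boto3.client("cognito-identity", region_name=REGION)
    identity = ci.get_id(IdentityPoolId=IDENTITY_POOL_ID,
                         Logins={USER_POOL_PROVIDER: id_token})
    creds = ci.get_credentials_for_identity(
        IdentityId=identity["IdentityId"],
        Logins={USER_POOL_PROVIDER: id_token})["Credentials"]
    return Credentials(access_key=creds["AccessKeyId"],
                       secret_key=creds["SecretKey"],
                       token=creds["SessionToken"])

def signed_headers(credentials: Credentials, url: str) -> dict:
    """Return the SigV4 headers (including Authorization) for a GET request."""
    request = AWSRequest(method="GET", url=url)
    SigV4Auth(credentials, "execute-api", REGION).add_auth(request)
    return dict(request.headers)

# headers = signed_headers(credentials_from_id_token(id_token),
#                          "https://abc123.execute-api.us-east-1.amazonaws.com/prod/resource")
```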
If you need to have the JWT token in your downstream services, you can do it with a little bit of additional effort. You can find a way here: https://stackoverflow.com/a/57961207/6471284
I want to set up basic authentication in Kubernetes. Every document says that I should create a CSV or static file and enter the username and password in it, but I do not want to use a file; I want a database or Kubernetes itself to handle it.
What can I do for basic authentication?
You can base your authentication on tokens if you don't want to use a static password file.
First option:
Service Account Tokens
A service account is an automatically enabled authenticator that uses signed bearer tokens to verify requests.
The plugin uses two optional flags, --service-account-key-file and --service-account-lookup.
Service accounts are usually created automatically by the API server and associated with pods running in the cluster through the ServiceAccount Admission Controller. Bearer tokens are mounted into pods at well-known locations, and allow in-cluster processes to talk to the API server. Accounts may be explicitly associated with pods using the serviceAccountName field of a PodSpec.
Service account bearer tokens are perfectly valid to use outside the cluster and can be used to create identities for long standing jobs that wish to talk to the Kubernetes API. To manually create a service account, simply use the kubectl create serviceaccount (NAME) command. This creates a service account in the current namespace and an associated secret.
The created secret holds the public CA of the API server and a signed JSON Web Token (JWT).
The signed JWT can be used as a bearer token to authenticate as the given service account. See above for how the token is included in a request. Normally these secrets are mounted into pods for in-cluster access to the API server, but can be used from outside the cluster as well.
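For in-cluster use, a minimal sketch (assuming the official kubernetes Python client) of a pod authenticating with its mounted service account token:

```python
# Sketch: a pod using its mounted service account token to talk to the API server.
# load_incluster_config() reads the token and CA certificate from
# /var/run/secrets/kubernetes.io/serviceaccount/ automatically.
from kubernetes import client, config

config.load_incluster_config()
v1 = client.CoreV1Api()
for pod in v1.list_namespaced_pod(namespace="default").items:
    print(pod.metadata.name)
```

From outside the cluster, the same token can instead be supplied explicitly as a Bearer token in the Authorization header.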
There are some drawbacks: because service account tokens are stored in secrets, any user with read access to those secrets can authenticate as the service account. Be careful when granting permissions to service accounts and read capabilities for secrets.
Second option:
Install OpenID Connect (full documentation you can find here: oidc).
OpenID Connect (OIDC) is a superset of OAuth2 supported by some service providers, notably Azure Active Directory, Salesforce, and Google. The protocol’s main addition on top of OAuth2 is a field returned with the access token called an ID Token. This token is a JSON Web Token (JWT) with well known fields, such as a user’s email, signed by the server.
To identify the user, the authenticator uses the id_token (not the access_token) from the OAuth2 token response as a bearer token.
Since all of the data needed to validate who you are is in the id_token, Kubernetes doesn’t need to “phone home” to the identity provider. In a model where every request is stateless this provides a very scalable solution for authentication.
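As a small illustration (assuming PyJWT is installed; signature verification is skipped here purely for inspection, whereas the API server always verifies the token against the provider's public keys), the well-known fields can be viewed like this:

```python
# Sketch: inspect the well-known claims inside an OIDC id_token.
import jwt  # PyJWT

def inspect_id_token(id_token: str) -> None:
    # verify_signature=False: inspection only, never do this for real authentication.
    claims = jwt.decode(id_token, options={"verify_signature": False})
    for field in ("iss", "sub", "email", "exp"):
        print(field, "=", claims.get(field))
```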
Kubernetes has no “web interface” to trigger the authentication process. There is no browser or interface to collect credentials which is why you need to authenticate to your identity provider first.
There’s no easy way to authenticate to the Kubernetes dashboard without using the kubectl proxy command or a reverse proxy that injects the id_token.
You can find more information here: kubernetes-authentication.
I am trying to download a file from a Google Cloud Storage bucket via the REST API. But if I use the access_token of the OAuth 2.0 client which I have created, I get an "Insufficient Permission" error (it works with the access token of my Google account).
So, where in the Cloud Platform can I grant the OAuth 2.0 client access to the bucket from which I want to download the file?
Thx
TL;DR - You're most likely missing the step where you request the right scopes when requesting your OAuth2.0 access token. Please look at the supported scopes with Google Cloud Storage APIs. Access tokens typically expire in 60 minutes and you will need to use a refresh token to get a new access token when it expires.
Please read the Google Cloud Storage Authentication page for detailed information.
Scopes
Authorization is the process of determining what permissions an authenticated identity has on a set of specified resources. OAuth uses scopes to determine if an authenticated identity is authorized. Applications use a credential (obtained from a user-centric or server-centric authentication flow) together with one or more scopes to request an access token from a Google authorization server to access protected resources.

For example, application A with an access token with read-only scope can only read, while application B with an access token with read-write scope can read and modify data. Neither application can read or modify access control lists on objects and buckets; only an application with full-control scope can do so.
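To make this concrete, here is a minimal sketch (assuming google-auth-oauthlib and requests are installed; 'client_secret.json', the bucket and the object name are placeholders) of requesting an access token with the read-only Cloud Storage scope and downloading an object over REST:

```python
# Sketch: obtain an OAuth 2.0 access token with the devstorage.read_only scope,
# then download an object through the Cloud Storage JSON API.
import requests
from google_auth_oauthlib.flow import InstalledAppFlow

SCOPES = ["https://www.googleapis.com/auth/devstorage.read_only"]

flow = InstalledAppFlow.from_client_secrets_file("client_secret.json", scopes=SCOPES)
credentials = flow.run_local_server(port=0)  # opens a browser for user consent

url = "https://storage.googleapis.com/storage/v1/b/my-bucket/o/my-file.txt?alt=media"
response = requests.get(url, headers={"Authorization": f"Bearer {credentials.token}"})
response.raise_for_status()
with open("my-file.txt", "wb") as f:
    f.write(response.content)
```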
Authentication in Google Cloud
Google Cloud services generally provide 3 main modes of authentication:
End User Account credentials (OAuth 2.0) - here you authenticate as the end user directly using their Google account or an OAuth 2.0 access token. When requesting an access token, you will need to provide the scopes which determine which APIs are accessible to the client using that access token. These credentials, if granted the right scope, can access the user's private data. In addition, Cloud IAM lets you control fine-grained permissions by granting roles to this user account.
Service Accounts - here you create a service account which is associated with a specific GCP project (and billed to that project thereby). These are mainly used for automated use from your code or any of the Google Cloud services like Compute Engine, App Engine, Cloud Functions, etc. You can create service accounts using Google Cloud IAM.
Each service account has an associated email address (which you specify when creating the service account) and you will need to grant appropriate roles for this email address on your Cloud Storage buckets/objects. These credentials, if granted the right roles, can access the user's private data (see the sketch after this list).
API keys - here you get an encrypted string which is associated with a GCP project. They are supported by only a few Google Cloud APIs, and it is not possible to restrict the scope of API keys (unlike service accounts or OAuth 2.0 access tokens).
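For the service-account route described above, a minimal sketch (assuming the google-cloud-storage library and a downloaded key file, named 'service-account.json' here for illustration) of downloading the object:

```python
# Sketch: download an object using a service account key file.
# The key file, bucket and object names are placeholders; the service account's
# email must be granted a suitable role (e.g. Storage Object Viewer) on the bucket.
from google.cloud import storage

client = storage.Client.from_service_account_json("service-account.json")
client.bucket("my-bucket").blob("my-file.txt").download_to_filename("my-file.txt")
```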