Google Cloud Run - service-to-service 403 with a valid access token and the correct invoker role

I get a 403 calling a Cloud Run service from another Cloud Run service.
The only relevant SO question I could find is this one, but neither the checks in the accepted answer nor the OP's own solution apply to my use case.
Use case:
Service A (the called service):
exposes public routes to the internet (e.g. /public/...)
exposes private routes (e.g. /internal/...)
exposes a custom 403 route /403 with a JSON response
I have a subdomain configured using a Load Balancer with two rules:
traffic to this subdomain that targets the internal routes is redirected to /403
all the other traffic to this subdomain is redirected to my Cloud Run Service
My service is deployed with this config:
Ingress Control is set to internal, with Allow traffic from external HTTP(S) load balancer flagged
A VPC is configured, with Route all traffic through VPC set to true (the same VPC is used by all my services)
Authentication setting is Allow unauthenticated invocations
Service B's service account (the caller) is listed in Service A's permission tab under the Cloud Run Invoker role (same as PubSub's service account)
With this configuration:
Traffic from the internet is served correctly through the subdomain
PubSub can call Service A using the computeMetadata bearer token
Service B gets a 403 using the https://xxx.a.run.app url
Service B obtains a valid token from the compute metadata server (the JWT is valid and its claims are coherent), but a 403 with this body is returned:
<html><head>
<meta http-equiv="content-type" content="text/html;charset=utf-8">
<title>403 Forbidden</title>
</head>
<body text=#000000 bgcolor=#ffffff>
<h1>Error: Forbidden</h1>
<h2>Access is forbidden.</h2>
<h2></h2>
</body></html>
I am sure this is Google's 403 and not my custom one, because it is not a JSON response and the request never appears in Service A's logs.
I tried allowing all unauthenticated traffic and the internal URLs are served correctly, but I cannot leave the internal URLs exposed.
Does anyone have a suggestion on what to check or how to debug this situation?
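For completeness, Service B fetches the token and calls Service A roughly like this (a minimal sketch, assuming a Node.js 18+ caller with the global fetch API; the URL and function names are placeholders, not my actual code):

// Placeholder for Service A's https://xxx.a.run.app URL.
const SERVICE_A_URL = 'https://xxx.a.run.app';

async function callServiceA(path: string) {
  // Ask the metadata server for an identity token with Service A's URL as the audience.
  const tokenUrl =
    'http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/identity' +
    '?audience=' + encodeURIComponent(SERVICE_A_URL);
  const token = await fetch(tokenUrl, { headers: { 'Metadata-Flavor': 'Google' } })
    .then((res) => res.text());

  // Call the private route with the identity token as a Bearer token.
  return fetch(SERVICE_A_URL + path, {
    headers: { Authorization: 'Bearer ' + token },
  });
}

// Example: callServiceA('/internal/some-route').then((res) => console.log(res.status));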

Related

CloudFront and ALBs - redirecting an HTTP request for a URL that is not on the SSL certificate (HTTP, not HTTPS)

I have an ALB set up behind a CloudFront distribution. I have a rule that redirects an HTTP request for URL A to URL B, which is not on AWS infrastructure.
When I query the ALB directly for URL A, the load balancer properly redirects to URL B. When I query a CloudFront endpoint for URL A, I get a 403 error back. Per the AWS doc on troubleshooting 403s, it seems the issue is that I don't have an alternate CNAME configured for URL B. However, since it's not on the SAN certificate associated with my CloudFront distribution, I can't add it to the list of alternate CNAMEs. Is there a workaround to allow requests to URL A to travel through my CloudFront distribution properly and get redirected? It doesn't make sense to me that I can't do this for an HTTP request.
verified that the ALB can be queried directly and the redirect works
tried to add an alternate CNAME for the HTTP domain
removed the web ACL on the ALB to make sure that wasn't blocking it

AWS API Gateway responds with 403 when first going through Alert Logic WAF

I've seen a lot of questions on this topic, but none had answers that worked for my particular situation.
Context
I have a domain name foo.bar.com mapped in Route 53 to an Application Load Balancer in a VPC
The ALB routes to the WAF in my Alert Logic instance, hosted in the same VPC
I have a "website" in Alert Logic that points to xyz.execute-api.us-east-1.amazonaws.com via HTTPS over port 443
I have an API defined in API Gateway with an Invoke URL the same as above xyz.execute-api.us-east-1.amazonaws.com
My API has a route /hello with an Integration that points to an internal Application Load Balancer in the same VPC and subnets as everything mentioned above
Problem
Doing a GET request to https://xyz.execute-api.us-east-1.amazonaws.com succeeds from Postman while connected to the VPN for the given VPC
Doing a GET request to foo.bar.com fails from Postman - whether or not I am connected to the VPN - with a status code of 403, a body of { "message": "Forbidden" }, and an x-amzn-ErrorType of ForbiddenException
QUESTION: What am I missing?

Keycloak behind a Load Balancer with SSL gives a "Mixed Content" error

I have set up Keycloak (docker container) on a GCP Compute Engine VM. After setting sslRequired=none, I'm able to access Keycloak over a public IP (e.g. http://33.44.55.66:8080) and manage the realm.
I have configured the GCP Classic (HTTPS) Load Balancer and added two front ends as described below. The Load Balancer forwards requests to the Keycloak instance on the VM.
HTTP: http://55.44.33.22/keycloak
HTTPS: https://my-domain.com/keycloak
In the browser, the HTTP URL works fine and I'm able to login to Keycloak and manage the realm. However, for the HTTPS URL, I get the below error
Mixed Content: The page at 'https://my-domain.com/auth/admin/master/console/' was loaded over HTTPS, but requested an insecure script 'http://my-domain.com/auth/js/keycloak.js?version=gyc8p'. This request has been blocked; the content must be served over HTTPS.
Note: I tried this suggestion, but it didn't work
Can anyone help with this, please?
I would never expose Keycloak over the plain http protocol. The Keycloak admin console itself is secured via the OIDC protocol, and OIDC requires the use of https. So the default sslRequired=EXTERNAL is a safe and smart configuration option from the vendor.
SSL offloading must be configured properly:
Keycloak container with PROXY_ADDRESS_FORWARDING=true
load balancer/reverse proxy (nginx, GCP Classic Load Balancer, AWS ALB, ...) with proper X-Forwarded-* request header configuration, so the Keycloak container knows the correct protocol and domain used by the users (see the sketch below)
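For illustration, this is roughly what the reverse proxy in front of Keycloak has to do (a minimal sketch using Node's built-in http module as a stand-in for nginx or the load balancer; the upstream address and domain are hypothetical):

import http from 'node:http';

// Hypothetical upstream: the Keycloak container reachable on the private network.
const KEYCLOAK_HOST = '10.0.0.5';
const KEYCLOAK_PORT = 8080;

// TLS is terminated before this point; the proxy forwards plain HTTP but tells
// Keycloak which protocol and host the user actually used via X-Forwarded-* headers.
http.createServer((req, res) => {
  const upstream = http.request(
    {
      host: KEYCLOAK_HOST,
      port: KEYCLOAK_PORT,
      path: req.url,
      method: req.method,
      headers: {
        ...req.headers,
        'X-Forwarded-Proto': 'https',        // scheme the user's browser used
        'X-Forwarded-Host': 'my-domain.com', // original Host header
        'X-Forwarded-For': req.socket.remoteAddress ?? '',
      },
    },
    (upstreamRes) => {
      res.writeHead(upstreamRes.statusCode ?? 502, upstreamRes.headers);
      upstreamRes.pipe(res);
    }
  );
  req.pipe(upstream);
}).listen(80);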

How to use JWT Auth0 token for Cloud Run Service to Service communication if the Metaserver Token is overriding the Auth0 Token

Prerequisites
I have two Cloud Run services, a frontend and a backend. The frontend is written in Vue.js/Nuxt.js and therefore uses a Node backend. The backend is written in Kotlin with Spring Boot.
Problem
To have authenticated internal communication between the frontend and the backend I need to use a token that is fetched from the Google metadata server. This is documented here: https://cloud.google.com/run/docs/authenticating/service-to-service#java
I did set it all up and it works.
For my second layer of security I integrated the Auth0 authentication provider both in my frontend and my backend. In my frontend a user can log in. The frontend is calling the backend API. Since only authorized users should be able to call the backend I integrated Spring Security to secure the backend API endpoints.
Now the backend verifies that the token on the caller's request is valid before letting it through to the API logic.
However, this theory does not work, simply because I delegate the API calls through the Node backend proxy. The proxy logic already applies a token to the request to the backend: the Google metadata server token. Let me illustrate that:
Client (Browser) -> API Request with Auth0 Token -> Frontend Backend Proxy -> Overriding Auth0 Token with Google Metaserver Token -> Calling Backend API
Since the backend receives the metadata server token instead of the Auth0 token, it can never successfully authorize the API call.
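In code, the proxy step described above looks roughly like this (a sketch; the backend URL and function names are hypothetical):

// Placeholder for the backend service's run.app URL.
const BACKEND_URL = 'https://backend-xxx.a.run.app';

// auth0Token is received here but never forwarded to the backend.
async function proxyToBackend(path: string, auth0Token: string) {
  // The proxy fetches a Google identity token for the backend ...
  const tokenUrl =
    'http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/identity' +
    '?audience=' + encodeURIComponent(BACKEND_URL);
  const googleToken = await fetch(tokenUrl, {
    headers: { 'Metadata-Flavor': 'Google' },
  }).then((r) => r.text());

  // ... and puts it in the Authorization header, replacing the caller's Auth0 token,
  // so the backend only ever sees the metadata server token.
  return fetch(BACKEND_URL + path, {
    headers: { Authorization: 'Bearer ' + googleToken },
  });
}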
Question
Given that I was not able to find any articles about this problem, I wonder whether I am simply doing it fundamentally wrong.
What do I need to do to have a valid Cloud Run Service to Service communication (guaranteed by the metaserver token) but at the same time have a secured backend API with Auth0 authorization?
I see two workarounds to make this happen:
Authorize the API call in the Node backend proxy logic
Make the backend service publicly available, so the metadata server token is unnecessary
I don't like either of the above - especially the latter. I would really like to get it working with my current setup, but I have no idea how. There is no such thing as multiple authorization tokens, right?
OK, I figured out a third way to have de facto internal service-to-service communication.
To omit the metadata server token authentication but still restrict access from the internet, I changed my backend Cloud Run service as follows: authentication is set to allow unauthenticated invocations, and ingress is set to allow internal traffic only.
This makes the service reachable without IAM, but the ingress setting prevents any outsider from accessing it; only internal traffic gets through.
So my frontend now calls the backend API via the Node backend proxy. Even though the frontend's Node backend and the backend service are both somewhat "in the cloud", they do not share the same "internal network". In fact, the frontend's Node backend requests would go out via egress to the internet and call the backend service just like any other internet user would.
To make the traffic look "like it is coming from inside", you have to do something similar to a VPN, but here it is called a VPC (Virtual Private Cloud). Luckily that is very simple: just create a VPC connector in GCP.
BUT be aware that you need to create a so-called Serverless VPC Access connector, explained here: https://cloud.google.com/vpc/docs/serverless-vpc-access
After the Serverless VPC Access connector has been created, you can select it in your Cloud Run service's "Connections" settings. For the backend service it can simply be selected. For the frontend service, however, it is important to select the second egress option, i.e. routing all traffic through the VPC connector.
At least that is important in my case, since I am calling the backend service by its assigned service URL instead of a private IP.
After all that is done, the JWT from the frontend is successfully delivered to the backend API without being overwritten by a metadata server token.
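With that setup the proxy no longer fetches a Google token at all and can pass the caller's Auth0 token through unchanged (a sketch, same hypothetical names as before):

// Placeholder for the backend service's run.app URL; the request now reaches it
// through the Serverless VPC Access connector, so no IAM token is needed.
const BACKEND_URL = 'https://backend-xxx.a.run.app';

// The Auth0 token from the browser is forwarded as-is and can be verified
// by Spring Security on the backend.
async function proxyToBackend(path: string, auth0Token: string) {
  return fetch(BACKEND_URL + path, {
    headers: { Authorization: 'Bearer ' + auth0Token },
  });
}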

API Gateway Proxy to VPC Link

I am trying to use API Gateway to route traffic to an internal network load balancer.
All routes to the base path (/) are working, so I know the VPC Link is up and reachable.
I added a proxy resource (/{proxy+}), with ANY http method. In the ANY "Integration Request" I selected:
Integration Type: VPC Link
Use Proxy Integration
Method: ANY
VPC Link: My-VPC-link (abcdefg)
Endpoint URL: (e.g. http://abcd1234.cloudfront.net/{proxy})
I can see that my web server responds with a redirect:
(b9d0c629-31ec-11e8-b452-0f13c3c62b81) Endpoint response body before
transformations: <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">
<title>Redirecting...</title>
<h1>Redirecting...</h1>
<p>You should be redirected automatically to target URL: http://abcd1234.cloudfront.net/api/. If not click the link.
(b9d0c629-31ec-11e8-b452-0f13c3c62b81) Method completed with status: 301
The web page shows:
{"message":"Forbidden"}
Also, if I try to directly link to the CloudFront URL I get the same error.
In addition to the CloudFront URL, I've also tried the following:
Custom Domain Name
403 Forbidden
The URL of my deployed stage
{"message": "Internal server error"}
What URL should be in the 'Endpoint URL' field in the integration request?
It turns out that API Gateway must call the VPC Link endpoint URL over 'http', not 'https'.
Everything seems to be in order in your configuration of API Gateway, including the endpoint URL.
My guess is that something is off with the redirections. Is it possible that your server is redirecting to the same place again and again, or that the redirection rules always apply?
To be sure whether the problem is on your server's side, try a simpler setup: make an API call to an endpoint that doesn't redirect and just returns a simple response, as in the sketch below.
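For example, a throwaway test endpoint with no redirects could look like this (a sketch in Node; the port is arbitrary):

import http from 'node:http';

// Minimal endpoint: always answers 200 with a small JSON body and never redirects.
// Point the API Gateway proxy integration at this while debugging.
http.createServer((req, res) => {
  res.writeHead(200, { 'Content-Type': 'application/json' });
  res.end(JSON.stringify({ ok: true, path: req.url }));
}).listen(8080);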