AWS Application Load Balancer for ECS 503 error - amazon-ecs

I have an Application Load Balancer configured for AWS ECS. The health check endpoint works fine and I can see the 200 response, but for any other endpoint the load balancer returns this response:
<html>
<head>
<title>503 Service Temporarily Unavailable</title>
</head>
<body>
<center>
<h1>503 Service Temporarily Unavailable</h1>
</center>
</body>
</html>
All the other posts that I have seen point to failing health checks as the cause of this issue, but for me the health checks work perfectly fine and I can see the requests hitting the containers in the logs. Any idea why I am getting this error for the other endpoints?

Check that the traffic port and the health check port are the same.
I can see this happening if you have the correct port configured in the health check settings, but the wrong port configured for sending actual traffic to the container.
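If you want to verify the ports from the CLI, here is a minimal sketch; the target group, cluster and service names are placeholders for your own resources:
# traffic port vs. health check port of the target group
# (HealthCheckPort is often the literal string "traffic-port", meaning it follows the traffic port)
aws elbv2 describe-target-groups --names my-target-group \
  --query 'TargetGroups[].{TrafficPort:Port,HealthCheckPort:HealthCheckPort}'
# container name and port the ECS service registers with that target group
aws ecs describe-services --cluster my-cluster --services my-service \
  --query 'services[].loadBalancers'
If the health check port and the port actually receiving traffic do not line up, health checks can pass while real requests go to a port nothing is listening on, which matches the behaviour described above.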

Related

GCloud Cloud Run - service to service 403 with valid access token and correct invoker role

I get a 403 calling a Cloud Run service from another Cloud Run service.
I can only find this relevant SO question, but neither the checks in the accepted answer nor the OP's solution apply to my use case.
Use case:
Service A (the called service):
exposes public routes to the internet (e.g. /public/...)
exposes private routes (e.g. /internal/...)
exposes a custom 403 route, /403, with a JSON response
I have a subdomain configured using a Load Balancer with two rules:
traffic to this subdomain with a path under /internal is redirected to /403
all the other traffic to this subdomain is redirected to my Cloud Run Service
My service is deployed with this config:
Ingress Control is set to internal, with Allow traffic from external HTTP(S) load balancer flagged
A VPC is configured, with Route all traffic through VPC set to true (the same VPC is used by all my services)
Authentication setting is Allow unauthenticated invocations
Service B's service account (the caller) is listed in Service A's permission tab under the Cloud Run Invoker role (same as PubSub's service account)
With this configuration:
Traffic from the internet is served correctly through the subdomain
PubSub can call Service A using the computeMetadata bearer token
Service B gets a 403 using the https://xxx.a.run.app URL
Service B obtains a valid computeMetadata token (the JWT is valid, the claims are coherent), but a 403 with this body is returned:
<html><head>
<meta http-equiv="content-type" content="text/html;charset=utf-8">
<title>403 Forbidden</title>
</head>
<body text=#000000 bgcolor=#ffffff>
<h1>Error: Forbidden</h1>
<h2>Access is forbidden.</h2>
<h2></h2>
</body></html>
I am sure this is Google's 403 and not my custom 403, because it is not a JSON response and the request never shows up in Service A's logs.
I tried allowing all unauthenticated traffic, and the internal URLs are then served correctly, but I cannot expose the internal URLs.
Does anyone have a suggestion on what to check or how to debug this situation?
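For reference, here is a minimal sketch of the call Service B makes and of two things worth inspecting from the outside; service-a, the region and /internal/some-route are placeholders, and the run.app URL is the one from above:
# from inside Service B: mint an ID token for Service A's audience and call it
AUDIENCE="https://xxx.a.run.app"
TOKEN=$(curl -s -H "Metadata-Flavor: Google" \
  "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/identity?audience=${AUDIENCE}")
curl -i -H "Authorization: Bearer ${TOKEN}" "${AUDIENCE}/internal/some-route"
# from a workstation: confirm the ingress setting and the invoker bindings on Service A
gcloud run services describe service-a --region europe-west1
gcloud run services get-iam-policy service-a --region europe-west1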

IBM Cloud: Kubernetes add-on ALB Oauth2 Proxy for App ID integration fails to start

I deployed a containerized app to my IBM Cloud Kubernetes service in a VPC. The app uses App ID for authentication. The deployment pipeline ran successfully. The app seems ready, but when accessing its URL it gives an internal server error (500 status code).
From the Kubernetes dashboard I found that the ALB Oauth Proxy add-on is failing. It is deployed, but does not start.
The deployment seems to fail at the health checks (ping not successful). From the pod logs I found the following as the last (and only) entry:
[provider.go:55] Performing OIDC Discovery...
Other than that, there is not much. Any advice?
Judging from the missing logs and the failing pings, it seemed related to the network setup. Checking the VPC itself, I found that there was no Public Gateway attached to the subnet. Attaching one allowed outbound traffic, the OAuth proxy could contact the App ID instance, and the app is working as expected now.
Make sure that the VPC subnets allow outbound traffic and have a Public Gateway enabled.
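If you prefer the CLI over the console for this check, here is a rough sketch; the gateway, VPC and zone names are placeholders, and attaching the gateway to the subnet can also be done in the VPC console:
# list the VPC subnets; the Public Gateway column shows whether one is attached
ibmcloud is subnets
# list the gateways that already exist (a public gateway is per zone)
ibmcloud is public-gateways
# create a gateway in the subnet's zone, then attach it to the subnet
# (the exact attach flag of 'ibmcloud is subnet-update' differs between CLI
# versions, so check its --help output)
ibmcloud is public-gateway-create my-gateway my-vpc us-south-1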

API Gateway Proxy to VPC Link

I am trying to use API Gateway to route traffic to an internal network load balancer.
All routes to the base path (/) are working, so I know the VPC Link is up and reachable.
I added a proxy resource (/{proxy+}) with the ANY HTTP method. In the ANY "Integration Request" I selected:
Integration Type: VPC Link
Use Proxy Integration
Method: ANY
VPC Link: My-VPC-link (abcdefg)
Endpoint URL: (e.g. http://abcd1234.cloudfront.net/{proxy})
I can see that my web server responds with a redirect:
(b9d0c629-31ec-11e8-b452-0f13c3c62b81) Endpoint response body before
transformations: <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">
<title>Redirecting...</title>
<h1>Redirecting...</h1>
<p>You should be redirected automatically to target URL: http://abcd1234.cloudfront.net/api/. If not click the link.
(b9d0c629-31ec-11e8-b452-0f13c3c62b81) Method completed with status: 301
The web page shows:
{"message":"Forbidden"}
Also, if I try to directly link to the CloudFront URL I get the same error.
In addition to the CloudFront URL, I've also tried the following:
Custom Domain Name: 403 Forbidden
The URL of my deployed stage: {"message": "Internal server error"}
What URL should be in the 'Endpoint URL' field in the integration request?
It turns out that API Gateway must call the VPC Link endpoint with 'http', not 'https', in the VPC Link URL.
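For reference, this is roughly what that integration looks like via the AWS CLI, with the http:// scheme on the endpoint URL; the API id (abc123), the resource id of /{proxy+} (def456) and the host are placeholders, while abcdefg is the VPC Link id from the question:
aws apigateway put-integration \
  --rest-api-id abc123 \
  --resource-id def456 \
  --http-method ANY \
  --type HTTP_PROXY \
  --integration-http-method ANY \
  --connection-type VPC_LINK \
  --connection-id abcdefg \
  --uri 'http://abcd1234.cloudfront.net/{proxy}' \
  --request-parameters 'integration.request.path.proxy=method.request.path.proxy'
The --request-parameters mapping is what forwards the greedy {proxy} path segment to the backend.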
Everything seems to be in order in your API Gateway configuration, including the endpoint URL.
My guess is that something is off with the redirections. Is it possible that your server keeps redirecting to the same place again and again, or that the redirection rules always apply?
To be sure whether the problem is on your server's side, try a simpler setup: make an API call to an endpoint that doesn't redirect and just returns a simple response.
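One quick way to see the redirect chain the backend produces (host taken from the question) and rule a loop in or out:
curl -sIL http://abcd1234.cloudfront.net/ | grep -iE '^(HTTP|location)'
If the same Location keeps repeating in the output, the server is redirecting in a loop and the problem is not in the API Gateway setup.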

AWS Classic Load Balancer + EC2: web API requests returns 404

I have an AWS EC2 Jira instance running behind an AWS Classic load balancer. The site loads fine in the browser, but all API requests return 404 for some reason. It is not a Jira 404, but a generic 404 response with no body and minimal headers. The only useful response header seems to be Server: nginx.
I've tried white-listing my client IP, opening up all ports, sending requests to the LB and directly to the instance with proper security group settings, etc., but the same 404 response is returned. I'm using Postman to test the API. I noticed that when I load the EC2 instance directly in the browser, it redirects to the load balancer.
Returns 200 with HTML (basic auth works, too):
GET http://jira (home page)
Returns 404:
GET http://jira/rest/api/2/issue/ticket-num (or any other /rest/ endpoint)
Where should I start looking to debug this 404 issue? I feel like I'm missing something basic. I'm not seeing any Jira configuration for setting up its REST API. I feel like it's perhaps a server configuration issue, although I've never come across manual web server configuration while installing Jira, so maybe it's on the AWS side?
EDIT: I'm still waiting to get SSH access to the instance, so I'll update as I get more info and access.
This kind of HTTP 404 response with a very limited set of headers can come from the default (bottom) rule of the load balancer's listener. I experienced a similar issue: I got an HTTP 404 because in one of the listener rules I used a path condition instead of a host-header condition and put the host domain name there. That rule never matched, so the default rule kicked in and returned 404 because no such path exists on the instance.
I would recommend trying the 'Redirect to' or 'Return fixed response' options for the default rule to check whether requests are actually falling through to it.
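If there is an Application Load Balancer involved (listener rules are an ALB feature, not a Classic ELB one), here is a sketch of both checks; the listener ARN is a placeholder:
# list the rules on the listener and the order they are evaluated in
aws elbv2 describe-rules --listener-arn <listener-arn>
# make the default rule announce itself, so you can tell when requests fall through to it
aws elbv2 modify-listener --listener-arn <listener-arn> \
  --default-actions '[{"Type":"fixed-response","FixedResponseConfig":{"StatusCode":"404","ContentType":"text/plain","MessageBody":"hit the default rule"}}]'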

Spinnaker Gate is redirecting to the incorrect authentication URL

So I have Spinnaker running behind an HTTPS load balancer; the external port is the standard 443, which gets mapped to port 9000 on the Spinnaker instance. I've gotten pretty much everything to work, except that a redirect from Gate is still appending the :9000 port to my URL.
Requests sent to https://my.url.com/gate/auth/redirect?to=https://my.url.com/#/infrastructure come back as a 301 with the Location header set to https://my.url.com:9000/gate/login, which fails because the load balancer is only listening on 443. If I manually delete the port and go straight to https://my.url.com/gate/login, the OAuth flow works as expected, and once authed all Deck functionality and subsequent Gate queries work as expected.
In my /etc/default/spinnaker file I have
SPINNAKER_DECK_BASEURL=https://my.url.com
SPINNAKER_GATE_BASEURL=https://my.url.com/gate
in /opt/spinnaker/config/gate-googleOAuth.yml I have
spring:
  oauth2:
    client:
      preEstablishedRedirectUri: ${SPINNAKER_GATE_BASEURL}/login
      useCurrentUri: false
and I've run /opt/spinnaker/bin/reconfigure_spinnaker.sh plus restarts to make sure Deck and Gate get updated. Does anyone have any ideas what I might be missing?
I figured out my problem. With the help of this issue pointing me in the right direction (https://github.com/spinnaker/spinnaker/issues/1112) and some digging, I found that the issue was with apache2 and the reverse proxy back to Gate.
ProxyPassReverse
This directive lets Apache httpd adjust the URL in the Location, Content-Location
and URI headers on HTTP redirect responses. This is essential when Apache httpd
is used as a reverse proxy (or gateway) to avoid bypassing the reverse proxy because
of HTTP redirects on the backend servers which stay behind the reverse proxy.
from apache2 documentation https://httpd.apache.org/docs/current/mod/mod_proxy.html#proxypassreverse
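For illustration, a hedged sketch of what the relevant Apache vhost section could look like; port 9000 (where the load balancer forwards traffic), port 8084 (Gate's default port) and the file location are assumptions to adjust to your own install:
<VirtualHost *:9000>
    ServerName my.url.com
    # forward /gate to the Gate service
    ProxyPass        /gate http://localhost:8084
    # rewrite Location headers that Gate builds with its backend address
    ProxyPassReverse /gate http://localhost:8084
    # also rewrite redirects in which the external host name plus the local
    # port shows up, so ":9000" never leaks back to the client
    ProxyPassReverse /gate https://my.url.com:9000/gate
</VirtualHost>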