AmplifyJS error when Appsync subscribe is exposed via API Gateway - aws-api-gateway

AppSync already replaces API Gateway to some extent, so why expose it via API Gateway? I know most people would ask this question. Here is why:
Support for Usage Plan
Possibility of monetization.
As far as I understood, AppSync is a GraphQL + Apollo Server implementation. The exposed API supports POST requests. Even the subscription request is a POST request, with a WebSocket MQTT (AWS IoT) URL as the response (example provided below):
{
  "extensions": {
    "subscription": {
      "mqttConnections": [
        {
          "url": "wss://something-ats.iot.ap-northeast-1.amazonaws.com/mqtt?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=...",
          "topics": [
            ".../.../subscribeToVehicleLocation/..."
          ],
          "client": "..."
        }
      ],
      "newSubscriptions": {
        "subscribeToVehicleLocation": {
          "topic": ".../../subscribeToVehicleLocation/..",
          "expireTime": null
        }
      }
    }
  },
  "data": {
    "subscribeToVehicleLocation": null
  }
}
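To make the shape of that response concrete, here is a small sketch that pulls out the connection details a client needs for the follow-up MQTT-over-WebSocket connection; the URL and topic values are hypothetical stand-ins for the elided ones above:

```python
import json

# A trimmed version of the AppSync subscription response shown above,
# with placeholder values standing in for the elided fields.
response_body = '''
{
  "extensions": {
    "subscription": {
      "mqttConnections": [
        {
          "url": "wss://example-ats.iot.ap-northeast-1.amazonaws.com/mqtt?X-Amz-Algorithm=AWS4-HMAC-SHA256",
          "topics": ["app/default/subscribeToVehicleLocation/client-1"],
          "client": "client-1"
        }
      ]
    }
  },
  "data": {"subscribeToVehicleLocation": null}
}
'''

def extract_mqtt_connection(body: str):
    """Return the (url, topics) pair the client must use for the
    MQTT-over-WebSocket connection that actually carries the events."""
    doc = json.loads(body)
    conn = doc["extensions"]["subscription"]["mqttConnections"][0]
    return conn["url"], conn["topics"]

url, topics = extract_mqtt_connection(response_body)
print(url.startswith("wss://"))  # the follow-up connection is a WebSocket, not plain HTTP
```

The point is that only the initial subscription request is a POST; the data itself flows over the `wss://` connection, which is what breaks behind a plain HTTP proxy.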
If that is the case, can we expose the AppSync endpoint via API Gateway (POST method)?
For simplicity, I tried with HTTP API in API-Gateway.
It worked well for Query & Mutation requests.
But for the Subscribe request, I am getting a handshake exception (Connection failed: Connection handshake error, in my Angular Amplify project).
Is this the right way to expose AppSync via API Gateway? Or should I use the AWS Service integration (in API Gateway) to invoke AppSync?
How can I resolve this WebSocket connection handshake error in my Angular Amplify project?
PS:
I was previously able to subscribe to the data using the original AppSync URL (using AmplifyJS in Angular 7). With the API Gateway URL, I am getting this WS handshake exception (with Amplify).
WebSocket connection to 'wss://....execute-api.ap-northeast-1.amazonaws.com/graphql?header=...&payload=e30=' failed: Error during WebSocket handshake: Unexpected response code: 400
in AWSAppSyncRealTimeProvider.js:603
Update 24-04-2020
I was able to invoke AppSync via the AWS Service integration in API Gateway with the settings below (using the REST protocol provided by API Gateway).
But I am still getting the WebSocket error in Amplify.
API-Gateway Configuration
Note: AWS Subdomain is the subdomain part of the AppSync API endpoint.
Trust relationship for IAM Role
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": [
          "appsync.amazonaws.com",
          "apigateway.amazonaws.com"
        ]
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
IAM Role Permission

I don't think that you are able to make the connection in the way that you imagined.
In AppSync, queries and mutations are delivered over normal HTTP connections (REST). Subscriptions, on the other hand, are based on WebSockets. Both protocols are TCP based, but they are treated differently by servers and clients (browsers and SDKs like Amplify).
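The difference is visible at the wire level: a WebSocket connection does not start as a POST at all, but as an HTTP GET carrying Upgrade headers, which the server must answer with 101 Switching Protocols. A small sketch (paths and values are illustrative placeholders):

```python
# A GraphQL query or mutation is an ordinary HTTPS POST; any HTTP proxy can forward it.
query_request = {
    "method": "POST",
    "path": "/graphql",
    "headers": {"Content-Type": "application/json"},
    "body": '{"query": "query { listVehicles { id } }"}',
}

# A subscription starts with a WebSocket handshake: an HTTP GET carrying
# Upgrade/Connection headers. A route that only proxies plain HTTP requests
# will not negotiate the upgrade, and the client sees a failed handshake
# (such as the 400 response reported above).
subscription_handshake = {
    "method": "GET",
    "path": "/graphql",
    "headers": {
        "Upgrade": "websocket",
        "Connection": "Upgrade",
        "Sec-WebSocket-Key": "x3JJHMbDL1EzLkh9GBhXDw==",  # a random nonce in practice
        "Sec-WebSocket-Version": "13",
    },
}

def needs_websocket_upgrade(request) -> bool:
    """True if the request asks the server to switch protocols."""
    return request["headers"].get("Upgrade", "").lower() == "websocket"

print(needs_websocket_upgrade(query_request))          # False
print(needs_websocket_upgrade(subscription_handshake)) # True
```

This is why the same API Gateway route can serve queries and mutations perfectly while failing every subscription attempt.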
AWS API Gateway supports WebSockets, but using a different configuration.
You told us that you configured the API Gateway to use an HTTP API for simplicity. In that case it will not work, because the endpoint you created inside API Gateway is not prepared to perform the WebSocket handshake with the client (Amplify).
To accept and control the handshake you must create the API as a WebSocket API. But here is the thing: with this API you can't just forward your call to AppSync. You will need to deploy a component that implements the connection control expected by the API Gateway WebSocket implementation.
A Lambda function would be the first idea, but then there is the question: how will you control and persist the WebSocket connection from your Lambda to the AppSync API? You can't count on Lambda to do that.
I can imagine the following implementation working in your case (though I don't think it would be a good one):
Implement a WebSocket API in AWS API Gateway.
Deploy a component to control the WebSocket API backend (this must run inside an instance or a container).
From this component, use Amplify to connect to the WebSocket endpoint in AppSync.
In the component, each time you receive a message from AppSync for a specific WebSocket connection, call the AWS API Gateway callback URL for the WebSocket API and forward the response.
In summary, with this solution you'll need to reproduce the connection control that AppSync provides out of the box, besides running an extra piece of infrastructure to do the plumbing that AppSync already does for you.
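The forwarding step (pushing an AppSync event back out through the API Gateway callback URL) could be sketched like this. Everything here is hypothetical: the stub stands in for the API Gateway WebSocket management API (boto3's `apigatewaymanagementapi` client and its `post_to_connection` call in a real deployment), and the connection table is exactly the bookkeeping AppSync normally does for you:

```python
class FakeManagementApi:
    """Stand-in for the API Gateway WebSocket management API
    (boto3 'apigatewaymanagementapi' client in a real deployment)."""
    def __init__(self):
        self.sent = []

    def post_to_connection(self, ConnectionId, Data):
        # The real call POSTs to .../@connections/{ConnectionId}
        self.sent.append((ConnectionId, Data))

# connection_id -> the AppSync subscription it maps to. The relay component
# must maintain this table itself (populated on $connect, cleaned on $disconnect).
subscriptions = {"abc123": "subscribeToVehicleLocation"}

def on_appsync_event(mgmt, subscription_name, payload: bytes):
    """Forward an AppSync subscription event to every API Gateway WebSocket
    connection that subscribed to it."""
    for conn_id, sub in subscriptions.items():
        if sub == subscription_name:
            mgmt.post_to_connection(ConnectionId=conn_id, Data=payload)

mgmt = FakeManagementApi()
on_appsync_event(mgmt, "subscribeToVehicleLocation", b'{"lat": 1.0, "lon": 2.0}')
print(mgmt.sent)  # [('abc123', b'{"lat": 1.0, "lon": 2.0}')]
```

Even this toy version shows the cost: the connection table, fan-out, and disconnect handling are all things AppSync would otherwise manage.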

After getting expert opinions, I finally dropped the plan of exposing AppSync via API Gateway.
AppSync & API Gateway belong to the same tier in the AWS stack. It was not a good idea to expose AppSync via API Gateway, since the AppSync endpoint would still be public (leaving a back door).
Below are my solutions (considering AppSync alone) for
Monetization scope: Collect AppSync metrics/trace logs and calculate the API usage based on Cognito UserId or API key.
Usage plan/quota: Set up a Lambda data source (in a pipeline resolver) incrementing a hit count in a Redis cache (with the API key as the key and the hit count as the value, under a custom TTL, say 1 day).
If there are any better solutions, please feel free to share them.
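The usage-plan idea can be sketched as follows; `FakeRedis` and the quota number are hypothetical stand-ins for a real Redis client (an INCR plus EXPIRE pattern) called from the Lambda data source:

```python
import time

class FakeRedis:
    """In-memory stand-in for the Redis INCR/EXPIRE calls a
    Lambda data source would make."""
    def __init__(self):
        self.store = {}  # key -> (count, expires_at)

    def incr(self, key, ttl_seconds):
        now = time.time()
        count, expires_at = self.store.get(key, (0, now + ttl_seconds))
        if now >= expires_at:  # window elapsed: start a fresh count and TTL
            count, expires_at = 0, now + ttl_seconds
        self.store[key] = (count + 1, expires_at)
        return count + 1

DAILY_QUOTA = 1000  # illustrative quota

def check_quota(cache, api_key: str) -> bool:
    """Increment the per-key hit count; allow the call while under quota."""
    hits = cache.incr(f"quota:{api_key}", ttl_seconds=24 * 3600)
    return hits <= DAILY_QUOTA

cache = FakeRedis()
print(check_quota(cache, "demo-key"))  # True on the first hit
```

With real Redis, the atomicity of INCR makes this safe under concurrent resolver invocations, which a per-Lambda in-memory counter would not be.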

Related

Questions about istio external authorization

Problem statement:
My goal is to have Istio with an external authorization service (ideally HTTP; if not possible, then gRPC would do as well). There is a requirement to be able to control what exact status code is returned to the client by the authorization service. The latter requirement is the most problematic part.
My research
I have read the Istio documentation on the external authorizer.
I have made a prototype with an HTTP auth service, but whatever non-200 status code I return from the auth service, the client always receives 403 Forbidden.
In the mesh config specification, the only possibility I see is to set statusOnError, but it is used only when the auth service is unreachable, and it cannot be changed dynamically.
Also, the Envoy documentation for the gRPC service shows the possibility of setting a custom status via the HTTP attributes for a denied response:
{
  "status": "{...}",
  "headers": [],
  "body": "..."
}
Questions:
Is a custom status possible only with a gRPC auth service?
Is Istio using Envoy API v3 or API v2?
Any suggestions on how to cook Istio with an external authorizer and custom status codes?
I made a gRPC auth service prototype and found the answer. It is counter-intuitive, but the gRPC external auth service really is more flexible than the HTTP one, and it does allow setting an arbitrary status code.
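For reference, wiring up a gRPC external authorizer in recent Istio versions looks roughly like this (service name, port, namespace, and policy name are placeholders); note that the custom status itself is returned by the gRPC service in its CheckResponse (the denied response `status` field from the Envoy API quoted above), not set in this config:

```yaml
# meshConfig: register the gRPC external authorizer
extensionProviders:
- name: my-grpc-authz
  envoyExtAuthzGrpc:
    service: ext-authz.default.svc.cluster.local
    port: 9000
---
# AuthorizationPolicy: route requests through that provider
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: ext-authz
  namespace: default
spec:
  action: CUSTOM
  provider:
    name: my-grpc-authz
  rules:
  - to:
    - operation:
        paths: ["/*"]
```

The CUSTOM action delegates the allow/deny decision (and, with the gRPC provider, the denied status code) entirely to the external service.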

Whitelist web application for API access without API key

We're developing a web application (SPA) consisting of the following parts:
NextJS container
Django backend for user management
Data API (FastAPI) protected with API keys, for which we also provide 3rd party access
The NextJS container uses an API key to access the data API. We don't want to expose the API key to the client (browser), so the browser sends API requests to the NextJS container, which then relays them to the data API, see here. This seems secure, but is more complicated and slower than sending requests from the browser to the data API directly.
I'm wondering if it's possible to whitelist the web application in the data API, so that the client (browser) can call the data API directly without an API key, but 3rd parties can't. FastAPI provides a TrustedHostMiddleware, but it's insecure because it's possible to spoof the Host header. It has been suggested to whitelist IPs instead, but we don't have a dedicated IP for our web application. I looked into using the Referer header, but it's not available in the FastAPI request object for some reason (I suspect a config problem in our hosting). Also, the Referer header could be spoofed as well.
Is there even a safe way to whitelist our web application for data API access, or do we need to relay the request via NextJS container and use an API key?
Is there even a safe way to whitelist our web application for data API access,
No, in all cases you need an authentication mechanism: something in front of the backend that checks whether the client is an authorized client.
The simplest pattern is using the NextJS container as the proxy: a proxy that holds an API key to call the backend (what you are currently doing).
There are many ways to implement a secured proxy for a backend, but this authentication logic should not live inside the backend; put it in a separate service (like Envoy, nginx, ...).
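A minimal sketch of that proxy idea, assuming a hypothetical data API host and an `X-API-Key` header (a real NextJS route handler would do the same thing with `fetch`):

```python
import os

# The key lives only in the server environment; it is never shipped to the browser.
API_KEY = os.environ.get("DATA_API_KEY", "server-side-secret")

def build_upstream_request(browser_request: dict) -> dict:
    """What the proxy route does conceptually: copy the browser's request
    and attach the server-held API key before calling the data API."""
    upstream = {
        "url": "https://data-api.example.com" + browser_request["path"],  # placeholder host
        "headers": dict(browser_request.get("headers", {})),
    }
    upstream["headers"]["X-API-Key"] = API_KEY
    return upstream

req = build_upstream_request({"path": "/v1/items", "headers": {"Accept": "application/json"}})
print("X-API-Key" in req["headers"])  # True: the key is added server-side only
```

Because the key is attached after the request reaches the server, nothing the browser can inspect or spoof (Host, Referer, IP) matters to the data API.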

External login (via ADFS) from identity server3 responds with http status code 504

I received a federation metadata endpoint from the customer, which I used to configure WsFederationAuthentication in IdentityServer3.
Everything works fine from the developer machine (the identity server login redirects to the ADFS login page), but after deploying the solution to AWS Elastic Beanstalk (which is in a private subnet), I receive a 504 HTTP status code when I try to log in through the external (ADFS) login.
I simulated this scenario in Postman. I receive a 302 response on the developer machine, but the request never ends (the Postman result pane shows 'Loading...') on the AWS EC2 instance.
I am able to browse the federation metadata URL and the /adfs/ls endpoint from the AWS EC2 instance.
In the identity server log, I can see the logs below:
External login requested for provider: adfs
Triggering challenge for external identity provider
HTTP Response
{
  "StatusCode": 401,
  "Headers": {
    "Content-Type": [
      "text/html"
    ],
    "Server": [
      "Microsoft-IIS/10.0"
    ],
    "Content-Length": [
      "0"
    ]
  },
  "Body": ""
}
After this, a gateway timeout happens (from the AWS load balancer).
As per the code in Microsoft.Owin.Security.WsFederation.WsFederationAuthenticationHandler.cs, the ApplyResponseChallengeAsync() method should generate a redirect response with a Location header containing the ADFS login page URL.
But this is not happening.
I see the below error in the HTTPERR log:
GET
/identity/external?provider=adfs&signin=699036641a8b2b6ddccea61bc8c1f715 --
1 Connection_Abandoned_By_ReqQueue DefaultAppPool
I do not see any event related to the above HTTP error in the Event Viewer log.
I searched for the above error, but the suggested solutions did not yield any good results for this issue.
I investigated further with the Process Monitor tool and compared the TCP operations between the local machine and the AWS EC2 instance for the identity server external login endpoint request. I found that on the AWS EC2 instance a TCP disconnect happens immediately after the TCP connect, whereas locally this was not happening: the TCP communication was established and went well.
Investigating further with Wireshark, I found that a handshake failure happens on the AWS EC2 instance right after the Client Hello. I then compared the TLS versions and cipher suites used (from the Wireshark logs): the local machine uses TLS 1.2 with cipher suite TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 (0xc030), while the AWS EC2 instance uses TLS 1.0, which is not supported by the ADFS server. Hence the connection could not be established, resulting in the handshake failure.
I followed this link https://learn.microsoft.com/en-us/officeonlineserver/enable-tls-1-1-and-tls-1-2-support-in-office-online-server#enable-strong-cryptography-in-net-framework-45-or-higher to make the .NET Framework use strong crypto.
After this registry update, I was able to successfully log in from the external IdP (ADFS) via the IdentityServer3 login page.
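For reference, the "strong cryptography" fix from that article boils down to the following registry values (reproduced here from the linked Microsoft doc; verify against it before applying, and reboot or restart the app pool afterwards):

```
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\.NETFramework\v4.0.30319
    SchUseStrongCrypto = dword:00000001

HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\.NETFramework\v4.0.30319
    SchUseStrongCrypto = dword:00000001
```

With these values set, .NET Framework 4.x clients negotiate TLS 1.2 instead of falling back to TLS 1.0 for outbound connections such as the one to ADFS.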

How to use JWT Auth0 token for Cloud Run Service to Service communication if the Metaserver Token is overriding the Auth0 Token

Prerequisites
I have two Cloud Run services, a frontend and a backend. The frontend is written in Vue.js/Nuxt.js and therefore uses a Node backend. The backend is written in Kotlin with Spring Boot.
Problem
To have authenticated internal communication between the frontend and the backend, I need to use a token that is fetched from the Google metaserver. This is documented here: https://cloud.google.com/run/docs/authenticating/service-to-service#java
I did set it all up and it works.
For my second layer of security I integrated the Auth0 authentication provider both in my frontend and my backend. In my frontend a user can log in. The frontend is calling the backend API. Since only authorized users should be able to call the backend I integrated Spring Security to secure the backend API endpoints.
Now the backend verifies whether the token on the caller's request is valid before allowing it to pass on to the API logic.
However, this theory does not work, and that is simply because I delegate the API calls through the Node backend proxy. The proxy logic, however, already applies a token to the request to the backend: the Google metaserver token. So let me illustrate that:
Client (Browser) -> API request with Auth0 token -> Frontend Backend Proxy -> Overrides the Auth0 token with the Google metaserver token -> Calls the backend API
Since the backend receives the metaserver token instead of the Auth0 token, it can never successfully authorize the API call.
Question
Since I was not able to find any articles about this problem, I wonder if I am simply doing it basically wrong.
What do I need to do to have a valid Cloud Run Service to Service communication (guaranteed by the metaserver token) but at the same time have a secured backend API with Auth0 authorization?
I see two workarounds to make this happen:
Authorize the API call in the Node backend proxy logic
Make the backend service publicly available, so that the metaserver token is unnecessary
I don't like either of the above, especially the latter one. I would really like to have it working with my current setup, but I have no idea how. There is no such thing as multiple authorization tokens, right?
OK, I figured out a third way to have a de-facto internal service-to-service communication.
To omit the metaserver token authentication but still restrict access from the internet, I did the following for my backend Cloud Run service:
This makes the service available from the internet; however, the ingress setting prevents any outsider from accessing the service. The service is available without IAM, but only for internal traffic.
So my frontend is calling the backend API now via the Node backend proxy. Even though the frontend node-backend and the backend service are both somewhat "in the cloud", they do not share the same "internal network". In fact, the frontend node-backend requests would be routed via egress to the internet and would call the backend service just like any other internet user would.
To make it work "like it is coming from internal", you have to do something similar to a VPN, but it's called VPC (Virtual Private Cloud). And luckily, that is very simple: just create a VPC Connector in GCP.
BUT be aware to create a so-called Serverless VPC Access connector. Explained here: https://cloud.google.com/vpc/docs/serverless-vpc-access
After the Serverless VPC Access connector has been created, you can select it in your Cloud Run service's "Connection" settings. For the backend service it can simply be selected. For the frontend service, however, it is important to select the second option:
At least that is important in my case, since I am calling the backend service by its assigned service URL instead of a private IP.
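The whole setup described above can be sketched in gcloud terms (service names, connector name, region, and IP range are placeholders; check the flags against the current gcloud reference before running):

```
# 1. Restrict the backend to internal traffic (reachable without IAM,
#    but not from the public internet)
gcloud run services update backend-service --ingress internal

# 2. Create a Serverless VPC Access connector
gcloud compute networks vpc-access connectors create my-connector \
  --region europe-west1 --network default --range 10.8.0.0/28

# 3. Route ALL frontend egress through the connector, so calls to the
#    backend's run.app URL count as internal traffic
gcloud run services update frontend-service \
  --vpc-connector my-connector --vpc-egress all-traffic
```

Step 3 is the "second option" mentioned above: routing only private-IP traffic through the connector would not help here, because the backend is addressed by its service URL.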
After all that is done, my JWT token from the frontend is successfully delivered to the backend API without being overwritten by a metaserver token.

Using Kong API Gateway as a proxy for Cisco UCCX

I am running Cisco UCCX 11.0, a contact center server based on a Java scripting engine. Scripts are built using the 'Script Editor' software, where you drag elements (JavaBeans) to define the script logic. One of the steps in a script performs a REST call. Unfortunately, this step does not support adding custom headers such as Authorization headers and is thus limited to Basic Authentication only.
I would like the script to make a REST call to an external API that uses a static Bearer Token. Am I correct in saying I could use Kong Gateway for this? Here is my idea of the flow:
UCCX makes a REST call to Kong with Basic Authentication ---> Kong Gateway receives the request ---> Kong Gateway makes its request to the external API with the static Bearer Token ---> the external API responds back to Kong ---> Kong forwards the response back to UCCX
Is this type of flow possible/easy to deploy?
This can easily be managed by assigning the Request Transformer plugin to the Kong API exposing the upstream service.
Example:
Let's assume you have an API endpoint on Kong called /myapi that is forwarding to your upstream service.
You then assign the Request Transformer plugin to the /myapi API.
For your case, you will most likely want to use the config.add.headers option when configuring the Request Transformer plugin to add the required authentication header, which will then be added to all upstream requests.
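In (modern) declarative config terms, the setup described above looks roughly like this; the service name, upstream URL, and token are placeholders, and the answer's original Gitter-era Kong used the older "API" object rather than services/routes:

```yaml
services:
- name: myapi
  url: https://external-api.example.com  # the upstream that expects the Bearer token
  routes:
  - name: myapi-route
    paths: ["/myapi"]
  plugins:
  - name: request-transformer
    config:
      add:
        headers:
        - "Authorization:Bearer <static-token-here>"
```

UCCX then calls Kong's /myapi with Basic Authentication, and Kong rewrites the outbound request with the Bearer token before forwarding it upstream.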
Relevant Gitter Conversation:
https://gitter.im/Mashape/kong?at=587c3a9c074f7be763d686db