AWS API Gateway - to use with an AWS EC2 endpoint or AWS Lambda?

I need to create an API where vendors will push data to the server using REST calls, and this data then needs to be pushed on to the user on a mobile app (using WebSockets, I am guessing as of now) to whom the data belongs.
For vendors to use the REST API, I need to check the vendor credentials and write the data to a DB.
I am keen to know what approach I should use. Should I use AWS API Gateway, which can help with security and scalability?
And while using AWS API Gateway, which would be the better approach: an EC2 endpoint or a Lambda endpoint?

Choosing between EC2 and Lambda depends on how you want to design your services and on your specific use cases. Going serverless is a trend these days, but you do not need to go serverless just for the sake of being serverless.
For your use case: if the REST API you expose updates a database, say RDS, a Lambda function is probably not an ideal choice, as you will need to open a connection every time the function is invoked. Moreover, if you run the Lambda with no VPC configuration, you will need to expose your RDS port publicly. If it's DynamoDB, it works out well.
But again, you want to push the updates out to mobile apps over, say, WebSockets. You definitely need a WebSocket server somewhere, and I guess that means EC2.
You may design your application such that all your business logic resides in Lambda functions, which update the DB and post a message to an SQS queue. The WebSocket server can then pick messages up from the SQS queue and push out the updates. This decouples your application architecture (a sketch follows below). This is just one approach, and it won't scale horizontally out of the box.
Or you may choose to put everything on one EC2 instance and expose a REST API that updates the DB and also posts updates over the WebSocket connections.
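To make the decoupled approach concrete, here is a minimal Python sketch of the Lambda side using boto3. The table name VendorData, the queue URL, and the field names are hypothetical; adjust them to your stack.

    import json
    import boto3

    # Hypothetical resource names; adjust to your stack.
    dynamodb = boto3.resource("dynamodb")
    table = dynamodb.Table("VendorData")
    sqs = boto3.client("sqs")
    QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/user-updates"

    def handler(event, context):
        # An API Gateway proxy integration delivers the request body as a string.
        body = json.loads(event["body"])

        # Persist the vendor's payload (vendor credentials are assumed to be
        # checked already, e.g. by an API Gateway authorizer).
        table.put_item(Item={"userId": body["userId"], "payload": body["payload"]})

        # Notify the WebSocket tier via SQS; it pushes to the matching user.
        sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=json.dumps(body))

        return {"statusCode": 200, "body": json.dumps({"status": "ok"})}

The WebSocket server would then poll the queue with sqs.receive_message and forward each message to the connection that belongs to the given userId.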


Read server time from AWS DynamoDB, Swift

I need to get the current server time/timestamp from AWS DynamoDB into my iOS Swift application.
In the Firebase database we can write the current timestamp to the DB and then read it back from the app. Any suggestion about this is appreciated.
DynamoDB does not provide any sort of server time—any timestamps must be added by the client. That being said, you can emulate a server time behavior by setting up a Lambda function or an EC2 instance as a write proxy for DynamoDB and have it add a timestamp to anything being written to DynamoDB. But it’s actually even easier than that.
AWS allows you to use API Gateway to act as a proxy to many AWS services. The process is a little long to explain in detail here, but there is an in-depth AWS blog post you can follow for setting up a proxy for DynamoDB. The short version is that you create a REST endpoint, choose “AWS Service Proxy” as the integration type, and apply a transformation to the request that inserts the time of the request (as seen by API Gateway). The exact request mapping you set up will depend on how you want to define the REST resources and on the tables you are writing to. There is a request context variable, $context.requestTimeEpoch, that you can use to get the API Gateway server time.
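As an illustration, a request mapping template for a hypothetical Items table could look like the following. The table and attribute names are assumptions; $input.path() and $context.requestTimeEpoch are standard API Gateway mapping-template variables. The template becomes the body of the DynamoDB PutItem call, so every item gets a server-side timestamp without any client involvement.

    {
      "TableName": "Items",
      "Item": {
        "id": { "S": "$input.path('$.id')" },
        "payload": { "S": "$input.path('$.payload')" },
        "createdAt": { "N": "$context.requestTimeEpoch" }
      }
    }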

Microservices: API Call Vs Messaging. When to Use?

I know that a messaging system is non-blocking and scalable and should be used in a microservices environment.
The use case I am questioning is:
Imagine that there's an admin dashboard client responsible for sending an API request to create an Item object. There is a microservice that provides the API endpoint and uses a MySQL database where the Item should be stored. There is another microservice which uses Elasticsearch for text-search purposes.
Should this admin dashboard client:
A. Send two API calls: one call to the MySQL service and another to the Elasticsearch service,
or
B. Send a message to a topic to be consumed by both the MySQL service and the Elasticsearch service?
What are the pros and cons when considering A or B?
I'm thinking that it's a little overkill when only two microservices are consuming this topic. Also, the frequency with which the admin creates Item objects is very low.
Like many things in software architecture, it depends. Your requirements, SLAs and business needs should make it clearer.
As you noted, a messaging system is non-blocking and much more scalable, but API communication has its pluses as well.
In general, REST APIs are best suited to request/response interactions where the client application sends a request to the API backend over HTTP.
Message streaming is best suited for notifications when new data or events occur that you may want to take action upon.
In your specific case, I would go with a messaging system, which is much more scalable and non-blocking.
Your approach A couples the "routing" logic into your application. Suppose you later need to perform an API call to audit your requests: you will need to change the code and add another call to your application logic. As you said, the approach is synchronous, and unless you provide threading logic, your calls will be serialized and won't scale, i.e., call MySQL --> wait for the response, then call Elasticsearch --> wait for the response, and so on.
In any case, you may prefer this approach if you need immediate consistency, i.e., when the result of one call feeds the next action.
Approach B decouples that routing logic: any other service interested in the event can subscribe to the topic and perform the expected action (see the sketch below). It is fully asynchronous and scalable. Here you will have eventual consistency, and you have to handle any possible failures.
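For illustration, here is a minimal Python sketch of approach B, assuming Kafka via the kafka-python package; the broker address and the topic name item-created are hypothetical.

    import json
    from kafka import KafkaProducer  # pip install kafka-python

    # Hypothetical broker address.
    producer = KafkaProducer(
        bootstrap_servers="localhost:9092",
        value_serializer=lambda v: json.dumps(v).encode("utf-8"),
    )

    def publish_item_created(item):
        # Fire-and-forget: the MySQL service and the Elasticsearch service
        # each subscribe to this topic and react independently.
        producer.send("item-created", item)
        producer.flush()

    publish_item_created({"id": 42, "name": "example item"})

Adding an audit service later then means adding a subscriber, with no change to this publishing code.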

Cloud SQL: send a Pub/Sub message on update/insert

I am setting up a read-only GraphQL instance using Java. GraphQL, as I understand it, needs to be told when to re-query its data sources. We are using GCP, with Cloud SQL as our primary data source. Our monolithic system is what is responsible for updating the data.
Is there a way to trigger a web request or Pub/Sub message from Cloud SQL without sys_eval(sys_eval('curl https://example.com'));?
Or is there a way to turn on sys_eval in Cloud SQL?
After some brainstorming around sys_eval alternatives, such as binary logs and so on, the course of action I'd recommend is to move the MySQL client to a GCE instance and establish the connection to the Cloud SQL instance through a private IP.
Such a connection is guaranteed much lower latency and much stronger network security than your current architecture, since the service does not use public IPs and is protected from the "outside" Internet.
You can find connection examples using VPC networks in the documentation provided.
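Cloud SQL itself cannot call out, so one way to get the Pub/Sub notification is to publish it from the application right after the write commits. Here is a rough Python sketch of that setup, assuming the PyMySQL driver and the google-cloud-pubsub client; the private IP, database, project, and topic names are all hypothetical.

    import json
    import pymysql
    from google.cloud import pubsub_v1  # pip install google-cloud-pubsub

    # Connect to Cloud SQL over its private IP (reachable from inside the VPC).
    conn = pymysql.connect(host="10.0.0.3", user="app",
                           password="secret", database="mydb")

    publisher = pubsub_v1.PublisherClient()
    topic_path = publisher.topic_path("my-project", "item-updates")

    with conn.cursor() as cur:
        cur.execute("UPDATE items SET name = %s WHERE id = %s", ("new name", 42))
    conn.commit()

    # Tell the GraphQL instance to re-query its data sources.
    publisher.publish(topic_path, json.dumps({"table": "items", "id": 42}).encode())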

How do I connect various microservices with Docker?

I have two microservices in Docker and I want to connect one to the other, but I don't know how to do it. Both (and future apps) are REST APIs built with Spring Boot. I have been searching for info and tutorials but haven't found anything. My idea is to have a main app that can connect to the other microservices (which are REST APIs) and then publish them, and I want all of this running inside Docker containers.
Is it possible? Does anyone know of a tutorial that explains this?
Thanks so much!
What you are describing could be an API Gateway. Here is a great tutorial explaining this pattern.
Implement an API gateway that is the single entry point for all clients. The API gateway handles requests in one of two ways. Some requests are simply proxied/routed to the appropriate service. It handles other requests by fanning out to multiple services.
A variation of this pattern is the Backend for Front-End pattern. It defines a separate API gateway for each kind of client.
Using an API gateway has the following benefits:
Insulates the clients from how the application is partitioned into microservices
Insulates the clients from the problem of determining the locations of service instances
Provides the optimal API for each client
Reduces the number of requests/roundtrips. For example, the API gateway enables clients to retrieve data from multiple services with a single round-trip. Fewer requests also means less overhead and improves the user experience. An API gateway is essential for mobile applications.
Simplifies the client by moving logic for calling multiple services from the client to the API gateway
Translates from a “standard” public web-friendly API protocol to whatever protocols are used internally
The API gateway pattern has some drawbacks:
Increased complexity - the API gateway is yet another moving part that must be developed, deployed and managed
Increased response time due to the additional network hop through the API gateway - however, for most applications the cost of an extra roundtrip is insignificant.
How to implement the API gateway?
An event-driven/reactive approach is best if it must scale to handle high loads. On the JVM, NIO-based libraries such as Netty, Spring Reactor, etc. make sense. Node.js is another option.
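As a toy illustration of the routing half of a gateway (in Python with aiohttp for brevity; the JVM and Node.js options above apply equally, and the backend service names are made up):

    from aiohttp import ClientSession, web

    # Hypothetical routing table: path prefix -> internal service address.
    ROUTES = {
        "/items": "http://item-service:8080",
        "/search": "http://search-service:8080",
    }

    async def gateway(request):
        # Pick the backend by path prefix and proxy the request through.
        for prefix, backend in ROUTES.items():
            if request.path.startswith(prefix):
                async with ClientSession() as session:
                    async with session.get(backend + request.path_qs) as resp:
                        return web.Response(body=await resp.read(),
                                            status=resp.status)
        return web.Response(status=404)

    app = web.Application()
    app.router.add_get("/{tail:.*}", gateway)

    if __name__ == "__main__":
        web.run_app(app, port=8080)

A production gateway would reuse one ClientSession, forward headers and other HTTP methods, and add authentication, but the routing idea is the same.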
To give you the simplest answer:
In general, containers can communicate with each other over any protocol (HTTP, FTP, TCP, UDP), not limited to just REST (HTTP/S):
using internal/external IPs and ports
using internal/external names (DNS):
if your microservices are in the same cluster (even across multiple hosts), you should be able to write your Spring Boot program to call http://{{container service name}}; resolving service names is a built-in feature of container networks (see the compose sketch below)
if you have more microservices in different clusters, on different hosts, or across the Internet, you can use APIM (API management) or a reverse proxy (NGINX, HAProxy) to manage the service names, e.g.
microservice1.yourdomain.com --> container1 or service1 (cluster)
microservice2.yourdomain.com --> container2 or service2 (cluster)
yourdomain.com/microservice1 --> container1 or service1 (cluster)
yourdomain.com/microservice2 --> container2 or service2 (cluster)
P.S. There are more sophisticated techniques out there, but they fundamentally come down to the approaches above.
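For the same-cluster case, a minimal docker-compose sketch (the service names are made up) shows the built-in DNS at work; from mainapp, your Spring Boot code can simply call http://microservice1:8080.

    version: "3.8"
    services:
      mainapp:
        build: ./mainapp
        ports:
          - "8080:8080"   # only the entry point is exposed to the host
      microservice1:
        build: ./microservice1
        # no ports needed: mainapp reaches it at http://microservice1:8080
      microservice2:
        build: ./microservice2

All three services join the default compose network, so each can resolve the others by service name.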

Run a web socket on Cloud Functions for Firebase?

Hello, I currently have a REST API running on Cloud Functions for Firebase using HTTP requests, but now I need to sync the data in real time when calling the functions. I read something about WebSockets.
Is there a way to run a WebSocket on Cloud Functions for Firebase?
This is not going to be a good fit for Cloud Functions. Websockets rely on long-lived connections to the same server over time; Cloud Functions are ephemeral compute instances that are spun down when there's no traffic. There's no way to force or guarantee that a Cloud Function will keep running or hold a connection open indefinitely.
I would encourage you to investigate using the Firebase Realtime Database as a conduit here instead of trying to add realtime to Cloud Functions.
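As a minimal sketch of that conduit idea, assuming the firebase-admin Python SDK (the database URL, path, and payload fields are hypothetical): the function writes the result to the Realtime Database, and every connected client observing that path receives the update over Firebase's own realtime channel, so you never manage a socket yourself.

    import firebase_admin
    from firebase_admin import db  # pip install firebase-admin

    # Hypothetical database URL; initialize once per process.
    firebase_admin.initialize_app(options={
        "databaseURL": "https://my-project.firebaseio.com",
    })

    def on_request(data):
        # Instead of pushing over a WebSocket, write to the Realtime Database;
        # subscribed clients receive the change immediately.
        db.reference("updates/" + data["userId"]).set({"payload": data["payload"]})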
Theoretically you could use two different layers: one to manage the WebSocket connections and another to handle the data processing.
The WebSocket layer would not be Cloud Functions but a Docker container running Pushpin on Cloud Run, which would route HTTP calls to your Cloud Functions to do the actual data processing.
This is also possible using an Apigee Java callout, where the Java code (if needed) calls a Cloud Function. See https://cloud.google.com/apigee/docs/api-platform/develop/how-create-java-callout