How to expose AWS Personalize recommendations via REST API

I want to expose the recommendations from AWS Personalize to clients via a REST API. At this point I am thinking about AWS API Gateway > AWS Lambda > AWS Personalize.
Is there a native way to do this, or a better approach?

Using API Gateway and a Lambda function is one of the more common ways of creating a REST API around a Personalize campaign. API Gateway gives you the ability to add caching, throttling, alternative security patterns, and more. Since the Personalize GetRecommendations/GetPersonalizedRanking APIs only return itemIds and scores, you typically want to decorate the itemIds with the item metadata needed by clients to render recommendations (e.g., item price, name, description, image URL); otherwise your clients will likely have to look up that information elsewhere. The Lambda function gives you the layer needed to look up item metadata from your item catalog and return a response more suitable for rendering in an application. The Amazon Personalize Samples GitHub repo has an example using SAM to deploy Lambda functions for providing recommendations and ingesting events behind API Gateway.
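For illustration, here is a minimal sketch of such a Lambda handler in Python (this is not the samples repo's code; the campaign ARN environment variable, the DynamoDB catalog table, and its attributes are assumptions):

import json
import os

import boto3

personalize = boto3.client("personalize-runtime")
dynamodb = boto3.resource("dynamodb")
items_table = dynamodb.Table(os.environ.get("ITEMS_TABLE", "item-catalog"))  # hypothetical catalog table

def handler(event, context):
    user_id = event["pathParameters"]["userId"]

    # Raw recommendations from the campaign: only itemIds and scores.
    response = personalize.get_recommendations(
        campaignArn=os.environ["CAMPAIGN_ARN"],  # assumed environment variable
        userId=user_id,
        numResults=10,
    )

    # Decorate each itemId with catalog metadata so clients can render it.
    recommendations = []
    for item in response["itemList"]:
        metadata = items_table.get_item(Key={"itemId": item["itemId"]}).get("Item", {})
        recommendations.append({**item, **metadata})

    return {"statusCode": 200, "body": json.dumps(recommendations, default=str)}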
Some other options include AWS App Runner, which supports deploying code from a GitHub repo or a Docker container image from ECR behind an auto-scaling API, or a microservice in ECS/EKS behind an Application Load Balancer. An alternative to REST is a GraphQL endpoint using AWS AppSync, with a Lambda function as described above.
The best option comes down to the approach that best suits your existing architecture or experience.

Related

Pubsub HTTP POST?

I'm working with a service that will forward data to a URL of your choosing via HTTP POST requests.
Is there a simple way to publish to a Pubsub topic with a POST? The service I'm using (Hologram.io's Advanced Webhook Builder) can't store any files, so I can't upload a Google Cloud service account JSON key file.
Thanks,
Ryan
You have two challenges in your use case:
Format
Authentication
Format
You need to customize the webhook to comply with the Pub/Sub format. Some webhooks are customizable enough for that, but not all of them. If you can't customize the webhook call the way Pub/Sub expects, you need to use an intermediary layer (Cloud Functions or Cloud Run, for example).
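For example, a minimal sketch of that intermediary layer as an HTTP-triggered Cloud Function in Python; the project and topic names are placeholders:

import os

from google.cloud import pubsub_v1

publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path(os.environ["GCP_PROJECT"], "webhook-events")  # placeholder topic

def webhook(request):
    # Republish the raw webhook body as a Pub/Sub message, since the caller
    # can't be customized to match the Pub/Sub publish format itself.
    future = publisher.publish(topic_path, request.get_data())
    future.result()  # surface publish errors to the caller
    return ("", 204)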
Authentication
Whether you go directly to Pub/Sub or through an intermediary layer, the situation is the same: the requester (the webhook) needs to be authenticated and authorized to access the Google Cloud service.
One of the bad, but possible, practices is to make allUsers authorized to access your resources. Here is an example with a Pub/Sub topic.
Don't do that. Even if you increase "your" process security by defining a schema (and thus rejecting all the messages that aren't compliant with that schema), leaving a resource publicly accessible, without authentication, on the wild internet is criminal!
In the webhook context (I had this case previously at my company), I recommend using static authentication (a long-lived authentication header, not a short-lived (1h) Google OAuth2 token); an API key, for example. It's not perfect, because if an API key leaks, bad actors will be able to exploit the breach for a long time (rotate your API keys as soon as you can!), but it's safer than nothing!
I wrote a pretty old article on this use case (with ESPv2 and Cloud Run), but the principle and the configuration are almost the same on API Gateway, a Google Cloud managed service. In the article I create a proxy for Cloud Run, Cloud Functions, and App Engine, but you can do the same thing with Pub/Sub by setting the correct target URL.
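If you use the intermediary layer, the static key check can be a few lines at the top of the function sketched above; the header name and environment variable are illustrative assumptions:

import hmac
import os

def is_authorized(request) -> bool:
    # Constant-time comparison of the long-lived key header (rotate it regularly!).
    expected = os.environ["WEBHOOK_API_KEY"]  # assumed to hold the issued key
    return hmac.compare_digest(request.headers.get("X-API-Key", ""), expected)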

REST API calls for setting namespace preferences and Program preferences

Can the namespace preferences and program preferences be set via REST API calls? If yes, what is the syntax for it?
Generally in Cloud Data Fusion, when we intend to perform an action on the GCP side, like creating/deleting/restarting an instance, it's feasible to use the native Google Cloud API, which gives you the opportunity to interact with a service endpoint via JSON/HTTP calls as described in the Google Cloud API design document.
Specifically for Data Fusion, you can follow the Cloud Data Fusion REST API reference document, which explains the methods for composing REST API HTTP calls to manage Data Fusion instances; moreover, every method description in the documentation contains a Google API Explorer sub-panel, giving you hands-on experience building JSON requests against live data.
That said, I assume your initial question relates more to the CDAP REST API, as it includes the methods for pure CDAP instance metadata/namespaces/application configuration.
From the user perspective your workflow might be the following:
Identify the CDAP API endpoint as explained in this guideline;
Compose an HTTP PUT/GET request for the relevant Data Fusion Namespace/Metadata/Preferences/Configuration object via the CDAP RESTful API, as in the sketch below.
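As a hedged sketch of that workflow in Python (the instance endpoint, namespace, and preference key are placeholders; a Google OAuth2 access token is used as the Bearer token, as described in the guideline):

import google.auth
import google.auth.transport.requests
import requests

# Obtain an access token for the default credentials (e.g. a service account).
credentials, _ = google.auth.default()
credentials.refresh(google.auth.transport.requests.Request())

cdap_endpoint = "https://<instance-api-endpoint>"  # from step 1 above

resp = requests.put(
    f"{cdap_endpoint}/v3/namespaces/my-namespace/preferences",  # hypothetical namespace
    headers={"Authorization": f"Bearer {credentials.token}"},
    json={"system.profile.name": "SYSTEM:dataproc"},  # illustrative preference key/value
)
resp.raise_for_status()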
Yes of course! You have two methods.
The first method is creating it from the platform. Follow the steps below:
Open your Data Fusion instance
Go to System Admin => Configuration => Make HTTP calls
To create a namespace, submit an HTTP PUT request:
PUT /v3/namespaces/<namespace-id>
For details, see the CDAP documentation.
The second method is using Terraform.

How to create a dynamic API endpoint connection using HTTP or REST connectors in Azure Data Factory V2

I have an external REST based API that I need to create a connection to in order to retrieve data on a regularly scheduled basis (for BI purposes). This API is fairly robust, and supports around 60 distinct endpoints. Also, this same API is used to access information across multiple client sub-domains (e.g. client1.apisource.com, client2.apisource.com, client3.apisource.com, etc.). In other words, the API endpoints are the same for each client subdomain.
So what I'm trying to figure out is whether it's possible to create a single ADF instance that contains a complete set of pipeline actions for each endpoint, using a "dynamic" URL based on the client subdomains. In other words, I'm trying to see if a single ADF instance can manage a dynamic list of base URLs.
I tried to parameterize the HTTP and REST connections, but this doesn't appear to be supported. Is this possible yet? Any thoughts? Thanks!
Here is an example of a Web Activity to call a REST API using parameters and expressions. The URL can be an expression like:
@concat('https://management.azure.com/subscriptions/', pipeline().parameters.SubscriptionID, '/resourceGroups/', pipeline().parameters.ResourceGroup, '/providers/Microsoft.Sql/servers/', pipeline().parameters.Server, '/databases/', pipeline().parameters.DW, '?api-version=2014-04-01')
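Applied to the scenario in the question, the same pattern could build the per-client base URL from a pipeline parameter holding the subdomain, for example:

@concat('https://', pipeline().parameters.ClientSubdomain, '.apisource.com/', pipeline().parameters.Endpoint)

Here ClientSubdomain and Endpoint are illustrative parameter names, not part of the original answer.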

Multiple Authorization types with AWS AppSync

It seems as though an AppSync project can only be configured with one Authorization type (API_KEY, AWS_IAM, etc.). I'm using AMAZON_COGNITO_USER_POOLS as my primary type, but I also have a (Node.js) client that I want to provision with API_KEY access.
Is this possible?
If not, can you suggest any alternatives?
The answer by Rohan works provided you don't have subscriptions; if you do have a subscription in one AppSync endpoint and mutate data in another AppSync endpoint, then while the data behind the scenes is updated, the subscription won't fire (which makes sense, as the subscription is attached as a listener within a single AppSync endpoint). Until AppSync supports multiple auth methods, you might want to give IAM a try; there are some details here on how to get it to work with Cognito in-app plus a Lambda. The example there is in Python, but for Node.js you would generate signatures with something like https://www.npmjs.com/package/aws4. The same method would work if you run your Node.js client elsewhere, provided you generate IAM access keys for it.
There are two approaches to solve this for your use case.
You can provision a separate AppSync endpoint (you can create up to 25 per region within an AWS account) with the same schema and configure it with a different authorization scheme. Use this approach only if you need hard isolation between the endpoints.
As of May 2019, AWS AppSync supports multiple authorization schemes for a GraphQL API. You can enable AMAZON_COGNITO_USER_POOLS as the default auth scheme and API_KEY as the additional auth scheme. This is the recommended approach and also works with subscriptions, which addresses Matthew’s concern in another answer.
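As a minimal sketch (in Python for brevity, though the same applies to a Node.js client), a backend caller can use the additional API_KEY scheme by sending the x-api-key header; the endpoint, key, and query are placeholders:

import requests

APPSYNC_URL = "https://<api-id>.appsync-api.<region>.amazonaws.com/graphql"  # placeholder
API_KEY = "<your-appsync-api-key>"  # placeholder

query = "query ListItems { listItems { items { id name } } }"  # hypothetical schema

resp = requests.post(APPSYNC_URL, headers={"x-api-key": API_KEY}, json={"query": query})
resp.raise_for_status()
print(resp.json())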
In May 2019, AWS AppSync announced support for multiple auth types in the same API: https://aws.amazon.com/blogs/mobile/using-multiple-authorization-types-with-aws-appsync-graphql-apis/

Creating an API Layer on top of Firebase Real-Time Database

I have some data stored in my Firebase Realtime Database. I want to expose some of this data via a REST API to my B2B customers.
I know that Firebase itself exposes a REST API, but its authentication mechanisms don't fit my needs. I want my customers to access the API with a simple API key passed in the HTTP request headers.
To summarize, I need an API layer sitting on top of my Firebase real-time database with the following properties:
Basic Authentication via an API key passed in the HTTP request headers
Some custom logic that makes sure customers respect the API limits (maximum requests per day for example)
The only thing I can think of is implementing this layer in AWS Lambda, but that also sounds a bit off. From the Lambda, I would have to access my Firebase database and serve that data. That seems like too many network hops; something native to Firebase would be great.
Thanks,
Guven.
Why not have a simple API which provides them an OAuth token for the original Firebase REST API if they present the correct API key?
It'll be more secure, as only you'll be able to mint the tokens, since only you have the service account private key. It also saves you the headache of building a whole REST API. Also, the OAuth tokens expire relatively quickly, so they're less of a risk than a normal key that you furnish.
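A hedged sketch of that token-minting endpoint in Python; the Flask framing, key store, and file path are assumptions:

from flask import Flask, abort, request
import google.auth.transport.requests
from google.oauth2 import service_account

app = Flask(__name__)

# Only your server holds the service account key, so only you can mint tokens.
credentials = service_account.Credentials.from_service_account_file(
    "service-account.json",  # placeholder path
    scopes=[
        "https://www.googleapis.com/auth/userinfo.email",
        "https://www.googleapis.com/auth/firebase.database",
    ],
)

VALID_KEYS = {"customer-key-1"}  # illustrative; use a real key store

@app.route("/token")
def token():
    if request.headers.get("X-API-Key") not in VALID_KEYS:
        abort(401)
    # Mint a short-lived OAuth2 access token the customer can pass to the
    # Firebase REST API as ?access_token=<token>.
    credentials.refresh(google.auth.transport.requests.Request())
    return {"access_token": credentials.token}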
I personally have created my own servlets where a user posts their data if they are authenticated using an ID/password combo.
In the servlets I use the default REST API provided by Firebase, with the OAuth token generated in my servlet. This way, I can set the DB security rules to false for all writes from any client API; the REST API and the Admin SDK on my server ignore the security rules by default.
After some research, I have decided that AWS is the best platform for such API-related features.
API Gateway lets you set up your API interface in a matter of seconds
DynamoDB stores your API data; you can easily populate the data here
AWS Lambda lets you write the integration code between API Gateway and DynamoDB
On top of these, the platform offers these features out of the box:
Creation, handling, and verification of API keys for authentication
Usage plans to make sure that API consumers don't exceed your API usage limits
Most of what I was looking for is offered in these AWS services.
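For illustration, the Lambda integration piece can stay very small, since API Gateway verifies the API key and enforces the usage plan before invoking it; the table name and key schema here are placeholders:

import json

import boto3

table = boto3.resource("dynamodb").Table("b2b-api-data")  # hypothetical table

def handler(event, context):
    # API Gateway has already validated the x-api-key header against a usage
    # plan; this function only fetches and returns the requested item.
    record_id = event["pathParameters"]["id"]
    item = table.get_item(Key={"id": record_id}).get("Item")
    if item is None:
        return {"statusCode": 404, "body": json.dumps({"error": "not found"})}
    return {"statusCode": 200, "body": json.dumps(item, default=str)}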