I am confused about the pricing criteria in AWS Amplify.
Let's assume
List<Object> obj = await Amplify.DataStore.query(Object.classType, where: Object.ID.eq('123'));
// obj.length is 1000
Is this command 1000 requests or 1 request?
There are no additional charges to use Amplify DataStore in your application; you only pay for the backend resources you use, such as AppSync and DynamoDB.
Refer to DynamoDB pricing for your AWS region.
I am trying to use the Google Cloud monitoring REST APIs to get timeseries data of my project. I have been looking at the documentation here: https://cloud.google.com/monitoring/api/ref_v3/rest/v3/projects.timeSeries/list
I do not see any API where I can query for multiple time series (with different filters for each) using one single REST call. Is that possible with the available APIs?
For example, for a given time range, I might want to get the kubernetes/cpu/usage metric with certain filters and the kubernetes/memory/usage metric with a different set of filters in a single REST API call.
I'm new to Google Cloud SQL and Pub/Sub. I couldn't find documentation anywhere about this, but another question's accepted and upvoted answer seems to say it is possible to publish a Pub/Sub message whenever an insert happens in the database. Excerpt from that answer:
2 - The ideal solution would be to create the Pub/Sub topic and publish to it when you insert new data to the database.
But since my question is a different one, I asked a new question here.
Background: I'm using a combination of Google Cloud SQL, Firestore and Realtime Database for my app for its own unique strengths.
What I want to do is write into Firestore and the Realtime Database once an insert succeeds in Google Cloud SQL. According to the answer above, these are the steps I should take:
The app calls a Cloud Function to insert data into a Google Cloud SQL database (PostgreSQL). Note: the Postgres tables have some important constraints and trigger Postgres functions; that's why we want to start here.
When the insert is successful I want Google Cloud SQL to publish a message to Pub/Sub.
Then there is another Cloud Function that subscribes to the Pub/Sub topic. This function will write into Firestore / Realtime Database accordingly.
I have steps #1 and #3 figured out. The solution I'm looking for is for step #2.
The answer in the other question is simply suggesting that your code do both of the following:
Write to Cloud SQL.
If the write is successful, send a message to a pubsub topic.
There isn't anything that will automate or simplify either of these tasks. There are no triggers for Cloud Functions that will respond to writes to Cloud SQL. You write code for task 1, then write the code for task 2. Both of these things should be straightforward and covered in product documentation. I suggest making an attempt at both (separately), and posting again with the code you have that isn't working the way you expect.
If you need to get started with pubsub, there are SDKs for pretty much every major server platform, and the documentation for sending a message is here.
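The two tasks above can be sketched in Node.js roughly as follows. This is only a sketch under assumptions: the table name `your_table`, column `name`, and topic `new-records` are placeholders I made up, and it assumes the `pg` and `@google-cloud/pubsub` packages plus Postgres connection settings in the usual `PG*` environment variables.

```javascript
// Pure helper: shape the Pub/Sub message from the inserted row.
function buildRecordMessage(row) {
  return { json: { id: row.id } };
}

async function insertAndPublish(record) {
  // Requires are inside the function so the sketch loads without the packages installed.
  const { Client } = require('pg');
  const { PubSub } = require('@google-cloud/pubsub');

  const db = new Client(); // connection settings come from PG* env vars
  await db.connect();
  try {
    // Task 1: write to Cloud SQL.
    const res = await db.query(
      'INSERT INTO your_table (name) VALUES ($1) RETURNING id',
      [record.name]
    );
    // Task 2 runs only if the insert above succeeded (no exception was thrown).
    return await new PubSub()
      .topic('new-records')
      .publishMessage(buildRecordMessage(res.rows[0]));
  } finally {
    await db.end();
  }
}
```

The subscribing Cloud Function in step #3 then receives the message and writes to Firestore / Realtime Database.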
While Google Cloud SQL doesn't manage triggers automatically, you can create a trigger in Postgres:
CREATE OR REPLACE FUNCTION notify_new_record() RETURNS TRIGGER AS $$
BEGIN
  PERFORM pg_notify('on_new_record', row_to_json(NEW)::text);
  RETURN NULL;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER on_insert
AFTER INSERT ON your_table
FOR EACH ROW EXECUTE FUNCTION notify_new_record();
Then, in your client, listen to that event:
import pg from 'pg'

const client = new pg.Client()
client.connect()
client.query('LISTEN on_new_record') // channel name: same as the first arg to pg_notify

client.on('notification', msg => {
  console.log(msg.channel) // on_new_record
  console.log(msg.payload) // {"id":"...",...}
  // ... do stuff
})
In the listener, you can either push to pubsub or cloud tasks, or, alternatively, write to firebase/firestore directly (or whatever you need to do).
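For instance, forwarding each notification to Pub/Sub could look roughly like this. The topic name `new-records` is a placeholder I chose, and the payload is assumed to be the JSON that `row_to_json(NEW)` produces in the trigger above; treat it as a sketch, not a finished implementation.

```javascript
// Parse the NOTIFY payload produced by row_to_json(NEW) in the trigger.
function parseNotification(msg) {
  return JSON.parse(msg.payload);
}

async function startForwarder() {
  // Requires are inside the function so the sketch loads without the packages installed.
  const pg = require('pg');
  const { PubSub } = require('@google-cloud/pubsub');

  const client = new pg.Client();
  await client.connect();
  await client.query('LISTEN on_new_record');

  const topic = new PubSub().topic('new-records');
  client.on('notification', async (msg) => {
    const row = parseNotification(msg);
    await topic.publishMessage({ json: row }); // or write to Firestore here instead
  });
}
```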
Source: https://edernegrete.medium.com/psql-event-triggers-in-node-js-ec27a0ba9baa
You could also check out Supabase which now supports triggering cloud functions (in beta) after a row has been created/updated/deleted (essentially does the code above but you get a nice UI to configure it).
I have an Android application which collects data in the form of text and images. I implemented an AWS Amplify integration: I am using Auth for logins, and I also added DataStore for online/offline synchronization of collected data to the cloud. But I get a 400 error because my item exceeds the 400 KB item size limit in DynamoDB. After some research here, I discovered that it's possible to use Amplify DataStore to store complex objects like images, but they are stored in S3. The sample code that demonstrates this is for React, and I have failed to implement the same in native Android. Does anyone have a way of implementing this in Android?
Currently, Amplify only supports 'complex objects' when using the API package. This does not include the DataStore package, which handles AppSync differently.
complex object support: import { API } from '#aws-amplify/api'
no complex object support: import { DataStore } from '#aws-amplify/datastore'
Sources:
https://github.com/aws-amplify/amplify-js/issues/4579#issuecomment-566304446
https://docs.amplify.aws/lib/graphqlapi/advanced-workflows/q/platform/js#complex-objects
If you want to use DataStore, currently you need to put the file into S3 separately, and then you can store reference details to the S3 file in the DynamoDB record (i.e. bucket, region, key). This could be done with Amplify Storage module.
const { key } = await Storage.put(filename, file, { contentType: file.type })
// Save a record whose fields hold the S3 reference (bucket, region, key per your model schema)
const result = await DataStore.save(/* a model instance with the S3 key/info */)
In our application, we have around 100,000 customers and need to process some data on a monthly basis. The data processing logic for each customer involves around 7 REST calls to different services. We need to do this in Spring Batch to achieve the required performance.
Steps to process the data:
Read the list of all customers from the data web service.
Call 7 different microservices to get the balance, type, fees, date, etc.
Write the result to an S3 bucket.
Please suggest how to design this flow in Spring Batch.
You can create a flow of multiple steps where for each step you pass "serviceName" as a parameter. Write a custom reader which calls the service based on serviceName; in the custom reader you can decide how you want to call the services.
List<Step> steps = new ArrayList<>();
for (String serviceName : serviceNames) {
    steps.add(createStep(serviceName));
}

private Step createStep(String serviceName) {
    return stepBuilderFactory.get("service-call-" + serviceName)
            .<Customer, CustomerResult>chunk(chunkSize) // your input/output types
            .reader(yourCustomReader)
            .processor(yourProcessor) // if needed
            .writer(yourCustomCompositeWriter)
            .build();
}
I'm trying to use the Cloud Firestore REST API, but can't seem to find the project id.
Firestore's REST API is still in beta; we can't generate our own database ids as of yet.
We have to use the default database id which is currently the following (glaringly literal) string:
(default)
And yes, you have to include the parentheses.
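For example, a document URL for the REST API can be built like this; the project id, collection, and document id below are placeholders, while the `(default)` database segment is the literal string described above.

```javascript
// Build a Firestore REST API document URL.
// projectId/collection/docId are your own values; the database id is
// always the literal string "(default)", parentheses included.
function firestoreDocUrl(projectId, collection, docId) {
  return `https://firestore.googleapis.com/v1/projects/${projectId}` +
         `/databases/(default)/documents/${collection}/${docId}`;
}

// Usage sketch (token/project are hypothetical):
// fetch(firestoreDocUrl('my-project', 'users', 'alice'),
//       { headers: { Authorization: `Bearer ${token}` } })
```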