Amplify DataStore for Android implementation with complex objects - aws-amplify-sdk-android

I have an Android application which collects data in the form of text and images. I implemented an AWS Amplify integration: I am using Auth for logins, and I also added DataStore for online/offline synchronization of the collected data to the cloud. But I get a 400 error because my item exceeds the 400 KB item size limit in DynamoDB. After some research here, I discovered that it is possible to use Amplify DataStore to store complex objects such as images, with the binary content stored in S3. However, the sample code that demonstrates this is for React, and I have failed to implement the same thing in native Android. Does anyone have a way of implementing this in Android?

Currently, Amplify only supports 'complex objects' when using the API package. This does not include the DataStore package, which handles AppSync differently.
complex object support: import { API } from '@aws-amplify/api'
no complex object support: import { DataStore } from '@aws-amplify/datastore'
Sources:
https://github.com/aws-amplify/amplify-js/issues/4579#issuecomment-566304446
https://docs.amplify.aws/lib/graphqlapi/advanced-workflows/q/platform/js#complex-objects
If you want to use DataStore, you currently need to put the file into S3 separately, and then store reference details for the S3 file (i.e. bucket, region, key) in the DynamoDB record. This can be done with the Amplify Storage module:
const { key } = await Storage.put(filename, file, { contentType: file.type });
const result = await DataStore.save(/* a model instance holding the S3 key/info */);
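On native Android the same two-step pattern applies with the Amplify Android libraries: upload the file through the Storage category, then save a model that holds only the S3 key. A minimal Kotlin sketch, assuming a generated Note model with an imageKey field (both names are placeholders for whatever your GraphQL schema defines):

import android.util.Log
import com.amplifyframework.core.Amplify
import java.io.File

fun saveNoteWithImage(imageFile: File, description: String) {
    val s3Key = "images/${imageFile.name}"

    // 1. Upload the binary content to S3 via the Storage category.
    Amplify.Storage.uploadFile(s3Key, imageFile,
        { result ->
            // 2. Persist only the S3 key in DataStore, so the synced
            //    DynamoDB record stays far below the 400 KB item limit.
            val note = Note.builder()
                .description(description)
                .imageKey(result.key) // placeholder field on your model
                .build()
            Amplify.DataStore.save(note,
                { saved -> Log.i("DemoApp", "Saved note ${saved.item().id}") },
                { error -> Log.e("DemoApp", "DataStore save failed", error) }
            )
        },
        { error -> Log.e("DemoApp", "Upload failed", error) }
    )
}

To display the image later, read the key back from the queried model and fetch the file with Amplify.Storage.downloadFile.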

Related

How to store AWS S3 object data to a postgres DB

I'm working on a Golang application where users will be able to upload files: images and PDFs.
The files will be stored in an AWS S3 bucket, which I've implemented. However, I don't know how to retrieve identifiers for the stored items so that I can save them in Postgres.
I was thinking of using an item ID, but the AWS SDK for Go's list method does not provide an object ID:
for _, item := range response.Contents {
    log.Printf("Name : %s\n", *item.Key)
    // there is no ID field on the object summary (only Key, ETag, Size, etc.):
    // log.Printf("ID : %s\n", *item.???)
}
What other options are available to retrieve stored object references from AWS S3?
A common approach is to trigger a Lambda function from an S3 bucket event. This way you get the details of the object created in your bucket, and you can make the Lambda function persist the object metadata into Postgres.
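A minimal Kotlin sketch of such a handler, assuming the aws-lambda-java-events library, the PostgreSQL JDBC driver on the classpath, and placeholder table and connection settings:

import com.amazonaws.services.lambda.runtime.Context
import com.amazonaws.services.lambda.runtime.RequestHandler
import com.amazonaws.services.lambda.runtime.events.S3Event
import java.sql.DriverManager

class S3ToPostgresHandler : RequestHandler<S3Event, Unit> {
    override fun handleRequest(event: S3Event, context: Context) {
        // Placeholder connection settings; in practice read them from env vars.
        val url = "jdbc:postgresql://my-db-host:5432/mydb"
        DriverManager.getConnection(url, "dbuser", "dbpass").use { conn ->
            val stmt = conn.prepareStatement(
                "INSERT INTO s3_objects (bucket, object_key, size_bytes) VALUES (?, ?, ?)"
            )
            for (record in event.records) {
                // Each event record carries the bucket and key of the new object;
                // bucket + key is the stable reference you store in Postgres.
                stmt.setString(1, record.s3.bucket.name)
                stmt.setString(2, record.s3.`object`.key)
                stmt.setLong(3, record.s3.`object`.sizeAsLong)
                stmt.executeUpdate()
            }
        }
    }
}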
Another option would be simply to append the object key you are using in your SDK to the bucket name you're targeting; the result is a full URI that points to the stored object. Something like this:
s3://{{BUCKET_NAME}}/{{OBJECT_KEY}}

Using Ionic storage or a database (NoSQL, Sqlite, etc)

I am going to create an app (iOS and Android) that will save data to the user's device (text, images, files, etc.) and keep it there until the user decides to send it to the server. I can do it either with a SQLite database or with Ionic Storage, but I don't know what the best practice would be.
For simplicity I will only present the two types of items that will be stored: notes and records.
notes structure
notes = {
    description: this.description,
    otherText: this.otherText,
    fileOrImage1: this.imageOrFileURL1,
    fileOrImage2: this.imageOrFileURL2,
    // ...an unlimited number of fileOrImage URLs here
};
records structure
records = {
    name: this.name,
    description: this.description,
    // NOTE: these files and images are different from the ones above; they will be in separate components
    fileOrImage1: this.imageOrFileURL1,
    fileOrImage2: this.imageOrFileURL2,
    // ...an unlimited number of fileOrImage URLs here
}
The user will first store the data on the device, and it will only be uploaded when the user sends it to the server. Once it is uploaded, it gets deleted from the device.
There can be many notes and records, let's say 25 each. Should I use Ionic Storage or something like a SQLite database? If I use Ionic Storage I will need to create a unique ID for each note and record and save it.
I am willing to change my approach if anybody has a better way. I'm still in the planning stage.
I used a SQLite database for an app I built with Ionic; the reason for my choice was that I could then easily query the data, as with any database.

Is it possible to export PubNub chat messages to a PostgreSQL database?

I am prototyping a mobile app and I want to build it quickly. For that purpose, I am using the PubNub chat engine.
However, I plan to migrate to a PostgreSQL database after the beta test ends, and I don't want to lose the existing data already stored in PubNub. Is there a way to export my chat data to my own PostgreSQL database?
Export PubNub Chat Messages to your PostgreSQL Database
While many approaches exist, one stands out: PubNub Functions. Using an After Publish event handler, you can asynchronously and reliably save messages to your database. Your database needs to be accessible via a secured HTTPS endpoint.
See also this Stack Overflow answer: PubNub: What is the right way to log all published messages to my DB. You will want to save your JSON messages to a private database using the method described there; example code is shown below.
export default request => {
    const xhr = require('xhr');
    const post = { method: "POST", body: request.message };
    const url = "https://my.company.com/save";
    // save the message asynchronously; return the promise chain so the
    // function does not terminate before the request completes
    return xhr.fetch(url, post).then(serverResponse => {
        // DB save succeeded
        return request.ok();
    }).catch(err => {
        // DB save failed; handle err, then let the message through anyway
        return request.ok();
    });
};

Save data using Kitura - Swift

I am doing a POC to save data into CouchDB using IBM's Kitura framework. I am able to upload some data to CouchDB using scripts, and I can fetch it and serve it through a web API.
Similarly, I want another API that accepts data in JSON format and saves it in CouchDB.
Any guidance will be really helpful.
Have you taken a look at the Kitura-CouchDB package available here: https://github.com/IBM-Swift/Kitura-CouchDB
It also includes a sample usage case.
You can also take a look at our TodoList example using CouchDB (or Cloudant) databases.
https://github.com/IBM-Swift/TodoList-CouchDB/
let couchDBClient = CouchDBClient(connectionProperties: connectionProperties)
let database = couchDBClient.database(databaseName)
let document = JSON(json)
print("JSON: \(document.rawString() ?? "")")
database.create(document) { id, revision, doc, error in
    if let error = error {
        print("Failed to create document: \(error)")
    } else if let id = id {
        print("Created document with id \(id)")
    }
}

How to create an H2OFrame using the H2O REST API

Is it possible to create an H2OFrame using H2O's REST API, and if so, how?
My main objective is to use models stored inside H2O to make predictions on external H2OFrames.
I need to be able to generate those H2OFrames externally from JSON (I suppose by calling an endpoint).
I read the API documentation but couldn't find any clear explanation. I believe the closest endpoints are /3/CreateFrame, which creates random data, and /3/ParseSetup, but I couldn't find any reliable tutorial.
Currently there is no REST API endpoint to directly convert a JSON record into a Frame object. Thus, the only way forward is to first write the data to a CSV file, then upload it to h2o using POST /3/PostFile, and then parse it using POST /3/Parse.
(Note that the POST /3/PostFile endpoint is not in the documentation. This is because it is handled separately from the other endpoints: it takes an arbitrary file in the body of the POST request and saves it as a "raw data file".)
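If you do want to drive this flow over REST from the JVM, here is a rough Kotlin sketch using only java.net. The host, port, and frame names are placeholders, and the parameter names follow the h2o-3 REST reference, so verify them against your h2o version; the ParseSetup response supplies the arguments you then echo back to POST /3/Parse.

import java.io.File
import java.net.HttpURLConnection
import java.net.URL
import java.net.URLEncoder

// Upload a local CSV as a raw file, then ask ParseSetup to guess the
// parse settings. http://localhost:54321 is h2o's default address.
fun uploadToH2O(csv: File, base: String = "http://localhost:54321") {
    // 1. POST /3/PostFile: the request body is the raw file content.
    val dest = URLEncoder.encode(csv.name, "UTF-8")
    val post = URL("$base/3/PostFile?destination_frame=$dest")
        .openConnection() as HttpURLConnection
    post.requestMethod = "POST"
    post.doOutput = true
    post.setRequestProperty("Content-Type", "application/octet-stream")
    post.outputStream.use { csv.inputStream().copyTo(it) }
    println("PostFile: ${post.inputStream.bufferedReader().readText()}")

    // 2. POST /3/ParseSetup: h2o inspects the raw file and returns
    //    suggested parse settings to pass on to POST /3/Parse.
    val setup = URL("$base/3/ParseSetup").openConnection() as HttpURLConnection
    setup.requestMethod = "POST"
    setup.doOutput = true
    setup.setRequestProperty("Content-Type", "application/x-www-form-urlencoded")
    val body = "source_frames=" + URLEncoder.encode("[\"${csv.name}\"]", "UTF-8")
    setup.outputStream.use { it.write(body.toByteArray()) }
    println("ParseSetup: ${setup.inputStream.bufferedReader().readText()}")
}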
The same job is much easier to do in Python or in R: for example in order to upload some dataset into h2o for scoring, you only need to say
df = h2o.H2OFrame(plaindata)
I am already doing something similar in my project. Since there is no REST API endpoint to directly convert a JSON record into a Frame object, I am doing the following:
1. For model building: first transfer and write the data into a CSV file on the machine where the h2o server or cluster is running. Then import the data into h2o using POST /3/ImportFiles, and then parse, build a model, and so on. I am using the h2o-bindings APIs (RESTful APIs) for this. Since I have large data (hundreds of MBs to a few GBs), I use /3/ImportFiles instead of POST /3/PostFile, as the latter is slow for uploading large data.
2. For model scoring or prediction: I am using the model MOJO and POJO. In your case, use POST /3/PostFile as suggested by @Pasha if your data is not large. But as per the h2o documentation, it is advisable to use the MOJO or POJO for model scoring or prediction in a production environment, rather than calling the h2o server/cluster directly. MOJOs and POJOs are thread-safe, so you can scale scoring with multithreading for concurrent requests.
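For the MOJO route, scoring happens entirely in your own JVM process via the h2o-genmodel library, with no h2o cluster in the loop. A minimal Kotlin sketch (the model file and feature names are placeholders for your own model):

import hex.genmodel.MojoModel
import hex.genmodel.easy.EasyPredictModelWrapper
import hex.genmodel.easy.RowData

fun main() {
    // Load a MOJO previously exported from h2o (path is a placeholder).
    val model = EasyPredictModelWrapper(MojoModel.load("my_model.zip"))

    // Build one input row; the feature names must match the training frame.
    val row = RowData().apply {
        put("sepal_len", "5.1")
        put("sepal_wid", "3.5")
    }

    // predictMultinomial is for multinomial classifiers; other model types
    // have their own predict* methods (predictBinomial, predictRegression, ...).
    val prediction = model.predictMultinomial(row)
    println("Predicted class: ${prediction.label}")
    println("Class probabilities: ${prediction.classProbabilities.joinToString()}")
}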