Save Form Recognizer scanned data in a database table - forms

After scanning my document with #AzureFormRecognizer [fott-preview.azurewebsites.net], how do I store these fields and their data in an Oracle or MS SQL Server database? I saw https://learn.microsoft.com/en-us/azure/data-factory/connector-oracle but couldn't make sense of it. Please help me.

Form Recognizer returns its results as JSON; you can use Logic Apps or Power Automate database connectors to push that JSON output into a database.
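If you would rather do it in code than through a Logic App, a minimal Python sketch along these lines could work. It assumes a v2.1-style analyze result (analyzeResult.documentResults[].fields, which is what the FOTT preview produces), a SQL Server target reached over pyodbc, and a made-up table called scanned_fields; adapt the names and connection string to your environment (use cx_Oracle/python-oracledb instead for Oracle).

```python
import json
import pyodbc  # assumption: SQL Server over ODBC; swap in an Oracle driver for Oracle

# Load the JSON that Form Recognizer returned for the scanned document.
with open("analyze_result.json") as f:
    result = json.load(f)

# Hypothetical target table for this example:
#   CREATE TABLE scanned_fields (doc_id VARCHAR(64), field_name VARCHAR(128), field_value NVARCHAR(MAX));
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=myserver;DATABASE=mydb;UID=user;PWD=secret"
)
cur = conn.cursor()

# v2.1-style output keeps the recognized key/value pairs under analyzeResult.documentResults[].fields.
for doc in result.get("analyzeResult", {}).get("documentResults", []):
    for name, field in (doc.get("fields") or {}).items():
        if field is None:
            continue
        cur.execute(
            "INSERT INTO scanned_fields (doc_id, field_name, field_value) VALUES (?, ?, ?)",
            ("my-document-1", name, field.get("text")),
        )

conn.commit()
conn.close()
```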

Related

PostgreSQL Store-and-Forward

I'm building a data-collection RESTful API. External devices will post JSON data to the database server, so my idea is to use a store-and-forward approach.
At the moment of the POST, it will store the raw JSON data in a table with timestamp and processed (true/false) fields.
Later, when (if) the DB server is not under load, some function, trigger, stored procedure, etc. will run. The idea is to process all the JSON data into suitable tables and fields for charting graphs/bars and mapping it over Google Maps later.
So how, and with what, should I run this second step when the DB server is not loaded and is free to process the posted JSON data?
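For what it's worth, one minimal sketch of that second step is a Python worker kicked off by cron (or pg_cron calling an equivalent SQL function) during quiet periods. The raw_events and readings tables and their columns below are hypothetical stand-ins for whatever step (1) actually stores:

```python
import json
import psycopg2  # assumption: plain psycopg2 against PostgreSQL

# Hypothetical raw table written by step (1):
#   CREATE TABLE raw_events (id SERIAL PRIMARY KEY, payload JSONB,
#                            received_at TIMESTAMPTZ DEFAULT now(),
#                            processed BOOLEAN DEFAULT FALSE);
conn = psycopg2.connect("dbname=collector user=collector")

def process_pending(batch_size=500):
    """Move unprocessed raw JSON rows into the structured tables used for charting."""
    with conn, conn.cursor() as cur:
        # SKIP LOCKED lets several workers run without stepping on each other.
        cur.execute(
            "SELECT id, payload FROM raw_events WHERE NOT processed "
            "ORDER BY received_at LIMIT %s FOR UPDATE SKIP LOCKED",
            (batch_size,),
        )
        rows = cur.fetchall()
        for row_id, payload in rows:
            data = payload if isinstance(payload, dict) else json.loads(payload)
            # Hypothetical target table: readings(device_id, lat, lon, value, measured_at)
            cur.execute(
                "INSERT INTO readings (device_id, lat, lon, value, measured_at) "
                "VALUES (%s, %s, %s, %s, %s)",
                (data["device_id"], data["lat"], data["lon"], data["value"], data["ts"]),
            )
            cur.execute("UPDATE raw_events SET processed = TRUE WHERE id = %s", (row_id,))
    return len(rows)

if __name__ == "__main__":
    process_pending()
```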

IEI Notes Direct Transfer Activity

I am using a Notes connector with my direct transfer activity to send data to a DB2 LUW table. The data is text in the Notes database and decimal(7,2) in the external datasource. I have added this to the Notes connector and tried several variations of it, and each time I get the same error. Any help would be greatly appreciated.
My other option is to transfer the data to a new database, run an agent to convert the data using the included formulas, and then do the direct transfer activity using the new database.
Transfer the data to a separate database, which the activity deletes and re-creates each time based on an existing template; run an agent in that new database to set new fields of the proper type with the needed data converted; and finally transfer the new fields, with the proper format, to the DB2 table. Tested and worked.
Although this solution does work, our organization has a policy of not creating any new databases, so I went the route of using DB2 LUW stored procedures to delete my documents and insert the new records, with both being called from IEI (formerly LEI) direct transfer activities.

MongoDB to Redshift

We have a few collections in MongoDB that we wish to transfer to Redshift (on an automatic, incremental, daily basis).
How can we do it? Should we export the Mongo data to CSV?
I wrote some code to export data from Mixpanel into Redshift for a client. Initially the client was exporting to Mongo, but we found Redshift offered very large performance improvements for querying. So first of all we transferred the data out of Mongo into Redshift, and then we came up with a direct solution that transfers the data from Mixpanel to Redshift.
To store JSON data in Redshift, you first need to create SQL DDL defining the schema in Redshift, i.e. a CREATE TABLE script.
You can use a tool like Variety to help, as it can give you some insight into your Mongo schema. However, it does struggle with big datasets - you might need to subsample your dataset.
Alternatively, DDLgenerator can generate DDL from various sources including CSV or JSON. It also struggles with large datasets (the dataset I was dealing with was 120 GB).
So in theory you could use mongoexport to generate CSV or JSON from Mongo and then run it through ddlgenerator to get a DDL.
In practice I found using JSON export a little easier because you don't need to specify the fields you want to extract. You need to select the JSON array format. Specifically:
mongoexport --db <your db> --collection <your_collection> --jsonArray > data.json
head data.json > sample.json
ddlgenerator postgresql sample.json
Here - because I am using head - I use a sample of the data to show the process works. However, if your database has schema variation, you want to compute the schema based on the whole database which could take several hours.
Next you upload the data into Redshift.
If you have exported JSON, you need to use Redshift's COPY from JSON feature. You need to define a JSONPaths file to do this.
For more information check out the Snowplow blog - they use JSONPaths to map the JSON onto a relational schema. See their blog post about why people might want to read JSON into Redshift.
Turning the JSON into columns allows much faster querying than other approaches such as using JSON_EXTRACT_PATH_TEXT.
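To make the COPY-from-JSON step concrete, here is a rough Python sketch. It assumes the exported JSON files and a JSONPaths file are already in S3 and that the target table has been created from the generated DDL; the bucket, table, column, and IAM role names are placeholders rather than anything from the original setup.

```python
import psycopg2  # Redshift speaks the PostgreSQL wire protocol, so psycopg2 works

# Hypothetical JSONPaths file (s3://my-bucket/events_jsonpaths.json) mapping JSON keys to columns:
# {"jsonpaths": ["$.user_id", "$.event_name", "$.properties.value", "$.time"]}

COPY_SQL = """
COPY events (user_id, event_name, value, event_time)
FROM 's3://my-bucket/exports/2016-01-01/'
IAM_ROLE 'arn:aws:iam::123456789012:role/my-redshift-copy-role'
FORMAT AS JSON 's3://my-bucket/events_jsonpaths.json'
TIMEFORMAT 'auto'
MAXERROR 10;
"""

conn = psycopg2.connect(
    host="my-cluster.abc123.us-east-1.redshift.amazonaws.com",
    port=5439, dbname="analytics", user="loader", password="secret",
)
with conn, conn.cursor() as cur:
    cur.execute(COPY_SQL)  # Redshift pulls the files from S3 itself
conn.close()
```

The same COPY can be re-run each day against a new S3 prefix, which fits the incremental approach described next.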
For incremental backups, it depends if data is being added or data is changing. For analytics, it's normally the former. The approach I used is to export the analytic data once a day, then copy it into Redshift in an incremental fashion.
Here are some related resources, although in the end I did not use them:
Spotify has an open-source project called Luigi - its code claims to upload JSON to Redshift, but I haven't used it so I don't know if it works.
Amiato have a web page that says they offer a commercial solution for loading JSON data into Redshift - but there is not much information beyond that.
This blog post discusses performing ETL on JSON datasources such as Mixpanel into Redshift.
Related Reddit question
Blog post about dealing with JSON arrays in Redshift
Honestly, I'd recommend using a third party here. I've used Panoply (panoply.io) and would recommend it. It'll take your Mongo collections and flatten them into their own tables in Redshift.
AWS Database Migration Service (DMS) has added support for MongoDB and Amazon DynamoDB, so I think from now on the best option to migrate from MongoDB to Redshift is DMS.
MongoDB versions 2.6.x and 3.x as a database source
Document Mode and Table Mode supported
Supports change data capture (CDC)
Details - http://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.MongoDB.html
A few questions whose answers would be helpful to know:
Is this an add-only, ever-growing incremental sync, i.e. is data only being added and never updated/removed, or is your Redshift instance only interested in additions anyway?
Is data inconsistency acceptable when deletes/updates happen at the source and are not fed to the Redshift instance?
Does it need to be a daily incremental batch, or can it also be real-time, as it happens?
Depending on your situation, maybe mongoexport works for you, but you have to understand its shortcomings, which are described at http://docs.mongodb.org/manual/reference/program/mongoexport/.
I had to tackle the same issue (not on a daily basis though).
As ask mentioned, you can use mongoexport to export the data, but keep in mind that Redshift doesn't support arrays, so if your collection data contains arrays you'll find it a bit problematic.
My solution was to pipe the mongoexport output into a small utility program I wrote that transforms the mongoexport JSON rows into my desired CSV output.
Piping the output also allows you to make the process parallel.
mongoexport allows you to add a MongoDB query to the command, so if your collection data supports it, you can spawn N different mongoexport processes, pipe their results into the other program, and decrease the total runtime of the migration.
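A rough sketch of such a pipe-through utility, assuming newline-delimited mongoexport output (i.e. without --jsonArray) and a fixed set of fields to flatten; the collection and field names are invented for the example:

```python
#!/usr/bin/env python3
"""Read mongoexport JSON documents from stdin, write flattened CSV to stdout.

Example usage as one of N parallel shards, split by a query:
  mongoexport --db mydb --collection events --query '{"day": 1}' | python flatten.py > events_1.csv
"""
import csv
import json
import sys

# Columns to extract; nested values and arrays are serialized explicitly
# because Redshift has no array type. These names are illustrative only.
COLUMNS = ["_id", "user_id", "event", "value", "tags"]

writer = csv.writer(sys.stdout)
writer.writerow(COLUMNS)

for line in sys.stdin:
    line = line.strip()
    if not line:
        continue
    doc = json.loads(line)
    row = []
    for col in COLUMNS:
        val = doc.get(col)
        if isinstance(val, dict) and "$oid" in val:   # mongoexport extended-JSON ObjectId
            val = val["$oid"]
        elif isinstance(val, list):                   # arrays -> pipe-joined string
            val = "|".join(str(v) for v in val)
        row.append(val)
    writer.writerow(row)
```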
Later on, I uploaded the files to S3, and performed a COPY into the relevant table.
This should be a pretty easy solution.
Stitch Data is the best tool I've ever seen for replicating incrementally from MongoDB to Redshift within a few clicks and minutes.
It automatically and dynamically detects DML and DDL changes in tables for replication.

Transferring data from Lotus Notes to DB2 using an agent

Before you say anything, I searched a lot but didn't find how to do this.
I have a database in .NSF format for use in Lotus Notes. I need to write an agent (I know how) so that data from that database is automatically transferred to a DB2 database.
So before I create the DB2 tables, how do I know which structure I need to use? How do I check exactly how the data in that .NSF file is stored?
Thanks
Notes documents are unstructured, there's no guarantee that any two documents in a database have the same structure. You will need to decide what data you want to transfer to a relational table, then check each document to see if it contains the corresponding fields (items). You didn't mention what language you're planning to use for your agent; in Java you would use NotesDocument.getItems() to enumerate all items in a document.
As mustaccio also said, since Notes/Domino is a NoSQL database, you don't have a schema.
You should talk to the developer of the application and get an understanding of what data is located where.
You could of course use the Design Synopsis function in Domino Designer to export the actual design, but documents can potentially contain data that does not show up in the design.
If you want to export the documents as XML, I have a tool I wrote available here: http://www.texasswede.com/home.nsf/Page/Notes%20XML%20Exporter
You can export all the documents and then look at the XML to see what data you have.

How to insert data coming from a web service into an SQL database on iPhone?

I am developing an app where I need to insert data coming from a web service into an sqlite3 database. The web service returns XML data with 5 tags. Now, after XML parsing, how do I insert the parsed data into the SQL database?
How can I code this?
Thanks in advance..
Inserting XML data into an sqlite database is no different than inserting any other data. So there are 3 parts to the problem you are trying to solve:
Calling the web service and getting the data
Parsing the data and populating some object
Inserting that data into the sqlite database, which has columns matching your object structure
NSURLConnection and NSXMLParser are the classes you need to look at for solving the first 2 problems. The third would be solved using the sqlite library. Without more information about your object structure it is difficult to suggest anything else, but you should find enough documentation on using sqlite if you search around.