Using RESTful Oracle APEX - MongoDB

I am building a mobile app on the Appery.io platform, which uses a MongoDB-based database. I need to link this DB to an Oracle database and use APEX to design an interface so that users can query and update the mobile app DB from Oracle, and the Oracle DB can likewise be updated from the mobile app.
In APEX, I use the following URI with the GET method:
https://api.appery.io/rest/1/db/collections/Outlet_Details/
And I add the header:
X-Appery-Database-Id
When I run the query in APEX and supply the Database-Id, APEX shows the Outlet_Details table/collection in JSON format. However, not the entire table is shown, I think because of the length of the CLOB value.
Now my main problem: I need to query this Outlet_Details table/collection by a column named _id. So when I use the following URI:
https://api.appery.io/rest/1/db/collections/Outlet_Details/1234
It returns the specific record that has _id = 1234. However, I do not want to hardcode the value. Instead, I need something more like a where condition, so that I can query on any column value (e.g. userId instead of _id). The equivalent curl command is as follows:
curl -X GET \
-H "X-Appery-Database-Id: 544a5cdfe4b03d005b6233b9" \
-G --data-urlencode 'where={"userId": "1234"}' \
https://api.appery.io/rest/1/db/collections/outlet_details/
My problem is how to express such a request in APEX, especially the where part.
In this tutorial, an Oracle database is used, so writing a where condition such as =:DEP and binding it to a variable is pretty straightforward. However, I need to replicate that tutorial against my MongoDB backend.
The other question, which I guess would clarify a lot for me: in the aforementioned tutorial there is a prefix URI, which by default is the APEX schema URI. Even when I enter a different URI template, the resulting URI appends mine to the APEX one. How can I build a service there using a different URI?

I found that APEX takes the where condition as an encoded parameter in the URL, something like:
https://api.appery.io/rest/1/db/collections/Outlet_Details?where=%7B%22Oracle_Flag%22%3A%22Y%22%7D
The header is the same and there are no input parameters.
This can be done from Application Builder > New Application > Database > Create Application > Shared Components > Create > REST, and then entering the header, URL, etc.
You can refer to this link as a reference for the encoded URL.
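If you need to make the same call directly from PL/SQL (for example, to test the encoded where clause before wiring up the REST source), here is a minimal sketch using APEX_WEB_SERVICE. The header value, collection name, and Oracle_Flag filter are taken from the question above; everything else is an assumption, not the definitive setup:
DECLARE
  l_where    VARCHAR2(200);
  l_response CLOB;
BEGIN
  -- URL-encode the MongoDB where clause ({"Oracle_Flag":"Y"} is just an example filter)
  l_where := utl_url.escape('{"Oracle_Flag":"Y"}', escape_reserved_chars => TRUE);

  -- same header as in the curl command
  apex_web_service.g_request_headers(1).name  := 'X-Appery-Database-Id';
  apex_web_service.g_request_headers(1).value := '544a5cdfe4b03d005b6233b9';

  l_response := apex_web_service.make_rest_request(
                  p_url         => 'https://api.appery.io/rest/1/db/collections/Outlet_Details'
                                   || '?where=' || l_where,
                  p_http_method => 'GET');

  -- print the first part of the JSON response
  dbms_output.put_line(dbms_lob.substr(l_response, 4000, 1));
END;
/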

Related

Azure Data Factory - custom mapping for REST service

So, I am creating a Copy activity that reads from a SQL Server table and has to send the data to an API endpoint with a PATCH request.
The API provider specified that the body must be in the form of:
{"updates":[{"key1":"value1","key2":"value2","key3":"value3"},
{"key1":"value1","key2":"value2","key3":"value3"},
...
{"key1":"value1","key2":"value2","key3":"value3"}]}
However, my SQL table maps to JSON this way (without the 'updates' wrapper):
[{"key1":"value1","key2":"value2","key3":"value3"},
{"key1":"value1","key2":"value2","key3":"value3"},
...
{"key1":"value1","key2":"value2","key3":"value3"}]
I use the Copy activity with the sink dataset being of type REST.
How can we modify the mapping so that the schema gets wrapped in an "updates" object?
Using the Copy data activity, there might not be any way to wrap the data (an array of objects) in an updates key.
To do this, I have used a Lookup activity to get the data, a Set variable activity to wrap the data in an object with an updates key, and finally a Web activity with the PATCH method and the above variable value as the body.
I have taken some sample data in my SQL Server table.
Use a Lookup activity to select the data from this table using the table or query option (I used the query option); its debug output is an array of objects under output.value.
NOTE: If your data is not the same as in the sample table I have taken, try using the query option so that the output has that shape.
In the Set variable activity, I have used an array variable and the following dynamic content to wrap the above array of objects in an updates key (concat builds the JSON text, json() parses it, and array() wraps it so it can be stored in an array variable):
@array(json(concat('{"updates":',string(activity('Lookup1').output.value),'}')))
Now in the Web activity, choose all the necessary settings (PATCH method, authorization, headers, URL, etc.) and give the body as follows (I used a fake REST API as a demo):
@variables('tp')[0]
Since I am using the fake REST API, the activity succeeds, and checking the Web activity's debug input shows the body that is being passed to the REST API.

How to manage BigQuery tables after a Firestore backfill

I am interested in learning how to manage BigQuery after Firestore backfills.
First off, I use the firebase/firestore-bigquery-export@0.1.22 extension with a table named 'n'. After creating this table, two tables are generated: n_raw_changelog and n_raw_latest.
Can I delete either of the tables, and why are the names generated automatically?
Then I ran a backfill, because the collection existed before the BigQuery table, using:
npx @firebaseextensions/fs-bq-import-collection \
--non-interactive \
--project blah \
--source-collection-path users \
--dataset n_raw_latest \
--table-name-prefix pre \
--batch-size 300 \
--query-collection-group true
And now the script adds two more tables with extra suffixes, i.e. n_raw_latest_raw_latest and n_raw_latest_raw_changelog.
Am I supposed to send these records to the previous tables and delete them post-backfill?
Any pointers - did I use incorrect naming conventions?
As shown in this tutorial, those two tables are part of the dataset generated by the extension.
For example, suppose we have a collection in Firestore called orders.
When we install the extension, the configuration panel asks for, among other settings, a Dataset ID and a table name prefix.
Then, as soon as we create the first document in the collection, the extension creates the firebase_orders dataset in BigQuery with two resources:
A table of raw data that stores a full change history of the documents within the collection... Note that the table is named orders_raw_changelog using the prefix we configured before.
A view, named orders_raw_latest, which represents the current state of the data within the collection.
So, these are generated by the extension.
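As a quick check (a sketch only, using the example names from this answer; the changelog columns follow the extension's schema), you can query both resources directly in BigQuery:
-- current state of each document in the collection
SELECT *
FROM `firebase_orders.orders_raw_latest`
LIMIT 10;

-- how many change events have been captured per document
SELECT document_name, COUNT(*) AS change_events
FROM `firebase_orders.orders_raw_changelog`
GROUP BY document_name;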
From the command you posted in your question, I see that you used the fs-bq-import-collection script with the --non-interactive flag and passed the --dataset parameter with the value n_raw_latest.
The --dataset parameter corresponds to the Dataset ID parameter shown in the configuration panel above. Therefore, you are creating a new dataset named n_raw_latest, which will contain the n_raw_latest_raw_changelog table and the n_raw_latest_raw_latest view. In effect, you are creating a new dataset from your current records, rather than updating the dataset created for your extension instance.
To avoid this, as stated in the documentation, you must use the same Dataset ID that you set when configuring the extension:
${DATASET_ID}: the ID that you specified for your dataset during extension installation
See also:
Automated Firestore Replication to BigQuery
Stream Collections to BigQuery - GitHub
Import existing documents - GitHub

SAP CDS OData URL in ADF

I am new to Azure Data Factory (ADF) and am trying to create a dataset from an OData source. The only problem is that the OData URL was developed in SAP CDS and so has custom query options, as shown below:
"http://XXXXXXX/ZC_XXX_TU_SR_ACTIVITY_CDS/ZC_XXX_TU_SR_Activity(p_warehouse='E065',p_from=datetimeoffset'2021-06-01T00:01:01',p_to=datetimeoffset'2021-08-11T23:01:01')/Set"
When choosing the path, I expect only one path in the options but I get two - ZC_XXX_TU_SR_Activity and ZC_XXX_TU_SR_ActivitySet - so I am unsure which one to use, even though I have tried both.
When writing the query, I have tried:
?(p_warehouse='E065',p_from=datetimeoffset'2021-06-01T00:01:01',p_to=datetimeoffset'2021-08-11T23:01:01')/Set
?(p_warehouse='E065'&p_from=datetimeoffset'2021-06-01T00:01:01'&p_to=datetimeoffset'2021-08-11T23:01:01')/Set
?(p_warehouse=%27E065%27&p_from=datetimeoffset%272021-06-01T00:01:01%27&p_to=datetimeoffset%272021-08-11T23:01:01%27)/Set
I have also tried all three options without the '?', the '()' and the '/Set', but I am still getting errors.
I get this error:
"query (p_warehouse='E065',p_from=datetimeoffset'2021-06-01T00:01:01',p_to=datetimeoffset'2021-08-11T23:01:01')/Set failed with status code InternalServerError and message SY/530An exception was raised."
I have run out of ideas now and don't know what else to do. Please help. Thanks!
Note: The OData connector copies data from the combined URL: [URL specified in linked service]/[path specified in dataset]?[query specified in copy activity source].
Here, I could see that you have the root path as http://XXXXXXX/ZC_XXX_TU_SR_ACTIVITY_CDS and the resource path as ZC_XXX_TU_SR_Activity or ZC_XXX_TU_SR_ActivitySet.
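To make that note concrete with the values from your question (this split is only an illustration of how the connector assembles the request, not a verified configuration for your SAP system):
Linked service URL: http://XXXXXXX/ZC_XXX_TU_SR_ACTIVITY_CDS
Dataset path: ZC_XXX_TU_SR_Activity(p_warehouse='E065',p_from=datetimeoffset'2021-06-01T00:01:01',p_to=datetimeoffset'2021-08-11T23:01:01')/Set
Copy activity query: left empty, because the parameters above belong to the resource path rather than to a query string
Combined URL: http://XXXXXXX/ZC_XXX_TU_SR_ACTIVITY_CDS/ZC_XXX_TU_SR_Activity(p_warehouse='E065',p_from=datetimeoffset'2021-06-01T00:01:01',p_to=datetimeoffset'2021-08-11T23:01:01')/Set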
So, there is an issue with passing these parameters as the query:
System Query Option:
System Query Options are query string options that a client may use to
alter the amount and order of data returned by an OData service for
the URL-identified resource. All System Query Options have a “$”
character before their names.
Custom Query Option:
Because the character "$" is reserved for system query options, custom
query options MUST NOT begin with it. A custom query option can start
with the “#” character, however this can cause custom query options to
clash with function parameter values supplied via Parameter Aliases.
A URL can, for example, pass a 'securitytoken' through a custom query option.
For more information, see: URL Conventions (OData Version 3.0)

Is there any way to get table metadata using PostgREST?

I need to get table metadata (primary key, column types, etc.) using PostgREST. By requesting the root path / of my PostgREST app I get JSON that contains all the needed data in the definitions object.
Unfortunately, there is no dedicated endpoint to get it, and there is no information about that in the documentation.
I have tried to execute the following endpoints:
/table/parameters
/table/definitions
/table/schema
All return a 404 error code.
Is there any way to get metadata?
You'd have to write a view or function and call it. In that SQL, return the metadata from whatever you need in your db.
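A minimal sketch of the view approach, assuming the schema PostgREST exposes is named api (substitute your configured db-schema); it reports column types and primary-key flags from information_schema:
create view api.table_metadata as
select
    c.table_name,
    c.column_name,
    c.data_type,
    (pk.column_name is not null) as is_primary_key   -- true for primary key columns
from information_schema.columns c
left join (
    -- primary key columns per table
    select tc.table_schema, tc.table_name, kcu.column_name
    from information_schema.table_constraints tc
    join information_schema.key_column_usage kcu
      on kcu.constraint_name = tc.constraint_name
     and kcu.table_schema    = tc.table_schema
     and kcu.table_name      = tc.table_name
    where tc.constraint_type = 'PRIMARY KEY'
) pk
  on pk.table_schema = c.table_schema
 and pk.table_name   = c.table_name
 and pk.column_name  = c.column_name
where c.table_schema = 'api';   -- 'api' is an assumption: use the schema you actually expose
PostgREST then exposes the view like any table, so a request such as GET /table_metadata?table_name=eq.mytable (mytable being a placeholder) returns the metadata as JSON.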

How to access Library, File, and Field descriptions in DB2?

I would like to write a query against the IBM DB2 system catalog (e.g. SYSIBM) that exports the following:
LIBRARY_NAME, LIBRARY_DESC, FILE_NAME, FILE_DESC, FIELD_NAME, FIELD_DESC
I can access the descriptions via the UI, but wanted to generate a dynamic query.
Thanks.
Along with SYSTABLES and SYSCOLUMNS, there is also SYSSCHEMAS, which appears to contain the data you need. Please note that accessing this information through QSYS2 will restrict the rows returned to those objects to which you have some access - the SYSIBM schema appears to disregard this (check the reference - for V6R1 it's about page 1267).
You also shouldn't need to retrieve this with a dynamic query - static SQL with host variables (if necessary) will work just fine.
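A rough sketch of such a query on DB2 for i, assuming the QSYS2 catalog views and their *_TEXT description columns (exact column names can vary by release, so treat this as a starting point rather than a definitive query):
SELECT s.SCHEMA_NAME AS LIBRARY_NAME,
       s.SCHEMA_TEXT AS LIBRARY_DESC,    -- library text description
       t.TABLE_NAME  AS FILE_NAME,
       t.TABLE_TEXT  AS FILE_DESC,       -- file/object text description
       c.COLUMN_NAME AS FIELD_NAME,
       c.COLUMN_TEXT AS FIELD_DESC       -- field text description
FROM QSYS2.SYSSCHEMAS s
JOIN QSYS2.SYSTABLES t
  ON t.TABLE_SCHEMA = s.SCHEMA_NAME
JOIN QSYS2.SYSCOLUMNS c
  ON c.TABLE_SCHEMA = t.TABLE_SCHEMA
 AND c.TABLE_NAME   = t.TABLE_NAME
WHERE s.SCHEMA_NAME = 'MYLIB'            -- 'MYLIB' is a placeholder; remove the filter to list everything you can access
ORDER BY t.TABLE_NAME, c.COLUMN_NAME;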