How to export OrientDB (2.2.x) schema with REST API?

I am trying to export an OrientDB (2.2.x) database using the HTTP API.
HTTP GET request to http://localhost:2480/export/demo2
I need to export the database schema along with the data in it. Any advice on how to do this would be appreciated. Thanks in advance.

You cannot export a database using the REST API; however, you can return its schema by calling this URL (don't forget to authenticate):
http://<server>:<port>/query/YourDatabaseName/sql/select expand(classes) from metadata:schema
Here are the docs:
OrientDB | Querying the schema
OrientDB | REST API - Query
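For example, a minimal Python sketch of that call (the demo2 database name and root credentials are assumptions from the question; substitute your own):
import requests
from urllib.parse import quote

# SQL that lists all classes in the schema
sql = "select expand(classes) from metadata:schema"
# demo2 and root/root are assumptions -- use your database name and credentials
url = "http://localhost:2480/query/demo2/sql/" + quote(sql)
resp = requests.get(url, auth=("root", "root"))
resp.raise_for_status()
for cls in resp.json()["result"]:
    print(cls.get("name"))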

Related

Purview API - Extracting all server guids from collections?

I'm currently building a pipeline using the Purview Atlas API to extract server GUIDs, and from there extract database GUIDs, and so on down to the column level. But I don't see any API endpoint that allows me to extract all possible server GUIDs from a collection. Is this possible?
This is not possible at the moment.
Could you please check this document to get a list of GUIDs: Discovery - Query - REST API (Azure Purview) | Microsoft Docs
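A rough Python sketch of calling that Discovery Query endpoint (the account name, API version, token acquisition, and response fields are assumptions -- verify them against the linked reference):
import requests

account = "your-purview-account"                    # assumption
token = "<AAD bearer token for Purview>"            # acquire via azure-identity, for example
url = (f"https://{account}.purview.azure.com"
       "/catalog/api/search/query?api-version=2021-05-01-preview")
body = {"keywords": "*", "limit": 50}  # broad search; add filters per the docs
resp = requests.post(url, json=body, headers={"Authorization": f"Bearer {token}"})
resp.raise_for_status()
for hit in resp.json().get("value", []):
    print(hit.get("id"), hit.get("entityType"), hit.get("name"))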
I am not sure if this could solve your problem, but here is what I do:
In my case, I need to extract the dictionary from Purview. To do that, I do the following:
pv glossary read
From that, I get a JSON with all the GUIDs. Then:
pv glossary readDetailed --glossaryGuid=<guid found above>
I will just leave it here as a response in case you can get anything from this.
Thank you

How to find the database id of a cloud firestore project?

I'm trying to use the Cloud Firestore REST API, but can't seem to find the project id.
Firestore's REST API is still in beta; we can't generate our own database ids as of yet.
We have to use the default database id which is currently the following (glaringly literal) string:
(default)
And yes, you have to include the parentheses.
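So a read against the REST API looks something like this (a Python sketch; the project id and the cities collection are placeholders, and the v1beta1 path follows the Firestore REST docs):
import requests

project_id = "your-gcp-project-id"  # placeholder
# Note the literal "(default)" database id, parentheses included
url = (f"https://firestore.googleapis.com/v1beta1/projects/{project_id}"
       "/databases/(default)/documents/cities")  # "cities" is a placeholder collection
# For secured databases, add an OAuth2 bearer token header
resp = requests.get(url)
print(resp.json())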

Is there a REST API for DashDB to export the content of a database in JSON format?

I need to re-create a database from one DashDB Bluemix service in another, and I need to automate this procedure in bash scripts.
The best I can think of is a DashDB REST API that allows me to export the content of the entire database into JSON format (or any other format you can think of), and a corresponding API that allows me to re-import the content into a different database on the same service or on a different service, possibly in a different Bluemix space. Thanks.
I assume you want to do a one-time move and this is not about continuous replication. In that case, simply sign up on http://datascience.ibm.com, navigate to DataWorks, select "Load Data" from the navigation panel (open it by clicking top left) and then select Cloud Database as the source type.
DataWorks load data from dashDB to dashDB
If, however, you would still prefer to write your own app or script that does the data movement and you want a REST API to export JSON data, then I recommend writing a simple R script that reads the data from a table (using ibmdbR) and writes it to stdout, deploying the script into dashDB (POST /home), and running the R script from your app/script by calling the /rscript endpoint: https://developer.ibm.com/clouddataservices/wp-content/themes/projectnext-clouddata/dashDB/#/
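A rough Python sketch of that flow (the base path, upload form, and /rscript parameters are all assumptions taken from the linked reference -- verify them there):
import requests

base = "https://<dashdb-host>:8443/dashdb-api"  # assumed base path
auth = ("<USERID>", "<PASSWORD>")

# 1. Upload an R script (reads a table via ibmdbR, writes it to stdout) to the home dir
with open("export_table.R", "rb") as f:
    requests.post(base + "/home", files={"file": f}, auth=auth).raise_for_status()

# 2. Run the uploaded script and capture its stdout as the exported data
resp = requests.post(base + "/rscript", json={"filename": "export_table.R"}, auth=auth)
resp.raise_for_status()
print(resp.text)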
For Db2 on Cloud and Db2 Warehouse on Cloud, there is a REST API available that allows you to export data from a table in CSV format (up to 100,000 rows) and then load the data back. It requires a few requests, such as (a sketch of the first steps follows the list):
POST /auth/tokens
GET /sql_query_export
POST /home_content/{path}
POST /load_jobs
GET /load_jobs/{id}
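A Python sketch of the first two steps (the request/response field names such as userid, password, token, and command are assumptions -- check the Db2 on Cloud API reference):
import requests

base = "https://<SOURCE_HOSTNAME>/dbapi/v3"

# 1. Authenticate and grab a bearer token
resp = requests.post(base + "/auth/tokens",
                     json={"userid": "<USERID>", "password": "<PASSWORD>"})
resp.raise_for_status()
headers = {"Authorization": "Bearer " + resp.json()["token"]}

# 2. Export a statement result as CSV (up to 100,000 rows)
resp = requests.get(base + "/sql_query_export",
                    params={"command": "SELECT * FROM SRC_SCHEMA.SRC_TABLE"},
                    headers=headers)
resp.raise_for_status()
with open("export.csv", "wb") as f:
    f.write(resp.content)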
I've implemented a client npm module for this API, db2-rest-client, and you can export a statement result to a JSON file as follows:
export DB_USERID='<USERID>';export DB_PASSWORD='<PASSWORD>';export DB_URI='https://<SOURCE_HOSTNAME>/dbapi/v3'
db2-rest-client query --query="SELECT * FROM SRC_SCHEMA.SRC_TABLE" > test.json
Then you can transform that data into a .csv file and use the load job:
export DB_USERID='<USERID>';export DB_PASSWORD='<PASSWORD>';export DB_URI='https://<TARGET_HOSTNAME>/dbapi/v3'
db2-rest-client load --file=transformed.csv --table='DEST_TABLE' --schema='DEST_SCHEMA'

Loading data into Titan

I am currently running Titan Server (0.4) [via bin/titan.sh -c cassandra-es start] and loading the sample data using rexster-console:
rexster[groovy]> g = rexster.getGraph("graph")
rexster[groovy]> GraphOfTheGodsFactory.load(g)
How can I do the same thing above using a RexsterClient in Java? Essentially, is it possible to get access to the graph without having to embed all this in client.execute()?
Thanks for your help.
Once you've created the graph you can access it with RexsterClient. You shouldn't need to recreate it, as the data is already in Cassandra. Just specify the graph name when constructing your RexsterClient instance (in the case of Titan Server, the graph name is just "graph"):
// Connect to the Rexster embedded in Titan Server; "graph" is the graph name
RexsterClient client = RexsterClientFactory.open("localhost", "graph");
// Run a Gremlin script remotely and get the results back as maps
List<Map<String, Object>> results = client.execute("g.v(4).map");
That will initialize "g" and allow you to just issue some Gremlin against the Graph of the Gods sample data set. You can read more about the options for RexsterClient here.

What is the best practice to handle Multitenant security in Breeze?

I'm developing an Azure application using this stack:
(Client) Angular/Breeze
(Server) Web API/Breeze Server/Entity Framework/SQL Server
With every request I want to ensure that the user actually has the authorization to execute that action using server-side code. My question is how to best implement this within the Breeze/Web API context.
Is the best strategy to:
Modify the Web API Controller and try to analyze the contents of the Breeze request before passing it further down the chain?
Modify the EFContextProvider and add an authorization test to every method exposed?
Move the security all into the database layer and make sure that a User GUID and Tenant GUID are required parameters for every query and only return relevant data?
Some other solution, or some combination of the above?
If you are using SQL Azure, then one option is to use SQL Azure Federations to do exactly that.
In very simplistic terms: if you have a TenantId column in a table that stores data from multiple tenants, then before you execute a query like SELECT Col1 FROM Table1, you execute a USE FEDERATION ... statement to restrict the query results to a particular TenantId only, and you don't need to add WHERE TenantId = @TenantId to your query.
USE FEDERATION example: http://msdn.microsoft.com/en-us/library/windowsazure/hh597471.aspx
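For illustration, a connection-level sketch in Python with pyodbc (the federation name, distribution key, and connection details are all hypothetical; check the clause syntax against the linked USE FEDERATION reference):
import pyodbc

# Hypothetical federation "Orders_Federation" distributed on TenantId
conn = pyodbc.connect("DRIVER={ODBC Driver 17 for SQL Server};"
                      "SERVER=<server>.database.windows.net;DATABASE=<db>;"
                      "UID=<user>;PWD=<password>")
cursor = conn.cursor()

# Route this connection to the federation member holding tenant 42; with
# FILTERING = ON, subsequent queries are automatically scoped to that tenant
cursor.execute("USE FEDERATION Orders_Federation (TenantId = 42) "
               "WITH RESET, FILTERING = ON")

# No WHERE TenantId = @TenantId needed -- the federation filter applies it
for row in cursor.execute("SELECT Col1 FROM Table1"):
    print(row.Col1)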
Note that using SQL Azure Federations comes with many strings attached when it comes to building a DB schema; one of the best blogs I have found about this is http://blogs.msdn.com/b/cbiyikoglu/archive/2011/04/16/schema-constraints-to-consider-with-federations-in-sql-azure.aspx.