The question is: is there any way to export the OrientDB database structure in a command-style format, like:
create database <name>
create class <name>
create property <name>...
etc.
Thanks,
Denis
There is an export schema command available in the OrientDB console; it produces something like:
...
"schema":{
"version":54,
"classes":[
{
"name":"YourClassName",
"default-cluster-id":9,
"cluster-ids":[
9
],
"properties":[
{
"name":"f1",
"type":"STRING"
},
{
"name":"f2",
"type":"STRING"
},
{
"name":"f3",
"type":"STRING"
}
]
},
...
The output is JSON, so you can write a script to transform it into whatever you want.
More generally: export database FILENAME exports the whole database in a (special) JSON format (special because the ordering of keys matters for it to be readable by OrientDB).
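As a sketch of such a transformation, here is a small Node.js function that turns the exported schema JSON into console-style commands. The input shape follows the export snippet above; the sample class and property names are illustrative only:

```javascript
// Sketch: convert the JSON produced by `export schema` into console-style
// commands (create class / create property). Only the fields shown in the
// export snippet above are assumed to exist.
function schemaToCommands(schema) {
  const commands = [];
  for (const cls of schema.classes || []) {
    commands.push(`create class ${cls.name}`);
    for (const prop of cls.properties || []) {
      commands.push(`create property ${cls.name}.${prop.name} ${prop.type}`);
    }
  }
  return commands;
}

// Example input, matching the exported structure shown above.
const schema = {
  classes: [
    {
      name: 'YourClassName',
      properties: [
        { name: 'f1', type: 'STRING' },
        { name: 'f2', type: 'STRING' }
      ]
    }
  ]
};
console.log(schemaToCommands(schema).join('\n'));
```

You would feed it the "schema" object from the export file and pipe the resulting lines back into the console.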
I have tried to read a JSON file using a Copy Activity and write the data into SQL Server.
My JSON file is available in blob storage.
I have set the file format to JSON format.
When I try to import the schema, I get this error:
Error occurred when deserializing source JSON data. Please check if the data is in valid JSON object format.. Activity ID:2f799221-f037-4f72-8e6c-385778929110
My JSON data:
{
  "id": "ed0e4960-d9c5-11e6-85dc-d7996816aad3",
  "context": {
    "device": {
      "type": "PC"
    },
    "custom": {
      "dimensions": [
        {
          "TargetResourceType": "Microsoft.Compute/virtualMachines"
        },
        {
          "ResourceManagementProcessRunId": "827f8aaa-ab72-437c-ba48-d8917a7336a3"
        },
        {
          "OccurrenceTime": "1/13/2017 11:24:37 AM"
        }
      ]
    }
  }
}
Regards,
Manish
Based on your description and your sample source data, you can import the schema directly; however, the columns are nested.
If you want to flatten the nested JSON before you store the rows in a SQL Server database, you could execute an Azure Function Activity before the Copy Activity.
Or you could execute a stored procedure in the SQL Server dataset.
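As a sketch of what that flattening step could look like (the function name is illustrative; the field names follow the sample payload above):

```javascript
// Sketch: flatten the nested event into a single flat row before it is
// written to SQL Server. Field names follow the sample payload above.
function flattenEvent(event) {
  const row = {
    id: event.id,
    deviceType: event.context.device.type
  };
  // Each entry in custom.dimensions is a single-key object; merge them in.
  for (const dim of event.context.custom.dimensions) {
    Object.assign(row, dim);
  }
  return row;
}

// Example input, matching the sample payload above (trimmed to two dimensions).
const event = {
  id: 'ed0e4960-d9c5-11e6-85dc-d7996816aad3',
  context: {
    device: { type: 'PC' },
    custom: {
      dimensions: [
        { TargetResourceType: 'Microsoft.Compute/virtualMachines' },
        { OccurrenceTime: '1/13/2017 11:24:37 AM' }
      ]
    }
  }
};
console.log(flattenEvent(event));
```

The flat object maps naturally onto one SQL Server row, with one column per key.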
I'd like to query a single JSON file (one that is not an array) from Gatsby's GraphQL layer, but I don't really know how to do it.
As far as I understand the gatsby-transformer-json docs, it only supports loading arrays and making them accessible via the allFileNameJson schema.
My gatsby-config plugins (only the necessary ones for this question):
{
  resolve: 'gatsby-source-filesystem',
  options: {
    name: 'data',
    path: `${__dirname}/src/data`
  }
},
'gatsby-transformer-json'
And then, let's say, in src/data I have a something.json file like this:
{
  "key": "value"
}
Now I'd like to query the data from the something.json file, but there is no somethingJson schema I can query (I tried with Gatsby's GraphiQL).
Could someone point out what I am doing wrong, or how I can solve this problem?
Ok, so it is possible to query single-object files as long as they have a parent (folder).
Let's take these parameters:
gatsby-source-filesystem configured to src/data
test.json file positioned at src/data/test.json with { "key": "value" } content in it
Now as the test.json file actually has a parent (data folder) - you can query the fields from test.json like this:
{
  dataJson {
    key
  }
}
But putting those files directly in your data folder is bad practice, because when you store another JSON file, e.g. secondtest.json with { "key2": "value2" } content, and query it with the same query as above, you will get data from only a single node (I am not sure whether it takes the first or the last node encountered).
So the cleanest solution for this case is to store your single-object JSON files in separate folders, with just one JSON file per folder.
For example you have some "About Me" data:
create an about folder in your src/data
create an index.json file with e.g. { "name": "John" }
query your data
Like this:
{
  aboutJson {
    name
  }
}
That's it.
I have a MongoDB collection named "name.types". When I create a model for the collection in LoopBack, I cannot enter the model name with the "." because it says special characters are not allowed. So I created the model as "name_types". Now how can I connect this model to the collection "name.types"? Any help would be appreciated. Thanks!
You can set the collection name in the model.json file:
//model.json
...
"options": {
  "validateUpsert": true,
  "mongodb": {
    "collection": "name.types"
  }
},
...
You can define a different collection name for your existing model by passing an option in the model definition, something like this:
Post = db.define('Post', {
  title: { type: String },
  content: { type: String }
},
{
  mongodb: {
    collection: 'PostCollection' // custom collection name
  }
});
You can do it from the model.json file or from a boot script.
Good Luck.. :)
I have a bunch of files (~10 GB each) where each line represents a single JSON object. I want to import them in streaming mode, but it looks like that is not supported right now (OrientDB v2.2.12). Are there any workarounds? And what is the recommended way to handle this case?
It looks like the JSON can be transformed into an ODocument in a CODE block:
{
  "code": {
    "language": "Javascript",
    "code": "(new com.orientechnologies.orient.core.record.impl.ODocument()).fromJSON(input);"
  }
}
If you experience errors like:
Error in Pipeline execution:
com.orientechnologies.orient.core.exception.OSerializationException:
Found invalid } character at position 112 of text
then just make sure the multiLine option is set to false:
"extractor": {
  "row": {
    "multiLine": false
  }
}
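Putting the pieces together, a minimal ETL configuration along these lines should work for line-delimited JSON; the file path and database URL below are placeholders, and you should adjust the loader options to your setup:

```
{
  "source": { "file": { "path": "/tmp/data.json" } },
  "extractor": { "row": { "multiLine": false } },
  "transformers": [
    { "code": {
        "language": "Javascript",
        "code": "(new com.orientechnologies.orient.core.record.impl.ODocument()).fromJSON(input);"
    } }
  ],
  "loader": {
    "orientdb": {
      "dbURL": "plocal:/tmp/mydb",
      "dbType": "document"
    }
  }
}
```

With the row extractor reading one line at a time, the input never needs to fit in memory, which is what you want for ~10 GB files.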
I am using Chef version 10.16.2.
I have a role (in Ruby format), and I need to access an attribute set in one of the cookbooks, e.g.:
name "basebox"
description "A basic box with some packages, ruby and rbenv installed"
deployers = node['users']['names'].find {|k,v| v['role'] == "deploy" }
override_attributes {
  {"rbenv" => {
    "group_users" => deployers
  }}
}
run_list [
  "recipe[users]",
  "recipe[packages]",
  "recipe[nginx]",
  "recipe[ruby]"
]
I am using chef-solo, so I cannot use search as described at http://wiki.opscode.com/display/chef/Search#Search-FindNodeswithaRoleintheExpandedRunList
How do I access node attributes in a role definition?
Roles are JSON data.
That is, when you upload a role Ruby file to the server with knife, it is converted to JSON. Consider this role:
name "gaming-system"
description "Systems used for gaming"
run_list(
  "recipe[steam::installer]",
  "recipe[teamspeak3::client]"
)
When I upload it with knife role from file gaming-system.rb, I have this on the server:
{
  "name": "gaming-system",
  "description": "Systems used for gaming",
  "json_class": "Chef::Role",
  "default_attributes": {
  },
  "override_attributes": {
  },
  "chef_type": "role",
  "run_list": [
    "recipe[steam::installer]",
    "recipe[teamspeak3::client]"
  ],
  "env_run_lists": {
  }
}
The reason for the Ruby DSL is that it is "nicer" or "easier" to write than JSON. Compare the lines and syntax, and it's easy to see which is preferable for new users (who may not be familiar with JSON).
That data is consumed through the API. If you need to do any logic with attributes on your node, do it in a recipe.
I'm not sure I 100% follow, but if you want to access, from a recipe, an attribute which is set by a role, you just reference it like any other node attribute. For example, in the case you presented, assuming the node has the basebox role in its run_list, you would just call:
node['rbenv']['group_users']
The role attributes are merged into the node.
HTH