How to "Import Queries" in the Web Interface of ArangoDB? - import

"Export Queries" does not work in the Query tab of the Web Interface, so I tried to manually create json with the query and import it, but the following definition is not clear:
Format:
JSON documents embedded into a list:
[{
  "name": "Query Name",
  "value": "Query Definition",
  "parameter": "Query Bind Parameter as Object"
}]
What escaping should the value have, is parameter mandatory, and what is the format when multiple bind parameters are defined?
I was not able to import the following script:
[{
  name: "Create Random Complex Users (num, outpUsers)",
  value: '// Create specified number of users in the users Vertex collection
FOR i IN 1..@usersNum
  INSERT {
    id: 100000 + i,
    age: 18 + FLOOR(RAND() * 50), // RAND() generates a float in [0, 1)
    name: CONCAT('user', TO_STRING(i)),
  } IN @@users'
}
]
What is wrong and how should it be fixed?
NOTE:
ArangoDB version: arangosh (ArangoDB 3.0.10 [linux] 64bit, using VPack 0.1.30, ICU 54.1, V8 5.0.71.39, OpenSSL 1.0.1f 6 Jan 2014)
Using the JSON fixed by @mpv1989, the following error appears in the Web Interface: Query error: queries could not be imported.
And the following message appears in the log, using a DB named test under the root user:
2016-10-26T12:31:28Z [31690] ERROR Service "/_admin/aardvark" encountered error 500 while handling POST http://localhost:8529/_db/test/_admin/aardvark/query/upload/root
2016-10-26T12:31:28Z [31690] ERROR ArangoError: users can only be used in _system database
2016-10-26T12:31:28Z [31690] ERROR at getStorage (/usr/share/arangodb3/js/server/modules/@arangodb/users.js:93:17)
2016-10-26T12:31:28Z [31690] ERROR at Object.exports.document (/usr/share/arangodb3/js/server/modules/@arangodb/users.js:291:17)
2016-10-26T12:31:28Z [31690] ERROR at Route._handler (/usr/share/arangodb3/js/apps/system/_admin/aardvark/APP/aardvark.js:153:18)
2016-10-26T12:31:28Z [31690] ERROR at next (/usr/share/arangodb3/js/server/modules/@arangodb/foxx/router/tree.js:386:15)
2016-10-26T12:31:28Z [31690] ERROR at /usr/share/arangodb3/js/node/node_modules/lodash/lodash.js:9378:25
2016-10-26T12:31:28Z [31690] ERROR at Middleware.authRouter.use (/usr/share/arangodb3/js/apps/system/_admin/aardvark/APP/aardvark.js:78:3)
2016-10-26T12:31:28Z [31690] ERROR at next (/usr/share/arangodb3/js/server/modules/@arangodb/foxx/router/tree.js:388:15)
2016-10-26T12:31:28Z [31690] ERROR at next (/usr/share/arangodb3/js/server/modules/@arangodb/foxx/router/tree.js:384:7)
2016-10-26T12:31:28Z [31690] ERROR at next (/usr/share/arangodb3/js/server/modules/@arangodb/foxx/router/tree.js:384:7)
2016-10-26T12:31:28Z [31690] ERROR at next (/usr/share/arangodb3/js/server/modules/@arangodb/foxx/router/tree.js:384:7)
However, the fixed JSON can be SUCCESSFULLY imported into the _system database! Thank you @mpv1989.
It seems persistence and import of the query snippets only works for the _system DB, which matches the error above: the queries are stored via the users module, and that storage lives only in the _system database...

What error message do you get when exporting/importing?
As a workaround, I exported your query from the Web Interface. Here is the result:
[{
  "name": "Create Random Complex Users (num, outpUsers)",
  "value": "// Create specified number of users in the users Vertex collection\nFOR i IN 1..@usersNum\n INSERT {\n id: 100000 + i,\n age: 18 + FLOOR(RAND() * 50), // RAND() generates a float in [0, 1)\n name: CONCAT('user', TO_STRING(i))\n } IN @@users",
  "parameter": {
    "usersNum": 100,
    "@users": "users"
  }
}]
The field parameter is a JSON object. If you do not have any bind parameters, just write an empty object: "parameter": {}.
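If you want to script the upload instead of clicking through the UI, the error log above reveals the endpoint the web interface itself calls. A minimal sketch, untested, assuming the default host/port, the root user, and that queries.json contains an array like the one shown above (the password is a placeholder):
# Upload saved queries for user "root" into the _system database
# (endpoint taken from the error log above).
curl -X POST --user root:openSesame \
  --data-binary @queries.json \
  http://localhost:8529/_db/_system/_admin/aardvark/query/upload/root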

Related

DB2 select JSON_ARRAYAGG

Using JSON_ARRAYAGG does not work for me:
1) [Code: -104, SQL State: 42601] An unexpected token "ORDER" was found following "CITY)
". Expected tokens may include: ")".. SQLCODE=-104, SQLSTATE=42601, DRIVER=4.28.11
2) [Code: -727, SQL State: 56098] An error occurred during implicit system action type "2". Information returned for the error includes SQLCODE "-104", SQLSTATE "42601" and message tokens "ORDER|CITY)
|)".. SQLCODE=-727, SQLSTATE=56098, DRIVER=4.28.11
I want an output like this
{"id":901, "name": "Hansi", "addresses" :
[
{ "address":"A", "city":"B"},
{ "address":"C", "city":"D"}
]
}
I am using IBM DB2 11.1 for Linux, UNIX and Windows.
VALUES (
  JSON_ARRAY(
    SELECT JSON_OBJECT(
             'ID' VALUE ID,
             'NAME' VALUE NAME,
             'ADDRESSES' VALUE JSON_ARRAYAGG(
               JSON_OBJECT('ADDRESS' VALUE ADDRESS,
                           'CITY' VALUE CITY)
               ORDER BY ADDRESS)
           )
    FROM CUSTOMER
    JOIN CUSTOMER_ADDRESS ON ADDRESS_CUSTOMER_ID = ID
    GROUP BY ID, NAME
    FORMAT JSON
  )
);
The tables used are:
CUSTOMER - ID (INT), NAME (VARCHAR64)
ADDRESS - ADDRESS (VARCHAR64), CITY (VARCHAR64)
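Not a verified fix, but one pattern worth trying when the aggregate form is rejected is to move JSON_ARRAYAGG into a correlated subselect, which also removes the need for VALUES, JSON_ARRAY, and GROUP BY. A sketch, untested on Db2 11.1; table and column names are taken from the question, and FORMAT JSON marks the subselect result as already-serialized JSON so it is not escaped as a plain string:
-- One JSON document per customer; the address array is built per row.
SELECT JSON_OBJECT(
         'id'   VALUE c.ID,
         'name' VALUE c.NAME,
         'addresses' VALUE (
           SELECT JSON_ARRAYAGG(
                    JSON_OBJECT('address' VALUE a.ADDRESS,
                                'city'    VALUE a.CITY)
                    ORDER BY a.ADDRESS)
           FROM CUSTOMER_ADDRESS a
           WHERE a.ADDRESS_CUSTOMER_ID = c.ID
         ) FORMAT JSON
       )
FROM CUSTOMER c;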

Insert document with string containing line breaks to mongo using mongo shell

I am trying to insert the following document into a mongo collection:
[
  {
    "text": "tryng to insert a string
with some line breaks",
  }
]
I am running db.myCollection.insertMany(documentArray), where documentArray is the array above, copy-pasted into the shell.
But I am getting this error:
> db.myCollection.insertMany([
... {
... "text": "tryng to insert a string
uncaught exception: SyntaxError: "" literal not terminated before end of script :
@(shell):3:37
> with some line breaks",
uncaught exception: SyntaxError: missing ( before with-statement object :
@(shell):1:5
> }
uncaught exception: SyntaxError: expected expression, got '}' :
@(shell):1:0
> ]
This happens because the shell treats the newline character as the end of the command, so it tries to run an incomplete statement.
Is there any way of saving \r\n and \n characters in MongoDB? Should I use another method rather than the shell directly?
Both Mongo and the shell are version 4.4.15
Try this instead. The key is to embed the newline as the \n escape sequence inside the string.
[
  {
    "text": "trying to insert a string\n" +
            "with some line breaks",
  }
]
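For completeness, the full shell call with the escape sequence embedded would look like this (collection name as in the question):
db.myCollection.insertMany([
  // \n is stored as a real newline character in the saved document
  { "text": "trying to insert a string\nwith some line breaks" }
])
Both \n and \r\n survive the round trip; db.myCollection.findOne().text prints the string with the break in place.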

Compose Transporter throws error when collection_filters is set to sync data for current day from DocumentDB/MongoDB to file/ElasticSearch

I am using Compose Transporter to sync data from DocumentDB to an ElasticSearch instance in AWS. After a one-time full sync, I added the following collection_filters in pipeline.js to sync incremental data daily:
// pipeline.js
var source = mongodb({
  "uri": "mongodb <URI>",
  "ssl": true,
  "collection_filters": '{ "mycollection": { "createdDate": { "$gt": new Date(Date.now() - 24*60*60*1000) } }}',
})
var sink = file({
  "uri": "file://mongo_dump.json"
})
t.Source("source", source, "^mycollection$").Save("sink", sink, "/.*/")
I get the following error:
$ transporter run pipeline.js
panic: malformed collection_filters [recovered]
panic: Panic at 32: malformed collection_filters [recovered]
panic: Panic at 32: malformed collection_filters
goroutine 1 [running]:
github.com/compose/transporter/vendor/github.com/dop251/goja.(*Runtime).RunProgram.func1(0xc420101d98)
/Users/JP/gocode/src/github.com/compose/transporter/vendor/github.com/dop251/goja/runtime.go:779 +0x98
When I change collection_filters so that the value of the "$gt" key is a single string token (see below), the malformed error vanishes, but it doesn't fetch any documents:
'{ "mycollection": { "createdDate": { "$gt": "new Date(Date.now() - 24*60*60 * 1000)" } }}',
To check whether something was fundamentally wrong with the way I am querying, I tried a simple string filter, and that works well:
"collection_filters": '{ "articles": { "createdBy": "author name" }}',
I tried various ways to pass the createdDate filter but got either the malformed error or no data. However, the same filter in the mongo shell gives me the expected output. Note that I tried both ES and file as the sink before asking here.
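One workaround to consider, since pipeline.js itself is evaluated as JavaScript (by goja, per the stack trace above): compute the cutoff in JS first and splice the result into the filter string, so that collection_filters stays literal JSON. A sketch, untested; whether the extended-JSON $date form is honored by the filter parser is an assumption that may depend on the Transporter version:
// pipeline.js (sketch): keep collection_filters as plain JSON by
// building the date string in JavaScript before it is embedded.
var cutoff = new Date(Date.now() - 24*60*60*1000).toISOString();
var source = mongodb({
  "uri": "mongodb <URI>",
  "ssl": true,
  "collection_filters": '{ "mycollection": { "createdDate": { "$gt": { "$date": "' + cutoff + '" } } } }'
})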

Resource type error while trying to use CloudFormation

I tried to use the exact same example provided in the user guide linked below. It works from the console but fails to create the stack via the CLI.
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-athena-namedquery.html
I got an error while trying to execute the following:
{
  "Resources": {
    "AthenaNamedQuery": {
      "Type": "AWS::Athena::NamedQuery",
      "Properties": {
        "Database": "swfnetadata",
        "Description": "A query that selects all aggregated data",
        "Name": "MostExpensiveWorkflow",
        "QueryString": "SELECT workflowname, AVG(activitytaskstarted) AS AverageWorkflow FROM swfmetadata WHERE year='17' AND GROUP BY workflowname ORDER BY AverageWorkflow DESC LIMIT 10"
      }
    }
  }
}
Is the "create-stack" parameter of cloudformation correct?
aws cloudformation create-stack --stack-name dnd --template-body file://final.json
Why am I getting a resource type error like this?
An error occurred (ValidationError) when calling the CreateStack operation: Template format error: Unrecognized resource types: [AWS::Athena::NamedQuery]
It worked when I updated my CLI version as suggested in the comment. This issue is now closed.
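For anyone hitting the same validation error: check the installed CLI build first. A quick sketch (pip shown as one of several install methods; adjust to however you installed the CLI):
# Print the installed version; the resolution above was simply upgrading it.
aws --version
# One way to upgrade if the CLI was installed via pip:
pip install --upgrade awscli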

Fiware-Cygnus ERROR: invalid input syntax for type point

I have an attribute in Fiware-Orion with type geo:point, as follows:
{
  "name": "Coords",
  "type": "geo:point",
  "value": "LATITUDE,LONGITUDE"
}
and Cygnus is subscribed to Orion. When the Cygnus PostgreSQLSink receives the event and tries to store it in the PostgreSQL database, I get the following error:
WARN sinks.OrionSink: Bad context data (ERROR: invalid input syntax for type point: "[]"
What should I use in Fiware-Orion?
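For reference, Orion expects a geo:point value to be an actual coordinate pair: numeric latitude and longitude separated by a comma. The "[]" in the Cygnus error suggests no usable point reached the sink. A sketch of the attribute with sample coordinates (the numbers are placeholders to replace with real readings):
{
  "name": "Coords",
  "type": "geo:point",
  "value": "41.3763726, 2.1864475"
}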