Suppose I want to create a set of unique edges and vertices.
create vertex A set etc.
create vertex B set etc.
create edge AB, create edge AC,
And all of these edges and vertices must be unique, so some of the commands will likely fail when a duplicate already exists.
How do I batch these commands such that I am guaranteed all commands will be run, even when some commands fail?
I tried your case: I have a Vertex class with a name property (unique index). You can execute batch commands in several ways:
Studio
begin
LET a = create vertex User set name = 'John'
LET b = create vertex User set name = 'Jane'
LET c = create edge FriendOf from $a to $b
commit retry 100
return $c
Java API
OrientGraph g = new OrientGraph(currentPath);
String cmd = "begin\n";
cmd += "let $user2 = UPDATE User SET user_id = 'userX' UPSERT RETURN AFTER #rid WHERE user_id = 'userX'\n";
cmd += "let $service = UPDATE Service SET service = 'serviceX' UPSERT RETURN AFTER #rid WHERE service = 'serviceX'\n";
cmd += "CREATE edge link FROM $user2 TO $service\n";
cmd += "commit";
g.command(new OCommandScript("sql", cmd)).execute();
Console
create a .txt file with your code like this:
connect remote:localhost/stack49801389 root root
begin
LET a = create vertex User set name = 'John'
LET b = create vertex User set name = 'Jane'
create edge FriendOf from $a to $b
commit retry 100
return $c
and then run it from the console
For more information you can take a look at this link
Hope it helps
Regards
I want to copy an entire subgraph from one server to another via Cypher in Neo4j.
I have one Neo4j instance on Host 1 and another on Host 2.
My requirement is to copy the graph from Host 1 and insert it into Host 2.
You can use the apoc.bolt.execute procedure to move data from one graph to another:
Here is an example moving data from the standard movies graph to another graph called movies888 :
Replace username, password and host with your values
MATCH (n:Person)-[r:ACTED_IN]->(m)
WITH n, r, m
CALL apoc.bolt.execute(
'bolt://<username>:<password>@<host>:7687',
'
MERGE (p:Person {name: $n.name}) SET p = $n
MERGE (m:Movie {title: $m.title}) SET m = $m
MERGE (p)-[r:ACTED_IN]->(m)
SET r = $r
',
{n: n{.*}, m: m{.*}, r: r{.*}}, {databaseName: 'movies888'}
)
YIELD row RETURN count(*)
locals {
  instance_name  = "TESTWINDOWSVM"
  instance_count = 4
  vm_instances   = format("%s%s", local.instance_name, local.instance_count)
}
I am creating Windows VMs via Terraform on Azure. I want to combine instance_name and instance_count to build a new list variable.
The output should be ["TESTWINDOWSVM001", "TESTWINDOWSVM002", "TESTWINDOWSVM003", "TESTWINDOWSVM004"]. Is there a way to do this in Terraform?
You can do this with a straightforward for expression iterating over the range function inside a list constructor, combined with string interpolation and the format function to zero-pad each number to three digits:
[for idx in range(local.instance_count) : "${local.instance_name}${format("%03d", idx + 1)}"]
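Put together, a minimal sketch of the locals block plus an output to verify the result (the local names match the question; the output name is illustrative):

```terraform
locals {
  instance_name  = "TESTWINDOWSVM"
  instance_count = 4

  # range(4) yields [0, 1, 2, 3]; format("%03d", idx + 1) zero-pads to three digits
  vm_instances = [
    for idx in range(local.instance_count) :
    "${local.instance_name}${format("%03d", idx + 1)}"
  ]
}

output "vm_instances" {
  value = local.vm_instances
  # ["TESTWINDOWSVM001", "TESTWINDOWSVM002", "TESTWINDOWSVM003", "TESTWINDOWSVM004"]
}
```

local.vm_instances can then feed the count or for_each of the VM resource.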
I have two editable numeric fields and a table in App Designer; the user can enter values in these fields and then push a button, and the values are added to the table. I also provide an option to attach an Excel file whose two columns should be reflected in the table.
Both of these work perfectly fine individually, but if I add the values manually and then attach an Excel file, or vice versa, I get the following error: All tables in the bracketed expression must have the same variable names.
The function that handles the editable fields:
app.t = app.UITable.Data;
x = app.xvalueEditField.Value;
y = app.yvalueEditField.Value;
nr = table(x, y);
app.UITable.Data = [app.t; nr]; %% error happens here if I attach excel then add manually
app.t = app.UITable.Data;
The function that handles the Excel file:
text = readtable([pathname filename], "Sheet",1, 'ReadVariableNames',false);
fl = cellfun(@isnumeric, table2cell(text(1,:)));
if (numel(fl(fl == false)) > 0)
flag = false;
else
flag = true;
end
if (flag)
A = [app.t; text]; %% error happens here if I add manually then attach
app.UITable.Data = A;
app.t = text;
end
Note: these are only the parts of the functions where I attempt to combine values.
Can someone please help me?
Thank you
The error message is telling you that table only allows you to vertically concatenate tables when the 'VariableNames' properties match. This is documented here: https://www.mathworks.com/help/matlab/ref/vertcat.html#btxzag0-1 .
In your first code example, the table nr will have variable names x and y (derived from the names of the underlying variables you used to construct the table). You could fix that case by doing:
% force nr to have the same VariableNames as app.t:
nr = table(x, y, 'VariableNames', app.t.Properties.VariableNames);
and in the second case, you can force text to have the correct variable names like this:
text.Properties.VariableNames = app.t.Properties.VariableNames;
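A minimal standalone sketch of both the failure mode and the fix (the table names t1, t2 here are illustrative, not from the app):

```matlab
t1 = table(1, 2, 'VariableNames', {'x', 'y'});  % stands in for app.t
a = 3; b = 4;
nr = table(a, b);    % VariableNames default to 'a' and 'b', so [t1; nr] errors
nr.Properties.VariableNames = t1.Properties.VariableNames;
t2 = [t1; nr];       % now both tables share {'x','y'} and vertcat succeeds
```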
I understand you can probably use the following code to get the job done in most cases:
mydocpath = fullfile(getenv('USERPROFILE'), 'Documents');
However, if the user has moved the 'Documents' folder to a different location, for example E:\Documents, the above code won't work, since getenv('USERPROFILE') always returns C:\Users\MY_USER_NAME.
In C#, one can use Environment.GetFolderPath(Environment.SpecialFolder.MyDocuments), which always returns the correct path regardless of where 'Documents' is. Is there anything similar in MATLAB?
My current solution is rather clumsy and probably unsafe:
% search in the MATLAB path lists
% this method assumes that there is always a path containing \Documents\MATLAB registered already
searchPtn = '\Documents\MATLAB';
pathList = strsplit(path,';');
strIdx = strfind(pathList, searchPtn);
candidateIdx = strIdx{find(cellfun(@isempty, strIdx) == 0, 1)}(1);
myDocPath = pathList{candidateIdx}(1 : strIdx{candidateIdx}+ numel(searchPtn));
Based on @excaza's suggestion, I came up with a solution using dos and the cmd command found here to query the registry.
% query the registry
[~,res]=dos('reg query "HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Explorer\User Shell Folders" /v Personal');
% parse result
res = strsplit(res, ' ');
myDocPath = strtrim(res{numel(res)});
Edit:
If the Documents folder on the customer's PC has not been relocated, or has been moved under one of the environment paths such as %SYSTEMROOT%, the above method returns
%SOME_ENVIRONMENT_PATH%\Documents (or a custom folder name)
This path will not work in MATLAB functions such as mkdir or exist, which would treat %SOME_ENVIRONMENT_PATH% as a literal folder name. Therefore we need to check for an environment variable in the returned value and expand it to get the correct path:
[startidx, endidx] = regexp(myDocPath,'%[A-Z]+%');
if ~isempty(startidx)
myDocPath = fullfile(getenv(myDocPath(startidx(1)+1:endidx(1)-1)), myDocPath(endidx(1)+1:end));
end
Full code:
% query the registry
[~,res]=dos('reg query "HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Explorer\User Shell Folders" /v Personal');
% parse result
res = strsplit(res, ' ');
% get path
myDocPath = strtrim(res{numel(res)});
% if it returns %AAAAA%/xxxx, meaning the Documents folder is
% in some environment path.
[startidx, endidx] = regexp(myDocPath,'%[A-Z]+%');
if ~isempty(startidx)
myDocPath = fullfile(getenv(myDocPath(startidx(1)+1:endidx(1)-1)), myDocPath(endidx(1)+1:end));
end
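As an alternative to shelling out and parsing the reg query output, MATLAB's built-in winqueryreg function (Windows only) can read the same registry value directly. This is a sketch under the assumption that it reads the value as stored; I have not verified whether it expands REG_EXPAND_SZ values, so the %VAR% expansion step above may still be needed:

```matlab
% read the 'Personal' (My Documents) value straight from the registry
myDocPath = winqueryreg('HKEY_CURRENT_USER', ...
    'Software\Microsoft\Windows\CurrentVersion\Explorer\User Shell Folders', ...
    'Personal');
```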
I have a Pig script that uses a macro twice on the same relation but with different parameters; for each use I filter the same relation on a different field. The macro is shaped more or less as follows:
DEFINE doubleGroupJoin (mainField, mainRelation) returns out {
valid = FILTER $mainRelation BY $mainField != '';
r1 = FOREACH (GROUP valid BY $mainField) GENERATE
field1_1, field1_2, ...;
r2 = FOREACH (GROUP valid BY ($mainField, otherfield1, ...)) GENERATE
field2_1, field2_2, ...;
$out = FOREACH (JOIN r1 BY field1_1, r2 BY field2_1) GENERATE
final1, final2, ...;
}
In the script I have the following:
-- Output1
finalR1 = doubleGroupJoin('field1', initialData);
STORE finalR1 INTO '$output/R1';
-- Output2
finalR2 = doubleGroupJoin('field2', initialData);
STORE finalR2 INTO '$output/R2';
If I comment out either the Output1 or the Output2 block, the job works fine, but if I try to use both I get the following error:
java.lang.ClassCastException: org.apache.pig.data.BinSedesTuple cannot be cast to java.lang.String
at org.apache.pig.backend.hadoop.HDataType.getWritableComparableTypes(HDataType.java:106)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapReduce$Map.collect(PigGenericMapReduce.java:111)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapBase.runPipeline(PigGenericMapBase.java:284)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapBase.map(PigGenericMapBase.java:277)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapBase.map(PigGenericMapBase.java:64)
at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:144)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:370)
at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:212)
Using Pig 0.12.0 here. Any suggestion on why this might be happening?