BUG: insert select edge creates invalid TO references - OrientDB

I suspect this is a serious bug, and it raises doubts about how OrientDB maintains the integrity of the graph's data during various DML operations.
How can an edge have multiple TO references?
The steps below outline a simple operation that uses INSERT INTO ... SELECT to copy a vertex.
Initial State
{
"result": [
{
"#type": "d",
"#rid": "#-2:1",
"#version": 0,
"rid": "#133:46",
"version": 1,
"class": "RuleSet",
"out_HasRule": [
"#80:32"
],
"#fieldTypes": "rid=x,out_HasRule=g"
},
{
"#type": "d",
"#rid": "#-2:2",
"#version": 0,
"rid": "#130:39",
"version": 1,
"class": "Rule",
"in_HasRule": [
"#80:32"
],
"#fieldTypes": "rid=x,in_HasRule=g"
}
],
"notification": "Query executed in 0.213 sec. Returned 2 record(s)" }
If I execute the following, bad edge data is created.
Notice that, in the result below, edge #80:32 now appears as the incoming edge (in_HasRule) of more than one vertex.
insert into Rule from select * from #130:39;
{
"result": [
{
"#type": "d",
"#rid": "#-2:1",
"#version": 0,
"rid": "#133:46",
"version": 1,
"class": "RuleSet",
"out_HasRule": [
"#80:32"
],
"#fieldTypes": "rid=x,out_HasRule=g"
},
{
"#type": "d",
"#rid": "#-2:2",
"#version": 0,
"rid": "#131:38",
"version": 1,
"class": "Rule",
"in_HasRule": [
"#80:32"
],
"#fieldTypes": "rid=x,in_HasRule=g"
},
{
"#type": "d",
"#rid": "#-2:3",
"#version": 0,
"rid": "#130:39",
"version": 1,
"class": "Rule",
"in_HasRule": [
"#80:32"
],
"#fieldTypes": "rid=x,in_HasRule=g"
}
],
"notification": "Query executed in 0.151 sec. Returned 3 record(s)"}

Yes, INSERT/SELECT was designed to copy documents, not vertices, so it also copies edge pointers.
I think it's worth fixing it in one of the following two ways:
remove the edge links
also copy the edges
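Until that is fixed, one interim workaround may be to list the plain properties explicitly instead of using select *, so the edge fields are never carried over into the copy. A minimal sketch against the REST command endpoint; the database name, credentials, and the Rule property names are assumptions, not taken from the question:
# Workaround sketch, not the official fix: copy only plain properties so the
# in_HasRule edge pointer of #130:39 is never duplicated onto the new vertex.
import requests

BASE = "http://localhost:2480"   # assumed OrientDB REST endpoint
DB = "mydb"                      # assumed database name
AUTH = ("admin", "admin")        # assumed credentials

props = ["name", "description"]  # hypothetical plain properties of Rule
projection = ", ".join(props)

# Same INSERT ... FROM SELECT as above, but with an explicit projection
# instead of SELECT *, so the edge fields are excluded from the copy.
sql = f"INSERT INTO Rule FROM SELECT {projection} FROM #130:39"
resp = requests.post(f"{BASE}/command/{DB}/sql", data=sql, auth=AUTH)
resp.raise_for_status()
print(resp.json())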
Could you please open an issue here: https://github.com/orientechnologies/orientdb/issues
It will be easier to discuss it there.

Thanks Luigi!
This issue gets at the heart of what we expect from any database, namely data integrity, so I hope it gets to the top of the stack.
Issue has been opened here -> https://github.com/orientechnologies/orientdb/issues/7826

Related

MongoDB update_one vs update_many - Improve speed

I have a collection of ca. 10,000 docs, where each doc has the following format:
{
"_id": {
"$oid": "631edc6e207c89b932a70a26"
},
"name": "Ethereum",
"auditInfoList": [
{
"coinId": "1027",
"auditor": "Fairyproof",
"auditStatus": 2,
"reportUrl": "https://www.fairyproof.com/report/Covalent"
}
],
"circulatingSupply": 122335921.0615,
"cmcRank": 2,
"dateAdded": "2015-08-07T00:00:00.000Z",
"id": 1027,
"isActive": 1,
"isAudited": true,
"lastUpdated": 1662969360,
"marketPairCount": 6085,
"quotes": [
{
"name": "USD",
"price": 1737.1982544180462,
"volume24h": 14326453277.535921,
"marketCap": 212521748520.66168,
"percentChange1h": 0.62330307,
"percentChange24h": -1.08847937,
"percentChange7d": 10.96517745,
"lastUpdated": 1662966780,
"percentChange30d": -13.49374496,
"percentChange60d": 58.25153862,
"percentChange90d": 42.27475921,
"fullyDilluttedMarketCap": 212521748520.66,
"marketCapByTotalSupply": 212521748520.66168,
"dominance": 20.0725,
"turnover": 0.0674117,
"ytdPriceChangePercentage": -53.9168
}
],
"selfReportedCirculatingSupply": 0,
"slug": "ethereum",
"symbol": "ETH",
"tags": [
"mineable",
"pow",
"smart-contracts",
"ethereum-ecosystem",
"coinbase-ventures-portfolio",
"three-arrows-capital-portfolio",
"polychain-capital-portfolio",
"binance-labs-portfolio",
"blockchain-capital-portfolio",
"boostvc-portfolio",
"cms-holdings-portfolio",
"dcg-portfolio",
"dragonfly-capital-portfolio",
"electric-capital-portfolio",
"fabric-ventures-portfolio",
"framework-ventures-portfolio",
"hashkey-capital-portfolio",
"kenetic-capital-portfolio",
"huobi-capital-portfolio",
"alameda-research-portfolio",
"a16z-portfolio",
"1confirmation-portfolio",
"winklevoss-capital-portfolio",
"usv-portfolio",
"placeholder-ventures-portfolio",
"pantera-capital-portfolio",
"multicoin-capital-portfolio",
"paradigm-portfolio",
"injective-ecosystem"
],
"totalSupply": 122335921.0615
}
I'm pulling an updated version of it and, to avoid duplicates, I'm doing the following using update_one:
for doc in new_doc_list:
    CRYPTO_TEMPORARY_LIST.update_one(
        {"name": doc["name"]},
        {"$set": {"lastUpdated": doc["lastUpdated"]}},
        upsert=True,
    )
The problem is that it's too slow.
I'm trying to figure out how to improve the speed by using update_many, but I can't figure out how to set it up.
I basically want to update every document by name. Completely replacing the doc, rather than just the "lastUpdated" field, would be even better.
Thanks guys <3
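One note on update_many: it applies the same update document to every matched document, so it cannot carry a per-document value such as each doc's lastUpdated. The usual way to speed this pattern up is to keep one operation per document but send them to the server in a single batch with bulk_write. A minimal sketch, assuming new_doc_list and CRYPTO_TEMPORARY_LIST from the snippet above and that replacing the whole document is acceptable:
from pymongo import ReplaceOne

# Build one ReplaceOne per incoming doc, matching on "name" and upserting.
# If the incoming docs carry their own "_id", drop it so the replacement
# cannot clash with the _id already stored under that name.
ops = []
for doc in new_doc_list:
    doc = {k: v for k, v in doc.items() if k != "_id"}
    ops.append(ReplaceOne({"name": doc["name"]}, doc, upsert=True))

# One round trip instead of ~10,000; ordered=False lets the server keep
# processing past individual failures.
if ops:
    result = CRYPTO_TEMPORARY_LIST.bulk_write(ops, ordered=False)
    print(result.upserted_count, result.modified_count)
If only a few fields change per run, the same batching works with UpdateOne operations instead of ReplaceOne.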

Check if student submitted assignment via Moodle web services

I am trying to use the Moodle API (web services) to get information about (my) assignment submissions. I want to know whether I submitted an attempt for the assignment already or not. I am using the mod_assign_get_assignments function (which doesn't have too much documentation) and the results I get (when looking at the assignments portion of each course) are:
{
"id": 25960,
"cmid": 350053,
"course": 8013502,
"name": "\u05d4\u05d2\u05e9\u05ea \u05ea\u05e8\u05d2\u05d9\u05dc \u05d1\u05d9\u05ea 1",
"nosubmissions": 0,
"submissiondrafts": 0,
"sendnotifications": 0,
"sendlatenotifications": 0,
"sendstudentnotifications": 0,
"duedate": 1617566400,
"allowsubmissionsfromdate": 0,
"grade": 100,
"timemodified": 1615897679,
"completionsubmit": 1,
"cutoffdate": 1617569940,
"gradingduedate": 0,
"teamsubmission": 0,
"requireallteammemberssubmit": 0,
"teamsubmissiongroupingid": 0,
"blindmarking": 0,
"hidegrader": 0,
"revealidentities": 0,
"attemptreopenmethod": "manual",
"maxattempts": 1,
"markingworkflow": 0,
"markingallocation": 0,
"requiresubmissionstatement": 0,
"preventsubmissionnotingroup": 0
...irrelevant configurations
}
The above result is for an assignment I have already submitted.
An example of an assignment I did not submit is:
{
"id": 19764,
"cmid": 268225,
"course": 8013201,
"name": "\u05ea\u05d9\u05d1\u05ea \u05d4\u05d2\u05e9\u05d4 14",
"nosubmissions": 0,
"submissiondrafts": 0,
"sendnotifications": 0,
"sendlatenotifications": 0,
"sendstudentnotifications": 0,
"duedate": 1611693000,
"allowsubmissionsfromdate": 0,
"grade": 100,
"timemodified": 1610972842,
"completionsubmit": 0,
"cutoffdate": 1611694860,
"gradingduedate": 0,
"teamsubmission": 0,
"requireallteammemberssubmit": 0,
"teamsubmissiongroupingid": 0,
"blindmarking": 0,
"hidegrader": 0,
"revealidentities": 0,
"attemptreopenmethod": "manual",
"maxattempts": 1,
"markingworkflow": 0,
"markingallocation": 0,
"requiresubmissionstatement": 0,
"preventsubmissionnotingroup": 0
...irrelevant configurations
}
The only apparent difference between these (that might point to a way to check if I submitted it or not) is the completionsubmit property, but that cannot be the solution because a different assignment that I have submitted has it set to 0 (just like the one I didn't submit).
Does someone have an idea how I can solve this issue?
Thanks in Advance!
EDIT: mod_assign_get_submissions denies me access:
{"assignments":[],"warnings":[{"item":"assignment","itemid":myitemname,"warningcode":"1","message":"No access rights in module context"}]}
I have now looked into mod_assign_get_submission_status, but it seems it can only handle one assignment at a time. Does anyone have a way to make this more efficient?
You could try using mod_assign_get_submissions instead to retrieve the submissions for your assignments. It has been available since Moodle 2.5.
References
Moodle API
Emulated Data For Get Submissions from Moodle
Sample Response
{
"assignments": [
{
"assignmentid": 14,
"submissions": [
{
"id": 7,
"userid": 3,
"attemptnumber": 0,
"timecreated": 1426865031,
"timemodified": 1426865062,
"status": "submitted",
"groupid": 0,
"plugins": [
{
"type": "onlinetext",
"name": "Online text",
"fileareas": [
{
"area": "submissions_onlinetext"
}
],
"editorfields": [
{
"name": "onlinetext",
"description": "Submission comments",
"text": "<p>But I must explain to you how all this mistaken idea of denouncing pleasure and praising pain was born and I will give you a complete account of the system, and expound the actual teachings of the great explorer of the truth, the master-builder of human happiness. No one rejects, dislikes, or avoids pleasure itself, because it is pleasure, but because those who do not know how to pursue pleasure rationally encounter consequences that are extremely painful. <br></p>",
"format": 1
}
]
},
{
"type": "file",
"name": "File submissions",
"fileareas": [
{
"area": "submission_files",
"files": [
{
"filepath": "APDFfile.pdf",
"fileurl": "http://localhost/m/stable_master/webservice/pluginfile.php/247/assignsubmission_file/submission_files/12/somefile.pdf"
},
{
"filepath": "anotherfile.docx",
"fileurl": "http://localhost/m/stable_master/webservice/pluginfile.php/247/assignsubmission_file/submission_files/12/somefile.pdf"
}
]
}
]
},
{
"type": "comments",
"name": "Submission comments"
}
]
},
{
"id": 5,
"userid": 4,
"attemptnumber": 0,
"timecreated": 1426864693,
"timemodified": 1426864740,
"status": "draft",
"groupid": 0,
"plugins": [
{
"type": "onlinetext",
"name": "Online text",
"fileareas": [
{
"area": "submissions_onlinetext",
"files": [
{
"filepath": "/Arte esquemático-Cigüeña.png",
"fileurl": "http://localhost/m/stable_master/webservice/pluginfile.php/245/assignsubmission_onlinetext/submissions_onlinetext/5/Arte%20esquem%C3%A1tico-Cig%C3%BCe%C3%B1a.png"
}
]
}
],
"editorfields": [
{
"name": "onlinetext",
"description": "Submission comments",
"text": "<p>Blah Blah Blah lorem ipsum</p><p><br></p><p><b>Blah Blah Blah lorem ipsum</b><br></p><p><b><br></b></p><p><b><span style=\"font-weight: normal;\"><i>Blah Blah Blah lorem ipsum</i></span><br></b></p><p><b><span style=\"font-weight: normal;\"><i><br></i></span></b></p><p><b><span style=\"font-weight: normal;\"><i><img src=\"##PLUGINFILE##/Arte%20esquem%C3%A1tico-Cig%C3%BCe%C3%B1a.png\" alt=\"\" width=\"734\" height=\"844\" role=\"presentation\" style=\"vertical-align:text-bottom; margin: 0 .5em;\" class=\"img-responsive\"><br></i></span></b></p>",
"format": 1
}
]
},
{
"type": "file",
"name": "File submissions",
"fileareas": [
{
"area": "submission_files",
"files": [
{
"filepath": "somefile.pdf",
"fileurl": "http://localhost/m/stable_master/webservice/pluginfile.php/247/assignsubmission_file/submission_files/12/somefile.pdf"
}
]
}
]
},
{
"type": "comments",
"name": "Submission comments"
}
]
}
]
}
],
"warnings": []
}
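A minimal sketch of calling mod_assign_get_submissions through Moodle's standard REST endpoint and checking each submission's status field; the site URL, token, and assignment ids are placeholders. Note that, as the edit above shows, a student account may get a "No access rights in module context" warning for this function, in which case calling mod_assign_get_submission_status per assignment is the fallback:
import requests

MOODLE_URL = "https://moodle.example.edu"  # placeholder site URL
TOKEN = "your_webservice_token"            # placeholder web service token

def get_submissions(assignment_ids):
    # Moodle's REST protocol expects array parameters as assignmentids[0], [1], ...
    params = {
        "wstoken": TOKEN,
        "wsfunction": "mod_assign_get_submissions",
        "moodlewsrestformat": "json",
    }
    for i, aid in enumerate(assignment_ids):
        params[f"assignmentids[{i}]"] = aid
    resp = requests.get(f"{MOODLE_URL}/webservice/rest/server.php", params=params)
    resp.raise_for_status()
    return resp.json()

data = get_submissions([25960, 19764])  # ids taken from the question
for assignment in data.get("assignments", []):
    submitted = any(
        s.get("status") == "submitted" for s in assignment.get("submissions", [])
    )
    print(assignment["assignmentid"], "submitted" if submitted else "not submitted")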

Building Smaller Sized Raspberry Pi images

I am trying to build small Raspberry Pi images using the packer-builder-arm (https://github.com/mkaczanowski/packer-builder-arm) community plugin.
The resulting images are still 2 GB. Does anybody have a suggestion for how to reduce the image size? Thanks!
{
"variables": {},
"builders": [
{
"type": "arm",
"file_urls": [
"http://downloads.raspberrypi.org/raspios_lite_armhf/images/raspios_lite_armhf-2020-05-28/2020-05-27-raspios-buster-lite-armhf.zip"
],
"file_checksum_url": "http://downloads.raspberrypi.org/raspios_lite_armhf/images/raspios_lite_armhf-2020-05-28/2020-05-27-raspios-buster-lite-armhf.zip.sha256",
"file_checksum_type": "sha256",
"file_target_extension": "zip",
"image_build_method": "reuse",
"image_path": "custom-raspberry-pi-os.img",
"image_size": "700M",
"image_type": "dos",
"image_partitions": [
{
"name": "boot",
"type": "c",
"start_sector": "8192",
"filesystem": "vfat",
"size": "256M",
"mountpoint": "/boot"
},
{
"name": "root",
"type": "83",
"start_sector": "532480",
"filesystem": "ext4",
"size": "0",
"mountpoint": "/"
}
],
"image_chroot_env": [
"PATH=/usr/local/bin:/usr/local/sbin:/usr/bin:/usr/sbin:/bin:/sbin"
],
"qemu_binary_source_path": "/usr/bin/qemu-arm-static",
"qemu_binary_destination_path": "/usr/bin/qemu-arm-static"
}
],
"provisioners": [
{
"type": "shell",
"inline": ["touch /tmp/test"]
}
]
}
You might want to look into removing the programs you don't need in there, and then shrink the filesystem and partition using something like resize2fs.
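One way to act on that inside the same template is an extra inline shell step in the provisioners block that purges packages and apt caches before the image is finalized. This is only a sketch and the package list is an example; also note that with the "reuse" build method the .img file stays at the declared image_size, so to get a genuinely smaller file you still have to shrink the root filesystem (resize2fs) and the partition afterwards:
"provisioners": [
  {
    "type": "shell",
    "inline": [
      "apt-get purge -y --auto-remove triggerhappy logrotate dphys-swapfile",
      "apt-get clean",
      "rm -rf /var/lib/apt/lists/* /var/cache/apt/archives/*"
    ]
  }
]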

Write correct JSON about PlanDefinition and ActivityDefinition on FHIR

I want to write JSON that creates a PlanDefinition resource with some ActivityDefinition resources inside it, and persist these resources on a FHIR R4 server.
My sandbox server is HAPI FHIR.
Two questions:
First: how can I write it?
Second: once I write the correct JSON, will the result be the creation of one PlanDefinition resource plus several ActivityDefinition resources, or only one PlanDefinition resource with this information inside it?
This is my JSON to create a simple PlanDefinition, but I don't know how to add an ActivityDefinition inside it:
{
"resourceType": "PlanDefinition",
"id": "999999",
"meta": {
"versionId": "1",
"lastUpdated": "2020-04-16T11:10:45.868+00:00",
"source": "#YS2h8QIqvGKHDy4x"
},
"url": "www.myserver.it",
"identifier": [ {
"system": "www.myserver.it",
"value": "jtr-pd1"
} ],
"version": "versione 1",
"status": "active",
"action": [ {
"title": "A",
"definitionCanonical": "#Process_Alex1"
}, {
"title": "B",
"definitionCanonical": "#Process_Alex2"
}, {
"title": "C",
"definitionCanonical": "ActivityDefinition"
} ]
}
Typically in FHIR we don't contain resources inside each other. References instead point to other independently maintained resource instances. For example, multiple PlanDefinitions might point to the same ActivityDefinition because that one activity is a 'step' in multiple protocols/order sets.
If you have a situation where an activity definition is tied to a single PlanDefinition and can't exist independent of that PlanDefinition (e.g. if the PlanDefinition were deleted, the ActivityDefinition would go too; no other PlanDefinition can point to the Activity, any update to the activity would be considered an update to the plan, etc.), you can send the ActivityDefinition as a 'contained' resource. Your instance would look like this:
{
"resourceType": "PlanDefinition",
"id": "999999",
"meta": {
"versionId": "1",
"lastUpdated": "2020-04-16T11:10:45.868+00:00",
"source": "#YS2h8QIqvGKHDy4x"
},
"contained": [ {
"resourceType": "ActivityDefinition",
"id": "Process_Alex1",
...
},
{
"resourceType": "ActivityDefinition",
"id": "Process_Alex2",
...
} ],
"url": "www.myserver.it",
"identifier": [ {
"system": "www.myserver.it",
"value": "jtr-pd1"
} ],
"version": "versione 1",
"status": "active",
"action": [ {
"title": "A",
"definitionCanonical": "#Process_Alex1"
}, {
"title": "B",
"definitionCanonical": "#Process_Alex2"
}, {
"title": "C",
"definitionCanonical": "http://somewhere.org/ActivityDefinition/foo"
} ]
}
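On the second question: posting a PlanDefinition with contained ActivityDefinitions creates exactly one resource on the server; the contained ActivityDefinitions live only inside it and get no independent ids. If you want independent ActivityDefinition resources, POST each of them to /ActivityDefinition (or send everything in a transaction Bundle) and point definitionCanonical at their canonical url values instead of at #local ids. A minimal sketch of sending the contained version above to a HAPI FHIR R4 server with Python requests; the base URL and file name are placeholders:
import json
import requests

FHIR_BASE = "http://hapi.fhir.org/baseR4"  # placeholder R4 sandbox base URL

# plan is the PlanDefinition from the answer above, with the
# ActivityDefinitions in its "contained" array.
with open("plandefinition.json") as f:
    plan = json.load(f)

resp = requests.post(
    f"{FHIR_BASE}/PlanDefinition",
    json=plan,
    headers={"Content-Type": "application/fhir+json"},
)
resp.raise_for_status()
print(resp.status_code, resp.json().get("id"))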

How to find a TFS release in progress?

I need to fail a build on TFS 2018 if its pipeline is not fully complete. Batching just the build is not enough; the linked release must be finished as well before another build can begin. My idea is to do this in a PowerShell script via the REST API.
I see in the official documentation here that there's a property called TaskStatus. It provides a value of inProgress, presumably for releases that are in progress. This might do the trick, but there's no indication of how to actually use it.
Using the REST API, how can I get the TaskStatus of a given release?
The inProgress value, along with other values such as succeeded and canceled, just stands for the status of a single task in the release pipeline.
You can simply use the REST API to get a release:
GET https://fabrikam.vsrm.visualstudio.com/MyFirstProject/_apis/release/releases/{releaseId}?api-version=4.1-preview.6
There should be a value called status:
"id": 18,
"name": "Release-18",
"status": "abandoned",
"createdOn": "2017-06-16T01:36:20.397Z",
"modifiedOn": "2017-06-16T01:36:21.07Z",
"modifiedBy": {
"id": "4adb1680-0eac-6149-b5ee-fc8b4f6ca227",
"displayName": "Chuck Reinhart",
"uniqueName": "fabfiber#outlook.com",
"url": "https://app.vssps.visualstudio.com/A168224e4-29ff-4081-9954-c8780ce81117/_apis/Identities/4adb1680-0eac-6149-b5ee-fc8b4f6ca227",
"imageUrl": "https://fabfiber-inc.visualstudio.com/_api/_common/identityImage?id=4adb1680-0eac-6149-b5ee-fc8b4f6ca227"
},
"createdBy": {
"id": "4adb1680-0eac-6149-b5ee-fc8b4f6ca227",
"displayName": "Chuck Reinhart",
"uniqueName": "fabfiber#outlook.com",
"url": "https://app.vssps.visualstudio.com/A168224e4-29ff-4081-9954-c8780ce81117/_apis/Identities/4adb1680-0eac-6149-b5ee-fc8b4f6ca227",
"imageUrl": "https://fabfiber-inc.visualstudio.com/_api/_common/identityImage?id=4adb1680-0eac-6149-b5ee-fc8b4f6ca227"
},
"environments": [
{
"id": 69,
"releaseId": 18,
"name": "Dev",
"status": "notStarted",
"variables": {},
"preDeployApprovals": [],
"postDeployApprovals": [],
"preApprovalsSnapshot": {
"approvals": [
{
"rank": 1,
"isAutomated": false,
"isNotificationOn": false,
"approver": {
"id": "4adb1680-0eac-6149-b5ee-fc8b4f6ca227",
"displayName": "Chuck Reinhart",
"uniqueName": "fabfiber#outlook.com",
"url": "https://app.vssps.visualstudio.com/A168224e4-29ff-4081-9954-c8780ce81117/_apis/Identities/4adb1680-0eac-6149-b5ee-fc8b4f6ca227",
"imageUrl": "https://fabfiber-inc.visualstudio.com/_api/_common/identityImage?id=4adb1680-0eac-6149-b5ee-fc8b4f6ca227"
},
"id": 0
}
You can read the status value from the returned JSON and judge whether the release succeeded or failed, and then decide, based on that status, whether to trigger another build.
Update
A sample of the returned JSON with the task status info:
"deploymentJobs": [
{
"job": {
"id": 5,
"timelineRecordId": "855ea6d6-9ed0-442d-b921-0c4add8bb068",
"name": "Release",
"dateStarted": "2018-07-04T08:53:05.9133333Z",
"dateEnded": "2018-07-04T08:53:21.34Z",
"startTime": "2018-07-04T08:53:05.9133333Z",
"finishTime": "2018-07-04T08:53:21.34Z",
"status": "succeeded",
"rank": 1,
"issues": [],
"agentName": "DFA00"
},
"tasks": [
{
"id": 1,
"timelineRecordId": "fa3bb635-eab4-4c1b-9cc0-fdccd7ced33f",
"name": "Initialize Job",
"dateStarted": "2018-07-04T08:53:06.5833333Z",
"dateEnded": "2018-07-04T08:53:06.8033333Z",
"startTime": "2018-07-04T08:53:06.5833333Z",
"finishTime": "2018-07-04T08:53:06.8033333Z",
"status": "succeeded",
"rank": 1,
"issues": [],
"agentName": "DFA00",
"logUrl": "http://xxxx:8080/tfs/DefaultCollection/7658559e-6e61-422a-952b-a5fce0b6ca1d/_apis/Release/releases/49/environments/49/tasks/1/logs?releaseDeployPhaseId=54"
},
There should be a timelineRecordId, startTime, finishTime, and status for each task in the deployment result of a single release.
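Putting that together, a sketch of the check from a script, shown here in Python (the same GET works from PowerShell via Invoke-RestMethod); the collection URL, project, release id, and personal access token are placeholders, and the status values follow the samples above:
import requests

TFS_BASE = "http://tfsserver:8080/tfs/DefaultCollection"  # placeholder
PROJECT = "MyFirstProject"                                # placeholder
RELEASE_ID = 18                                           # placeholder
PAT = "personal-access-token"                             # placeholder

url = f"{TFS_BASE}/{PROJECT}/_apis/release/releases/{RELEASE_ID}"
resp = requests.get(url, params={"api-version": "4.1-preview.6"}, auth=("", PAT))
resp.raise_for_status()
release = resp.json()

# Overall release status, e.g. "active" or "abandoned".
print("release status:", release["status"])

# Per-environment deployment status; anything not yet finished means the
# release pipeline is still running and the new build should be failed or held.
still_running = any(
    env.get("status") in ("notStarted", "queued", "scheduled", "inProgress")
    for env in release.get("environments", [])
)
print("pipeline still running:", still_running)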