I am having write performance problems with MongoDB 5.0.8 in a PSA (Primary-Secondary-Arbiter) deployment when one data-bearing member goes down.
I am aware of the "Mitigate Performance Issues with PSA Replica Set" page and the procedure to temporarily work around this issue.
However, in my opinion, the manual intervention described there should not be necessary during operation. So what can I do to ensure that the system continues to run efficiently even if a node fails? In other words, how do I get the behaviour of MongoDB 4.x with the option "enableMajorityReadConcern=false"?
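For context, the temporary workaround from that page essentially amounts to stripping the vote and priority of the unavailable data-bearing member, roughly like this in the shell (the member index here is a placeholder):

cfg = rs.conf()
cfg.members[1].votes = 0     // index of the unavailable data-bearing member
cfg.members[1].priority = 0
rs.reconfig(cfg)

This is exactly the manual step I would like to avoid.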
As I understand it, the problem has something to do with the defaultRWConcern. When configuring a PSA replica set in MongoDB you are forced to set the DefaultRWConcern, otherwise the following message appears when rs.addArb() is called:
MongoServerError: Reconfig attempted to install a config that would
change the implicit default write concern. Use the setDefaultRWConcern
command to set a cluster-wide write concern and try the reconfig
again.
So I did
db.adminCommand({
  "setDefaultRWConcern": 1,
  "defaultWriteConcern": {
    "w": 1
  },
  "defaultReadConcern": {
    "level": "local"
  }
})
I would expect this configuration to cause no lag when reading from or writing to a PSA system with only one data-bearing node available.
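The applied defaults can be double-checked with:

db.adminCommand({ getDefaultRWConcern: 1 })

which, consistent with the log below, reports the write concern above with provenance "customDefault".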
But I observe "slow query" messages in the mongod log like this one:
{
  "t": {
    "$date": "2022-05-13T10:21:41.297+02:00"
  },
  "s": "I",
  "c": "COMMAND",
  "id": 51803,
  "ctx": "conn149",
  "msg": "Slow query",
  "attr": {
    "type": "command",
    "ns": "<db>.<col>",
    "command": {
      "insert": "<col>",
      "ordered": true,
      "txnNumber": 4889253,
      "$db": "<db>",
      "$clusterTime": {
        "clusterTime": {
          "$timestamp": {
            "t": 1652430100,
            "i": 86
          }
        },
        "signature": {
          "hash": {
            "$binary": {
              "base64": "bEs41U6TJk/EDoSQwfzzerjx2E0=",
              "subType": "0"
            }
          },
          "keyId": 7096095617276968965
        }
      },
      "lsid": {
        "id": {
          "$uuid": "25659dc5-a50a-4f9d-a197-73b3c9e6e556"
        }
      }
    },
    "ninserted": 1,
    "keysInserted": 3,
    "numYields": 0,
    "reslen": 230,
    "locks": {
      "ParallelBatchWriterMode": {
        "acquireCount": {
          "r": 2
        }
      },
      "ReplicationStateTransition": {
        "acquireCount": {
          "w": 3
        }
      },
      "Global": {
        "acquireCount": {
          "w": 2
        }
      },
      "Database": {
        "acquireCount": {
          "w": 2
        }
      },
      "Collection": {
        "acquireCount": {
          "w": 2
        }
      },
      "Mutex": {
        "acquireCount": {
          "r": 2
        }
      }
    },
    "flowControl": {
      "acquireCount": 1,
      "acquireWaitCount": 1,
      "timeAcquiringMicros": 982988
    },
    "readConcern": {
      "level": "local",
      "provenance": "implicitDefault"
    },
    "writeConcern": {
      "w": 1,
      "wtimeout": 0,
      "provenance": "customDefault"
    },
    "storage": {},
    "remote": "10.10.7.12:34258",
    "protocol": "op_msg",
    "durationMillis": 983
  }
}
The collection involved here is under substantial load, with about 1000 reads and 1000 writes per second from different (concurrent) clients.
MongoDB 4.x with "enableMajorityReadConcern=false" performed normally here and I did not notice any loss of performance in my application. MongoDB 5.x cannot cope with this, and in my application data piles up that I cannot write away fast enough.
So my question is whether I can get the MongoDB 4.x behaviour back. A write guarantee from the single data-bearing node that is still available in the failure scenario would be fine for me. But having to manually reconfigure the replica set during an outage should really be avoided.
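One more observation: in the log entry above, almost the entire duration is spent waiting for flow control ("timeAcquiringMicros": 982988 out of "durationMillis": 983), so the flow control mechanism throttling writes on the primary looks like the immediate cause. As a diagnostic step (not a recommendation), flow control can be switched off at runtime via a documented server parameter:

db.adminCommand({ setParameter: 1, enableFlowControl: false })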
Thanks for any advice!
In the end we changed the setup to a PSS layout.
This was also the recommendation in the MongoDB Community Forum.
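A rough sketch of the change, assuming the arbiter is replaced by a third data-bearing member (hostnames are placeholders):

rs.remove("arbiter.example.net:27017")
rs.add({ host: "new-secondary.example.net:27017" })

With three data-bearing members, the majority commit point can still advance when one node is down, so flow control no longer throttles writes.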
I am running a simple query in MongoDB Compass using a filter and a projection, and I am seeing behaviour I cannot explain.
Here is my filter:
{
  "$and": [
    {
      "_id": ObjectId('611ee5ee6b93815ee436969e')
    },
    {
      "type": "article"
    }
  ]
}
I get the following result:
{
  "_id": {
    "$oid": "611ee5ee6b93815ee436969e"
  },
  "type": "article",
  "history": [],
  "liked": [],
  "parentId": "61105f00cc11ec10406fd1c4",
  "permissionList": [],
  "title": "Test",
  "wikiId": "610de623fbfa1e58cdba9d2c"
}
As expected, I get all the fields, in particular type and wikiId.
However, if I add the following projection:
{
  "_id": 1,
  "wikiId": 1,
  "parentId": 1,
  "title": 1,
  "type": 1,
  "permissionList": 1,
  "liked": 1,
  "history": 1
}
I would expect the same result; however, I get:
{
  "_id": {
    "$oid": "611ee5ee6b93815ee436969e"
  },
  "type": "article",
  "history": [],
  "liked": [],
  "parentId": "61105f00cc11ec10406fd1c4",
  "permissionList": [],
  "title": "Test"
}
This time I do not get the field wikiId, even though it was requested in the projection.
And what bugs me is that if I use this projection instead:
{
  "_id": 1,
  "wikiId": 1,
  "parentId": 1,
  "title": 1,
  "permissionList": 1,
  "liked": 1,
  "history": 1
}
Then I get the wikiId field in the result again, as expected.
Can anyone give me some insight into what is going on with these queries and where I am mistaken?
Edit 1: The reason I want to use a projection is that, depending on the type field, I work with different documents that have different fields.
In my Java code I am using @BsonDiscriminator(key = "type"), but when I explicitly want one kind of document I create the appropriate projection to be sure. However, in this case I just wanted to reduce the issue I am facing to its simplest form.
Thanks
In Compass, top-level filter fields are combined with an implicit AND, so your filter can be simplified as below:
{ "_id": ObjectId('611ee5ee6b93815ee436969e'), "type": "article" }
And regarding the projection, you can simply list the fields to include or exclude, like below:
{
  "_id": 1,
  "wikiId": 1,
  "parentId": 1,
  "title": 1,
  "type": 1,
  "permissionList": 1,
  "liked": 1,
  "history": 1
}
If you need all fields, there is no need to specify anything; by default Compass returns all fields. But let's say you want all but a few fields (_id, wikiId); then use the projection below:
{
  "_id": 0,
  "wikiId": 0
}
Also, in Compass, try clicking Find again or the refresh button: sometimes the displayed results do not reflect the current filter, because the data is fetched as the filter changes. Just hit refresh or Find again and it should work.
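To rule out Compass display quirks entirely, the same filter and projection can be run from the mongo shell (the collection name here is a placeholder):

db.getCollection('docs').find(
  { "_id": ObjectId('611ee5ee6b93815ee436969e'), "type": "article" },
  { "wikiId": 1, "parentId": 1, "title": 1, "type": 1, "permissionList": 1, "liked": 1, "history": 1 }
)

If the shell returns wikiId while Compass does not, the problem is on the Compass side rather than in the server's projection handling.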
I am trying to use the Moodle API (web services) to get information about (my) assignment submissions. I want to know whether I have already submitted an attempt for an assignment or not. I am using the mod_assign_get_assignments function (which doesn't have much documentation), and the results I get (looking at the assignments portion of each course) are:
{
  "id": 25960,
  "cmid": 350053,
  "course": 8013502,
  "name": "\u05d4\u05d2\u05e9\u05ea \u05ea\u05e8\u05d2\u05d9\u05dc \u05d1\u05d9\u05ea 1",
  "nosubmissions": 0,
  "submissiondrafts": 0,
  "sendnotifications": 0,
  "sendlatenotifications": 0,
  "sendstudentnotifications": 0,
  "duedate": 1617566400,
  "allowsubmissionsfromdate": 0,
  "grade": 100,
  "timemodified": 1615897679,
  "completionsubmit": 1,
  "cutoffdate": 1617569940,
  "gradingduedate": 0,
  "teamsubmission": 0,
  "requireallteammemberssubmit": 0,
  "teamsubmissiongroupingid": 0,
  "blindmarking": 0,
  "hidegrader": 0,
  "revealidentities": 0,
  "attemptreopenmethod": "manual",
  "maxattempts": 1,
  "markingworkflow": 0,
  "markingallocation": 0,
  "requiresubmissionstatement": 0,
  "preventsubmissionnotingroup": 0
  ...irrelevant configurations
}
The above result is for an assignment I have already submitted.
An example of an assignment I did not submit is:
{
  "id": 19764,
  "cmid": 268225,
  "course": 8013201,
  "name": "\u05ea\u05d9\u05d1\u05ea \u05d4\u05d2\u05e9\u05d4 14",
  "nosubmissions": 0,
  "submissiondrafts": 0,
  "sendnotifications": 0,
  "sendlatenotifications": 0,
  "sendstudentnotifications": 0,
  "duedate": 1611693000,
  "allowsubmissionsfromdate": 0,
  "grade": 100,
  "timemodified": 1610972842,
  "completionsubmit": 0,
  "cutoffdate": 1611694860,
  "gradingduedate": 0,
  "teamsubmission": 0,
  "requireallteammemberssubmit": 0,
  "teamsubmissiongroupingid": 0,
  "blindmarking": 0,
  "hidegrader": 0,
  "revealidentities": 0,
  "attemptreopenmethod": "manual",
  "maxattempts": 1,
  "markingworkflow": 0,
  "markingallocation": 0,
  "requiresubmissionstatement": 0,
  "preventsubmissionnotingroup": 0
  ...irrelevant configurations
}
The only apparent difference between these (that might point to a way to check if I submitted it or not) is the completionsubmit property, but that cannot be the solution because a different assignment that I have submitted has it set to 0 (just like the one I didn't submit).
Does someone have an idea how I can solve this issue?
Thanks in Advance!
EDIT: mod_assign_get_submissions denies my access
{"assignments":[],"warnings":[{"item":"assignment","itemid":myitemname,"warningcode":"1","message":"No access rights in module context"}]}
I have now looked into mod_assign_get_submission_status, but it seems it can only query one assignment at a time. Does anyone have a way to make this more efficient?
You could try using mod_assign_get_submissions instead to retrieve the submissions made to assignments. It is available since Moodle 2.5.
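It accepts a list of assignment ids, so the submissions for several assignments can be fetched in a single call. A sketch of the REST request (host and token are placeholders; the ids are taken from the question):

https://<moodle-url>/webservice/rest/server.php?wstoken=<token>&wsfunction=mod_assign_get_submissions&moodlewsrestformat=json&assignmentids[0]=25960&assignmentids[1]=19764

Each submission in the response carries a status field (for example "draft" or "submitted", as in the sample below), which is what distinguishes a submitted assignment from an unsubmitted one.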
References
Moodle API
Emulated Data For Get Submissions from Moodle
Sample Response
{
  "assignments": [
    {
      "assignmentid": 14,
      "submissions": [
        {
          "id": 7,
          "userid": 3,
          "attemptnumber": 0,
          "timecreated": 1426865031,
          "timemodified": 1426865062,
          "status": "submitted",
          "groupid": 0,
          "plugins": [
            {
              "type": "onlinetext",
              "name": "Online text",
              "fileareas": [
                {
                  "area": "submissions_onlinetext"
                }
              ],
              "editorfields": [
                {
                  "name": "onlinetext",
                  "description": "Submission comments",
                  "text": "<p>But I must explain to you how all this mistaken idea of denouncing pleasure and praising pain was born and I will give you a complete account of the system, and expound the actual teachings of the great explorer of the truth, the master-builder of human happiness. No one rejects, dislikes, or avoids pleasure itself, because it is pleasure, but because those who do not know how to pursue pleasure rationally encounter consequences that are extremely painful. <br></p>",
                  "format": 1
                }
              ]
            },
            {
              "type": "file",
              "name": "File submissions",
              "fileareas": [
                {
                  "area": "submission_files",
                  "files": [
                    {
                      "filepath": "APDFfile.pdf",
                      "fileurl": "http://localhost/m/stable_master/webservice/pluginfile.php/247/assignsubmission_file/submission_files/12/somefile.pdf"
                    },
                    {
                      "filepath": "anotherfile.docx",
                      "fileurl": "http://localhost/m/stable_master/webservice/pluginfile.php/247/assignsubmission_file/submission_files/12/somefile.pdf"
                    }
                  ]
                }
              ]
            },
            {
              "type": "comments",
              "name": "Submission comments"
            }
          ]
        },
        {
          "id": 5,
          "userid": 4,
          "attemptnumber": 0,
          "timecreated": 1426864693,
          "timemodified": 1426864740,
          "status": "draft",
          "groupid": 0,
          "plugins": [
            {
              "type": "onlinetext",
              "name": "Online text",
              "fileareas": [
                {
                  "area": "submissions_onlinetext",
                  "files": [
                    {
                      "filepath": "/Arte esquemático-Cigüeña.png",
                      "fileurl": "http://localhost/m/stable_master/webservice/pluginfile.php/245/assignsubmission_onlinetext/submissions_onlinetext/5/Arte%20esquem%C3%A1tico-Cig%C3%BCe%C3%B1a.png"
                    }
                  ]
                }
              ],
              "editorfields": [
                {
                  "name": "onlinetext",
                  "description": "Submission comments",
                  "text": "<p>Blah Blah Blah lorem ipsum</p><p><br></p><p><b>Blah Blah Blah lorem ipsum</b><br></p><p><b><br></b></p><p><b><span style=\"font-weight: normal;\"><i>Blah Blah Blah lorem ipsum</i></span><br></b></p><p><b><span style=\"font-weight: normal;\"><i><br></i></span></b></p><p><b><span style=\"font-weight: normal;\"><i><img src=\"##PLUGINFILE##/Arte%20esquem%C3%A1tico-Cig%C3%BCe%C3%B1a.png\" alt=\"\" width=\"734\" height=\"844\" role=\"presentation\" style=\"vertical-align:text-bottom; margin: 0 .5em;\" class=\"img-responsive\"><br></i></span></b></p>",
                  "format": 1
                }
              ]
            },
            {
              "type": "file",
              "name": "File submissions",
              "fileareas": [
                {
                  "area": "submission_files",
                  "files": [
                    {
                      "filepath": "somefile.pdf",
                      "fileurl": "http://localhost/m/stable_master/webservice/pluginfile.php/247/assignsubmission_file/submission_files/12/somefile.pdf"
                    }
                  ]
                }
              ]
            },
            {
              "type": "comments",
              "name": "Submission comments"
            }
          ]
        }
      ]
    }
  ],
  "warnings": []
}
I'm new to MongoDB and have 4 different problems importing a big (16 GB) JSONL file into my MongoDB (simple PSA cluster).
Attached below you will find a sample entry from the mentioned JSON dump.
With this file, which I get from an external provider, I actually have 4 problems:
"hotel_id" is the key and should normally be (re-)named as "_id"
"hotel_id" should not be treated as string rather than as Number
"location" is not properly formatted (if i understood correctly the MongoDB Manual) as GeoJSON as it should be like
"location": {
"type": "Point",
"coordinates": [-93.26838,37.15845]
}
instead of
"location": {
"coordinates": {
"latitude": 37.15845,
"longitude": -93.26838
}
}
"dates" can this be used to efficiently update just the records which needs to be updated?
So my challenge now is to transform the data according to my needs before importing it, or at import time, but in both cases of course as quickly as possible.
Therefore I searched a lot for hints and best practices, but I was not able to find a solution yet, maybe because I'm a beginner with MongoDB.
I played around with "jq" to adjust the data, for example to add the type that seems to be necessary for the location (point 3), but wasn't really successful:
cat dump.jsonl | ./bin/jq --arg typeOfField Point '.location + {type: $typeOfField}'
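From what I have read, a single jq filter along these lines might cover points 1-3 in one pass (an untested sketch; tonumber does the string-to-Number conversion, and GeoJSON expects [longitude, latitude] order):

cat dump.jsonl | jq -c '._id = (.hotel_id | tonumber) | del(.hotel_id) | .location = {type: "Point", coordinates: [.location.coordinates.longitude, .location.coordinates.latitude]}' > transformed.jsonl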
Besides that, I was importing a sample dump of roughly 500 MB, which took 1.5 minutes the first time (into an empty database). If I run it in "upsert" mode, it takes roughly 12 hours. So I was also wondering: what is the best practice for importing such a big JSON dump?
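For reference, a mongoimport invocation for such a file might look like this (database/collection names and worker count are assumptions):

mongoimport --db hotels --collection hotels --file transformed.jsonl --numInsertionWorkers 8 --mode upsert

The "--mode upsert" presumably accounts for the slowdown, since every document then needs a lookup instead of a plain insert.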
Any help is appreciated!! :-)
Kind regards,
Lumpy
{
  "hotel_id": "12345",
  "name": "Test Hotel",
  "address": {
    "line_1": "123 Test St",
    "line_2": "Apt A",
    "city": "Test City"
  },
  "ratings": {
    "property": {
      "rating": "3.5",
      "type": "Star"
    },
    "guest": {
      "count": 48382,
      "average": "3.1"
    }
  },
  "location": {
    "coordinates": {
      "latitude": 22.54845,
      "longitude": -90.11838
    }
  },
  "phone": "555-0153",
  "fax": "555-7249",
  "category": {
    "id": 1,
    "name": "Hotel"
  },
  "rank": 42,
  "dates": {
    "added": "1998-07-19T05:00:00.000Z",
    "updated": "2018-03-22T07:23:14.000Z"
  },
  "statistics": {
    "11": {
      "id": 11,
      "name": "Total number of rooms - 220",
      "value": "220"
    },
    "12": {
      "id": 12,
      "name": "Number of floors - 7",
      "value": "7"
    }
  },
  "chain": {
    "id": -2,
    "name": "Test Hotels"
  },
  "brand": {
    "id": 2,
    "name": "Test Brand"
  }
}
I'm trying to set up TeamCity to build and verify patchsets from Gerrit. The last step should set Verified to -1 if the build failed. I'm playing around with the Gerrit REST API and I think I found the proper endpoint:
https://gerrit-review.googlesource.com/Documentation/rest-api-changes.html#set-review
The documentation says:
As response a ReviewInfo entity is returned that describes the applied
labels.
My request looks like this:
POST <gerrit-url>/a/changes/I696f00f4968fcb35fa614ce6325499aa15367150/revisions/current/review
{
  "message": "Build failed",
  "labels": {
    "Verified": -1
  }
}
As a response I get full revision info:
{
  "id": "dev_test~master~<change-id>",
  "project": "dev_test",
  "branch": "master",
  "hashtags": [],
  "change_id": "<change-id>",
  "subject": "a test",
  "status": "NEW",
  "created": "2017-04-03 07:53:19.000000000",
  "updated": "2017-04-04 08:47:34.000000000",
  "submit_type": "MERGE_IF_NECESSARY",
  "mergeable": true,
  "insertions": 133,
  "deletions": 7,
  "unresolved_comment_count": 0,
  "_number": 381,
  "owner": {
    "_account_id": 4,
    "name": "<my-name>",
    "email": "<my-email>",
    "username": "<my-username>"
  },
  "labels": {
    "Code-Review": {
      "all": [
        {
          "value": 1,
          "date": "2017-04-04 08:47:34.000000000",
          "permitted_voting_range": {
            "min": -2,
            "max": 2
          },
          "_account_id": 4,
          "name": "<my-name>",
          "email": "<my-email>",
          "username": "<my-username>"
        }
      ],
      "values": {
        "-2": "This shall not be merged",
        "-1": "I would prefer this is not merged as is",
        " 0": "No score",
        "+1": "Looks good to me, but someone else must approve",
        "+2": "Looks good to me, approved"
      },
      "default_value": 0
    },
    "Verified": {
      "all": [
        {
          "value": 0,
          "permitted_voting_range": {
            "min": -1,
            "max": 1
          },
          "_account_id": 4,
          "name": "<my-name>",
          "email": "<my-email>",
          "username": "<my-username>"
        }
      ],
      "values": {
        "-1": "Fails",
        " 0": "No score",
        "+1": "Verified"
      },
      "default_value": 0
    }
  },
  "permitted_labels": {},
  "removable_reviewers": [],
  "reviewers": {
    "REVIEWER": [
      {
        "_account_id": 4,
        "name": "<my-name>",
        "email": "<my-email>",
        "username": "<my-username>"
      }
    ]
  },
  "current_revision": "913330441711b067899a664a60c78be518e547b4",
  "revisions": {
    "913330441711b067899a664a60c78be518e547b4": {
      "kind": "REWORK",
      "_number": 6,
      "created": "2017-04-03 14:08:14.000000000",
      "uploader": {
        "_account_id": 4,
        "name": "<my-name>",
        "email": "<my-email>",
        "username": "<my-username>"
      },
      "ref": "refs/changes/81/381/6",
      "fetch": {
        "ssh": {
          "url": "ssh://<url>",
          "ref": "refs/changes/81/381/6"
        },
        "http": {
          "url": "http://<url>",
          "ref": "refs/changes/81/381/6"
        }
      }
    }
  }
}
It is not affected by the request: the same response is returned when I send the request using the GET method, or using POST with invalid JSON in the body(!)
Gerrit version is: 2.13.6-3008-gcdc381e
Am I doing something wrong?
PS: Here is a similar question, but it isn't helpful: Gerrit set-review api doesn't work
EDIT:
It seems that I am getting the response from a GET request, not the POST.
I figured it out. It's not a Gerrit problem. I used an http URL and our server redirected to https with a 301, which Postman followed as a GET request, so I was seeing the response to that GET.
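For anyone hitting the same thing: pointing the request directly at the https endpoint avoids the redirect. A curl equivalent (credentials and change id are placeholders; depending on the Gerrit version, --digest may be needed for the /a/ prefix):

curl -X POST -u <user>:<http-password> \
  -H "Content-Type: application/json" \
  -d '{"message": "Build failed", "labels": {"Verified": -1}}' \
  https://<gerrit-url>/a/changes/<change-id>/revisions/current/review

The underlying issue is that many HTTP clients follow a 301 by re-issuing the request as a GET, which drops the body, and that explains why the change info came back instead of a ReviewInfo.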