I'm building an app that analyzes the steps of the person wearing the sensor, and I haven't found any mention in the API or the example apps of what the units of measurement are (for linear acceleration and angular velocity).
I believe the timestamp is in milliseconds (correct me if I'm wrong), but what are the others?
Thanks!
The Time API returns data in µs (microseconds):
$ wbcmd --port com9 --op get --path Time
{
"response": 200,
"responsestring": "HTTP_CODE_OK",
"operation": "get",
"uri": "/net/ECKIB9870DA4/Time",
"content": 1420070440653000,
"querytimems": 26,
"querytimens": 26918122
}
But the Timestamp for Acc comes from the LSM6DS3 chip and is in ms (milliseconds):
$ wbcmd --port com9 --op subscribe --path Meas/Acc/13
/net/ECKIB9870DA4/Meas/Acc/13::onSubscribeResult
Subscribed and listening for notifications. Press ESC to stop:
#191 { {
"Timestamp": 158,
"ArrayAcc": [
{
"x": -0.07657305896282196,
"y": -0.66044265031814575,
"z": 9.9186038970947266
}
]
} }
#255 { {
"Timestamp": 235,
"ArrayAcc": [
{
"x": -0.11485958844423294,
"y": -0.64129936695098877,
"z": 9.9377470016479492
}
]
} }
#383 { {
"Timestamp": 312,
"ArrayAcc": [
{
"x": -0.052643977105617523,
"y": -0.6341206431388855,
"z": 9.9329614639282227
}
]
} }
Note that the LSM chip does not have a particularly accurate clock, so the real timestamp can drift a little depending on temperature, roughly ±2%.
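For example, to line up an Acc sample with the Time API value, the /Time reading has to be divided by 1000 (µs → ms). A tiny illustrative JavaScript sketch (variable and function names are made up), which also shows what the ~±2% drift figure above means for a single timestamp:

// Convert a /Time reading (microseconds) to milliseconds so it can be
// compared with the /Meas/Acc "Timestamp" field (milliseconds).
const timeUs = 1420070440653000;   // example value from the /Time response above
const timeMs = timeUs / 1000;      // 1420070440653 ms

// Worst-case window for an Acc timestamp, assuming the ~±2% drift mentioned above.
function driftWindowMs(accTimestampMs, driftRatio = 0.02) {
  return {
    earliest: accTimestampMs * (1 - driftRatio),
    latest: accTimestampMs * (1 + driftRatio),
  };
}

console.log(driftWindowMs(312));   // last sample above: roughly 306..318 ms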
I need your help understanding a performance problem.
We have a system where we store sets of documents (1k-4k docs) in batches. Documents have this structure: {_id: ObjectId(), RepositoryId: UUID(), data...}
where the RepositoryId is the same for all instances in the set. We also have unique indexes on: {_id: 1, RepositoryId: 1} and {RepositoryId: 1, ...}.
The use case is: delete all documents with the same RepositoryId:
db.collection.deleteMany(
    { RepositoryId: UUID("SomeGUID") },
    { writeConcern: { w: "majority", j: true } }
)
And then re-upsert batches (300 items per batch) with the same RepositoryId that we deleted before:
db.collection.insertMany(
    [ { RepositoryId: UUID(), data... }, ... ],
    {
        writeConcern: { w: 1, j: false },
        ordered: false
    }
)
The issue is that the upsert of the first few (3-5) batches takes much more time than the rest (first batch: 10 s, 8th batch: 0.1 s). There is also an entry in the log file:
{
"t": {
"$date": "2023-01-19T15:49:02.258+01:00"
},
"s": "I",
"c": "COMMAND",
"id": 51803,
"ctx": "conn64",
"msg": "Slow query",
"attr": {
"type": "command",
"ns": "####.$cmd",
"command": {
"update": "########",
"ordered": false,
"writeConcern": {
"w": 1,
"fsync": false,
"j": false
},
"txnNumber": 16,
"$db": "#####",
"lsid": {
"id": {
"$uuid": "6ffb319a-6003-4221-9925-710e9e2aa315"
}
},
"$clusterTime": {
"clusterTime": {
"$timestamp": {
"t": 1674139729,
"i": 5
}
},
"numYields": 0,
"reslen": 11550,
"locks": {
"ParallelBatchWriterMode": {
"acquireCount": {
"r": 600
}
},
"ReplicationStateTransition": {
"acquireCount": {
"w": 601
}
},
"Global": {
"acquireCount": {
"w": 600
}
},
"Database": {
"acquireCount": {
"w": 600
}
},
"Collection": {
"acquireCount": {
"w": 600
}
},
"Mutex": {
"acquireCount": {
"r": 600
}
}
},
"flowControl": {
"acquireCount": 300,
"timeAcquiringMicros": 379
},
"readConcern": {
"level": "local",
"provenance": "implicitDefault"
},
"writeConcern": {
"w": 1,
"j": false,
"wtimeout": 0,
"provenance": "clientSupplied"
},
"storage": {
},
"remote": "127.0.0.1:52800",
"protocol": "op_msg",
"durationMillis": 13043
}
}
}
}
Is there some background process running after the delete that affects the upsert performance of the first batches? It was not a problem until we switched from a standalone instance to a single-instance replica set, due to transaction support needed in another part of the app. This case does not require transactions, but we cannot host two instances of mongo with different setups. The DB is exclusive to this operation; no other operations run on it (running in an isolated test environment). How can we fix it?
The issue is reproducible. It seems that when there is a time gap between test runs (a few minutes), the problem is not there for the first run, but the following runs are problematic.
Running on a machine with a Ryzen 7 PRO 4750U, 32 GB RAM and a Samsung 970 EVO M.2 SSD. MongoDB version 5.0.5.
In that log entry, timeAcquiringMicros indicates that this operation waited while attempting to acquire a lock.
flowControl is a throttling mechanism that delays writes on the primary node when the secondary nodes are lagging, with the intent of letting them catch up before they get so far behind that consistency is lost.
Waiting on the flowControl lock suggests that there was a backlog of operations still being replicated to the secondaries; they were a bit behind, so the new writes were being slowed.
See Replication Lag and Flow Control for more detail.
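To confirm whether flow control is what is throttling these writes, the relevant counters can be checked from mongosh. A small diagnostic sketch (run against the primary; nothing here is specific to this workload):

// Flow control counters on the primary (available since MongoDB 4.2).
db.serverStatus().flowControl

// How far the secondaries are behind the primary.
rs.printSecondaryReplicationInfo()

// Diagnostic experiment only: disabling flow control trades write throttling
// for potentially unbounded replication lag.
db.adminCommand({ setParameter: 1, enableFlowControl: false })

If disabling flow control makes the slow first batches disappear, that points at the lagging majority commit point, rather than the delete itself, as the root cause.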
I am struggling with write performance in MongoDB 5.0.8 in a PSA (Primary-Secondary-Arbiter) deployment when one data-bearing member goes down.
I am aware of the "Mitigate Performance Issues with PSA Replica Set" page and the procedure to temporarily work around this issue.
However, in my opinion, the manual intervention described there should not be necessary during operation. So what can I do to ensure that the system continues to run efficiently even if a node fails? In other words, behave as MongoDB 4.x did with the option "enableMajorityReadConcern=false".
As I understand it, the problem has something to do with the defaultRWConcern. When configuring a PSA replica set in MongoDB you are forced to set the default read/write concern; otherwise the following message appears when rs.addArb is called:
MongoServerError: Reconfig attempted to install a config that would
change the implicit default write concern. Use the setDefaultRWConcern
command to set a cluster-wide write concern and try the reconfig
again.
So I did
db.adminCommand({
    "setDefaultRWConcern": 1,
    "defaultWriteConcern": {
        "w": 1
    },
    "defaultReadConcern": {
        "level": "local"
    }
})
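The resulting cluster-wide defaults can be read back to verify what actually took effect; a quick check from mongosh:

// Returns both the explicitly configured and the implicit default read/write concerns.
db.adminCommand({ getDefaultRWConcern: 1 })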
I would expect this configuration to cause no lag when reading from/writing to a PSA system with only one data-bearing node available.
But I observe "slow query" messages in the mongod log like this one:
{
"t": {
"$date": "2022-05-13T10:21:41.297+02:00"
},
"s": "I",
"c": "COMMAND",
"id": 51803,
"ctx": "conn149",
"msg": "Slow query",
"attr": {
"type": "command",
"ns": "<db>.<col>",
"command": {
"insert": "<col>",
"ordered": true,
"txnNumber": 4889253,
"$db": "<db>",
"$clusterTime": {
"clusterTime": {
"$timestamp": {
"t": 1652430100,
"i": 86
}
},
"signature": {
"hash": {
"$binary": {
"base64": "bEs41U6TJk/EDoSQwfzzerjx2E0=",
"subType": "0"
}
},
"keyId": 7096095617276968965
}
},
"lsid": {
"id": {
"$uuid": "25659dc5-a50a-4f9d-a197-73b3c9e6e556"
}
}
},
"ninserted": 1,
"keysInserted": 3,
"numYields": 0,
"reslen": 230,
"locks": {
"ParallelBatchWriterMode": {
"acquireCount": {
"r": 2
}
},
"ReplicationStateTransition": {
"acquireCount": {
"w": 3
}
},
"Global": {
"acquireCount": {
"w": 2
}
},
"Database": {
"acquireCount": {
"w": 2
}
},
"Collection": {
"acquireCount": {
"w": 2
}
},
"Mutex": {
"acquireCount": {
"r": 2
}
}
},
"flowControl": {
"acquireCount": 1,
"acquireWaitCount": 1,
"timeAcquiringMicros": 982988
},
"readConcern": {
"level": "local",
"provenance": "implicitDefault"
},
"writeConcern": {
"w": 1,
"wtimeout": 0,
"provenance": "customDefault"
},
"storage": {},
"remote": "10.10.7.12:34258",
"protocol": "op_msg",
"durationMillis": 983
}
The collection involved here is under substantial load, with about 1000 reads and 1000 writes per second from different (concurrent) clients.
MongoDB 4.x with "enableMajorityReadConcern=false" performed "normally" here, and I did not notice any loss of performance in my application. MongoDB 5.x does not manage that, and in my application data is piling up that I cannot write away in a performant way.
So my question is whether I can get the MongoDB 4.x behaviour back. A write guarantee from the single data-bearing node that is available in the failure scenario would be OK for me, but having to manually reconfigure the faulty node in a failure scenario should really be avoided.
Thanks for any advice!
In the end we changed the setup to a PSS layout.
This was also recommended in the MongoDB Community Forum.
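For reference, the membership change itself is an ordinary replica-set reconfiguration. A minimal sketch from mongosh, assuming a new data-bearing node has already been provisioned (the hostnames below are placeholders):

// Remove the arbiter and add a data-bearing secondary in its place (PSA -> PSS).
rs.remove("arbiter.example.net:27017")
rs.add({ host: "secondary2.example.net:27017", priority: 1, votes: 1 })

// Verify the new membership and member states.
rs.status().members.map(m => ({ name: m.name, state: m.stateStr }))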
I want to retrieve the speed limit for a single coordinate (of a road in the USA) using the HERE REST API (not PDE). I only have an api_key (no app code). I searched and found some solutions, but they were deprecated and required the PDE API. If anybody knows the latest solution, please let me know. Thanks. Anyway, I found something that gives the speed limit for two coordinates (not my case), and if I set the two coordinates to be the same, I get a wrong (very small) value for the speed limit, as shown below:
https://router.hereapi.com/v8/routes?apiKey=APP_KEY&transportMode=car&origin=-37.956650,145.220673&destination=-37.956650,145.220673&spans=speedLimit&return=polyline
{
"routes": [
{
"id": "0a903f38-aecd-4850-a87e-0848c7583b8a",
"sections": [
{
"id": "4aa23cb0-0209-4e31-bca8-32f4cfb9f98e",
"type": "vehicle",
"departure": {
"time": "2021-02-25T12:52:34+11:00",
"place": {
"type": "place",
"location": {
"lat": -37.9566675,
"lng": 145.2206532
},
"originalLocation": {
"lat": -37.9566501,
"lng": 145.220673
}
}
},
"arrival": {
"time": "2021-02-25T12:52:34+11:00",
"place": {
"type": "place",
"location": {
"lat": -37.9566675,
"lng": 145.2206532
},
"originalLocation": {
"lat": -37.9566501,
"lng": 145.220673
}
}
},
"polyline": "BG1j2soC6iy_0IAA",
"spans": [
{
"offset": 0,
"speedLimit": 27.7777786
}
],
"transport": {
"mode": "car"
}
}
]
}
]
}
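One thing worth noting about the speedLimit span above: the value appears to be in metres per second rather than km/h or mph (27.7777786 m/s is exactly 100 km/h), which would explain why it seems "very small". Assuming that unit, a tiny conversion sketch in plain JavaScript (the helper name is made up):

// Convert a speedLimit span value, assuming it is reported in metres per second.
function spanSpeedLimit(metresPerSecond) {
  return {
    kmh: metresPerSecond * 3.6,
    mph: metresPerSecond * 2.236936,
  };
}

console.log(spanSpeedLimit(27.7777786)); // { kmh: ~100, mph: ~62.1 }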
Is there any up-to-date REST API service provided by HERE that gives the speed limit for a geo coordinate (not the PDE one)?
I'm new to MongoDB and have 4 different problems importing a big (16 GB) file (JSONL) into my MongoDB (simple PSA cluster).
Attached below you will find a sample entry from the mentioned JSON dump.
With this file, which I get from an external provider, I actually have 4 problems:
"hotel_id" is the key and should normally be (re-)named as "_id"
"hotel_id" should not be treated as string rather than as Number
"location" is not properly formatted (if i understood correctly the MongoDB Manual) as GeoJSON as it should be like
"location": {
"type": "Point",
"coordinates": [-93.26838,37.15845]
}
instead of
"location": {
"coordinates": {
"latitude": 37.15845,
"longitude": -93.26838
}
}
"dates" can this be used to efficiently update just the records which needs to be updated?
So my challenge now is to transform the data according to my needs either before importing or at import time, but in both cases, of course, as quickly as possible.
Therefore I searched a lot for hints and best practices, but I have not been able to find a solution yet, maybe because I'm a beginner with MongoDB.
I played around with "jq" to adjust the data and, for example, add the type that seems to be necessary for the location (point 3), but wasn't really successful:
cat dump.jsonl | ./bin/jq --arg typeOfField Point '.location + {type: $typeOfField}'
Besides that, I was importing a sample dump of roughly 500 MB, which took 1.5 minutes the first time (empty database). If I run it in "upsert" mode it takes roughly 12 hours. So I was also wondering what the best practice is for importing such a big JSON dump.
Any help is appreciated!! :-)
Kind regards,
Lumpy
{
"hotel_id": "12345",
"name": "Test Hotel",
"address": {
"line_1": "123 Test St",
"line_2": "Apt A",
"city": "Test City",
},
"ratings": {
"property": {
"rating": "3.5",
"type": "Star"
},
"guest": {
"count": 48382,
"average": "3.1"
}
},
"location": {
"coordinates": {
"latitude": 22.54845,
"longitude": -90.11838
}
},
"phone": "555-0153",
"fax": "555-7249",
"category": {
"id": 1,
"name": "Hotel"
},
"rank": 42,
"dates": {
"added": "1998-07-19T05:00:00.000Z",
"updated": "2018-03-22T07:23:14.000Z"
},
"statistics": {
"11": {
"id": 11,
"name": "Total number of rooms - 220",
"value": "220"
},
"12": {
"id": 12,
"name": "Number of floors - 7",
"value": "7"
}
},
"chain": {
"id": -2,
"name": "Test Hotels"
},
"brand": {
"id": 2,
"name": "Test Brand"
}
}
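Not a full answer to the import-speed question, but for points 1-3 the reshaping can also be done server-side after a plain import, rather than with jq: load the raw JSONL into a staging collection and run an aggregation that renames and converts the fields. A minimal sketch, with hypothetical collection names hotels_raw and hotels ($merge requires MongoDB 4.2+):

// Rename/convert hotel_id into a numeric _id and rewrite location as a GeoJSON
// Point, then write the result into the "hotels" collection.
db.hotels_raw.aggregate([
  {
    $set: {
      _id: { $toInt: "$hotel_id" },       // points 1 and 2: numeric _id
      location: {
        type: "Point",
        coordinates: [                     // point 3: GeoJSON order is [longitude, latitude]
          "$location.coordinates.longitude",
          "$location.coordinates.latitude"
        ]
      }
    }
  },
  { $unset: "hotel_id" },
  { $merge: { into: "hotels", on: "_id", whenMatched: "replace", whenNotMatched: "insert" } }
])

Whether "dates.updated" (point 4) can drive incremental updates depends on the provider reliably bumping it on every change; if it does, it could be used to filter the staging data before the $merge.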
Consider a document with two arrays, lines and nodes, each containing many sub-documents (additional nested documents not shown for clarity):
{
  "lines": [
    {
      "guid": 1,
      "def": {
        "n1": 49,
        "n2": 50
      }
    },
    ...
  ],
  "nodes": [
    {
      "guid": 49,
      "def": {
        "x": -43.9709,
        "y": 14.8688,
        "z": -121.988
      }
    },
    {
      "guid": 50,
      "def": {
        "x": -30.5955,
        "y": 29.5512,
        "z": -115.024
      }
    },
    ...
  ]
}
How could one query Mongo to return a restructured document of lines, pulling only what we need from nodes?
{
  "lines": [
    {
      "guid": 1,
      "def": {
        "n1": {
          "guid": 49,
          "def": {
            "x": -43.9709,
            "y": 14.8688,
            "z": -121.988
          }
        },
        "n2": {
          "guid": 50,
          "def": {
            "x": -30.5955,
            "y": 29.5512,
            "z": -115.024
          }
        }
      }
    },
    ...
  ]
}
I come from a solid understanding of RDBMS and am new to MongoDB. I've been spending a lot of time hacking away at the aggregation framework to try to achieve this, but have had no luck. Is there something simple I am missing?
I expect an alternate data model might be more appropriate. The above will no doubt start to run up against the 16 MB BSON document limit for the use case in question.
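For what it's worth, since lines and nodes live in the same document, this can be expressed with a $map over lines plus a $filter against nodes. A minimal sketch (the collection name models is a placeholder; $first on arrays requires MongoDB 4.4+, otherwise $arrayElemAt with index 0 does the same):

// For every line, replace the numeric n1/n2 references with the matching node sub-documents.
db.models.aggregate([
  {
    $project: {
      lines: {
        $map: {
          input: "$lines",
          as: "line",
          in: {
            guid: "$$line.guid",
            def: {
              n1: {
                $first: {
                  $filter: {
                    input: "$nodes",
                    as: "node",
                    cond: { $eq: ["$$node.guid", "$$line.def.n1"] }
                  }
                }
              },
              n2: {
                $first: {
                  $filter: {
                    input: "$nodes",
                    as: "node",
                    cond: { $eq: ["$$node.guid", "$$line.def.n2"] }
                  }
                }
              }
            }
          }
        }
      }
    }
  }
])

That said, the concern about the 16 MB BSON limit is valid; if the arrays keep growing, splitting lines and nodes into their own collections and joining with $lookup is the more common model.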