How are the "after" and "before" methods implemented in Google Spanner TrueTime? - distributed-computing

According to their paper, "The TT.after() and TT.before() methods are convenience wrappers around TT.now()."
And according to "What is the TrueTime API in Google's Spanner?":
It also provides two functions:
after(t) returns true if t has definitely passed, i.e. t < now().earliest.
before(t) returns true if t has definitely not arrived, i.e. t > now().latest.
My questions are:
On all servers in Spanner, does TT.now() return the same result?
For a given time t, is it possible that before(t) is true on server A and false on server B?
Are they monotonic? E.g. if TT.after(t) is true on server A, is it possible that some time later TT.after(t) is false?

For details of how TrueTime and Spanner work, look at section 3 of the Spanner Whitepaper (1).
A discussion of how it could be implemented is in the question "Why is Google's TrueTime API hard to duplicate?" (2)
From the Spanner Whitepaper, a TrueTime value is not a single value but a timestamp range which is guaranteed to contain the absolute value. This range takes into account the potential clock drift, which in Google's network, where server clocks are synced to the atomic/GPS reference time every 30 seconds, is up to 7 ms (from the whitepaper).
So if TTstamp1 is the range (t1_lo, t1_hi) and TTstamp2 is the range (t2_lo, t2_hi), then before() and after() simply compare the extremes of these ranges to confirm that they do not overlap:
TTstamp1.before(TTstamp2) = t1_hi < t2_lo
TTstamp1.after(TTstamp2) = t1_lo > t2_hi
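As a concrete illustration, here is a minimal TypeScript sketch of those interval comparisons, based only on the whitepaper's description (my own sketch, not Google's actual implementation):

// Minimal sketch of TrueTime's interval semantics -- illustration only.
interface TTInterval {
  earliest: number; // lower bound on absolute time, e.g. in microseconds
  latest: number;   // upper bound on absolute time
}

// True only if the ranges do not overlap and t1 lies entirely before t2.
function before(t1: TTInterval, t2: TTInterval): boolean {
  return t1.latest < t2.earliest;
}

// True only if the ranges do not overlap and t1 lies entirely after t2.
function after(t1: TTInterval, t2: TTInterval): boolean {
  return t1.earliest > t2.latest;
}

// TT.now() on a given server: the local clock reading widened by that
// server's current uncertainty epsilon (up to ~7 ms in Google's network).
function ttNow(localClock: number, epsilon: number): TTInterval {
  return { earliest: localClock - epsilon, latest: localClock + epsilon };
}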
The answers to your questions are therefore:
No, TT.now() does not return the same result on all servers, even if called at the exact same instant. However, the ranges obtained on all servers at that instant will overlap each other, meaning that none of them is before or after any other.
So for a given TrueTime t, it is technically possible that t.before(now) is true on server A and false on server B, due to the overlap comparison and the differences between the possible ranges. This is not a problem for Spanner, because it will wait until there is no overlap (t.before(now) == true) before committing a transaction and storing its timestamp.
(Note: this information is derived from the public whitepapers and documentation)
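To illustrate that waiting step, here is a rough sketch of the idea using the hypothetical helpers above (again my own illustration, not Spanner's actual code):

// Hypothetical commit-wait loop: a transaction stamped with timestamp s
// is only committed once s has definitely passed on this server.
async function commitWait(s: TTInterval, now: () => TTInterval): Promise<void> {
  while (!after(now(), s)) {
    // Wait out the clock uncertainty before making the write visible.
    await new Promise((resolve) => setTimeout(resolve, 1));
  }
}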

According to the definitions above, TT.after(t) is TT.now().earliest > t, so I think it follows that TT.before(t) is TT.now().latest < t.
Trying to answer my own questions:
TT.now() does not return the same result on all servers at the same time.
Yes, it's possible. Even in the case of TT.after(), this means time t has definitely passed only on some servers.
I don't know. If it is possible, it would mean that t has definitely passed and then later is no longer definitely passed, which sounds a little weird.
I think "definitely" in their description is a little misleading; they should simply say "higher than the latest" or "lower than the earliest".


PATCH endpoint naming and realisation

We have a REST resource
/tasks/{task-type}
with only GET methods available:
GET /tasks/{task-type}
GET /tasks/{task-type}/{id}
The Task entity contains meta info like created, finished, status, ref key, and try counts for scheduled tasks.
Now we are facing a problem: a task may contain incorrect data, so its execution always fails.
Because the scheduler invokes tasks every 5 minutes, there are a lot of errors in the logs, and the largest try counts are around 500k. The solution I found is to limit try_count to five (for example). Now we need a way to manually reset the try count to zero. So I found two solutions:
1.
PATCH /tasks/{task-type}/{id}/discard-try-count - no response body
This solution looks pretty simple, but it violates the REST convention, because we use an action (verb) in the name. And if we need to change other fields, we will end up with a lot of endpoints in this style.
2a.
PATCH /tasks/{task-type}/{id}
body:
{
"tryCounts": int
}
This looks like what REST wants to see, and we can easily add new fields to modify, but now the client can set any value for tryCounts.
2b
PATCH /tasks/{task-type}/{id}
body:
{
"tryCounts": int // validate that try count can be only zero
}
It differs from the previous one by the presence of validation.
This looks like the most reliable solution. Is it really the best fit?
The non-verb convention is not a standard; you can violate it if you want to. It can also be worked around with something very simple: just convert the verb into a noun and you will be OK, something like:
POST /tasks/{task-type}/{id}/try-count-discarding
Another way is setting the try count to zero:
PUT /tasks/{task-type}/{id}/try-count 0
Yet another solution is combining the two, which I like the most:
PATCH /tasks/{task-type}/{id}/try-count {"op": "reset"}
Or another variant:
PATCH /tasks/{task-type}/{id} {"op": "discard-try-count"}
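For illustration, a minimal Express-style handler for the op-based variant could look like the following TypeScript sketch (resetTryCount and the exact route shape are my assumptions, not an established API):

import express from "express";

const app = express();
app.use(express.json());

// Stand-in for the real persistence layer.
function resetTryCount(taskType: string, id: string): void {
  // ... update the task row, setting try_count = 0 ...
}

// Only a reset is allowed; any other value or operation is rejected,
// which addresses the validation concern from option 2b.
app.patch("/tasks/:taskType/:id", (req, res) => {
  const { op, tryCounts } = req.body;
  if (op === "discard-try-count" || tryCounts === 0) {
    resetTryCount(req.params.taskType, req.params.id);
    return res.sendStatus(204);
  }
  return res.status(400).json({ error: "only resetting tryCounts to 0 is supported" });
});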

how do duckduckgo spice IA secondary API calls get their parameters?

I have been looking through the spice instant answer source code. Yes, I know it is in maintenance mode, but I am still curious.
The documentation makes it fairly clear that the primary spice to API call gets its numerical parameters $1, $2, etc. from the handle function.
My question: when there are secondary API calls included with spice alt_to, as, say, in the movie spice IA, where do the numerical parameters to those API calls come from?
Note, for instance, the $1 in both the movie_image and cast_image secondary API calls in spice alt_to at the preceding link. I am asking which regex capture returns those instances of $1.
I believe I see how this works now. The flow of information is still a bit murky to me, but at least I see how all of the requisite information is there.
I'll take the cryptocurrency instant answer as an example. The alt_to element in the perl package file at that link has a key named cryptonator. The corresponding .js file constructs a matching endpoint:
var endpoint = "/js/spice/cryptonator/" + from + "/" + to;
Note the general shape of the "remainder" past /js/spice/cryptonator: from/to, where from and to will be two strings.
Back in the perl package, the hash alt_to->{cryptonator} has a key named from, which receives, I think, this remainder from/to. The value corresponding to that key is a regex meant to split that string into its two constituents:
from => '([^/]+)/([^/]*)'
Applied to from/to, that regex will return $1=from and $2=to. These, then, are the $1 and $2 that go into
to => 'https://api.cryptonator.com/api/full/$1-$2'
in alt_to.
In short:
The to field of alt_to->{blah} receives its numerical parameters by having the from regex operate on the remainder past /js/spice/blah/ of the name of the corresponding endpoint constructed in the relevant .js file.
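To make the capture mechanics concrete, the same regex can be exercised in plain TypeScript/JavaScript (illustration only; the real substitution happens in the spice Perl backend):

// The from regex '([^/]+)/([^/]*)' applied to a remainder like "btc/usd":
const remainder = "btc/usd"; // e.g. from the endpoint /js/spice/cryptonator/btc/usd
const m = remainder.match(/([^/]+)\/([^/]*)/);
if (m) {
  // m[1] === "btc" -> substituted for $1
  // m[2] === "usd" -> substituted for $2
  const url = `https://api.cryptonator.com/api/full/${m[1]}-${m[2]}`;
  // -> "https://api.cryptonator.com/api/full/btc-usd"
}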

Movesense, setting the system time

I am trying to set the system time in Movesense. I couldn't find an example of that, but based on the documentation I think that this should do:
asyncPut(WB_RES::LOCAL::TIME(),
         AsyncRequestOptions::Empty,
         (int64_t)0);
In this case, I'm just trying to reset the epoch to zero but onPutResults gives me
HTTP_CODE_BAD_REQUEST
So what is the right way?
The minimum timestamp seems to be 1483228800000000 us, which corresponds to 1.1.2017. So you can't set the time to the 70's, as zero would set it to.
This should be documented in the yaml api but currently is not. We will add that to the list of tasks to make sure it's documented in the next release of device-lib.
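As a quick sanity check of that number (plain TypeScript, not Movesense API code):

// 1.1.2017 00:00 UTC expressed in microseconds since the Unix epoch:
const minTimestampUs = Date.UTC(2017, 0, 1) * 1000; // ms -> us
console.log(minTimestampUs); // 1483228800000000
// So the value passed to the TIME resource must be at least this large;
// e.g. the current time, Date.now() * 1000, would be accepted.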

How does resource.data.size() work in firestore rules (what is being counted)?

TLDR: What is request.resource.data.size() counting in the firestore rules when writing, say, some booleans and a nested Object to a document? Not sure what the docs mean by "entries in the map" (https://firebase.google.com/docs/reference/rules/rules.firestore.Resource#data, https://firebase.google.com/docs/reference/rules/rules.Map) and my assumptions appear to be wrong when testing in the rules simulator (similar problem with request.resource.data.keys().size()).
Longer version: I'm running into a problem in Firestore rules where I'm not able to update data as expected (despite similar tests working in the rules simulator). I've narrowed down the problem to the point where I can see that it is caused by a rule checking that request.resource.data.size() equals a certain number.
An example of the data being passed to the firestore update function looks like
Object {
"parentObj": Object {
"nestedObj": Object {
"key1": Timestamp {
"nanoseconds": 998000000,
"seconds": 1536498767,
},
},
},
"otherKey": true,
}
where the timestamp is generated via firebase.firestore.Timestamp.now().
This appears to work fine in the rules simulator, but not for the actual data when doing
let obj = {}
obj.otherKey = true
// since want to set object key name dynamically as nestedObj value,
// see https://stackoverflow.com/a/47296152/8236733
obj.parentObj = {} // needed for adding nested dynamic keys
obj.parentObj[nestedObj] = {
key1: fb.firestore.Timestamp.now()
}
firebase.firestore().collection('mycollection')
.doc('mydoc')
.update(obj)
Among some other rules, I use the rule request.resource.data.size() == 2, and this appears to be the rule that causes a permission-denied error (since commenting out this rule gets things working again). I would think that since the object is being passed with 2 (top-level) keys, request.resource.data.size() would be 2, but this is apparently not the case (nor is it the total number of keys in the passed object) (similar problem with request.resource.data.keys().size()). So that's a long example for a short question. It would be very helpful if someone could clarify what is going wrong here.
From my last communications with Firebase support around a month ago, there were issues with request.resource.data.size() and timestamp-based security rules for queries.
I was also told that request.resource.data.size() is the size of the document AFTER a successful write. So if you're writing 2 additional keys to a document with 4 keys, the value you should be checking against is 6, not 2.
Having said all that, I am still having problems with request.resource.data.size() and alternatives such as request.resource.size(), which seems to be used in this documentation:
https://firebase.google.com/docs/firestore/solutions/role-based-access
I also have some places in my security rules where it seems to work. I personally don't know why that is though.
I'd been struggling with that for a few hours, and I see now that the Firebase doc is clear: "the request.resource variable contains the future state of the document". So it contains ALL the fields, not only the ones being sent.
https://firebase.google.com/docs/firestore/security/rules-conditions#data_validation.
But there is actually another way to count ONLY the fields being sent: request.writeFields.size(). The property writeFields is a list of all the incoming fields.
Beware: writeFields is deprecated and may stop working anytime, but I have not found any replacement.
EDIT: writeFields apparently does not work in the simulator anymore...
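To make the "future state" semantics concrete, here is a small TypeScript sketch of what the rule is effectively counting (hypothetical data; shown for non-overlapping top-level keys):

// Existing document with 4 top-level keys:
const existing = { a: 1, b: 2, c: 3, d: 4 };
// Update payload with 2 top-level keys:
const update = { otherKey: true, parentObj: { nestedObj: { key1: "..." } } };

// request.resource.data is the post-write document, so size() counts the
// top-level keys after the merge: 6 here, not 2.
const futureState = { ...existing, ...update };
console.log(Object.keys(futureState).length); // 6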

How to use OSRM's match service

As stated in the header: how can I use the match call?
I tried
http://router.project-osrm.org/match/v1/driving/8.610048,46.99917;8.530232,47.051?overview=full&radiuses=49;49
I am not sure whether the list of radiuses is given correctly.
I can't get it to work. I also tried [49;49] and {49;49}. The command works with route:
http://router.project-osrm.org/route/v1/driving/8.610048,46.99917;8.530232,47.051?overview=full
For background see here.
Edit: If you look at the example here, it seems the timestamps are not needed: /match/v1/{profile}/{coordinates}?steps={true|false}&geometries={polyline|polyline6|geojson}&overview={simplified|full|false}&annotations={true|false}
From the docs:
Large jumps in the timestamps (> 60s) or improbable transitions lead to trace splits if a complete matching could not be found.
I think that's the problem with your request. The two given points are more than 60s apart, and OSRM cannot match them successfully. The radiuses are specified correctly.
The following query works for me:
http://router.project-osrm.org/match/v1/driving/8.610048,46.99917;8.620048,46.99917?overview=full&radiuses=49;49
This returns:
{"tracepoints":[{"location":[8.610971,46.998963],"name":"Alte Kantonstrasse","hint":"GKUFgJEhBwAAAAAAHQAAAAAAAAC5AAAAAAAAAB0AAAAAAAAAuQAAAPsCAACbZIMAsyXNAgBhgwCCJs0CAAAPABki8hY=","matchings_index":0,"waypoint_index":0,"alternatives_count":0},{"location":[8.620295,46.999681],"name":"Schönenbuchstrasse","hint":"nIEFAJ7IFIA3AAAAZAAAAAAAAADYAAAANwAAAGQAAAAAAAAA2AAAAPsCAAAHiYMAgSjNAhCIgwCCJs0CAAAPABki8hY=","matchings_index":0,"waypoint_index":1,"alternatives_count":5}],"matchings":[{"distance":922.3,"duration":114.1,"weight":114.1,"weight_name":"routability","geometry":"onz}Gqyps#Wg#S_#aCaFMUYo#c#w#OKOCWmAWs#aBiDsAsCMYH[HY\\_#h#ObBW^w#BQAUKu#ASF[ZaABOFYpAyIf#mD","confidence":0.000982,"legs":[{"distance":922.3,"duration":114.1,"weight":114.1,"summary":"","steps":[]}]}],"code":"Ok"}
So the two given input points 8.610048,46.99917 and 8.620048,46.99917 are matched to 8.610971,46.998963 and 8.620295,46.999681.
So as far as I can see, if you want to implement something like that, you need to give OSRM more input points along the way, spaced less than 60s apart.
See also here for an explanation of the differences between the route and match services.
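For completeness, a small TypeScript sketch of building such a match request from a densified trace (the intermediate coordinate is illustrative; the radiuses format matches the working query above):

// Build an OSRM match URL from a GPS trace. Consecutive points should be
// close together (less than ~60 s apart), or OSRM may split the trace.
const trace: [number, number][] = [
  [8.610048, 46.99917],
  [8.615048, 46.99940], // intermediate points keep the gaps small
  [8.620048, 46.99917],
];
const coords = trace.map(([lon, lat]) => `${lon},${lat}`).join(";");
const radiuses = trace.map(() => 49).join(";");
const url = `http://router.project-osrm.org/match/v1/driving/${coords}` +
  `?overview=full&radiuses=${radiuses}`;
// fetch(url).then((r) => r.json()).then(console.log);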