How to use OSRM's match service

As stated in the title: how can I use the match call?
I tried
http://router.project-osrm.org/match/v1/driving/8.610048,46.99917;8.530232,47.051?overview=full&radiuses=49;49
I am not sure whether the list of radiuses is given correctly.
I can't get it to work. I also tried [49;49] and {49;49}. The command works with the route service:
http://router.project-osrm.org/route/v1/driving/8.610048,46.99917;8.530232,47.051?overview=full
For background, see here.
Edit: If you look at the example here, it seems the timestamps are not needed: /match/v1/{profile}/{coordinates}?steps={true|false}&geometries={polyline|polyline6|geojson}&overview={simplified|full|false}&annotations={true|false}

From the docs:
Large jumps in the timestamps (> 60s) or improbable transitions lead to trace splits if a complete matching could not be found.
I think that's the problem with your request. The two given points are more than 60s apart and OSRM cannot match them successfully. The radiuses are specified correctly.
The following query works for me:
http://router.project-osrm.org/match/v1/driving/8.610048,46.99917;8.620048,46.99917?overview=full&radiuses=49;49
This returns:
{"tracepoints":[{"location":[8.610971,46.998963],"name":"Alte Kantonstrasse","hint":"GKUFgJEhBwAAAAAAHQAAAAAAAAC5AAAAAAAAAB0AAAAAAAAAuQAAAPsCAACbZIMAsyXNAgBhgwCCJs0CAAAPABki8hY=","matchings_index":0,"waypoint_index":0,"alternatives_count":0},{"location":[8.620295,46.999681],"name":"Schönenbuchstrasse","hint":"nIEFAJ7IFIA3AAAAZAAAAAAAAADYAAAANwAAAGQAAAAAAAAA2AAAAPsCAAAHiYMAgSjNAhCIgwCCJs0CAAAPABki8hY=","matchings_index":0,"waypoint_index":1,"alternatives_count":5}],"matchings":[{"distance":922.3,"duration":114.1,"weight":114.1,"weight_name":"routability","geometry":"onz}Gqyps#Wg#S_#aCaFMUYo#c#w#OKOCWmAWs#aBiDsAsCMYH[HY\\_#h#ObBW^w#BQAUKu#ASF[ZaABOFYpAyIf#mD","confidence":0.000982,"legs":[{"distance":922.3,"duration":114.1,"weight":114.1,"summary":"","steps":[]}]}],"code":"Ok"}
So the two given input points 8.610048,46.99917 and 8.620048,46.99917 are matched to 8.610971,46.998963 and 8.620295,46.999681.
So as far as I can see, if you want to implement something like that, you need to give OSRM more input points along the way, spaced less than 60s apart.
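For illustration, a minimal sketch of such a denser request against the public demo server; the intermediate coordinates and the timestamps are placeholders I made up, not real trace data:
import requests
# Made-up intermediate samples, roughly 30s apart; replace with real GPS points.
coords = [(8.610048, 46.99917), (8.613382, 46.99940), (8.616715, 46.99955), (8.620048, 46.99917)]
timestamps = [1483272000, 1483272030, 1483272060, 1483272090]
coord_str = ";".join(f"{lon},{lat}" for lon, lat in coords)
url = ("http://router.project-osrm.org/match/v1/driving/" + coord_str
       + "?overview=full"
       + "&radiuses=" + ";".join(["49"] * len(coords))
       + "&timestamps=" + ";".join(str(t) for t in timestamps))
resp = requests.get(url)
print(resp.json()["code"])  # "Ok" when the trace could be matched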
See also here for an explanation about the differences between route and match service.

Related

PATCH endpoint naming and realisation

We have rest resource
/tasks/{task-type}
and only GET methods available.
GET /tasks/{task-type}
GET /tasks/{task-type}/{id}
Task entity contains meta info like created, finished, status, ref key and try counts for scheduled tasks.
Now we are faced with a problem: a task may contain incorrect data, so its execution always fails.
Because the scheduler invokes tasks every 5 minutes, the logs fill up with errors, and the largest try counts are around 500k. The solution I found is to limit try_count to five (for example). Now we need a way to manually reset the try count to zero. I see two solutions:
1.
PATCH /tasks/{task-type}/{id}/discard-try-count - no response body
This solution looks pretty simple, but it violates the REST convention because we use an action (verb) in the name. And if we need to change other fields, we will end up with a lot of endpoints in this style.
2a.
PATCH /tasks/{task-type}/{id}
body:
{
"tryCounts": int
}
This looks like what REST wants to see, and we can easily add new fields to modify, but now the client can set any value for tryCounts.
2b
PATCH /tasks/{task-type}/{id}
body:
{
"tryCounts": int // validate that try count can be only zero
}
It differs from the previous one only by the presence of validation.
This looks like the most reliable solution. Is it really the best fit?
The non-verb convention is not a standard; you can violate it if you want to. It can also be worked around very simply: just convert the verb into a noun and you will be fine, something like:
POST /tasks/{task-type}/{id}/try-count-discarding
Another way is setting the try count to zero:
PUT /tasks/{task-type}/{id}/try-count 0
Yet another solution is combining the two, which I like the most:
PATCH /tasks/{task-type}/{id}/try-count {"op": "reset"}
Or another variant:
PATCH /tasks/{task-type}/{id} {"op": "discard-try-count"}
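To make the last variant concrete, here is a minimal sketch of how such a handler could look; Flask is just an arbitrary example framework, and the in-memory task store is a placeholder:
from flask import Flask, request, jsonify, abort
app = Flask(__name__)
# Placeholder in-memory store; real tasks would come from your database.
tasks = {("scheduled", 1): {"tryCounts": 500000, "status": "FAILED"}}
@app.route("/tasks/<task_type>/<int:task_id>", methods=["PATCH"])
def patch_task(task_type, task_id):
    task = tasks.get((task_type, task_id))
    if task is None:
        abort(404)
    op = request.get_json(force=True).get("op")
    if op == "discard-try-count":
        task["tryCounts"] = 0  # the operation, not the client, decides the value
        return jsonify(task)
    abort(400, description="unsupported op: %s" % op)
Here the operation name, not the client, decides which value gets written, so the "any value for tryCounts" problem from option 2a disappears.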

How do I reduce the number of 301 redirect entries using wildcards and variables in Squarespace?

I recently renamed all of the URLs that make up my blog and have written redirects for almost every page, using wildcards where I can. Keep in mind that all I know is the * wildcard at this time.
Here is an example of what I have...
/season-1/2017/1/1/snl-s01e01-host-george-carlin -> /season-1/snl-s01e01-george-carlin 301
I want to write a catch-all that will redirect all 38 seasons of reviews with one redirect entry, but I can't figure out how to get rid of just the word "host" between s01e01- and -george-carlin. I was thinking it would work something like this:
/season-*/*/*/*/snl-s*e*-host-*-* -> /season-*/snl-s*e*[code to remove the word "host"]-*-* 301
Is that even close to being correct? Do I need that many *s?
Thanks in advance for any help.
Unfortunately, you won't be able to reduce the number of individual redirect entries using the redirect features that Squarespace has to offer, namely the wildcard (*) and a single variable ([name]). Multiple variables would be needed, but only [name] is supported.
The closest you can get is:
/season-1/*/*/*/snl-s01e01-host-[name] -> /season-1/snl-s01e01-[name] 301
But, if I'm understanding things, while the above redirect appears more general, it would still need to be copy/pasted for each post individually. So although it demonstrates the best that could be achieved, it is not a technical improvement.
Therefore, there are only two alternatives:
Create a Google Sheet (or other spreadsheet) where the old URLs are copy/pasted into column one, a formula using arrayformula and regular expressions to parse the old URL and generate the new URL is added in column two, and in column three a formula is written to join the two cells with -> and 301. With that done, you could click, drag and highlight all cells in column three, copy, and paste them into the "URL Shortcuts" text area in Squarespace. It can be quite time consuming to figure out, write and test the correct formula, but it does avoid having to manually type out every redirect. Whether it is less time/effort in total depends on the number of redirects and one's proficiency with writing spreadsheet formulas. Using the redirect code above might simplify the formula that would need to be written in the spreadsheet, which may save some time. (A scripted equivalent of this approach is sketched after these alternatives.)
Another alternative would be to remove your redirects and instead handle the redirect via JavaScript added to the 404/Page-not-found page. Because it sounds like you already have all of the redirects in place but are simply trying to reduce the overall number, I wouldn't recommend changing to a JavaScript-based approach. There are other drawbacks to using JavaScript, in any case.
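If spreadsheet formulas feel unwieldy, the same one-off generation of redirect lines could also be scripted. A rough sketch in Python, assuming all of the old URLs are shaped exactly like the example in the question:
import re
old_urls = [
    "/season-1/2017/1/1/snl-s01e01-host-george-carlin",
    # ...paste the remaining old URLs here
]
pattern = re.compile(r"^/(season-\d+)/\d+/\d+/\d+/(snl-s\d+e\d+)-host-(.+)$")
for old in old_urls:
    m = pattern.match(old)
    if m:
        # prints e.g. "/season-1/2017/1/1/snl-s01e01-host-george-carlin -> /season-1/snl-s01e01-george-carlin 301"
        print("%s -> /%s/%s-%s 301" % (old, m.group(1), m.group(2), m.group(3)))
The output lines can then be pasted into the "URL Shortcuts" text area in one go.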

Using "is null" when retrieving prices doesn't work

I am trying to get the price items for performance block storage that are generic (not specific to a certain datacenter). I can see that these have the locationGroupId set to blank or null, but I can't seem to get the objectFilter to work with that; the query returns nothing. If I omit the locationGroupId filter, I get a result that contains both location-specific and non-location-specific prices.
GET /rest/v3.1/SoftLayer_Product_Package/759/getItemPrices.json?objectMask=mask[locationGroupId,id,categories,item]&objectFilter={"itemPrices":{"categories":{"categoryCode":{"operation":"performance_storage_space"}},"item":{"keyName":{"operation":"$=GBs"}},"locationGroupId":{"operation":"is null"}}}
I am guessing there is something wrong with the object filter, any ideas?
If I filter on locationGroupId 509 it works:
/rest/v3.1/SoftLayer_Product_Package/759/getItemPrices.json?objectMask=mask[locationGroupId,id,categories,item]&objectFilter={"itemPrices":{"categories":{"categoryCode":{"operation":"performance_storage_space"}},"item":{"keyName":{"operation":"$=GBs"}},"locationGroupId":{"operation":509}}}
The reason the first query didn't work while the second did was that I used "curl -sg" to make the request. While that eliminates the need to escape the {}[] characters, it also means other characters in the URL, like the space in "is null", are not escaped correctly. Changing that to "is%20null" solves the issue.
I am posting this as the answer since others are likely to encounter this problem.
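For anyone scripting this instead of using curl, a rough sketch that percent-encodes the filter up front; the credentials are placeholders, and the filter is the one from the question:
import json, urllib.parse, requests
object_filter = {"itemPrices": {"categories": {"categoryCode": {"operation": "performance_storage_space"}},
                                "item": {"keyName": {"operation": "$=GBs"}},
                                "locationGroupId": {"operation": "is null"}}}
base = "https://api.softlayer.com/rest/v3.1/SoftLayer_Product_Package/759/getItemPrices.json"
url = (base + "?objectMask=" + urllib.parse.quote("mask[locationGroupId,id,categories,item]")
       + "&objectFilter=" + urllib.parse.quote(json.dumps(object_filter)))  # space becomes %20
resp = requests.get(url, auth=("SL_USERNAME", "SL_API_KEY"))  # placeholder credentials
print(resp.status_code)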

Grok filter for a time counter HH:MM

I'm quite new to ELK and Grok-filtering, and I'm struggling with parsing this particular pattern in my grok filter.
I've used the grok debugger to try and solve this, but although I like the tool, I just get confused by the custom patterns.
Eventually, I hope to parse lots of log files sent by filebeat to logstash, then send the parsed logs to elasticsearch and display with kibana or some similar visualization tool.
The lines that I need to parse follow the following pattern:
1310 2017-01-01 16:48:54 [325:51] [326:49] [359:57] Some log info text
The first four digits are a log type identifier and will be used for grouping. I've called the field "LogLineID".
The date is formatted YYYY-MM-DD HH:MM:SS, and is parsed ok. I called the field "LogDate".
But now the problem begins. Within the square brackets, I have counters, formatted as MM:SS if you like. I cannot for the life of me find a way to sort these out, but I need to compare these times, hence I want to store them as minutes and seconds, not just numbers.
The first is a counter "TimeSpent",
the second is a counter "TimeStarted" and
the third is a counter "TimeSinceDown".
Then, last, comes the info text, which I've managed to grok with simply applying %{GREEDYDATA:LogInfo}.
I notice that the number of minutes can be far higher than the standard 60 minutes within an hour, so I may be barking up the wrong tree trying to parse it with date patterns such as TIMESTAMP_ISO8601, but then I don't really know how else to do this.
So, I came this far:
%{NUMBER:LogLineID} %{TIMESTAMP_ISO8601:LogDate}
and, as mentioned, was able (by cutting away the square-bracket parts) to parse the log info text with
%{GREEDYDATA:LogInfo}
to create a field LogInfo.
But that's where I'm stuck. Could someone please help me figure out the rest?
Massive thanks in advance.
PS! I also found %{NUMBER:duration}, but as far as I could tell it can only parse timestamps with a dot, not a colon.
A grok regex expression can help you solve the problem.
But first I want to make sure: do you mean that [325:51] [326:49] [359:57] are the three components you want to fetch? So it would return results like:
TimeSpent: 325:51
TimeStarted: 326:49
TimeSinceDown: 359:57
If I have understood that correctly, you can use one of the following approaches:
Define your own custom pattern file and add the pattern to it.
Just use the expression in the filter part of the Logstash conf file.
Hope it helps.
Ah, there was a space. Actually, I was misleading myself and everybody in my question, as it was not actually that log line that was causing problems. I just took the first one, not realizing where the problem really was, but the one causing problems had a space within the brackets, like this: [ 42:31]. There are also some parts where there are two spaces, so the way I managed to solve this was to include a %{SPACE} between the \[ and the %{NUMBER}:
%{NUMBER:LogLineID} %{TIMESTAMP_ISO8601:LogDate} \[%{SPACE}%{NUMBER:TimeSpentMinutes}\:%{NUMBER:TimeSpentSeconds}\] \[%{SPACE}%{NUMBER:TimeStartedMinutes}\:%{NUMBER:TimeStartedSeconds}\] \[%{SPACE}%{NUMBER:TimeSinceDownMinutes}\:%{NUMBER:TimeSinceDownSeconds}\] %{GREEDYDATA:LogText}
I still haven't solved the merging of minutes and seconds, but this I can also handle in a later stage.
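For reference, the merging itself is just minutes times 60 plus seconds. A tiny sketch of that arithmetic, in Python purely for illustration; in practice it could live in a Logstash ruby filter or be done downstream:
def to_total_seconds(minutes, seconds):
    return int(minutes) * 60 + int(seconds)
# e.g. the TimeSpent counter [325:51]
print(to_total_seconds("325", "51"))  # 19551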
Thanks to Lin Don for showing an interest in my problem, and sorry for not replying sooner.
Hope the solution will help others (or even myself) if they're stuck on the same kind of problem.
Note to myself: Read the logs more carefully before grok'ing.. :)

How to filter by both text and property in Chrome DevTools' network panel?

I want to filter Chrome DevTools' network panel by the method property and by text in the URL. For example, I might search for the text chromequestion in the URL and show only HTTP GET requests (ignoring PUT, POST, DELETE, etc.).
I am able to filter by text or by method. However, I am not able to combine the filters to search by both text and method at the same time.
I read the documentation at https://developers.google.com/web/tools/chrome-devtools/network-performance/reference#filters and I am able to filter by multiple properties (e.g., domain:*.com method:GET). However, I am unable to filter by text and property (e.g., method:GET chromequestion).
Unfortunately, it's not possible to do this currently. I played around in DevTools originally, but couldn't find a way. I later had a look into how the filtering was implemented, and can confirm there's a limitation preventing you from mixing the pre-defined filters and text filters.
Implementation details
This is a bit long, but I thought it might be interesting for some to see how it's implemented. I will probably look into improving the implementation, either by doing it myself or by logging an issue, because it's limited as it stands.
There's a _parseFilterQuery function that parses the input field and categorises the entries into two arrays. The first is called filters, and it holds the pre-defined filtering options, such as method:GET. The second is an array of text filters, split up by spaces. The parser determines the difference fairly naively, by checking for the occurrence of : and for - at the start (for negation).
Scenario 1
You only input a pre-defined filter, or multiple filters. For each filter, the specific filter function, which looks at the different properties of the request object, is pushed to a network module filters array (this._filters). Later on, for each request, the function is called on it, and a match returns true, otherwise false. This will determine whether the request is shown. There's obviously a requirement for ALL filters to return true for the row to show.
Scenario 2
This is the interesting one, where you input both a pre-defined filter and a bit of text. This covers the Stack Overflow question. The _parseFilterQuery function looks at the text filters first, before the pre-defined ones. In Scenario 1, this was empty, so it was skipped.
We pass each text word to the _createTextFilter, and push each of the resulting filters to the network module filters array. However, the implementation of this is questionable. The only time the actual word passed in is used is to check whether it's a negation filter for a bit of text. If the first character is -, it means the user doesn't want to see a request with the following word in the name. For example, -icon means don't show any request with that in the name/page. If there is no negation, it simply returns the WHOLE input text as a regular expression, NOT the word passed in. In my case, it returns /method:GET icon/i.
The pre-defined filters are looked at next. In this case, method:GET is pushed.
Finally, it loops over the requests calling each filter on it. However, since the first filter is /method:GET icon/i, it makes ALL other filters redundant because it will NEVER pass. The text filters only apply to name and path, so method:GET in a text filter will be invalid.
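To make the limitation concrete, here is a rough Python model of the parsing behaviour described above; it is an approximation of the logic, not the actual DevTools source:
import re
def parse_filter_query(query):
    predefined, text = [], []
    for part in query.split():
        # naive split: anything containing ":" is treated as a pre-defined filter
        (predefined if ":" in part else text).append(part)
    # per the description above, the text filter regex is built from the WHOLE query,
    # not from the individual word, so it can never match a request name
    text_regex = re.compile(re.escape(query), re.IGNORECASE) if text else None
    return predefined, text_regex
predefined, text_regex = parse_filter_query("method:GET chromequestion")
print(predefined)                                  # ['method:GET']
print(bool(text_regex.search("chromequestion")))   # False, so the row is always hidden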