Routing using OSRM for multiple profiles - does profile in the URL actually do anything?

OSRM ships with three profiles for different modes of transport: car, bicycle and foot.
According to the following post, made a year ago, OSRM does not support multiple profiles:
OSM routing (OSRM): do I need to duplicate all data for different profiles?
Yet in the official documentation there is a profile argument as part of the URL called for retrieving a route from a running OSRM instance:
http://project-osrm.org/docs/v5.6.4/api/#general-options
The path would look something like this:
http://router.project-osrm.org/route/v1/driving/
Without driving, foot or cycle in the URL a route won't be retrieved, so one of them is required by the API. Yet if I prepare the data with the car profile on the server but then use /foot/ in the URL, I still get a car-based route; the 'foot' segment is ignored completely.
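For illustration (host, port and coordinates below are placeholders), both of the following return the identical car-based route when the running instance was prepared with the car profile:

  curl 'http://127.0.0.1:5000/route/v1/driving/13.388860,52.517037;13.397634,52.529407'
  curl 'http://127.0.0.1:5000/route/v1/foot/13.388860,52.517037;13.397634,52.529407'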
Could anybody from OSRM explain why something as useful as multiple-profile support has been withdrawn, and what the point of 'driving' in the above URL is, seeing as it is ignored anyway and the route simply uses whatever profile the running OSRM instance was built with?
The solution to the problem of multiple profiles appears to be to host parallel copies of the routing machine, one per profile, addressed on different IPs or ports. So again, what is the point of 'profile' in the URL?

Could anybody from OSRM explain why something as useful as multiple profile support has been withdrawn
The support has never been there. You will need to run a separate OSRM instance for each profile.
The URL option is merely there to make it easier to put an nginx in front of your OSRM instances and dispatch to the correct instance based on the profile string (see the sketch below).
We might implement multiple profiles in the same OSRM instance in the future, but this is still far out.
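A minimal sketch of that nginx setup, assuming one osrm-routed process per profile (the ports and upstream names are illustrative, and the profile strings are just whatever you choose to match on):

  upstream osrm_car  { server 127.0.0.1:5000; }
  upstream osrm_bike { server 127.0.0.1:5001; }
  upstream osrm_foot { server 127.0.0.1:5002; }

  server {
      listen 80;
      # Dispatch on the profile segment of the route URL; each backend
      # ignores the segment and answers with its own profile.
      location /route/v1/driving/ { proxy_pass http://osrm_car; }
      location /route/v1/cycle/   { proxy_pass http://osrm_bike; }
      location /route/v1/foot/    { proxy_pass http://osrm_foot; }
  }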

Related

How to configure BigBlueButton for a Xirsys TURN server?

I run a self-hosted instance of BigBlueButton and signed up for Xirsys' TURN server service because we need to serve clients behind (pretty restrictive) firewalls. Before that I ran my own instance of coturn, but as this led to problems recently, I thought I'd give someone who does this for a living a try.
Now the configuration in BBB is explained here:
https://docs.bigbluebutton.org/2.2/setup-turn-server.html
Yet so far I have completely failed to match the parameters I receive from Xirsys with what I have to put into the /usr/share/bbb-web/WEB-INF/classes/spring/turn-stun-servers.xml file in place of <turn.example.com> and <secret_value>.
Did anyone ever make this work? I tried to find a tutorial, but failed at that too.
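For context, the bean definitions the docs ask you to fill in look roughly like this (paraphrased from the linked page; check the exact classes and constructor arguments there):

  <bean id="stun0" class="org.bigbluebutton.web.services.turn.StunServer">
      <constructor-arg index="0" value="stun:<stun.example.com>"/>
  </bean>

  <bean id="turn0" class="org.bigbluebutton.web.services.turn.TurnServer">
      <constructor-arg index="0" value="<secret_value>"/>
      <constructor-arg index="1" value="turns:<turn.example.com>:443?transport=tcp"/>
      <constructor-arg index="2" value="86400"/>
  </bean>

If Xirsys hands out explicit username/credential pairs rather than a shared secret, that mismatch would explain why the parameters don't map one-to-one.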
bbb-web returns the TURN URIs and passwords to the HTML5 client, which uses them in sip.js.
So you can either get bbb-web to send valid usernames/passwords (if the same method is used), or modify the HTML5 client to make a Xirsys API call to get the TURN candidates.
You would need to look at their API docs; Twilio has a similar service.
regards,
Stephen
Not the most elegant solution, but the easiest one for me:
Modify the final BBB JS bundle to load the STUN/TURN info from a fixed URL, e.g. in
/usr/share/meteor/bundle/programs/web.browser/f30716b2b57e2862c4db2325b7aac63f4622842b.js
The minified part should then look somewhat like:
const r=Meteor.settings.public.media,i='https://<yourbbburl>/html5client/stunturn.json',a=r.cacheStunTurnServers,s=r.fallbackStunServer;
Then put either static credentials or generated ones in a file stunturn.json next to the JS bundle.
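A hypothetical stunturn.json with static credentials might then look like this (the key names must match whatever the bundled client code actually reads, so verify them against the bundle; everything below is an assumption):

  {
    "stunServers": [
      { "url": "stun:stun.example.com:3478" }
    ],
    "turnServers": [
      {
        "username": "xirsys-username",
        "password": "xirsys-credential",
        "url": "turns:turn.example.com:443?transport=tcp"
      }
    ]
  }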

Is there a way for a service worker to know the url of the requesting page?

I'm converting my large legacy application to a PWA and finding that many things don't work, because they were designed with very different assumptions about caching, state, etc. I want to convert the whole site eventually, but I'd like to restrict the scope of the service worker to specific pages, so that I can convert them one by one.
I know that you can limit service worker scope to a sub-directory by placing the service worker in that sub-directory, but that doesn't map well to the order in which I need to do the conversion. I would prefer to have the service worker in the root directory, but implement a whitelist of pages that I want it to handle, until such time as all pages are converted and I can remove the restriction.
Within a service worker, is there a way to access the URL of the referring/current page, so that I could check whether it's on the whitelist and allow the fetch, and if not simply return?
Or is there a better way to handle this situation?
It turns out that FetchEvent.request has a referrer property, which is exactly what I needed. Not sure how I missed this.
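For anyone doing the same, a minimal sketch of the whitelist check (the page paths are illustrative):

  self.addEventListener('fetch', (event) => {
    // Pages converted to the PWA so far; everything else is left alone.
    const whitelist = ['/converted-page.html', '/another-converted-page.html'];

    // request.referrer is the URL of the page that issued the request;
    // for top-level navigations it is empty, so fall back to the request URL.
    const source = event.request.referrer || event.request.url;

    if (!whitelist.includes(new URL(source).pathname)) {
      return; // no respondWith(): the browser handles the fetch as usual
    }

    event.respondWith(
      caches.match(event.request).then((hit) => hit || fetch(event.request))
    );
  });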

Proper way to distinguish between multiple services using zeroconf

I'm writing a piece of software that will run on computers as well as phones.
The service uses an HTTP API for communication and will be published over the local network using Zeroconf.
Initially I published my service using _http._tcp. as the service type, but I quickly discovered that both my NAS and my music receiver (!) also broadcast themselves with that exact service type.
So the question now arises how to differentiate between my service and other services that are using HTTP.
Alternatives
Using a different service type
This is certainly the easiest way, and it (almost) guarantees that no other services will be picked up.
However, according to Apple, new service types should be registered with IANA. This is obviously not required, but seeing as they recommend it, skipping registration feels like the wrong way to do it.
Using the TXT record
Apple describes the TXT record like this:
When a service is registered, three related DNS records are created: a service (SRV) record, a pointer (PTR) record, and a text (TXT) record. The TXT record contains additional data needed to resolve or use the service, although it is also often empty.
This certainly feels like it could be the right way to do it, but I'm still not sure, and it's hard to find a description of what the field should contain.
My first thought would be to put in something like <service_name>-<version>, which can then be parsed to see which service it actually is.
My NAS seems to use this for identifying model and version numbers.
Try talking to the service
After finding a service one could always perform a HEAD request on a known endpoint and look for a known header set by the service.
This feels like a fairly slow approach and who knows what making a HEAD request to my receiver will do.
And just to be clear, this question has nothing to do with a specific language or framework, it's about the concepts of zeroconf.
I could show some code but I don't see how that would help.
First, does the service you're advertising actually meet the qualifications for _http as defined by RFC 2782? Specifically, is it not just using HTTP as a transport, but also:
can be displayed by "typical" web browser client software, and
is intended primarily to be viewed by a human user.
If no, register your own service type. (There are a couple of other services that use HTTP as a transport but don't meet those qualifications, so they carry -http as a suffix to the service name; see pgpkey-http, senteo-http, xul-http.)
If yes, there are a couple of ways to go, depending on how strict one's interpretation of the RFC is. The least strict is just adding a TXT record, as you've already noted in your question. iTunes registers itself with a TXT record in the format iTSh Version=196618.
If you're feeling a little more strict, the RFC only explicitly states that the u=, p= and path= TXT records exist for HTTP. Perhaps someone can chime in on this, but I haven't seen much discussion on whether adding TXT records to already existing entries is frowned upon or not. With that in mind, the other way is to use an algorithmic instance name, for example adding the suffix "-NicklasAService" to the device name. That hopefully gives the instance a name that is unique on the local network while still letting the service be picked out of the PTR records by just looking for the suffix.
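As a concrete sketch of that naming scheme (the instance name, port and TXT keys below are invented), registering the service with Apple's dns-sd tool could look like:

  dns-sd -R "LivingRoomBox-NicklasAService" _http._tcp . 8080 service=nicklas-service version=1.0

A browser can then enumerate _http._tcp instances and keep only those whose instance name ends in the suffix, using the TXT pairs to check the version.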

Using HTTPS and multiple NSURLProtectionSpaces in iOS

I'm creating an iOS app that requires the user to log in at startup and then uses those credentials to query 4-5 different services on a server over the course of the session.
The server (xyz) itself doesn't accept the credentials, but the services it provides do. For example, https://xyz/service1 works, https://xyz doesn't.
Now what I'm wondering is whether anything stands in the way of creating 4-5 NSURLProtectionSpace instances at login, one for each service on the server, and then using the corresponding protection space when querying each service?
Or is there a better way of implementing something that could work in this situation?
All help would be appreciated.
Turns out there is nothing that stands in the way of creating multiple NSURLProtectionSpace instances, since each is created for a separate URL.

Connectedness & HATEOAS

It is said that in a well-defined RESTful system, clients only need to know the root URI or a few well-known URIs, and should discover all other links through these initial URIs. I understand the benefit of this approach (decoupled clients), but the downside for me is that the client needs to discover the links each time it tries to access something, i.e. given the following hierarchy of resources:
/collection1
  |-sub1
    |-sub1sub1
      |-sub1sub1sub1
        |-sub1sub1sub1sub1
    |-sub1sub2
  |-sub2
    |-sub2sub1
    |-sub2sub2
  |-sub3
    |-sub3sub1
    |-sub3sub2
If we follow the "clients only need to know the root URI" approach, then a client is only aware of the root URI, i.e. /collection1 above, and the rest of the URIs should be discovered through hypermedia links. I find this cumbersome: each time a client needs to do a GET, say on sub1sub1sub1sub1, should it first do a GET on /collection1, follow the link in the returned representation, and then do several more GETs on sub-resources until it reaches the desired resource? Or is my understanding of connectedness completely wrong?
Best regards,
Suresh
You will run into this mismatch when you try to build a REST API that does not match the flow of the user agent consuming it.
Consider that when you run a client application, the user is always presented with some initial screen. If you match the content and options on this screen to the root representation, the available links and desired transitions will line up nicely. As the user selects options on the screen, you transition to other representations, and the client UI should be updated to reflect the new representation.
If you try to model your REST API as some kind of linked-data repository, and your client UI as an independent set of transitions, you will find HATEOAS quite painful.
Yes, it's right that the client application should traverse the links, but once it has discovered a resource, there's nothing wrong with keeping a reference to that resource and using it for longer than one request. If your client has the ability to remember things permanently, it can do so.
Consider how a web browser keeps its bookmarks. You probably have ten or a hundred bookmarks in the browser, and you probably found some of them deep in a hierarchy of pages, but the browser dutifully remembers them without having to remember the path it took to find them.
A richer client application could remember the URI of sub1sub1sub1sub1 and reuse it if it still works. It's likely that it still represents the same thing (it ought to). If it no longer exists or fails for any other client-error reason (4xx), you could retrace your steps to see if you can find a suitable replacement.
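A sketch of that idea, assuming HAL-style "_links" in the representations (an assumption about the API, not something REST mandates) and localStorage as the client's permanent memory:

  // Follow one link relation from a hypermedia representation.
  async function follow(uri, rel) {
    const doc = await (await fetch(uri)).json();
    return doc._links[rel].href;
  }

  async function getDeepResource() {
    // Reuse the "bookmark" if we discovered the resource earlier.
    let uri = localStorage.getItem('sub1sub1sub1sub1');
    if (!uri) {
      uri = '/collection1';
      for (const rel of ['sub1', 'sub1sub1', 'sub1sub1sub1', 'sub1sub1sub1sub1']) {
        uri = await follow(uri, rel);
      }
      localStorage.setItem('sub1sub1sub1sub1', uri);
    }
    const res = await fetch(uri);
    if (res.status >= 400 && res.status < 500) {
      // Stale bookmark: forget it so the next call retraces the links.
      localStorage.removeItem('sub1sub1sub1sub1');
      throw new Error('bookmark was stale, retry');
    }
    return res.json();
  }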
And of course what Darrel Miller said :-)
I don't think that's a strict requirement. As I understand it, it is legal for a client to access resources directly and start from there. The important thing is that you do not do this for state transitions, i.e. do not automatically proceed with /foo2 after operating on /foo1 and so forth. Retrieving /products/1234 initially to edit it seems perfectly fine. The server could always return, say, a redirect to /shop/products/1234 to remain backwards compatible (which is desirable for search engines, bookmarks and external links as well).
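For example, the backwards-compatible redirect could be as simple as:

  GET /products/1234 HTTP/1.1
  Host: shop.example.com

  HTTP/1.1 301 Moved Permanently
  Location: /shop/products/1234

The client then operates on whatever the Location header points at, so old bookmarks keep working while the canonical URI is free to change.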