I have installed Nominatim 4.1.0 (tokenizer = ICU) by following the instructions in the Nominatim documentation, added the Wikipedia data during the installation, and imported an up-to-date PBF file from geofabrik.de.
Everything works, but for some kinds of requests (e.g. "Cagliari via Roma") the answers I get from the Nominatim website (https://nominatim.openstreetmap.org/) and from my local installation are very different. The correct results are the ones on the Nominatim website, of course.
The problem seems to be in the search-candidate algorithm or in how the AddressImportance parameter is attributed/calculated.
The very strange thing is that I get these wrong results only for some requests.
Is there any particular parameter to set, or anything else I should verify?
I hope this is clear; even a small piece of advice or a comment would be very helpful to me.
Thanks
Michele
After a discussion with the maintainers (https://github.com/osm-search/Nominatim/discussions/2839), I found an acceptable solution by editing the following line in Geocode.php:
$this->iLimit = $iLimit + max($iLimit, 150);
The results are not exactly the same as those of the online version, but it works fine for me.
I use Python for plotting geospatial data on maps.
For certain map styles, such as ["basic", "streets", "outdoors", "light", "dark", "satellite", "satellite-streets"], I need a Mapbox access token, and for some geospatial plotting packages like folium I even need to create my own link for retrieving the map tiles.
So far, it worked great with the style "satellite":
mapbox_style = "satellite"
mapbox_access_token = "....blabla"
request_link = f"https://api.mapbox.com/v4/mapbox.{mapbox_style}/{{z}}/{{x}}/{{y}}@2x.jpg90?access_token={mapbox_access_token}"
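For context, this is roughly how I plug such a tile link into folium (just a sketch; the coordinates and zoom level are arbitrary placeholders):

import folium

# folium fills in the {z}/{x}/{y} placeholders of the custom tile URL itself.
m = folium.Map(
    location=[39.22, 9.12],   # placeholder coordinates
    zoom_start=12,
    tiles=request_link,       # the tile URL template built above
    attr="Mapbox",            # custom tile layers require an attribution string
)
m.save("satellite_map.html")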
However, when choosing "satellite-streets" as the Mapbox tile ID, the output no longer shows a background map. It fails whether I insert "satellite-streets", "satellitestreets", or "satellite_streets" into the aforementioned link string.
Why is that, and how can I find out the correct tile-ID name for "satellite-streets"?
I found the answer by reaching out to customer support.
Apparently, one has to access the static APIs, whose specific style names are listed on their website:
"In general, the styles that you mentioned including
"satellite_streets" that you are referencing are our classic styles
that are going to be deprecated starting June 1st. I would recommend
using our modern static API the equivalent modern styles. This
will allow you to see the most updated street data as well.
Like the example request below:
https://api.mapbox.com/styles/v1/mapbox/satellite-streets-v11/tiles/1/1/0?access_token={your_token}
Here is more info on the deprecation of the classic styles and
the migration guide for them."
My personal adaptation, after having tried everything out myself, is the following.
By combining the above with the details on how to construct a Mapbox request link in this documentation from Mapbox's website, I finally managed to make it work.
An example request looks like this (in Python, using f-strings):
mapbox_tile_URL = f"https://api.mapbox.com/styles/v1/mapbox/{tileset_ID_str}/tiles/{tilesize_pixels}/{{z}}/{{x}}/{{y}}@2x?access_token={mapbox_access_token}"
The tileset_ID_str could be e.g. "satellite-streets-v11", which can be seen at the following link containing the valid static map styles.
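To sanity-check such a link outside of any plotting library, a quick sketch with the requests package can fetch a single tile (the tile coordinates, tile size, and token below are just placeholders):

import requests

tileset_ID_str = "satellite-streets-v11"
tilesize_pixels = "512"              # tile size in pixels (512 or 256 per the Static Tiles docs)
mapbox_access_token = "....blabla"   # placeholder token

z, x, y = 1, 1, 0                    # sample tile, as in the support example above
mapbox_tile_URL = (
    f"https://api.mapbox.com/styles/v1/mapbox/{tileset_ID_str}"
    f"/tiles/{tilesize_pixels}/{z}/{x}/{y}@2x"
    f"?access_token={mapbox_access_token}"
)

response = requests.get(mapbox_tile_URL)
response.raise_for_status()          # fails loudly if the style name or token is wrong
with open("tile.png", "wb") as f:    # the API returns a raster image; the extension is a guess
    f.write(response.content)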
My apologies if this question is a duplicate. We are facing an issue with the analysis of a profile using the Watson Personality Insights API in Spanish. We have a demo that we implemented using PI API version 2, and then we tested the results (the exact same text) with the demo published on developerCloud (in Spanish), and we found important differences in how the Big Five were calculated, even though the facet values were not that different. Is it possible that these differences are caused by the API version? The issue is that with our demo the Big Five values produce a kind of negative summary profile, whereas the developerCloud summary is kinder.
We could send both result JSONs. For example, here is how the Big Five were rated:
Big Five            DeveloperCloud   Demo (v2)
Openness            0.773834349      0.847273232
Conscientiousness   0.916616088      0.914907481
Extraversion        0.796331544      0.612606551
Agreeableness       0.17445636       0.096118648
Emotional range     0.036287447      0.01623536
Thanks in advance!!
So the API version would not make a difference, as that just governs the format of the API; the back-end models are the same for both v2 and v3 of the API.
So the gist of your question is that when you run the same text in your app and in the demo, you get different Big Five results, while the facet values are about the same.
This might be easiest solved by you opening a support ticket so we can debug the issue together; if you'd rather not do that then can you provide a sample text? Typically it boils down to a difference in the way the text is parsed.
Another question: did you try making the request using curl? That would cut out any custom logic in your app and narrow down the problem.
Thanks, Neil, for your answer!
We tested the text using curl and noticed that the results did not change with the service version used, but rather with how the text was sent. If we called the service using curl and passed plain-text input (UTF-8 encoded, with line breaks), it returned the same results for version 2 and version 3, which also matched the ones from our demo. If we called the service using curl and passed JSON input WITHOUT line breaks, it returned the same values as well. But if we called the service passing JSON input WITH line breaks, the results changed and almost matched those shown by the IBM demo. My question here is: which are the correct results? The ones obtained when the text is sent as plain-text input (with line breaks), or when it is sent as JSON input (with line breaks)? Is there any technical guideline, besides the one shown on developerCloud, on how the text should be parsed in order to use this service?
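For reference, the two request shapes I compared look roughly like this (just a Python sketch using the requests library; the endpoint URL, version date, credentials, and the contentItems layout are my assumptions based on the PI docs, not copied from our code):

import json
import requests

PI_URL = "https://gateway.watsonplatform.net/personality-insights/api/v3/profile"  # assumed endpoint
AUTH = ("username", "password")      # placeholder service credentials
PARAMS = {"version": "2017-10-13"}   # assumed API version date

# Sample Spanish text containing line breaks.
text = "primera línea del perfil\nsegunda línea del perfil"

# 1) Plain-text input: the service receives the raw UTF-8 text, line breaks included.
r_plain = requests.post(
    PI_URL, params=PARAMS, auth=AUTH,
    headers={"Content-Type": "text/plain; charset=utf-8",
             "Content-Language": "es",
             "Accept": "application/json"},
    data=text.encode("utf-8"),
)

# 2) JSON input: the same text travels inside a contentItems array, so the line
#    breaks end up escaped as \n inside a JSON string.
body = {"contentItems": [{"content": text, "contenttype": "text/plain", "language": "es"}]}
r_json = requests.post(
    PI_URL, params=PARAMS, auth=AUTH,
    headers={"Content-Type": "application/json",
             "Content-Language": "es",
             "Accept": "application/json"},
    data=json.dumps(body),
)

print(r_plain.status_code, r_json.status_code)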
Thanks again!
Hi, I'm using a Confluence macro called PocketQuery (PQ). I have connected to a server located at my client's site through PostgreSQL, and I run PQ to fetch results from the db into my Confluence page. However, it is fetching an extra, unwanted word "hallo" along with every result. I am unable to figure out where this string may be coming from and how it gets attached to my results like this. Please help me get rid of it.
For example, I run a PQ query on the db that is supposed to fetch the result "Jack London", but the result I actually see is "hallo Jack London".
Note: I use a VPN to connect to my client's server and Confluence.
Are you using the latest version from the Marketplace, 1.14.6? This issue shouldn't exist in the latest version.
I upgraded to version 1.14.6 of Confluence's PocketQuery macro. The issue I had is resolved; the unwanted string no longer appears in the results. The bad part is that they don't mention it anywhere in the macro's bug fixes; there are no release notes attached to this fix. Thank you, Felix, for your help.
I am working on Solaris 12 and I am trying to get a device path like this:
/pci@0,0/pci108e,4856@1f,2:devctl
I can obtain this path on the CLI using prtconf -v. How can I obtain the path through an API, using a C function? I tried several functions in libdevinfo, such as di_devfs_path, but they didn't give me the same path that prtconf gives. Should I use functions like di_node_name, di_instance, and di_binding_name to get the pieces of information and construct the path on my own, or is there a function to get the whole device path?
Thanks.
Firstly, unless you're working for Oracle in the Systems division, you're not working on Solaris 12. (If you are working for Oracle, why haven't you asked the Oracle internal mailing lists for help?)
Secondly, the :devctl node is a minor node of the device, so you'll need to walk the minor nodes using di_walk_minor() and check di_minor_name() to see whether each one matches your criteria.
Finally, yes, this should work on Solaris 10 and later.
I am using Feed crawlIssues = wtr.GetCrawlIssues(encodedSiteID); to get the crawl errors from my Webmaster Tools account. There are more than 5k errors, but the above code retrieves just the first 100. How do I retrieve all of the errors?
Thanks
I've run into the same issue as you have; I only got the first 100 errors, too. Basically, because of a bug in Webmaster Tools, it only shows you the errors in batches of 100.
There is no built-in solution as far as I know, but there is a workaround. Instead of using the GetCrawlIssues function, you can access the data through HTTP requests with the provided ExecRequest.exe command-line tool. The basic usage is:
ExecRequest cl QUERY http://www.google.com/webmasters/tools/feeds/example_site.com/crawlissues/?start-index=1&max-results=100 example@gmail.com mypassword
This will output the resulting XML to the console. You can specify the starting point, and the number of errors you want to download:
?start-index=startIndex
&max-results=100
You can set the max-results value to whatever you want, but it will only download a maximum of 100 items.
After downloading in batches, you can get the data from the downloaded XML files.
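To illustrate the batching, a rough Python sketch that just loops over the start indices and shells out to ExecRequest could look like this (the site, account, password, and file names are placeholders):

import subprocess

SITE = "example_site.com"      # placeholder site
EMAIL = "example@gmail.com"    # placeholder account
PASSWORD = "mypassword"        # placeholder password
TOTAL_ERRORS = 5000            # roughly how many crawl issues you expect
BATCH_SIZE = 100               # the feed never returns more than 100 items per request

for start_index in range(1, TOTAL_ERRORS + 1, BATCH_SIZE):
    url = (
        "http://www.google.com/webmasters/tools/feeds/"
        f"{SITE}/crawlissues/?start-index={start_index}&max-results={BATCH_SIZE}"
    )
    # ExecRequest prints the XML feed to stdout; save each batch to its own file.
    result = subprocess.run(
        ["ExecRequest", "cl", "QUERY", url, EMAIL, PASSWORD],
        capture_output=True, text=True, check=True,
    )
    with open(f"crawlissues_{start_index}.xml", "w", encoding="utf-8") as f:
        f.write(result.stdout)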
If you only need the data, I've also written a small script in Python; you can check it out here, it's pretty straightforward.