Nominatim search results are always empty - openstreetmap

In my server's Nominatim instance, searching for anything on the map fails. For example, when I search for London on my map, I always get:
No search results found
Addresses and postcodes are approximate
Can somebody tell me how to fix it?
Here are links to the website and to the XML results for any query:
[1]: http://91.185.184.63/nominatim/search.php?q=London&viewbox=-151.18,66.02,151.18,-66.02
[2]: http://91.185.184.63/nominatim/search?q=london&format=xml "XML Results"

If you are using Nominatim with a single-country dataset then searching for anything outside of that country returns an empty response with status 200.
So if you installed Nominatim using Docker and followed the official readme, you will be using the dataset for Monaco only.
docker run -it \
-e PBF_URL=https://download.geofabrik.de/europe/monaco-latest.osm.pbf \
-e REPLICATION_URL=https://download.geofabrik.de/europe/monaco-updates/ \
-p 8080:8080 \
--name nominatim \
mediagis/nominatim:4.1
PBF_URL in the above command points to Monaco's dataset.
If you would like to cover the entire planet, you will need to download the planet dataset, and you will need a minimum of 64 GB of RAM as per the Nominatim hardware requirements.
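To index a different area, you can point PBF_URL (and REPLICATION_URL) at another Geofabrik extract. As a sketch, assuming the same mediagis/nominatim:4.1 image, this would index Great Britain instead of Monaco (verify the exact Geofabrik URLs before use):

```shell
# Hypothetical example: swap the Monaco extract for Great Britain.
# The URLs follow Geofabrik's naming scheme; check them before running.
docker run -it \
  -e PBF_URL=https://download.geofabrik.de/europe/great-britain-latest.osm.pbf \
  -e REPLICATION_URL=https://download.geofabrik.de/europe/great-britain-updates/ \
  -p 8080:8080 \
  --name nominatim \
  mediagis/nominatim:4.1
```

Note that a country-sized extract needs considerably more RAM and disk than Monaco, and the import takes much longer. Once it finishes, a query such as /search?q=London should return results.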

Related

Displaying planet in OpenMapTiles not working, but Spain works

I'm setting up openmaptiles-server-dev to work in a Docker container, and as the map source I downloaded and configured the planet dataset. The map is not showing up in the view or in the main UI.
My source for the OMT server (I also tried other servers):
docker pull klokantech/openmaptiles-server
I granted all permissions, gave more resources to Docker, installed Node.js, etc. The odd part is that when I run another map source such as Spain or Switzerland, it works like a charm.
PowerShell commands:
docker run -it -v D:\spain:/data -p 8080:80 klokantech/tileserver-gl
docker run -it -v D:\planet:/data -p 8080:80 klokantech/tileserver-gl
The output of both maps is identical after executing the commands above.
So both maps (planet and Spain) were configured successfully, but only Spain works properly. Also, on another PC with a regular Windows 10 install, I was able to display the planet map properly.
[SOLVED]
I just installed Windows 10 Education for testing and it works fine. So if you have a similar problem, keep in mind that Windows Server 2019 does not seem to be able to run the larger maps.

Move default docker postgres data volume

I've created a Docker PostGIS container with the following command:
docker run -p 5435:5432 -e POSTGRES_PASSWORD=xxxxxxx -e POSTGRES_INITDB_ARGS="-d" mdillon/postgis:9.6
This created a volume for data in /var/lib/docker/volumes/[some_very_long_id]/_data
Now I need to move this volume somewhere else to make backups easier for my outsourcing contractor... and I don't know how to do this. I'm a bit lost, as there seem to be several alternatives, such as data volumes and filesystem bind mounts.
So what's the correct way to do this today? And how do I move my current data directory to a better place?
Thanks.
You can declare a volume mount when you run your container. For example, you could run your container like this:
docker run -p 5435:5432 -e POSTGRES_PASSWORD=xxxxxxx -e POSTGRES_INITDB_ARGS="-d" \
-v /the/path/you/want/on/your/host:/var/lib/postgresql/data \
mdillon/postgis:9.6
This way the Postgres data directory will be /the/path/you/want/on/your/host on your host.
I didn't check or search deeply, but in your case I suggest the following steps:
Create another container with an outside folder.
Use pg_basebackup to copy all data from the old container to the new one, or use replication.
That way, you have the data folder outside the container.
Hopefully this helps your case.
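For data that is already sitting in an anonymous volume, a simpler sketch is to stop the container, copy the volume contents to the new host path via a throwaway container, and start a fresh container with a bind mount. This assumes brief downtime is acceptable; the container name "postgis" and the paths are placeholders:

```shell
# Sketch: migrate an anonymous volume to a host directory.
# "postgis" and /new/host/path are placeholders - adjust to your setup.
docker stop postgis

# Copy the data out of the old container's volume using a throwaway
# Alpine container that mounts both the old volume and the new path.
docker run --rm --volumes-from postgis \
  -v /new/host/path:/backup \
  alpine cp -a /var/lib/postgresql/data/. /backup/

# Start a fresh container that uses the host directory as a bind mount.
docker run -p 5435:5432 -e POSTGRES_PASSWORD=xxxxxxx \
  -v /new/host/path:/var/lib/postgresql/data \
  mdillon/postgis:9.6
```

Once the new container is verified, the old container and its anonymous volume can be removed.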

IBM Object Storage Command Line Access

Using this guide, I have been trying to access my containers on IBM Object Storage. I have installed the python-swiftclient library and am running this command (AUTH_URL, USERNAME, and KEY are from the IBM Bluemix Object Storage credentials section):
swift -A <AUTH_URL> -U <USERNAME> -K <KEY> stat -v
I get the following error:
Auth GET failed: https://identity.open.softlayer.com/ 300 Multiple Choices [first 60 chars of response] {"versions": {"values": [{"status": "stable", "updated": "20
I have tried with other credentials as well and looked online, with no luck so far. What is wrong here?
If you are referring to Cloud Object Storage (the S3-compatible version), look at https://ibm-public-cos.github.io/crs-docs/crs-python.html instead. The example in the KnowledgeLayer is for the Swift-based option. The new Cloud Object Storage uses S3-style API commands.
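For the S3-compatible flavor, access goes through S3-style tooling rather than the swift client. As a hedged sketch, the AWS CLI can talk to it via a custom endpoint; the endpoint URL below is an assumption and varies by region, so look up the correct one in the COS documentation:

```shell
# Hypothetical endpoint - check the IBM COS docs for the correct
# region-specific endpoint before use. Credentials come from
# `aws configure` or the usual AWS_* environment variables.
aws --endpoint-url=https://s3-api.us-geo.objectstorage.softlayer.net \
    s3 ls
```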
Use the following:
swift \
--os-auth-url=https://identity.open.softlayer.com/v3 \
--auth-version=3 \
--os-project-id=<projectId> \
--os-region-name=<region> \
--os-username=<username> \
--os-password=<password> \
--os-user-domain-id=<domainId> \
stat -v
You will find the values for projectId, region, username, password, domainId in the credentials section of your Object Storage service in the Bluemix dashboard.
Another option is to set the environment variables OS_AUTH_URL, OS_AUTH_VERSION, OS_PROJECT_ID, OS_REGION_NAME, OS_USERNAME (or OS_USER_ID), OS_PASSWORD and OS_DOMAIN_ID.
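A minimal sketch of the environment-variable approach, with placeholder values to be copied from the credentials section of your Object Storage service:

```shell
# Placeholder values - copy the real ones from the Bluemix
# Object Storage credentials section.
export OS_AUTH_URL=https://identity.open.softlayer.com/v3
export OS_AUTH_VERSION=3
export OS_PROJECT_ID=<projectId>
export OS_REGION_NAME=<region>
export OS_USERNAME=<username>
export OS_PASSWORD=<password>
export OS_DOMAIN_ID=<domainId>

# With these set, the swift client needs no extra flags:
swift stat -v
```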

How to specify the Bluemix Docker container "size" through the CLI?

I want to create a Docker container from an image I already have on my Bluemix account with a "micro" size (256MB RAM/16GB storage).
A sample of the command I have so far is (with fake IP address):
cf ic run -p 123.123.123.123:80:8080 \
--expose=2003 \
-v graphite_volume:/opt/graphite/storage/whisper \
--name graphite \
registry.ng.bluemix.net/sitespeed/graphite
However, I cannot figure out a way to set a size for this container in this command, so it defaults to "Pico", which has too little RAM to be usable for my purposes. If I use the UI and set the size, I'm not sure how to forward ports (I think they are only exposed) and setting the volume fails to work (it gets set to "None").
Setting a memory limit with -m 256M hasn't worked either; the size is still set to Pico, with 64 MB. Is there a way to set the "size" for Docker containers through the Bluemix CLI?
It appears that the CF CLI plugin for IBM Containers does not support this functionality yet.
You can still use the ICE tool to start containers from the command line and set the memory explicitly.
usage: ice run [-h] [--name NAME] [--memory MEMORY] [--env ENV]
[--publish PORT] [--volume VOL] [--bind APP] [--ssh SSHKEY]
[--link LINK]
IMAGE [CMD [CMD ...]]
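Given that usage summary, a hedged sketch of an equivalent ice invocation for the container in the question, requesting 256 MB via --memory (the flag names follow the usage text above, but the exact argument formats are assumptions, so check ice run -h):

```shell
# Sketch using the ICE CLI; --memory 256 should select the
# "micro" size (256 MB) instead of the default Pico.
ice run --name graphite \
  --memory 256 \
  --publish 8080 \
  --volume graphite_volume:/opt/graphite/storage/whisper \
  registry.ng.bluemix.net/sitespeed/graphite
```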

AWS EC2 stop all through PowerShell/CMD tools

I utilise a number of 'throwaway' servers in AWS and we're looking at trying to keep the cost of these down.
Initially, we're looking for a fairly basic 'awsec2 stop all' command to be run on a scheduled basis from a server we know will be running 24/7.
Upon checking the AWS documentation, it appears that we need to pull in all the currently running instances, grab their IDs, and then pass those into the command, rather than simply stating that we want all instances to stop.
Is there a better method of collecting these IDs, or a way to simply issue a 'stop all'?
Appreciate the help.
The AWS CLI provides built-in JSON parsing with the --query option. Additionally, you can use the --filter option to execute stop commands on running instances only.
aws ec2 describe-instances \
--filter Name=instance-state-name,Values=running \
--query 'Reservations[].Instances[].InstanceId' \
--output text | xargs aws ec2 stop-instances --instance-ids
This is untested, but should do the trick with AWS Tools for Powershell:
#(Get-EC2Instance) | % {$_.RunningInstance} | % {Stop-EC2Instance $_.InstanceId}
In plain English, the line above gets a collection of EC2 instance objects (Amazon.EC2.Model.Reservation), grabs the RunningInstance property of each (a collection of various properties relating to the instance), and uses those to grab the InstanceId of each and stop the instance.
These functions are mapped as follows:
Get-EC2Instance -> ec2-describe-instances
Stop-EC2Instance -> ec2-stop-instances
Be sure to check out the help for Stop-EC2Instance... it has some useful parameters like -Terminate and -Force that you may be interested in.
This one-liner will stop all the instances:
for i in $(aws ec2 describe-instances | jq -r '.Reservations[].Instances[].InstanceId'); do aws ec2 stop-instances --instance-ids $i; done
Note the -r flag on jq, so it emits raw instance IDs without surrounding quotes (the AWS CLI rejects quoted IDs).
Provided:
You have the AWS CLI installed (http://aws.amazon.com/cli/)
You have the jq JSON parser installed (http://stedolan.github.io/jq/)
Also, the syntax above is specific to the Linux Bash shell. You can mimic the same on Windows and figure out a PowerShell way of parsing the JSON.
If anyone ever wants to do what Peter Moon described via AWS Data Pipeline:
aws ec2 describe-instances --region eu-west-1 --filter Name=instance-state-name,Values=running --query 'Reservations[].Instances[].InstanceId' --output text | xargs aws ec2 stop-instances --region eu-west-1 --instance-ids
It's basically the same command, but you have to add --region after describe-instances and after stop-instances to make it work. Watch out for the a/b/c suffix that's usually included in availability zone names; including it here seems to cause errors.