Why are there tag keys missing when downloading OSM data to PostGIS / PostgreSQL?

I'm working on a routing application that uses OSM data in pgRouting. I'm using the Overpass API to fetch the data for a specific bounding box. However, after downloading the data, there seem to be tag keys missing.
When inspecting the data in PostGIS or QGIS, certain tag keys are present, such as "highway", "oneway" or "maxspeed". Others, however, seem to be missing. In particular, the tag keys "bicycle" (with possible values like "yes" or "no") and "access" are not included in the data, even though these tags are available on OSM online.
The following commands are used to retrieve the data from OSM through the Overpass API and load it into pgRouting:
CITY="Utrecht_west"
BBOX="4.9926,52.0698,5.0772,52.1172"
wget --progress=dot:mega -O "$CITY.osm" "http://www.overpass-api.de/api/xapi?*[bbox=${BBOX}][@meta]"
# osm2pgrouting converter
cd ~/Desktop/Utrecht
osm2pgrouting \
-f Utrecht_west.osm \
-d utrecht_west \
-U user
I expect these commands to download all data in the bounding box, yet some tag keys seem to be missing. What am I doing wrong here?
Edit: this seems to be similar to the issue in this post; however, I cannot find an answer to it there either.

I'm not familiar with osm2pgrouting. However, it looks like mapconfig.xml doesn't include the "bicycle" and "access" tags. You either need to add them or create your own config file. If you want osm2pgrouting to actually consider these tags during routing, this might not be enough, though.
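For illustration, a sketch of what that could look like (the config path and the XML structure follow what the shipped config uses in recent versions - double-check against your install; newer osm2pgrouting releases also ship a mapconfig_for_bicycles.xml that may already cover the bicycle case):
# Sketch: copy the shipped config and pass your copy with -c/--conf (path is an assumption).
cp /usr/share/osm2pgrouting/mapconfig.xml mapconfig_custom.xml
# In mapconfig_custom.xml, add sections for the extra tags, for example:
#   <tag_name name="bicycle" id="2">
#     <tag_value name="yes" id="201" priority="1.0" />
#     <tag_value name="no"  id="202" priority="2.0" />
#   </tag_name>
osm2pgrouting \
-f Utrecht_west.osm \
-d utrecht_west \
-U user \
-c mapconfig_custom.xml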

Related

Is there a way to filter for ways that are outers in Osmium?

I am trying to get an extract of an OSM file that contains the ways that are outer members of relations. How would I go about doing this? I have tried using osmium tags-filter but cannot seem to find an option that works for finding outers, as those specifically have no tags.
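For illustration, the closest tags-filter invocation seems to be something like this (a sketch: it matches multipolygon relations and, by default, also keeps their referenced member ways, but there is no option to keep only the "outer" members):
# Sketch: keeps matched relations plus all referenced members - member roles are not filterable here.
osmium tags-filter input.osm.pbf r/type=multipolygon -o relations-and-members.osm.pbf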
This is the website from which I downloaded the input .osm.pbf file. The file is linked to in the “United States of America” row under the “.osm.pbf” column
Thank you for your help!

Update Hidden Settings After Initial Upload

I'd like to change my Candy Machine from having hidden settings to no longer be hidden.
Initially, the Candy Machine is created with hidden settings like these:
hiddenSettings: {
  name: "Name",
  uri: "uri...",
  hash: '44kiGWWsSgdqPMvmqYgTS78Mx2BKCWzd',
}
I have attempted updating the candy machine to set the value of hidden settings to null, but this does not change any of the NFTs' metadata or seem to do anything at all.
Is there a way to unhide the assets after initializing them to have hiddenSettings?
Very late but answering for others who may have the same question...
Unfortunately it's not that simple. The hidden settings for a candy machine determine how the NFTs are uploaded. With them set, all NFTs are uploaded with the same URI - the placeholder image and metadata.
Once an NFT has been uploaded and minted, the candy machine no longer controls its metadata. Even if you could remove the hidden settings field, this would not reveal your NFTs. In fact, you need to keep the hidden settings (in particular the hash) for a reason explained below. Instead, you need to update the NFTs themselves, setting each one's URI to its actual metadata file.
The tool that makes this easier is Metaboss, which can query the blockchain and make changes for you. In particular, you can find the mint accounts of the NFTs that have been minted and update their URIs. Updating will require the keypair of the wallet that holds update authority for the collection.
After installing Metaboss, the command
metaboss snapshot mints -c [YourCandyMachineAddress] --v2
will output an array of the mint accounts to ./[YourCandyMachineAddress]_mint_accounts.json
You can change the output destination with the -o flag. Then for a given NFT you can find the metadata using
metaboss decode mint -a [MintAddress]
which will output the metadata to ./[MintAddress]. Again, the output destination can be changed. You will see that this metadata has the URI of your placeholder. The name field, like "SomeCollection #1", identifies which NFT this is. By changing the URI to the actual URI for that NFT, you reveal it. Then wallet and marketplace apps will see the real NFT. You can do this with
metaboss update uri -k [/path/to/keypair.json] -a [MintAddress] -u [https://somestorage.com/realurifornft1]
All these commands have good nested documentation via --help. Obviously, doing this manually for a large collection is very impractical, so I have uploaded a bash script for this here. Read the script for usage info.
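For illustration only (this is not that script), a minimal sketch of the loop, assuming mints.json is the snapshot output from above and uris.json is a hypothetical file mapping each mint address to its real metadata URI:
#!/usr/bin/env bash
# Sketch: bulk-update each minted NFT's URI (file names are assumptions).
set -euo pipefail
KEYPAIR=/path/to/keypair.json   # wallet with update authority
for MINT in $(jq -r '.[]' mints.json); do
  URI=$(jq -r --arg m "$MINT" '.[$m]' uris.json)   # look up the real URI for this mint
  metaboss update uri -k "$KEYPAIR" -a "$MINT" -u "$URI"
done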
Now you may be thinking: "Isn't editing the NFT metadata like this shady? Couldn't someone use this to maliciously change my NFT?" You would be correct, and that is why the hash field from the hidden settings is so important. It should be the MD5 hash of the cache file created when you launched your candy machine, which contains the "real" metadata URIs. If you were to change the metadata to point at a different URI, you could totally change the NFT. The hash field exists so that, after the reveal, users can reconstruct that cache file and compare MD5 hashes to confirm the real URIs have not been changed. Hence you should not remove your hidden settings - without that hash, your collection cannot be trusted.
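For example, a holder who has reconstructed the cache file (called cache.json here - the name is an assumption) can simply compare:
# The output should match the hash stored in the candy machine's hidden settings.
md5sum cache.json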
You cannot unhide. The only solution is creating a new candy machine.

List all files in a sub-folder in a given branch / tag

I have spent the last couple of hours trying to figure out a way to reliably list all the files in a given git repo's sub-folder. For example, say I want to list all files in the repo -
https://github.com/aws/aws-sdk-java
under the tag -
/tree/1.11.244
using the GitHub API v3 or v4. Could anyone please point me in the direction of the steps needed for this? Also, if there are a lot of files, is there a way I could add a file filter to look for a file pattern in the listing?
You need to pass the ref parameter when listing contents:
GET /repos/:owner/:repo/contents/:path?ref=:ref
And if you want the listing to be recursive you can use recursive=1 - note that this parameter belongs to the Git Trees API mentioned below, not to this contents endpoint.
For example
GET /repos/user/my_repo/contents/tests/units?ref=0.1
If I want to see the files in tests/units under tag 0.1, with curl it would look like:
curl -u user:pass -X GET https://api.github.com/repos/user/my_repo/contents/tests/units?ref=0.1
Unfortunately, I don't think you can add a query-string filter to requests against this endpoint. Also, this endpoint returns at most 1000 files per directory; if you have more, you are going to need the Git Trees API.
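If you do hit that limit, or want a recursive listing plus a pattern filter, here is a sketch using the Git Trees API with client-side filtering via jq and grep (passing the tag name as the tree ref usually resolves; if it doesn't, resolve the tag to a commit SHA first - the grep pattern is just an example):
# List every file under the 1.11.244 tag recursively, then filter client-side.
curl -s "https://api.github.com/repos/aws/aws-sdk-java/git/trees/1.11.244?recursive=1" \
| jq -r '.tree[] | select(.type == "blob") | .path' \
| grep 'aws-java-sdk-s3'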

Launch OSRM server on large area

The tutorial shows how to start an OSRM server with this example:
wget http://download.geofabrik.de/europe/germany/berlin-latest.osm.pbf
osrm-extract berlin-latest.osm.pbf -p profiles/car.lua
osrm-contract berlin-latest.osrm
osrm-routed berlin-latest.osrm
I would like to start a server not only on the Berlin dataset, but on a full country dataset - for instance, all German roads. Maybe there is something to do with the contraction step, but I don't really know what kind of .osrm file I should pass as the argument to make it use a larger dataset built as the combination of several datasets.
I think the answer will be really obvious once we know it, but it still feels a bit woolly.
Thank you.
According to an OSRM issue, it is not possible to merge .osrm files. However, you can merge multiple PBF files before generating your .osrm files.
Merging of OSM XML or PBF files can be done with osmium:
osmium merge file1.osm.pbf file2.osm.pbf -o merged.osm.pbf
Or with osmosis:
osmosis --rb file1.osm.pbf --rb file2.osm.pbf --m --wb merged.osm.pbf
wget http://download.geofabrik.de/europe/germany-latest.osm.pbf
osrm-extract germany-latest.osm.pbf -p profiles/car.lua
osrm-contract germany-latest.osrm
osrm-routed germany-latest.osrm
This should work, but please note it will require around 16GB of RAM and probably a similar amount of disk space.
EDIT:
After clarification, what you will need to do is merge the .osm.pbf files using the osmium tool.
./osmium merge first.osm.pbf second.osm.pbf third.osm.pbf -o result.osm.pbf
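Putting the two steps together (the regional file names here are purely illustrative):
osmium merge nordrhein-westfalen-latest.osm.pbf bayern-latest.osm.pbf -o germany-parts.osm.pbf
osrm-extract germany-parts.osm.pbf -p profiles/car.lua
osrm-contract germany-parts.osrm
osrm-routed germany-parts.osrm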

Export AD structure from specific OU, then re-create structure in new domain

I've researched and found that the way to export our Active Directory information for our application is like this:
csvde -d OU=MyAppsOU,DC=dot,DC=testdmz,DC=lan
-f C:\temp\addump_ou.csv -r (objectClass=organizationalUnit)
Now, I've read that to do an import from that file, you just have to add the -i option to the line like this:
csvde -i -d OU=MyAppsOU-New,DC=dot,DC=newdmz,DC=lan
-f C:\temp\addump_ou.csv -r (objectClass=organizationalUnit)
Obviously, I'm very scared to try this as I don't want to blow away anything. My questions are:
Does specifying the OU=MyAppsOU-New create the new OU structure with that specific name? (I'm just trying to be 100% positive)
Does specifying the different domain name (newdmz) just update all of the data in the file to contain the new domain's name?
or
Do I need to modify the exported csv file to change the domain name (testdmz) to what the new domain name will be (newdmz)?
Is there a different way I should be doing this?
I just want to re-create the OU structure without groups, roles (which are groups) and users. I will probably do those in a different process because we have different usernames for test and production.
Wow! Lots of questions here - but in my opinion, still not enough.
Starting from the end: CSVDE.EXE is really not the tool I would use. As a directory developer I prefer LDIFDE.EXE, because it generates LDIF (LDAP Data Interchange Format), which is more standard and more readable. You can also have a look at tools like ADAMSync.EXE, which can synchronize two directories in the AD world (but that's a big hammer for what you want to do here).
Having chosen LDIFDE.EXE, you will see that the LDIF format is almost importable as is, but you need to remove the operational (system) attributes from the file. The best way is to exclude them during the export: use the -l option to export only the attributes you need, or the -o option to omit attributes such as the operational ones.
To import into another domain, use the -c option to replace the original domain part (DC=dot,DC=testdmz,DC=lan) with the new domain part.
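A sketch of both steps (paths are hypothetical and the omit list is only an example - adjust to your environment):
rem Export only the OU objects, skipping some operational attributes:
ldifde -f C:\temp\ou_export.ldf -d "OU=MyAppsOU,DC=dot,DC=testdmz,DC=lan" -r "(objectClass=organizationalUnit)" -o "objectGUID,whenCreated,whenChanged,uSNCreated,uSNChanged"
rem Import into the new domain, rewriting the domain part on the fly with -c:
ldifde -i -f C:\temp\ou_export.ldf -c "DC=dot,DC=testdmz,DC=lan" "DC=dot,DC=newdmz,DC=lan"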
Try it in a virtual machine first.