For many tasks (e.g., visual odometry, object detection), KITTI officially provides a mapping to the raw data; however, I cannot find the mapping between the tracking dataset and the raw data. Any help would be appreciated.
I downloaded the development kit from the official website and cannot find the mapping there. It only provides the mapping result, not the mapping relationship.
Official readme
I want to fetch the complete metadata of a given dataset through an API call. Can anyone suggest how to fetch this metadata?
You actually already manipulate and interact with various forms of metadata inside your Transforms Python builds today, but in a way that is structured to be safe when reading and writing.
While not all forms of metadata are accessible today, this is generally to ensure product stability and good version control of your builds.
That said, if there's a certain interaction with metadata you'd like to see in the product, I'd recommend reaching out to your support engineers with a feature request so they can understand your request more specifically and discuss with our product teams.
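For example, one form of metadata you can already read safely is the Spark schema of your transform's inputs. Below is a minimal sketch, assuming the standard transforms.api decorators; the dataset paths are placeholders.

```python
# A minimal sketch of surfacing input metadata (the Spark schema) inside a
# Transforms Python build. Dataset paths are placeholders, not real paths.
from transforms.api import transform_df, Input, Output

@transform_df(
    Output("/Project/reports/source_schema"),
    source=Input("/Project/datasets/source"),
)
def compute(source):
    # source.schema carries column-level metadata: name, type, nullability
    rows = [(f.name, str(f.dataType), f.nullable) for f in source.schema.fields]
    return source.sql_ctx.createDataFrame(rows, ["column", "type", "nullable"])
```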
Hi everyone
I've started to learn about Autodesk Forge and I'm a beginner in coding.
I've been able to put together the Model 3D Viewer following this tutorial:
https://www.youtube.com/watch?v=8FMwgJcRHz8
My current task is:
to build a web app on Forge that checks model element naming against customisable validation schemas, similar to this one:
https://www.youtube.com/watch?v=pxM5TojTmLE
With the additional functionality of creating a BIM360 issue for every mismatch found by the checker, like this:
https://www.youtube.com/watch?v=j9EgshGh2is
My questions are:
Are there any learning paths or educational platforms I can use to achieve my goals here?
Can you please share any relevant experience?
Any advice would be highly appreciated.
P.S: I know about this one already:
https://learnforge.autodesk.io/#/?id=learn-autodesk-forge
Thanks in advance
Cheers
You can check the object names in two ways:
Option 1: Translate your designs with Forge & check the conversion output
The Forge Model Derivative service can extract all sorts of information (3D geometry, 2D drawings, property metadata, design hierarchies, etc.) from over 60 different design file formats, incl. Revit. The extracted data can then be viewed in Forge Viewer, or explored using various endpoints or through SDKs for specific programming languages. In your case you could use the GET :urn/metadata/:guid endpoint to retrieve the design hierarchy, which includes object names.
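For illustration, here's a minimal Python sketch of that flow, assuming you already have a valid access token and the base64-encoded URN of a translated model (both placeholders), and that the translation has finished:

```python
# Walk the Model Derivative object tree and collect object names.
# YOUR_ACCESS_TOKEN and YOUR_URN are placeholders.
import requests

BASE = "https://developer.api.autodesk.com/modelderivative/v2/designdata"
HEADERS = {"Authorization": "Bearer YOUR_ACCESS_TOKEN"}
urn = "YOUR_URN"  # base64-encoded

# 1. List the model views; each has a guid
views = requests.get(f"{BASE}/{urn}/metadata", headers=HEADERS).json()
guid = views["data"]["metadata"][0]["guid"]

# 2. Fetch the object hierarchy for that view (assumes it's ready, not a 202)
tree = requests.get(f"{BASE}/{urn}/metadata/{guid}", headers=HEADERS).json()

def collect_names(node, out):
    """Recursively gather object names from the hierarchy."""
    out.append(node.get("name", ""))
    for child in node.get("objects", []):
        collect_names(child, out)

names = []
for root in tree["data"]["objects"]:
    collect_names(root, names)
print(names)  # feed these into your naming-validation schema
```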
Option 2: Check your designs with a custom Revit plugin using Design Automation
If you're familiar with the Revit API and plugin development, you could also use the Forge Design Automation service to process Revit models with your custom plugin remotely, by starting a Revit instance on Autodesk's servers.
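To give a feel for it, here's a hedged sketch of submitting a Design Automation v3 work item; the activity id and the two signed URLs are hypothetical placeholders, and you would first need to publish your add-in as an AppBundle and register an Activity:

```python
# Hypothetical work item submission. The activity id, token, and URLs are
# placeholders; the endpoint is the Design Automation v3 workitems call.
import requests

resp = requests.post(
    "https://developer.api.autodesk.com/da/us-east/v3/workitems",
    headers={"Authorization": "Bearer YOUR_ACCESS_TOKEN"},
    json={
        "activityId": "MyNickname.NameCheckActivity+prod",  # hypothetical
        "arguments": {
            "rvtFile": {"url": "https://example.com/signed-input-url"},
            "result": {"verb": "put", "url": "https://example.com/signed-report-url"},
        },
    },
)
print(resp.json()["id"])  # poll GET /workitems/{id} until it completes
```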
We are able to provide an initial training model and ask for recommendations. When asking for recommendations we can provide new usage events. Are these persisted at all into the model? Do they manipulate the model at all?
Is there another way the data is supposed to be updated or do we need to retrain a new model every time we want to enrich the model?
https://azure.microsoft.com/en-us/services/cognitive-services/recommendations/
EDIT:
We are trying to use the "Recommendations Solution Template" which deploys a solution to Azure and provides a swagger endpoint for working with the model (https://gallery.cortanaintelligence.com/Tutorial/Recommendations-Solution)
It appears the Cognitive Services API is much richer than this. Can the swagger version's models be updated?
After more experience with this I discovered a few things as of August 21st, 2017:
While not intuitive for the uninitiated, new data is only persisted into the model by training a new model on it.
This provides a form of model versioning, and means that when you build new models you can switch recommendations back to an earlier build if the new one doesn't work as well.
The recommended method appears to be to batch usage data and create new builds of the model on an interval (see the sketch after this list).
The APIs do allow passing in recent usage data so that it can be accounted for at scoring time; it's just not persisted.
The "upload usage events" call in the cognitive services API does not seem to work. Uploading the new usage data via a file does appear to work.
The Recommendations Solution Template vs. the Cognitive Services API
It appears the Recommendations Solution Template is a packaged version of the SAR (Smart Adaptive Recommendations) model from the Cognitive Services API, optimized for ease of use.
I'm presuming that for other popular recommendation models like FBT, the Cognitive Services API should be used, as the deployable template only allows one model type.
Additional note on the Preview Status of the API
It seems Microsoft is deprecating the datamart as of February and directing people to this preview API instead. It therefore seems reasonable to presume this preview is likely to move on past preview rather than be killed.
I am trying to combine data from multiple sources like RDBMS, XML files, and web services using MarkLogic. From the MarkLogic documentation on the Metadata Catalog (https://www.marklogic.com/solutions/metadata-catalog/), Data Virtualization (https://www.marklogic.com/solutions/data-virtualization/), and Data Unification, this seems very well possible. But I am not able to get hold of any documentation describing how exactly to go about it, or which tools to use to achieve it.
Looking for some pointers.
As the second image in the data-virtualization link shows, you need to ingest all data into MarkLogic databases. MarkLogic can then be put in between to become the single entry point for end user applications that need access to that data.
The first link describes the capabilities of MarkLogic to hold all kinds of data. It partly does so by storing them as-is, partly by extracting text and metadata for searching, and partly by conversion (if your needs go beyond what the original format allows).
MarkLogic provides the general purpose MarkLogic Content Pump (MLCP) tool for this purpose. It allows ingesting zipped or unzipped files, and applying transformations if necessary. If you need to retrieve your data from a different database, you might need a bit more work to get that out. http://developer.marklogic.com holds tutorials, blogs, and tools that should help you get going. Searching the MarkLogic Mailing List through http://marklogic.markmail.org/ can provide answers as well.
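As a minimal illustration (host, port, credentials, and paths are placeholders, and this assumes a local MLCP install pointed at an XDBC-enabled app server):

```sh
mlcp.sh import -host localhost -port 8000 \
    -username admin -password admin \
    -input_file_path /data/source-exports \
    -input_compressed true \
    -output_uri_replace "/data/source-exports,''"
```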
HTH!
Combining a lot of data is a very broad topic. Can you describe a couple types of data you'd like to integrate, and what services or queries you would like to build on that data?
I'm researching Umbraco for use as a base in a large CMS project; however, the project calls for the SQL Server 2008 database to store spatial data against content.
Being new to Umbraco, I'm still reading through the documentation and slowly building up an idea of its architecture. So far, however, it doesn't look like Umbraco supports the storage of spatial data.
There appear to be only four database datatype options: date, integer, ntext, and nvarchar.
Is it possible to store spatial data to the database?
Update: Further research into how Umbraco works has shown me I was on the wrong track. It seems the way to do this is to store the lat/long data inside the usual XML format Umbraco uses.
Then to use the Spatial.net extensions that have been built on top of Lucene.net, rather than the limited search capabilities Examine exposes.
However, this is all still theoretical; I've just not been able to achieve it yet. If I do before someone answers this question, I'll post my findings here to help others.
You could take a look at how to make user controls (with Visual Studio) in Umbraco.
It is also possible that the versatility of Umbraco 'Document Types' is enough for you.
It is possible to extend Umbraco in any number of ways to get the solution you want. I don't know how you want the spatial data to interact with your frontend, so it is difficult to provide a direct solution.
Although there are ways to store spatial data and perform queries against it using Spatial.net, it's not a very elegant solution.
Instead, I've created an additional table in SQL Server 2008 with the geometry/geography datatype and a reference to the Umbraco content it's connected with.
I've then got an event hook which updates this table whenever content is added, updated, or deleted.
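The event hook itself is ordinary Umbraco/C# plumbing, so I won't reproduce it here; purely to illustrate what the companion table buys you, here's a hypothetical radius query against it (the table and column names are invented for the example):

```python
# Hypothetical: find Umbraco node ids within a radius of a point using the
# companion spatial table. Table/column names and the connection string are
# placeholders; geography::Point and STDistance are SQL Server 2008 features.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={SQL Server};SERVER=.;DATABASE=umbraco;Trusted_Connection=yes"
)
cursor = conn.cursor()
cursor.execute(
    """
    SELECT NodeId
    FROM dbo.ContentLocation
    WHERE Location.STDistance(geography::Point(?, ?, 4326)) <= ?
    """,
    (51.5074, -0.1278, 5000),  # lat, long, radius in metres
)
node_ids = [row.NodeId for row in cursor.fetchall()]
```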