I have a large landscape composed of polygons (a polygon contains several patches). Outside of NetLogo, I create a .txt file containing the distance between each source polygon in my landscape and each destination polygon situated within a 1 km buffer around the source polygon. I am looking for a fast way to retrieve the distances in this .txt file from my NetLogo program. The file is as follows:
source-polygon destination-polygon distance
A 1 101
A 2 220
A 3 412
B 5 536
B 9 789
For example, from NetLogo I would like to quickly retrieve the distance between polygon A and polygon 3 (i.e. 412) from the .txt file. I imported and read the .txt file into NetLogo following the approach from "Read file lines with spaces into NetLogo as lists", but I find that searching for values in the file from NetLogo is slow, and my program retrieves values from it at every time step. So, is there a faster solution?
Thanks in advance for your help and advice.
I would try the Table extension. You could use (word source-polygon destination-polygon) to create a string that serves as the table key. I do something similar and it is extremely fast. It should be clear how to do this from the Arrays & Tables section of the Extensions part of the User Manual.
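For example, a minimal sketch of that idea (untested; it assumes NetLogo 6+ with the table and csv extensions, and that the file is space-delimited with a single header row):

extensions [ table csv ]

globals [ distances ]   ;; maps "source|destination" -> distance

to load-distances
  set distances table:make
  ;; but-first drops the header row; each remaining row is a 3-item list
  let rows but-first (csv:from-file "file.txt" " ")
  foreach rows [ row ->
    table:put distances (word item 0 row "|" item 1 row) item 2 row
  ]
end

to-report distance-between [ src dst ]   ;; e.g. distance-between "A" 3 -> 412
  report table:get distances (word src "|" dst)
end

The "|" separator guards against key collisions such as (word "A" 13) versus (word "A1" 3). Call load-distances once in setup; every lookup afterwards is a constant-time table:get instead of a file scan.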
I downloaded data showing polygons from statscan with 1643 FSA polygons (an FSA is the first 3 characters of a postal code).
TWO datasets meant to be combined; hope this is clear.

DATA1 - provided to me as descriptors for specific FSAs:

FSA        DATA PROVIDED
FSA1       text1
...        ...
FSA1667    text 1667

DATA2 - downloaded from statscan as a .shp file:

FSA        Polygon coords
FSA1       Cell 2
...        ...
FSA1643    Cell 4
I am combining the polygons with another dataset to show layover data for each FSA. The problem is that I was provided with data for 1667 FSAs, and I'm asked to produce a map that reflects their dataset (1667 items of layover information) combined with an equal and matching number of polygons.
Effectively, there are 1667 - 1643 = 24 FSAs missing as polygons.
Does anyone know a good source for FSA boundaries? Other than Statistics Canada I can't seem to find what I need. Paid or free, I need to see what's out there and available.
Link to statscan https://www12.statcan.gc.ca/census-recensement/2021/geo/sip-pis/boundary-limites/index2021-eng.cfm?year=21
I am using leaflet.js to show the data, but this is really a question about the datasets themselves. In summary, I am looking for 1667 polygons representing Canadian FSAs (forward sortation areas) and can only find 1643.
Thanks
I can successfully view and import the data using QGIS; the issue is the data itself. I am looking for 1667 sets of polygon coordinates, not just the 1643 I can find online. Hopefully free, but maybe paid.
I'm using AutoCAD to slice a very large model into smaller pieces (.stl) for 3D printing.
Imagine a 2D road map with all the roads extruded out an inch to create 3D pieces.
Since there are many individual .stl files to export (and store), I need to name them. So far this has to be done manually, with the pieces named 1-100, for example.
I've refined the process in AutoCAD (using the built-in MULTIPLE and EXPORT functions) to:
1. MULTIPLE
2. EXPORT
3. The Save As window appears
4. Enter part number (1:n)
5. Enter
6. Select part
7. The MULTIPLE command returns the process to step 3.
Ideally I would like to use PowerShell or something similar to automate the numbering (from a for loop) and the keystrokes, so that all I have to do is select each piece iteratively.
Any thoughts?
Note: I am largely self-taught and learning AutoCAD/coding on the job. Simple is good!
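For illustration, a minimal PowerShell sketch of that idea (untested against AutoCAD; it assumes the AutoCAD window title contains "AutoCAD", that the Save As dialog's file-name box takes focus when the window is activated, and that 100 is a placeholder for your piece count):

Add-Type -AssemblyName System.Windows.Forms
$shell = New-Object -ComObject WScript.Shell

for ($i = 1; $i -le 100; $i++) {
    Read-Host "Once the Save As window is open for piece $i, press Enter here"
    $shell.AppActivate("AutoCAD") | Out-Null    # bring AutoCAD back to the foreground
    Start-Sleep -Milliseconds 500               # give the dialog time to take focus
    [System.Windows.Forms.SendKeys]::SendWait("$i{ENTER}")  # type the part number and confirm
}

SendKeys goes to whichever window has focus, so this is fragile by nature; driving AutoCAD through its COM automation interface would be a sturdier (if less simple) alternative.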
I have a large, labelled dataset which I have created, and I would like to provide it to MATLAB to train an R-CNN (using the faster R-CNN algorithm).
How can this be done?
The built-in labeller provided by MATLAB requires the user to manually load each data sample and label it through a graphical user interface.
This is not practical for me, as the set is already labelled and contains 500,000 samples.
It should be noted that I can control the format in which the dataset is stored, so I can create .csv or Excel files if needed.
I have tried two directions:
1. Creating a .mat file similar to the one created by the labeller.
2. Looking for ways within MATLAB to import the data from .csv or Excel files.
I have had no success with either method.
For Direction 1:
Though there are many libraries that can open .mat files, they are not able to open or create files similar to the MATLAB ground truths, because these are not simple matrices (the cells themselves contain matrices of varying dimensions that represent the bounding boxes of each classified object). Moreover, though the MATLAB Level 5 file format is open source, I have not been successful in using it to write my own code (in C# or C++) to parse and write such files.
For Direction 2:
There are generic methods in MATLAB to load .csv and Excel files, but I do not know how to organize those files so as to produce the structure that the labeller creates and that the faster R-CNN trainer consumes.
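For direction 2, it may help that (if I read the documentation correctly) trainFasterRCNNObjectDetector also accepts a plain MATLAB table whose first variable holds image file names and whose remaining variables each hold M-by-4 [x y width height] bounding-box matrices, one variable per class. A minimal sketch of building that table from a CSV, assuming one row per image (the file name followed by groups of four box values) and a single class I've called object here; labels.csv is a placeholder:

raw = readtable('labels.csv');            % placeholder file name
n = height(raw);
imageFilename = raw{:,1};                 % first column: image file names
object = cell(n,1);                       % one variable per class; "object" is assumed
for k = 1:n
    vals = raw{k,2:end};                  % remaining columns: box coordinates
    vals = vals(~isnan(vals));            % rows with fewer boxes come back NaN-padded
    object{k} = reshape(vals, 4, []).';   % M-by-4 [x y width height]
end
trainingData = table(imageFilename, object);
% detector = trainFasterRCNNObjectDetector(trainingData, layers, options);

This sidesteps the ground-truth .mat format entirely, since the trainer consumes the table directly.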
G'day,
I have ocean model output in the form of netCDF files. The netCDF files are approximately 21 GB, and the variables that I want to load are also pretty big (~120 x 31 x 300 x 400 matrices).
I want to load some of these variables from a netCDF file into MATLAB. Usually, I would do this via:
ncload('filename.nc', 'var1')
which would load the variable var1 into a similarly named MATLAB variable. However, since I only need a single column of var1, I only want to load a subset of it; this should speed up the loading process. For example, say:
size(var1)

ans =

   120    31   260   381
I only want the 31st column, and loading the other 30 columns only to discard them seems like a waste of time. In other words, this is what I want to accomplish: ncload('filename.nc', var1(:,31,:,:)).
I know there are a few different netCDF toolboxes floating around, and I have heard that one can use a stride flag to load only every xth entry... but I'm not sure whether it's possible to do what I want. Does anyone know of a way to do this?
Cheers
If you have a current version of MATLAB, look for NCREAD and the example therein.
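Specifically, ncread takes optional start and count arguments (and a stride, if you want it), so the slice from the question can be read directly; the file and variable names below are taken from the question:

start = [1 31 1 1];            % begin at index 31 along the 2nd dimension
count = [Inf 1 Inf Inf];       % Inf reads through the end of that dimension
var1_col = ncread('filename.nc', 'var1', start, count);   % 120x1x260x381

Only the requested hyperslab is pulled from disk, so the other 30 columns are never loaded.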
I am working on a project that requires reading intensity values of several images from a text file. The file has a 3-line file header, followed by the images; each image consists of a 15-line header followed by intensity values arranged in 48 rows, where each row has 144 tab-delimited pixel values.
I have already created a .mat file by reading these into MATLAB and building a structure array for each image. I'd like to use OpenCV to track features in the image sequence.
Would it make more sense to create a .cpp file that reads the text file directly, or to use OpenCV with MATLAB mex files to accomplish my goal?
I'd recommend writing C++ code to read the file directly, independent of MATLAB. This way, you don't have to mess with row-major vs. column-major ordering and all that jazz. Also, are there specs for the image format? If it turns out to be a reasonably common format, you may be able to find an off-the-shelf reader/library for it.
If you want the visualization and/or other image-processing capabilities of MATLAB, then a mex file might be a reasonable approach.
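For reference, a minimal sketch of the kind of standalone C++ reader suggested above, assuming the layout described in the question (a 3-line file header, then per image a 15-line header and 48 rows of 144 tab-delimited values); readImages is a made-up name:

#include <fstream>
#include <sstream>
#include <string>
#include <vector>
#include <opencv2/core.hpp>

// Reads every image in the file into a 48x144 float cv::Mat.
std::vector<cv::Mat> readImages(const std::string& path)
{
    std::ifstream in(path);
    std::string line;
    std::vector<cv::Mat> images;

    for (int i = 0; i < 3; ++i)              // skip the 3-line file header
        std::getline(in, line);

    while (in) {
        for (int i = 0; i < 15; ++i)         // skip the 15-line image header
            if (!std::getline(in, line)) return images;

        cv::Mat img(48, 144, CV_32F);
        for (int r = 0; r < 48; ++r) {       // 48 rows x 144 tab-delimited values
            if (!std::getline(in, line)) return images;
            std::istringstream ss(line);
            for (int c = 0; c < 144; ++c)
                ss >> img.at<float>(r, c);
        }
        images.push_back(img);
    }
    return images;
}

From there the matrices can go straight into OpenCV's feature-tracking functions with no mex plumbing required.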