Given (x_top_left, y_top_left) and (x_low_right, y_low_right) in the Netlogo source, what should the width and height of the saved Netlogo applet be?
Background
I have a ton of authentic NetLogo files, prepared for courses and demos. Using Perl or Ruby, I'd like to export them in batch as applets to separate HTML files, possibly tied together by a table of contents in a left frame or so. Much like "Save as Applet", but in batch, to different HTML files.
All of this would be trivial, were it not that I got stuck finding out which applet dimensions I am supposed to use when writing
<applet code="org.nlogo.lite.Applet" archive="NetLogoLite.jar"
width="???" height="???">
<param name="DefaultModel" value="netlogofile.nlogo">
</applet>
Notice the ???. I searched for other NetLogo file parsers and encountered https://github.com/NetLogo/NetLogo/wiki/Model-file-format, which is not specific enough, and https://github.com/rikblok/dokuwiki-plugin-netlogo/blob/master/syntax/applet.php, which is a parser but yields results that appear useless to me. (I got it running, but it seems to parse the NetLogo source wrongly.)
Netlogo file format
I figured out that the NetLogo file format looks like the following (my comments after the semicolons):
@#$#@#$#@
GRAPHICS-WINDOW
210 ; x-coord of upper left corner
10 ; y-coord of upper left corner
544 ; x-coord of lower right corner?
215 ; y-coord of lower right corner?
-1
-1
2.77 ; patch size
1
10 ; font size
1
1
1
0
1
0 ; world-wrap
1 ; world-wrap
-45 ; min-pxcor
71 ; max-pxcor
-33 ; min-pycor
29 ; max-pycor
0
0
1 ; show tick counter
ticks ; tick counter label
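Since the plan is to batch-process these files from a script anyway, here is a rough Python sketch of pulling those numbers out. The @#$#@#$#@ separator and the assumption that the view coordinates are the four lines right after GRAPHICS-WINDOW are my reading of saved models, so verify against your own files:

```python
def parse_graphics_window(source):
    """Return the view's (x1, y1, x2, y2) strings from a .nlogo source."""
    # Sections of a .nlogo file are separated by @#$#@#$#@; the widgets
    # (including GRAPHICS-WINDOW) live in the second section.
    widgets = source.split("@#$#@#$#@")[1]
    lines = [ln.strip() for ln in widgets.splitlines() if ln.strip()]
    i = lines.index("GRAPHICS-WINDOW")
    return lines[i + 1:i + 5]  # upper-left x, y and lower-right x, y

demo = """to setup ca end
@#$#@#$#@
GRAPHICS-WINDOW
210
10
544
215
-1
"""
print(parse_graphics_window(demo))  # ['210', '10', '544', '215']
```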
To get a feeling for the logic, I parsed a few saved applets and got the following results:
(x_top_left, y_top_left) = (210, 10).
(x_low_right, y_low_right) = (649, 470).
Netlogo saves applet with width x height: 794 x 480.
(x_top_left, y_top_left) = (96, 10).
(x_low_right, y_low_right) = (535, 470).
Netlogo saves applet with width x height: 629 x 480.
(x_top_left, y_top_left) = (96, 10).
(x_low_right, y_low_right) = (483, 340).
Netlogo saves applet with width x height: 575 x 350.
(x_top_left, y_top_left) = (96, 10).
(x_low_right, y_low_right) = (396, 271).
Netlogo saves applet with width x height: 690 x 300.
From these data I tried to discover a pattern, but the relation between the numbers is frankly beyond me.
My question is: given (x_top_left, y_top_left) and (x_low_right, y_low_right) in the Netlogo source, what should the width and height of the saved Netlogo applet be?
You have to look at the dimensions of all of the widgets in the Interface tab, compute the bounding box for all of them together, and then add some slop.
I know of two implementations of this calculation, both pretty craptastic.
One is the one in NetLogo itself. It's here, split across two files:
https://github.com/NetLogo/NetLogo/blob/22bd1361ab7ecc1c186448ebb2a77ba993b8fb8b/src/main/org/nlogo/app/AppletSaver.scala#L23-L47
https://github.com/NetLogo/NetLogo/blob/22bd1361ab7ecc1c186448ebb2a77ba993b8fb8b/src/main/org/nlogo/app/WidgetPanel.java#L112-L145
The other is in the Perl scripts (Perl? yeah, it was 2002; they've barely been touched since) on the NetLogo website that serve up the applet versions of the Models Library models. Those scripts are in a private repo, but I made a gist of the relevant section:
https://gist.github.com/SethTisue/98a1b92db00dcd6a4f79
I haven't looked at this stuff in donkey's years, but if you have questions about it, it's possible my memory could be jogged.
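In other words, the calculation comes down to something like this sketch (in Python for illustration; the widget rectangles and the SLOP_X / SLOP_Y padding constants are made-up placeholders, not NetLogo's actual values, which live in the AppletSaver/WidgetPanel code linked above):

```python
# Applet size = bounding box of all Interface widgets, plus some slop.
# SLOP_X and SLOP_Y are placeholder values for illustration only.
SLOP_X = 10
SLOP_Y = 10

def applet_size(widgets):
    """widgets: list of (left, top, right, bottom) rectangles in pixels."""
    right = max(w[2] for w in widgets)
    bottom = max(w[3] for w in widgets)
    return right + SLOP_X, bottom + SLOP_Y

# e.g. the view plus one button
print(applet_size([(210, 10, 649, 470), (10, 10, 190, 40)]))  # (659, 480)
```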
Related
I'm new to image processing and need a bit of help.
My goal is to calculate the area, in hectares, of my TIFF image of a farm field. When I check the image data in Preview, it tells me that the DPI is at 72 pixels/inch.
My field image is diagonal and so what I am currently doing is, as a simple proxy, only counting the pixels that are not white to get all the pixels that are part of the field itself.
My first issue is with accessing the DPI information via my Python3 script. With Pillow, when I do image.info['dpi'] I get (1,1) which doesn't seem right. With Rasterio, I am unsure how to get the DPI information.
Once that is accessible, how would I go about calculating the area in hectares?
Any help would be greatly appreciated!
Here's my current attempt:
import rasterio  # not used yet; kept for later, when switching to the GeoTIFF transform
from PIL import Image

Image.MAX_IMAGE_PIXELS = None  # disable Pillow's decompression-bomb limit for large images
im = Image.open('/home/ubuntu/poly-quickcount/server/data/1/1/geotiff/champ.tif')

numOfPixels = 0
for pixel in im.getdata():
    if pixel != (255, 255, 255):  # count every non-white pixel as part of the field
        numOfPixels += 1

dpi = im.info['dpi']
area = numOfPixels * dpi[0] * dpi[1] / 10000
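For the unit conversion itself: DPI is pixels per inch, so each pixel covers (0.0254 / dpi) metres on a side, and you divide by the DPI rather than multiplying. A hedged sketch (this assumes the DPI really reflects the ground scale, which is rarely true for aerial imagery; for a georeferenced GeoTIFF, the pixel size from rasterio's src.transform is the reliable route):

```python
def pixels_to_hectares(num_pixels, dpi):
    """Convert a pixel count to hectares, assuming 1 image inch = 1 ground inch."""
    metres_per_pixel = 0.0254 / dpi          # 1 inch = 0.0254 m
    area_m2 = num_pixels * metres_per_pixel ** 2
    return area_m2 / 10000                   # 1 ha = 10,000 m^2

print(pixels_to_hectares(1_000_000, 72))
```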
I am using a Therm-App camera to take infrared photos of bats. I would like to draw around parts of the bat, find the hottest, coldest and average temperature, and do further analysis.
The software that comes with the camera doesn't let me draw polygons so I would like to load the image in another program such as MATLAB or maybe imageJ (also happy to use Python or other if that would work).
The camera creates 4 files total:
I have a .jpg file; however, when I open it in MATLAB it just appears as a normal image, and I am not sure how to accurately get the temperatures from it. I used the following to open it:
im=imread('C:\18. Bats\20190321_064039.jpg');
imshow(im);
I also have three other files, two are metadata (e.g. show date-time emissivity settings etc.) and one is a text file.
The text file appears to show the temperature of every pixel in the image.
e.g. (for a photo that had a minimum temperature of 15deg and max of 20deg it would be a text file with a minimum value of 1500 and maximum value of 2000)
1516 1530 1530 1540 1600 1600 1600 1600 1536 1536 ........
This file looks very useful, just wondering if there is some way I can open this as an image, probably in a program like MATLAB, which I think has image analysis so that I could draw around certain parts of the image (e.g. the wing of the bat) and find the average, max, min etc.
Has anyone had experience with this type of thing, can I just assign colours to numbers somehow? Or maybe other people have done it already and there is a much easier way. I will keep searching on the internet also and try to find out.
Alternatively maybe I need to open the .jpg image, draw around different parts, write a program to find out which pixels I drew around, find these in the txt file and then do averaging etc? Or somehow link the values in the text file to the .jpg file.
Sorry if this is the wrong place to ask, I can't find an image processing site on stack exchange.
All help is really appreciated, I will continue searching on the internet in the meantime.
The following worked in the end; it was much, much easier than I thought it would be. I'm now a big fan of MATLAB, having thought this could take days to do.
Just pasting here in case it is useful to someone else. I'm sure there is a more elegant way to write the code, however this is the first time I've used MATLAB in 20 years :p Use at your own risk, I haven't double checked I'm getting the correct results yet (though will do before I use it for anything important).
edit, since writing this I've found that the output .txt file of temperatures is actually sensor temperatures which need to be corrected for emissivity and background temperature to obtain the target temperatures. (One way to do this is to use the software which comes free with the camera to create new output .csv files of temperatures and use those instead).
Thanks to bla who put me on the right track with dlmread.
M = dlmread('C:\18. Bats\20190321_064039\20190321_064039_temps.txt'); % read in the text file as a matrix (call it M)
% note that the file seems to be a list of temperature values for each pixel
% e.g. 1934 1935 1935 1960 2000 2199...
M = rot90(M, 1); % rotate M anti-clockwise by 90 degrees (the pictures were saved sideways, so rotate for easier viewing)
a = min(M(:)); % find the minimum temperature in the image
b = max(M(:)); % find the maximum temperature in the image
M = imresize(M, 1.64); % resize to fit the screen; note the result must be assigned back to M, or the resize is discarded
imshow(M, [a b]); % show the image, mapping white to the highest temperature in the image (b) and black to the lowest (a)
h = drawpolygon('FaceAlpha',0); % Let the user draw a polygon around the region of interest (ROI)
%(this stops code until polygon is drawn)
maskOfROI = h.createMask(); % For each pixel in the image assign a binary number, pixels inside the polygon (ROI) area are given 1 outside are 0
selectedValues = M(maskOfROI); % Now get the image values for all pixels where the mask value is '1' (i.e. all pixels within the polygon) and call this selectedValues.
averageTemperature = mean(selectedValues); % Get the mean of selectedValues (i.e. mean of the temperatures inside the polygon area)
maxTemperature = max(selectedValues); % Get the max of selectedValues
minTemperature = min(selectedValues); % Get the min of selectedValues
I have been experimenting to work out how export-world works, in particular how the DRAWING section works.
I created the following code to experiment using the default environment size of max-pxcor and max-pycor of 16. If you run this code shown at the end of this post, each file produced will be 5 megabytes, so it is easy to use up over a gigabyte of data after a minute of running.
Anyway, my question is this: How does the DRAWING section work? I have seen that the first entry is at -16.4, 16.4. I have summarised some of my observations in the simple table below. The first column is how much the turtle has moved, while the second column shows partial output in the CSV file.
0.001 FF1D9F78
0.016 FF1D9F78FF1D9F78
0.093 FF1D9F78FF1D9F78FF1D9F78
I have also seen that the first entry is created when the turtle moves by 0.001. The second entry seems to appear when the turtle has moved by 0.016, and the third at 0.093.
I am trying to work out what the pattern could be, but there doesn't seem to be one. How much turtle movement does one of the entries represent in the CSV file?
Thanks.
---- The code is below.
globals [
  totalAmount
]

to setup
  ca
  crt 1 [
    setxy -16.4 16.4
    pd
    set heading 90
    set color turquoise
  ]
  set totalAmount 0
end

to go
  ask turtles [
    fd moveAmount  ; moveAmount is presumably a slider on the Interface tab
  ]
  set totalAmount moveAmount + totalAmount
  export
end

to export
  let filetemp word "turtletest" totalAmount
  let filename word filetemp ".csv"
  ;print filename
  export-world filename
end
The drawing layer is just a bitmap – a grid of pixels. It doesn't know what turtles moved and how far, it only knows what pixels the turtles colored in while moving. Internally, it's a java.awt.image.BufferedImage with TYPE_INT_ARGB encoding.
It's written to an exported world file by this code:
https://github.com/NetLogo/NetLogo/blob/533131ddb63da21ac35639e61d67601a3dae7aa2/src/main/org/nlogo/render/TrailDrawer.java#L217-L228
where colors is the array of ints backing the BufferedImage, and toHexString just writes the bytes out as hexadecimal digits.
If your image is mostly black, you'll mostly see a bunch of 00 bytes in the file.
As for your non-zero bytes, it appears to me that FF1D9F78 is a pixel with alpha = FF (opaque), red = 29, green = 159, blue = 120. At least, I think that's the right interpretation? Is that plausible for what you're seeing on screen? Maybe the A-R-G-B bytes are in the reverse order? To double check it, I'd need to do export-view and then look at the resulting PNG file in a program that could tell me the RGB values of individual pixels -- I don't have such a program handy right now. But hopefully this'll put you on the right track.
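If it helps to check, decoding such a hex run is only a few lines of Python (assuming, per the TrailDrawer code above, eight hex digits per pixel in A-R-G-B order):

```python
def decode_pixels(hexstring):
    """Split a DRAWING hex string into (alpha, red, green, blue) tuples."""
    pixels = []
    for i in range(0, len(hexstring), 8):
        px = hexstring[i:i + 8]
        pixels.append(tuple(int(px[j:j + 2], 16) for j in range(0, 8, 2)))
    return pixels

print(decode_pixels("FF1D9F78FF1D9F78"))
# [(255, 29, 159, 120), (255, 29, 159, 120)]
```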
I work with MATLAB on the right half of the screen, so I want figures to open on the left half of the screen. However, the figure height should be about the size of a default figure, so not the height of the screen. Also, I use MATLAB on different computers with variable screen sizes (pixels), so figure dimensions should depend on the screen size, but produce identical figures on screen. The figure dimensions and position are therefore dependent on the screen resolution, but the code generating the dimensions and position should be independent on it.
I've accomplished this with the code in my answer below, which I thought I'd share for anyone who finds this useful for their own setup.
The default MATLAB current folder can be set in MATLAB's preferences. I've set this to the network folder on all my MATLAB computers, this can also be a cloud folder of a cloud service, e.g. Dropbox. Then I put a file startup.m in that folder containing the following code.
ss = get(0,'screensize');
b = 7; % border around figures is 7 pixels wide
%TODO different for various operating systems and possibly configurations.
p = 0; % extra padding pixels from left edge of screen
if ispc
win = feature('getos');
i = (1:2) + regexp(win,'Windows ','end');
switch win(i)
case '10'
b = 0;
p = 2;
otherwise
% other cases will be added in the future
end
end
fwp = ss(3)/2-2*b-p; % figure width in pixels
b = b+p;
n = 5;
set(0,'defaultfigureposition',[b ss(4)/n, fwp, ss(4)*(1-2/n)])
clear
Now, every time I start MATLAB, it runs this script and it moves the default figures I create to the left half of the screen with a nice size (the axes are just a little wider than they are tall).
The figure's units are normalised, but they can be set to pixels or whatever measure you like as well. I hope someone will find this a useful script for their setup.
EDIT: I've updated the script to keep the default figure units: pixels. This is necessary because apps such as the Curve Fitting Tool (cftool) or the Classification Learner (classificationLearner), and probably others, are bugged with normalised figure units: their (dialog) windows either don't show up (they are positioned outside your screen area) or are too small or too large.
EDIT 2: I've updated the script for compatibility with Windows 10. The figure windows now have a border of 1 pixel, instead of 7. Also, the figures are padded a bit to the right, because Windows 10 puts them too far to the left. Windows 10 is detected automatically.
TO DO: support additional operating systems (with detection), e.g. Mac, Linux. If you have such a system, please report the following in a comment:
1. Open MATLAB and copy-paste the string returned by the feature('getos') command here.
2. Position the figure against (not on or over) the left edge of the screen and against (not on or over) the right half of the screen, and report the figure's position and outerposition here.
I'm trying out the OpenStreetMap bundler program and I can't find details on the camera position data. The point cloud data is in a *.ply file that looks like this:
ply
format ascii 1.0
element face 0
property list uchar int vertex_indices
element vertex 1340
property float x
property float y
property float z
property uchar diffuse_red
property uchar diffuse_green
property uchar diffuse_blue
end_header
-1.967914e-001 -8.918888e-001 -3.318706e+000 92 86 88
-1.745216e-001 -2.186521e-001 -3.227759e+000 50 33 31
-1.585826e-001 -1.894233e-001 -3.271651e+000 61 43 43
...
-2.649703e-003 2.197792e-002 3.906710e-002 0 255 0
-2.354721e-003 2.235805e-002 -1.093058e-002 255 255 0
5.296331e-003 4.755635e-001 -1.298959e+000 255 0 0
3.155302e-003 4.634443e-001 -1.347420e+000 255 255 0
1.910245e-003 2.891324e-001 -1.070228e-001 0 255 0
2.508708e-003 2.884968e-001 -1.570152e-001 255 255 0
-2.246127e-002 -6.257610e-001 9.884196e-001 255 0 0
-2.333330e-002 -6.187732e-001 9.389180e-001 255 255 0
The last eight lines appear to be the positions for four cameras (from four images): one line is the position, the next is the orientation. The position colors are either green or red, and the orientation points are yellow.
I can't find info on this so I'm wondering if this is correct and also what does red and green mean? Good/bad data? Any other info about using osm-bundler results is helpful.
I'm also looking at how to get the camera position data from Bundler (note I'm not using osm-bundler but the original program). However, as well as outputting the PLY file bundler also outputs an ASCII file called bundle.out. This contains parameters that allow you to calculate the camera positions, as described in the bundler documentation.
Bundler incrementally solves for the camera positions/poses and outputs the final answer in the bundle.out file. The .ply file contains point cloud vertices, faces, and RGB color information; it does not contain the camera poses. You can find information about the bundle.out file here. (osm-bundler uses Noah Snavely's bundler program, so this answer applies to both of your questions.)
http://www.cs.cornell.edu/~snavely/bundler/bundler-v0.4-manual.html#S6
So, you look at the first number in the second row to determine the number of cameras. The next number tells you the number of points which follow the cameras. Each camera entry consists of five rows.
<f> <k1> <k2> row one
<R> rows two, three, and four
<t> row five
So, lines one and two give you header information. Then each group of five rows is a separate camera entry, starting with camera number zero. If a camera's rows contain only zeros, there is no data for that camera/image.
If the first two rows of bundle.out contain
#Bundle file v0.3
16 32675
There will be 16 cameras and 32675 points. The camera information will be on lines 3 through 82 (16*5 + 2). In vi or emacs you can display line numbers to help you examine the file (in vi, :set number). Remember that the rotation matrix is three lines of three numbers, and the translation three-vector is the fifth and last line of a camera definition.
The points follow the camera definitions. You can find information about the format of points at the link I provided above.
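To make the layout concrete, here is a small Python sketch of reading the camera blocks out of bundle.out, following the format in the manual linked above. Note that <t> is the translation, not the camera position; the camera centre in world coordinates is -R^T * t:

```python
def read_cameras(lines):
    """Parse the camera entries from bundle.out (given as a list of lines).

    Each camera is five rows: <f> <k1> <k2>, then a 3x3 rotation R,
    then the translation t. Cameras with f == 0 were not reconstructed.
    """
    num_cameras = int(lines[1].split()[0])  # second row: <num_cameras> <num_points>
    cameras, row = [], 2
    for _ in range(num_cameras):
        f, k1, k2 = map(float, lines[row].split())
        R = [list(map(float, lines[row + i].split())) for i in (1, 2, 3)]
        t = list(map(float, lines[row + 4].split()))
        # world-space camera centre: c = -R^T t
        c = [-sum(R[i][j] * t[i] for i in range(3)) for j in range(3)]
        cameras.append({"f": f, "R": R, "t": t, "centre": c})
        row += 5
    return cameras

demo = """# Bundle file v0.3
1 0
800 0 0
1 0 0
0 1 0
0 0 1
1 2 3
""".splitlines()
print(read_cameras(demo)[0]["centre"])  # [-1.0, -2.0, -3.0]
```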