I have been experimenting to work out how export-world works, in particular how the DRAWING section works.
I created the following code to experiment, using the default environment size of max-pxcor and max-pycor of 16. If you run the code shown at the end of this post, each file produced will be 5 megabytes, so it is easy to use up over a gigabyte of data after a minute of running.
Anyway, my question is this: how does the DRAWING section work? I have seen that the first entry appears at -16.4, 16.4. I have summarised some of my observations in the simple table below: the first column is how far the turtle has moved, and the second column shows the partial output in the CSV file.
0.001 FF1D9F78
0.016 FF1D9F78FF1D9F78
0.093 FF1D9F78FF1D9F78FF1D9F78
I have also seen that the first entry is created when the turtle has moved by 0.001. The second entry seems to appear when the turtle has moved by 0.016, and the third at 0.093.
I am trying to work out what the pattern could be, but there doesn't seem to be one. How much turtle movement does one of the entries represent in the CSV file?
Thanks.
---- The code is below.
globals
[
  totalAmount
]

to setup
  ca
  crt 1
  [
    setxy -16.4 16.4
    pd
    set heading 90
    set color turquoise
  ]
  set totalAmount 0
end

to go
  ask turtles
  [
    fd moveAmount  ; moveAmount is presumably an interface slider
  ]
  set totalAmount moveAmount + totalAmount
  export
end

to export
  let filetemp word "turtletest" totalAmount
  let filename word filetemp ".csv"
  ;print filename
  export-world filename
end
The drawing layer is just a bitmap – a grid of pixels. It doesn't know which turtles moved or how far; it only knows which pixels the turtles colored in while moving. Internally, it's a java.awt.image.BufferedImage with TYPE_INT_ARGB encoding.
It's written to an exported world file by this code:
https://github.com/NetLogo/NetLogo/blob/533131ddb63da21ac35639e61d67601a3dae7aa2/src/main/org/nlogo/render/TrailDrawer.java#L217-L228
where colors is the array of ints backing the BufferedImage, and toHexString just writes bytes as hexadecimal digits (code).
If your image is mostly black, you'll mostly see a bunch of 00 bytes in the file.
As for your non-zero bytes, it appears to me that FF1D9F78 is a pixel with alpha = FF (opaque), red = 0x1D = 29, green = 0x9F = 159, blue = 0x78 = 120. At least, I think that's the right interpretation; is that plausible for what you're seeing on screen? Maybe the A-R-G-B bytes are in the reverse order? To double-check, I'd do export-view and then look at the resulting PNG file in a program that could tell me the RGB values of individual pixels -- I don't have such a program handy right now. But hopefully this'll put you on the right track.
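To illustrate, here is a minimal sketch that splits a DRAWING hex string into pixels, assuming the TYPE_INT_ARGB byte order described above (alpha first; swap the indices if the order turns out to be reversed):

```python
def parse_drawing_hex(hex_string):
    """Split an exported DRAWING hex string into (a, r, g, b) tuples.

    Assumes TYPE_INT_ARGB order: 8 hex digits per pixel, alpha first.
    """
    pixels = []
    for i in range(0, len(hex_string), 8):
        chunk = hex_string[i:i + 8]
        a = int(chunk[0:2], 16)
        r = int(chunk[2:4], 16)
        g = int(chunk[4:6], 16)
        b = int(chunk[6:8], 16)
        pixels.append((a, r, g, b))
    return pixels

print(parse_drawing_hex("FF1D9F78FF1D9F78"))
# [(255, 29, 159, 120), (255, 29, 159, 120)]
```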
I want to make a JPEG where, for each of the 3 components (Y, Cb, Cr), you encode one 8x8 block after another, and then move to the next 8x8 block in the image.
E.g.
A 16x16 image exists.
Write the header (is there anything special I need to mark? I opened a known JPEG to confirm I was writing the quantization tables and Huffman tables right; is there anything special I need to do to make this format work? Also, I DON'T want subsampling. I want a 1:1 ratio; from my understanding this means I encode each 8x8 pixel block into one 8x8 coefficient block via the steps I am about to name, correct? How do I mark that in the header? With 0x11?).
Steps:
Grab the first 8x8 (top left) of this image.
For Y: DCT-II -> quantize -> RLE -> Huffman encode
then, for Cb: DCT-II -> quantize -> RLE -> Huffman encode
then, for Cr: DCT-II -> quantize -> RLE -> Huffman encode
Repeat for the top-right -> bottom-left -> bottom-right 8x8 pixel blocks in the image.
write end of image tag, done.
In the data stream it should go: DC-Y -> AC-Y -> DC-Cb -> AC-Cb -> DC-Cr -> AC-Cr, and so forth, yes? Is there any tag I need to insert between components, between DC/AC changes, or between 8x8 pixel blocks? I assume an EOB Huffman code is present between components (that's what I have currently).
Negative numbers:
What format are they? Two's complement? -3, for example, would be 101 in two's complement (3-bit size), but in JPEG you would call this 2-bit size and only encode the 01 portion, not the sign/MSB bit, right? 3 would be 011 in 3-bit two's complement, but by the same logic it's just 11 (2-bit size) and encoded without the sign (MSB) in JPEG, right? Anything I am missing?
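For comparison with the scheme described above, the magnitude coding rule from the JPEG spec (ITU-T T.81, Annex F) can be sketched as follows: positive values are written directly in SIZE bits, and negative values as the low SIZE bits of V - 1 in two's complement (equivalently, the ones' complement of |V|):

```python
def magnitude_code(value):
    """JPEG magnitude coding (ITU-T T.81, Annex F).

    Returns (size, bits): size is the SSSS category, bits the extra
    bits appended after the Huffman code for that category.
    Negative values take the low `size` bits of value - 1 in two's
    complement, equivalent to the ones' complement of abs(value).
    """
    if value == 0:
        return (0, "")
    size = abs(value).bit_length()
    if value < 0:
        value = value - 1          # two's-complement trick for negatives
    bits = format(value & ((1 << size) - 1), "0{}b".format(size))
    return (size, bits)

print(magnitude_code(-3))   # (2, '00')
print(magnitude_code(3))    # (2, '11')
print(magnitude_code(-26))  # (5, '00101')
```

Note this gives '00' for -3 and '00101' for -26, which differs slightly from dropping the sign bit of the two's-complement representation.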
DC vals:
3 components mean you keep track of 3 different previous DC values, right? For example, Y-DC-prev is initialized to 0. Then the first Y-DC value is, let's say, 25. 25 - 0 = 25, so we encode 25. We then remember 25 for the Y component's next DC (not for the Cb or Cr components, right? They have their own "memories"?). Then the next DC-Y is, let's say, 40. Diff = 40 - 25 = 15, so we encode 15 and remember 40 (not 15, right?). And so forth?
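The DC bookkeeping described above can be sketched like this (a minimal illustration with made-up values):

```python
def dc_differences(dc_values_per_block, components=("Y", "Cb", "Cr")):
    """Turn absolute DC values into the differences that get encoded.

    Each component keeps its own predictor, initialized to 0; the
    predictor is updated with the absolute DC value, not the diff.
    """
    predictors = {c: 0 for c in components}
    diffs = []
    for block in dc_values_per_block:       # one dict of DC values per MCU
        encoded = {}
        for c in components:
            diff = block[c] - predictors[c]
            predictors[c] = block[c]        # remember the value, not the diff
            encoded[c] = diff
        diffs.append(encoded)
    return diffs

blocks = [{"Y": 25, "Cb": 10, "Cr": 10}, {"Y": 40, "Cb": 12, "Cr": 8}]
print(dc_differences(blocks))
# [{'Y': 25, 'Cb': 10, 'Cr': 10}, {'Y': 15, 'Cb': 2, 'Cr': -2}]
```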
I followed the example here: WIKI. My code gets the exact values all the way down to RLE, which makes me think my Huffman encoding might have the bug. I made a 16x16 image that basically repeats the Wikipedia image in a 2x2 tile (this also makes the image not greyscale, since I force Cb and Cr to have the same values as Y; I know the image should have a funky tint because of this, no worries). I end up getting a semi-believable value for the top-right block, but the rest turn into garbage. This led me to believe it's my file organization or Huffman encoding that is going wrong. To do a quick check (this is from the Wikipedia example):
FORMAT: (RUNLENGTH, SIZE)(VALUE)
(0, 2)(-3);
(1, 2)(-3);
(0, 1)(-2);
(0, 2)(-6);
(0, 1)(2);
(0, 1)(-4);
(0, 1)(1);
(0, 2)(-3);
(0, 1)(1);
(0, 1)(1);
(0, 2)(5);
(0, 1)(1);
(0, 1)(2);
(0, 1)(-1);
(0, 1)(1);
(0, 1)(-1);
(0, 1)(2);
(5, 1)(-1);
(0, 1)(-1);
(0, 0);
The standard Huffman AC-Y table in the spec (TABLE-PAGE154) says 0/2 is code 01. We know that -3 is 01 in 2's comp. So we append 0101 to the stream and get to the next entry. 1/2 is 11011 from the table, and -3 is still 01, so we append 1101101 to the stream and keep going... all the way to the end, where we see a 0/0 (EOB), which is just 1010. Then we rinse and repeat for the 2 other components, then for the rest of the 8x8 pixel blocks in the image, yes? The DC value was -26, which is 00110 (size 5) in 2's comp without the MSB/sign. Size 5 for DC-Y codes to 110 according to the Huffman table in the spec (page 153). This means the bit stream should start:
110_00110_01_01_11011_01_...
Obviously the _ are just for readability, I don't add those to the actual file.
This is the image I am getting so far, for those curious: incorrect image. I hard-coded the 8x8 blocks to always match the ones from Wikipedia, so we should see a tiled form of the image; it should be off-color due to the 2 new chroma components (given the same exact values as Y).
I've been working on this for days, any help is much appreciated!!
I have a Keyence Line Laser System LJ-X 8000, that I use to scan the surface of different objects.
The controller saves the height information as a bitmap, with each pixel representing one height value. After a lot of tinkering, I found out that Keyence is not using the actual colors, but rather using the 24-bit RGB triplets as a form of binary storage. However, no combination of these bytes seems to work for me. Are there any common storage methods for 24-bit integers?
To decode those values, I did a scan covering the whole measurement range of the scanner, including some out-of-range values at the beginning and the end. If you look at the distribution of the values in each color plane, you can see that the first and third planes only use values up to 8 and 16 respectively, which means only 3 and 4 bits. This is also visible in the image itself, as it mainly shows a green color.
I concluded that Keyence uses the full byte of the green color plane, 3 bits of the first plane and 4 bits of the last plane to store the height information. Keyence seems to have chosen some weird 15-bit integer format to store their data.
With a little bit-shifting, and knowing that the scanner has a valid range of [-2.2, 2.2], I was able to build the following simple little (MATLAB) script to calculate the height information for each pixel:
scanBinVal = bitshift(uint16(scanIm(:,:,2)), 7) ...  % 8 bits of green
           + bitshift(uint16(scanIm(:,:,1)), 4) ...  % 3 bits of the first plane
           + bitshift(uint16(scanIm(:,:,3)), 0);     % 4 bits of the last plane
scanBinValScaled = interp1([0, 2^15], [-2.2, 2.2], double(scanBinVal));
Keyence offers software to convert those .bmp files into .csv files, but without an API to automate the process. As I will have to deal with a lot of these files, I needed to automate this process.
The values calculated from the RGB triplets are actually even more precise than the exported CSV, as the CSV only shows 4 digits after the decimal point.
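For batch processing, the same decoding can be sketched in Python with NumPy, assuming the reverse-engineered bit layout above (which is a guess, not a documented format):

```python
import numpy as np

def decode_keyence_height(rgb, z_min=-2.2, z_max=2.2):
    """Decode a Keyence height bitmap (H x W x 3, uint8) to heights.

    Assumes the reverse-engineered 15-bit layout described above:
    raw = green << 7 | red << 4 | blue, then a linear map from
    [0, 2**15] onto [z_min, z_max]. Not an officially documented format.
    """
    rgb = rgb.astype(np.uint16)
    raw = (rgb[..., 1] << 7) | (rgb[..., 0] << 4) | rgb[..., 2]
    return z_min + raw.astype(np.float64) * (z_max - z_min) / 2**15

# hypothetical pixel: red=7, green=255, blue=15 -> raw = 32767 (top of range)
px = np.array([[[7, 255, 15]]], dtype=np.uint8)
print(decode_keyence_height(px))
```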
I am new to NetLogo and was hoping someone could help me with creating turtles based on user input.
In the Interface tab I have a slider whose value ranges between 2 and 10. Depending on the value the user sets with this slider, that many turtles should be created.
I tried using multiple if statements, but there is a problem in the succeeding steps.
if (slider-value = 2) [create2]
if (slider-value = 3) [create3]
if (slider-value = 4) [create4]
if (slider-value = 5) [create5]
After creating the turtles using the above if conditions, I have to assign some rules to each individual turtle. I tried multiple if statements again, but it doesn't seem to work.
Can someone suggest a way, would really appreciate the help.
Thanks in advance!
Regards
You could more simply use the slider, like this:
create-turtles slider-value [
  ; things you want the turtles to do, for example
  set heading 4 * random 90
  set shape "turtle"
  set color green + random-normal 0 4
]
Is this what you are looking for?
I recommend a switch statement.
A switch statement cycles through all your possible commands, typically keyed on an int, and then selects the matching command.
So, for example, I could make a switch statement where, when the user presses the up arrow, the int 1 is the input; this is matched to a command that tells the turtle to move up so many pixels/units/cubes.
I hope that helps.
Given (x_top_left, y_top_left) and (x_low_right, y_low_right) in the Netlogo source, what should the width and height of the saved Netlogo applet be?
Background
I have a ton of authentic NetLogo files, prepared for courses and demos. Using Perl or Ruby, I'd like to export them in batch as applets to different files, possibly related by a table of contents in a left frame or so. Much like "save as applet", but in batch, to different HTML files.
All would be trivial to do were it not that I got stuck finding out which applet dimensions I am supposed to use in writing
<applet code="org.nlogo.lite.Applet" archive="NetLogoLite.jar"
width="???" height="???">
<param name="DefaultModel" value="netlogofile.nlogo">
</applet>
Notice the ???. I searched for other Netlogo file parsers and encountered https://github.com/NetLogo/NetLogo/wiki/Model-file-format, which is not specific enough and https://github.com/rikblok/dokuwiki-plugin-netlogo/blob/master/syntax/applet.php which is a parser but yields results which appear useless to me. (I got it running but it seems to parse the Netlogo source wrongly.)
Netlogo file format
I figured out that the Netlogo file format is like the following (comments after semicolon)
@#$#@#$#@
GRAPHICS-WINDOW
210 ; x-coord of upper left corner
10 ; y-coord of upper left corner
544 ; x-coord of lower right corner?
215 ; y-coord of lower right corner?
-1
-1
2.77 ; patch size
1
10 ; font size
1
1
1
0
1
0 ; world-wrap (x)
1 ; world-wrap (y)
-45 ; min-pxcor
71 ; max-pxcor
-33 ; min-pycor
29 ; max-pycor
0
0
1 ; show tick counter
ticks ; tick counter label
To get a feeling for the logics I parsed a few saved applets and got the following results:
(x_top_left, y_top_left) = (210, 10).
(x_low_right, y_low_right) = (649, 470).
Netlogo saves applet with width x height: 794 x 480.
(x_top_left, y_top_left) = (96, 10).
(x_low_right, y_low_right) = (535, 470).
Netlogo saves applet with width x height: 629 x 480.
(x_top_left, y_top_left) = (96, 10).
(x_low_right, y_low_right) = (483, 340).
Netlogo saves applet with width x height: 575 x 350.
(x_top_left, y_top_left) = (96, 10).
(x_low_right, y_low_right) = (396, 271).
Netlogo saves applet with width x height: 690 x 300.
From these data I tried to discover a pattern in these numbers but the relation between them frankly is beyond me.
My question is: given (x_top_left, y_top_left) and (x_low_right, y_low_right) in the Netlogo source, what should the width and height of the saved Netlogo applet be?
You have to look at the dimensions of all of the widgets in the Interface tab, compute the bounding box for all of them together, and then add some slop.
I know of two implementations of this calculation, both pretty craptastic.
One is the one in NetLogo itself. It's here, split across two files:
https://github.com/NetLogo/NetLogo/blob/22bd1361ab7ecc1c186448ebb2a77ba993b8fb8b/src/main/org/nlogo/app/AppletSaver.scala#L23-L47
https://github.com/NetLogo/NetLogo/blob/22bd1361ab7ecc1c186448ebb2a77ba993b8fb8b/src/main/org/nlogo/app/WidgetPanel.java#L112-L145
The other is the Perl scripts (Perl? Yeah, it was 2002; they've barely been touched since) on the NetLogo website that serve up the applet versions of the Models Library models. Those scripts are in a private repo, but I made a gist of the relevant section:
https://gist.github.com/SethTisue/98a1b92db00dcd6a4f79
I haven't looked at this stuff in donkey's years, but if you have questions about it, it's possible my memory could be jogged.
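In outline, both implementations do something like the following sketch (the padding constants here are illustrative placeholders, not NetLogo's actual slop values; those live in the AppletSaver.scala and WidgetPanel.java sources linked above):

```python
def applet_size(widgets, x_pad=10, y_pad=10):
    """Compute applet width/height from widget bounding boxes.

    widgets: list of (left, top, right, bottom) tuples taken from the
    GRAPHICS-WINDOW and other widget sections of the .nlogo file.
    The bounding box of all widgets, plus some slop, gives the size.
    """
    right = max(w[2] for w in widgets)
    bottom = max(w[3] for w in widgets)
    return (right + x_pad, bottom + y_pad)

# hypothetical widget rectangles from one model's Interface tab
print(applet_size([(210, 10, 649, 470), (10, 10, 200, 300)]))  # (659, 480)
```

Note that in the sample data above, the saved heights (480, 480, 350, 300) track the lowest widget bottom plus roughly 10 pixels, while the widths also depend on widgets other than the graphics window.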
I'm trying out the OpenStreetMap bundler program and I can't find details on the camera position data. The point cloud data is in a *.ply file that looks like this:
ply
format ascii 1.0
element face 0
property list uchar int vertex_indices
element vertex 1340
property float x
property float y
property float z
property uchar diffuse_red
property uchar diffuse_green
property uchar diffuse_blue
end_header
-1.967914e-001 -8.918888e-001 -3.318706e+000 92 86 88
-1.745216e-001 -2.186521e-001 -3.227759e+000 50 33 31
-1.585826e-001 -1.894233e-001 -3.271651e+000 61 43 43
...
-2.649703e-003 2.197792e-002 3.906710e-002 0 255 0
-2.354721e-003 2.235805e-002 -1.093058e-002 255 255 0
5.296331e-003 4.755635e-001 -1.298959e+000 255 0 0
3.155302e-003 4.634443e-001 -1.347420e+000 255 255 0
1.910245e-003 2.891324e-001 -1.070228e-001 0 255 0
2.508708e-003 2.884968e-001 -1.570152e-001 255 255 0
-2.246127e-002 -6.257610e-001 9.884196e-001 255 0 0
-2.333330e-002 -6.187732e-001 9.389180e-001 255 255 0
The last eight lines appear to be the positions for four cameras (from four images): one line is position, the next is orientation. The position points are colored either green or red, and the orientation points yellow.
I can't find info on this so I'm wondering if this is correct and also what does red and green mean? Good/bad data? Any other info about using osm-bundler results is helpful.
I'm also looking at how to get the camera position data from Bundler (note I'm not using osm-bundler but the original program). As well as outputting the PLY file, Bundler also outputs an ASCII file called bundle.out. This contains parameters that allow you to calculate the camera positions, as described in the Bundler documentation.
Bundler incrementally solves for the camera positions/poses and outputs the final answer in the bundle.out file. The .ply file contains point cloud vertices, faces, and RGB color information; it does not contain the camera poses. You can find information about the bundle.out file here. (osm-bundler uses Noah Snavely's bundler program, so this answer applies to both of your questions.)
http://www.cs.cornell.edu/~snavely/bundler/bundler-v0.4-manual.html#S6
So, you look at the first number in the second row to determine the number of cameras. The next number tells you the number of points which follow the cameras. Each camera entry consists of five rows.
<f> <k1> <k2> row one
<R> rows two, three, and four
<t> row five
So, lines one and two give you header information. Then each group of five rows is a separate camera entry, starting with camera number zero. If the rows for a camera contain only zeros, there is no data for that camera/image.
If the first two rows of bundle.out contain
#Bundle file v0.3
16 32675
There will be 16 cameras and 32675 points. The camera information will be on lines 3 through (16*5 + 2) = 82. In vi or emacs you can display line numbers to help you examine the file (in vi, :set number). Remember that the rotation matrix is three lines of three numbers, and the translation vector is the fifth and last line of a camera definition.
The points follow the camera definitions. You can find information about the format of points at the link I provided above.
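Putting the layout above together, here is a minimal sketch of a parser for the camera section (following the v0.3 format described in the manual; per the manual, the camera center can then be recovered as -R' * t):

```python
def parse_bundle_cameras(text):
    """Parse camera entries from a bundle.out file (Bundler v0.3 format).

    Returns a list of dicts with focal length f, distortion k1/k2,
    3x3 rotation R (row-major nested lists) and translation vector t.
    Cameras that failed to register have all-zero entries.
    """
    lines = [l for l in text.splitlines()
             if l.strip() and not l.startswith("#")]
    num_cameras, num_points = (int(x) for x in lines[0].split())
    cameras = []
    idx = 1
    for _ in range(num_cameras):
        f, k1, k2 = (float(x) for x in lines[idx].split())
        R = [[float(x) for x in lines[idx + r].split()] for r in (1, 2, 3)]
        t = [float(x) for x in lines[idx + 4].split()]
        cameras.append({"f": f, "k1": k1, "k2": k2, "R": R, "t": t})
        idx += 5
    # the num_points point records follow; their format is in the manual
    return cameras

sample = """# Bundle file v0.3
1 0
800.0 0.0 0.0
1 0 0
0 1 0
0 0 1
0 1 2
"""
cams = parse_bundle_cameras(sample)
print(cams[0]["f"], cams[0]["t"])  # 800.0 [0.0, 1.0, 2.0]
```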