`char id=52 x=180 y=5 width=50 height=50 xoffset=0 yoffset=0 xadvance=50 page=0 chnl=0`
The string above is the definition of char '4' (id 52 is the Unicode/ASCII code point for '4').
I wonder what the properties 'page' and 'chnl' mean.
The BMFont format is documented here: AngelCode bmfont file format
page The texture page where the character image is found.
chnl The texture channel where the character image is found (1 = blue, 2 = green, 4 = red, 8 = alpha, 15 = all channels).
page is used when the glyphs are spread over multiple images (pages). In the common section in the header of the file you can see the number of pages:
common
pages The number of texture pages included in the font.
The channels (red/green/blue/alpha) can store different information for the glyph.
common
packed Set to 1 if the monochrome characters have been packed into each of the texture channels. In this case alphaChnl describes what is
stored in each channel.
common
alphaChnl Set to 0 if the channel holds the glyph data, 1 if it holds the outline, 2 if it holds the glyph and the outline, 3 if it's set to zero, and 4 if it's set to one.
common
redChnl, greenChnl, blueChnl Same meanings as alphaChnl, for the red, green, and blue channels respectively.
I have two values (between 0 and 255), from audio sources.
I'm trying to display two different RGB colours in a rectangle split down the middle, with each half's colour driven by one of the two changing numbers, updating as the numbers change.
Thanks for your help; I know it's simple, but I'm really stuck.
Have a nice day.
To show a “split rectangle,” I would suggest using two panel objects side by side.
You can control the panel object’s colour using the bgcolor (or bgfillcolor) message followed by a list of four numbers, e.g.: bgcolor 1 0 0 0.5
The numbers correspond to Red, Green, Blue, and Alpha (opacity) and are all in the range of 0–1. So bgcolor 1 0 0 0.5 would set the colour of the panel to 100% red and 50% transparent.
In your case, you have values 0–255 so you need to scale those down to 0–1 and then decide how exactly you want to map them. For example, if you wanted each panel to be white when the value is 255, black when it is 0, and some shade of grey in between, you could do something like this:
value → [scale 0 255 0. 1.] → (bgcolor $1 $1 $1 1) → [panel]
Some notes:
The last two arguments of scale must be floats, otherwise it will just output integers 0 and 1.
The $1 syntax of the message box gets replaced by input. You could decide for example to only control the green part of the colour: bgcolor 0 $1 0 1.
If your value changes very frequently, you might want to limit how often it updates the colour. Most screens won’t update faster than 60 times per second, so changing the colour every millisecond for example is a waste of resources. If you’re using snapshot~ to get the value from audio, you could use an argument of 17 or higher, to limit the values to less than 60 times per second (1000ms / 60 = 16.6…). Alternatively, you could use the speedlim object to limit how often you recalculate your colour.
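Outside Max, the same mapping is easy to sketch. Here is a minimal Python version of the scaling step (the function names are mine, purely for illustration): it remaps 0–255 to 0.–1. the way the [scale] object does, and builds the four-number RGBA list the message box would send:

```python
def scale(value, in_lo, in_hi, out_lo, out_hi):
    # Equivalent of Max's [scale] object: linear remap of one range to another.
    return out_lo + (value - in_lo) * (out_hi - out_lo) / (in_hi - in_lo)

def grey_rgba(audio_value):
    # 0 -> black, 255 -> white, everything in between -> grey; alpha fixed at 1.
    v = scale(audio_value, 0, 255, 0.0, 1.0)
    return [v, v, v, 1.0]

print(grey_rgba(255))  # [1.0, 1.0, 1.0, 1.0]
print(grey_rgba(0))    # [0.0, 0.0, 0.0, 1.0]
```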
In brain tumor segmentation, can I treat both the images and the labels as colour images?
Or can the images have 3 channels while the ground truth / mask / label must have 1 channel? Or must both have 1 channel? I have used 3 channels for both (images & GT) with a U-Net architecture, and the output I get is a blank coloured image. Why is the output like this?
There is no need to use coloured images for biomedical image segmentation. The value in a CT/MR image has a specific meaning, denoting different structures such as bones or vessels.
If you use 3 channels, I don't know whether the values keep that meaning. I also do not recommend making the GT a 3-channel image, because each voxel value denotes a class; in your case, probably 1–n for the different kinds of tumour and 0 for background.
Thus, a 3-channel representation will lose some semantic information and make the problem more complex.
I'm trying to write a TIFF image with RMagick that tesseract can process. Tesseract objects if bits per pixel is > 32 or samples per pixel is other than 1, 3 or 4.
With the defaults, Image.write generates 3 (RGB) samples plus 1 alpha channel at 16-bits per sample for a total of 64 bits per pixel, violating the first constraint.
If I set the colorspace to GRAYColorspace as follows, it still outputs the alpha channel, giving two samples per pixel, violating the second constraint.
Image.write('image.tif') {self.colorspace = GRAYColorspace}
Per the RMagick documentation, the alpha channel is ignored on method operations unless specified, but even if I do self.channel(GREYChannel), the alpha channel is still output.
I know I can run convert on the file afterwards, but I'd like to find a solution that avoids that.
Here is the tiffinfo output for the file currently generated:
TIFF Directory at offset 0x9c48 (40008)
Image Width: 100 Image Length: 100
Bits/Sample: 16
Compression Scheme: None
Photometric Interpretation: min-is-black
Extra Samples: 1<unassoc-alpha>
FillOrder: msb-to-lsb
Orientation: row 0 top, col 0 lhs
Samples/Pixel: 2
Rows/Strip: 20
Planar Configuration: single image plane
Page Number: 0-1
DocumentName: image-gray-colorspace.tif
White Point: 0.3127-0.329
PrimaryChromaticities: 0.640000,0.330000,0.300000,0.600000,0.150000,0.060000
As can be seen in this example, each channel (R, G, B) in a BMP file takes an input. A 24-bit BMP image has 8 bits for R, 8 bits for G, and 8 bits for B. I saved an image in MS Paint as monochrome (black and white). Its properties say the image's depth is 1 bit. The question is: which channel gets this 1 bit: R, G, or B? Isn't it mandatory that all three channels get some value? I am not able to understand how MS Paint has drawn this BMP image using 1 bit.
Thanks in advance for your replies.
There's multiple ways to store a bitmap. In this case, the important distinction is RGB versus indexed.
In an RGB bitmap, every pixel is associated with three separate values, one for red, another for green, and another for blue. Depending on the bit depth and the specific pixel format, the colour channels can have different amounts of bits allocated to them. The simplest case is typical true colour, with 8 bits for each of the channels and another optional 8 bits for the alpha channel. However, some pixel formats allocate the bits differently: the human eye has different sensitivity to each of those channels, so you can save space and improve visual quality by allocating the bits in a smarter way. For example, one of the more popular pixel formats is BGR-565: 16 bits total, with 5 bits for blue, 6 bits for green and 5 bits for red.
In an indexed bitmap, the value stored for each pixel is an index (hence "indexed bitmap") into a palette (a colour table). The palette is usually a simple table of colours, using RGB pixel formats to assign each index some specific colour. For example, index 0 might mean black, 1 might mean turquoise, etc.
In this case, the bit depth doesn't map directly onto colour quality: you're not trying to cover the whole colour space, you're focusing on some subset of the possible colours instead. For example, if you have 256 shades of grey (say, from black to white), a true-colour bitmap would need at least three bytes per pixel (and each of those three bytes would have the same value), while an indexed bitmap with a palette of all the grey shades requires only one byte per pixel (plus the cost of the palette: 256 * 3 bytes). There are many benefits to using indexed bitmaps, and many tricks to improve the visual quality further without using more bits per pixel, but that would be way beyond the scope of this question.
This also means that you only need as many possible values as you want to show. If you only need 16 different colours, you only need four bits per pixel. If you only need a monochromatic bitmap (that is, a pixel is either "on" or "off"), you only need one bit per pixel, and that's exactly your case. Given the number of distinct colours you need, you can get the required bit depth by taking a base-2 logarithm (e.g. log2(256) = 8).
So let's say you have an image that only uses two colours, black and white. You build a palette with two entries, black and white. Then for each pixel in the bitmap, you save 0 if it's black, or 1 if it's white.
Now, when you want to draw a bitmap like this, you simply read the palette (0 -> RGB(0, 0, 0), 1 -> RGB(255, 255, 255) in this case), and then you read one pixel after another. If the bit is zero, paint a black pixel. If it's one, paint a white pixel. Done :)
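The decoding step above can be sketched in a few lines of Python. This is an illustrative simplification, not the actual BMP file layout (a real BMP also has headers and row padding); it just shows how 1-bit-per-pixel indices turn into colours via a two-entry palette:

```python
# Two-entry palette: index 0 = black, index 1 = white.
PALETTE = [(0, 0, 0), (255, 255, 255)]

def decode_1bpp_row(row_bytes, width):
    # Unpack 'width' pixels from packed bytes, MSB-first within each byte,
    # looking each 1-bit index up in the palette.
    pixels = []
    for i in range(width):
        byte = row_bytes[i // 8]
        bit = (byte >> (7 - i % 8)) & 1
        pixels.append(PALETTE[bit])
    return pixels

# One row of 8 pixels packed into the byte 0b10100000:
row = decode_1bpp_row(bytes([0b10100000]), 8)
print(row[0], row[1])  # (255, 255, 255) (0, 0, 0)
```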
No; it depends on the format you chose to save as. Because you chose monochrome, the RGB mapping is not used here; instead one byte is used per pixel, ranging from white to black.
Each type has its own mapping. Saving as 24-bit gives you RGB mapping; saving as 256 colours maps a byte to each pixel, where each value represents a colour (you can find the table on the internet). As for monochrome, you'll have the same as a 256-colour bitmap, but the colour table will only have white and black.
Sorry for the mistake: what I described for monochrome is actually greyscale. Monochrome uses one bit per pixel to indicate whether the pixel is black or white, depending on the value of each bit; no mapping table is used.
I'm trying out the OpenStreetMap bundler program and I can't find details on the camera position data. The point cloud data is in a *.ply file that looks like this:
ply
format ascii 1.0
element face 0
property list uchar int vertex_indices
element vertex 1340
property float x
property float y
property float z
property uchar diffuse_red
property uchar diffuse_green
property uchar diffuse_blue
end_header
-1.967914e-001 -8.918888e-001 -3.318706e+000 92 86 88
-1.745216e-001 -2.186521e-001 -3.227759e+000 50 33 31
-1.585826e-001 -1.894233e-001 -3.271651e+000 61 43 43
...
-2.649703e-003 2.197792e-002 3.906710e-002 0 255 0
-2.354721e-003 2.235805e-002 -1.093058e-002 255 255 0
5.296331e-003 4.755635e-001 -1.298959e+000 255 0 0
3.155302e-003 4.634443e-001 -1.347420e+000 255 255 0
1.910245e-003 2.891324e-001 -1.070228e-001 0 255 0
2.508708e-003 2.884968e-001 -1.570152e-001 255 255 0
-2.246127e-002 -6.257610e-001 9.884196e-001 255 0 0
-2.333330e-002 -6.187732e-001 9.389180e-001 255 255 0
The last eight lines appear to be the positions for four cameras (from four images). One line is position, second line is orientation. The position colors are either green or red and the orientation is yellow.
I can't find info on this, so I'm wondering if this is correct, and also what do red and green mean? Good/bad data? Any other info about using osm-bundler results is helpful.
I'm also looking at how to get the camera position data from Bundler (note I'm not using osm-bundler but the original program). However, as well as outputting the PLY file bundler also outputs an ASCII file called bundle.out. This contains parameters that allow you to calculate the camera positions, as described in the bundler documentation.
Bundler incrementally solves for the camera positions/poses and outputs the final answer in the bundle.out file. The .ply file contains point-cloud vertices, faces, and RGB colour information; it does not contain the camera poses. You can find information about the bundle.out file here. (osm-bundler uses Noah Snavely's bundler program, so this answer applies to both of your questions.)
http://www.cs.cornell.edu/~snavely/bundler/bundler-v0.4-manual.html#S6
So, you look at the first number in the second row to determine the number of cameras. The next number tells you the number of points, which follow the cameras. Each camera entry consists of five rows:
<f> <k1> <k2> row one
<R> rows two, three, and four
<t> row five
So lines one and two give you header information. Then each group of five rows is a separate camera entry, starting with camera number zero. If a camera's rows contain only zeros, there is no data for that camera/image.
If the first two rows of bundle.out contain
#Bundle file v0.3
16 32675
There will be 16 cameras and 32675 points. The camera information will be on lines
3 through (16*5 + 2) = 82. In vi or emacs you can display line numbers to help you examine the file (in vi, :set number). Remember that the rotation matrix is three lines of three numbers, and the translation vector is the fifth and last line of a camera definition.
The points follow the camera definitions. You can find information about the format of points at the link I provided above.
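Based on the format described above, a minimal Python sketch of pulling the camera entries out of bundle.out might look like this (reading from a string here for illustration; in practice you would read the file):

```python
# Minimal sketch: parse the header and camera entries of a bundle.out file.
# Per camera (5 rows): <f k1 k2>, three rows of R, one row of t.
def parse_bundle(text):
    lines = [l for l in text.splitlines() if l and not l.startswith("#")]
    num_cams, num_points = map(int, lines[0].split())
    cameras = []
    for i in range(num_cams):
        block = lines[1 + i * 5 : 1 + (i + 1) * 5]
        f, k1, k2 = map(float, block[0].split())
        R = [list(map(float, row.split())) for row in block[1:4]]
        t = list(map(float, block[4].split()))
        cameras.append({"f": f, "k1": k1, "k2": k2, "R": R, "t": t})
    return num_points, cameras

sample = """# Bundle file v0.3
1 2
800.0 0.0 0.0
1 0 0
0 1 0
0 0 1
0.5 -0.25 2.0
"""
num_points, cams = parse_bundle(sample)
print(num_points, cams[0]["f"], cams[0]["t"])  # 2 800.0 [0.5, -0.25, 2.0]
```

Per the manual linked above, the camera's position in world coordinates can then be recovered from these parameters as -R' * t (the negated transpose of the rotation matrix times the translation vector).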