I don't understand:
if we consider that the value 00001111 (15) is one byte, then an RGB pixel (220, 180, 155) is 3 bytes, whatever the values of the pixel.
So why, when I reduce the values of my pixels (with a bitshift operation or whatever), is the size of
my image not equal to pixel count x 3? By "pixel count" I mean the number of pixels brighter than fully black.
How does the mechanism work? Is it counted in bits and then divided by eight as an average?
If I have a 3 MB picture and I do a bitshift (factor 2 on each of the 3 RGB channels), I end up with a 300 KB picture.
Don't tell me 90% of my pixels turned fully black.
Thanks.
If you shift all the pixel values right by 2 places, you will have around 1/4 as many shades of red as before, and likewise for greens and blues. That means overall you will have vastly fewer colours, so your image may well end up with fewer than 256 colours, which means it can be palettised. It also means it is likely to compress better, because there will be more repetition of fewer unique sequences.
You can check if your image is palettised in several ways:
open it with PIL and check if image.mode contains a P
run exiftool on it and check if Colour Type is Palette
run ImageMagick on it with magick identify -verbose YOURIMAGE
You can count the number of unique colours in your image with ImageMagick using:
magick identify -format %k YOURIMAGE
Or you can do it in Python with the last part (entitled "Update") of this answer.
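If it helps, here is a minimal Pillow sketch of the mode check and the colour counting described above (the filename is a placeholder and this is only an illustration, not tested against your image):
from PIL import Image

# "input.png" is a placeholder for your file
img = Image.open("input.png")
print("mode:", img.mode)                # 'P' means the file on disk is palettised

rgb = img.convert("RGB")
shifted = rgb.point(lambda v: v >> 2)   # shift every channel right by 2 bits

# getcolors() returns None if there are more colours than maxcolors,
# so allow up to the full 24-bit range
print("unique colours before:", len(rgb.getcolors(maxcolors=2**24)))
print("unique colours after: ", len(shifted.getcolors(maxcolors=2**24)))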
Related
I am searching for a faster way to blur an image than using GaussianBlur.
The solution I am looking for can be a command-line solution, but I would prefer code in Perl notation.
Currently we use the Perl ImageMagick API to blur images:
# $image is our Perl object holding an ImageMagick image
# $level is a natural number between 1 and 10
$image->GaussianBlur('x' . $level);
This works fine, but as the level gets higher, the amount of time it consumes seems to grow exponentially.
Question: how can I improve the time used for the blurring operation?
Is there another, faster approach to blurring images?
I found that the suggested method of resizing the image to imitate blur makes the output look very pixelated for large sigma values like 25 or more. So I finally came to the idea of downscale-blur-enlarge, which gives a very nice result (almost indistinguishable from a plain blur with a large sigma):
# plain slow blur
convert -blur 0x25 sample.jpg blurred_slow.jpg
# much faster
convert -scale 10% -blur 0x2.5 -resize 1000% sample.jpg blurred_fast.jpg
On my i5 2.7 GHz it shows up to a 10x speed-up.
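For anyone doing this from code rather than the command line, here is a rough Pillow sketch of the same downscale-blur-enlarge idea (the filename is a placeholder and the 10% / 2.5 values just mirror the command above; treat it as an untested illustration):
from PIL import Image, ImageFilter

# "sample.jpg" is just a placeholder filename
img = Image.open("sample.jpg")
w, h = img.size

small = img.resize((max(1, w // 10), max(1, h // 10)))       # roughly -scale 10%
small = small.filter(ImageFilter.GaussianBlur(radius=2.5))   # roughly -blur 0x2.5
blurred = small.resize((w, h))                               # enlarge back to full size

blurred.save("blurred_fast.jpg")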
The documentation speaks of the difference between Blur and GaussianBlur.
There has been some confusion as to which operator, "-blur" or "-gaussian-blur", is better for blurring images. First of all, "-blur" is faster, but it does this using a two-stage technique: first in one axis, then in the other. The "-gaussian-blur" operator, on the other hand, is more mathematically correct, as it blurs in all directions simultaneously. The speed difference between the two can be enormous, by a factor of 10 or more, depending on the amount of blurring involved.
[...]
In summary, the two operators are slightly different, but only minimally. As "-blur" is much faster, use it. I do in just about all the examples involving blurring.
That would simply be:
$image->Blur( 'x' . $level );
But the Perl ImageMagick documentation has the same text for both Blur and GaussianBlur. I can't try it now; you would have to benchmark it yourself.
Blur: reduce image noise and reduce detail levels with a Gaussian operator of the given radius and standard deviation (sigma).
GaussianBlur: reduce image noise and reduce detail levels with a Gaussian operator of the given radius and standard deviation (sigma).
An alternative that the documentation also lists is resizing the image to be very tiny, and then enlarging again.
Using large sigma values for image blurring is very slow. But one technique can be used to speed up this process. This, however, is only a rough method and could use some mathematical rigor to improve results. Essentially, the reason large blurs are slow is that you need a large window or 'kernel' to merge lots of pixels together, for each and every pixel in the image. However, resizing (making the image smaller) does the same thing but generates fewer pixels in the process. The technique is basically: shrink the image, then enlarge it again to generate the heavily blurred result. The Gaussian filter is especially useful for this, as you can directly specify a Gaussian sigma define.
The example command line code is this:
convert rose: -blur 0x5 rose_blur_5.png
convert rose: -filter Gaussian -resize 50% \
-define filter:sigma=2.5 -resize 200% rose_resize_5.png
Not sure if I could still help OP with this, but I recently tried the same for a blurred screenlock picture.
I found that omitting the -blur part saves even more calculation time and still delivers great results for a 4K picture:
convert in.png -scale 2.5% -resize 4000% out.png
# real: 0.174s user: 0.144s size: 1.2MiB
convert in.png -scale 10% -blur 0x2.5 -resize 1000% out.png
# real: 0.136s user: 2.117s size: 1.2MiB
convert in.png -blur 0x25 out.png
# real: 2.425s user: 21.408s size: 1KiB
However, you can't go lower than 2.5% with 3840x2160, otherwise the output comes back with different dimensions. I guess this minimum value differs for pictures of other sizes.
It should be noted that the resulting file sizes differ noticeably!
I am trying to write a sequence of color images to a dicom file in Matlab. Each image is of type uint16. The sequence is stored in a 4D matrix named output of size 200x360x3x360 (num of rows x num of cols x num of channels x num of images). When I execute dicomwrite(output,'outputfile.dcm'), it gives the following error:
It says data bit depth is 8 but I've ensured that each image is 16-bit. Not sure what's going wrong.
The documentation for dicomwrite says it can write color images as well. In fact, dicomread can read color DICOM images such that the size of the matrix which stores the read data is 200x360x3x360, so I guess it should be possible to write color images with dicomwrite too. Any help in this regard is appreciated. There is a related post, but it doesn't talk about a color image sequence.
The comment by JohnnyQ is correct.
From this page down in section A.8.5.4, they list the Multi-frame True Color SC Image IOD Content Constraints (partial list quote):
In the Image Pixel Module, the following constraints apply:
Samples per Pixel (0028,0002) shall be 3
Bits Allocated (0028,0100) shall be 8
Bits Stored (0028,0101) shall be 8
High Bit (0028,0102) shall be 7
Pixel Representation (0028,0103) shall be 0
It seems MATLAB will not do the conversion for you, so you should down-convert each 16-bit color channel to 8-bit for DICOM.
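The down-conversion itself is just a matter of reducing each sample from the 16-bit range to the 8-bit range. In MATLAB something like im2uint8(output) or uint8(bitshift(output, -8)) should do it, but verify against your toolbox; here is the same idea sketched in Python/NumPy purely as an illustration:
import numpy as np

# Stand-in for the 200x360x3x360 uint16 array from the question
output = np.zeros((200, 360, 3, 360), dtype=np.uint16)

# Keep the top 8 bits of every 16-bit sample (i.e. divide by 256),
# giving uint8 data that satisfies the "Bits Stored shall be 8" constraint
output8 = (output >> 8).astype(np.uint8)
print(output8.dtype, output8.shape)   # uint8 (200, 360, 3, 360)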
As can be seen in this example, each channel (R, G, B) in a BMP file gets its own value. A 24-bit BMP image has 8 bits for R, 8 bits for G, and 8 bits for B. I saved an image in MS Paint as monochrome (black and white). Its properties say the image's depth is 1 bit. The question is: which channel gets this 1 bit, R, G or B? Isn't it mandatory that all three channels get some value? I am not able to understand how MS Paint has drawn this BMP image using 1 bit.
Thanks in advance for your replies.
There's multiple ways to store a bitmap. In this case, the important distinction is RGB versus indexed.
In an RGB bitmap, every pixel is associated with three separate values, one for red, another for green, and another for blue. Depending on the bit depth and the specific pixel format, the different colour channels can have different numbers of bits allocated to them. The simplest case is typical true colour with 8 bits for each of the channels, and another 8 bits (optional) for the alpha channel. However, some pixel formats allocate the bits differently. The idea is that the human eye has different sensitivity to each of those channels, and you can save space and improve visual quality by allocating the bits in a smarter way. For example, one of the more popular pixel formats is BGR-565: 16 bits total, 5 bits for blue, 6 bits for green and 5 bits for red.
In an indexed bitmap, the value stored with each pixel is an index (hence "indexed bitmap") into a palette (a colour table). The palette is usually a simple table of colours, using RGB "pixel" formats to assign each index some specific colour. For example, index 0 might mean black, 1 might mean turquoise, etc.
In this case, the bit depth doesn't map directly onto colour quality: you're not trying to cover the whole colour space, you're focusing on some subset of the possible colours instead. For example, if you have 256 shades of grey (say, from black to white), a true-colour bitmap would need at least three bytes per pixel (and each of those three bytes would have the same value), while an indexed bitmap with a palette of all the grey colours requires only one byte per pixel (plus the cost of the palette: 256 * 3 bytes). There are a lot of benefits to using indexed bitmaps, and a lot of tricks to improve the visual quality further without using more bits per pixel, but that would be way beyond the scope of this question.
This also means that you only need as many possible values as you want to show. If you only need 16 different colours, you only need four bits per pixel. If you only need a monochromatic bitmap (that is, a pixel is either "on" or "off"), you only need one bit per pixel, and that's exactly your case. If you know the number of distinct colours you need, you can easily get the required bit depth by taking a base-2 logarithm (e.g. log2(256) = 8).
So let's say you have an image that only uses two colours, black and white. You'll build a palette with two colours, black and white. And for each of the pixels in the bitmap, you save 0 if it's black, or 1 if it's white.
Now, when you want to draw a bitmap like this, you simply read the palette (0 -> RGB(0, 0, 0), 1 -> RGB(255, 255, 255) in this case), and then you read one pixel after another. If the bit is zero, paint a black pixel. If it's one, paint a white pixel. Done :)
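To see this in practice, here is a tiny Pillow sketch (just an illustration; the exact on-disk layout is up to the encoder) that builds a 1-bit image and saves it as a BMP with a two-entry colour table:
from PIL import Image

# A tiny 4x2 monochrome bitmap: Pillow mode "1" means one bit per pixel
bw = Image.new("1", (4, 2), 0)      # start fully black
bw.putpixel((0, 0), 255)            # turn one pixel "on" (white)
bw.putpixel((3, 1), 255)
print(list(bw.getdata()))           # [255, 0, 0, 0, 0, 0, 0, 255]

# ceil(log2(number of colours)) gives the bits per pixel you need:
# 2 colours -> 1 bit, 16 colours -> 4 bits, 256 colours -> 8 bits

# Saved as BMP, this becomes a 1-bit file with a two-entry colour table
bw.save("mono.bmp")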
No, it depends on the type you chose to save as. Because you chose to save as monochrome, the RGB mapping is not used here; the mapping used is one byte per pixel, ranging from white to black.
Each type has its own way of mapping. Saving as 24-bit gives you RGB mapping; saving as 256-colour maps a byte to each pixel, where each value represents a colour (you can find the table on the internet). As for monochrome, you'd have the same as a 256-colour bitmap, but the colour table would only have white and black.
Sorry for the mistake: the way I explained monochrome is actually how grayscale works. Monochrome uses one bit per pixel to indicate whether it is black or white, depending on the value of each bit; no mapping table is used.
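To make the "one bit per pixel" point concrete, here is a small Python sketch, independent of any real file format, showing eight black/white pixels packed into a single byte:
# Eight monochrome pixels: 1 = white, 0 = black
pixels = [1, 0, 0, 1, 1, 1, 0, 1]

# Pack them into one byte, most significant bit first
packed = 0
for bit in pixels:
    packed = (packed << 1) | bit
print(f"{packed:08b} = {packed}")   # 10011101 = 157

# Unpack the byte back into eight pixels
unpacked = [(packed >> (7 - i)) & 1 for i in range(8)]
print(unpacked)                     # [1, 0, 0, 1, 1, 1, 0, 1]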
I have 2 images that look nearly identical. The histogram for one (256 bins) has intensities distributed pretty evenly throughout, while the other has intensities only in the lowest and highest bins. Why would this be? Wouldn't the second one then appear binary (that's not the case)?
Think about it this way: Imagine you are taking a histogram of two grayscale images with each pixel represented by a color value 0-255. One image contains pixels that all have gray levels of 128. The second image contains a "checkerboard" pattern (pixels alternate between 0 and 255). If you step back far enough that you no longer see individual pixels, they will appear identical to the naked eye. Your brain "averages" the alternating black and white pixels into a field of gray.
This is what your images are doing. The first image has colors distributed evenly throughout the range and the second image has concentrations of specific colors, but if you calculate an average color for the image (and also for sub-sections within the image) you should see similar values for both.
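Here is a small NumPy sketch of that thought experiment: a flat mid-gray image and a 0/255 checkerboard have practically the same average level, yet completely different histograms:
import numpy as np

# Image 1: every pixel is mid-gray (128)
flat = np.full((100, 100), 128, dtype=np.uint8)

# Image 2: checkerboard alternating 0 and 255
y, x = np.indices((100, 100))
checker = np.where((x + y) % 2 == 0, 0, 255).astype(np.uint8)

print("average levels:", flat.mean(), checker.mean())   # 128.0 vs 127.5

hist_flat, _ = np.histogram(flat, bins=256, range=(0, 256))
hist_checker, _ = np.histogram(checker, bins=256, range=(0, 256))
print("flat image occupies bins:   ", np.nonzero(hist_flat)[0])      # [128]
print("checker image occupies bins:", np.nonzero(hist_checker)[0])   # [  0 255]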
Never trust your eyes! They will always lie to you.
Consider this silly example, which can be illustrative here: an X-ray 'photo' is nothing more than black and white dots. But as they are small and mixed throughout the image, your eyes see different shades of gray.
The same can happen in a digital image where, although the pixels may all have the same size, they can be black and white and distributed in the image in such a way that you see it as having more gray levels. This is called halftoning.
Without seeing the images it's hard to say, but it sounds like the second may be slightly clipped.
The difference could also just be a slight difference in contrast between the images that is not visible to the naked eye.
In Fireworks, when you export a PNG8 file with alpha transparency, the resulting file will be something like this:
png8-fireworks.png: PNG image data, 500 x 500, 8-bit colormap, non-interlaced
If you convert a 32bit PNG using other tools (PNGOUT, Smush.it) the result looks like this:
png24-smushit.png: PNG image data, 500 x 500, 8-bit/color RGBA, non-interlaced
png8-pngout.png: PNG image data, 500 x 500, 8-bit/color RGBA, non-interlaced
What exactly is the difference? They both have alpha transparency, but the Fireworks file is 8 KB while the others are 20 KB. Mind you, the Fireworks file is noticeably lower quality (namely, banding on gradients).
For some images the PNG8+alpha from Fireworks works great and has a comparatively tiny file size. I just haven't been able to figure out what Fireworks is doing and how it differs from the other methods.
PNG8 is a very efficient format. It finds the unique colors in the image and saves only those in a small palette. The cool part is that it also saves alpha transparency in the palette with each color. (If you have three pure reds (#FF0000) in your image, but each has a different alpha value, say 255, 128 and 65, it will save three entries in the palette.)
In Fireworks you can also choose to limit the palette size to a power of 2, so you can reduce the colors used for more savings. Often a 256-color image will look fine at 64 colors and save a lot of weight.
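I don't know exactly what quantiser Fireworks uses, but you can approximate this kind of palettised PNG with alpha in Pillow. A rough sketch follows (the filenames are placeholders, and on older Pillow versions you may need to pass method=Image.FASTOCTREE explicitly for RGBA input):
from PIL import Image

# "input_rgba.png" stands for your 32-bit RGBA source
img = Image.open("input_rgba.png").convert("RGBA")

# Quantise to at most 64 palette entries; each entry keeps its own alpha
pal = img.quantize(colors=64)
print(pal.mode)          # 'P' -> colormap / palettised, like the Fireworks file

pal.save("png8_like.png")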
From the sites of both tools:
PNGOUT:
It won't convert an image to a color type or bit depth that cannot losslessly store the image.
It won't reduce the number of colors being used in an image, or convert the colors to grayscale unless all the colors correspond to PNG grayscale values already.
Smush.it:
It is a "lossless" tool […]
Neither gives you a 256-colour paletted PNG: that is the difference between "colormap" (= palette) and "rgba" (truecolor = 2^8 reds x 2^8 greens x 2^8 blues x 2^8 alpha values, with 2^8 = 256).
Fireworks does.
PNG-8 means 8 bits per pixel, which means it can only display 256 different colours (from a palette).
24 and 32 bits per pixel allow you to use far more colours (and hence get nice smooth gradients), but come at the cost of file size.