I'm reading in an image that doesn't have an alpha channel:
my $image = Image::Magick->new;
$image->Read("./noalpha.png");
And then trying to set certain pixels to a different color/alpha value:
my @color = ( 0.2, 0.4, 0.6, $alpha );
$image->SetPixel( x=>$X, y=>$Y, channel=>'RGBA', normalize=>'True', color => \@color);
But unless the starting image file already had an alpha channel, the file I write:
$image->Write('out.png');
doesn't contain an alpha channel.
I've been reading through the PerlMagick documentation, but I must not be looking for the right thing. Is there a way to add an alpha channel to my $image object?
Do I need to create a new image object with the size of the original image and rewrite everything to that one?
The existence of an alpha channel is an attribute of the image, which needs to be turned on:
$image->Set(alpha => 'On');
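For comparison, the same idea in Python with Pillow (a minimal sketch, not the PerlMagick API; the coordinates and the 128 alpha are placeholder values): an image read without an alpha channel has to be converted to RGBA before per-pixel alpha writes show up in the output file.

from PIL import Image

im = Image.open("noalpha.png").convert("RGBA")  # convert() adds the missing alpha channel

# 0.2, 0.4, 0.6 in normalized form correspond to 51, 102, 153 in 8-bit
im.putpixel((10, 20), (51, 102, 153, 128))

im.save("out.png")  # the written PNG now carries an alpha channel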
I'm trying to find the algorithm behind this blending in Rust.
Searching through the image crate source, the blending method used is 'src-over', i.e. 'SrcAlpha, InvSrcAlpha', if I'm correct: source.
So the only thing I need to change is the source factor, from SrcAlpha to One.
The Unity docs say: "The value of this input is one. Use this to use the value of the source or the destination color."
So the source value should be used as-is, without multiplying by alpha.
I tried to change that:
let (bg_r_a, bg_g_a, bg_b_a) = (bg_r * bg_a, bg_g * bg_a, bg_b * bg_a);
let (fg_r_a, fg_g_a, fg_b_a) = (fg_r * fg_a, fg_g * fg_a, fg_b * fg_a);
to
let (bg_r_a, bg_g_a, bg_b_a) = (bg_r * bg_a, bg_g * bg_a, bg_b * bg_a);
let (fg_r_a, fg_g_a, fg_b_a) = (fg_r * 1., fg_g * 1., fg_b * 1.);
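For reference, here is a minimal sketch of both variants in Python (my own naming, not the image crate's; straight, non-premultiplied RGBA in [0, 1]). Note that with the One source factor, the final un-premultiply can push channels past 1.0 unless you clamp, which would match the overflow and the white image you describe:

def src_over(src, dst, src_factor_one=False):
    # src and dst are straight (non-premultiplied) RGBA tuples in [0, 1]
    sr, sg, sb, sa = src
    dr, dg, db, da = dst
    f = 1.0 if src_factor_one else sa           # One vs SrcAlpha source factor
    out_a = sa + da * (1.0 - sa)                # alpha still blends as SrcAlpha/InvSrcAlpha
    if out_a == 0.0:
        return (0.0, 0.0, 0.0, 0.0)
    def channel(s, d):
        c = (s * f + d * da * (1.0 - sa)) / out_a   # blend premultiplied, then un-premultiply
        return min(c, 1.0)                          # with f = 1.0 this can exceed 1.0, hence the clamp
    return (channel(sr, dr), channel(sg, dg), channel(sb, db), out_a)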
Here's an example result
The 'expected' is in fact not the exact result that I want, but it is very close. I produced it with a manual alpha unmultiply and a manual color boost (multiplying each channel by 5).
When debugging the blend function I noticed that r, g, b overflow the color range (more than 1.0), which makes the result image white, while the alpha channel remains very low.
So is the algorithm correct?
Otherwise the issue comes from my source data.
In pyBullet, I have struggled a bit with generating a dataset. What I want to achieve is to get pictures of what the camera is seeing: img = p.getCameraImage(224, 224, renderer=p.ER_BULLET_HARDWARE_OPENGL)
Basically: to get the images that are seen in Synthetic Camera RGB data and Synthetic Camera Depth Data (especially this one), which are the camera windows you can see in the following picture on the left.
p.resetDebugVisualizerCamera(cameraDistance=0.5, cameraYaw=yaw, cameraPitch=pitch, cameraTargetPosition=[center_x, center_y, 0.785])
img = p.getCameraImage(224, 224, renderer=p.ER_BULLET_HARDWARE_OPENGL)
rgbBuffer = img[2]
depthBuffer = img[3]
list_of_rgbs.append(rgbBuffer)
list_of_depths.append(depthBuffer)
rgbim = Image.fromarray(rgbBuffer)
depim = Image.fromarray(depthBuffer)
rgbim.save('test_img/rgbtest'+str(counter)+'.jpg')
depim.save('test_img/depth'+str(counter)+'.tiff')
counter += 1
I have already run the following, so I don't know whether it is related to the settings. p.configureDebugVisualizer(p.COV_ENABLE_DEPTH_BUFFER_PREVIEW, 1)
I have tried several methods because the depth part is complicated. I don't understand whether it needs to be treated separately because of the pixel color information, or whether I need to work with the projection and view matrices.
I need to save it as a .tiff because I get some "cannot save F to png" errors. I tried playing a bit with the bit depth but accomplished nothing. In case you ask, this is what I tried:
# depthBuffer[depthBuffer > 65535] = 65535
# im_uint16 = np.round(depthBuffer).astype(np.uint16)
# depthBuffer = im_uint16
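For what it's worth, a sketch of a conversion that should work, assuming depthBuffer is the float depth in [0, 1] that getCameraImage returns (the clamp at 65535 above does nothing on [0, 1] data, and rounding it directly maps everything to 0 or 1, which is likely why it accomplished nothing):

import numpy as np
from PIL import Image

# Scale [0, 1] floats to the full uint16 range before casting
depth16 = (np.clip(depthBuffer, 0.0, 1.0) * 65535).round().astype(np.uint16)
Image.fromarray(depth16).save('test_img/depth16.png')  # Pillow can write 16-bit grayscale PNG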
The following is an example of the .tiff image.
And finally, just to note that these depth images keep changing: looking through all of them, then at the RGB images, and then back at the depth images shows different content, despite it being the same image. I have never seen anything like this before.
I thought, "I managed to fix this some time ago, might as well post the answer I found."
The data structure of img has to be taken into account!
import numpy as np
from PIL import Image
import matplotlib.pyplot as plt

IMG_SIZE = 224           # must match the width/height passed to getCameraImage
near, far = 0.01, 10.0   # placeholder values; must match the projection used for rendering
# obj_name and counter come from the surrounding dataset loop

img = p.getCameraImage(IMG_SIZE, IMG_SIZE, shadow=False, renderer=p.ER_BULLET_HARDWARE_OPENGL)
rgb_opengl = np.reshape(img[2], (IMG_SIZE, IMG_SIZE, 4)).astype(np.uint8)  # RGBA pixels
depth_buffer_opengl = np.reshape(img[3], [IMG_SIZE, IMG_SIZE])  # non-linear z-buffer values in [0, 1]
depth_opengl = far * near / (far - (far - near) * depth_buffer_opengl)  # linearized metric depth
seg_opengl = np.reshape(img[4], [IMG_SIZE, IMG_SIZE]) * 1. / 255.  # segmentation mask
rgbim = Image.fromarray(rgb_opengl)
rgbim_no_alpha = rgbim.convert('RGB')  # drop alpha so the image can be saved as JPEG
rgbim_no_alpha.save('dataset/'+obj_name+'/'+ obj_name +'_rgb_'+str(counter)+'.jpg')
# plt.imshow(depth_buffer_opengl)
plt.imsave('dataset/'+obj_name+'/'+ obj_name+'_depth_'+str(counter)+'.jpg', depth_buffer_opengl)
# plt.show()
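One caveat: the near and far used in the linearization must match the projection the image was rendered with. If you are not already passing an explicit projection matrix, a sketch of making it explicit (fov and aspect here are placeholders):

proj = p.computeProjectionMatrixFOV(fov=60, aspect=1.0, nearVal=near, farVal=far)
img = p.getCameraImage(IMG_SIZE, IMG_SIZE, projectionMatrix=proj,
                       shadow=False, renderer=p.ER_BULLET_HARDWARE_OPENGL)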
Final Images:
With Paper.js, I try to subtract a path from a circle, but it is not working as expected. Here is my code:
// Create circle
var c1 = new Path.Circle(new Point(100, 70), 50);
c1.fillColor = 'red';
// Create path
var eraser = new paper.Path({strokeColor: 'black', strokeWidth: 20, strokeCap: 'round'});
eraser.add(new paper.Point(20, 20));
eraser.add(new paper.Point(100, 80));
eraser.add(new paper.Point(150, 150));
eraser.fillColor = 'white';
eraser.opacity = 0.6;
// Subtract
result = c1.subtract(eraser);
result.selected = true;
result.opacity = 0.8;
result.fillColor = 'pink';
It seems the path is treated as a polygon, not as lines, when subtracted:
Here is a jsFiddle : https://jsfiddle.net/Imabot/785ergpy/35/
Yes, this is because Paper.js does the boolean operation with the path's fill geometry, ignoring the stroke.
This is more obvious if you remove the stroke from your example (see this sketch).
What you need to do, if you want to subtract the stroke, is turn the stroke into a path first.
Unfortunately, Paper.js doesn't have this feature yet, even though it has been planned for a long time and exists as an experimental version (see this issue).
So you have to either use that experimental feature or use a vector drawing application like Adobe Illustrator and export your stroke as a path (as SVG, for example) before using it with Paper.js.
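To illustrate the idea of turning a stroke into fill geometry (not Paper.js; a Python sketch with Shapely, where buffer() expands the centerline by half the stroke width, with round caps by default):

from shapely.geometry import Point, LineString

circle = Point(100, 70).buffer(50)                       # filled circle of radius 50
stroke = LineString([(20, 20), (100, 80), (150, 150)])   # the eraser's centerline
stroke_as_fill = stroke.buffer(10)                       # half of strokeWidth 20
result = circle.difference(stroke_as_fill)               # now the stroke itself is subtracted
print(result.area)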
I have a sprite from a PNG file (the dimensions of the PNG are 432x10). The PNG file is in the drawable-xxhdpi folder. When I run on an emulator with hdpi density, mySprite.getWidth() returns 432 (mySprite.getWidthScaled() also returns 432), but the PNG appears only about 200 pixels wide. Which method gives the value I'm after: not the width of the PNG file, but the number of pixels at which it is actually displayed on the monitor? Thank you very much.
Note : My English is insufficient, sorry.
public Engine onLoadEngine() {
    ....
    SCR_WIDTH = getResources().getDisplayMetrics().widthPixels;
    SCR_HEIGHT = getResources().getDisplayMetrics().heightPixels;
    MyCamera = new Camera(0, 0, SCR_WIDTH, SCR_HEIGHT);
    ......
}
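One plausible explanation, though it depends on how the texture is loaded: Android density-scales drawables, and a 432 px asset in drawable-xxhdpi (about 480 dpi) shown on an hdpi screen (about 240 dpi) is drawn at half size. A quick check of the arithmetic:

asset_px = 432
asset_dpi = 480    # xxhdpi
screen_dpi = 240   # hdpi
displayed_px = asset_px * screen_dpi / asset_dpi
print(displayed_px)  # 216.0, close to the roughly 200 px observed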
Suppose I want to take one picture, move all of its pixels one pixel to the right and one pixel down, and save it. I tried this code:
my $image_file = "a.jpg";
my $im = GD::Image->newFromJpeg($image_file);
my ($width, $height) = $im->getBounds();
my $outim = new GD::Image($width, $height);
foreach my $x (1..$width)
{
    foreach my $y (1..$height)
    {
        my $index = $im->getPixel($x-1,$y-1);
        my ($r,$g,$b) = $im->rgb($index);
        my $color = $outim->colorAllocate($r,$g,$b);
        $outim->setPixel($x,$y,$color);
    }
}
# printing the picture...
That doesn't do the trick; it draws all pixels, except those in which x=0 or y=0, in one color. Where am I going wrong?
Look in the docs:
Images created by reading JPEG images will always be truecolor. To
force the image to be palette-based, pass a value of 0 in the optional
$truecolor argument.
It's not indexed. Try adding a ,0 to your newFromJpeg call: my $im = GD::Image->newFromJpeg($image_file, 0);
From the comments, it seems your next problem is the number of colors to allocate. By default, the indexed image is 8-bit, meaning a maximum of 256 unique colors (2^8 = 256). The "simple" workaround is of course to use a truecolor image instead, but that depends on whether you can accept truecolor output.
If not, your next challenge will be to come up with an "optimal" set of 256 colors that minimizes the visible defects in the image (see http://en.wikipedia.org/wiki/Color_quantization). That used to be a whole topic in itself that we seldom have to worry about today. If you still have to worry about it, you are probably better off offloading that job to a specialized tool like ImageMagick or similar, rather than trying to implement it yourself. Unless you like challenges, of course.
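For example, a minimal sketch of offloading quantization, here with Pillow in Python (median cut by default), just to show the shape of that workflow:

from PIL import Image

im = Image.open("a.jpg")            # truecolor source
indexed = im.quantize(colors=256)   # build an adaptive 256-color palette
indexed.save("a_indexed.png")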
Here's a solution using Imager because it's a very nice module and I'm more familiar with it, and it handles image transformations nicely.
use Imager;
my $image_file = "a.jpg";
my $src = Imager->new(file => $image_file) or die Imager->errstr;
my $dest = Imager->new(
xsize => $src->getwidth() + 1,
ysize => $src->getheight() + 1,
channels => $src->getchannels
);
$dest->paste(left => 1, top => 1, src => $src);
$dest->write(file => "b.jpg") or die $dest->errstr;
Try reversing the direction of x and y: not from 1 to max, but from max down to 1. Otherwise you are not sliding the pixels, just copying the same values over and over.
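That direction matters when reading and writing the same image in place: iterating from max down to 1 reads each source pixel before anything overwrites it. A sketch of the in-place variant, assuming px is a mutable 2-D pixel grid:

# Shift right and down by one pixel, in place, back to front
def shift_in_place(px, width, height):
    for x in range(width - 1, 0, -1):
        for y in range(height - 1, 0, -1):
            px[x][y] = px[x - 1][y - 1]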
I realize that this is an old post, but here is a piece of code, in the spirit of GD::Thumb, that I use for creating resized images.
sub png {
    my ($orig, $n) = (shift, shift);
    my ($ox, $oy) = $orig->getBounds();
    my $r = $ox > $oy ? $ox / $n : $oy / $n;
    my $thumb = GD::Image->new($ox/$r, $oy/$r);  # new() creates a blank canvas; newFromPng() would read a file
    $thumb->copyResized($orig, 0, 0, 0, 0, $ox/$r, $oy/$r, $ox, $oy);
    return $thumb, sprintf("%.0f", $ox/$r), sprintf("%.0f", $oy/$r);
}