I'm trying to make a script that takes an input image and fades it out. So far this is my script:
from PIL import Image as im

imgObject = im.open(imageName)
toAppend = []
for i in range(256):
    imgObject.putalpha(i)
    toAppend.append(imgObject)
    #imgObject.save('images/'+str(i)+'.png', 'PNG')
imgObject.save('finished.gif', save_all=True, append_images=toAppend)
When this is run, the output GIF is just a still of the input with no changes. But if I save each image as a PNG, the transparency works! It saves 255 different images where you can see it fade out. I've also tried stitching these photos together after the fact, but the same or similar problems occurred.
I've also tried this, this, this, and this, all producing the same effect.
I read more into the docs and found this:
imgObject = im.open(imageName)
bpiObject = im.open(backroundName)
toAppend = []
for i in range(100):
    imgObject = im.blend(imgObject, bpiObject, i/100)
    toAppend.append(imgObject)
imgObject.save('finished.gif', save_all=True, append_images=toAppend, loop=0)
This worked for me; the two images do have to have the same dimensions, though.
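For completeness, here is a minimal sketch of the original alpha-based idea that also produces a fade, assuming Pillow and a plain white backdrop (both assumptions). It works around two details: putalpha modifies the image in place, so each frame needs its own copy, and GIF only stores 1-bit transparency, so each frame has to be composited over a background before saving.
from PIL import Image

base = Image.open(imageName).convert('RGBA')
background = Image.new('RGBA', base.size, (255, 255, 255, 255))  # assumed white backdrop
frames = []
for alpha in range(255, -1, -5):          # step of 5 keeps the GIF small
    frame = base.copy()                   # copy, so putalpha does not overwrite earlier frames
    frame.putalpha(alpha)
    frames.append(Image.alpha_composite(background, frame).convert('RGB'))
frames[0].save('finished.gif', save_all=True, append_images=frames[1:],
               loop=0, duration=40)       # 40 ms per frame is an arbitrary choice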
I am creating a document using MATLAB's mlreportgen.dom.*.
I would like to be able to set the first and last page of the document to have no margins. This way I can get images to fit right across the page.
I am having difficulties with this; see the example code:
import mlreportgen.dom.*;
d = Document('myreport', 'pdf');
open(d);
currentLayout = d.CurrentPageLayout;
pdfheader = PDFPageHeader();
p = Paragraph('Sample Traffic Data in Austin');
p.Style = [p.Style, {HAlign('left'), Bold(true), FontSize('12pt')}];
append(pdfheader, p);
currentLayout.PageHeaders = pdfheader;
currentLayout.PageMargins.Gutter = '0.0in';
currentLayout.PageMargins.Left = '0.0in';
currentLayout.PageMargins.Right = '0.0in';
close(d);
rptview(d.OutputPath);
So far, I have naively tried to add a page break and redefine margins with no success. It appears to use the margins that come last in the document.
In pyBullet, I have struggled a bit with generating a dataset. What I want to achieve is to get pictures of what the camera is seeing:
img = p.getCameraImage(224, 224, renderer=p.ER_BULLET_HARDWARE_OPENGL)
Basically: to get the images that are seen in the Synthetic Camera RGB data and Synthetic Camera Depth data windows (especially the depth one), which are the camera windows you can see on the left in the following picture.
p.resetDebugVisualizerCamera(cameraDistance=0.5, cameraYaw=yaw, cameraPitch=pitch, cameraTargetPosition=[center_x, center_y, 0.785])
img = p.getCameraImage(224, 224, renderer=p.ER_BULLET_HARDWARE_OPENGL)
rgbBuffer = img[2]
depthBuffer = img[3]
list_of_rgbs.append(rgbBuffer)
list_of_depths.append(depthBuffer)
rgbim = Image.fromarray(rgbBuffer)
depim = Image.fromarray(depthBuffer)
rgbim.save('test_img/rgbtest'+str(counter)+'.jpg')
depim.save('test_img/depth'+str(counter)+'.tiff')
counter += 1
I already run the following, so I don't know if it is related to the settings:
p.configureDebugVisualizer(p.COV_ENABLE_DEPTH_BUFFER_PREVIEW, 1)
I have tried several methods because the depth part is complicated. I don't understand if it needs to be treated separately because of the pixel color information, or if I need to work with the projection matrices and view matrices.
I need to save it as a .tiff because I get some "cannot save F to png" errors. I tried playing a bit with the bit information but accomplished nothing. In case you asked, this is what I tried:
# depthBuffer[depthBuffer > 65535] = 65535
# im_uint16 = np.round(depthBuffer).astype(np.uint16)
# depthBuffer = im_uint16
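As a hedged side note: if img[3] is the normalized float depth buffer in [0, 1] (which is what pyBullet's getCameraImage is documented to return), the values need to be scaled up before the integer cast; otherwise np.round produces an all-zero image. A minimal sketch under that assumption:
import numpy as np
from PIL import Image

depth_array = np.reshape(img[3], (224, 224))             # float32, values in [0, 1]
depth_uint16 = (depth_array * 65535).astype(np.uint16)   # spread the range before casting
Image.fromarray(depth_uint16).save('test_img/depth' + str(counter) + '.png')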
The following is an example of the .tiff image:
And to end, just to remark that these depth images keep changing: looking at all of them, then at the RGB images, and then passing again to the depth images shows different images, despite being the same image. I have never seen anything like this before.
I thought "I managed to fix this some time ago, might as well post the answer found".
The data structure of img has to be taken into account!
import numpy as np
import matplotlib.pyplot as plt
from PIL import Image

IMG_SIZE = 224  # width/height passed to getCameraImage

img = p.getCameraImage(224, 224, shadow=False, renderer=p.ER_BULLET_HARDWARE_OPENGL)
rgb_opengl = np.reshape(img[2], (IMG_SIZE, IMG_SIZE, 4))                 # RGBA pixels
depth_buffer_opengl = np.reshape(img[3], [IMG_SIZE, IMG_SIZE])           # non-linear depth buffer in [0, 1]
depth_opengl = far * near / (far - (far - near) * depth_buffer_opengl)   # true depth; near/far must match the projection matrix
seg_opengl = np.reshape(img[4], [IMG_SIZE, IMG_SIZE]) * 1. / 255.        # segmentation mask
rgbim = Image.fromarray(rgb_opengl)
rgbim_no_alpha = rgbim.convert('RGB')                                    # JPEG cannot store the alpha channel
rgbim_no_alpha.save('dataset/'+obj_name+'/'+obj_name+'_rgb_'+str(counter)+'.jpg')
# plt.imshow(depth_buffer_opengl)
plt.imsave('dataset/'+obj_name+'/'+obj_name+'_depth_'+str(counter)+'.jpg', depth_buffer_opengl)
# plt.show()
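One thing the snippet above assumes is that near and far are the same clipping planes used to build the projection matrix. A minimal sketch of that setup, with hypothetical values for the clipping planes and field of view (the yaw, pitch and target come from the question's resetDebugVisualizerCamera call):
near, far = 0.01, 10.0   # hypothetical clipping planes; reuse them in the depth formula above
view_matrix = p.computeViewMatrixFromYawPitchRoll(cameraTargetPosition=[center_x, center_y, 0.785],
                                                  distance=0.5, yaw=yaw, pitch=pitch, roll=0, upAxisIndex=2)
proj_matrix = p.computeProjectionMatrixFOV(fov=60, aspect=1.0, nearVal=near, farVal=far)
img = p.getCameraImage(224, 224, viewMatrix=view_matrix, projectionMatrix=proj_matrix,
                       shadow=False, renderer=p.ER_BULLET_HARDWARE_OPENGL)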
Final Images:
I am cropping the images and performing some image processing tasks. After that I want to do OCR, and on running the OCR I am getting an error:
string = pytesseract.image_to_string(res, lang='eng', config=config)
The error is:
pytesseract.pytesseract.TesseractError: (255, '')
I expected the OCR result, but Tesseract throws this error and stops executing.
Sometimes the text may lie on the border of the images, so padding the images along the border resolves the issue.
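A minimal sketch of that padding step, assuming the preprocessing already uses OpenCV and that a 20-pixel white border is acceptable (both are assumptions, not part of the original answer):
import cv2
import pytesseract

# res is the preprocessed crop from the question
padded = cv2.copyMakeBorder(res, 20, 20, 20, 20, cv2.BORDER_CONSTANT, value=[255, 255, 255])
string = pytesseract.image_to_string(padded, lang='eng', config=config)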
I am trying to build a support structure around a cylinder in OpenSCAD, but I cannot seem to make the angled part of the structure "manifold".
inner_slide_tube_inner_radius=14.9/2;
leadpipe_wall_thickness=14.9/2;
leadpipe_length=200;
mouthpiece_receiver_large_radius=0.546*25.4/2;
NoSpokes = 4;
SpokesWide = 3;
SpokesHigh = 3;
SpokesLong = leadpipe_length/2*0.75;
SpokesLong2 = leadpipe_length/2;
//if I comment out this section, then I can render a single support angle part when NoSpokes=1
for (i=[1:NoSpokes])
rotate([0,0,360/NoSpokes*i])
translate([mouthpiece_receiver_large_radius+leadpipe_wall_thickness,-SpokesWide/2,0])
cube([SpokesLong, SpokesWide, SpokesHigh]);
//
for (i=[1:NoSpokes])
rotate([0,0,360/NoSpokes*i])polyhedron(
points=[
[mouthpiece_receiver_large_radius+SpokesLong+leadpipe_wall_thickness-SpokesHigh, -SpokesWide/2, SpokesHigh],
[mouthpiece_receiver_large_radius+SpokesLong+leadpipe_wall_thickness-SpokesHigh, SpokesWide/2, SpokesHigh],
[inner_slide_tube_inner_radius, SpokesWide/2, SpokesLong2],
[inner_slide_tube_inner_radius, -SpokesWide/2, SpokesLong2],
[mouthpiece_receiver_large_radius+SpokesLong+leadpipe_wall_thickness, -SpokesWide/2, SpokesHigh],
[mouthpiece_receiver_large_radius+SpokesLong+leadpipe_wall_thickness, SpokesWide/2, SpokesHigh],
[inner_slide_tube_inner_radius, SpokesWide/2, SpokesLong2+SpokesHigh],
[inner_slide_tube_inner_radius, -SpokesWide/2, SpokesLong2+SpokesHigh]],
faces=[[1,0,3,2],
[1,5,4,0],
[2,3,7,6],
[1,5,6,2],
[0,4,7,3],
[4,5,6,7]
]);
I know that this is a really naive question, but I am rather stuck, as I keep getting the warning WARNING: Object may not be a valid 2-manifold and may need repair!
Any help would be greatly appreciated to get rid of the warning.
The reason your design is not manifold is that some of your polygons don't have the correct winding order. In OpenSCAD, if you preview your design using F12 (Thrown Together), such wrongly wound polygons will be highlighted in pink. The fix is to reverse the vertex order of the offending faces so that every face lists its points clockwise when viewed from outside the solid.
Suppose I want to take one picture, move all of its pixels one pixel to the right and one pixel down, and save it. I tried this code:
my $image_file = "a.jpg";
my $im = GD::Image->newFromJpeg($image_file);
my ($width, $height) = $im->getBounds();
my $outim = new GD::Image($width, $height);
foreach my $x (1..$width)
{
foreach my $y (1..$height)
{
my $index = $im->getPixel($x-1,$y-1);
my ($r,$g,$b) = $im->rgb($index);
my $color = $outim->colorAllocate($r,$g,$b);
$outim->setPixel($x,$y,$color);
}
}
%printing the picture...
That doesn't do the trick; it draws all pixels, except those in which x=0 or y=0, in one color. Where am I going wrong?
Look in the docs:
Images created by reading JPEG images will always be truecolor. To
force the image to be palette-based, pass a value of 0 in the optional
$truecolor argument.
It's not indexed. Try adding a ,0 to your newFromJpeg call.
From the comments, it seems your next problem is the number of colors to allocate. By default, the indexed image is 8-bit, meaning a maximum number of 256 unique colors (2^8=256). The "simple" workaround is of course to use a truecolor image instead, but that depends on whether you can accept truecolor output.
If not, your next challenge will be to come up with "an optimal" set of 256 colors that will minimize the visible defects in the image itself (see http://en.wikipedia.org/wiki/Color_quantization). That used to be a whole topic in itself that we seldom have to worry about today. If you still have to worry about it, you are probably better off offloading that job to some specialized tool like ImageMagick or similar, rather than trying to implement it yourself. Unless you like challenges, of course.
Here's a solution using Imager because it's a very nice module and I'm more familiar with it, and it handles image transformations nicely.
use Imager;
my $image_file = "a.jpg";
my $src = Imager->new(file => $image_file) or die Imager->errstr;
my $dest = Imager->new(
xsize => $src->getwidth() + 1,
ysize => $src->getheight() + 1,
channels => $src->getchannels
);
$dest->paste(left => 1, top => 1, src => $src);
$dest->write(file => "b.jpg") or die $dest->errstr;
Try reversing the direction of x and y - not from 1 to max but from max to 1. You are not sliding the colors but copying the same again and again.
I realize that this is an old post, but this is a piece of code that I use with GD for creating resized thumbnail images.
sub png {
    my ($orig,$n) = (shift,shift);
    my ($ox,$oy) = $orig->getBounds();
    my $r = $ox>$oy ? $ox / $n : $oy / $n;         # scale factor so the longer side becomes $n
    my $thumb = GD::Image->new($ox/$r, $oy/$r);    # new empty image at the reduced size
    $thumb->copyResized($orig,0,0,0,0,$ox/$r,$oy/$r,$ox,$oy);
    return $thumb, sprintf("%.0f",$ox/$r), sprintf("%.0f",$oy/$r);
}