How can I prevent GD from running out of memory? (Perl)

I'm not sure if memory is the culprit here. I am trying to instantiate a GD image from data in memory (it previously came from a database). I try a call like this:
my $image = GD::Image->new($image_data);
$image comes back as undef. The POD for GD says the constructor returns undef in cases of insufficient memory, which is why I suspect memory.
The image data is in PNG format. The same thing happens if I call newFromPngData.
This works for very small images, like under 30K. However, slightly larger images, around 70K, cause the problem. I wouldn't think a 70K image should cause these problems, even after it is decompressed.
This script is running under CGI with Apache 2.0, on OS X 10.4, if that matters at all.
Are there any memory limitations imposed by Apache by default? Can they be increased?
Thanks for any insight!
EDIT: For clarification, the GD::Image object never gets created, so clearing out the $image_data from memory isn't really an option.

The GD library eats many bytes of memory per byte of compressed image size, because it holds the decoded pixels uncompressed; the ratio is well over 10:1!
When a user uploads an image to our system, we start by checking the file size before loading it into a GD image. If it's over a threshold (1 megabyte), we don't use it and instead report an error to the user.
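A minimal sketch of that guard in Perl (the 1 MB threshold and the error handling are illustrative, not our production code):

use GD;

my $MAX_BYTES = 1_000_000;    # ~1 MB cap on the compressed upload

sub load_image {
    my ($image_data) = @_;

    # Refuse anything over the threshold before GD inflates it in memory.
    if (length($image_data) > $MAX_BYTES) {
        die "Image too large to process safely\n";
    }

    my $image = GD::Image->new($image_data);
    die "GD could not parse the image (out of memory?)\n"
        unless defined $image;
    return $image;
}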
If we really cared we could dump it to disk, use the command line "convert" tool to rescale it to a sane size, then load the output into the GD library and remove the temporary file.
convert -define jpeg:size=800x800 tmpfile.jpg -thumbnail '800x800' -
This will scale the image so it fits within an 800 x 800 square; its longest edge is now 800px, which should load safely. The command above sends the shrunken JPEG to STDOUT. The jpeg:size= define tells convert not to bother holding the huge image in memory, but only enough of it to scale to 800 x 800.
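From Perl, one way to wire that up might look like the following sketch. The 'jpg:-' output specifier is my addition to force JPEG on STDOUT; 'tmpfile.jpg' is the temporary file mentioned above.

use GD;

# Let ImageMagick do the downscaling in its own process, then slurp
# the shrunken JPEG from the pipe.
open my $fh, '-|', 'convert', '-define', 'jpeg:size=800x800',
    'tmpfile.jpg', '-thumbnail', '800x800', 'jpg:-'
    or die "Cannot run convert: $!";
binmode $fh;
my $shrunk = do { local $/; <$fh> };
close $fh or die "convert exited abnormally: $?";

my $image = GD::Image->newFromJpegData($shrunk)
    or die "GD still could not load the shrunken image\n";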

I've run into the same problem a few times.
One of my solutions was simply to increase the amount of memory available to my scripts. The other was to free the source image buffer as soon as I was done with it:
Original Script:
$src_img = imagecreatefromstring($userfile2);
$dst_img = imagecreatetruecolor($thumb_width, $thumb_height); // destination must exist first
imagecopyresampled($dst_img, $src_img, 0, 0, 0, 0, $thumb_width, $thumb_height, $origw, $origh);
Edited Script:
$src_img = imagecreatefromstring($userfile2);
$dst_img = imagecreatetruecolor($thumb_width, $thumb_height);
imagecopyresampled($dst_img, $src_img, 0, 0, 0, 0, $thumb_width, $thumb_height, $origw, $origh);
imagedestroy($src_img); // free the full-size source as soon as it has been resampled
Freeing the memory of the full-size source image as soon as it had been resampled released enough to handle further processing.

Related

How to use "Easy edge trace" and "edge trace distances" in ImageJ?

I have already installed both plugins but don't know how to use them for pod analysis. I need help with that, as I don't have a programming background. Also, can they be used for batch processing of images, in case I have more than 100 images?
Another approach, via a specially coded ImageJ macro, gives reasonable estimates of the widths and lengths of all pods in the sample image. You can access the macro code from here. Unzip the zip archive and drop the file "plantPodDimensions.ijm" onto the ImageJ main window. Then open the sample image and run the macro. The estimated pod dimensions appear in a table.
Specimen [right to left]    Mean Pod Width [cm]    Pod Length [cm]
OHiI7_pod-1                 0.70±0.11              23.6
OHiI7_pod-2                 0.59±0.09              22.3
OHiI7_pod-3                 0.64±0.05              20.7
OHiI7_pod-4                 0.41±0.04              20.5
OHiI7_pod-5                 0.66±0.07              22.9
OHiI7_pod-6                 0.68±0.10              24.4
OHiI7_pod-7                 0.60±0.07              20.5
Of course, it couldn't be tested whether the macro works as expected for images other than the sample image.
Documents explaining the use of the plugins come with their downloads. Batch processing is possible if the starting points of the traces are known or can somehow be determined by additional pre-processing steps (not trivial). Both plugins are macro-recordable. In any case, batch processing will require some macro code.
For the use case in question, I would recommend performing the analyses via the GUI, not batch processing: coding a suitable macro would take more time than processing the 100+ images.

SB37 JCL error: too small a size?

The following space allocation is giving me an SB37 JCL error. The COBOL record size of the output file is 100 bytes and the LRECL is 100 bytes. What do you think is causing this error? I have tried increasing the size to 500,100 and still get the same error.
Code:
//OUTPUT1 DD DSN=A.B.C,DISP=(NEW,CATLG,DELETE),
// DCB=(LRECL=100,BLKSIZE=,RECFM=FBM),
// SPACE=(CYL,(10,5),RLSE)
Try to increase not only the space, but the volume as well.
Include VOL=(,,,#) in your DD, where # is the number of volumes you want to allocate.
Ex: SPACE=(CYL,(10,5),RLSE),VOL=(,,,3) - includes 3 volumes.
Additionally, you can increase the size, but try to stay within reasonable limits :)
The documentation for B37 says the application programmer should respond as indicated for message IEC030I. The documentation for IEC030I says, in part...
Probable user error. For all cases, allocate as many units as volumes
required.
...as noted in another answer. However, be advised that the documentation for the VOL parameter of the DD statement says...
If you omit the volume count or if you specify 1 through 5, the system
allows up to five volumes; if you specify 6 through 20, the system
allows 20 volumes; if you specify a count greater than 20, the system
allows 5 plus a multiple of 15 volumes. You can override the maximum
volume count in data class by using the volume-count subparameter. The
maximum volume count for an SMS-managed mountable tape data set or a
Non-managed tape data set is 255.
...so for DASD allocations you are best served specifying a volume count greater than 5 (at least).
//OUTPUT1 DD DSN=A.B.C,DISP=(NEW,CATLG,DELETE),
// DCB=(LRECL=100,BLKSIZE=,RECFM=FBM),
// SPACE=(CYL,(10,5),RLSE)
Try this instead. Notice that the large secondary allocation takes advantage of DSNTYPE=LARGE; without that parameter, the largest secondary that makes any sense is under 300. Oh, and if it is indeed written by a COBOL program, make sure that the FD says "BLOCK 0"! If it isn't "BLOCK 0" then you might not even need to change your JCL, because the file wasn't actually fixed block: it was merely fixed and unblocked, so the space would almost never be enough. You may also wish to revisit why you have the M in the RECFM to begin with. Finally, notice that I took out the LRECL, the BLKSIZE, and the RECFM. That is because the FD in the COBOL program is all you need; repeating them in the JCL is not only redundant but dangerous, because any change must then be made in multiple places.
//OUTPUT1 DD DSN=A.B.C,DISP=(NEW,CATLG,DELETE),
// DSNTYPE=LARGE,UNIT=(SYSALLDA,59),
// SPACE=(CYL,(10,1000),RLSE)
There is a limit of 65,535 tracks per volume, so if you specify a SPACE that exceeds that limit, the system will simply ignore it.
You can increase this limit to 16,777,215 tracks by adding the DSNTYPE=LARGE parameter.
Or you can make the dataset multi-volume by adding VOL=(,,,3).
You can also use the DATACLAS=xxxx parameter here, but first you have to find a suitable one. The easy way is to contact your local storage team and ask for one. Or, if you are familiar with ISPF navigation, enter the ISMF;4 command to open a panel and use the parameters below before hitting Enter:
CDS Name . . . . . . 'ACTIVE'
Data Class Name . . *
It should produce a list of all available data classes. Find the one that suits you (has a large enough volume count and does not limit primary and secondary space).

three.js toDataURL PNG is black

I'm trying to grab a screenshot with renderer.domElement.toDataURL("image/png"), and save it to a file.
The image is the right size, but it's black.
I have preserveDrawingBuffer turned on.
I think I'm decoding and saving the file correctly, because when I hexdump it I can see the correct initial characters for the PNG format, as well as the IHDR and IDAT chunk headers. However the closing IEND is missing.
Any known issues here? Hints? Windows 7/Firefox up to date if it matters.
Thanks... (Sorry if this is dumb, I'm very new to three.js)
I had somewhat similar problems with Windows 7/Firefox: PNG data URLs would be randomly truncated, coming out much shorter than a successful PNG export, and trying to set that data URL as an image src resulted in an "Image corrupt" exception or something in FF. As little sense as it makes, setting a small window.setTimeout (10 ms) between rendering and getting the data URL helped in my case. Maybe Firefox needs a rest from the JS engine before it refreshes some internal canvas state or something... weird.
I switched to JPG format (smaller files => truncation less of an issue?) and still saw it not working. Then I tried this tip, which I found here:
If you want to save data that is derived from a Javascript
canvas.toDataURL() function, you have to convert blanks into plusses.
If you do not do that, the decoded data is corrupted:
<?php
$encodedData = str_replace(' ','+',$encodedData);
$decodedData = base64_decode($encodedData);
?>
This worked. Thanks, Mekal.
This tip seems to apply to JPGs only. I saw PNGs decoding correctly without the + replacement, and corruptly with it. I can use JPGs, so my immediate problem is solved. However, I never saw a PNG that wasn't black, even when it was decoded correctly and not truncated.
Kind of a lousy situation either way, I feel. What is up with the +'s? (Apparently, when the data URL is POSTed without URL-encoding, each + in the base64 payload is decoded as a space on the server; the str_replace above reverses that damage.)
A black texture is a sign that you did not indicate the texture needs to be updated.
Also, you do not need to use canvas.toDataURL(). You can pass in the canvas reference to the THREE.Texture object.
var canvas = document.getElementById('myCanvas'); // note: no '#' -- getElementById takes the bare id
var texture = new THREE.Texture(canvas);
texture.needsUpdate = true; // tell three.js to (re)upload the canvas contents to the GPU
// Now render the scene

Perl Image::Magick get in-memory contents

I'm using Image::Magick to modify my images. I am then using an HTTP::Request to send the image content to an API.
HTTP::Request has a content method which allows you to set the content for the request, but obviously this requires that you have the content in memory.
I know that I can read the content of the image into a variable by opening a file and reading it. However, since Image::Magick already has the content of the image in memory, is there any way that I can get it via my Image::Magick object? Thanks!
Image::Magick keeps the image in a custom format in memory to make it more simple and efficient to manipulate. It has to compress the data to JPEG, PNG, or whatever format you request before it is written to a file, so you cannot just access the image in memory as it stands.
However, the module's ImageToBlob method will provide an in-memory copy of the data it would have written to disk, to save you writing it out and reading it back again.
Note that it returns a list of images to allow for an object that contains more than one frame, so if you have only a single frame you must write
my @blobs = $image->ImageToBlob;
$request->content($blobs[0]);
or
my ($blob) = $image->ImageToBlob;
$request->content($blob);
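Putting it together, a rough end-to-end sketch (the input file, output format, and API URL are placeholders):

use strict;
use warnings;
use Image::Magick;
use HTTP::Request;
use LWP::UserAgent;

my $image = Image::Magick->new;
my $err = $image->Read('photo.jpg');          # placeholder input file
die "$err" if "$err";

$image->Set(magick => 'png');                 # format ImageToBlob should encode to
my ($blob) = $image->ImageToBlob;             # single frame assumed, as above

my $request = HTTP::Request->new(POST => 'https://api.example.com/upload');
$request->header('Content-Type' => 'image/png');
$request->content($blob);

my $response = LWP::UserAgent->new->request($request);
print $response->status_line, "\n";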

C#: Take Out Image Portion of JPEG to Backup Metadata?

This will be a little backwards from the typical approach.
I've used ExifTool for metadata manipulation before, but I really want to keep the best metadata backup I can before I make anything permanent.
What I want to do is remove the compressed image portion of a JPEG file to leave everything else intact. That's backing up EXIF, Makernotes, IPTC, XMP, etc whether at the beginning or end of the file.
What I've tried so far is to strip all metadata from a copy of the original JPEG and use it as a basis for deciding which bytes to take out of the original. After looking at the raw data, the stripped copy doesn't appear to be contiguous within the original. There may be some header information still remaining in the stripped version; I don't really know. Not a good way to do it, I suppose.
Are there any markers that will absolutely tell me where the compressed JPEG image data starts and ends? I understand that JPEG files have 0xFFD8 and 0xFFD9 to mark the start and end of the image, but have come to find out that metadata is actually between those markers.
I'm using C#.
Thank you.
To do this properly you need to fully parse the JPEG/JFIF format and discard anything you don't want. Metadata is all kept in APP segments, or in trailers after the JPEG EOI, so presumably you will toss everything else. Full parsing of JPEG/JFIF is not trivial; for that I refer you to the JPEG/JFIF specification.
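For illustration, a rough sketch of that segment walk (in Perl here, but the marker arithmetic ports directly to C#). It keeps the APPn and COM segments, stops at SOS (0xFFDA), and ignores trailers after the EOI, which, as noted above, can also hold metadata:

use strict;
use warnings;

# Return the raw metadata segments (APP0-APP15 and COM) of a JPEG,
# discarding everything that belongs to the compressed image itself.
sub extract_metadata_segments {
    my ($path) = @_;
    open my $fh, '<:raw', $path or die "open $path: $!";
    my $data = do { local $/; <$fh> };
    close $fh;

    die "Not a JPEG\n" unless substr($data, 0, 2) eq "\xFF\xD8";  # SOI

    my @kept;
    my $pos = 2;
    while ($pos + 4 <= length $data) {
        my ($ff, $marker) = unpack 'CC', substr($data, $pos, 2);
        last if $ff != 0xFF;          # lost sync; bail out of the sketch
        last if $marker == 0xDA;      # SOS: compressed scan data follows
        my $len = unpack 'n', substr($data, $pos + 2, 2);  # length includes itself
        if (($marker >= 0xE0 && $marker <= 0xEF) || $marker == 0xFE) {
            push @kept, substr($data, $pos, 2 + $len);     # APPn or COM
        }
        $pos += 2 + $len;
    }
    return @kept;
}

my @segments = extract_metadata_segments('backup-me.jpg');  # hypothetical file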
You can use the JpegSegmentReader class from my MetadataExtractor library to retrieve specific segments from a JPEG image.