three.js toDataURL PNG is black

I'm trying to grab a screenshot with renderer.domElement.toDataURL("image/png"), and save it to a file.
The image is the right size, but it's black.
I have preserveDrawingBuffer turned on.
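Roughly, the capture code looks like this (a minimal sketch; the scene/camera setup is omitted and the names are illustrative):

var renderer = new THREE.WebGLRenderer({ preserveDrawingBuffer: true });
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);

// ... build the scene and camera, then:
renderer.render(scene, camera);
var dataURL = renderer.domElement.toDataURL("image/png");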
I think I'm decoding and saving the file correctly, because when I hexdump it I can see the correct initial characters for the PNG format, as well as the IHDR and IDAT chunk headers. However the closing IEND is missing.
Any known issues here? Hints? Windows 7/Firefox up to date if it matters.
Thanks... (Sorry if this is dumb, I'm very new to three.js)

I had somewhat similar problems with Windows 7/Firefox. PNG data URLs would be randomly truncated or something, much shorter than a successful PNG export. Trying to set that data URL as an image src resulted in an "Image corrupt" exception or something like it in FF. As little sense as it makes, setting a small window.setTimeout (10 ms) between rendering and getting the data URL helped in my case. Maybe Firefox needs a rest from the JS engine before it refreshes some internal canvas state or something. Weird.
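Something like this is what I mean (a sketch of the workaround, assuming the usual renderer/scene/camera variables; 10 ms is just what happened to work for me):

renderer.render(scene, camera);
window.setTimeout(function () {
    var dataURL = renderer.domElement.toDataURL("image/png");
    // ... hand dataURL off to whatever saves or uploads it
}, 10);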

I switched to JPG format (smaller files => truncation less of an issue?) and still saw it not working. Then I tried this tip, which I found here:
If you want to save data that is derived from a JavaScript canvas.toDataURL() function, you have to convert blanks into plusses. If you do not do that, the decoded data is corrupted:
<?php
$encodedData = str_replace(' ','+',$encodedData);
$decodedData = base64_decode($encodedData);
?>
This worked. Thanks, Mekal.
This tip seems to apply to JPGs only. I saw PNGs decoding correctly without the + replacement, and corruptly with it. I can use JPGs so my personal problem is solved. However I never saw a PNG that wasn't black even when decoded correctly and not truncated.
Kind of a lousy situation either way, I feel like. What is up with the +'s?

A black texture is a sign that you did not indicate the texture needs to be updated.
Also, you do not need to use canvas.toDataURL(). You can pass in the canvas reference to the THREE.Texture object.
var canvas = document.getElementById('myCanvas'); // no '#' prefix with getElementById
var texture = new THREE.Texture(canvas);
texture.needsUpdate = true;
// Now render the scene
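For completeness, a small sketch of how such a canvas texture is typically used (material, geometry and loop names are illustrative, and scene/camera/renderer are assumed to exist); needsUpdate has to be set again whenever the 2D canvas is redrawn:

var material = new THREE.MeshBasicMaterial({ map: texture });
var mesh = new THREE.Mesh(new THREE.PlaneGeometry(2, 2), material);
scene.add(mesh);

function animate() {
    requestAnimationFrame(animate);
    texture.needsUpdate = true;   // refresh the texture if the canvas changed
    renderer.render(scene, camera);
}
animate();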

Related

Change brightness of markers in Flutter

I made PNGs for custom markers on my GoogleMap view. By using e.g.:
BitmapDescriptor bikeBlack = await BitmapDescriptor.fromAsset(const ImageConfiguration(), "assets/images/bike_black.png")
I obtain an object that I can use as a marker directly. However, I need to be able to change the brightness as well, for about 50 markers of 4 different types, during runtime. The only possible solution I have come up with so far is creating 1024 different PNGs. This will increase app size by about 2 MB, but it might be a lot of work to do.
I cannot really afford using await statements since they slow the app down considerably. But if I have to, I can force myself to live with that.
As far as I can tell, a marker icon has to be a BitmapDescriptor. But I cannot find a way to change the brightness of such a BitmapDescriptor.
I'm close to giving up and just writing a Python script that will generate the 1,024 PNGs for me. But there must be a nicer and more efficient solution. If you have one, please let me know.
[EDIT]:
I went with creating 1024 images. For anybody in the same situation, this is the script I used:
from PIL import Image, ImageEnhance

img = Image.open("../img.png")
enhancer = ImageEnhance.Brightness(img)
for i in range(256):
    img_output = enhancer.enhance(i / 255)
    img_output.save("img_{}.png".format(i), format="png")
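For anybody wiring this up on the Flutter side: a small, hypothetical helper that mirrors the fromAsset call from the question and simply picks one of the generated files by brightness index (the asset names and bundling are assumptions based on the script above):

import 'package:flutter/widgets.dart';
import 'package:google_maps_flutter/google_maps_flutter.dart';

// Assumes the generated files are bundled as assets named
// assets/images/bike_black_0.png ... assets/images/bike_black_255.png.
Future<BitmapDescriptor> bikeBlackIcon(int brightness) {
  return BitmapDescriptor.fromAsset(
      const ImageConfiguration(), "assets/images/bike_black_$brightness.png");
}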

Remove PdfImageOject from a PDF

I have thousands of PDFs generated from emails containing .png images (I am not the owner of the generator). For some reason, those PDFs are very, very slow to render with the imaging system I am using (I am not the developer of that system and may not change it).
If I use iTextSharp and implement an IRenderListener to count the images to be rendered, there are thousands per page (99% of them being only 1 or 2 pixels). But if I count the images in the resources of the PDF, there are only a few (~tens).
I am counting the images in the resources, per page, with the code below:
var dict = pdfReader.GetPageN(currentPage);
PdfDictionary res = (PdfDictionary)PdfReader.GetPdfObject(dict.Get(PdfName.RESOURCES));
PdfDictionary xobj = (PdfDictionary)PdfReader.GetPdfObject(res.Get(PdfName.XOBJECT));
if (xobj != null)
{
    foreach (PdfName name in xobj.Keys)
    {
        PdfObject obj = xobj.Get(name);
        if (obj.IsIndirect())
        {
            PdfDictionary tg = (PdfDictionary)PdfReader.GetPdfObject(obj);
            PdfName subtype = (PdfName)PdfReader.GetPdfObject(tg.Get(PdfName.SUBTYPE));
            if (PdfName.IMAGE.Equals(subtype))
            {
                Count++;
            }
        }
    }
}
And my IRenderListener looks like this:
class ImageRenderListener : IRenderListener
{
    public int Count;

    public void RenderImage(iTextSharp.text.pdf.parser.ImageRenderInfo renderInfo)
    {
        PdfImageObject image = renderInfo.GetImage();
        if (image == null) return;
        var refObj = renderInfo.GetRef();
        if (refObj == null)
            Count++; // but why no ref ??
        else
            Count++;
    }

    // Remaining IRenderListener members, not needed for counting images
    public void BeginTextBlock() { }
    public void EndTextBlock() { }
    public void RenderText(iTextSharp.text.pdf.parser.TextRenderInfo renderInfo) { }
}
I just started to learn about the PDF specification and iTextSharp this evening, to analyze my PDF and understand what could be wrong... If I am reading it correctly, many of the images to be rendered are not referencing a resource (refObj == null), and they are .png (image.streamContentType.FileExtension == "png"). So I think those are the images making the rendering so slow...
For testing purposes, I would like to delete those images from the PDF but can't find out how to proceed.
I only found code samples that remove images that are in the resources... but the images I want to delete are not :/
Is there any code sample somewhere to help me? I did google "iTextSharp remove object", etc., but there was nothing similar to my case :(
Let me start with the blunt observation that you have a shitty PDF.
The image you see when opening the PDF in a PDF viewer seems to be composed of several small 1- or 2-pixel images. Drawing these pixels one by one is suboptimal, no matter which imaging system you use: you are faced with a bad PDF.
In your first snippet, I see that you loop over all of the indirect objects stored in the XObject resources of each page in search of images. You count these images, which gives you the number of Image XObjects stored in the PDF. If you add up the Count values for all the pages, this number can be higher than the actual number of Image XObjects stored in the PDF, because you don't take into account that some images can be reused on different pages.
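For what it's worth, a rough, unverified sketch of counting each Image XObject only once, by remembering indirect object numbers (this assumes iTextSharp's PRIndirectReference and its Number property; the rest mirrors the snippet from the question):

using System.Collections.Generic;
using iTextSharp.text.pdf;

static int CountUniqueImageXObjects(PdfReader pdfReader)
{
    var seen = new HashSet<int>();
    int uniqueImages = 0;
    for (int page = 1; page <= pdfReader.NumberOfPages; page++)
    {
        PdfDictionary dict = pdfReader.GetPageN(page);
        PdfDictionary res = (PdfDictionary)PdfReader.GetPdfObject(dict.Get(PdfName.RESOURCES));
        PdfDictionary xobj = res == null
            ? null
            : (PdfDictionary)PdfReader.GetPdfObject(res.Get(PdfName.XOBJECT));
        if (xobj == null) continue;
        foreach (PdfName name in xobj.Keys)
        {
            PdfObject obj = xobj.Get(name);
            if (!obj.IsIndirect()) continue;
            // A reused Image XObject has the same object number on every page.
            int number = ((PRIndirectReference)obj).Number;
            if (!seen.Add(number)) continue;
            PdfDictionary tg = (PdfDictionary)PdfReader.GetPdfObject(obj);
            PdfName subtype = (PdfName)PdfReader.GetPdfObject(tg.Get(PdfName.SUBTYPE));
            if (PdfName.IMAGE.Equals(subtype))
                uniqueImages++;
        }
    }
    return uniqueImages;
}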
You do not count the inline images that are stored in the content streams. I'm biased: in the ISO committees for PDF, I'm on the side of the group of people saying that "inline images are evil" and "inline images should die". For now, we didn't succeed in getting rid of inline images, but we introduced some substantial limitations that should reduce the (ab)use of inline images in PDFs that conform to ISO-32000-2 (the PDF 2.0 spec that is due in 2016).
You've already discovered that your PDF has inline images. Those are the images where refObj == null. They are not stored as indirect objects; they are stored inline, in the content stream of the page. As you can imagine based on my feelings towards inline images, I consider your PDF a bad PDF for this reason too (although it does conform to ISO-32000-1).
The presence of inline images is a first explanation why you have a different image count: when you loop over the indirect objects you only find part of the images. When you parse the document for images, you also find the inline images.
A second explanation could be the fact that the Image XObject are used more than once. That's the whole point of not using inline images. For instance: if you have an image that represents a logo that needs to be repeated on every page, one could use inline images. That would be a bad idea: the same image bytes would be present in the PDF as many times as there are pages. One should use an Image XObject. In this case, the image bytes of the logo are stored only once in an indirect object. There's a reference to this object from every page, so that the image bytes are stored in the document only once. In a 10-page document, you can see 10 identical images on 10 pages, but when looking inside the document, you'll find only one image that is referenced from every page.
If you remove Image XObjects by removing the indirect objects containing the image stream objects, you have to be very careful: are you sure you're not corrupting your document? There's a reference to the Image XObject in the content stream of your page. This reference points to an entry in the /XObject entry of the page's /Resources, and that entry in turn references the stream object with the image bytes. If you remove that indirect object without removing the references (e.g. from the content stream), you break your PDF. Some viewers will ignore those errors, but at some point in time some tool (or somebody) is going to complain that your PDF is corrupt.
If you want to remove inline images, you have to parse all the content streams in your PDF: page content streams as well as Form XObject content streams. You have to rewrite all these streams and make sure all inline images are removed; that is, all objects that start with the BI operator (Begin Image) and end with the EI operator (End Image).
That's a task for a PDF specialist who knows both iTextSharp and ISO-32000-1 inside-out. The solution to your problem probably doesn't fit into an answering window on StackOverflow.
I'm the original author of iText. From a certain point of view, iText is like a sharp knife. A sharp knife is a very good tool that can be used for many good things. However, you can also seriously cut your fingers when you're not using the knife in a correct way. I hope you'll be careful and that you're not going to create a whole series of damaged PDF files.
For instance: you assume that some of the images in the PDF are PNGs because iText suggests storing them as PNG. However, PNG is not supported by ISO-32000-1, so your assumption that your PDF contains PNGs is wrong. I honestly worry when I see questions like yours.

JPEG encoder super slow, how to Optimize it?

I'm building an app with ActionScript 3.0 in Flash Builder. This is a follow-up to this question.
I need to upload the ByteArray to my server, but the function I use to convert the BitmapData to a ByteArray is super slow, so slow it freezes up my mobile device. My code is as follows:
var jpgenc:JPEGEncoder = new JPEGEncoder(50);
trace('encode');
// encode the BitmapData object and keep the encoded ByteArray
var imgByteArray:ByteArray = jpgenc.encode(bitmap);
temp2 = File.applicationStorageDirectory.resolvePath("snapshot.jpg");
var fs:FileStream = new FileStream();
trace('fs');
try {
    // open the file in write mode
    fs.open(temp2, FileMode.WRITE);
    // write bytes from the byte array
    fs.writeBytes(imgByteArray);
    // close the file
    fs.close();
} catch (e:Error) {
    trace(e.message);
}
Is there a different way to convert it to a byteArray? Is there a better way?
Try the blooddy library: http://www.blooddy.by . I didn't test it on mobile devices, though. Comment if you have success.
Use BitmapData.encode(); it's faster by orders of magnitude on mobile: http://help.adobe.com/en_US/FlashPlatform/reference/actionscript/3/flash/display/BitmapData.html#encode%28%29
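For example, something along these lines (Flash Player 11.3+ / AIR 3.3+, untested here; bitmap is assumed to be the same BitmapData as in the question):

import flash.display.BitmapData;
import flash.display.JPEGEncoderOptions;
import flash.utils.ByteArray;

// Encode the whole bitmap natively at quality 50; returns the JPEG bytes.
var imgByteArray:ByteArray = bitmap.encode(bitmap.rect, new JPEGEncoderOptions(50));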
You should try to find a JPEG encoder that is capable of encoding asynchronously. That way the app can still be used while the image is being compressed. I haven't tried any of the libraries, but this one looks promising:
http://segfaultlabs.com/devlogs/alchemy-asynchronous-jpeg-encoding-2
It uses Alchemy, which should make it faster than the JPEGEncoder from as3corelib (which I guess is the one you're using at the moment.)
A native JPEG encoder is ideal, asynchronous would be good, but possibly still slow (just not blocking). Another option:
var pixels:ByteArray = bitmapData.getPixels(bitmapData.rect);
pixels.compress();
I'm not sure of native performance, and performance definitely depends on what kind of images you have.
The answer from Ilya was what did it for me. I downloaded the library, and there is an example of how to use it inside. I have been working on getting the CameraUI in Flash Builder to take a picture, encode/compress it, then send it over via a web service to my server (the data was sent as a compressed byte array). I did this:
by.blooddy.crypto.image.JPEGEncoder.encode( bmp, 30 );
Where bmp is my BitmapData. The encode took under 3 seconds and was easily able to fit into my flow of control synchronously. I tried async methods, but they ultimately took a really long time and were difficult to track for things like a user moving from cell service to Wi-Fi, or from tower to tower, while an upload was going on.
Comment here if you need more details.

C#: Take Out Image Portion of JPEG to Backup Metadata?

This will be a little backwards from the typical approach.
I've used ExifTool for metadata manipulation before, but I really want to keep the best metadata backup I can before I make anything permanent.
What I want to do is remove the compressed image portion of a JPEG file and leave everything else intact. That is, backing up EXIF, makernotes, IPTC, XMP, etc., whether they sit at the beginning or the end of the file.
What I've tried so far is to strip all metadata from a copy of the original JPEG, and use it as a basis of what bytes will be taken out of the original. After looking at the raw data, it doesn't seem like the stripped copy is contiguous in the original copy. There may be some header information still remaining in the stripped version. I don't really know. Not a good way to do it, I suppose.
Are there any markers that will absolutely tell me where the compressed JPEG image data starts and ends? I understand that JPEG files have 0xFFD8 and 0xFFD9 to mark the start and end of the image, but have come to find out that metadata is actually between those markers.
I'm using C#.
Thank you.
To do this properly you need to fully parse the JPEG/JFIF format and discard anything you don't want. Metadata is all kept in APP segments or in trailers after the JPEG EOI, so presumably you will toss everything else. Full parsing of JPEG/JFIF is not trivial, and for the details I refer you to the JPEG/JFIF specification.
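To give an idea of what such a parse looks like, here is a rough, simplified sketch that keeps only the APPn and COM segments from the header area and stops at the first SOS; it ignores fill bytes, multi-scan files and trailers written after EOI, so treat it as a starting point rather than a solution:

using System.IO;

static class JpegMetadataSketch
{
    // Returns the SOI marker plus every APPn/COM segment that appears before the
    // first SOS marker. SOF, DQT, DHT, the scan data and EOI are all dropped.
    public static byte[] ExtractMetadataSegments(byte[] input)
    {
        if (input.Length < 2 || input[0] != 0xFF || input[1] != 0xD8)
            throw new InvalidDataException("Not a JPEG (missing SOI)");

        using (var output = new MemoryStream())
        {
            output.Write(input, 0, 2);              // keep SOI
            int pos = 2;
            while (pos + 4 <= input.Length && input[pos] == 0xFF)
            {
                byte marker = input[pos + 1];
                if (marker == 0xDA)                 // SOS: compressed image data starts here
                    break;
                int length = (input[pos + 2] << 8) | input[pos + 3]; // includes the 2 length bytes
                bool isMetadata = (marker >= 0xE0 && marker <= 0xEF) || marker == 0xFE; // APPn or COM
                if (isMetadata)
                    output.Write(input, pos, 2 + length);
                pos += 2 + length;
            }
            return output.ToArray();
        }
    }
}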
You can use the JpegSegmentReader class from my MetadataExtractor library to retrieve specific segments from a JPEG image.

How can I prevent GD from running out of memory?

I'm not sure if memory is the culprit here. I am trying to instantiate a GD image from data in memory (it previously came from a database). I try a call like this:
my $image = GD::Image->new($image_data);
$image comes back as undef. The POD for GD says that the constructor will return undef for cases of insufficient memory, so that's why I suspect memory.
The image data is in PNG format. The same thing happens if I call newFromPngData.
This works for very small images, like under 30K. However, slightly larger images, like ~70K, will cause the problem. I wouldn't think that a 70K image should cause these problems, even after it is decompressed.
This script is running under CGI through Apache 2.0, on OS X 10.4, if that matters at all.
Are there any memory limitations imposed by Apache by default? Can they be increased?
Thanks for any insight!
EDIT: For clarification, the GD::Image object never gets created, so clearing out the $image_data from memory isn't really an option.
The GD library eats many bytes of memory per byte of image size. It's well over a 10:1 ratio!
When a user uploads an image to our system, we start by checking the file size before loading it into a GD image. If it's over a threshold (1 Megabyte) we don't use it but instead report an error to the user.
If we really cared we could dump it to disk, use the command line "convert" tool to rescale it to a sane size, then load the output into the GD library and remove the temporary file.
convert -define jpeg:size=800x800 tmpfile.jpg -thumbnail '800x800' -
This will scale the image so it fits within an 800x800 square. Its longest edge is now 800px, which should load safely. The above command will send the shrunk .jpg to STDOUT. The jpeg:size= option should tell convert not to bother holding the huge image in memory, but just enough of it to scale down to 800x800.
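A minimal sketch of that size check in Perl, matching the GD::Image->new call from the question (the 1 MB threshold is arbitrary):

# Refuse to hand very large images to GD at all.
my $MAX_BYTES = 1_000_000;
die "Image too large to process\n" if length($image_data) > $MAX_BYTES;

my $image = GD::Image->new($image_data)
    or die "GD could not parse the image data\n";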
I've run into the same problem a few times.
One of my solutions was simply to increase the amount of memory available to my scripts. The other was to clear the buffer:
Original Script:
$src_img = imagecreatefromstring($userfile2);
imagecopyresampled($dst_img,$src_img,0,0,0,0,$thumb_width,$thumb_height,$origw,$origh);
Edited Script:
$src_img = imagecreatefromstring($userfile2);
imagecopyresampled($dst_img,$src_img,0,0,0,0,$thumb_width,$thumb_height,$origw,$origh);
imagedestroy($src_img);
Clearing out the memory used by the first source image freed up enough to handle more processing.
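For completeness, a fuller, hypothetical version of the same idea; note that $dst_img has to be created (for example with imagecreatetruecolor) before imagecopyresampled is called:

<?php
// $userfile2 holds the raw image bytes, as in the snippets above.
$src_img = imagecreatefromstring($userfile2);
$origw = imagesx($src_img);
$origh = imagesy($src_img);
$thumb_width = 200;
$thumb_height = (int) round($origh * $thumb_width / $origw);
$dst_img = imagecreatetruecolor($thumb_width, $thumb_height);
imagecopyresampled($dst_img, $src_img, 0, 0, 0, 0, $thumb_width, $thumb_height, $origw, $origh);
imagedestroy($src_img);   // free the full-size source as soon as possible
imagejpeg($dst_img, 'thumb.jpg', 85);
imagedestroy($dst_img);
?>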