I am developing a Qt application that loads pictures with PIL, modifies colors and alpha channels, then converts them to QImage.
Here is the problematic piece of code, the normal repeated usage of the ImageQt function:

# memory fills at around 7 MB/s
if __name__ == '__main__':
    while True:
        im = Image.open('einstein.png')  # small picture
        imQt = QtGui.QImage(ImageQt.ImageQt(im))  # convert to PySide.QtGui.QImage
        imQt.save('outtest.png')  # -> rendered picture is correct
        # del(imQt) and del(im) do not change anything
        time.sleep(0.02)
The problem here is the runaway memory growth, even though the picture should be reclaimed by the garbage collector. I called gc.collect() explicitly, but it did not change anything.
This example shows what happens with the ImageQt function, but in fact I noticed the problem is caused by QImage itself: if you repeatedly call the QImage constructor with raw data, the memory used by the Python process keeps increasing:

im = Image.open('mypic.png').convert('RGBA')
data = im.tostring('raw', 'RGBA')
qIm = QtGui.QImage(data, im.size[0], im.size[1], QtGui.QImage.Format_ARGB32)
qIm.save('myConvertedPic.png')  # -> picture is perfect
If you put this code in a loop, memory will increase just as in the first example. From there I am a bit lost, because this looks like a PySide problem...
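For reference, this is roughly what that loop looks like; the resource-module check at the end is just one Unix-only way to watch the process size and is an assumption of mine, not part of the original test:

import resource  # Unix-only; just one way to watch the process size

im = Image.open('mypic.png').convert('RGBA')
while True:
    data = im.tostring('raw', 'RGBA')
    qIm = QtGui.QImage(data, im.size[0], im.size[1], QtGui.QImage.Format_ARGB32)
    print(resource.getrusage(resource.RUSAGE_SELF).ru_maxrss)  # keeps growing
    time.sleep(0.02)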
I tried a workaround, but it does not work either:

# Workaround, but not working ...
if __name__ == '__main__':
    while True:
        im = Image.open('einstein.png')  # small picture
        imRGBA = im.convert('RGBA')  # convert to RGBA
        imRGBA.save('convtest.png')  # -> picture looks perfect
        imBytes = imRGBA.tostring('raw', 'RGBA')
        #print("size %d %d" % (imRGBA.size[0], imRGBA.size[1]))
        qImage = QtGui.QImage(imRGBA.size[0], imRGBA.size[1], QtGui.QImage.Format_ARGB32)  # create new empty picture
        qImage.fill(QtCore.Qt.blue)  # fill with blue, otherwise it catches pieces of the picture still in memory
        loaded = qImage.loadFromData(imBytes, 'RGBA')  # load from raw data
        print("success %d" % loaded)  # -> returns 0
        qImage.save('outtest.png')  # -> rendered picture is blue
        time.sleep(0.02)
I am really stuck here; could you help me find a solution based on this workaround? I would also like to discuss the QImage problem itself: is there any reliable way to free this memory? Could the fact that I am using Python 3.2 (32-bit) be part of the problem? Am I the only one seeing this?
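A hedged side note on why the workaround might be failing (my reading of the Qt docs, not verified against the memory issue): QImage.loadFromData expects encoded image data such as PNG or JPEG bytes, not raw pixel data, so passing the raw RGBA string with format 'RGBA' would return 0. A minimal sketch of that idea, encoding the PIL image to PNG in memory first:

import io

im = Image.open('einstein.png').convert('RGBA')
buf = io.BytesIO()
im.save(buf, format='PNG')  # encode to PNG bytes in memory
qImage = QtGui.QImage()
loaded = qImage.loadFromData(buf.getvalue(), 'PNG')  # loadFromData wants encoded data
print("success %d" % loaded)  # should print 1 if the PNG bytes were decoded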
The imports I am using, in case they matter:
import time
import sys
import PySide
sys.modules['PyQt4'] = PySide # this little hack solves a naming problem when using PIL with PySide (instead of PyQt4)
from PIL import Image, ImageQt
from PySide import QtCore,QtGui
After further unsuccessful searching, I noticed that the PIL function image.tostring(), combined with the QImage constructor, causes this problem:

im = Image.open('einstein.png').convert('RGBA')
data = im.tostring('raw', 'RGBA')  # the tostring() call is outside the loop
while True:
    imQt = QtGui.QImage(data, im.size[0], im.size[1], QtGui.QImage.Format_ARGB32)
    #imQt.save("testpic.png")  # image is valid
    time.sleep(0.01)
    # no memory problem!

I think I am really close to finding what is wrong, but I cannot point it out. It definitely has something to do with the data variable being held in memory.
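One way to probe that hypothesis (a diagnostic sketch based on my assumption that PySide keeps a Python reference to the buffer for each QImage, not a confirmed explanation) is to watch the reference count on data while QImage objects are created:

import sys

im = Image.open('einstein.png').convert('RGBA')
data = im.tostring('raw', 'RGBA')
print(sys.getrefcount(data))  # baseline
images = [QtGui.QImage(data, im.size[0], im.size[1], QtGui.QImage.Format_ARGB32)
          for _ in range(100)]
print(sys.getrefcount(data))  # climbs if each QImage holds a reference to data
del images
print(sys.getrefcount(data))  # should fall back if those references are released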
I'm working on reading a STEP file into my C++ application, translating it into OCCT shapes, and displaying them using VTK. Everything seems to be working fine EXCEPT the scale. The shape I'm importing is just under 20 mm in diameter according to a 3D viewer (which matches what it was when I exported it from my app), but when I import it, it displays as the expected shape but at just under 20 meters in diameter. The program uses meters internally; OCCT appears to be ignoring units during import.
My program runs the following on initialization:
STEPControl_Controller::Init();
Interface_Static::SetCVal("xstep.cascade.unit","M");
The import code is:
STEPControl_Reader rdr;
IFSelect_ReturnStatus ret = rdr.ReadFile(filname);
if (ret == IFSelect_RetDone)
{
    int navail = rdr.NbRootsForTransfer();
    debugPrint("Found ",std::to_string(navail)," roots available");
    int nroots = rdr.TransferRoots();
    debugPrint("Transferred ",std::to_string(nroots)," roots");
    TopoDS_Shape s = rdr.OneShape();
    // Process the returned shape for display after this...
}
I've tried changing the value of "xstep.cascade.unit", and it has no effect on the size of the imported shape. It works as expected when exporting OCCT shapes to a STEP file, but not importing. Is there some parameter or initialization step I'm missing? We're using OCCT version 7.6.0.
ADDENDUM: I did check inside the STEP file, and the units are recorded with lines like:
#68 = ( LENGTH_UNIT() NAMED_UNIT(*) SI_UNIT(.MILLI.,.METRE.) );
So it shouldn't be a matter of not knowing what it's translating from.
I made PNGs for custom markers on my GoogleMap view. By using e.g.:
BitmapDescriptor bikeBlack = await BitmapDescriptor.fromAsset(const ImageConfiguration(), "assets/images/bike_black.png")
I obtain an object that I can use as a marker directly. However, I also need to be able to change the brightness at runtime for about 50 markers of 4 different types. The only solution I have come up with so far is creating 1024 different PNGs. This would increase the app size by about 2 MB, but it might be a lot of work.
I cannot really afford to use await statements, since they slow the app down considerably. But if I have to, I can force myself to live with that.
As far as I can tell, a marker icon has to be a BitmapDescriptor. But I cannot find a way to change the brightness of such a BitmapDescriptor.
I'm close to giving up and just writing a Python script that will generate the 1,024 PNGs for me. But there must be a nicer, more efficient solution. If you have one, please let me know.
[EDIT]:
I went with creating the 1024 images. For anybody in the same situation, this is the script I used (run once per marker type, producing 256 brightness levels each):
from PIL import Image, ImageEnhance

img = Image.open("../img.png")
enhancer = ImageEnhance.Brightness(img)
for i in range(256):
    # brightness factor from 0.0 (black) up to 1.0 (original image)
    img_output = enhancer.enhance(i / 255)
    img_output.save("img_{}.png".format(i), format="png")
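For completeness, a batch variant of the same idea that covers all four marker types in one go (just a sketch; the source file names other than bike_black are hypothetical placeholders, not the real asset names):

from PIL import Image, ImageEnhance

# hypothetical marker file names; substitute the real assets
for name in ("bike_black", "bike_red", "bike_green", "bike_blue"):
    img = Image.open("../{}.png".format(name))
    enhancer = ImageEnhance.Brightness(img)
    for i in range(256):
        enhancer.enhance(i / 255).save("{}_{}.png".format(name, i), format="png")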
I am writing my code in a Jupyter notebook in VS Code. I am hoping to play some of the audio within my data set. However, when I execute the cell, the console reports no errors and produces the widget, but the widget displays 0:00 / 0:00 (see below), indicating there is no sound to play.
Below, I have listed two ways to reproduce the error.
I have acquired data from the hub data store. Looking specifically at the Spoken MNIST data set, I cannot get the data from the audio tensor to play:
import hub
from IPython.display import display, Audio
from ipywidgets import interactive
# Obtain the data using the hub module
ds = hub.load("hub://activeloop/spoken_mnist")
# Create widget
sample = ds.audio[0].numpy()
display(Audio(data=sample, rate = 8000, autoplay=True))
The second example is a test (copied from another post) that I ran to see if it was something wrong with the data or something wrong with my console, environment, etc.
# Same imports as shown above, plus numpy for the signal generation
import numpy as np

# Toy function to play beats in the notebook
def beat_freq(f1=220.0, f2=224.0):
    max_time = 5
    rate = 8000
    times = np.linspace(0, max_time, rate * max_time)
    signal = np.sin(2 * np.pi * f1 * times) + np.sin(2 * np.pi * f2 * times)
    display(Audio(data=signal, rate=rate))
    return signal

v = interactive(beat_freq, f1=(200.0, 300.0), f2=(200.0, 300.0))
display(v)
I believe that if something is wrong with the data (this is a well-known data set, so I doubt it), then only the second one will play. If it is something to do with the IDE or something else, then neither will work, which is the case now.
Apologies for the late reply! In the future, please tag the questions with activeloop so it's easier to sort through (or hit us up directly in community slack -> slack.activeloop.ai).
Regarding the Free Spoken Digit Dataset, I managed to track down the error in your usage of Activeloop Hub and the audio display.
Adding [:,0] to the 9th line fixes the display on Colab, since Audio expects one-dimensional data:
%matplotlib inline
import hub
from IPython.display import display, Audio
from ipywidgets import interactive
# Obtain the data using the hub module
ds = hub.load("hub://activeloop/spoken_mnist")
# Create widget
sample = ds.audio[0].numpy()[:,0]
display(Audio(data=sample, rate = 8000, autoplay=True))
(When we uploaded the dataset, we decided to store the audio as (N, C), where C is the number of channels, which happens to be 1 for this particular dataset. The extra dimension is not squeezed out automatically.)
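A slightly more defensive version of that indexing (just a sketch; it assumes the tensor is either already 1-D or laid out as (N, C) as described above):

sample = ds.audio[0].numpy()
if sample.ndim == 2:  # (N, C) layout: keep the first (and only) channel
    sample = sample[:, 0]
display(Audio(data=sample, rate=8000, autoplay=True))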
Regarding VS Code: the audio, unfortunately, would still not work (not because of us, but because of VS Code), but you can still try visualizing the Free Spoken Digit Dataset (you can play the audio there, too). Hopefully this addresses your needs!
Let us know if you have further questions.
Mikayel from Activeloop
I am having trouble taking the output of the PiCamera capture function (directed into a BytesIO stream) and opening it with the PIL library. Here is the code (based on the PiCamera basic examples):
# Imports needed for this snippet
import io
from time import sleep
from picamera import PiCamera
from PIL import Image

# Camera stuff
camera = PiCamera()
camera.resolution = (640, 480)
stream = io.BytesIO()
sleep(2)
try:
    for frame in camera.capture_continuous(stream, format="jpeg", use_video_port=True):
        frame.seek(0)
        image = Image.open(frame)  # THIS IS WHERE IT CRASHES
        # OTHER STUFF THAT IS NOT IMPORTANT GOES HERE
        frame.truncate(0)
finally:
    camera.close()
    stream.close()
The error is : PIL.UnidentifiedImageError: cannot identify image file <_io.BytesIO object at 0xaa01cf00>
Any help would be greatly appreciated :)
Have a nice day!
The problem is simple, but I am wondering why the io library works that way.
One needs to seek the stream back to 0 after truncating it, or seek to 0 and then call truncate with no parameter (all after you are done opening the image). Like so:
for frame in camera.capture_continuous(stream, format="jpeg", use_video_port=True):
    stream.seek(0)
    image = Image.open(stream)
    # Do stuff with image
    stream.seek(0)
    stream.truncate()
Basically, when you open the image and do some operation on it, the position of the BytesIO can move around and end up somewhere other than zero. After that, when you call truncate(0), it does not move the position back to zero as I thought it would (it seemed logical to me that the pointer would move back to where the truncation occurs). When the code runs once more, the capture writes into the stream, but this time it does not start writing at the beginning, and everything breaks after that.
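A tiny standalone sketch of that io behaviour, independent of the camera, just to illustrate how truncate() interacts with the stream position:

import io

stream = io.BytesIO()
stream.write(b"first frame")
stream.truncate(0)        # discards the data but leaves the position at 11
print(stream.tell())      # -> 11, not 0
stream.write(b"next")     # written at offset 11, with a zero-filled gap before it
print(stream.getvalue())  # -> b'\x00...\x00next'
stream.seek(0)
stream.truncate()         # truncating at the current position (0) resets cleanly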
Hope this can help someone in the future :)
I'm not sure if memory is the culprit here. I am trying to instantiate a GD image from data in memory (it previously came from a database). I try a call like this:
my $image = GD::Image->new($image_data);
$image comes back as undef. The POD for GD says that the constructor will return undef for cases of insufficient memory, so that's why I suspect memory.
The image data is in PNG format. The same thing happens if I call newFromPngData.
This works for very small images, like under 30K. However, slightly larger images, like ~70K, will cause the problem. I wouldn't think that a 70K image should cause these problems, even after it is decompressed.
This script is running under CGI through Apache 2.0, on OS X 10.4, if that matters at all.
Are there any memory limitations imposed by Apache by default? Can they be increased?
Thanks for any insight!
EDIT: For clarification, the GD::Image object never gets created, so clearing out the $image_data from memory isn't really an option.
The GD library eats many bytes of memory per byte of image size; it's well over a 10:1 ratio!
When a user uploads an image to our system, we start by checking the file size before loading it into a GD image. If it's over a threshold (1 Megabyte) we don't use it but instead report an error to the user.
If we really cared we could dump it to disk, use the command line "convert" tool to rescale it to a sane size, then load the output into the GD library and remove the temporary file.
convert -define jpeg:size=800x800 tmpfile.jpg -thumbnail '800x800' -
This will scale the image so it fits within an 800 x 800 square. Its longest edge is now 800px, which should load safely. The above command sends the shrunk .jpg to STDOUT. The size= option should tell convert not to bother holding the huge image in memory, but only enough to scale to 800x800.
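For illustration, here is that pipeline wrapped in a small Python sketch (the original setup is Perl under CGI, so treat this wrapper as an assumption on my part; the convert invocation itself is the one shown above):

import subprocess

# Run convert and read the shrunk JPEG from its stdout
cmd = ["convert", "-define", "jpeg:size=800x800", "tmpfile.jpg",
       "-thumbnail", "800x800", "-"]
shrunk = subprocess.check_output(cmd)
# shrunk now holds JPEG bytes small enough to hand to the image library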
I've run into the same problem a few times.
One of my solutions was simply to increase the amount of memory available to my scripts. The other was to clear the buffer:
Original Script:
// $dst_img is assumed to have been created earlier, e.g. with imagecreatetruecolor()
$src_img = imagecreatefromstring($userfile2);
imagecopyresampled($dst_img, $src_img, 0, 0, 0, 0, $thumb_width, $thumb_height, $origw, $origh);
Edited Script:
// $dst_img is assumed to have been created earlier, e.g. with imagecreatetruecolor()
$src_img = imagecreatefromstring($userfile2);
imagecopyresampled($dst_img, $src_img, 0, 0, 0, 0, $thumb_width, $thumb_height, $origw, $origh);
imagedestroy($src_img);
By destroying the first $src_img and freeing its memory, enough was released to handle more processing.