Blank output for image subtraction when using raspicam library - raspberry-pi

I am using a Raspberry Pi with raspicam to run a project. I have downloaded the raspicam library from http://sourceforge.net/projects/raspicam/files/?
I am trying to run code for image subtraction but am not getting results. Here is my code:
raspicam::RaspiCam_Cv Camera;
cv::Mat frame1, frame2, frame3, d1, d2;
Camera.set(CV_CAP_PROP_FORMAT, CV_8UC1); // capture in 8-bit grayscale
if (!Camera.open())
{
    std::cerr << "cannot open camera" << std::endl;
}
Camera.grab();
Camera.retrieve(frame1);
Camera.grab();
Camera.retrieve(frame2);
Camera.grab();
Camera.retrieve(frame3);
while (true)
{
    frame1 = frame2;
    frame2 = frame3;
    Camera.grab();
    Camera.retrieve(frame3);
    absdiff(frame2, frame1, d1);
    imshow("result1", d1);
    absdiff(frame2, frame3, d2);
    imshow("result2", d2);
    waitKey(30); // give HighGUI a chance to draw the windows
}
When I run this code, result1 and result2 come up as blank frames. This is just a part of my code, so ignore anything I may have missed.

Well, inside your loop you have
frame1 = frame2;
...
absdiff(frame2, frame1, d1);
Keep in mind that cv::Mat assignment is a shallow copy: it copies the header, not the pixel data. After a couple of iterations frame1, frame2 and frame3 all end up sharing the same buffer, so each absdiff compares a frame with itself and the result is identically zero. Take deep copies with clone() (or copyTo()) instead.
Also, have you considered the timing here? You're grabbing images very close together in the time domain, so naturally they'll be mostly identical (bar noise and fast motion), and the difference will be near-zero.
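A sketch of how the loop could be restructured with clone() (same variable and window names as in your snippet; the waitKey() call is also needed for the imshow windows to refresh):
while (true)
{
    frame1 = frame2.clone();   // deep copies: the Mats no longer share pixel buffers
    frame2 = frame3.clone();
    Camera.grab();
    Camera.retrieve(frame3);
    absdiff(frame2, frame1, d1);
    imshow("result1", d1);
    absdiff(frame2, frame3, d2);
    imshow("result2", d2);
    waitKey(30);               // let HighGUI draw, and throttle the loop a little
}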
Cheers,

Related

Reading STEP file into Open Cascade loses size information

I'm working on reading a STEP file into my C++ application, translating it into OCCT shapes, and displaying them using VTK. Everything seems to be working fine EXCEPT the scale. The shape I'm importing is just under 20 mm in diameter according to a 3D viewer (which matches what it was when I exported it from my app), but when I import it, it displays as the expected shape at just under 20 meters in diameter. The program uses meters internally; OCCT appears to be ignoring units during import.
My program runs the following on initialization:
STEPControl_Controller::Init();
Interface_Static::SetCVal("xstep.cascade.unit","M");
The import code is:
STEPControl_Reader rdr;
IFSelect_ReturnStatus ret = rdr.ReadFile(filename);
if (ret == IFSelect_RetDone)
{
    int navail = rdr.NbRootsForTransfer();
    debugPrint("Found ", std::to_string(navail), " roots available");
    int nroots = rdr.TransferRoots();
    debugPrint("Transferred ", std::to_string(nroots), " roots");
    TopoDS_Shape s = rdr.OneShape();
    // Process the returned shape for display after this...
}
I've tried changing the value of "xstep.cascade.unit", and it has no effect on the size of the imported shape. Exporting OCCT shapes to a STEP file works as expected; importing does not. Is there some parameter or initialization step I'm missing? We're using OCCT version 7.6.0.
ADDENDUM: I did check inside the STEP file, and the units are recorded with lines like:
#68 = ( LENGTH_UNIT() NAMED_UNIT(*) SI_UNIT(.MILLI.,.METRE.) );
So it shouldn't be a matter of not knowing what it's translating from.

Cannot identify image file io.BytesIO on raspberry Pi using PiCamera library and PIL

I am having trouble using the output of the PiCamera capture function (directed into a BytesIO stream) and opening it with the PIL library. Here is the code (based on the PiCamera basic examples):
import io
from time import sleep
from picamera import PiCamera
from PIL import Image

# Camera stuff
camera = PiCamera()
camera.resolution = (640, 480)
stream = io.BytesIO()
sleep(2)
try:
    for frame in camera.capture_continuous(stream, format="jpeg", use_video_port=True):
        frame.seek(0)
        image = Image.open(frame)  # THIS IS WHERE IT CRASHES
        # OTHER STUFF THAT IS NON IMPORTANT GOES HERE
        frame.truncate(0)
finally:
    camera.close()
    stream.close()
The error is: PIL.UnidentifiedImageError: cannot identify image file <_io.BytesIO object at 0xaa01cf00>
Any help would be greatly appreciated :)
Have a nice day!
The problem is simple, though I am still wondering why the io library works that way.
You simply need to seek the stream back to 0 after truncating it, or seek to 0 and then call truncate() with no argument (in both cases after you are done opening the image). Like so:
for frame in camera.capture_continuous(stream, format="jpeg", use_video_port=True):
    stream.seek(0)
    image = Image.open(stream)
    # Do stuff with image
    stream.seek(0)
    stream.truncate()
Basically, when you open the image and do some operations on it, the position of the BytesIO pointer can move around and end up somewhere other than zero. When you then call truncate(0), it does not move the pointer back to zero as I assumed it would (it seemed logical to me that truncation would move the pointer to where the truncation occurred). When the code runs once more, the capture writes into the stream, but this time it does not start writing at the beginning, and everything breaks after that.
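To see the behaviour in isolation, here is a minimal standalone demonstration of truncate() leaving the position untouched (plain Python, no camera needed):
import io

b = io.BytesIO(b"hello world")
b.seek(5)
b.truncate(0)        # empties the buffer...
print(b.tell())      # ...but the position is still 5
b.write(b"JPEG")     # so this write lands at offset 5, zero-padding bytes 0-4
print(b.getvalue())  # b'\x00\x00\x00\x00\x00JPEG' - garbage before the real data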
Hope this can help someone in the future :)

gem5 cache statistics - reset and dump

I am trying to get familiar with the gem5 simulator.
To start, I wrote a simple program:
#include "m5op.h" // from gem5's util/m5; declares the m5 pseudo-op wrappers

int main()
{
    m5_reset_stats(0, 0);
    m5_dump_stats(0, 0);
    return 0;
}
I compiled it with util/m5/m5op_x86.S and ran it using...
./build/X86/gem5.opt configs/example/se.py --caches -c ~/tmp/hello
The m5out/stats.txt shows (among other things)...
system.cpu.dcache.ReadReq_hits::total 881
system.cpu.dcache.WriteReq_hits::total 917
system.cpu.dcache.ReadReq_misses::total 54
system.cpu.dcache.WriteReq_misses::total 42
Why is an essentially empty program showing so many hits and misses? Are the hits and misses caused by libc? If so, what is the purpose of m5_reset_stats() and m5_dump_stats()?
I would check whether the stats.txt file contains two chunks delimited by
---------- Begin Simulation Statistics ----------
---------- End Simulation Statistics ----------
because, as you explained it, the simulator is supposed to dump the stats at dump_stats(0,0) and again at the end of the run. So it seems that either you are looking at one of those two intervals (and I would expect the other interval to have 0 for all stats), or there was a bug in the simulation and the dump_stats() (or reset_stats()) didn't actually do anything. That has actually happened to me plenty of times, but I am not really sure of the source of the bug.
If you want to troubleshoot further, you could do the following:
Look at the disassembly of your code and find the reset_stats.w and dump_stats.w instructions.
Dump a trace from gem5 and see whether it actually executes the dump and reset instructions, and also which instructions (and how many) are executed before/after them (a sample command is sketched below).
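For the second step, something along these lines should work; note that the exact flag names vary between gem5 versions (older builds used --trace-flags instead of --debug-flags), so treat this as a sketch:
./build/X86/gem5.opt --debug-flags=Exec --debug-file=exec.trace \
    configs/example/se.py --caches -c ~/tmp/hello
This writes an instruction-by-instruction trace to m5out/exec.trace, where you can search for the m5 magic instructions and see what executes around them.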
Hope this helps!

GPUImageMovieWriter frame presentationTime

I have a GPUImageColorDodgeBlend filter with two inputs connected:
A GPUImageVideoCamera which is getting frames from the iPhone video camera.
A GPUImageMovie which is an (MP4) video file that I want to have laid over the live camera feed.
The GPUImageColorDodgeBlend is then connected to two outputs:
A GPUImageImageView to provide a live preview of the blend in action.
A GPUImageMovieWriter to write the movie to storage once a record button is pressed.
Now, before the video starts recording, everything works OK 100% of the time. The GPUImageMovie is blended over the live camera video fine, and no issues or warnings are reported.
However, when the GPUImageMovieWriter starts recording, things start to go wrong randomly. About 80-90% of the time, the GPUImageMovieWriter works perfectly: there are no errors or warnings, and the output video is written correctly.
However, about 10-20% of the time (and from what I can see, this is fairly random), things seem to go wrong during the recording process (although the on-screen preview continues to work fine).
Specifically, I start getting hundreds & hundreds of Program appending pixel buffer at time: errors.
This error originates from the - (void)newFrameReadyAtTime:(CMTime)frameTime atIndex:(NSInteger)textureIndex method in GPUImageMovieWriter.
This issue is triggered by problems with the frameTime values that are reported to this method.
From what I can see, the problem is caused by the writer sometimes receiving frames numbered by the video camera (which tend to have extremely high time values like 64616612394291 with a timescale of 1000000000). But, then sometimes the writer gets frames numbered by the GPUImageMovie which are numbered much lower (like 200200 with a timescale of 30000).
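(Doing the arithmetic: 64616612394291 / 1000000000 ≈ 64617 seconds, roughly 18 hours on the camera's clock, versus 200200 / 30000 ≈ 6.7 seconds into the movie -- the two sources clearly aren't stamping frames from the same origin or even the same scale.)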
It seems that GPUImageMovieWriter is happy as long as the frame times are increasing, but once a frame time decreases, it stops writing and just emits Program appending pixel buffer at time: errors.
I seem to be doing something fairly common, and this hasn't been reported anywhere as a bug, so my questions are (answers to any or all of these are appreciated -- they don't all need to necessarily be answered sequentially as separate questions):
Where do the frameTime values come from -- why does it seem so arbitrary whether the frameTime is numbered according to the GPUImageVideoCamera source or the GPUImageMovie source? Why does it alternate between the two -- shouldn't the frame numbering scheme be uniform across all frames?
Am I correct in thinking that this issue is caused by non-increasing frameTimes?
...if so, why does GPUImageView accept and display the frameTimes just fine on the screen 100% of the time, yet GPUImageMovieWriter requires them to be ordered?
...and if so, how can I ensure that the frameTimes that come in are valid? I tried adding if (frameTime.value < previousFrameTime.value) return; to skip any lesser-numbered frames which works -- most of the time. Unfortunately, when I set playsAtActualSpeed on the GPUImageMovie this tends to become far less effective as all the frames end up getting skipped after a certain point.
...or perhaps this is a bug, in which case I'll need to report it on GitHub -- but I'd be interested to know if there's something I've overlooked here in how the frameTimes work.
I've found a potential solution to this issue, which I've implemented as a hack for now, but could conceivably be extended to a proper solution.
I've traced the source of the timing back to GPUImageTwoInputFilter which essentially multiplexes the two input sources into a single output of frames.
In the method - (void)newFrameReadyAtTime:(CMTime)frameTime atIndex:(NSInteger)textureIndex, the filter waits until it has collected a frame from the first source (textureInput == 0) and the second, and then forwards on these frames to its targets.
The problem (the way I see it) is that the method simply uses the frameTime of whichever frame comes in second (excluding the case of still images, for which CMTIME_IS_INDEFINITE(frameTime) == YES, which I'm not considering for now because I don't work with still images), and that may not always be the same input (for whatever reason).
The relevant code which checks for both frames and sends them on for processing is as follows:
if ((hasReceivedFirstFrame && hasReceivedSecondFrame) || updatedMovieFrameOppositeStillImage)
{
    [super newFrameReadyAtTime:frameTime atIndex:0]; // this line has the problem
    hasReceivedFirstFrame = NO;
    hasReceivedSecondFrame = NO;
}
What I've done is adjust the above code to [super newFrameReadyAtTime:firstFrameTime atIndex:0] so that it always uses the frameTime from the first input and totally ignores the frameTime from the second input. So far, it's all working fine like this. (I'd still be interested for someone to let me know why this is written this way, given that GPUImageMovieWriter seems to insist on increasing frameTimes, which the method as-is doesn't guarantee.)
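For clarity, the adjusted block looks like this (firstFrameTime is the timestamp captured when the first input's frame arrives; my copy of GPUImageTwoInputFilter already has it as an ivar, but check yours):
if ((hasReceivedFirstFrame && hasReceivedSecondFrame) || updatedMovieFrameOppositeStillImage)
{
    // Always stamp the merged frame with the first input's time so the
    // sequence handed downstream keeps increasing monotonically.
    [super newFrameReadyAtTime:firstFrameTime atIndex:0];
    hasReceivedFirstFrame = NO;
    hasReceivedSecondFrame = NO;
}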
Caveat: This will almost certainly break entirely if you work only with still images, in which case you will have CMTIME_IS_INDEFINITE(frameTime) == YES for your first input's frameTime.

FMOD runs out of channels, FMOD_CHANNEL_FREE seems to not to work

I am initializing FMOD with 32 channels and playing short samples (1 second) with the following code:
result = system->init(32, FMOD_INIT_NORMAL , NULL);
// here I load the sounds //
result = system->playSound(FMOD_CHANNEL_FREE, grid[_sound], false, &channel);
It works as intended, overlapping sounds, but I've now realized that once I have played 32 samples in total (not at the same time), only one sound can be played at a time. It looks like FMOD_CHANNEL_FREE behaves like an incremental counter: when it hits 32 it stays there, stopping the last sound while it's still playing in order to play the new one.
Do I have to remove sounds once they have stopped playing? How? I feel like I am missing something basic.
Thanks!
Marc
I had the same problem. Turns out that I forgot to call system->update() every frame. Once I put that in, it worked fine.
It sounds like the channels are still playing (but silent); can you check Channel::isPlaying and see if they are still going?
Perhaps post some more of your code if that doesn't help.
Can you verify that you are initializing the FMOD system with more than one max channel? Try the following code to init your FMOD system:
system->init(32, FMOD_INIT_NORMAL, 0);
Or perhaps you forgot to call
system->update();
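A minimal sketch of the pattern that works (using the question's system and grid names; running, shouldPlaySample and _sound are placeholders): the important part is that update() runs once per iteration of the main loop, which lets FMOD do its housekeeping and reclaim channels whose sounds have finished.
FMOD::Channel *channel = 0;
FMOD_RESULT result;
while (running)
{
    if (shouldPlaySample)
    {
        // FMOD_CHANNEL_FREE asks FMOD to pick any channel not currently in use
        result = system->playSound(FMOD_CHANNEL_FREE, grid[_sound], false, &channel);
    }
    result = system->update(); // once per frame: services the channel pool
}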