I'm having an issue with my GigE camera when doing image acquisition with MATLAB IMAQ. The camera is clearly connected, as indicated by the gigecamlist and gigecam function calls, but I always get black images, or you could say no image at all. I figured it might be an issue with the frame size. Also, my network adapter doesn't provide the option of enabling jumbo frames, so I'm not sure what I can do about that. The camera in question is a JAI Pulnix 1405-GE and I'm using MATLAB R2014b.
If any of you have a clue about what the problem might be, please share it with me. Any pointers on what I can do to solve this issue would be much appreciated.
Thank you
Did you try the JAI SDK software to see whether the camera works properly?
Once you are sure it works there, copy the parameters shown in the JAI SDK to your MATLAB camera object. Since your adapter doesn't support jumbo frames, also check that the camera's packet size is set to a value your network can carry (typically 1500 bytes or less); an oversized packet setting is a common cause of black frames with GigE cameras.
On a side note, GigE cameras work better with LabVIEW.
As a beginner in Unity, I'm looking for a solution to a rather annoying problem. I've been through a lot of videos and articles, but I still can't solve it.
In a blank Scene, I have a Canvas containing a Panel and an Image, and I'm trying to display this image correctly. It is a pixel-art image.
The problem is that it remains blurred and is laid out badly at different resolutions.
I'm trying to find a way to display it correctly while keeping its pixel-art look.
(I looked at pixel-perfect cameras, the stretch settings and so on, and I set the sprite parameters to Point and No compression, among other things. But nothing works.)
I don't know how to offer different zoom levels depending on the resolution without the image blurring.
If someone has a little time and could make a scene with just a camera, and therefore a canvas, a panel and an image, with settings good enough that I can understand my error, it would be a great help!
Thanks for reading!
The background picture:
Try selecting the image in the Assets and changing the Resize algorithm to Point (no filter).
Hop, I found the solution thanks to this thread:
Official - Pixel Perfect Preview Package
By looking carefully at the applied settings, I saw that it came from the resizing of the Canvas and the handling of the Camera.
Thanks for your messages !
And thanks to the author of the thread: rustum!
I have what is, in my opinion, a simple problem: disabling image detection on the AR Camera. My app detects an image from the image library and spawns an object, etc., all according to plan.
But the problem is that if I move the camera over another detectable image, it recognizes that one too. This is bad not because it spawns something additionally, but because you can "collect" the images in my app, so it unlocks the other detected image even though it shouldn't.
So how can I disable image detection without turning off the AR Camera?
So far I have tried simply disabling the "ARManager" and the "ARTrackedImageManager" scripts (.enabled = false), but that didn't solve my problem, because the app still detects other images.
I hope I have explained my question and problem properly. Any help is appreciated!
It really depends on what library you're using to detect the image. Generally, most marker tracking libraries will create a marker object in your Unity scene. You can disable these marker objects after you find one, and only leave the marker you're interested in. Make sure you also set the number of tracked images to 1 so you won't accidentally find two markers in one frame.
I'm doing a project that requires acquiring real-world coordinates from a camera. The first thing I need to do is calibrate the camera. I use the Camera Calibrator from the MATLAB toolbox, with about 40 samples for calibration. All the samples were taken with a Logitech C922. But after calibrating, the result seems wrong, as you can see in the image below.
It looks more distorted than the original image. I have also tried calibrating with OpenCV, but the result is the same. Does anyone know what is wrong and why this happens?
I'm sorry if these questions are really beginner level; camera calibration is very new to me and I was not able to find answers.
Thank you in advance!
Firstly, you really need to figure out what 'calibration' means.
It's clear that the picture is showing the undistorted image, since the lines on the chessboard and those in the background are quite straight. Without undistortion, the chessboard in the center would look squeezed in the radial direction. Find the 'show original' button in the bottom-left corner of your picture, click it, and compare the two images.
What this calibrator does is compute the intrinsic/extrinsic parameters and the distortion coefficients and, if you wish, undistort the pictures you gave it. It has already done its job.
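For intuition about what those distortion coefficients mean, here is a minimal numpy sketch of the standard radial (Brown-Conrady) distortion model, using hypothetical coefficient values rather than anything estimated from your camera: distortion pushes points along the radial direction by a factor that depends on their distance from the center, which is why straight edges appear bent before undistortion.

```python
import numpy as np

# Hypothetical radial distortion coefficients (k1, k2); real values
# come from the calibrator's estimated camera parameters.
k1, k2 = -0.12, 0.03

def distort(points):
    """Apply the radial distortion model to normalized image points."""
    r2 = np.sum(points**2, axis=1, keepdims=True)   # squared radius
    factor = 1 + k1 * r2 + k2 * r2**2               # radial scale factor
    return points * factor

# A straight vertical line of normalized points (constant x = 0.8)...
line = np.column_stack([np.full(5, 0.8), np.linspace(-0.8, 0.8, 5)])
curved = distort(line)

# ...is bent by distortion: the x-coordinates are no longer constant.
print(np.ptp(curved[:, 0]))  # spread of x after distortion (> 0)
```

Undistortion, which is what the calibrator applied to your picture, is the inverse of this mapping: it moves each pixel back so that such lines become straight again.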
I have a question about the Raspberry Pi camera. I am using OpenCV on a Raspberry Pi 2 to make a line follower for a robot.
Basically, the idea is to find the direction of a line in the image using derivatives and color segmentation.
However, I have found some strange behaviour when I compare the results from an ordinary PC webcam and the picam. The algorithm works well on the PC webcam, and the direction indicator sits spot on the line. On the picam there is a strange scale and offset that I don't understand.
On both platforms I have tried both cap.set(CV_CAP_PROP_FRAME_WIDTH/HEIGHT) to rescale the image and the resize function. Both still produce the strange offset. I use the circle(...) and line(...) functions in OpenCV to overlay the line and circles on the captured image.
Could anyone help explain this behaviour? See the links below for a visual comparison.
picam
webcam
Regards
I couldn't add the pictures directly because of Stack Exchange's policies, so I had to provide links instead.
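One generic thing worth checking in situations like this (a sketch with hypothetical resolutions, not the poster's actual code): if the line direction is estimated on a frame of one size but the overlay is drawn on a frame of another size, the drawing picks up exactly this kind of scale and offset, so coordinates must be rescaled between the two resolutions.

```python
# Hypothetical resolutions: analysis at 320x240, display at 640x480.
analysis_w, analysis_h = 320, 240
display_w, display_h = 640, 480

def to_display(x, y):
    """Rescale a point from analysis coordinates to display coordinates."""
    return x * display_w / analysis_w, y * display_h / analysis_h

# A point found at the center of the analysis frame...
cx, cy = to_display(160, 120)
# ...lands at the center of the display frame after rescaling.
print(cx, cy)
```

If the capture resolution requested via cap.set differs from what the camera driver actually delivers (drivers may silently substitute a supported mode), the same mismatch can appear even when the code looks consistent.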
I eventually discovered the solution to the problem: it involved changing the order of the taps of a derivative filter between the Windows and Linux versions of the program. Exactly why this is the case is a mystery to me; it may involve differences in compiler optimization (Visual Studio 2013 vs. g++ 4.6.3), or maybe a silly error on my part.
On the PC I use {1, 0, -1} filter taps; on the RPi 2 I have to use {-1, 0, 1} instead.
The filter runs on an S8 (-128..127) image, so there is no wraparound issue.
At any rate, I consider the issue closed.
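The tap-order fix can be illustrated with a small numpy sketch (on a hypothetical 1-D signal, not the original image data): reversing the taps {1, 0, -1} to {-1, 0, 1} simply negates the derivative, and convolution flips the kernel while correlation does not. So if one code path effectively correlates while the other convolves, the two tap orders produce identical output, which is one plausible source of this kind of platform difference.

```python
import numpy as np

signal = np.array([0, 1, 2, 4, 7, 11], dtype=np.int16)  # hypothetical ramp

taps_pc  = np.array([ 1, 0, -1])   # taps used on the PC
taps_rp2 = np.array([-1, 0,  1])   # taps needed on the RPi 2

# np.correlate slides the kernel without flipping it;
# np.convolve flips the kernel first.
corr_pc  = np.correlate(signal, taps_pc,  mode='valid')
corr_rp2 = np.correlate(signal, taps_rp2, mode='valid')
conv_pc  = np.convolve(signal,  taps_pc,  mode='valid')

print(corr_pc)    # derivative with one sign convention
print(corr_rp2)   # same magnitudes, opposite sign
print(conv_pc)    # convolving {1,0,-1} equals correlating {-1,0,1}
```

A flipped derivative sign would flip the estimated line direction, which could well show up as the kind of offset described above.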
See the picture below. It's a flash game from a well-known website :)
http://imageshack.us/photo/my-images/837/poolu.jpg/
I'd like to capture the images frame by frame using MATLAB, and then lengthen the line that extends from the 8 ball (the short one) so I can see exactly where it will go, and display another window in which the same pool table appears, but with longer lines for the paths :)
I know, or can easily find out, how to capture the screen and whatnot; the problem is that I'm not sure how to start detecting those lines, to see the direction they are heading in. Can anyone suggest an idea of how to accomplish this? Any image-processing techniques I could use to at least filter out everything except those lines?
Not sure where to even start looking, or for WHAT.
And yeah, it's a cheat, I know. But I've got programming skills, why not put them into practice? :D Help me out, people, it's a fun project :)
Thanks.
I would try using the Hough transform in the MATLAB Image Processing Toolbox.
EDIT1:
Basically, the Hough transform is a technique for detecting linear structures (lines) in an image.
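As a minimal illustration of the idea (a pure-numpy sketch on a synthetic binary image, not a captured game screenshot), each edge pixel votes for every line it could lie on, in (rho, theta) parameter space; peaks in the vote accumulator correspond to lines in the image:

```python
import numpy as np

# Synthetic 50x50 binary image containing a single diagonal line y = x.
img = np.zeros((50, 50), dtype=bool)
rr = np.arange(50)
img[rr, rr] = True

# Hough accumulator over angles theta and signed distances rho, using
# the line parameterization rho = x*cos(theta) + y*sin(theta).
thetas = np.deg2rad(np.arange(0, 180))
diag = int(np.ceil(np.hypot(*img.shape)))
accum = np.zeros((2 * diag, len(thetas)), dtype=int)

ys, xs = np.nonzero(img)
for x, y in zip(xs, ys):
    rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
    accum[rhos + diag, np.arange(len(thetas))] += 1

# The strongest peak recovers the line's parameters: for y = x,
# that is theta = 135 degrees and rho = 0.
rho_idx, theta_idx = np.unravel_index(np.argmax(accum), accum.shape)
rho = rho_idx - diag
theta_deg = np.degrees(thetas[theta_idx])
print(rho, theta_deg)
```

In MATLAB, the equivalent pipeline would be an edge detector followed by hough, houghpeaks, and houghlines; the aiming lines in the screenshot are thin and bright, so thresholding on color before the transform should isolate them well.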