I am trying to get mapping to work with ROS. I have a Raspberry Pi 3B+ with a Pi Camera v2 running ROS Noetic, using cv_camera_node to broadcast the camera images. I also have an Ubuntu 20.04.1 machine running ROS Noetic along with ORB-SLAM2. I was able to get the debug_image to display the grayscale image with the green lines, but I can't get it past the initialization stage. The debug image keeps saying "Trying to initialize", and the output of the ORB-SLAM launch file says "Map point vector is empty!" I was able to get it to work twice, but I don't know how. How do I properly calibrate it?
I have faced this issue before. I was able to get around this by applying this patch and trying different image sizes to see what worked best.
You need a lot of points, about 300, to get past this stage. Try pointing the camera at something with an intricate design, like a fan with an ornate blade guard, or a messy desk. That should give you close to the 300 points needed for initialization.
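If it helps, here is a minimal MATLAB sketch for checking offline whether a saved camera frame is feature-rich enough, assuming the Computer Vision Toolbox (detectORBFeatures is available from R2019a); the file name is an illustrative assumption:

% Rough check of scene texture for ORB-SLAM2 initialization (sketch only)
I = rgb2gray(imread('camera_frame.png'));   % a frame saved from the Pi camera
pts = detectORBFeatures(I);                 % same feature type ORB-SLAM2 uses
fprintf('Detected %d ORB features\n', pts.Count);
if pts.Count < 300
    disp('Scene is likely too plain; aim at something with more texture.');
end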
I have this quad in the 3D scene:
I need to get the local positions of all painted (non-transparent) pixels of this quad. I've already tried to use GetPixels() and filter the result by alpha value to get only the pixels with a valid color. But then I noticed that it isn't possible to get the pixels' local positions with this method, because it returns a Color array, which doesn't offer a way to retrieve that information. I've already tried to Google this and nothing came up; maybe the only way to get what I want is to build something at the shader level, but I don't know much about that subject either. I can offer more context to my question if needed, but I'm trying to keep things short here. Also, there's no code to show except for the wrong attempt using GetPixels(), which doesn't work for my case as far as I know.
Any help is appreciated!
I am building a GUI in MATLAB and I want to display point clouds in a figure inside this GUI. The GUI plays a 3D recording and enables me to pause/play, change speeds and change the video I am playing.
So far I've used pcplayer to display point clouds. For example:
player = pcplayer(xlimits, ylimits, zlimits, 'MarkerSize', 100);
view(player,point_cloud);
However, this opens a new figure. I've also tried using pcshow:
pcshow(point_cloud, 'Parent', axes_to_plot);
This worked, but only for the first frame of the video. Afterwards I receive an error:
Property assignment is not allowed when the object is empty. Use subscripted assignment to create an array element.
This is not a problem with the clouds I am trying to draw: they are not empty, and in addition, drawing the same cloud twice results in the same error. Something is happening there that I do not understand.
Does anyone know how to solve my problem?
I am using MATLAB version 2016a.
Hi, I have the same problem with the exact same error. It started when I began using MATLAB 2016b; I did not have this problem with 2015b. I don't have an exact solution, but what I did was wrap the pcshow call in a try/catch block to suppress the error. It works for me because pcshow draws the figure and then throws the error.
try
    % pcshow draws into the axes before it errors, so the plot still appears
    pcshow(point_cloud, 'Parent', axes_to_plot);
catch
    % Swallow the spurious error thrown after the figure is drawn
end
Again, this is not a solution that resolves the error, but it could make your code work; it did for mine.
I would like to use three Kinect v2 sensors running on three computers, and then gather their data on one computer (real-time 3D reconstruction using Unity3D). Is it possible to do so, and how? Thank you.
So what you're asking is very doable; it just takes a lot of work.
For reference, I'm referring to each frame of the 3D point cloud gathered by the Kinect as your image.
All you need is to set up a program on each of your Kinect computers that runs as a client. The remaining computer runs as the server, and the clients send it packets of images with some other data attached.
At a minimum, the extra data you'll need is each sensor's angle and position relative to an 'origin'.
For this to work properly, you need to be able to reference the data from all your Kinects against each other. The easiest way to do this is to pick a known point and measure each Kinect's distance from that point and the angle it is facing, relative to north and sea level.
Once you have all that data, you can take each image from each computer, rotate the point clouds using trigonometry, and then combine all the data (see the sketch below). Combining the data is something you'll have to play with, as there are loads of different ways to do it, and the right one will depend on your application.
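As a rough illustration, here is a minimal MATLAB sketch of rotating each sensor's cloud into a shared frame and merging, assuming the Computer Vision Toolbox; the variable names, poses, and merge grid size are illustrative assumptions, and the rotation convention may need adjusting for your setup:

% Sketch: align three Kinect clouds into one frame and merge them
clouds = {cloudA, cloudB, cloudC};    % pointCloud objects, one per Kinect
yawDeg = [0, 120, 240];               % measured facing angle of each sensor
pos    = [0 0 0; 2 0 0; 1 2 0];       % measured sensor positions (meters)

merged = [];
for k = 1:numel(clouds)
    a = deg2rad(yawDeg(k));
    R = [cos(a) -sin(a) 0;            % rotation about the vertical axis
         sin(a)  cos(a) 0;
         0       0      1];
    tform   = rigid3d(R, pos(k, :));  % rigid transform into the shared frame
    aligned = pctransform(clouds{k}, tform);
    if isempty(merged)
        merged = aligned;
    else
        merged = pcmerge(merged, aligned, 0.01);  % 1 cm grid to fuse points
    end
end
pcshow(merged);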
I am using this example from a "Computer Vision Made Easy" MATLAB webinar I watched, since I intend to use computer vision in my research to count cars and/or other types of vehicles.
Although I have changed some of the filter parameters and the detection works quite well, the problem is that the script displays ALL moving objects in the video. I would like to count vehicles on a specific road, but my video includes many roads (screenshot here).
1) Is there a way to define the area of the video in which I would like to detect cars? For example, only the "green arrow" road, leaving out the rest? I tried cropping the video, but it is not a good solution, since part of another road always appears (screenshot here).
2) Moreover, in which part of the code can I add a counter, so that I get an output of how many vehicles passed through the specific segment of road? Any ideas on that?
If you know ahead of time where the road is, you can create a binary mask image, where the road is marked with 1's, and everything else has the value of 0. Then you can simply check whether or not a moving object is inside your region of interest.
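As a rough sketch of both the mask and the counter, assuming MATLAB's Image Processing Toolbox; the polygon coordinates, the videoReader and centroids variables, and the counter logic are illustrative assumptions, not part of the webinar example:

% Sketch: build a road mask once, then test detections against it
frame    = read(videoReader, 1);                % any frame, just for sizing
roiX     = [100 400 420 120];                   % polygon around your road (px)
roiY     = [200 180 300 320];
roadMask = poly2mask(roiX, roiY, size(frame, 1), size(frame, 2));

vehicleCount = 0;
for i = 1:size(centroids, 1)                    % centroids: Nx2 [x y] of blobs
    c = round(centroids(i, :));
    if roadMask(c(2), c(1))                     % row index = y, column = x
        vehicleCount = vehicleCount + 1;        % naive per-frame count; use
    end                                         % tracking to avoid recounting
end

Note that a per-frame count like this will count the same car in consecutive frames; counting unique track IDs that cross the region avoids that.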
Once you get comfortable with this example, check out a more advanced version, which not only detects moving objects, but also tracks them using the Kalman filter.
I am attempting to do some face recognition and hallucination experiments and in order to get the best results, I first need to ensure all the facial images are aligned. I am using several thousand images for experimenting.
I have been scouring the Internet for the past few days and have found many different programs which claim to do so; however, due to MATLAB's poor backwards compatibility, many of them no longer work. I have tried several programs that fail to run because they call MATLAB functions which have since been removed.
The closest I found uses the SIFT algorithm; the code can be found here:
http://people.csail.mit.edu/celiu/ECCV2008/
It does help align the images, but unfortunately it also downsamples them, so the result ends up quite blurry, which would have a negative effect on any experiments I ran.
Does anyone have any MATLAB code samples, or could you point me in the right direction to code that actually aligns faces in a database?
Any help would be much appreciated.
Have a look at this recent work on Face Detection, Pose Estimation and Landmark Localization in the Wild. It has a working MATLAB implementation, and it is quite a good method.
Once you identify keypoints on all your faces you can morph them into a single reference and work from there.
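For instance, a minimal sketch of warping onto a reference, assuming you already have matched landmark coordinates; the point values, output size, and variable names are illustrative assumptions:

% Sketch: warp a face onto a reference using known landmark pairs
refPts  = [60 80; 120 80; 90 140];     % eyes and mouth in the reference face
facePts = [55 85; 118 78; 88 150];     % the same landmarks in the new face

% Similarity transform mapping the new face onto the reference layout
tform = fitgeotrans(facePts, refPts, 'nonreflectivesimilarity');

% Resample the face in the reference frame so all images line up
alignedFace = imwarp(faceImage, tform, 'OutputView', imref2d([180 180]));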
The easiest way is with PCA and its eigenvectors: find the X and Y directions that best represent the data, and you'll get the orientation of the face.
You can find an explanation in this document: PCA Alignment
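A minimal sketch of that idea, assuming you already have a binary mask of the face pixels; the mask and the final rotation offset are illustrative assumptions:

% Sketch: estimate face orientation from the principal axis of its pixels
[rows, cols] = find(faceMask);          % coordinates of face pixels
pts = [cols, rows];                     % Nx2 [x y] points
pts = pts - mean(pts, 1);               % center the data for PCA

[~, ~, V] = svd(pts, 'econ');           % columns of V are the principal axes
angleDeg = atan2d(V(2, 1), V(1, 1));    % direction of the dominant axis

rotated = imrotate(faceImage, angleDeg - 90);   % rotate the face upright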
Do you need to detect the faces first, or are they already cropped? If you need to detect the faces, you can use the vision.CascadeObjectDetector object in the Computer Vision System Toolbox.
To align the faces you can try the imregister function in the Image Processing Toolbox. Alternatively, you can use a feature-based approach. The Computer Vision System Toolbox includes a number of interest point detectors, feature descriptors, and a matchFeatures function to match the descriptors between a pair of images. You can then use the estimateGeometricTransform function to estimate an affine or even a projective transformation between two images. See this example for details.
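To make the feature-based route concrete, here is a minimal sketch, assuming the Computer Vision System Toolbox; the SURF detector choice and the file names are illustrative assumptions:

% Sketch: align one face to a reference via matched local features
fixed  = rgb2gray(imread('refFace.png'));    % hypothetical file names
moving = rgb2gray(imread('newFace.png'));

ptsF = detectSURFFeatures(fixed);            % interest points in each image
ptsM = detectSURFFeatures(moving);
[featF, validF] = extractFeatures(fixed,  ptsF);
[featM, validM] = extractFeatures(moving, ptsM);

pairs    = matchFeatures(featF, featM);      % descriptor matching
matchedF = validF(pairs(:, 1));
matchedM = validM(pairs(:, 2));

% Robustly estimate an affine transform from the matched points
tform   = estimateGeometricTransform(matchedM, matchedF, 'affine');
aligned = imwarp(moving, tform, 'OutputView', imref2d(size(fixed)));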