Find camera ID in SimpleCV? - raspberry-pi

I want to use two cameras with my Raspberry Pi, so I need to know the camera index number of each camera. What is the way to find the camera index numbers in Raspbian OS? (The index numbers are used to create Camera objects in SimpleCV.)

From what I can tell, referencing the book Practical Computer Vision with SimpleCV...
On Linux, all peripheral devices have a file created for them in the /dev directory. For cameras, the file names start with video and end with the camera ID, such as /dev/video0 and /dev/video1. The number at the end is the camera ID.
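As a quick check, a short Python snippet can list those device files before creating any Camera objects (a minimal sketch, assuming both cameras are attached and recognised by the OS):

import glob

# Each /dev/videoN file corresponds to camera ID N.
for device in sorted(glob.glob("/dev/video*")):
    print(device)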
And as far as telling which camera number goes to which device (also from the book)...
from SimpleCV import Camera
# First attached camera
cam0 = Camera(0)
# Second attached camera
cam1 = Camera(1)
# Show a picture from the first camera
img0 = cam0.getImage()
img0.drawText("I am Camera ID 0")
img0.show()
# Show a picture from the second camera
img1 = cam1.getImage()
img1.drawText("I am Camera ID 1")
img1.show()

Related

Unity AR Rotate Scene to match reference point

How do I match a reference point in 2 different AR scenes by position and rotation?
Here are some details about my project:
I have 2 scenes: "new scan" and "load scan". In the "new scan" scene I instantiate a 3D cube and make all the other points relative to it. This is my reference point. Then I instantiate some more points and finally save all the data to the device (my phone).
Next, in "load scan" I load the scene again and instantiate the cube in the exact same world position. For now, I have managed to set the right position for each point, but the axes are rotated because I start the scene from a different real-world location and with a different phone rotation.
Based on the cubes, which are instantiated in the same place, I need to match the rotation and position of the scene so the points appear in the same position relative to the first cube.
Note: one can assume that the cube will be instantiated with the user standing in the same direction as the desired position. But do NOT assume that the user starts the "load scan" scene facing the same direction as in the "new scan" scene (which affects the rotation of the whole scene).
Here is a visualization of the problem:
[Image of New Scan]
[Image of Load Scan]
Thanks
If you want to make sure that the cube will appear in the same position/rotation in every AR session, you have several options:
Use an image marker
Use ARWorldMap (iOS exclusive)
Use a cloud tracking solution (Google Cloud Anchors / Azure Spatial Anchors)
Of course, you can also try to make the user place the cube correctly themselves, or redesign your app to work without these restrictions.
So I've found a solution, but it is not the ultimate solution:
First, make a class with a public static field so we can pass it between other scripts and scenes. Something like this:
public static class SceneStage
{
    public static int ResetScene = 0;
}
Now, every time the camera turns on, check the SceneStage.ResetScene state. If the value == 0, don't do anything; otherwise, ask the user to stand facing the desired direction and then press a button, which calls the function ResetScene:
private void ResetScene(int _scene)
{
    var xrManagerSettings = UnityEngine.XR.Management.XRGeneralSettings.Instance.Manager;
    xrManagerSettings.DeinitializeLoader();
    SceneManager.LoadScene(_scene); // reload the current scene
    xrManagerSettings.InitializeLoaderSync();
}
Here I send the scene build index to the function with:
ResetScene(SceneManager.GetActiveScene().buildIndex);
So basically, the flow is like this: the first time we open the scene (when SceneStage.ResetScene == 1), change the value to 0 and reset the scene. The second time, don't do anything, but when we leave the scene, set the value back to 1 so the next scene will reset too (because the AR pose driver is still tracking the environment).

How to convert a position into starting point for app using unity and ARKit?

I am trying to develop an AR app. When the app is opened, the device location at that point is (0,0,0); that is, if I print or display my coordinates, they will be (0,0,0). I want to create a starting point, for example at the entrance of a door. Other users of my app can open it anywhere.
What I am trying to do: I have already placed an AR object at the entrance of the door. Users open the app at a random position, which becomes their starting point, and all the AR objects appear. When they pass through the AR object, I want the coordinates of their device to become (0,0,0). But if I run the code below in the Unity editor, it moves the camera to the position where the app started. I am looking to convert the entrance position into the app's starting point.
Camera.main.transform.position=new Vector3(0,0,0);
From what I understand, if we change the position of the camera in the app, it can show glitches.
Your question is very unclear -.-
but IF what you are trying to do is reset the camera to (0,0,0) while keeping all game objects at the same relative position to it, you could try:
var localToCamera = Camera.main.transform.worldToLocalMatrix;
// FindObjectsOfType replaces the deprecated FindSceneObjectsOfType
var objs = GameObject.FindObjectsOfType<GameObject>();
foreach (var go in objs)
{
    // Recompose each transform relative to the camera; Transform has no FromMatrix,
    // so decompose the matrix into position and rotation instead
    var m = localToCamera * go.transform.localToWorldMatrix;
    go.transform.position = m.GetColumn(3);
    go.transform.rotation = Quaternion.LookRotation(m.GetColumn(2), m.GetColumn(1));
}
EDIT:
what the UGLY code above is supposed to do:
it's going to crawl through all GameObjects and reposition them in such a way that, if you set the camera to position (0,0,0) facing (0,0,1), they will remain at the same position and orientation relative to the camera.
Notice that the camera itself WILL get repositioned to (0,0,0) facing (0,0,1) after this code is executed, because
Camera.main.transform.worldToLocalMatrix * localToWorldMatrix == identity
EDIT:
I can't put this code in the comments because it's too long.
try
// code from the top of my head here, syntax or function names might not be exact
var forward = arcamera.transform.forward;
forward.y = 0;
arcamera.transform.rotation = Quaternion.LookRotation(forward); // LookRotation, not LookDirection
// now the camera forward should be in the (x,z) plane
UnityARSessionNativeInterface.GetARSessionNativeInterface().SetWorldOrigin(arcamera.transform);
// since you did not tilt the horizontal plane, hopefully the plane detection still detects vertical and horizontal planes

Optical lens distance from an object

I am using a Raspberry Pi camera, and the problem at hand is how to find the best position for it in order to fully see an object.
The object looks like this:
The question is how to find the perfect position, given that the camera is placed at the centre of the above image. Ideally, the camera will capture the object only, as the idea is to get the camera as close as possible.
Take a picture with your camera, save it as a JPG, then open it in a viewer that allows you to inspect the EXIF header. If you are lucky you should see the focal length (in mm) and the sensor size. If the latter is missing, you can probably work it out from the sensor's spec sheet (see here to start). From the two quantities you can work out the angles of the field of view (HorizFOV = 2 * atan(0.5 * sensor_width / focal_length), VertFOV = 2 * atan(0.5 * sensor_height / focal_length)). From these angles you can derive an approximate distance from your subject that will keep it fully in view.
Note that these are only approximations. Nonlinear lens distortion will produce a slightly larger effective FOV, especially near the corners.
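As a worked example, here is a minimal Python sketch of that arithmetic. The focal length, sensor size, and object width below are placeholder assumptions, not values from a real spec sheet; substitute the numbers from your EXIF header:

import math

# Placeholder values; read the real ones from the EXIF header or spec sheet.
focal_length_mm = 3.6    # assumed focal length
sensor_width_mm = 3.76   # assumed sensor width
sensor_height_mm = 2.74  # assumed sensor height
object_width_mm = 100.0  # assumed width of the object to keep in view

# Full field-of-view angles (twice the half-angle).
horiz_fov = 2 * math.atan(0.5 * sensor_width_mm / focal_length_mm)
vert_fov = 2 * math.atan(0.5 * sensor_height_mm / focal_length_mm)

# Closest distance at which the object still fits in the horizontal FOV.
distance_mm = (0.5 * object_width_mm) / math.tan(horiz_fov / 2)

print(math.degrees(horiz_fov), math.degrees(vert_fov), distance_mm)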

How do I interpret an Intel Realsense camera depth map in MATLAB?

I was able to view and capture the image from the depth stream in MATLAB (using the webcam from the Hardware Support Package) from an F200 Intel Realsense camera. However, it does not look the same way as it does in the Camera Explorer.
[Image of what I see from MATLAB]
I have also linked Depth.mat that contains the image in the variable "D".
The image is returned as a 3-dimensional array of uint8. I assumed that each depth value is a larger number whose bits are split across the planes, so I tried bit-shifting each plane and adding it to the next while taking care of the datatypes (a sketch of this idea is shown below), then displayed the result using imagesc, but did not get a proper depth image.
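For illustration, here is a minimal sketch of that bit-combining idea in Python/NumPy rather than MATLAB; the plane order (low byte first) is an assumption, and the real layout of the stream may differ:

import numpy as np

# Hypothetical 3-plane uint8 image like the one described above.
img = np.zeros((480, 640, 3), dtype=np.uint8)

# Assume plane 0 holds the low byte and plane 1 the high byte of a 16-bit depth value.
low = img[:, :, 0].astype(np.uint16)
high = img[:, :, 1].astype(np.uint16)
depth = (high << 8) | low  # combined uint16 depth map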
How do I properly interpret this image? Or, is there an alternate way to capture images in MATLAB?

Matlab: accessing both image sequences from a 3D video file

I have recorded a 3D video using a Fujifilm FinePix REAL 3D W3 camera. The resulting video file is a single AVI, so the frames from both lenses must somehow be incorporated within the single file.
I now wish to read the video into MATLAB such that I have two image sequences, each corresponding to either the left lens or the right lens.
So far, I have played back the AVI file using the Computer Vision Toolbox functions (vision.VideoFileReader, etc.); however, it ignores one of the lenses and plays back only a single lens's image sequence.
How do I access both image sequences within Matlab?