Raspicam library's frame rate and image - raspberry-pi

I use the raspicam library from here. I can change the frame rate in the src/private/private_impl.cpp file. After setting the frame rate to 60, I do receive 60 fps, but the object size in the image changes. I attached two images: one captured at 30 fps and the other at 60 fps.
Why do objects appear bigger at 60 fps, and how can I get the normal object size (the same as at 30 fps)?
The first image was captured at 30 fps and the second at 60 fps.

According to the description here, the higher frame rate modes require cropping on the sensor for the 8-megapixel camera. At the default 30 fps the GPU code will have chosen the 1640x922 mode, which gives the full field of view (FOV). Exceed 40 fps and it will switch to the cropped 1280x720 mode. In either case the GPU will then resize the frame to the size you requested. Resize a smaller FOV up to the same output size and any object in the scene will occupy more pixels. The 5-megapixel camera can be used if no cropping is required.
So it is better to think of this in terms of field of view, zoom, or cropping, rather than the object size getting bigger.
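As a purely numerical illustration of that resizing effect (every number below, including the 64% crop fraction and the 640-pixel output width, is made up for the example, not taken from the camera documentation):

// Sketch: the GPU resizes whatever FOV the sensor mode captures to the
// requested output size, so a narrower FOV makes every object larger in
// the output image. All numbers here are hypothetical.
public class CropFactorDemo {
    public static void main(String[] args) {
        double outputWidth    = 640.0;  // requested output width (example value)
        double croppedFov     = 0.64;   // hypothetical crop: 64% of the full FOV
        double objectFraction = 0.10;   // object spans 10% of the full FOV

        double widthAt30fps = objectFraction * outputWidth;              // 64 px
        double widthAt60fps = objectFraction / croppedFov * outputWidth; // 100 px

        System.out.printf("30 fps (full FOV): %.0f px wide%n", widthAt30fps);
        System.out.printf("60 fps (cropped):  %.0f px wide%n", widthAt60fps);
        // Same object, same output size, but roughly 1.56x wider once the
        // sensor crops, which is why the 60 fps image looks zoomed in.
    }
}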

It is also possible to keep the image the same size at higher frame rates by explicitly choosing a camera mode that does "binning" (which combines multiple sensor pixels into one image pixel) for both lower- and higher-rate capture. Binning is helpful because it effectively increases the sensitivity of your camera.
See https://www.raspberrypi.org/blog/new-camera-mode-released/ for details when the "new" higher frame rates were announced.
Also, the page linked in the other answer has a nice picture of the various frame sizes and a good description of the available camera modes. In particular, modes 4 and higher use binning, starting with 2x2 binning (4 sensor pixels contribute to 1 image pixel) and ending with 4x4 (16 sensor pixels contribute to 1 image pixel).
Use the sensor_mode parameter to the PiCamera constructor to choose a mode.
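To make the binning arithmetic concrete (a small sketch; the 3280x2464 full pixel array is an assumed figure for the 8-megapixel sensor):

// Sketch of 2x2 binning: four sensor pixels are combined into one image
// pixel, so resolution halves in each dimension while each image pixel
// gathers roughly four times the light.
public class BinningDemo {
    public static void main(String[] args) {
        int sensorW = 3280, sensorH = 2464; // full pixel array (assumed)
        int bin = 2;                        // 2x2 binning

        int imageW = sensorW / bin;         // 1640
        int imageH = sensorH / bin;         // 1232
        int sensorPixelsPerImagePixel = bin * bin; // 4

        System.out.printf("Binned image: %dx%d (%d sensor pixels per image pixel)%n",
                imageW, imageH, sensorPixelsPerImagePixel);
    }
}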

Related

CameraX ImageAnalysis set TargetResolution smaller than 640x480

I am trying to improve the face detection rate by giving a 480x360 image to the ImageAnalysis use case of CameraX. However, the following code produces a 640x480 image, which reduces detection to 10 fps. If I supply 480x360, I can improve the rate to 20 fps.
How can I get a target resolution smaller than the default?
Also, is there a way to show the image I get for image analysis as the preview, as opposed to using the Preview use case? This is so that the face detection overlay will not lag noticeably behind the preview.
ImageAnalysis imageAnalysis =
        builder  // an ImageAnalysis.Builder created earlier
                .setTargetResolution(new Size(360, 480))
                .setTargetRotation(rotation)
                .setBackpressureStrategy(ImageAnalysis.STRATEGY_KEEP_ONLY_LATEST)
                .build();
How can I get a target resolution smaller than the default?
The default as per the docs should be 640x480.
As to how to get smaller target resolutions, there are three possibilities I could imagine.
You are incorrectly referencing the imageAnalysis object somewhere further down the line, so it ignores the one you built here and falls back to the default resolution of 640x480.
Your camera does not support a Size(360,480) resolution and the nearest supported one is 640x480.
You are referencing the Size in the wrong order i.e. Size(360, 480) may result in a different selected resolution than Size(480, 360). (You reference them in both orders in your question).
As per the docs
The target resolution attempts to establish a minimum bound for the image resolution. The actual image resolution will be the closest available resolution in size that is not smaller than the target resolution, as determined by the Camera implementation. However, if no resolution exists that is equal to or larger than the target resolution, the nearest available resolution smaller than the target resolution will be chosen.
So, I'd try a few smaller sizes, e.g. Size(200, 200) and see what smaller resolutions are supported and scale up from there.
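For example, a hedged sketch (untested; the Size(200, 200) probe, the single-thread executor, and the log tag are illustrative choices, not required values) that requests a small target resolution and logs what the device actually delivers:

import android.util.Log;
import android.util.Size;
import androidx.camera.core.ImageAnalysis;
import java.util.concurrent.Executors;

// Inside your Activity/Fragment: request a deliberately small target
// resolution and log the size of the frames that are actually delivered.
ImageAnalysis buildProbeAnalysis() {
    ImageAnalysis imageAnalysis =
            new ImageAnalysis.Builder()
                    .setTargetResolution(new Size(200, 200)) // probe value
                    .setBackpressureStrategy(ImageAnalysis.STRATEGY_KEEP_ONLY_LATEST)
                    .build();

    imageAnalysis.setAnalyzer(Executors.newSingleThreadExecutor(), imageProxy -> {
        // The delivered resolution may differ from the requested target.
        Log.d("ProbeResolution", imageProxy.getWidth() + "x" + imageProxy.getHeight());
        imageProxy.close(); // always close the frame, or analysis stalls
    });
    return imageAnalysis;
}

You would still bind this to the lifecycle along with your other use cases; the point is just to observe which resolutions the device will actually give you.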
Is there a way to show the image I get for image analysis as the preview, as opposed to using the Preview use case? This is so that the face detection overlay will not lag noticeably behind the preview.
I'm not sure why you assume that would be faster; it would seem this serializes the operations rather than running them in parallel.
If you want further help on this, please provide all your code surrounding the creation of your ImageAnalysis instance.

optical aspect ratio not consistent with supportedPictureSizes

I have discovered that certain tablets (e.g. the Samsung SM-T210, Galaxy Tab 3) report equal horizontal and vertical angles of view (implying an aspect ratio of 1), while NONE of the supported picture sizes in their Camera1 parameters have an aspect ratio of 1 (the closest being 1.33). What's going on? Are the pixel pitches different in the x and y directions? Is some kind of cropping always applied?
I am testing an augmented reality app, and I need to have a very clear understanding of how an optical point maps onto the sensor and then onto the screen or image.
On devices supporting Camera2, one can discover the physical sensor size, the full pixel array size, and the pixel array actually being used, which would let me answer this question. But this device seems to have older hardware.
I would have thought that the largest supported picture size would be (close to) the actual usable pixel array size. But the largest size has an aspect ratio of 1.33, not 1. Does this mean cropping is happening? Is there a scaler in play? What's going on?
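One way to see the mismatch on such a device is to dump both values side by side. Here is a hedged sketch using the deprecated Camera1 API (comparing tan(angle/2) ratios assumes a simple pinhole model, and the CAMERA permission is required):

import android.hardware.Camera;
import android.util.Log;

// Compare the aspect ratio implied by the reported view angles with the
// aspect ratios of the supported picture sizes.
void dumpCameraGeometry() {
    Camera camera = Camera.open();
    try {
        Camera.Parameters params = camera.getParameters();

        double hAngle = Math.toRadians(params.getHorizontalViewAngle());
        double vAngle = Math.toRadians(params.getVerticalViewAngle());
        double angleAspect = Math.tan(hAngle / 2) / Math.tan(vAngle / 2);
        Log.d("CamGeometry", "aspect implied by view angles = " + angleAspect);

        for (Camera.Size s : params.getSupportedPictureSizes()) {
            Log.d("CamGeometry", s.width + "x" + s.height
                    + " -> aspect " + ((double) s.width / s.height));
        }
    } finally {
        camera.release();
    }
}

If the angle-based ratio is 1 while every listed size is 4:3, that at least confirms the discrepancy is in what the driver reports rather than in your own math.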

is there a limit to the size of a SKSpriteNode() texture? (4000 x 4000)

I have been adding textures to SKSpriteNode() and also getting the texture from nodes in order to change them.
When adding textures, I can't add a texture over 4000 pixels wide or high without it resulting in a black SKSpriteNode (the texture exists, it's just black).
When getting a texture from a node I have to make sure the result is within 4000 width or height by scaling the node before getting the texture otherwise it is blank again.
This is all fine for my game at the moment, but I am wondering if there is a built-in limit around 4000, just so I can allow for it.
(there is a reason why I am using such large textures...so it is possible that I might go over 4000 width occasionally)
Check out this helpful chart from Apple:
https://developer.apple.com/metal/limits/
It has a lot of information about graphical limitations. If you want to know the maximum texture size for iOS, find the entry for "Maximum 2D texture width and height".
It depends on what operating systems you are targeting. For example, if you want to support iOS 8 and higher you are restricted to the iOS 8 limit for 2D textures of 4096 x 4096 pixels even though later versions of iOS can support larger textures.

Automatic assembly of multiple Kinect 2 grayscale depth images to a depth map

The project is about measuring different objects under the Kinect 2. The image acquisition code sample from the SDK was adapted to save the depth information over its whole range, without limiting the values to 0-255.
The Kinect is mounted on a beam hoist and moved in a straight line over the model. At every stop, multiple images are taken, the mean calculated and the error correction applied. Afterwards the images are put together to have a big depth map instead of multiple depth images.
Due to the reduced image size (to limit the influence of the noisy edges), every image has a size of 350x300 pixels. For the moment, the test is done with three images to be put together. As with the final program, I know in which direction the images are taken. Due to the beam hoist, there is no rotation, only translation.
In Matlab the images are saved as matrices with the depth values going from 0 to 8000. As I could only find ideas on how to treat images, the depth maps are transformed into images with a colorbar. Then only the color part is saved and put into the stitching script, i.e. not the axes and the grey part around the image.
The stitching algorithm doesn't work. It seems to me that the grayscale images don't have enough contrast for the algorithms to work with. Maybe I am just searching in the wrong direction?
Any ideas on how to treat this challenge?
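Since the question describes a known, rotation-free motion, here is a minimal sketch of the kind of approach I would try instead of feature-based stitching on colorbar renderings: align on the raw depth values with a brute-force search over horizontal shifts. The pure-x translation, the non-negative shift range, the 0-means-invalid convention, and the double[][] layout are all assumptions made for the sketch.

// Translation-only alignment of two raw depth maps (values 0..8000, 0 = invalid).
// Assumes the second map is shifted purely in +x relative to the first; searches
// non-negative candidate shifts for the lowest mean squared difference over the
// overlap, then composites the two maps into one wider map.
public class DepthStitchSketch {

    // depth[y][x]; returns the x-shift of 'right' relative to 'left' (shifts >= 0).
    static int findXShift(double[][] left, double[][] right, int minShift, int maxShift) {
        int h = left.length, w = left[0].length;
        int bestShift = minShift;
        double bestScore = Double.MAX_VALUE;

        for (int shift = minShift; shift <= maxShift; shift++) {
            double sum = 0;
            int count = 0;
            for (int y = 0; y < h; y++) {
                for (int x = shift; x < w; x++) {
                    double a = left[y][x];
                    double b = right[y][x - shift];
                    if (a == 0 || b == 0) continue; // skip invalid depth pixels
                    double d = a - b;
                    sum += d * d;
                    count++;
                }
            }
            if (count == 0) continue;
            double score = sum / count;
            if (score < bestScore) {
                bestScore = score;
                bestShift = shift;
            }
        }
        return bestShift;
    }

    // Composite the two maps into one wider map, preferring valid (non-zero) values.
    static double[][] composite(double[][] left, double[][] right, int shift) {
        int h = left.length, w = left[0].length;
        double[][] out = new double[h][w + shift];
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) out[y][x] = left[y][x];
            for (int x = 0; x < w; x++) {
                if (out[y][x + shift] == 0) out[y][x + shift] = right[y][x];
            }
        }
        return out;
    }
}

Working directly on the depth matrices avoids the contrast problem entirely, because the alignment score is computed on the depth values themselves rather than on rendered pixels.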

OpenGL ES 1.1 2D (iPhone): Resizing a group of textures/rectangular objects

I'm trying to resize what is being displayed by OpenGL ES on the screen, scaling everything uniformly by a single factor.
More precisely, I'm trying to resize a layer by a factor, so that all the objects associated with that layer are resized by that factor.
- Suppose I have 2 images: image_1 of size 100x100 and image_2 of size 50x50, both in layer_1.
- I set the layer_1 scale factor to 0.5.
- image_1 and image_2 should resize to 50x50 and 25x25 respectively.
- The images should be drawn at their new, resized positions.
I've been able to achieve this effect by doing some calculations on the CPU.
I would like to know if there is a way to do it on the GPU. Something like drawing to an empty texture. Is it possible with OpenGL ES 1.1? I'm quite new to OpenGL and graphics.
I don't think you will be able to calculate the scale on the GPU, so as far as I know you will have to do the calculations on the CPU.
For the most part with OpenGL, the drawing is handled by the GPU while most of the calculations are handled by the CPU.
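To make the CPU side concrete, here is a generic, hedged sketch of the math (the Rect type, the layer-origin convention, and the numbers are all illustrative; this is not OpenGL API code): every object's size and its position relative to the layer's origin are multiplied by the same factor before you submit the geometry for drawing.

// Generic sketch: scale each object's size and its position (relative to the
// layer origin) by the same factor; the results are what you would then draw.
public class LayerScaleSketch {
    static class Rect {
        float x, y, width, height; // position and size in layer coordinates
        Rect(float x, float y, float w, float h) {
            this.x = x; this.y = y; this.width = w; this.height = h;
        }
    }

    // Scales each rectangle about the layer origin (originX, originY).
    static void scaleLayer(Rect[] objects, float originX, float originY, float factor) {
        for (Rect r : objects) {
            r.x = originX + (r.x - originX) * factor;
            r.y = originY + (r.y - originY) * factor;
            r.width  *= factor;
            r.height *= factor;
        }
    }

    public static void main(String[] args) {
        Rect image1 = new Rect(0, 0, 100, 100);
        Rect image2 = new Rect(120, 40, 50, 50);
        scaleLayer(new Rect[] { image1, image2 }, 0, 0, 0.5f);
        // image1 is now 50x50 at (0,0); image2 is 25x25 at (60,20).
        System.out.println(image2.x + "," + image2.y + "  " + image2.width + "x" + image2.height);
    }
}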