Is there a limit to the size of an SKSpriteNode() texture? (4000 x 4000) - sprite-kit

I have been adding textures to SKSpriteNode() and also getting the texture from nodes in order to change them.
When adding textures, I can't add a texture over 4000 pixels wide or high without it resulting in a black SKSpriteNode() (the texture exists, it's just black).
When getting a texture from a node, I have to make sure the result is within 4000 pixels in width and height by scaling the node before getting the texture, otherwise it is blank again.
This is all fine for my game at the moment, but I am wondering if there is a built-in limit of 4000, just so I can allow for it.
(There is a reason why I am using such large textures... so it is possible that I might go over 4000 in width occasionally.)

Check out this helpful chart from Apple:
https://developer.apple.com/metal/limits/
It has a lot of information about graphical limitations. If you want to know the maximum texture size for iOS, find the entry for "Maximum 2D texture width and height".
It depends on which operating systems you are targeting. For example, if you want to support iOS 8 and higher, you are restricted to the iOS 8 limit for 2D textures of 4096 x 4096 pixels, even though later versions of iOS support larger textures.
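If you would rather check the limit at run time than hard-code a number, you can ask Metal which GPU family the device belongs to and map that to the maximum 2D texture size. This is only a rough Swift sketch: the 8192/16384 values are taken from Apple's feature set tables, supportsFamily requires iOS 13+, and the 4500 x 3000 image size is a made-up example.

import Metal
import CoreGraphics

// Conservative estimate of the largest 2D texture the current GPU can handle,
// based on Apple's published GPU family / feature set tables.
func maxTextureSize() -> Int {
    guard let device = MTLCreateSystemDefaultDevice() else {
        return 4096   // no Metal device available; fall back to the old iOS 8 limit
    }
    // Apple GPU family 3 and later (A9 and newer) list 16384 x 16384;
    // earlier Apple families list 8192 x 8192.
    return device.supportsFamily(.apple3) ? 16384 : 8192
}

// Usage: warn before building an SKTexture the GPU cannot upload.
let limit = maxTextureSize()
let imageSize = CGSize(width: 4500, height: 3000)   // hypothetical source image
if Int(imageSize.width) > limit || Int(imageSize.height) > limit {
    print("Image exceeds the \(limit) px texture limit and will render black.")
}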

Related

CameraX ImageAnalysis set TargetResolution smaller than 640x480

I am trying to improve the face detection rate by giving a 480x360 image to the ImageAnalysis of CameraX. However, the following code produces a 640x480 image, which reduces detection to 10 fps. If I give it 480x360, I can improve the rate to 20 fps.
How can I get a smaller target resolution, and what is the default?
Is there a way to show the image I got for image analysis as the preview, as opposed to the preview use case? This is so that the face detection overlay will not have a big lag with the preview.
// builder (an ImageAnalysis.Builder) and rotation are defined elsewhere in my code
ImageAnalysis imageAnalysis =
        builder
                .setTargetResolution(new Size(360, 480))
                .setTargetRotation(rotation)
                .setBackpressureStrategy(ImageAnalysis.STRATEGY_KEEP_ONLY_LATEST)
                .build();
How can I get a smaller target resolution, and what is the default?
The default as per the docs should be 640x480.
As to how to get smaller target resolutions, there are three possibilities I can imagine:
You are incorrectly referencing the imageAnalysis object somewhere down the line, so it is ignoring the one from your builder and falling back to the default resolution of 640x480.
Your camera does not support a Size(360,480) resolution and the nearest supported one is 640x480.
You are referencing the Size in the wrong order i.e. Size(360, 480) may result in a different selected resolution than Size(480, 360). (You reference them in both orders in your question).
As per the docs
The target resolution attempts to establish a minimum bound for the image resolution. The actual image resolution will be the closest available resolution in size that is not smaller than the target resolution, as determined by the Camera implementation. However, if no resolution exists that is equal to or larger than the target resolution, the nearest available resolution smaller than the target resolution will be chosen.
So I'd try a few smaller sizes, e.g. Size(200, 200), see what smaller resolutions are supported, and scale up from there.
Is there a way to show the image I got for image analysis as the preview, as opposed to the preview use case? This is so that the face detection overlay will not have a big lag with the preview.
I'm not sure why you assume that would be faster, as it would seem this would serialize the operations rather than running them simultaneously.
If you want further help on this, please provide all your code surrounding the creation of your ImageAnalysis instance.

Scaling Object turns the textures white (Unity3D)

I'm trying to figure out why my Object's textures keep turning white once I scale the object down to 1% (or less) of its normal size.
I can manipulate the objects in real time with my fingers, and there is a threshold where all the textures (except a few) turn completely ghost white, as shown below:
https://imgur.com/wMykeFw
Any input on how to fix this is appreciated!
One potential cause of this issue is that certain shaders can miscalculate how to render textures when scale values are very low.
To be able to render this asset so small using the same shader, re-import the mesh with a smaller scale factor (in the mesh import settings), and that may fix it.
Select ARCamera, then Camera. In the Inspector, select the camera's clipping plane and increase it (you want to find the minimum clipping value that still works, to save on memory, so start at 20000 and work your way backwards until it stops working, then back up a notch).
Next (still in the camera's Inspector), select Rendering Path and set it to Legacy Vertex Lit.
This should clear it up for you.

Raspicam library's frame rate and image

I use the raspicam library from here. I can change the frame rate in the src/private/private_impl.cpp file. After setting the frame rate to 60, I do receive frames at 60 fps, but the object size in the image changes. I attached two images: one captured at 30 fps and the other at 60 fps.
Why do I get a bigger object size at 60 fps, and how can I get the normal object size (the same as at 30 fps)?
The first image was captured at 30 fps and the second at 60 fps.
According to the description here, the higher frame rate modes require cropping on the sensor for the 8-megapixel camera. At the default 30 fps the GPU code will have chosen the 1640x922 mode, which gives the full field of view (FOV). Exceed 40 fps and it will switch to the cropped 1280x720 mode. In either case the GPU will then resize the image to the size you requested. Resize a smaller FOV to the same output size and any object in the scene will use more pixels. The 5-megapixel camera can be used if no cropping is required.
So I should think in terms of field of view, zoom, or cropping, rather than the object size being bigger.
It is also possible to keep the image the same size at higher frame rates by explicitly choosing a camera mode that does "binning" (which combines multiple sensor pixels into one image pixel) for both lower- and higher-rate capture. Binning is helpful because it effectively increases the sensitivity of your camera.
See https://www.raspberrypi.org/blog/new-camera-mode-released/ for details when the "new" higher frame rates were announced.
Also, the page in the other answer has a nice picture with the various frame sizes, and a good description of the available camera modes. In particular, modes 4 and higher are binning, starting with 2x2 binning (so 4 sensor pixels contribute to 1 image pixel) and ending with 4x4 (so 16 sensor pixels contribute to 1 image pixel).
Use the sensor_mode parameter to the PiCamera constructor to choose a mode.

High quality zoom in unity terrains

I have an application that uses the Unity terrain engine to view the terrain (and models on the terrain) with a few different fields of view. This is essentially a camera with telescopic zoom that transitions from 1x to 3x then to 9x.
The problem I'm having is that the various detail roll off settings (Detail Distance, Tree Distance, Billboard start etc.) are all based on the distance from the camera to the 'detail'. At 3x and 9x zoom the view starts at 200 units, and goes out to 2000 units. The landscapes look pretty rubbish, none of the grass shows up, and the trees are all billboarded (like a mid 90s game :-))
I'm trying to set a min & max range for detail based on what I can see in my viewport, not how far the camera is from that detail.
Has anybody got any suggestions as to how I can ramp up distant detail when I am using my tighter FOVs?
Thanks in advance.
Try adding mipmaps for all textures in the scene, disable LODs on objects, and check the detail again!
Make sure you are using a standard scale for your objects (1 unit ~ 1 meter).
Make sure all texture sizes are powers of two and have mipmaps enabled.
Go to Quality Settings and make sure Maximum LOD Level is 0.
Change the LOD bias and see if it helps (1 is more detail, 0 is less detail).
Check the Detail Distance on the terrain itself and make sure it is far enough.

What is the maximum number of triangles that can be drawn on an iPad using OpenGL ES in one frame?

What is the maximum number of triangles that can be drawn on an iPad in a single frame? Also, is there a limit to the number of GL calls used to draw those triangles?
The only limits on total triangles that you'll run into on the iPad are memory size and how quickly you want the frame to render. The more vertices you send, the more memory your application will use, and the slower it will render.
For example, in my benchmarks I was able to push over 1,800,000 triangles per second on an iPad 1 using OpenGL ES 1.1 smooth shading, a single light source, geometry stored in vertex buffer objects (VBOs), and vertices represented by GLshorts in order to minimize total size. The iPad 2 is significantly faster than that, especially when you start doing more complex operations in your fragment shaders. From that number, I can estimate that I'd want to have fewer than 30,000 triangles in my scene geometry if I wanted to render at 60 FPS on the iPad 1.
OpenGL ES 2.0 shaders make things more complicated because of their varying complexity, but they enable new effects and may allow you to use fewer triangles to achieve the same image quality as the fixed function pipeline.
For another example, in this question Davido has a model with about 900,000 triangles that he's able to render at nearly 10 FPS on an iPad 2. I also present some geometry optimization techniques in my answer there that I've found to have a significant impact on OpenGL ES 1.1 rendering when you are maxing out tiler utilization on the device.
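To make those back-of-the-envelope numbers concrete, here is a small Swift sketch of the same reasoning. The throughput figure is simply the iPad 1 measurement quoted above, so substitute your own benchmark results.

import Foundation

// Throughput measured above on an iPad 1 (OpenGL ES 1.1, VBOs, GLshort vertices).
let trianglesPerSecond = 1_800_000.0
let targetFPS = 60.0

// Rough per-frame triangle budget if you want to hold the target frame rate.
let triangleBudget = Int(trianglesPerSecond / targetFPS)     // ~30,000 triangles

// Memory side: a position stored as three GLshorts (2 bytes each) is half the
// size of three GLfloats (4 bytes each), which is why the vertices above were
// packed into GLshorts inside the VBO.
let bytesPerShortVertex = 3 * MemoryLayout<Int16>.size       // 6 bytes
let bytesPerFloatVertex = 3 * MemoryLayout<Float>.size       // 12 bytes

print("Triangle budget at \(Int(targetFPS)) fps: \(triangleBudget)")
print("Vertex size: \(bytesPerShortVertex) bytes (GLshort) vs \(bytesPerFloatVertex) bytes (GLfloat)")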