I can use an AVCaptureDevice as the background.contents of an SCNScene, and it works, but there is one problem. I would like to use a video format with 1920x1080 resolution at 60 FPS, but I can clearly see that a different format is used: it runs at 30 FPS. I configure the device before assigning it to the background, but somehow SCNScene changes it. The SceneView itself runs at 60 FPS, but the camera preview is a different story. Can I somehow force SCNScene to use the video format I choose?
I know I could just add a layer with the camera preview under the SceneView, but I have reasons why that approach does not work correctly, so I need to use this background property of the scene.
Sample project is here: https://www.dropbox.com/s/b820wxlg8voya58/SampleApp.zip?dl=1
In the terminal you can clearly see that after the SceneView starts, the device's active format changes:
Selected format is:
<AVCaptureDeviceFormat: 0x282d58d00 'vide'/'420v' 1280x 720, { 3- 60 fps}, HRSI:4096x2304, fov:58.632, supports vis, max zoom:120.00 (upscales #2.91), AF System:2, ISO:23.0-736.0, SS:0.000013-0.333333>
2018-10-10 14:47:35.009890+0200 SampleApp[6799:1110910] [SceneKit] Error: Could not get pixel buffer (CVPixelBufferRef)
Format after 3 seconds is:
<AVCaptureDeviceFormat: 0x282d58fb0 'vide'/'420v' 1920x1080, { 3- 30 fps}, HRSI:4096x2304, fov:58.632, supports vis, max zoom:16.00 (upscales #1.94), AF System:2, ISO:23.0-736.0, SS:0.000013-0.333333>
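For what it's worth, the selection logic itself is simple. Here is a language-agnostic sketch in Python, with plain (width, height, max_fps) tuples standing in for AVCaptureDevice.Format entries; the (1920, 1080, 60) entry is hypothetical, added alongside the two formats from the log above:

```python
# Sketch of format-selection logic; tuples stand in for AVCaptureDevice.Format.
def pick_format(formats, want_width, want_height, want_fps):
    """Return the first format matching the requested resolution whose
    frame-rate range covers the requested FPS, or None if there is none."""
    for fmt in formats:
        width, height, max_fps = fmt
        if width == want_width and height == want_height and max_fps >= want_fps:
            return fmt
    return None

# The two formats from the log, plus a hypothetical 1080p60 entry:
formats = [(1280, 720, 60), (1920, 1080, 30), (1920, 1080, 60)]
print(pick_format(formats, 1920, 1080, 60))  # -> (1920, 1080, 60)
```

The point of the question is that even after selecting such a format and setting it on the device, SceneKit appears to reconfigure the session behind the scenes.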
Related
I am loading a 300 MB 3D model of a building (created in 3ds Max by an artist, originally a .dae converted into .scn) with 4.4 million vertices and 1.5 million polygons, like so:
let sceneToLoad = SCNScene(named: "art.scnassets/building1.scn")!
(It is loaded in a SCNView default viewer in the app so that the user can view, rotate it etc., by SCNView.allowsCameraControl = true)
Xcode crashes immediately when it executes that line, with only the runtime message "unexpectedly found nil while unwrapping an Optional value".
Memory usage does not go up at all when it reaches that line, suggesting it refuses to read the file and crashes instead. The 3D model loads perfectly and can be viewed, rotated, etc. in the Xcode SceneKit editor's graphical viewer. When I point the code at a smaller 3D model it works fine, and even when I remove the model's SCNNode in the same "building1.scn" file and replace it with a smaller SCNNode of some other random object, it also miraculously loads fine.
I have not found anything similar on SO: in other similar answers iOS at least tries to load the model even if it's huge, but in none of them does it crash immediately on finding a nil value.
I have tried all the usual workarounds: removing/deleting the file and adding it again, loading it as a .dae in its original form, loading the scene without force-unwrapping and unwrapping later when searching for a node. Nothing works; it always crashes the same way. The same thing happens when I try to load it in an ARKit scene: it crashes at the line above that merely loads the file.
Has anyone come across this, or knows of any workaround?
Many thanks
Loading a 3D model with 1.5M polygons into SceneKit/ARKit, RealityKit, or AR Quick Look will always fail. As a rule of thumb, a single 3D model should have no more than 10K polygons (with a UV texture at a maximum resolution of 2Kx2K, or a regular texture at 1Kx1K), and a whole 3D scene should have no more than 100K polygons. You have exceeded this "unspoken" AR limit 15 times over.
Game engines and AR frameworks like SceneKit, RealityKit, and AR Quick Look are incapable of rendering such a huge number of polygons at a 60 fps framerate on an iOS device (even most desktop computers fail at this). The best solution for ARKit/RealityKit applications is to use optimized low-poly models. The preferred format for AR on mobile platforms is Pixar's USDZ; a USDZ file is an uncompressed, unencrypted zip archive of a USD file.
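The arithmetic behind "15 times" is simple to spell out (the budget figures below are the rules of thumb quoted in this answer, not documented Apple limits):

```python
# Checking the model from the question against the rule-of-thumb AR budgets.
MODEL_BUDGET = 10_000    # polygons per single 3D model (rule of thumb)
SCENE_BUDGET = 100_000   # polygons per whole 3D scene (rule of thumb)

model_polygons = 1_500_000  # the building model from the question

print(model_polygons / SCENE_BUDGET)  # 15.0  -> exceeds the whole-scene budget 15x
print(model_polygons / MODEL_BUDGET)  # 150.0 -> exceeds the per-model budget 150x
```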
Look at this low-poly model from Turbosquid. It has just 5K polygons and it looks fine, doesn't it?
P.S.
You can convert obj, fbx, or abc files to usdz using command line tools. Read about it HERE.
I have been adding textures to SKSpriteNode() and also getting the texture from nodes in order to change them.
When adding textures, I can't use a texture over 4000 pixels wide or high without it resulting in a black SKSpriteNode() (the texture exists, it's just black).
When getting a texture from a node I have to make sure the result is within 4000 width or height by scaling the node before getting the texture otherwise it is blank again.
This is all fine for my game at the moment but I am wondering if there is an inbuilt limit of 4000, just so I can allow for it.
(there is a reason why I am using such large textures...so it is possible that I might go over 4000 width occasionally)
Check out this helpful chart from Apple:
https://developer.apple.com/metal/limits/
It has a lot of information about graphical limitations. If you want to know the maximum texture size for iOS, find the entry for "Maximum 2D texture width and height".
It depends on what operating systems you are targeting. For example, if you want to support iOS 8 and higher you are restricted to the iOS 8 limit for 2D textures of 4096 x 4096 pixels even though later versions of iOS can support larger textures.
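If you occasionally go over the limit, you can guard for it up front. Here is a minimal sketch in Python, assuming the 4096-pixel limit described above (check Apple's Metal feature set tables for the actual limit on your target devices):

```python
# Guard against exceeding the 2D texture size limit by scaling down uniformly.
MAX_TEXTURE_DIM = 4096  # the limit the question ran into; device-dependent

def scale_to_fit(width, height, max_dim=MAX_TEXTURE_DIM):
    """Return (width, height) scaled down uniformly so that neither side
    exceeds max_dim; dimensions already within the limit are unchanged."""
    longest = max(width, height)
    if longest <= max_dim:
        return (width, height)
    scale = max_dim / longest
    return (round(width * scale), round(height * scale))

print(scale_to_fit(8000, 2000))  # -> (4096, 1024)
print(scale_to_fit(1024, 1024))  # -> (1024, 1024), unchanged
```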
I use the raspicam library from here. I can change the frame rate in the src/private/private_impl.cpp file. After changing the frame rate to 60, I do receive frames at 60 fps, but the size of objects in the image changes. I have attached two images, one captured at 30 fps and one at 60 fps.
Why do objects appear bigger at 60 fps, and how can I get the normal object size (the same as at 30 fps)?
The first image is at 30 fps and the second image is at 60 fps.
According to the description here, the higher frame rate modes require cropping on the sensor for the 8M pixel camera. At the default 30 fps the GPU code will have chosen the 1640x922 mode, which gives the full field of view (FOV). Exceed 40 fps and it will switch to the cropped 1280x720 mode. In either case the GPU will then resize the image to the size you requested. Resize a smaller FOV to the same output size and any object in the scene will use more pixels. You can use the 5M pixel camera if no cropping is required.
So the effect should be described as a change in field of view, zoom, or cropping, rather than the objects getting bigger.
It is also possible to keep the image the same size at higher frame rates by explicitly choosing a camera mode that does "binning" (which combines multiple sensor pixels into one image pixel) for both lower- and higher-rate capture. Binning is helpful because it effectively increases the sensitivity of your camera.
See https://www.raspberrypi.org/blog/new-camera-mode-released/ for details when the "new" higher frame rates were announced.
Also, the page in the other answer has a nice picture with the various frame sizes, and a good description of the available camera modes. In particular, modes 4 and higher are binning, starting with 2x2 binning (so 4 sensor pixels contribute to 1 image pixel) and ending with 4x4 (so 16 sensor pixels contribute to 1 image pixel).
Use the sensor_mode parameter to the PiCamera constructor to choose a mode.
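As a sketch of what binning buys you, the mode-to-binning mapping can be written out. The mapping below is illustrative, following the description above for the original camera module (modes 4 and up are binned, from 2x2 up to 4x4); check the picamera documentation for the authoritative table:

```python
# Illustrative map of sensor modes to their binning factor (hypothetical
# mapping -- consult the picamera docs for the real per-camera table).
BINNING_FACTOR = {4: 2, 5: 2, 6: 4, 7: 4}  # modes 1-3: no binning

def sensor_pixels_per_image_pixel(mode):
    """How many sensor pixels contribute to one image pixel in this mode."""
    factor = BINNING_FACTOR.get(mode, 1)
    return factor * factor

# With the real library you would then request a mode explicitly, e.g.:
#   camera = picamera.PiCamera(sensor_mode=4)   # a 2x2-binned mode
print(sensor_pixels_per_image_pixel(4))  # 4
print(sensor_pixels_per_image_pixel(7))  # 16
```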
I have separate manifest files for each level of my game, so while the user is playing Level 1, the files for Level 2 are preloaded at the same time using LoadQueue (PreloadJS).
I've noticed some strange behaviour. The stage FPS is set to 24, but the FPS rises (it can be noticed visually) while the next files are preloading, and returns to normal once they are loaded.
How do I fix it? The FPS rises by at least 1.5x.
EDIT: I'm using Ticker (with RAF activated) and it's set to 24 frames per second. Each sprite also has its own 'framerate' property; in most cases it's set to 24 as well, but sometimes it is 16 (every third frame has been cut, hence a framerate of 16).
EDIT2: Here is an example that reproduces the issue. There are two manifest files (for Level1 and Level2). Level1 starts while Level2 is being loaded in the background. You can see that the sprite in Level1 plays at a higher FPS, and drops back to the normal FPS when the Level2 loading completes.
It's better to download it and test locally; otherwise the FPS changes are less noticeable when the download speed is not as high as it is locally or over WiFi.
Link: http://www.filedropper.com/preloadjsfps
The game was created with cocos2d 0.99.5 and Box2D.
iPhone SDK 4.3.
We have a character. When the character moves quickly, it looks blurred (fuzzy/unfocused), both in the simulator and on a device (iPhone 3G).
The character is moved using a mouseJoint (dampingRatio = 0, frequencyHz = -1).
In this screenshot the image is clear (link): the character is in focus, so the screenshot does not capture the problem.
The frame rate is 60 fps the whole time.
Tried params:
use kCCDirectorProjection2D // 3D
alias // antialias texture params
CC_COCOSNODE_RENDER_SUBPIXEL 1 and 0
Video sample: link
How to get a clear image of the character during the move?
I also had a problem like this and fixed it by changing this line in ccConfig.h:
#define CC_FIX_ARTIFACTS_BY_STRECHING_TEXEL 0
to
#define CC_FIX_ARTIFACTS_BY_STRECHING_TEXEL 1
This is the comment for this define, maybe it helps someone.
/** #def CC_FIX_ARTIFACTS_BY_STRECHING_TEXEL
If enabled, the texture coordinates will be calculated by using this formula:
- texCoord.left = (rect.origin.x*2+1) / (texture.wide*2);
- texCoord.right = texCoord.left + (rect.size.width*2-2)/(texture.wide*2);
The same for bottom and top.
This formula prevents artifacts by using 99% of the texture.
The "correct" way to prevent artifacts is by using the spritesheet-artifact-fixer.py or a similar tool.
Affected nodes:
- CCSprite / CCSpriteBatchNode and subclasses: CCLabelBMFont, CCTMXTiledMap
- CCLabelAtlas
- CCQuadParticleSystem
- CCTileMap
To enabled set it to 1. Disabled by default.
#since v0.99.5
*/
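To make the formula concrete, here is a worked example in Python (the rect and texture sizes are made up for illustration):

```python
# Worked example of the CC_FIX_ARTIFACTS_BY_STRECHING_TEXEL formula above:
# the left coordinate moves half a texel inward and the right coordinate
# ends half a texel early, so adjacent spritesheet frames never bleed in.
rect_origin_x = 0     # hypothetical sprite rect inside the spritesheet
rect_width = 64
texture_wide = 1024

tex_left = (rect_origin_x * 2 + 1) / (texture_wide * 2)
tex_right = tex_left + (rect_width * 2 - 2) / (texture_wide * 2)

print(tex_left)   # 0.00048828125 = half a texel in from 0.0
print(tex_right)  # 0.06201171875 = half a texel short of 64/1024 = 0.0625
```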
I am pretty sure that what you are describing is an optical illusion. LCDs, especially lower-quality LCDs, have a finite response time. If this response time is too slow, it can cause ghosting, i.e. the moving object looks smeared. Basically what's happening is the previous frame's (or several frames') pixels take a long time to actually "turn off" and you see fainter versions of your sprite left behind as it moves.
With regards to your comment:
As an experiment, I took a pencil, put it to a sheet of paper, and began to move it quickly. My eyes see the pencil in focus, so the problem is not an optical effect but a code problem.
Looking at a moving object in the real world is not the same as looking at a moving object on the screen, with or without a poor display response time. The real-world object moves continuously, but the screen object moves in discrete steps. Your eye can follow the pencil exactly and keep the image sharp on your retina. If you follow a screen image, however, your eye moves smoothly, while the screen image "jumps" from place to place. This can cause a "juddering" effect for sufficiently fast-moving objects, even at high framerates. If 60fps is still juddery, there is basically no way around this; it is a limitation of current technology.