How to make a 3D viewer in a mixed QML + C++ application - qtquick2

I am writing an application that will let users pick objects from a scrollable list (represented as icons in the list), drop them into a 3D viewer, and then click ‘Play’ to watch the objects move.
It's for a physics simulation where the objects to be added come from the list, so the interface is quite simple and I wanted to create it in QML. The simulation classes are in C++ (Bullet Physics). I am planning to implement the 3D viewer, where the objects can be seen moving (following the laws of physics), as a QML extension plugin written in C++.
The interface is this:
The middle light gray area is where the 3D viewer will come in. The scene will be quite simple for now with max 15 to 20 objects scattered about.
I would like to know the best way to do this. Here are the options I have found so far:
Write a QQuickFramebufferObject-based class: http://qt-project.org/doc/qt-5/qquickframebufferobject.html#details
This already provides an FBO, and I have to implement the renderer in a QQuickFramebufferObject::Renderer class: http://qt-project.org/doc/qt-5/qtquick-scenegraph-textureinsgnode-example.html
Extend QQuickItem: http://qt-project.org/doc/qt-5.0/qtquick/scenegraph-openglunderqml.html
But this means I have to deal with Qt's scene graph and also create an FBO for some reason, and I am not sure whether my rendering would end up in the correct area of the window (the gray area).
Extend QQuickPaintedItem and implement paint().
Extend QDeclarativeItem, which is deprecated (it belongs to the old Qt Quick 1 / QtDeclarative module).
I would like to keep the drawing efficient and as fast as possible.
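For reference, here is a rough sketch of what I understand the first option (QQuickFramebufferObject) would look like; the class names are placeholders and this is untested:
```cpp
// Rough sketch of the QQuickFramebufferObject approach.
// "SimulationViewer" / "SimulationRenderer" are placeholder names.
#include <QQuickFramebufferObject>
#include <QOpenGLFramebufferObject>
#include <QOpenGLFunctions>

class SimulationRenderer : public QQuickFramebufferObject::Renderer,
                           protected QOpenGLFunctions
{
public:
    SimulationRenderer() { initializeOpenGLFunctions(); }

    // Called on the render thread with the FBO already bound.
    void render() override
    {
        glClearColor(0.8f, 0.8f, 0.8f, 1.0f);   // the light gray viewer area
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        // ... draw the current state of the simulation objects here ...
        update();                                // schedule the next frame
    }

    // Recreated automatically whenever the item is resized.
    QOpenGLFramebufferObject *createFramebufferObject(const QSize &size) override
    {
        QOpenGLFramebufferObjectFormat format;
        format.setAttachment(QOpenGLFramebufferObject::CombinedDepthStencil);
        return new QOpenGLFramebufferObject(size, format);
    }
};

class SimulationViewer : public QQuickFramebufferObject
{
    Q_OBJECT
public:
    Renderer *createRenderer() const override { return new SimulationRenderer; }
};
```
The item would then be registered with qmlRegisterType and anchored to fill the gray area in QML; the scene graph composites the FBO's texture into the item, and the simulation state (from Bullet) could be copied over in Renderer::synchronize().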

Related

Unity shader to render objects with same material to subsequent GrabPasses

Overview
I'm working on a shader in Unity 2017.1 to enable UnityEngine.UI.Image components to blur what is behind them.
Like some of the approaches in this Unity forum topic, I use GrabPasses, specifically a tex2Dproj(_GrabTexture, UNITY_PROJ_COORD(<uv with offset>)) call to look up the pixels that I use in my blur summations. I'm doing a basic 2-pass box blur and am not looking to optimize performance right now.
This works as expected:
I also want to mask the blur effect based on the image alpha. I use tex2D(_MainTex, IN.uvmain) to look up the alpha color of the sprite on the pixel I am calculating the blur for, which I then combine with the alpha of the blur.
This works fine when working with just a single UI.Image object:
The Problem
However when I have multiple UI.Image objects that share the same Material created from this shader, images layered above will cut into the images below:
I believe this is because objects with the same material may be drawn simultaneously and so don't appear in each other's GrabPasses, or at least something to that effect.
That at least would explain why, if I duplicate the material and use each material on its own object, I don't have this problem.
Here is the source code for the shader: https://gist.github.com/JohannesMP/8d0f531b815dfad07823d44bc12b8112
The Question
Is there a way to force objects of the same material to draw consecutively and not in parallel? Basically, I would like the result of a lower object's render passes to be visible to the grab pass of subsequent objects.
I could imagine creating a component that dynamically instantiates materials to force this, or using render textures, but I would really like a solution that doesn't require adding components or creating multiple materials to swap out.
I would love a solution that is entirely self-contained within one shader/one material, but I am unsure if this is possible. I'm still only starting to get familiar with shaders, so I'm positive there are some features I am not familiar with.
It turns out that it was my re-drawing of what I grabbed from the _GrabTexture that was causing the issue. By correctly handling the alpha logic there I was able to get exactly the desired behavior:
Here is the updated source code: https://gist.github.com/JohannesMP/7d62f282705169a2855a0aac315ff381
As mentioned before, optimizing the convolution step was not my priority.

Displaying ARKit nodes in relation to real objects

I am trying to draw a box that helps someone understand the dimensions of an item, but I keep running into the issue that, since I first need to detect a plane and then place my physical item on top of that plane, my box gets drawn in front of the item.
Is it possible to somehow overcome this?
@John Scalo is right: your problem is not having to detect a plane first; it's that your render engine doesn't know that part of your green box frame is occluded (hidden) by a real-world object.
"…to somehow overcome this"
Yes, and by doing so you might also be "solving" your original problem: helping someone understand the dimensions of an item.
(Depending on your choice of render engine, e.g. SceneKit) you can add an invisible 3D object that has the same dimensions as the real-world object; the render engine will then "know" that some parts of your box frame are behind this (invisible to the user) 3D object. Therefore, you can tell it not to draw those parts of your box frame, which will give the illusion (borrowing from Apple here) that your soda can has the box around it.
These workarounds are inaccurate, but maybe their accuracy is enough for the level of realism you are trying to achieve:
Option 1: After detecting the desk surface, place a semi-transparent 3D object over the soda can and resize it (gestures/buttons, your choice) until it roughly matches the dimensions of the soda can. Then confirm that you're done, don't draw any texture on it at all, and just let it occlude the green box frame.
Option 2: Hold your device near the edges of the soda can and add "enough" ARAnchors to be able to create a "bounding shape" that (again) can be used to capture the real-world object and occlude it.
Option 3: (intense, and perhaps the least accurate) Use your finger to "brush" over the object from various angles, and on each touch perform a hit test (hopefully the top/nearest hit is a part of your soda can) and build up a "bounding shape" that way.
Option X: any combination of 1 - 2 - 3.
Good luck, there are lots of people trying to work around this device/ARKit limitation at the moment, so keep your eyes open for good ideas.
The problem you're dealing with is called occlusion, and ARKit doesn't (currently?) include occlusion support. Maybe some day soon iPhones and iPads will begin to ship with LIDAR (or similar), in which case ARKit will be able to detect objects in the scene, making occlusion much easier.

iPad SDK for 2D graphics and gui elements?

I'm looking for some sort of SDK or library on top of iOS that might help me produce iPad/iPhone games.
The sort of functionality I'm looking for is:
GUI elements, skinnable buttons, lists, dialog boxes etc
Any routines to help with tile based games
Functions to paint and move sprites
Any vector libs to help with rotation, skew etc
I'm confident I could write all this from scratch, but I'm guessing there are some libraries already out there. I'm not afraid of getting my hands dirty in code, so please don't slate me for asking for prebuilt stuff :)
Thanks
The de facto answer is cocos2d. Open source, MIT licensed, a sprite library (including tile-map support baked in).
As for UI: cocos2d has some helper utilities for dealing with UI elements; however, it's not very hard to skin UIKit (though the more customization you do, the more drawing code you end up with).
Most of the bits are already available in UIKit, but http://www.cocos2d-iphone.org/ is a framework worth looking at for game dev.
Two notes about cocos2d:
- cocos2d has its own UI control implementation called MenuItem. It can be used to easily emulate the behavior of buttons. There is also a simple layout algorithm to arrange these items in columns, rows, and grids. There are also other controls that allow you to display text on the screen (labels). No text editors, though, AFAIK.
- cocos2d can be easily integrated with the rest of UIKit, so it is simple to show standard "message boxes" or other UIKit elements on top of or behind it. I was able to use the cocos2d view as the child view of a UIImagePicker, for example.

How do I create a custom page curl Core Animation?

I'm trying to create a "page curl" animation of an image in my iPhone application. I tried UIViewAnimationTransitionCurlUp and its undocumented Core Animation siblings; however, the image I need to animate is a transparent PNG with "uneven" (partially alpha) outlines. When using the aforementioned pre-made transition, those alpha pixels are painted black as soon as the animation starts, which looks terribly ugly.
Therefore, I seek to create a Core Animation of my own. I have tried to research the subject, but have been unable to find a good overview of the techniques involved. The implementation would of course have to be more complex than a single property change; I get the feeling that even CATransform3D would be too limited for this purpose, as the image needs to have different 3D transformations applied to different parts of it, changing over time. How would one go about this? I'm very grateful for any thoughts or ideas!
Best,
Eli
As Corey points out, you'll probably need to go with OpenGL ES for this one. Core Animation exposes the ability to work with layers, even in 3-D, but all layers are just rectangles and they are manipulated as such. You can animate the flipping of a layer about an axis, even with a perspective distortion, but the kind of curving you want to do is more complex than you can manage using the Core Animation APIs.
You might be able to split your image up into a mesh of tiny layers and manipulate each using a CATransform3D to create this curving effect, but at that point you might as well be using OpenGL ES to create the same effect.
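To make the "curving" idea concrete, here is a rough sketch (plain C++, with made-up names and a simple cylinder-wrapping formula, not taken from any framework) of how a page-curl deformation of a textured grid could be computed before handing the vertices to OpenGL ES:
```cpp
// Sketch: build a grid of vertices for the image and "curl" the part of the
// page past curlX around a cylinder of radius r lying along the Y axis.
// The resulting positions/UVs would be fed to OpenGL ES as vertex arrays.
#include <cmath>
#include <vector>

struct Vertex { float x, y, z, u, v; };

std::vector<Vertex> buildCurledPage(float width, float height,
                                    float curlX,   // where the curl starts
                                    float radius,  // curl cylinder radius
                                    int cols, int rows)
{
    std::vector<Vertex> verts;
    verts.reserve((cols + 1) * (rows + 1));
    for (int j = 0; j <= rows; ++j) {
        for (int i = 0; i <= cols; ++i) {
            float u = float(i) / cols;
            float v = float(j) / rows;
            float x = u * width;
            float y = v * height;
            float z = 0.0f;
            if (x > curlX) {
                // Wrap the flat distance past curlX around the cylinder,
                // lifting the vertex out of the page plane.
                float angle = (x - curlX) / radius;
                x = curlX + radius * std::sin(angle);
                z = radius * (1.0f - std::cos(angle));
            }
            verts.push_back({x, y, z, u, v});
        }
    }
    return verts;
}
```
Animating curlX (and optionally the radius) over time and re-uploading the vertex positions each frame gives the curling motion; the texture coordinates stay fixed, so the transparent PNG keeps its alpha.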
The book Core Animation for Mac OS X and the iPhone: Creating Compelling Dynamic User Interfaces from Pragmatic Programmer may help you write custom Core Animation animations.

How should I organize OpenGL ES 1.x 2D layer tree?

I'm developing a cute puzzle app - http://gotoandplay.freeblog.hu/categories/compactTangram/ - and for performance reasons I decided to render the view with OpenGL. I've started learning it, and I'm OK with buffers, vertices, and textures in a really basic way.
The situation:
In the game the user manipulates 7 puzzle pieces, each of which has 5 sublayers to get some pretty lighting feel. Most of the textures are 256x256. The user manipulates only one piece at a time, so the rest are unchanged during play. A skeleton of the app without any graphics is here: http://gotoandplay.freeblog.hu/archives/2009/11/11/compactTangram_v10_-_puzzle_completement_test/
The question:
How should I organize them? Is it a good idea to "predraw" the current piece states into separate framebuffers(?)/textures(?), or can I simply redraw every piece/layer (1 + 7*5 = 36 sprites) each timestep? If I use "predraw", what should I do: draw into a per-puzzlePiece framebuffer? Then how can I draw that into the scene framebuffer? Or is there a simpler way to "merge" textures?
Hope you can understand my question; if it seems too vague, please take a look at my idea of how to render a piece on my blog (there is a simple Flash implementation of what I'm going to do) here: http://gotoandplay.freeblog.hu/archives/2010/01/07/compactTangram_072_-_tan_rendering_labs/
A common way of handling textures is to pack all your images into a 'texture atlas' at the start of the game/level.
Your maximum texture size is 1024x1024 and you can have about three of them in memory on the iPhone.
When you have all the images in these 'super textures' you can just draw the relevant area of the large texture. This has the advantage that you bind textures less often, which gives better performance, and it also cuts out the excess space wasted by having to pad small images up to power-of-two texture sizes.
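To make the "draw the relevant area" part concrete, here is a rough OpenGL ES 1.x sketch (plain C/C++; the function name and parameters are made up for illustration) that draws one sprite from an atlas by pointing the texture coordinates at its sub-rectangle:
```cpp
// Sketch: draw one sprite whose pixels live in a sub-rectangle of a texture
// atlas, using OpenGL ES 1.1 fixed-function vertex/texcoord arrays.
#include <OpenGLES/ES1/gl.h>   // iPhone OpenGL ES 1.x header

void drawAtlasSprite(GLuint atlasTexture,
                     float x, float y, float w, float h,     // quad on screen
                     float sx, float sy, float sw, float sh, // sub-rect in pixels
                     float atlasSize)                        // e.g. 1024.0f
{
    // Convert the sub-rectangle from pixels to 0..1 texture coordinates.
    float u0 = sx / atlasSize,        v0 = sy / atlasSize;
    float u1 = (sx + sw) / atlasSize, v1 = (sy + sh) / atlasSize;

    const GLfloat vertices[] = {
        x,     y,
        x + w, y,
        x,     y + h,
        x + w, y + h,
    };
    const GLfloat texCoords[] = {
        u0, v0,
        u1, v0,
        u0, v1,
        u1, v1,
    };

    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, atlasTexture);

    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);
    glVertexPointer(2, GL_FLOAT, 0, vertices);
    glTexCoordPointer(2, GL_FLOAT, 0, texCoords);

    // Two triangles as a strip make one textured quad.
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);

    glDisableClientState(GL_TEXTURE_COORD_ARRAY);
    glDisableClientState(GL_VERTEX_ARRAY);
}
```
If all the pieces and their sublayers share one atlas, the 36 sprites can be drawn back-to-back with the atlas bound only once per frame.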