GLSL: Built-in attributes not accessible for iPhone apps?

I am getting really desperate here. I am working with Xcode, trying to implement some OpenGL stuff on the iPhone. I have to write a shader for Phong lighting. I got as far as declaring my geometry (vertices, indices, etc.) and passing the respective values as attributes to the shader (written in GLSL). Using these attributes works fine; some really basic shader programs compile correctly and give the expected output.
Now I'm trying to start with some more advanced calculations, for which I need to use some of the built-in attributes of GLSL, namely gl_NormalMatrix, but whenever I try that the program crashes and I get "ERROR: 0:3: Use of undeclared identifier 'gl_NormalMatrix'".
The same thing happens whenever I use any of the other built-ins such as gl_Vertex, gl_Normal, etc.
Are these attributes not available in GLSL on the iPhone or am I missing something? Maybe I haven't fully understood how this works. As mentioned, passing my own vertex attributes into the shader works fine, and I am also wondering how the program should "know" the correct values "on its own" - so the whole concept of built-in attributes is still a bit confusing to me. But whenever I try to run some shaders I found online to see if something happens, I get these errors thrown, even though everyone else seems to use built-in attributes extensively when writing shaders.
I really hope someone here can shed some light on this. Thousand thanks in advance!
Julia

Those attributes aren't available in OpenGL ES, whether on the iPhone or elsewhere, and the same goes for WebGL. You need to write your own matrix code (or use GLKit if you're supporting only iOS 5+) and supply the attributes yourself. See Khronos's reference card for an incredibly concise summary of what made it into ES 2.0; amongst other things, it lists all the available built-in special variables as:
gl_Position
gl_PointSize
gl_FragCoord
gl_FrontFacing
gl_PointCoord
gl_FragColor
gl_FragData[n]
GLKit's maths code is really good because it's inlined and uses the ARM's NEON SIMD unit. I consider that sufficient reason to specify iOS 5 as a minimum in all new GL projects.
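To make that concrete, here is a minimal sketch of the replacement pattern, written as C++ you could drop into a .mm file. The attribute and uniform names (a_position, a_normal, u_modelViewProjectionMatrix, u_normalMatrix) are arbitrary choices rather than anything GLSL ES mandates, and the shader program setup is assumed to exist elsewhere; the GLKit math calls themselves are real API.

    #include <OpenGLES/ES2/gl.h>
    #include <GLKit/GLKMath.h>  // GLKit's math layer is plain C, usable from C++/.mm

    // GLSL ES has no gl_Vertex/gl_Normal/gl_NormalMatrix; you declare your own.
    // Only the special variables listed above (gl_Position etc.) are built in.
    static const char* kVertexShader =
        "attribute vec4 a_position;\n"
        "attribute vec3 a_normal;\n"
        "uniform mat4 u_modelViewProjectionMatrix;\n"
        "uniform mat3 u_normalMatrix;\n"  // stands in for gl_NormalMatrix
        "varying vec3 v_eyeNormal;\n"
        "void main() {\n"
        "    v_eyeNormal = normalize(u_normalMatrix * a_normal);\n"
        "    gl_Position = u_modelViewProjectionMatrix * a_position;\n"
        "}\n";

    // Each frame, compute the matrices on the CPU and upload them yourself --
    // this is the work the old built-in uniforms used to do for you.
    static void uploadMatrices(GLuint program, GLKMatrix4 modelView, GLKMatrix4 projection) {
        GLKMatrix4 mvp = GLKMatrix4Multiply(projection, modelView);

        // The classic normal matrix: inverse-transpose of the model-view's upper 3x3.
        bool invertible = false;
        GLKMatrix3 normalMatrix =
            GLKMatrix3InvertAndTranspose(GLKMatrix4GetMatrix3(modelView), &invertible);

        glUniformMatrix4fv(glGetUniformLocation(program, "u_modelViewProjectionMatrix"),
                           1, GL_FALSE, mvp.m);
        glUniformMatrix3fv(glGetUniformLocation(program, "u_normalMatrix"),
                           1, GL_FALSE, normalMatrix.m);
    }

The vertex data itself goes in through glVertexAttribPointer on locations bound to a_position and a_normal, exactly as you are already doing for your own attributes.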

Related

HoloLens 2 Research Mode with Unreal - How?

I'm new here, so feel free to give tips where needed. I am running into trouble using the Unreal engine combined with the HoloLens 2.
I would like to access the special black/white cameras of the HoloLens, for tracking purposes. These are normally not accessible. However, they can be activated by using the “perceptionSensorsExperimental” capability. This should be possible, since it also works with Unity: https://github.com/doughtmw/HoloLensForCV-Unity
I have tried to add the capability in the Unreal project settings: in “Config\HoloLens\HoloLensEngine.ini”, add “+RescapCapabilityList=perceptionSensorsExperimental”. The project still builds as expected, but I noticed that it doesn’t matter what I add here: even something random like “+abcd=efgh” doesn’t break the build.
However, if I add “+CapabilityList=perceptionSensorsExperimental”, I get “Packaging (HoloLens): ERROR: The 'Name' attribute is invalid - The value 'perceptionSensorsExperimental' is invalid according to its datatype 'http://schemas.microsoft.com/appx/manifest/types:ST_Capability_Foundation' - The Enumeration constraint failed.”. I conclude: 1) I’m making the changes in the right file. 2) The right schema needs to be configured in order for “+RescapCapabilityList=perceptionSensorsExperimental” to work as expected.
My question is: how do I add the right schema to my Unreal project (like in the Unity example referenced above, which uses “http://schemas.microsoft.com/appx/manifest/foundation/windows10/restrictedcapabilities”)? I cannot find any example, and I cannot find any proper place to put it: not in the settings, not in the XML/ini files. Clearly, I am missing something.
Any thoughts are much appreciated!
Update: we have released the HoloLens-ResearchMode-Unreal plugin.
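For reference, what the Unity project linked in the question ends up with is an AppX manifest that declares the restricted-capabilities schema and then uses it. The following is a sketch of standard UWP manifest syntax showing the target output, not an Unreal-specific recipe (where Unreal regenerates the manifest, hand edits may be overwritten):

    <Package
        xmlns="http://schemas.microsoft.com/appx/manifest/foundation/windows10"
        xmlns:rescap="http://schemas.microsoft.com/appx/manifest/foundation/windows10/restrictedcapabilities"
        IgnorableNamespaces="rescap">
      <!-- ... identity, properties, applications ... -->
      <Capabilities>
        <!-- perceptionSensorsExperimental lives in the restricted-capabilities
             namespace, which is why a plain CapabilityList entry fails validation -->
        <rescap:Capability Name="perceptionSensorsExperimental" />
      </Capabilities>
    </Package>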

Problems getting OSVR to initialise the HMD Display with Oculus DK2

I am using an Oculus DK2 (v0.8) and the OSVR SDK. I'm having a problem getting the HMD to run/display anything.
The Oculus samples and the OSVR samples do work, however, so osvr_server seems to run fine.
My application itself renders a test scene just fine when not using a HMD.
I tried two approaches:
First, just creating an OSVR context and a DisplayConfig object. This seems to work, but DisplayConfig::checkStartup() fails (I do this in a loop, calling update on the context while the checkStartup call keeps failing). I used OpenGLSample.cpp as a guide for this.
Second, I tried using a RenderManager, but the call to createRenderManager results in a crash within RenderManager.dll. I get the same crash whether I create the graphics library object myself or let the library create it.
I am quite stuck now; since the demos and examples do work, I have no idea where to look for the error on my side. Creating the context works, and so does querying interfaces, but the crash with createRenderManager is beyond me.
Does anyone have any hints or ideas what the problem could possibly be?
Regards and thanks in advance
pettersson
RenderManager should not crash during open. There have been a couple of bug fixes recently to avoid that happening, and the latest RenderManager binaries, libraries and header files are available with the SDK download from http://osvr.github.io/using/ along with updated copies of the example programs.
When something goes wrong in RenderManager, it usually reports that to standard error. We're moving that to a logging interface, but for now it should show up on the console. Posting an output of that as an issue at https://github.com/sensics/OSVR-RenderManager/issues is a good way to let the developers know that there is a problem. Of course, providing the same sort of information you provided here will be helpful as well.
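As a sanity check while debugging, the minimal open sequence looks roughly like the sketch below. It is reconstructed from memory of the SDK's example programs (e.g. the OpenGL example), so exact headers and enum spellings may differ by SDK version:

    #include <osvr/ClientKit/Context.h>
    #include <osvr/RenderKit/RenderManager.h>
    #include <iostream>

    int main() {
        // The client context must outlive the RenderManager that uses it.
        osvr::clientkit::ClientContext context("com.example.RenderManagerTest");

        // Pass only the library name and let RenderManager construct the
        // graphics library object itself.
        osvr::renderkit::RenderManager* render =
            osvr::renderkit::createRenderManager(context.get(), "OpenGL");
        if ((render == nullptr) || !render->doingOkay()) {
            std::cerr << "Could not create RenderManager" << std::endl;
            return 1;
        }

        // OpenDisplay() is where display-configuration problems surface;
        // failures are reported on standard error as described above.
        osvr::renderkit::RenderManager::OpenResults ret = render->OpenDisplay();
        if (ret.status == osvr::renderkit::RenderManager::OpenStatus::FAILURE) {
            std::cerr << "Could not open display" << std::endl;
            return 2;
        }

        delete render;
        return 0;
    }

If even this minimal sequence crashes with the latest binaries, that is exactly the sort of report worth filing at the issue tracker above.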

Unity3d, Kinect + Zigfu

I have Zigfu running in Unity3D on my Mac OS 10.7, and all the examples in the ZDK work nicely, EXCEPT when I try to substitute the enclosed models. I have tried many other models (my own Blender imports, other Mixamo models, Mixamo-Unity FBX exports, etc.); they all load beautifully in the enclosed scenes. I target all the skeleton bones on the Zigfu script and set the user(s) and model target on the correct scripts depending on the Zigfu example, and still nothing works. No error messages. I can't figure this out! Any ideas? Tips, suggestions, experience with this? Thanks.

Flash XFL to iOS/Xcode/Interface Builder

I am thinking of starting an ambitious project: writing some type of converter from the new Flash CS5 XFL format to some type of iOS-readable format for building pages. A lot of what I do is convert old Flash courses over to native applications. They are usually very simple, with some basic animations. Some are more complicated than others, obviously.
1) I recently found the XFL format and was wondering if anyone was doing this type of conversion?
2) Has Adobe published the file spec yet? I haven't been able to find it.
3) Is this even possible? Has anyone tried and been unsuccessful or would like to work together on this?
I have thought about this and wonder if it would be a good idea to insert the code directly into the Flash file. That means the tool is written in ActionScript, for example as a utility class called SceneExporter. When you want to export a MovieClip, extend it with SceneExporter, and when the Flash file runs, it exports the content of that MovieClip to an Objective-C code text file. Not a professional approach, but it should work, since we have access to all child elements of that MovieClip.

Interfacing blurring routine with UIImage or CGImageRef... (iPhone)

I found some blurring code at http://incubator.quasimondo.com/processing/stackblur.pde. Any ideas how to feed it, and get back, a UIImage or CGImageRef or something usable on the iPhone?
I'm not sure what format their BImage file is (Bitmap?) and what corresponds to it in Cocoa Touch.
Thanks.
That code is in the JVM-based Processing language. There are some attempts to port Processing to the iPhone but, I guess, at this stage you'll either have to port that code by hand, digging into the entrails of the Processing implementation, or find yourself another reference.
Update: On second glance, they seem to be working with plain low-level RGB data, so the code should be straightforward to port. Processing is close enough to Java, Java is close enough to C++, and you can compile C++ as Objective-C++ (just use the .mm extension). Just copy-paste the code, fix the syntax errors, and run it on your RGB data. Chances are good that you'll be able to get away with just that.
Dig into CGImage docs for information on how to get raw RGB data.
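For the CGImage side, a common pattern (standard CoreGraphics calls, sketched here rather than taken from the blur code) is to draw the image into a bitmap context you own, run the blur over the raw bytes, and then rebuild an image from the same context:

    #include <CoreGraphics/CoreGraphics.h>
    #include <cstdlib>

    // Returns a malloc'ed RGBA8888 buffer holding the image's pixels; the
    // caller owns both the buffer and the context returned via outCtx.
    static unsigned char* copyRGBAPixels(CGImageRef image, CGContextRef* outCtx) {
        size_t width = CGImageGetWidth(image);
        size_t height = CGImageGetHeight(image);
        unsigned char* pixels = (unsigned char*)calloc(width * height * 4, 1);

        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
        CGContextRef ctx = CGBitmapContextCreate(
            pixels, width, height,
            8,          // bits per component
            width * 4,  // bytes per row
            colorSpace, kCGImageAlphaPremultipliedLast);
        CGColorSpaceRelease(colorSpace);

        // Rendering the image into the context fills 'pixels' with raw RGBA
        // data -- the equivalent of the pixel array the Processing code blurs.
        CGContextDrawImage(ctx, CGRectMake(0, 0, width, height), image);

        *outCtx = ctx;
        return pixels;
    }

After blurring the buffer in place, CGBitmapContextCreateImage(ctx) yields a new CGImageRef, and [UIImage imageWithCGImage:] wraps it for display.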
Update 2: The code you've linked to appears to be the stack blur. The author's page says there is an MIT-licensed C++ port of it in the Fog library (search there for Fog::Raster_C - StackBlur).