Can RenderScript use the Pixel Visual Core? - renderscript

The new Pixel Visual Core seems promising for imaging applications. The official blog post mentions Halide and TensorFlow as ways to program this special-purpose processor, but not RenderScript. Will RenderScript (given its original goal) be able to leverage this processor?
https://www.blog.google/products/pixel/pixel-visual-core-image-processing-and-machine-learning-pixel-2/

Since RenderScript can run on DSPs where available, it should in principle run on the Pixel Visual Core now that it has been activated in Android 8.1.

Related

Official Kinect SDK and Unity3d

Does anyone know anything about using Kinect input for Unity3d with the official SDK? I've been assigned a project to try to integrate the two, but my supervisor doesn't want me to use the open Kinect stuff. The last news out of the Unity site was that the Kinect SDK requires .NET 4.0, while Unity3D only supports .NET 3.5.
Are there workarounds? Please point me toward resources if you know anything about this.
The OpenNI bindings for Unity are probably the best way to go. The NITE skeleton is more stable than the Microsoft Kinect SDK's, but it still requires calibration (PrimeSense has mentioned that they'll have a calibration-free skeleton soon).
There are bindings from the Kinect SDK to OpenNI that make the Kinect SDK behave like SensorKinect; this module also exposes the Kinect SDK's calibration-free skeleton as an OpenNI module:
https://www.assembla.com/code/kinect-mssdk-openni-bridge/git/nodes/
Because the Kinect SDK also provides ankles and wrists, and OpenNI already supported them (even though NITE didn't), all the OpenNI content, including Unity character rigs that include ankles and wrists, just works, without calibration. The Kinect SDK bindings for OpenNI also support NITE's skeleton and hand trackers, with one caveat: NITE's gesture detection doesn't seem to work with the Kinect SDK yet. The work-around when using the Kinect SDK with NITE's HandGenerator is to use skeleton-free tracking to provide a hand point. Unfortunately, you lose the ability to track hands when your body isn't visible to the sensor.
Still, NITE's skeleton seems more stable and more responsive than the KinectSDK.
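To make the module abstraction concrete, here is a minimal sketch against the OpenNI 1.x C++ wrapper for reading a joint position (error handling and user-detection callbacks omitted). Since the MSSDK bridge registers the Kinect SDK as an OpenNI module, code like this should work unchanged whether the backend is SensorKinect or the Kinect SDK:

    #include <XnCppWrapper.h>  // OpenNI 1.x C++ wrapper

    // Sketch: poll one tracked user's left-hand joint via OpenNI.
    int main() {
        xn::Context ctx;
        ctx.Init();

        xn::UserGenerator users;
        users.Create(ctx);
        users.GetSkeletonCap().SetSkeletonProfile(XN_SKEL_PROFILE_ALL);
        ctx.StartGeneratingAll();

        while (true) {
            ctx.WaitAndUpdateAll();
            XnUserID ids[1];
            XnUInt16 count = 1;
            users.GetUsers(ids, count);
            if (count > 0 && users.GetSkeletonCap().IsTracking(ids[0])) {
                XnSkeletonJointPosition hand;
                users.GetSkeletonCap().GetSkeletonJointPosition(
                    ids[0], XN_SKEL_LEFT_HAND, hand);
                // hand.position holds real-world X/Y/Z in millimeters.
            }
        }
    }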
How much of the raw Kinect data do you need? For a constrained problem, like just getting limb articulation, have you thought about using an agnostic communication layer like TCP? Just create a simple TCP server in .NET 4.0 that links to the Kinect SDK and pumps out packets with the info you need every 30 ms or so, then write a receiving client in Unity. I had a similar problem with a different SDK. I haven't tried this with the Kinect, though, so maybe my suggestion is overkill.
If you want real-time depth/color data you might need something a bit faster, perhaps using Pipes?
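The pattern itself is language-agnostic. Here is a minimal sketch of the server loop, shown with POSIX sockets purely for illustration; in the actual setup you'd use a .NET 4.0 TcpListener next to the Kinect SDK, and the getJointData() stub below is a hypothetical stand-in for the real skeleton query:

    // Minimal sketch of the "pump skeleton packets over TCP" pattern.
    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <unistd.h>
    #include <chrono>
    #include <thread>

    struct JointPacket { float x, y, z; };  // one joint; extend as needed

    // Hypothetical stand-in for the real Kinect SDK skeleton query.
    JointPacket getJointData() { return {0.0f, 0.0f, 0.0f}; }

    int main() {
        int server = socket(AF_INET, SOCK_STREAM, 0);
        sockaddr_in addr{};
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = INADDR_ANY;
        addr.sin_port = htons(9000);  // port the Unity client connects to
        bind(server, (sockaddr*)&addr, sizeof(addr));
        listen(server, 1);

        int client = accept(server, nullptr, nullptr);  // wait for Unity
        for (;;) {
            JointPacket pkt = getJointData();
            send(client, &pkt, sizeof(pkt), 0);  // fixed-size binary packet
            std::this_thread::sleep_for(std::chrono::milliseconds(30));
        }
    }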

Smile Detection (Any alternative other than OpenCV?)

Is there any library other than OpenCV that detects smiles?
I don't want to use OpenCV, as it sometimes fails to detect faces due to the background.
Does anyone know of another library?
I would recommend having a look at The Machine Perception Toolbox (MPT Library).
I had a chance to play with it a bit at an openFrameworks/OpenCV workshop at Goldsmiths, and there is a C++ smile detection sample available.
I imagine you could try the MPT library on the iPhone with openFrameworks, or simply link to the library from an iPhone project.
"sometimes fails to detect faces due to the background"
An ideal lighting setup will guarantee better results, but given that you want to use this on a mobile device, you must inform your users that smile detection might fail under extreme conditions (bad lighting)
HTH
How are you doing smile detection? I can't see a smile-specific Haar cascade in the default OpenCV face detection cascades. I suspect your problem is the training data rather than OpenCV itself.
Egawer is a good starting point if you need a working app to begin with.
https://github.com/Atrac613/egawer-iOS
I checked the training images of smileD_haarcascade_v0.05 and found that they include the full face. So it seems to be a "smiling face" detector rather than a detector for the smile alone. While this makes the problem easier, it can also be less accurate.
The best approach is to create your own Haar cascade XML file, but admittedly most of us developers don't have time for that. You can improve the results considerably by equalizing the brightness of the image, as in the sketch below.
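A minimal sketch of that preprocessing step with OpenCV's C++ API (the cascade and image paths are placeholders):

    #include <opencv2/opencv.hpp>

    // Sketch: equalize brightness before running a Haar cascade.
    int main() {
        cv::CascadeClassifier cascade("smileD_haarcascade_v0.05.xml");
        cv::Mat img = cv::imread("photo.jpg");

        cv::Mat gray;
        cv::cvtColor(img, gray, cv::COLOR_BGR2GRAY);
        cv::equalizeHist(gray, gray);  // normalize brightness/contrast

        std::vector<cv::Rect> hits;
        cascade.detectMultiScale(gray, hits, 1.1, 3, 0, cv::Size(40, 40));
        // Each rect in 'hits' is a candidate smiling-face region.
        return 0;
    }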
iOS 7 now has native support for smile detection in Core Image. Here is the API diff from iOS 7 Beta 2:
CoreImage
CIDetector.h
Added CIDetectorEyeBlink
Added CIDetectorSmile

What's the right way to utilize ARM SIMD on iPhone for game vector/matrix operations?

I'm making a vector/matrix library for games that utilizes the SIMD unit on the iPhone (3GS or later).
How can I do this?
I searched about this, now I know several options:
Accelerate framework (BLAS+LAPACK+...) from Apple (iPhone OS 4)
OpenMAX implementation library from ARM
GCC auto-vectorization feature
What's the most suitable way for vector/matrix library for game?
You should assume GCC won't auto-vectorize your code, because in practice that is very unlikely to happen!
Like Paul said, to get the most performance out of your iPhone you should write your own ARM assembly code using NEON SIMD instructions for as much of it as you can. But that assumes you understand ARM assembly language as well as NEON, instruction timing, etc. If you don't want to learn ARM assembly, Apple's Accelerate framework and ARM's OpenMAX library both have numerous functions that are already written in ARM assembly with NEON SIMD instructions.
So either Accelerate or OpenMAX should serve you very well if you can use them. I haven't compared the two to see which is actually faster, but I'd assume ARM's OpenMAX is slightly faster than Apple's implementation, since ARM designed the NEON spec! Both should run extremely fast.
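As a taste of the library route, here is a minimal sketch calling vDSP_vadd from the Accelerate framework, which runs NEON-optimized code on ARM without any hand-written assembly:

    #include <Accelerate/Accelerate.h>

    // Sketch: element-wise vector addition via Accelerate's vDSP.
    void add_vectors(const float *a, const float *b, float *out, size_t n) {
        vDSP_vadd(a, 1, b, 1, out, 1, n);  // out[i] = a[i] + b[i], stride 1
    }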
With time come new answers:
The Bullet physics engine has now been optimized for NEON SIMD by Apple. http://bulletphysics.org/Bullet/phpBB3/viewtopic.php?t=8490
To do it well you will probably need to write your own SIMD routines. Use the NEON C intrinsics in GCC rather than assembler to ease the pain of doing this; a sketch follows below.
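For example, a matrix-times-vector routine with NEON intrinsics might look like this (a sketch, assuming a column-major 4x4 float matrix; build with -mfpu=neon on armv7):

    #include <arm_neon.h>

    // Sketch: 4x4 matrix * 4-vector with NEON intrinsics.
    // Assumes column-major storage (the OpenGL convention).
    void mat4_mul_vec4(const float *m, const float *v, float *out) {
        float32x4_t c0 = vld1q_f32(m + 0);   // column 0
        float32x4_t c1 = vld1q_f32(m + 4);   // column 1
        float32x4_t c2 = vld1q_f32(m + 8);   // column 2
        float32x4_t c3 = vld1q_f32(m + 12);  // column 3

        float32x4_t r = vmulq_n_f32(c0, v[0]);  // r  = c0 * v.x
        r = vmlaq_n_f32(r, c1, v[1]);           // r += c1 * v.y
        r = vmlaq_n_f32(r, c2, v[2]);           // r += c2 * v.z
        r = vmlaq_n_f32(r, c3, v[3]);           // r += c3 * v.w
        vst1q_f32(out, r);
    }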
I created a couple of NEON-optimized Mat*Mat and Mat*Vec routines using inline ASM. They are part of the Oolong Engine, but they are under the MIT license, so you can use them as you like:
http://code.google.com/p/oolongengine/source/browse/trunk/Oolong%20Engine2/Math/neonmath/neon_matrix_impl.cpp
Apple now has <simd/simd.h>, a library of optimized math routines for small vectors, matrices, and quaternions, as part of the Accelerate framework you mention.
Seems like that's probably the easiest way today.
https://developer.apple.com/documentation/accelerate/simd?language=objc
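For instance, a minimal sketch using those headers (the names below come from the simd library, which is callable from C, C++, and Objective-C):

    #include <simd/simd.h>
    #include <cstdio>

    // Sketch: matrix * vector with Apple's <simd/simd.h>.
    int main() {
        simd_float4x4 m = matrix_identity_float4x4;  // 4x4 identity
        simd_float4 v = simd_make_float4(1, 2, 3, 1);

        simd_float4 r = simd_mul(m, v);  // dispatches to SIMD code for the target CPU
        printf("%f %f %f %f\n", r.x, r.y, r.z, r.w);
        return 0;
    }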

Engine for use with PC and iPhone?

Are there any engines that allow me to develop for PC and iPhone at the same time? My preferred language would be C#, but that probably won't happen, so I will probably learn C++ or Java.
I want a 2D engine, by the way.
No experience with it but...
http://www.torquepowered.com/products/torque-2D/
C# and Java aren't allowed on the iPhone, so a C or C++ engine would be your best bet. I'm guessing you're making a game; if so, you'll probably want to use OpenGL. I don't know any engines specifically, but here's a full list.
Working with pure Core Animation layers can yield cross-platform 2-D drawing and animation between iPhone, iPad, and Mac. As an example of this, the Core Plot framework runs on Mac and iPhone from the same codebase. Core Animation lets you do some pretty complex animations and layout in 2-D.
It is against the rules in OS 4 to write an iPhone app in something other than Apple's dev tools. However (IANAL), I'd expect that it is, in theory, OK to take the app you made there and then try to run it on an emulation layer on the other platform(s). I'm not sure about direct solutions, but check out the GNU Objective-C runtime / GNUstep; they might be a helpful starting point.

OpenGL for Android and iPhone

In discussion with some colleagues, we were wondering whether OpenGL work developed for Android or iPhone is effectively interchangeable, given that both platforms support the spec.
Or is the reality of sharing OpenGL between the two platforms more a case of quirks and tweaks, and not as easy as one might have hoped?
An OpenGL implementation normally consists of two parts:
1. The platform-specific part. This has functions usually related to creating and displaying surfaces.
2. The OpenGL API. This part is the same on all platforms for a given version of OpenGL; in the case of Android, OpenGL ES 1.0.
What this means is that the bulk of your OpenGL code should be easy to port.
In C, you might have glLoadIdentity();
In Java on Android, something like gl.glLoadIdentity();
So for the bulk of your code you can cut and paste, then search and replace prefixes like 'gl.'.
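As a small illustration, here is a sketch of a GL ES 1.x modelview setup that ports almost verbatim; on Android, the same sequence becomes the gl.-prefixed methods on a GL10 object:

    #include <OpenGLES/ES1/gl.h>  // iPhone header; Android calls these via a GL10 object

    // Sketch: GL ES 1.x transform setup shared across both platforms.
    void setupModelview(float angleDegrees) {
        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();                           // Java: gl.glLoadIdentity();
        glTranslatef(0.0f, 0.0f, -5.0f);            // Java: gl.glTranslatef(...);
        glRotatef(angleDegrees, 0.0f, 1.0f, 0.0f);
    }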
Now for the fun part: you really need to be careful which version you are coding against. OpenGL for the desktop has APIs that don't exist in OpenGL ES. There are also some OpenGL data types specific to each platform. In addition, you have 1.0 (e.g. Android), 1.1 (e.g. iPhone), and 2.0 (e.g. iPhone 3GS) to deal with. The differences in API often reflect additional hardware capability, so it's not as though you can write some easy wrapper code to emulate 2.0 features in 1.0/1.1.
OpenGL ES on Android is done according to the Khronos Java GL ES spec, JSR 239, and wraps GL calls in something like glinst.glBindBuffer(FloatBuffer.wrap(data) ... )
OpenGL ES on iPhone is done using the stock gl.h headers, and the same call will just look like glBindBuffer(data ...)
The code will not be interchangeable, and porting will involve many quirks, even before you get into the whole mess of differences between the 1.0, 1.1, and 2.0 APIs.
Both platforms use OpenGL ES, but Wikipedia claims that Android uses 1.0 while the iPhone uses 1.1 (original and 3G) and 2.0 for the 3GS (link). It's likely that at least some programs will use API functions not included in 1.0, so there won't be full compatibility between the two (well, three).