Keynote/PowerPoint in Unity

Situation:
I am working on a project that allows the user to practice presentations in a VR room. This includes the use of PowerPoint/Keynote, which is displayed on a plane. Displaying images is easy, and so is video.
Problem:
Here's the problem: images don't contain movement, but a PowerPoint/Keynote file often does, and Unity does not support the PowerPoint or Keynote file formats. Exporting to HTML and writing our own parser for the JSON files to apply the animations doesn't seem worth the effort.
Current situation:
At the moment we have converted all slides to textures, without using the animations.
Request:
There used to be some plugins to display HTML on a plane (flat surface), but these seem to be outdated. Does anyone out there have a solution for this problem?
Thanks in advance.

Although this answer doesn't address the specific request of displaying HTML on a quad (plane, whatever) in Unity, it is a solution that may be worth considering if it fits your scenario.
If the presentations are linear, why not record them as video? You can easily play the video on a quad in Unity using a RenderTexture and pause it at the right moments to wait for the user to trigger the next slide/animation, whereupon the video can be played again until the next stop point.
This will require little programming on your part, but isn't the most flexible solution as it requires a linear slideshow and for you to create pause-points in the video playback at the correct timings to match the points where the slideshow naturally awaits a mouseclick from the user.
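A minimal sketch of that idea in Unity C# is below. It assumes a VideoPlayer component that already renders onto the quad (for example via a RenderTexture or the Material Override render mode); the pause timestamps and the "any key" trigger are placeholders you would adapt to your own video and VR input.

using System.Collections.Generic;
using UnityEngine;
using UnityEngine.Video;

// Rough sketch: play the recorded slideshow on a quad and pause at predefined
// timestamps until the presenter triggers the next step. The pause times and
// the Input.anyKeyDown trigger are placeholders to adapt to your project.
[RequireComponent(typeof(VideoPlayer))]
public class SlideshowVideoController : MonoBehaviour
{
    // Timestamps (in seconds) where the slideshow waits for user input.
    public List<double> pausePoints = new List<double> { 5.0, 12.5, 20.0 };

    private VideoPlayer player;
    private int nextPauseIndex;

    void Start()
    {
        player = GetComponent<VideoPlayer>();
        player.Play();
    }

    void Update()
    {
        // Pause when playback reaches the next stop point.
        if (player.isPlaying &&
            nextPauseIndex < pausePoints.Count &&
            player.time >= pausePoints[nextPauseIndex])
        {
            player.Pause();
            nextPauseIndex++;
        }
        // Resume on a user trigger (stand-in for your VR controller input).
        else if (!player.isPlaying && Input.anyKeyDown)
        {
            player.Play();
        }
    }
}

The VideoPlayer itself can render into a RenderTexture that the quad's material samples, or target the quad's renderer directly with its Material Override mode, so no extra code is needed for the display side.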

Related

Building custom Mixed Reality (Augmented Reality) setup in Unity

I'm tasked with developing an application that would emulate augmented reality inside a virtual reality application. We are using Google Cardboard (Google VR), and want to show the camera images (don't mind the actual camera setup; say I already have the images) to the user.
I'm wondering about the ways to implement it. Some ideas I had:
Substituting the images rendered for each eye with my custom camera images.
Here I have the following problems: I don't know how to actually replace the images that are rendered to the screen, let alone to each eye, nor how to afterwards show some models overlaid on top of the image (I would assume by using the stencil buffer?).
Placing 2 planes in front of the camera with custom images rendered onto them
In this case, I'm not sure about the overall "convenience" of the user experience, as the planes would most likely be placed really close, so each eye only sees its own plane and not the other. It seems like this might put some strain on your eyes, because they would converge on something that is really close to you.
Somehow I haven't found a project that tries to achieve something like this, especially with all the Windows Mixed Reality related content polluting the search results.
You can use Vuforia Digital Eyewear; here is the documentation for it.
And a simple tutorial on YouTube.
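If you do end up experimenting with the second idea (planes in front of the camera), a minimal Unity C# sketch of pushing a camera image onto a quad could look like the following; WebCamTexture is only a stand-in for whatever source actually provides your images.

using UnityEngine;

// Rough sketch: stream a camera image onto a quad placed in front of the VR
// camera. WebCamTexture stands in for your real image source; if your images
// come from elsewhere, update the material's texture yourself instead.
[RequireComponent(typeof(MeshRenderer))]
public class CameraFeedQuad : MonoBehaviour
{
    private WebCamTexture camTexture;

    void Start()
    {
        camTexture = new WebCamTexture();
        // Assign the live texture to the quad's material.
        GetComponent<Renderer>().material.mainTexture = camTexture;
        camTexture.Play();
    }

    void OnDestroy()
    {
        if (camTexture != null)
        {
            camTexture.Stop();
        }
    }
}

If each eye needs its own image, one common trick is to put each quad on its own layer and use the culling masks of the left-eye and right-eye cameras so each camera only renders its own quad (this assumes your rig exposes separate per-eye cameras, as the older Google VR SDK did).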

3D AR Markers with Project Tango

I'm working on a project for an exhibition where an AR scene is supposed to be layered on top of a 3D printed object. Visitors will be given a device with the application pre-installed. Various objects should be seen around / on top of the exhibit, so the precision of tracking is quite important.
We're using Unity to render the scene, this is not something that can be changed as we're already well into development. However, we're somewhat flexible on the technology we use to recognize the 3D object to position the AR camera.
So far we've been using Vuforia. The 3D target feature didn't scan our object very well, so we're resorting to printing 2D markers and placing them on the table that the exhibit sits on. The tracking is precise enough, the downside is that the scene disappears whenever the marker is lost, e.g. when the user tries to get a closer look at something.
Now we've recently gotten our hands on a Lenovo Phab 2 pro and are trying to figure out if Tango can improve on this solution. If I understand correctly, the advantage of Tango is that we can use its internal sensors and motion tracking to estimate its trajectory, so even when the marker is lost it will continue to render the scene very accurately, and then do some drift correction once the marker is reacquired. Unfortunately, I can't find any tutorials on how to localize the marker in the first place.
Has anyone used Tango for 3D marker tracking before? I had a look at the Area Learning example included in the Unity plugin, letting it scan our exhibit and table in a mostly featureless room. It does recognize the object in the correct orientation even when it is moved to a different location; however, the scene is always off by a few centimeters, which is not precise enough for our purposes. There is also a 2D marker detection API for Tango, but it looks like it only works with QR codes or AR tags (like this one), not arbitrary images like Vuforia.
Is what we're trying to achieve possible with Tango? Thanks in advance for any suggestions.
Option A) Sticking with Vuforia.
As Hristo points out, your marker-loss problem should be fixable with Extended Tracking. That definitely sounds worth testing.
Option B) Tango
Tango doesn't natively support markers other than AR tags and QR codes.
It also doesn't cope with the area-learned scene moving (much). If your 3D-printed objects stay stationary, you can scan an ADF and should get good-quality tracking; with everything still, you should see a little drift, but not too much.
However, if you are moving those 3D-printed objects, it will definitely throw the tracking off, so moving objects shouldn't be part of the scanned scene.
You could make an ADF scan without the 3D objects present to track the user's position, and then track the 3D-printed objects with AR markers using Tango's AR marker detection (unsure: is that what you tried already?). If that approach doesn't work, I think your only Tango option is to add more features, lighting, etc. to the space to make the tracking more solid.
Overall, natural feature tracking with Vuforia (or marker tracking, for robustness) sounds better suited to what I think your project is doing, as users will mostly be looking at the AR tag/NFT objects. However, if its robustness is not up to scratch, Tango could provide a similar solution.
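Whichever SDK you settle on, the "scene disappears when the marker is lost" symptom can also be softened on the application side by keeping the content at the last known marker pose instead of hiding it. Below is a rough, SDK-agnostic Unity C# sketch; OnMarkerTracked and OnMarkerLost are hypothetical hooks you would wire up to your tracker's found/lost events.

using UnityEngine;

// Rough, SDK-agnostic sketch: keep AR content at the last known marker pose
// when tracking drops instead of hiding it. OnMarkerTracked/OnMarkerLost are
// hypothetical methods to call from your tracking SDK's found/lost callbacks.
public class LastKnownPoseAnchor : MonoBehaviour
{
    // The AR content that should stay put when the marker is lost.
    public Transform arContent;

    private bool hasPose;

    // Call from the SDK's "target found / pose updated" event.
    public void OnMarkerTracked(Vector3 position, Quaternion rotation)
    {
        hasPose = true;
        arContent.position = position;
        arContent.rotation = rotation;
        arContent.gameObject.SetActive(true);
    }

    // Call from the SDK's "target lost" event; once a pose has been recorded,
    // the content simply keeps that pose rather than disappearing.
    public void OnMarkerLost()
    {
        if (!hasPose)
        {
            arContent.gameObject.SetActive(false);
        }
    }
}

Note that this only looks right while the device stays roughly still; keeping the pose correct while the user walks around is exactly what Vuforia's Extended Tracking or Tango's motion tracking provides.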

How to create a flashlight in Unreal that can be used on Android?

The player carries the flashlight, so it is moving all the time. I'm aware of using a spotlight to make a flashlight when developing for PC, but that doesn't work on Android. I have tried searching, and all I have come across is creating a dynamic material that's applied to a certain area to give the illusion of a flashlight, which doesn't look good at all. So I would like to know if there is any other way to achieve this.
I think one option would be to use post processing. I am not sure whether this is better than using materials, but it is a different (and perhaps easier) way.
Here is an example that I made quickly (obviously you would need to fine-tune it):
fake light GIF
This contains ambient light and a post-processing effect.
The yellow area in the middle that you see as light is not a point light; it is just the effect.
It is possible to change the "light" area, intensity, color, etc., as well as the overall darkness.
Also worth mentioning: I made this quickly since I already had a somewhat similar post-processing effect; I just adjusted it so it looks like a flashlight.
You can find more information here:
Post Process Effects
Post Process Materials

Dynamic animation using mic volume as trigger

I am new to iPhone development, and I am looking for someone with experience to simply tell me whether I am on the right path, or perhaps to point me to it, regarding what I am trying to achieve.
I am trying to develop a character animation that reacts to the volume of the microphone input. Something like Talking Tomcat, except that instead of having just a face react to the volume, the entire character's body is involved.
The character has been created in Illustrator and is image-based, so this will be a 2D animation. I have created numerous frames for the different kinds of reactions the character will have depending on the volume.
For my animation I am using UIViews with their animation support and adding UIImageViews as subviews. I am also using CGAffineTransforms for rotating images.
I am also relying on timers to control the different stages of the animation.
To allow for a more flexible animation, I have created UIViews for the head, arms, torso and legs of the character. These have been separated into their own images, and I am manipulating them individually through their UIViews.
I can go over my code in more detail if necessary, although any help will be greatly appreciated even if it is something off the bat.
Seeing that you're not an advanced iOS developer, I would suggest starting with cocos2d.
It has everything you need to start with animation.
The web page and forums have plenty of information on how to do it, and, what's most important [to me at least], you've got gurus like Ray Wenderlich writing excellent tutorials like:
http://www.raywenderlich.com/1271/how-to-use-animations-and-sprite-sheets-in-cocos2d

3D free rotation of object

I have a 3D CAD file of a set of products. I want to create a viewer so that the user can freely rotate the object in 3D.
How would I best go about this?
1) I had thought about exporting a series of 360-degree images, one every 30 degrees around the object, but that would be around 360 images per product. Then I would write the code to handle the matrix required to rotate the object. It seems very excessive, but doable.
2) OpenGL - I have never done any 3D animation using this, though.
We are using LightWave 3D, if that helps.
I'd recommend going with the 3-D rendering route, even though it might require more upfront work than the multiple sliced images approach. It will provide much greater flexibility over the long run, and I think you'll be able to generate a more pleasing experience in the end (small application binary size, smoother rotation, etc.). Also, once you have the display code done, you'll be able to pull in arbitrary models to add on to the ones you started with, and make tweaks to those models more easily.
This question points out a number of ways that you might be able to import LightWave models into formats usable by an OpenGL ES application. It looks like you'll probably need to pass through Blender or another intermediary to accomplish this.
Once you have the model in a form that you can work with, you can build off of several open source 3-D rendering applications for the iPhone / iPad, such as my Molecules application. My application is built for displaying 3-D molecular structures, but people have modified it to support rendering other models for their own needs, so I know that's possible. I go into detail on how this application works in the video for the OpenGL ES session of my class on iTunes U.
OpenGL ES may seem intimidating at first, but it only took me three weeks of nights-and-weekends development to build the initial version of Molecules, and I had no real OpenGL experience before starting that project. There are many great resources out there now, so it's easier than ever to get started.