I am creating a multi-mobile (iOS/Android) app with React Native.
The app needs to embed & launch a Unity game.
It will need to send information to the game, and also receive information from the game. A function passing a JSON string would be sufficient.
Six years ago I embedded native iOS code within a Unity app and it was rather a dark art.
What is the state of play in 2018?
Presumably it is going to involve separate iOS and Android codebases and a React Native component to wrap these, providing a single JavaScript interface. At the Unity end, I'm not sure if it will require separate per-platform coding.
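For what it's worth, the single JavaScript interface you describe could be as thin as the sketch below. The `UnityBridge` module name, its `postMessage` method, and the `UnityMessage` event shape are assumptions for illustration; each would be backed by the separate iOS and Android native code.

```js
// Hypothetical JavaScript-side interface; "UnityBridge" would be a custom
// native module implemented separately in the iOS and Android codebases.
import { NativeModules, NativeEventEmitter } from 'react-native';

const { UnityBridge } = NativeModules;                 // assumed custom native module
const unityEvents = new NativeEventEmitter(UnityBridge);

// Send a JSON string into the Unity game.
export function sendToUnity(payload) {
  UnityBridge.postMessage(JSON.stringify(payload));    // assumed method name
}

// Receive JSON strings sent back from Unity.
export function onUnityMessage(handler) {
  return unityEvents.addListener('UnityMessage', event =>
    handler(JSON.parse(event.json))                    // assumed event shape
  );
}
```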
Related
I would like to make an AR iPhone app in Unity that places an object in the real world which you can then interact with on your iPhone. For example, you'd have a bar at the bottom of your screen and you could drag objects from it into the AR world and interact with them using hand tracking. This would work somewhat like the Meta 2 interface (https://www.youtube.com/watch?v=m7ZDaiDwnxY), where you can grab things and drag them; it uses hand tracking to do this.
I have done some research on this, but I need some help because I don't know where to start or how to accomplish what I am trying to do.
I don't have any code.
You can email me at jaredmiller219#gmail.com with any comments or questions, or to help me with this. Thanks so much for your support!
To get started in mobile AR in Unity, I would recommend starting with Unity's resources:
https://unity.com/solutions/mobile-ar
Here's a tutorial resource for learning ARKit:
https://unity3d.com/learn/learn-arkit
As for hand tracking: the Meta 2 obviously has specialized hardware to execute its features, so you shouldn't necessarily expect to achieve the same feature set with only a phone driving your experience. Leap Motion is the most common hand tracker I've seen integrated into VR and AR setups, and it works well. If you really need hand tracking with just a phone, you could check out ManoMotion, which seeks to bring hand tracking and gesture recognition to ARKit, although I haven't personally worked with it.
I usually use Firebase for syncing every player in my multiplayer games, but this time I can't, because I want to create a desktop game and Firebase only supports mobile.
Can I use GUN as an alternative to store the player position and animation, with every client automatically syncing the data?
#alucard555 Yes, there is a very simple example of a browser-based game (Asteroids in 250 LOC!) that could work in a desktop app via Electron or something:
https://github.com/amark/gun/blob/master/examples/game/space.html
You can play the game (arrow keys to move, space to fire a shockwave, doesn't work on mobile or small screens) here:
http://gunjs.herokuapp.com/game/space.html
With regards to Unity3D specifically, you would need a JavaScript bridge. I have not done Unity3D development myself, but I have heard it supports JavaScript, or some variant of it?
GUN by itself is plain vanilla JS; the only porting UnityScript may need is changing the default localStorage and WebSocket adapters (these are modular and can easily be swapped out for something Unity supports).
However, I do not have enough Unity3D experience to speak on this matter. (I just looked up Firebase's Unity support and noticed that it is not JS based, it is C++. That may mean JS is incompatible with Unity?)
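For the browser/Electron side at least, syncing player state with GUN is roughly this much code. This is only a sketch: the relay peer URL, the player id, the field names, and `render()` are placeholders.

```js
// Minimal sketch of syncing player position/animation with GUN in plain JS.
const Gun = require('gun');
const gun = Gun(['https://your-relay-peer.example/gun']); // placeholder relay peer

const me = gun.get('players').get('player-42');           // placeholder player id

// Publish the local player's state whenever it changes.
function publish(state) {
  me.put({ x: state.x, y: state.y, anim: state.anim });
}

// Subscribe to every player's updates and redraw them.
gun.get('players').map().on((data, id) => {
  if (data) render(id, data);                              // render() is your own drawing code
});
```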
I would like to incorporate QuickBlox or Twilio WebRTC chat and A/V calling into the same Angular apps, running on a web page or inside a Cordova/Crosswalk app, alongside a Construct 2 game. I would like to have an audio/video chat running during gameplay.
Can I embed Construct 2 games into an Ionic view or simple DOM element and then render the video chat over it? Or, should I be integrating the WebRTC chat sessions into Construct 2? Or can I simply display both canvases in the same page?
Thanks in advance.
See: https://quickblox.com/developers/Sample-webrtc-cordova
Junior, here's an answer from the Twilio Video team.
We aren’t investing time in Cordova/Crosswalk right now, although some customers have been asking for it on our GitHub project (https://github.com/twilio/twilio-video.js/issues/85).
twilio-video.js can be integrated into an Angular app easily today. We have a minimal framework test in our GitHub project showing how to set it up (https://github.com/twilio/twilio-video.js/tree/master/test/framework/twilio-video-angular). This isn't a full-fledged application; instead, it's meant to ensure we retain compatibility with Angular as we develop twilio-video.js. It might be nice if we had a more full-fledged Angular quickstart application in the future, but it gets difficult to support and maintain the various front-end frameworks (Angular, React, Ember, Meteor, Vue, etc.).
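Framework aside, a minimal join flow with twilio-video.js looks roughly like the sketch below (written against the 2.x-style API; details vary by version). The access token source, the room name, and the `chat-container` element are assumptions.

```js
// Sketch: join a Room and attach remote participants' media to the page.
const { connect } = require('twilio-video');

async function joinChat(token) {
  const room = await connect(token, { name: 'game-room', audio: true, video: true });

  room.on('participantConnected', participant => {
    participant.on('trackSubscribed', track => {
      // track.attach() returns a media element you can place anywhere in the DOM,
      // including on top of or next to the game canvas.
      document.getElementById('chat-container').appendChild(track.attach());
    });
  });

  return room;
}
```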
I don’t know much about Construct 2, although it looks like a commercial game engine built on JavaScript/HTML5.
Can I embed Construct 2 games into an Ionic view or simple DOM element and then render the video chat over it?
Yes, this would work.
Or, should I be integrating the WebRTC chat sessions into Construct 2?
This might work, too, assuming Construct 2 allows arbitrary JavaScript inside the game engine.
Or can I simply display both canvases in the same page?
Yes, this would work.
The technique used will depend on how much interaction between the game and the video chat needs to take place. For example, if the lifecycle of the video chat should correspond in some way to in-game elements, then it should be created within Construct 2. If the video chat serves more like a commentary on the game, separate from the gameplay mechanics, then either overlaid or alongside in the same page should work.
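If you go the overlay route, the wiring is plain DOM/CSS, roughly as sketched below. The element ids, including the Construct 2 canvas id, are assumptions for illustration.

```js
// Sketch: absolutely position the chat container over the Construct 2 canvas.
const gameCanvas = document.getElementById('c2canvas');   // assumed Construct 2 canvas id
const chat = document.getElementById('chat-container');   // element holding attached Twilio tracks

Object.assign(chat.style, {
  position: 'absolute',
  right: '16px',
  bottom: '16px',
  width: '240px',
  zIndex: '10',          // keep the chat above the game canvas
  pointerEvents: 'auto', // chat stays clickable; the rest of the page belongs to the game
});

gameCanvas.parentElement.style.position = 'relative';
gameCanvas.parentElement.appendChild(chat);
```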
I want to build a Facebook app featuring a personalized video which imports content assets from the user's Facebook profile and their extended social graph and integrates these assets into the timeline. I am thinking of using Flash; however, a key stipulation is that the app works on mobile, so I would need to use HTML5. My question is: can I use Flash to build the application and then compile the app as HTML5, or is there an alternative solution in the form of an HTML5 video toolkit with a programming layer that would allow me to build a web app and access the Facebook API?
I have done this a few times over the years, and yes, Flash was the easiest. However, there are a few purely HTML5-based options available to you that I know of; personally I'd stay away from Flash here, as it will end up just getting in the way:
1- The cleanest method is to use a video compositing tool on the server side which can be programmed to accept variables. Personally I have only ever done this using ffmpeg, though there are a couple of alternatives out there.
The basic process would be to grab the media from FB, then composite it at certain points on top of/below/around a base video sitting on the server, using a script to which you pass the media assets as variables (there's a minimal Node/ffmpeg sketch after this list). There are many options as to how you might want this done; it's probably best to have a look at some of these examples:
http://broadcasterproject.wordpress.com/2010/05/18/how-to-layerremix-videos-with-free-command-line-tools/
http://graphcomp.com/ffmpeg/
ffmpeg watermark without vhook?
Note that the last time I did this I used vhooks and custom filters; vhooks are now deprecated.
This method will mean a reasonably heavy server load if your app is popular, but it's probably the most robust across devices.
2- Use Popcorn.js and let the processing be done on the client side. You could hand-code it using CSS/JS/HTML, but Popcorn is pretty stable; I haven't seen how it runs on devices, but in theory it should work (it's all standardized technology). Basically, the process would be to use JavaScript to fire the display of images overlaid on the base video file at preset cue points. Popcorn has all of the methods and means for you to do this already (see the Popcorn sketch below).
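To make both approaches concrete, here are two rough sketches. First, approach 1 as a small Node script that shells out to ffmpeg; the file names, overlay coordinates, and timings are placeholders.

```js
// Sketch: overlay a user's image on a base video between two timestamps via ffmpeg.
const { execFile } = require('child_process');

function compositeVideo(baseVideo, fbImage, output, done) {
  execFile('ffmpeg', [
    '-i', baseVideo,
    '-i', fbImage,
    // overlay the image at (50,50) from t=5s to t=10s
    '-filter_complex', "[0:v][1:v]overlay=50:50:enable='between(t,5,10)'",
    '-codec:a', 'copy',
    output,
  ], done);
}

compositeVideo('base.mp4', 'profile.jpg', 'personalized.mp4', err => {
  if (err) console.error(err);
});
```

And approach 2 with Popcorn.js cue points on the client; the element ids and times are again placeholders.

```js
// Sketch: show/hide an overlaid image at preset times on the base video.
var pop = Popcorn('#base-video');

pop.cue(5, function () {
  document.getElementById('fb-photo').style.display = 'block';  // show overlay
});
pop.cue(10, function () {
  document.getElementById('fb-photo').style.display = 'none';   // hide overlay
});

pop.play();
```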
Hope this helps a bit. Good luck, sounds fun.
We have built some interactive video apps, and one recent project was quite like what your question describes.
We used Adobe Flash to track the motion and published the project via CreateJS. You could have an image sequence within CreateJS or put a video in a layer behind it. This video would then control the playhead time of the CreateJS motion-tracked sequence via jQuery.
It worked fine. Here is a link to a test setup with an image sequence.
Video integration would be the next step.
http://www.jungeroemer.net/projekte/testpersvid/elftest01.html
(German text, sorry, but there's nothing important to read there.
Just click the images and go for it.)
You can download the sources from the link; if you need, I can also upload the Flash file to show you the motion tracking.
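The jQuery glue for this kind of coupling could be as simple as the sketch below; `exportRoot`, `stage`, and the frame rate are assumptions based on a typical Animate/CreateJS export.

```js
// Sketch: drive a CreateJS timeline from an HTML5 video's playhead.
var FPS = 24;                    // assumed frame rate of the exported sequence
var video = $('#base-video')[0]; // assumed id of the background video element

$(video).on('timeupdate', function () {
  var frame = Math.floor(video.currentTime * FPS);
  exportRoot.gotoAndStop(frame); // jump the motion-tracked sequence to the matching frame
  stage.update();                // redraw the CreateJS stage
});
```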
I'm making an app with Corona SDK with a custom camera button, frame, and layout, like "Leme Camera" in the App Store.
However, Corona doesn't give access to the camera buffer, so we can't use a custom layout to control camera capture, which is not acceptable.
What I found out is that PhoneGap also lacks that feature: it shows the native camera window when I tap.
Can anyone recommend another cross-platform framework that supports this?
With the PhoneGap platform you are able to use a combination of Objective-C and the standard PhoneGap JavaScript library.
From what you have described, it sounds like you would want to go native for something like this! Most of these third-party SDKs do not get very good performance, but if you write the small, computationally intensive camera part in Objective-C, you get all the resources you need and can cut back on the amount of code you will need to port when you try to support additional platforms.
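If you do go the PhoneGap route, the JavaScript side of such a plugin is thin; the heavy lifting lives in the native class it calls into. In this sketch the `CustomCamera` service name, the `capture` action, and the option/result shapes are assumptions.

```js
// Sketch: JavaScript side of a hypothetical custom camera plugin for PhoneGap/Cordova.
function takeCustomPhoto(options, onSuccess, onError) {
  cordova.exec(
    onSuccess,        // called with e.g. a file path returned by the native code
    onError,
    'CustomCamera',   // assumed plugin service name registered in config.xml
    'capture',        // assumed native action
    [options]         // e.g. { overlay: 'frame1', quality: 0.8 }
  );
}

// Usage:
takeCustomPhoto({ overlay: 'frame1' }, function (path) {
  console.log('Photo saved to', path);
}, function (err) {
  console.error(err);
});
```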