Is it possible to stream in Unity? - unity3d

I am working on a VR game for Android in Unity. It has quite a large map and many objects to render, and my phone doesn't have enough power to run the game, so I was wondering: is it possible to stream my game, built for a desktop PC, to my Android device inside Unity, without using any other external software? I also need to use my phone's gyroscope to control the game.

What you are talking about is called remote rendering. Since you said you don't want to use any other external software, you will have to design and implement everything yourself. The trickiest part will be streaming the game view to the phone: there are implementations that send every single frame over a TCP connection (a minimal sketch of that idea follows). If you are willing to use something external, you could look into WebRTC implementations like this.
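A minimal sketch of the frame-over-TCP approach, assuming the desktop side is a Unity project: capture each rendered frame, JPEG-encode it, and write it length-prefixed to a socket. The class name, address, port, and wire format are placeholders, and a real implementation would move the capture and send off the render thread, since blocking writes in OnPostRender will stall rendering.

```csharp
// Hypothetical sketch, not a production streamer. Attach to the Camera.
using System.Net.Sockets;
using UnityEngine;

public class FrameStreamer : MonoBehaviour
{
    TcpClient client;
    NetworkStream stream;

    void Start()
    {
        // Phone's address and port are assumptions for illustration.
        client = new TcpClient("192.168.0.10", 9000);
        stream = client.GetStream();
    }

    void OnPostRender()  // fires after this camera finishes rendering
    {
        Texture2D tex = ScreenCapture.CaptureScreenshotAsTexture();
        byte[] jpg = tex.EncodeToJPG(50);            // quality vs. bandwidth trade-off
        byte[] len = System.BitConverter.GetBytes(jpg.Length);
        stream.Write(len, 0, len.Length);            // length prefix so the receiver
        stream.Write(jpg, 0, jpg.Length);            // can split the stream into frames
        Destroy(tex);                                // avoid leaking a texture per frame
    }
}
```

The receiving Android app would read the 4-byte length, read that many bytes, decode the JPEG into a texture, and draw it full-screen while sending gyro input back the other way.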
By the way, what you are trying to achieve is already available as a product, e.g. here

Related

Methods to test Unity WebGL multiplayer game server scalability

I am looking for a method to test the scalability of my WebGL multiplayer game (built in Unity 3D). The game is currently based on the PUN (Photon Unity Network) library and cloud service for multiplayer communication. I would like to know how to efficiently find out if the server hardware and architecture can support, let's say, 20, 50, or even 100 players in one room, with the limited number of computers at my disposal. Ideally I would like to know the frame rate each player will experience. I do have access to some powerful servers. A preliminary idea I have now is to run a bunch of virtual machines on these servers, each of which runs a browser tab with the game. Just want to know the industry practice or what you think would work. Thanks!
Frame rate is hardware-dependent, and if you are sending data to the server EVERY frame, you are doing it wrong.
The basic premise of doing it right: when a player shoots a projectile, get its spawn point, direction, and speed, then pass those to the server/other players, which can simulate the flight without receiving data every frame.
This is a simplified example, but it gets the point across (see the sketch below).
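A hedged sketch of that pattern, assuming Photon PUN 2 since the question mentions PUN; the class, prefab field, and RPC name are illustrative, not from any official sample:

```csharp
// One RPC per shot instead of per-frame position updates.
using Photon.Pun;
using UnityEngine;

public class ProjectileGun : MonoBehaviourPun
{
    public GameObject projectilePrefab;   // assigned in the Inspector
    public float muzzleSpeed = 30f;

    void Update()
    {
        if (photonView.IsMine && Input.GetButtonDown("Fire1"))
        {
            // Send spawn point, direction, and speed once...
            photonView.RPC(nameof(FireProjectile), RpcTarget.All,
                           transform.position, transform.forward, muzzleSpeed);
        }
    }

    [PunRPC]
    void FireProjectile(Vector3 origin, Vector3 direction, float speed)
    {
        // ...and every client simulates the projectile's flight locally.
        GameObject p = Instantiate(projectilePrefab, origin,
                                   Quaternion.LookRotation(direction));
        p.GetComponent<Rigidbody>().velocity = direction * speed;
    }
}
```

Because each shot costs one message rather than a message per frame, per-player bandwidth no longer scales with frame rate, which also makes load tests with many simulated clients far cheaper to run.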

Is it possible to develop a virtual world that is accessible via both a PC and a virtual reality headset?

For example, someone with a virtual reality headset can interact with this virtual world in VR (i.e. WebVR), while someone without a VR headset and/or WebVR support can still access and explore it in a browser (like RuneScape) and interact with other characters, whether they are in VR or on the web, in the same virtual world?
The A-Frame framework handles that for you automatically, or you can roll it yourself if you're using another framework. Either way, the different control schemes require a fair amount of thought.
You could also take a look at React360 (https://facebook.github.io/react-360/), a WebVR-based framework from Facebook that can handle most 3D media out of the box. It performs quite well and has the advantage of progressive enhancement: if you view it from a desktop/tablet you get a 2D experience, and if you are on a 3D-capable device you get a full VR experience.
It's also cross-platform, so it will run on Android/iOS/Windows/macOS/Oculus/Vive. Samples are included which should be enough to give you an idea of its capabilities.
Depending on the complexity of the game you are trying to develop and the degree of graphical control required, A-Frame is another option to look at.

Hololens - create app on desktop

I can't figure out the wording to research the following idea.
With the Holograms app, I can place a hologram and still see other apps' windows or use the web browser at the same time.
How can I create an app that does not take over the whole system but instead runs on the desktop alongside the browser and other apps?
EDIT: I am trying to run a hologram within the shell.
https://developer.microsoft.com/en-us/windows/holographic/hololens_shell_overview
Mostly, this is for 2D apps, but the Holograms app runs 3D holograms, so is it possible to duplicate this?
3D HoloLens apps do not currently support running side by side with other applications. As of the May 2016 release, you are able to run multiple "flat apps" (UWP apps) side by side:
https://developer.microsoft.com/en-us/windows/holographic/release_notes_-_may_2016
Currently there has been no announcement about running multiple 3D apps side by side. I optimistically hope that this is coming in a future OS release.
If I understand you correctly, you are interested in creating an application for the HoloLens that, rather than having it "appear in the world around you", would run in a window (desktop) placed in the spatial world (like Edge). If that is correct, any UWP application can run windowed and be placed in your virtual space; you don't need a specific HoloLens project for that. If, however, you want to pull in spatial data, you would need to wire up the required events so that spatial-data callbacks handle the incoming data (a sketch of that wiring is below). I'm not 100% sure Unity is the best fit for this problem (at least currently), since it is focusing its attention on spatial hologram applications rather than desktop UWP applications that consume spatial data.
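A sketch of that wiring, assuming a plain UWP (non-Unity) app; the class is illustrative, but Windows.UI.Input.Spatial is the real namespace for consuming spatial input from a 2D app:

```csharp
// Hedged sketch: subscribe to spatial input events from a windowed UWP app.
using Windows.UI.Input.Spatial;

public sealed class SpatialInputListener
{
    SpatialInteractionManager manager;

    public void Initialize()
    {
        // Must run on the thread that owns the app's CoreWindow.
        manager = SpatialInteractionManager.GetForCurrentView();
        manager.SourcePressed += OnSourcePressed;
    }

    void OnSourcePressed(SpatialInteractionManager sender,
                         SpatialInteractionSourceEventArgs args)
    {
        // args.State.Source.Kind distinguishes hand, voice, controller, etc.;
        // args.State.Properties.TryGetLocation(coordinateSystem) yields the
        // source's pose relative to a coordinate system you supply.
        var kind = args.State.Source.Kind;
    }
}
```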
Just to note, it is worth stating that a single 3D app (i.e. one developed in Unity) takes over the whole HoloLens device and experience. If you bloom (the Windows key in the emulator) you can switch to the UWP app view, but you can't see other apps at the same time.

Google Cardboard controlling PC

When I saw Google Cardboard for Unity, I assumed it meant that you would be able to make a Unity PC game and use your phone as a screen/controller. All I can see is it wanting me to make an Android app, which is all well and good, but that doesn't allow for input from the keyboard.
Is there a way to stream the Unity PC project to the device and retrieve input (i.e. Headtracking, NFC magnet)?
The problem with such a solution is latency, and in VR latency is a big deal: the overall delay from input to photons reaching your eyes should be 20 ms or lower. Regular games have 30-60 ms of latency by themselves; add the gyro latency and the phone display's latency, then another 25 ms or more of network ping, and even the optimistic case (30 ms of game latency + 25 ms of ping = 55 ms) is nearly three times the 20 ms target. That will be painful and may even make you sick. If you want to read more on why latency is such a big deal in VR, Michael Abrash wrote an excellent blog post about it: post on latency
If you really need a keyboard for navigation, consider a Bluetooth keyboard that can be paired with Android devices. Also keep in mind that with current technology, especially without a dedicated headset, highly dynamic VR experiences probably won't work very well and can make some people uncomfortable or sick. For a good read on designing virtual reality experiences, please refer to this guide from Oculus: http://static.oculus.com/sdk-downloads/documents/Oculus_Best_Practices_Guide.pdf
There's nothing in the Cardboard SDK for talking to a PC-hosted Unity game. You could adapt the code from the Unity Remote 4 project:
https://www.assetstore.unity3d.com/en/#!/content/18106
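For the input half of that adaptation, a hypothetical sketch of the phone-side sender: read the gyroscope each frame and push the orientation to the PC over UDP. The address, port, and 16-byte packet layout are assumptions, not Unity Remote's actual protocol:

```csharp
// Hypothetical phone-side input sender, not Unity Remote's real protocol.
using System.Net.Sockets;
using UnityEngine;

public class GyroSender : MonoBehaviour
{
    UdpClient udp;

    void Start()
    {
        Input.gyro.enabled = true;            // the gyroscope is off by default
        udp = new UdpClient();
        udp.Connect("192.168.0.2", 9001);     // PC address/port (assumed)
    }

    void Update()
    {
        Quaternion q = Input.gyro.attitude;   // current device orientation
        byte[] buf = new byte[16];            // 4 floats, 4 bytes each
        System.Buffer.BlockCopy(new[] { q.x, q.y, q.z, q.w }, 0, buf, 0, 16);
        udp.Send(buf, buf.Length);            // fire-and-forget: stale packets
    }                                         // can simply be dropped
}
```

The PC-hosted game would listen on that port and apply the received quaternion to its camera; UDP suits this because only the newest orientation matters.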
We are developing an app that does what you want, except it uses GearVR instead of Cardboard. Please check the link below.
http://challengepost.com/software/airvr
Streaming from your PC to your phone's Cardboard is possible using third-party apps, such as Trinus VR (the client app on your phone) and Vireio (the streaming app on your computer). The two apps will then communicate via your home network (Wi-Fi or other) to stream the images.

iPhone peer-to-peer voice chat

I see that Game Kit allows you to develop games with voice chat.
I want to build a more general, peer-to-peer voice chat application, that does not have to live in the Game Center. So a couple questions:
1. What peer-to-peer systems/technologies could be used for this?
2. If I wanted to allow voice chat with a Flash client (i.e. iPhone app <--> server <--> Flash client on PC), would the options from 1 work for this?
I have some experience with RTMFP for Flash-to-Flash client chat, and no iPhone dev experience, so I just want to test out some ideas.
Maybe one idea: build it using the Ribbit Platform - they have both Objective-C and Flash SDKs, though this looks more like traditional/SIP calling.
Anyway, would appreciate anything that points me in the right direction.
Thanks.
Now that Flash has access to raw microphone data, you could roll your own client and server; however, since AIR for mobile currently doesn't have UDP sockets, you would be forced to weigh audio quality against lag under even tighter restrictions than usual.
You could now roll your own native extension to work around this, but I am assuming you want something that only requires coding in AS3.
Therefore, considering your restrictions, the only real bet is to use Flash's built-in communications capabilities (e.g. RTMP).
With the above said, there are open-source alternatives to Adobe's own Flash communication servers: the Red5 server and rtmpd.
IMHO Ribbit's services are kind of pointless.