LoadRawTextureData vs LoadImage - unity3d

I am trying to understand the expected difference between these two in Unity in terms of speed and RAM/GPU consumption. Or is the difference limited to the fact that LoadRawTextureData works with compressed textures whilst LoadImage can't?
I was asked, so to clarify: I'm developing on iOS, iPadOS, Android, and WebGL, but I'm looking for a general answer to help me research each platform further later.
The use case is a user uploading multiple high-res images to the server, and then a mobile client downloading these back again.
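For a concrete picture of the two call patterns, here is a minimal sketch (assuming a downloaded byte[]; the raw path additionally assumes you already know the width, height, and TextureFormat the bytes were encoded with):

using UnityEngine;

public static class TextureLoading
{
    // LoadImage: decodes PNG/JPG bytes on the CPU, allocating and resizing
    // the texture to match the encoded image. Slower, but self-describing.
    public static Texture2D FromEncoded(byte[] imageBytes)
    {
        var tex = new Texture2D(2, 2);  // size is replaced by LoadImage
        tex.LoadImage(imageBytes);      // returns false if the data is not valid PNG/JPG
        return tex;
    }

    // LoadRawTextureData: copies bytes verbatim, so the format, width, and
    // height must exactly match the data (e.g. pre-compressed ASTC/ETC2 you
    // prepared server-side). No decode step, but nothing is validated for you.
    public static Texture2D FromRaw(byte[] rawBytes, int width, int height, TextureFormat format)
    {
        var tex = new Texture2D(width, height, format, false); // no mip chain
        tex.LoadRawTextureData(rawBytes);
        tex.Apply(false, true); // upload to the GPU; makeNoLongerReadable frees the CPU copy
        return tex;
    }
}

The practical difference: LoadImage pays a CPU decode cost and produces an uncompressed RGBA32/RGB24 texture in memory, while LoadRawTextureData can accept GPU-compressed formats (ASTC, ETC2, etc.) directly, which is why it is usually cheaper in both time and memory when you control the data format server-side.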

Related

Does Unity3D support different pixel densities like native Android?

Native Android supports different pixel densities like ldpi, mdpi, hdpi, xhdpi, xxhdpi, and so on. This feature balances app quality and app size.
Currently I'm facing this issue in Unity mobile games (iOS & Android):
With high-quality graphics, crashes and lag increase on low-end devices.
With low-quality graphics, everything looks blurry and pixelated on high-end devices like the iPad Pro.
I could ship two sets of images at different qualities, but again that increases app size, as the low-end devices end up downloading the HD images too.
How to solve this issue?
I suggest looking at the Unity Addressables system, available in Unity 2020 LTS and beyond. This is a whole new tool that you'll have to investigate, so I cannot provide a quick class or line of code to solve your problem. However, the Unity Addressables system is available in the Package Manager, with docs available here.
Using this system will likely make it easier to run hi-res assets on lower-end devices. Since assets are streamed in only when needed, your texture memory usage will be significantly reduced, as textures are unloaded as soon as you're done with them.
Addressables can also be used to load in assets remotely which would reduce your total file size. However, depending on how far you are in development this could be a big change.
You may also want to look at splitting the application binary if changing over to Addressables is too much work. If you split the binary, you can reduce the initial download of the application and have users opt-in to hi-res textures. There are a variety of other solutions provided by the Unity docs on Android builds here.
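As a rough sketch of the loading-and-releasing pattern (the address "HighResTexture" and the Renderer wiring here are hypothetical; the real group and remote-catalog setup happens in the Addressables window):

using UnityEngine;
using UnityEngine.AddressableAssets;
using UnityEngine.ResourceManagement.AsyncOperations;

public class StreamedTexture : MonoBehaviour
{
    private AsyncOperationHandle<Texture2D> handle;

    void Start()
    {
        // "HighResTexture" is a hypothetical address assigned in the Addressables Groups window.
        handle = Addressables.LoadAssetAsync<Texture2D>("HighResTexture");
        handle.Completed += op =>
        {
            if (op.Status == AsyncOperationStatus.Succeeded)
                GetComponent<Renderer>().material.mainTexture = op.Result;
        };
    }

    void OnDestroy()
    {
        // Releasing the handle lets Addressables unload the texture from memory.
        Addressables.Release(handle);
    }
}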
Good luck on getting your game to a shipped state!

Does Agora.io for Unity provide these features?

I'm a bit lost looking through all the various Agora.io modules (and not sure what it means that only some of them have Unity-specific downloads).
I want to make a Unity app where two remote phones exchange data as follows:
Streaming voice in both directions
Streaming video in one direction (recorded from device camera)
Streaming a small amount of continuously-changing custom data in the other direction (specifically, a position + orientation in a virtual world; probably encoded as 7 floats)
The custom data needs to have low latency but does not need reliability (it's fine if some updates get lost; the app only cares about the most recent update). Updates happen basically every frame.
Ideally I want to support both Android and iOS.
I started looking at Agora video (successfully built a test project) and it seems like it will cover the voice and video, but I'm struggling to find a good way to send the custom data (position + orientation). It's probably theoretically possible to encode it as a custom video feed but that sounds complex and inefficient. Is there some out-of-band signalling mechanism I could use to send some extra data alongside/instead of a video?
Agora real-time messaging sounds like it would probably work for this, but I can't seem to find any info about integrating it with Unity (either on Agora's web site or in a general web search). Can I roll this in somehow?
Agora interactive gaming could maybe also be relevant? The overview isn't very clear about how it differs from regular Agora video. I suspect it's overkill, but that might be fine if there isn't a large performance cost.
Could anyone point me in the right direction?
I would also consider alternatives to Agora if there's a better plugin for implementing this feature set in Unity.
Agora's Video SDK for Unity supports exporting projects to Android, iOS, macOS, and Windows (non-UWP).
Regarding your data-streaming needs, Agora's RTM SDK is in the process of being ported to work within Unity. At the moment, the best way to send data using the Agora SDK is to use CreateDataStream to leverage Agora's ability to open a data stream that is sent along with the frames. Data-stream messages are limited to 1 KB per message and 30 KB/s overall, so I would be cautious about sending one every frame if you are running at a frame rate above 30 fps.
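To make that concrete, here is a minimal sketch of the data-stream path. The class and method names follow the agora_gaming_rtc Unity SDK, but exact signatures vary between SDK versions (older versions take a string payload rather than byte[]), so treat this as an assumption to verify against your version's docs; the App ID is a placeholder.

using agora_gaming_rtc;
using UnityEngine;

public class PoseSender : MonoBehaviour
{
    private IRtcEngine mRtcEngine;
    private int streamId;

    void Start()
    {
        // "YOUR_APP_ID" is a placeholder; use your real Agora App ID.
        mRtcEngine = IRtcEngine.GetEngine("YOUR_APP_ID");
        // reliable: false, ordered: false - only the latest pose matters,
        // so lost or reordered packets are acceptable.
        streamId = mRtcEngine.CreateDataStream(false, false);
    }

    // Sends position + orientation as 7 floats (28 bytes), far below the 1 KB cap.
    public void SendPose(Vector3 position, Quaternion rotation)
    {
        float[] pose =
        {
            position.x, position.y, position.z,
            rotation.x, rotation.y, rotation.z, rotation.w
        };
        byte[] payload = new byte[pose.Length * sizeof(float)];
        System.Buffer.BlockCopy(pose, 0, payload, 0, payload.Length);
        // Older SDK versions take a string here instead of byte[]; check your version's docs.
        mRtcEngine.SendStreamMessage(streamId, payload);
    }
}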

Best Recommendation for Capturing Video in a Meteor App on iOS devices

I ran into this problem in Safari where it appears that WebRTC is not fully supported. So when I call
navigator.webkitGetUserMedia()
I get an undefined error.
So my question to the community is: what is the best way to write a Meteor app that captures video on a mobile device and saves it on said device?
If you have done this, I would appreciate it very much if you could share with me and the community how you went about this.
Specific Answer
The modern API is: navigator.mediaDevices.getUserMedia(constraints). See the docs here.
In the past, I've been unsuccessful with getUserMedia on iOS, but according to this post it can be done on iOS 11.
As for saving it, you can write to the browser's file system, but that API is only supported in Chrome. If you want to write to the camera roll, you'd need native code in the mix.
General Advice
I've spent several years of my life dealing with recording, uploading, and processing video using Meteor. If you are doing anything more than trivial web recording, these observations may save you some time:
Chrome (on everything but iOS) has the best API for web recording. If you can require Chrome for recording, that's ideal. Firefox is a close second, behind only because it doesn't support the file system API.
If you need to record and upload long videos on iOS, build a native app. Don't consider any kind of hybrid - that's a serious trap. The number of corner cases and things you need to check is pretty astounding, and the only way to get over those hurdles is with native code.

How to Play High Quality Video in Unity

I'm using MovieTexture now, but when a video file is added to a Unity project it is automatically imported and converted to Ogg Theora format, and the quality is really bad.
I have tried changing the quality setting, and even on the highest setting the video quality is still pretty bad. I have tried multiple file formats like .mov, .avi, .mpeg4, etc. I have even tried converting the video to .ogv myself to get around Unity converting it, and the quality is still poor. The platform is PC, and in the build the quality is the same as in the editor.
So the question is: how can I play high-quality video in Unity, whether with MovieTexture or anything else, such as a plugin?
Unity player on Windows only supports OGG, which is why Unity is transcoding your videos.
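For reference, the stock playback path being discussed is roughly this (a minimal sketch of the legacy MovieTexture API; the clip assignment and the attached AudioSource are assumptions about your scene setup):

using UnityEngine;

public class MoviePlayback : MonoBehaviour
{
    public MovieTexture movie; // assign the imported (Ogg Theora) clip in the Inspector

    void Start()
    {
        // Show the video on this object's material.
        GetComponent<Renderer>().material.mainTexture = movie;

        // Play the audio track alongside the video, if the clip has one.
        var audioSource = GetComponent<AudioSource>();
        if (audioSource != null && movie.audioClip != null)
        {
            audioSource.clip = movie.audioClip;
            audioSource.Play();
        }

        movie.loop = true;
        movie.Play();
    }
}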
I have used the RenderHeads AVPro QuickTime plugin on Windows to play very high-quality videos in kiosk setups. (They also have one for the Windows Media format, but I used QuickTime.)
Link: Renderheads AVPro (Quicktime)
I am not affiliated with them in any way, just a very happy customer, and here is the review I posted on the Unity Asset store:
Great work on your plugin! I've used so many plugins that don't work well over multiple platforms, or require switching between platforms, or manual steps, or manual licensing, or DLL hell, etc. I have to say you nailed it.
I develop on a Mac (and your plugin runs in the Unity Editor), then deploy on Windows. It all worked straightforwardly and as documented. Even the events to detect when a video has loaded and is ready to play were just what I needed (as we are loading a large video file).
Additionally, the error messages are very precise and pin-point a problem (missing file, bad format, etc) which means less time debugging.

Personalized Video / Facebook App - What is the best approach?

I want to build a Facebook app featuring a personalized video which imports content assets from the user's Facebook profile and their extended social graph, and integrates these assets within the timeline. I am thinking of using Flash; however, a key stipulation is that the app works on mobile, so I would need to use HTML5. My question is: can I use Flash to build the application and then compile the app as HTML5, or is there an alternative solution in the form of an HTML5 video toolkit with a programming layer that would allow me to build a web app and access the Facebook API?
I have done this a few times over the years, and yes, Flash was the easiest. However, there are a few purely HTML5-based options available to you that I know of. Personally, I'd stay away from Flash here, as it will just end up getting in the way:
1- The cleanest method is to use a server-side video compositing tool that can be programmed to accept variables. Personally I have only ever done this using ffmpeg, though there are a couple of alternatives out there.
The basic process would be to grab the media from FB, then composite it at certain points on top of/below/around a base video sitting on the server, using a shell script that you pass the media assets to as variables. There are many options as to how you might want this done; it's probably best to have a look at some of these examples:
http://broadcasterproject.wordpress.com/2010/05/18/how-to-layerremix-videos-with-free-command-line-tools/
http://graphcomp.com/ffmpeg/
ffmpeg watermark without vhook?
Note that the last time I did this I used vhooks and custom filters; vhooks are now deprecated.
This method will mean a reasonably heavy server load if your app is popular, but it's probably the most robust across devices, etc.
2- Use Popcorn.js and let the processing be done on the client side. You could hand-code it using CSS/JS/HTML, but Popcorn is pretty stable. I haven't seen how it runs on devices, but in theory it should work (it's all standardized technologies). Basically, the process would be to use JavaScript to fire the display of images overlaid on the base video file at preset cue points. Popcorn already has all of the methods and means for you to do this.
Hope this helps a bit. Good luck, sounds fun.
We have built some interactive video apps, and one recent project was quite like what your question describes.
We used Adobe Flash to track the motion and published the project via create.js. You could have an image sequence within create.js, or put a video in a layer behind it. This video would then control the playhead time of the create.js motion-tracked sequence via jQuery.
It worked fine; here is a link to a test setup with an image sequence. Video integration would be the next step.
http://www.jungeroemer.net/projekte/testpersvid/elftest01.html
(German text, sorry, but there's nothing important to read there.
Just click the images and go for it.)
You can download the sources from the link; if you need, I can also upload the Flash file to show you the motion tracking.