Streaming video from a Wi-Fi access point camera to a remote computer - unity3d

After weeks of searching the forums and trying different approaches, I haven't found a solution for my quite specific problem. I'm thankful for every hint you might provide.
I purchased a Kodak Pixpro 360 camera which offers a view-finder function over wifi (i.e. a live video stream). Now I'm trying to use this camera as a surveillance cam that can be accessed from anywhere (not just the local network). An ODROID will be connected to the camera via wifi and use a second wifi dongle to connect to the LAN. The incoming video stream should be forwarded in real-time to the client (there will be only one at a time). The received 360 degree content is then viewed in an application written in Unity3d.
So far I have managed to grab the cam's MJPEG stream and serve it as JPEGs using a NodeJS server. The JPEGs are then rendered via the WWW.LoadImageIntoTexture method. As you might imagine, GET requests for each frame are horribly slow and result in about 0.5 frames per second.
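For reference, this is roughly what that polling approach looks like on the Unity side (a minimal sketch; the endpoint URL and texture setup are placeholders I invented for illustration, and the per-frame GET round trip is exactly what makes it slow):

using System.Collections;
using UnityEngine;

public class JpegPoller : MonoBehaviour
{
    // Hypothetical endpoint of the NodeJS server that returns the latest JPEG
    public string frameUrl = "http://192.168.0.10:3000/frame.jpg";
    private Texture2D tex;

    IEnumerator Start()
    {
        tex = new Texture2D(2, 2);
        GetComponent<Renderer>().material.mainTexture = tex;

        while (true)
        {
            // One GET request per frame - this round trip is the bottleneck
            WWW www = new WWW(frameUrl);
            yield return www;
            www.LoadImageIntoTexture(tex);
        }
    }
}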
A colleague of mine pointed me to WebRTC and the Janus gateway as a more elegant solution. This simple peer chat uses SocketIO and works just fine with my webcam, but I cannot figure out how to change this code to use the video stream coming from the PIXPRO instead of my local device. Rendering the content should be fun, too, as you need a browser for WebRTC and I am not sure how much of that can be embedded in Unity3d.
Unfortunately, the camera cannot connect to a LAN by itself but rather acts as a wifi access point. This makes all the solutions I found for IP cams obsolete.
I found a similar project that managed to forward their video stream via Janus and WebRTC but I am not sure if and how I can apply their methods.
https://babyis60.wordpress.com/2015/02/04/the-jumping-janus/
UPDATE
Ok guys, I managed to narrow down my problem by myself. The PIXPRO has no RTP support, so I am stuck with the JPEG stream. Now I am trying to speed up the paparazzo.js implementation that reads the camera's TCP responses and returns JPEGs by searching for the boundary between the frames. These JPEGs are then served via an HTTP response. I would like to speed up this process by using SocketIO to push these frames to the client and render them there.
The strange thing is that the data seems to be just fine on the server side (I get a valid JPEG image when I export it via fs.writeFileSync('testimage.jpg', buffer, 'binary');), but I can't get it to work on the client side after I send the image via io.sockets.emit("stream",{image: image});. When I try to display this image in a browser via $("#video").attr("src", "data:image/jpeg;," + data.image);, the image is not parsed correctly. The inspector shows that the video source is updated, but there is only a binary string.

I finally managed to get it done. The binary data had to be loaded into a Buffer and sent as a base64 string.
paparazzo.on("update", (function(_this) {
  return function(image) {
    updatedImage = image;
    // Wrap the raw JPEG frame in a Buffer so it can be base64-encoded
    var vals = new Buffer(image, 'binary');
    //fs.writeFileSync('testimage.jpg', vals, 'binary');
    // Push the frame to all connected clients as a base64 string
    io.sockets.emit("image", vals.toString('base64'));
    return console.log("Downloaded " + image.length + " bytes");
  };
})(this));
On the client side I had to use an image tag because canvas solutions didn't seem to work for me.
var image = document.getElementById('image');
socket.on("image", function(info) {
  // info is the base64-encoded JPEG frame emitted by the server
  image.src = 'data:image/jpeg;base64,' + info;
});
The browser output was just a test before the actual Unity3D implementation. I tried many WebSocket libraries for Unity3D, but the only one that worked on an Android device was the UnitySocketIO-WebsocketSharp project.
Now I could simply convert my base64 image to a byte array and load it into a Texture2D.
// bytes and tex are fields on the MonoBehaviour; the handler is registered in Start()
socket.On("image", (data) => {
    bytes = Convert.FromBase64String(data.Json.args.GetValue(0).ToString());
});

void Update () {
    // Decode the most recent frame into the texture every frame
    tex.LoadImage(bytes);
}
The LoadImage seems to block the UI thread, though, which slows down my camera control script, so I will have to take a look into Unity plugins that can re-write texture pixels on a lower level. Using the Cardboard SDK for Unity worked as a work-around for me to get reasonably smooth camera control again.
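One small mitigation, sketched here under the same assumptions as the snippet above (fields named bytes and tex, handler registered in Start, same socket library), is to only pay the LoadImage cost when a new frame has actually arrived instead of on every Update:

private byte[] bytes;       // latest base64-decoded JPEG frame
private bool hasNewFrame;   // set when a fresh frame arrives
private Texture2D tex;

void Start()
{
    tex = new Texture2D(2, 2);
    socket.On("image", (data) => {
        bytes = Convert.FromBase64String(data.Json.args.GetValue(0).ToString());
        hasNewFrame = true;
    });
}

void Update()
{
    // Skip the decode entirely on frames where nothing new came in
    if (hasNewFrame)
    {
        hasNewFrame = false;
        tex.LoadImage(bytes);
    }
}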

Related

How to send video feed frames to a ThingsBoard widget without requesting them from a local server?

I've been trying to send my local video feed from a Raspberry Pi to a ThingsBoard widget. The typical solution would be to expose the feeding device via port forwarding or VPN so it can be accessed from outside my network; then I would be able to request video frames from a typical image, video or iframe element with the aid of JavaScript.
But my need is a little more complex: I want to send the video feed frames to a ThingsBoard widget directly, so I could have the video feed without the hassle of port forwarding and the network stuff. Any ideas?
I don't think this is possible other than what you suggested: making the feed available externally and pulling it into an iFrame in an HTML widget. It would require a constantly connected feed to run through the Rule Engine from the device, and it is not built that way.
What you could do, depending on your requirements, is have the device send still frames from the camera in base64 every time it polls.
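A rough sketch of that still-frame idea, assuming ThingsBoard's device HTTP telemetry endpoint; the host, access token, file path and telemetry key here are placeholders I made up, not values from the question:

using System;
using System.IO;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class FramePublisher
{
    // Placeholder host and device access token; substitute your own
    const string TelemetryUrl =
        "https://thingsboard.example.com/api/v1/YOUR_ACCESS_TOKEN/telemetry";

    static async Task Main()
    {
        using (var client = new HttpClient())
        {
            while (true)
            {
                // Latest JPEG captured by the camera (hypothetical path)
                byte[] jpeg = File.ReadAllBytes("/tmp/latest-frame.jpg");
                string payload =
                    "{\"frame\":\"" + Convert.ToBase64String(jpeg) + "\"}";

                // ThingsBoard's device HTTP API accepts telemetry as a JSON object
                await client.PostAsync(TelemetryUrl,
                    new StringContent(payload, Encoding.UTF8, "application/json"));

                await Task.Delay(1000); // roughly one frame per second
            }
        }
    }
}

The widget can then read the frame key from telemetry and set it as the src of an img element in data:image/jpeg;base64,... form, much like the browser test in the question above.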

Does Agora.io for Unity provide these features?

I'm a bit lost looking through all the various Agora.io modules (and not sure what it means that only some of them have Unity-specific downloads).
I want to make a Unity app where two remote phones exchange data as follows:
Streaming voice in both directions
Streaming video in one direction (recorded from device camera)
Streaming a small amount of continuously-changing custom data in the other direction (specifically, a position + orientation in a virtual world; probably encoded as 7 floats)
The custom data needs to have low latency but does not need reliability (it's fine if some updates get lost; app only cares about the most recent update). Updates basically every frame.
Ideally I want to support both Android and iOS.
I started looking at Agora video (successfully built a test project) and it seems like it will cover the voice and video, but I'm struggling to find a good way to send the custom data (position + orientation). It's probably theoretically possible to encode it as a custom video feed but that sounds complex and inefficient. Is there some out-of-band signalling mechanism I could use to send some extra data alongside/instead of a video?
Agora real-time messaging sounds like it would probably work for this, but I can't seem to find any info about integrating it with Unity (either on Agora's web site or in a general web search). Can I roll this in somehow?
Agora interactive gaming could maybe also be relevant? The overview doesn't seem real clear about how it's different from regular Agora video. I suspect it's overkill but that might be fine if there isn't a large performance cost.
Could anyone point me in the right direction?
I would also consider alternatives to Agora if there's a better plugin for implementing this feature set in Unity.
Agora's Video SDK for Unity supports exporting projects to Android, iOS, MacOS, and Windows (non-UWP).
Regarding your data streaming needs, Agora's RTM SDK is in the process of being ported to work within Unity. At the moment the best way to send data using the Agora SDK is to use CreateDataStream to leverage Agora's ability to open a data stream that is sent along with the frames. Data stream messages are limited to 1kb per frame and 30kb/s, so I would be cautious about running it on every frame if you are using a frame rate above 30fps.
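For the position + orientation payload itself, here is a minimal sketch of packing the 7 floats into a compact byte array before handing it to the data stream; the class name is illustrative, and the actual send call (e.g. something alongside CreateDataStream) and its signature depend on your Agora SDK version, so treat that part as an assumption:

using System;
using UnityEngine;

public static class PoseEncoder
{
    // Packs a position (3 floats) and rotation quaternion (4 floats) into 28 bytes,
    // well under the per-message data stream limit mentioned above.
    public static byte[] Encode(Vector3 position, Quaternion rotation)
    {
        var buffer = new byte[7 * sizeof(float)];
        float[] values = { position.x, position.y, position.z,
                           rotation.x, rotation.y, rotation.z, rotation.w };
        for (int i = 0; i < values.Length; i++)
        {
            BitConverter.GetBytes(values[i]).CopyTo(buffer, i * sizeof(float));
        }
        return buffer;
    }
}

// Usage (hypothetical): pass PoseEncoder.Encode(transform.position, transform.rotation)
// to the data stream send method exposed by your version of the Agora Unity SDK.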

Video streaming solutions

I am attempting to stream a video, in a format Unity3d can access, like an MJPG. I have gone through several possible solutions, including gstreamer (only does client side as far as I could tell by the examples), yawcam (I couldn't find a way to access the image directly), and Silverlight (due to simply not being able to find how the heck webcam streaming was doable). I am currently just looking for any more methods of getting video over from one side to the other. Could I possibly simply read the images into a byte array and send them over a socket? Maybe I missed something in the previous three possible solutions?
If you are looking to stream video from a server then you can use Ogg encoding + WWW.movie to map it to a texture, assuming you have a Pro license, as I think this is a Pro-only feature. If this is a local file, either bundled with the app or in an external folder, we use the brilliant AVPro Windows Media or AVPro QuickTime. MJPEG does offer super smooth scrubbing with AVPro but generates enormous files. Definitely not ideal for streaming or even download!
Finally RenderHead also has a Live Camera capture plugin that could meet your needs.
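For the Ogg + WWW.movie route mentioned above, the classic pattern looks roughly like this (a sketch using Unity's legacy WWW/MovieTexture API, which has been removed in newer Unity versions; the URL is a placeholder and the source needs to be an Ogg Theora .ogv file):

using System.Collections;
using UnityEngine;

public class OggStreamPlayer : MonoBehaviour
{
    // Placeholder URL; must point to an Ogg Theora (.ogv) video
    public string url = "http://example.com/stream.ogv";

    IEnumerator Start()
    {
        WWW www = new WWW(url);
        MovieTexture movie = www.movie;   // legacy API

        // Wait until enough data has buffered to start playback
        while (!movie.isReadyToPlay)
            yield return null;

        GetComponent<Renderer>().material.mainTexture = movie;
        movie.Play();
    }
}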

Realtime Audio/Video Streaming FROM iPhone to another device (Browser, or iPhone)

I'd like to get real-time video from the iPhone to another device (either desktop browser or another iPhone, e.g. point-to-point).
NOTE: It's not one-to-many, just one-to-one at the moment. Audio can be part of the stream or via a telephone call on the iPhone.
There are four ways I can think of...
1. Capture frames on iPhone, send frames to mediaserver, have mediaserver publish realtime video using host webserver.
2. Capture frames on iPhone, convert to images, send to httpserver, have javascript/AJAX in browser reload images from server as fast as possible.
3. Run httpServer on iPhone, capture 1 second duration movies on iPhone, create M3U8 files on iPhone, have the other user connect directly to httpServer on iPhone for liveStreaming.
4. Capture 1 second duration movies on iPhone, create M3U8 files on iPhone, send to httpServer, have the other user connect to the httpServer for liveStreaming. This is a good answer, has anyone gotten it to work?
Is there a better, more efficient option?
What's the fastest way to get data off the iPhone? Is it ASIHTTPRequest?
Thanks, everyone.
Sending raw frames or individual images will never work well enough for you (because of the amount of data and number of frames). Nor can you reasonably serve anything from the phone (WWAN networks have all sorts of firewalls). You'll need to encode the video, and stream it to a server, most likely over a standard streaming format (RTSP, RTMP). There is an H.264 encoder chip on the iPhone >= 3GS. The problem is that it is not stream oriented. That is, it outputs the metadata required to parse the video last. This leaves you with a few options.
Get the raw data and use FFmpeg to encode on the phone (will use a ton of CPU and battery).
Write your own parser for the H.264/AAC output (very hard).
Record and process in chunks (will add latency equal to the length of the chunks, and drop around 1/4 second of video between each chunk as you start and stop the sessions).
"Record and process in chunks (will add latency equal to the length of the chunks, and drop around 1/4 second of video between each chunk as you start and stop the sessions)."
I have just written such code, but it is quite possible to eliminate that gap by overlapping two AVAssetWriters. Since it uses the hardware encoder, I strongly recommend this approach.
We have similar needs; to be more specific, we want to implement streaming video & audio between an iOS device and a web UI. The goal is to enable high-quality video discussions between participants using these platforms. We did some research on how to implement this:
We decided to use OpenTok and managed to pretty quickly implement a proof-of-concept style video chat between an iPad and a website using the OpenTok getting started guide. There's also a PhoneGap plugin for OpenTok, which is handy for us as we are not doing native iOS.
Liblinphone also seemed to be a potential solution, but we didn't investigate further.
iDoubs also came up, but again, we felt OpenTok was the most promising one for our needs and thus didn't look at iDoubs in more detail.

Streaming live H.264 video via RTSP to iPhone does work! w/example

Using FFMPEG, Live555, JSON
Not sure how it works, but if you look at the source files at http://github.com/dropcam/dropcam_for_iphone you can see that they are using a combination of open source projects like FFMPEG, Live555, JSON etc. Using Wireshark to sniff the packets sent from one of the public cameras that's available to view with the free "Dropcam For Iphone App" at the App Store, I was able to confirm that the iPhone was receiving H.264 video via RTP/RTSP/RTCP and even RTMPT, which looks like maybe some of the stream is tunneled?
Maybe someone could take a look at the open source files and explain how they got RTSP to work on the iphone.
Thanks for the info TinC0ils. After digging a little deeper I've read that they have modified the Axis camera with custom firmware to limit the streaming to just a single 320x240 H.264 feed, to better provide consistent quality video over different networks and, as you point out, be less of a draw on the phone's hardware etc. My interest was driven by a desire to use my iPhone to view live video and audio from a couple of IP cameras that I own without the jerkiness of MJPEG or the inherent latency that is involved with "HTTP live streaming". I think Dropcam has done an excellent job with their hardware/software combo, I just don't need any new hardware at the moment.
Oh yeah, I almost forgot the reason for this post: RTSP PROTOCOL DOES WORK ON THE IPHONE!
They are using open source projects to receive the frames and decode them in software instead of using hardware decoders. This will work; however, it runs counter to Apple's requirement that you use their HTTP streaming. It will also require greater CPU resources, such that it doesn't decode video at the desired fps/resolution on older devices and/or decreases battery life compared to HTTP streaming.