I'm currently working on my thesis, which involves streaming video to a website where you can choose which camera to view. I don't know exactly which technologies I should use. I know that a Raspberry Pi will be sending its video feed to some URL, so that many live feeds can be viewed.
Can someone give me advice on how to tackle this problem?
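One common way to tackle this (a minimal sketch, assuming Python with OpenCV and Flask on the Pi; the route name and port are my own placeholders, not anything from the question) is to expose each camera as an MJPEG stream that any browser can embed in an <img> tag:

```python
import cv2
from flask import Flask, Response

app = Flask(__name__)
camera = cv2.VideoCapture(0)  # first camera; open more captures for more feeds

def mjpeg_frames():
    """Yield JPEG-encoded frames in multipart/x-mixed-replace format."""
    while True:
        ok, frame = camera.read()
        if not ok:
            break  # camera disconnected
        ok, jpeg = cv2.imencode(".jpg", frame)
        if not ok:
            continue  # skip frames that fail to encode
        yield (b"--frame\r\n"
               b"Content-Type: image/jpeg\r\n\r\n" + jpeg.tobytes() + b"\r\n")

@app.route("/camera0")
def camera0():
    # Browsers render this as a continuously updating image.
    return Response(mjpeg_frames(),
                    mimetype="multipart/x-mixed-replace; boundary=frame")

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8000)
```

The page that lists your cameras can then embed each feed with <img src="http://<pi-address>:8000/camera0">. MJPEG is simple but bandwidth-hungry; for many viewers or lower latency, a WebRTC or HLS setup would scale better.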
When I find a plugin to play the video, it gives me a CORS cross-domain problem. Is there any other way to implement YouTube's live-broadcast functionality in WebGL? One way is to set up a server that downloads the video and then retransmits it, but that is too much traffic for the server.
I can already send data from a HoloLens (a Unity app written in C#) to a PC (also written in C#) over a socket connection. But how can I stream video in real time (recording starts when I open the application on the HoloLens) from the HoloLens to the PC over my original socket framework? As I see it, I probably need to add code that accesses the HoloLens camera, records video, and encodes the video into bytes, and then transmit that data over my existing socket. Is that right, and how do I implement it?
By the way, I would like the PC to receive the video in Python so that I can process it in subsequent steps.
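For the Python receiving side, here is a minimal sketch of one possible design: the HoloLens sender would transmit each frame as a 4-byte big-endian length prefix followed by one JPEG image. Both the framing and the port are my own assumptions, not part of the original socket code, and the C# sender would have to match them:

```python
import socket
import struct

import cv2
import numpy as np

HOST = "0.0.0.0"   # listen on all interfaces
PORT = 9999        # hypothetical port; must match the HoloLens sender

def recv_exact(conn: socket.socket, n: int) -> bytes:
    """Read exactly n bytes from the socket, or raise on disconnect."""
    buf = b""
    while len(buf) < n:
        chunk = conn.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("sender disconnected")
        buf += chunk
    return buf

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
    srv.bind((HOST, PORT))
    srv.listen(1)
    conn, addr = srv.accept()
    print("connected:", addr)
    while True:
        # Assumed framing: 4-byte big-endian length, then one JPEG frame.
        (length,) = struct.unpack(">I", recv_exact(conn, 4))
        jpeg = recv_exact(conn, length)
        frame = cv2.imdecode(np.frombuffer(jpeg, np.uint8), cv2.IMREAD_COLOR)
        if frame is None:
            continue  # skip frames that fail to decode
        cv2.imshow("HoloLens feed", frame)
        if cv2.waitKey(1) == 27:  # Esc to quit
            break
```

The matching C# side would JPEG-encode each camera frame and write the same length-prefixed records to the socket.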
To stream video in real time between a HoloLens and a PC client, WebRTC should meet your needs. Please check out the MixedReality-WebRTC project; it can help you integrate peer-to-peer real-time audio and video communication into your application. It also implements the local video capture you need and encapsulates it as a Unity3D component for rapid prototyping and integration.
You can read its official documentation via this link: MixedReality-WebRTC 1.0.0 documentation.
Moreover, the project can be used in desktop applications, or even other non-mixed-reality applications, which can reduce your development costs.
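If the PC side should be Python (as asked above), one option is the aiortc library. This is my suggestion, not part of MixedReality-WebRTC, and the two peers still need a common signaling channel: the TCP signaling helper below comes from aiortc's bundled examples and is only a stand-in for whatever signaling your HoloLens app actually uses. A rough sketch of an answering peer that hands each received video frame to OpenCV:

```python
import asyncio

import cv2
from aiortc import RTCPeerConnection, RTCSessionDescription
from aiortc.contrib.signaling import TcpSocketSignaling

async def run() -> None:
    # Hypothetical signaling endpoint; the offering peer must use the same
    # simple TCP signaling protocol from aiortc's examples.
    signaling = TcpSocketSignaling("0.0.0.0", 8888)
    pc = RTCPeerConnection()

    @pc.on("track")
    def on_track(track):
        if track.kind != "video":
            return

        async def consume() -> None:
            while True:
                frame = await track.recv()              # an av.VideoFrame
                img = frame.to_ndarray(format="bgr24")  # NumPy array for OpenCV
                cv2.imshow("remote video", img)
                cv2.waitKey(1)

        asyncio.ensure_future(consume())

    await signaling.connect()
    while True:
        obj = await signaling.receive()
        if isinstance(obj, RTCSessionDescription):
            await pc.setRemoteDescription(obj)
            if obj.type == "offer":
                # Answer the incoming offer and send it back.
                await pc.setLocalDescription(await pc.createAnswer())
                await signaling.send(pc.localDescription)

asyncio.run(run())
```

ICE-candidate handling and error cases are omitted here; aiortc's videostream-cli example shows the complete pattern.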
Hi everybody. I am trying to develop a web application that can control a smart TV, following this guide: http://samsungdforum.com/Guide/tut00024/index.html. That works fine, but now I would like to upload a video from the computer and have it displayed on the smart TV, like the image shown in the tutorial. Does anyone have an idea, example, or suggestion about how to modify the code of the Convergence tutorial, which can already send messages, so that it can send video from the client application to the smart TV application?
Sending files is covered by the tutorial. You can find the API reference for it here.
Sending a video file is not really a wise approach, because there is a 3 MB limit on files sent using the Convergence API. This API is designed for sending messages between the TV and an external client rather than files. If you want to launch video playback, send the video's URL from the web app to the TV and let the TV download the video by itself.
I plan on mounting a wireless network camera (http://mydlink.dlink.com/DCS930L) on my robot. D-Link has an iPhone app for viewing the live video, but I want to integrate the video into the iPhone remote-controller app I made.
Is it possible to get that video feed into my own app?
Where should I start looking?
I know this was posted some time ago, but the easiest way to integrate a network camera feed into your own application is via a UIWebView: connect to the camera's IP address followed by /video/mjpg.cgi.
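If you want to sanity-check the stream before embedding it in a UIWebView, OpenCV can read the same MJPEG endpoint directly. A small sketch (the address and credentials are hypothetical placeholders; substitute your camera's values):

```python
import cv2

# Hypothetical address and credentials for the DCS-930L.
STREAM_URL = "http://admin:password@192.168.0.20/video/mjpg.cgi"

cap = cv2.VideoCapture(STREAM_URL)  # OpenCV can read MJPEG-over-HTTP streams
if not cap.isOpened():
    raise SystemExit("could not open camera stream")

while True:
    ok, frame = cap.read()
    if not ok:
        break  # stream dropped
    cv2.imshow("DCS-930L", frame)
    if cv2.waitKey(1) == 27:  # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```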
If I want to use Google Video Chat in my browser, I have to download and install a plugin for it to work. I would like to make a piece of software that creates some interactions with a video displayed in the browser. I assume it might be problematic to do this with one solution for all browsers, so if I need to focus on only one browser, let's talk about Firefox, although I think the Firefox Add-on SDK would not let me do anything as complex as video interaction.
But how does the Google Video Chat plugin work in the browser? It is just one example of those plugins that let you do things with your browser (media, in this case) that are normally impossible.
As I understand it, Google Video Chat uses Flash.
I'm looking for something official-looking to back that up now...
Edit: I think this explains it pretty well.
Flash Player exposes certain audio/video functions to the (SWF) application, but it does not give the application access to the raw real-time audio/video data. There are several relevant ActionScript API classes and methods: the Camera class allows you to capture video from your camera, the Microphone class allows you to capture audio from your microphone, the NetConnection/NetStream classes allow you to stream video from Flash Player to a remote server and vice versa, and the Video class allows you to render video either captured by a Camera or received on a NetStream. Given these, to display video in Flash Player it must be either captured by a Camera object or received from a remote server on a NetStream. Luckily, ActionScript allows you to choose which Camera to use for capture.
When the Google plugin is installed, it exposes itself as two Camera devices, actually virtual device drivers. These devices are called 'Google Camera Adaptor 0' and 'Google Camera Adaptor 1', which you can see in the Flash Player settings when you right-click on the video. One of the devices is used to display the local video and the other to display the remote participant's video. The Google plugin also implements the full networking protocol and stack, which I think is based on the GTalk protocol. In particular, it implements XMPP with the (P2P) Jingle extension, and a UDP-based media transport for carrying real-time audio/video. The audio path is completely independent of the Flash Player. In the video path, the plugin captures video from the actual camera device installed on your PC and sends it to the Flash Player via one of the virtual camera device drivers; it also encodes the video and sends it to the remote user. In the reverse direction, it receives video (over UDP) from the remote user and gives it to the Flash Player via the second virtual camera device driver. The SWF application running in the browser creates two Video objects and attaches them to two Camera objects, one for each of the two virtual video devices, instead of attaching them to your real camera device. This way, the SWF application can display both the local and the remote video.