How to transmit streaming video from HoloLens (C#) to PC (Python) over socket communication?

I can already send data from the HoloLens (a Unity app written in C#) to a PC (also C#) over a socket connection. But how do I send streaming video in real time (recording should start when I open the application on the HoloLens) from the HoloLens to the PC over my existing socket framework? My guess is that I should add code to access the HoloLens camera, record video, and encode it into a byte stream, then transmit that data over my existing socket. Is that right, and how do I implement it?
By the way, I would like the PC to receive the video in Python so that I can process it in later steps.

To send streaming video in real time between the HoloLens and a PC client, WebRTC should meet your needs. Please check out the MixedReality-WebRTC project; it can help you integrate peer-to-peer real-time audio and video communication into your application. It also implements the local video capture you need and encapsulates it as a Unity3D component for rapid prototyping and integration.
You can read its official documentation here: MixedReality-WebRTC 1.0.0 documentation.
Moreover, this project can be used in desktop applications or even other non-mixed-reality applications, which can save you development cost.
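If you would rather keep your original socket pipeline instead of moving to WebRTC, the PC side could look like the minimal Python sketch below. It assumes (my assumption, not something stated in the question) that the HoloLens app sends each camera frame as a 4-byte big-endian length prefix followed by the frame's JPEG bytes over TCP; the port is a placeholder.

```python
# Minimal PC-side receiver. Assumes each frame arrives as a 4-byte
# big-endian length prefix followed by JPEG bytes; the framing protocol
# and port are assumptions for this sketch.
import socket
import struct

import cv2          # pip install opencv-python
import numpy as np

HOST, PORT = "0.0.0.0", 9999  # placeholder port

def recv_exact(conn: socket.socket, n: int) -> bytes:
    """Read exactly n bytes from the socket, or raise if it closes."""
    buf = b""
    while len(buf) < n:
        chunk = conn.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("socket closed")
        buf += chunk
    return buf

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
    srv.bind((HOST, PORT))
    srv.listen(1)
    conn, addr = srv.accept()
    with conn:
        while True:
            (size,) = struct.unpack(">I", recv_exact(conn, 4))   # frame length
            jpeg = recv_exact(conn, size)                        # frame payload
            frame = cv2.imdecode(np.frombuffer(jpeg, np.uint8), cv2.IMREAD_COLOR)
            if frame is None:
                continue  # skip frames that failed to decode
            cv2.imshow("HoloLens stream", frame)
            if cv2.waitKey(1) == 27:  # Esc quits
                break
```

On the HoloLens side, the matching C# code would JPEG-encode each camera frame and write the same length-prefixed records to its TcpClient stream.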

Related

HoloLens 2 audio stream from desktop

I'm currently developing an app for the HoloLens 2 that needs to stream audio from a desktop PC.
The idea is to send control information (position, orientation, etc.) to a Cycling '74 Max/MSP application running on a Windows 10 computer, which processes the audio for 3D playback. I now need to somehow stream the resulting sound back to the Unity app running on the HoloLens. Both devices are on the same network.
At the moment I've achieved something using MRTK WebRTC for Unity in combination with a virtual audio cable as input. My issue is that this seems to be optimized for microphone use, as it applies options like noise reduction and a smaller bandwidth. I can't find a way to set the WebRTC options to stream what I need (music) at better quality.
Does anyone know how to change that in MRTK WebRTC, or have a better solution for streaming the audio to the HoloLens?
The WebRTC project for Mixed Reality is deprecated, and it is designed for real-time communication. If your requirement is media consumption, you need a different workaround.
For dedicated media streaming, you can set up a DLNA server on your PC for media access.
You may also set up Samba or NFS on your PC if you need to access files in other formats.
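DLNA, Samba, and NFS are configured through their own tools rather than code, but as a minimal stand-in for "media access from the PC", a plain HTTP file server also lets the HoloLens app fetch files. Here is a sketch in Python; note this is plain HTTP, not DLNA, and the directory and port are placeholders:

```python
# Minimal stand-in for a media server: expose a folder of audio files over
# HTTP so the HoloLens app can fetch them (e.g. with UnityWebRequest).
# This is plain HTTP, not DLNA; directory path and port are assumptions.
from functools import partial
from http.server import SimpleHTTPRequestHandler, ThreadingHTTPServer

handler = partial(SimpleHTTPRequestHandler, directory="/srv/media")
with ThreadingHTTPServer(("0.0.0.0", 8000), handler) as httpd:
    httpd.serve_forever()  # stop with Ctrl+C
```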

Flutter camera plugin to transcode and transport streams to Media Server

What direction should I look in for streaming Flutter camera audio/video to an RTMP media server?
We are developing a live broadcasting solution.
Our attempts so far:
I. PWA approach: the Ant Media WebRTC SDK was used to establish a WebRTC ICE candidate connection between the mobile web app and the Ant Media server.
Problem faced: the connection seems too heavy for some phones; only one of my phones was able to stream through.
II. We looked in the direction of a hybrid app; I have been searching the internet for a Cordova/Ionic SDK for live streaming to an RTMP media server, with no luck so far. I found Bambuser (https://www.npmjs.com/package/cordova-plugin-bambuser), but the SDK seems to be tied to their live streaming service.
Please, what do I need to know? I also need an expert review of the approach of using the Flutter camera plugin's image-bytes access plus ffmpeg for encoding and transporting the stream to the RTMP URL rtmp://SERVER_NAME/LiveApp/STREAM_ID
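For the ffmpeg leg of that pipeline, here is a hedged sketch, written in Python for brevity (in Flutter you would feed the same ffmpeg process from the camera plugin's image stream). The resolution, pixel format, and frame rate are assumptions:

```python
# Sketch of the "image bytes + ffmpeg -> RTMP" pipeline from the question.
# Dummy frames stand in for the Flutter camera plugin's byte buffers.
import subprocess

import numpy as np  # only used to create the dummy frames

WIDTH, HEIGHT, FPS = 640, 480, 30                 # assumptions
RTMP_URL = "rtmp://SERVER_NAME/LiveApp/STREAM_ID"

ffmpeg = subprocess.Popen(
    [
        "ffmpeg",
        "-f", "rawvideo", "-pix_fmt", "rgba",     # raw RGBA frames on stdin
        "-s", f"{WIDTH}x{HEIGHT}", "-r", str(FPS),
        "-i", "-",
        "-c:v", "libx264", "-preset", "veryfast", "-tune", "zerolatency",
        "-pix_fmt", "yuv420p",                    # widely playable output
        "-f", "flv", RTMP_URL,                    # publish to the media server
    ],
    stdin=subprocess.PIPE,
)

# Feed ~10 seconds of black frames; a real app writes the camera bytes here.
for _ in range(FPS * 10):
    frame = np.zeros((HEIGHT, WIDTH, 4), dtype=np.uint8)
    ffmpeg.stdin.write(frame.tobytes())

ffmpeg.stdin.close()
ffmpeg.wait()
```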

How do I record video from a remote WebRTC camera?

I have a Raspberry Pi which does WebRTC via UV4L. It is awesome! I want to record the video from the camera on a server. It's your basic surveillance-camera setup: a central Linux server with lots of storage space, remote IP cameras, etc. I've read dozens of pages and still can't figure it out. I tried all this Kurento mumbo jumbo but it's all wretch and no vomit; it never gets there. What's the command to grab the RPi video and dump it to disk? Please help!!!
UV4L already supports audio+video recording on the server (as well as on the client) if you use it with the Janus WebRTC gateway. Have a look at the bottom of the Video Conference DEMO OS page for more details. At the moment, you will have to use the REST API to log into a Janus room and turn the recording on/off. The REST API is ideal if you want to control UV4L from a custom application, but there is also a panel which allows you to dynamically send REST requests to the UV4L server.
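For the REST part, here is a hedged sketch of driving Janus's HTTP API from Python, following the usual create-session / attach-plugin / send-message flow. The host, room number, secret, and the exact recording request body are assumptions that depend on your Janus/UV4L configuration:

```python
# Hedged sketch: log into a Janus room over the HTTP API and ask the
# videoroom plugin to start recording. Host, room, and secret are
# placeholders; the exact body depends on your Janus/UV4L configuration.
import uuid

import requests  # pip install requests

JANUS = "http://raspberrypi.local:8088/janus"  # placeholder host

def tx() -> str:
    return uuid.uuid4().hex  # Janus requires a unique transaction id

# 1. Create a session.
resp = requests.post(JANUS, json={"janus": "create", "transaction": tx()})
session_id = resp.json()["data"]["id"]

# 2. Attach a handle to the videoroom plugin.
resp = requests.post(
    f"{JANUS}/{session_id}",
    json={"janus": "attach", "plugin": "janus.plugin.videoroom",
          "transaction": tx()},
)
handle_id = resp.json()["data"]["id"]

# 3. Ask the plugin to turn recording on (assumed request body).
requests.post(
    f"{JANUS}/{session_id}/{handle_id}",
    json={
        "janus": "message",
        "transaction": tx(),
        "body": {"request": "enable_recording", "room": 1234,
                 "secret": "adminpwd", "record": True},
    },
)
```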

iOS - Develop iPhone app to stream camera video to a computer?

I'm looking for a way to create an app that will allow captured camera video to be streamed to a computer. For example, one person could be walking an iPhone around a room and another person could have that video streamed to their computer. Something kind of like a one-way FaceTime, except the receiver is on a computer. Also, I can't just use an existing app, as later I would like to change the program to do some computer-vision processing on the incoming data.
At the moment, I've found that AV Foundation should be the correct option for the video capture (from this question). However, I'm having difficulty finding the method by which I can actually stream this data. In particular, searching for how to create the apps on the iPhone frequently results in existing apps that do the task, but not how to create the app.
Can anyone give me a pointer to the information on how to stream the video capture from the iPhone? Thank you much.
You can use "Wowza media Server" for Streaming purpose
For wowza media server doenload :
Wowza Download
After installing wowza Now you need to set up live setting in wowza for that purpose you need:
Setting Up Live Application
For iOS side there is library is useful for video streaming using RTMP connection
You can get Library at
RTMP library for Streaming
Library example
RTMP library for Streaming example
In this good example of Streaming from iOS side
I had success with ANGL lib and Wowza media server. It gives smooths RTMP stream.
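On the computer side, once the iPhone publishes to Wowza, you can pull the stream back for the computer-vision processing mentioned in the question. A minimal sketch in Python with OpenCV; the RTMP URL is a placeholder based on Wowza's default live-application naming:

```python
# Pull the published stream back on the computer for processing.
# The RTMP URL is a placeholder using Wowza's default "live" application.
import cv2  # pip install opencv-python (built with FFmpeg support)

cap = cv2.VideoCapture("rtmp://wowza-host:1935/live/myStream")
while True:
    ok, frame = cap.read()
    if not ok:
        break  # stream ended or the connection dropped
    # ... run your computer-vision processing on `frame` here ...
    cv2.imshow("iPhone stream", frame)
    if cv2.waitKey(1) == 27:  # Esc quits
        break
cap.release()
cv2.destroyAllWindows()
```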

Flash Media Server live streaming with multiple video files

I am using Flash Media Server 4.5, and I read in the tutorial that if I want to stream a live feed, I may need to use Flash Media Live Encoder. But what I found is that in Media Live Encoder I have to set up everything manually, and it only supports camera devices.
In my case, I keep receiving multiple video files from another program, and my goal is to use Flash Media Server to broadcast these video files live, one by one.
That means that when clients watch the stream, they will not notice that the server is playing mov1, then mov2, then mov4, then mov5, and so on.
Also, can FMS dynamically create a new streaming session (invoked by code), so that when client A uploads some video files to the server, FMS opens a new streaming session that streams only client A's video files?
Can FMS achieve these purposes? Any tutorial would be very helpful!
Edit for Open Bounty
I basically want to deliver a live video stream whose source is a list of videos. I am currently using Flash Media Server with the CloudFront CDN to deliver content. So if I have video1, video2, and video3, I want to play them back to back as a live stream (so no skipping ahead in the video). Is it possible to do this? The bounty goes to a clever workaround. Think of this as a television channel.
I have been working on live streaming technologies for the past one and a half years. There is no option in Flash Media Live Encoder for any kind of file encoding.
1. To encode your files, you can use a DVD player or some other device that supports USB playback, and broadcast the DVD player's output using Flash Media Live Encoder.
2. The other approach is to set up Windows Media Encoder, which does support file encoding (no DVD player needed), but it only works with Windows Media Services.
At present I webcast video files live this way for my company: http://www.malar.tv/live.php
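A scripted workaround in the spirit of the bounty (my suggestion, not part of the answer above): if ffmpeg is available on the server, its concat demuxer can push the files back to back to the live publishing point as one continuous stream, like a TV channel. A minimal Python sketch, with the URL and file names as placeholders:

```python
# Play a list of files back to back into the server's live publishing
# point so viewers see one continuous stream. Requires ffmpeg; the URL
# and file names are placeholders, and -c copy assumes matching codecs.
import subprocess
import tempfile

videos = ["video1.mp4", "video2.mp4", "video3.mp4"]
RTMP_URL = "rtmp://fms-host/live/channel1"  # placeholder publishing point

# The concat demuxer reads a playlist of "file '<name>'" lines.
with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
    f.write("\n".join(f"file '{v}'" for v in videos))
    playlist = f.name

subprocess.run(
    [
        "ffmpeg",
        "-re",                                   # pace reading like a live feed
        "-f", "concat", "-safe", "0", "-i", playlist,
        "-c", "copy",                            # no re-encode
        "-f", "flv", RTMP_URL,
    ],
    check=True,
)
```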