I have a Pi camera module v2 and I want to make a live video stream.
I have tried to read the H.264 license agreement but didn't understand it: do I need to buy a license to use H.264, or is it included in the camera's price?
I'm currently developing an app for the HoloLens 2 that needs to stream audio from a desktop PC.
The idea is to send control information (position, orientation, etc.) to a Cycling '74 Max/MSP application running on a Windows 10 computer, which processes the audio for 3D playback. I now need to somehow stream the resulting sound to the Unity app running on the HoloLens. Both devices are on the same network.
At the moment I've achieved something using MRTK WebRTC for Unity in combination with a virtual cable as input. My issue is that this seems to be optimized for microphone use, as it applies options like noise reduction and a reduced bandwidth. I can't find a way to set the WebRTC options to stream what I need (music) at better quality.
Does anyone know how to change that in MRTK WebRTC, or have a better solution for streaming the audio to the HoloLens?
The WebRTC project for Mixed Reality is deprecated, and it is designed for real-time communication. If your requirement is media consumption, you need a different workaround.
For dedicated media streaming, you can set up a DLNA server on your PC for media access.
You may also set up Samba or NFS on your PC if you need to access files in other formats.
I have an HDMI source connected to a Chinese HD HDMI encoder box. Playback in VLC on my PC works (Open Network Stream, http://192.168.0.150:80/hdmi).
The stream is NOT leaving my local network (on purpose).
I cannot get a signal to display on my Google Nexus Player or my NVIDIA Shield via the Cumulus TV app (the point being to integrate the feed into the Google Live Channels app). I have tried adjusting several of the settings to no avail. Should I be trying a specific format? I tried Fiddler (didn't see anything descriptive in that tool) but still have no definitive answers. I am pretty sure this device only produces an H.264 bitstream, which works in the PC version of VLC, but I have no luck on my Android TV devices (including VLC). I can also get playback on my Android phone in VLC...
Seeking help / troubleshooting advice...
The main stream settings are:
H.264 Level: High Profile
Encoding frame rate: 30 [5-30]
Bitrate control: VBR
Key interval: 30 [5-200]
Encoded size: auto
MinQp: 3 [1-51]
MaxQp: 32 [MinQp-51]
MaxBitrate: 8000 [16-12000]
Audio bitrate: 192000
Audio channel: L+R
Audio codec: AAC
Resample: Disable
Package: B
HTTP: Enable, /hdmi (begin with "/")
HTTP port: 80 [1-65535]
Change TS ID: Disable
transport_stream_id: 300 [256-3800]
pmt_start_pid: 480 [256-3800]
stream_start_pid: 481 [256-3800]
RTSP: Disable
Multicast IP: Disable
RTMP server IP: Disable
ONVIF: Disable
It looks like your encoder can stream three different formats:
HTTP - probably HLS
RTMP
RTP/RTSP
Now the question is which formats your clients support, and whether one of them is on the list above.
You could install Fiddler (a web debugging proxy) on your PC to verify that your streaming box actually serves HLS.
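If you'd rather check programmatically instead of (or before) reaching for Fiddler, here is a minimal sketch that fetches the first bytes of the stream and classifies them: an HLS playlist starts with the text "#EXTM3U", while raw MPEG-TS over HTTP starts with the 0x47 sync byte. It assumes libcurl is installed and reuses the URL from the question.

```cpp
// hls_or_ts.cpp - probe an HTTP stream to guess: HLS playlist or raw MPEG-TS?
// Build (assumption): g++ hls_or_ts.cpp -lcurl -o hls_or_ts
#include <curl/curl.h>
#include <iostream>
#include <string>

// Collect the response body; stop the transfer once we have enough bytes to
// classify it (returning a short count makes libcurl abort with CURLE_WRITE_ERROR).
static size_t collect(char* data, size_t size, size_t nmemb, void* userp) {
    auto* body = static_cast<std::string*>(userp);
    body->append(data, size * nmemb);
    return body->size() >= 188 ? 0 : size * nmemb;
}

int main() {
    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURL* curl = curl_easy_init();
    if (!curl) return 1;

    std::string body;
    curl_easy_setopt(curl, CURLOPT_URL, "http://192.168.0.150:80/hdmi");  // URL from the question
    curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, collect);
    curl_easy_setopt(curl, CURLOPT_WRITEDATA, &body);
    curl_easy_setopt(curl, CURLOPT_TIMEOUT, 5L);   // a live TS never ends, so cap the wait
    curl_easy_perform(curl);                        // CURLE_WRITE_ERROR here is expected

    if (body.rfind("#EXTM3U", 0) == 0)
        std::cout << "Looks like an HLS playlist\n";
    else if (!body.empty() && static_cast<unsigned char>(body[0]) == 0x47)
        std::cout << "Looks like raw MPEG-TS over HTTP\n";
    else
        std::cout << "Unrecognized payload (" << body.size() << " bytes)\n";

    curl_easy_cleanup(curl);
    curl_global_cleanup();
    return 0;
}
```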
Since you know that VLC plays your stream, you could also try installing VLC on your Google Nexus Player: https://play.google.com/store/apps/details?id=org.videolan.vlc
As we know, the BeagleBone Black doesn't have a DSP on its SoC specifically for video processing, but is there any way we can achieve that by adding some extra DSP board?
I mean, the Raspberry Pi does have video processing, so has anyone tried to integrate the two so that we have both working together?
I know it's not the optimal way and the two boards are different, but I only have one BBB and one Raspberry Pi, and I am trying to achieve 1080p video streaming with better quality.
There is no DSP on the BeagleBone Black, so you need to implement any DSP functions in software.
If your input is audio, you can use ALSA.
When you say it "doesn't have a DSP on the SoC specific for video processing", I think you mean what is usually called a VPU (Video Processing Unit), and indeed the BeagleBone Black's AM3358 processor doesn't have one (source: http://www.ti.com/lit/gpn/am3358).
x264 has ARM NEON optimizations, so it can encode video reasonably well in software: 640x480 @ 30 fps should be fine, but 1920x1080 @ 30 fps is likely out of reach (you may get 8-10 fps).
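To illustrate the software-encoding route, here is a minimal libx264 sketch with the speed-oriented settings you would want on an ARM core without a VPU; the preset, tune and resolution are my assumptions, not a tuned configuration, and in a real streamer you would fill each picture with YUV data from the camera:

```cpp
// x264_sw_encode.cpp - software H.264 encode with libx264 (speed-oriented settings).
// Build (assumption): g++ x264_sw_encode.cpp -lx264 -o x264_sw_encode
#include <cstdint>
#include <cstdio>
extern "C" {
#include <x264.h>
}

int main() {
    const int width = 640, height = 480;   // the resolution suggested above

    x264_param_t param;
    // "ultrafast" + "zerolatency" trades quality for speed, which is what a
    // live stream on a small ARM board needs.
    x264_param_default_preset(&param, "ultrafast", "zerolatency");
    param.i_width   = width;
    param.i_height  = height;
    param.i_fps_num = 30;
    param.i_fps_den = 1;
    x264_param_apply_profile(&param, "baseline");

    x264_t* enc = x264_encoder_open(&param);
    if (!enc) return 1;

    x264_picture_t in, out;
    x264_picture_alloc(&in, X264_CSP_I420, width, height);

    // Encode 90 dummy frames; in a real streamer you would copy camera YUV
    // data into in.img.plane[0..2] before each call.
    for (int i = 0; i < 90; ++i) {
        in.i_pts = i;
        x264_nal_t* nals = nullptr;
        int num_nals = 0;
        int bytes = x264_encoder_encode(enc, &nals, &num_nals, &in, &out);
        if (bytes > 0)
            std::printf("frame %d: %d bytes in %d NAL units\n", i, bytes, num_nals);
    }

    x264_picture_clean(&in);
    x264_encoder_close(enc);
    return 0;
}
```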
On the Raspberry Pi, you can use GStreamer with omxh264enc to take advantage of the onboard VPU to encode video. I think it is a bit rough (not as solid as raspivid etc.) but this should get you started: https://blankstechblog.wordpress.com/2015/01/25/hardware-video-encoding-progess-with-the-raspberry-pi/
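As a rough sketch of that GStreamer route (not taken from the linked post), the pipeline below captures from the camera, hardware-encodes with omxh264enc, and sends H.264 over RTP/UDP. It assumes the camera is exposed as /dev/video0 via the bcm2835-v4l2 driver and that 192.168.1.10:5000 is the receiver; adjust both to your setup.

```cpp
// pi_h264_stream.cpp - hardware H.264 encode on the Pi with GStreamer's omxh264enc.
// Build (assumption): g++ pi_h264_stream.cpp $(pkg-config --cflags --libs gstreamer-1.0) -o pi_h264_stream
#include <gst/gst.h>

int main(int argc, char* argv[]) {
    gst_init(&argc, &argv);

    // Capture from the V4L2 camera device, encode with the OpenMAX hardware
    // encoder, packetize as RTP and send it over UDP to the receiver.
    // Resolution, bitrate and the receiver address are placeholders.
    GError* error = nullptr;
    GstElement* pipeline = gst_parse_launch(
        "v4l2src device=/dev/video0 "
        "! video/x-raw,width=1280,height=720,framerate=30/1 "
        "! omxh264enc target-bitrate=4000000 control-rate=variable "
        "! h264parse ! rtph264pay config-interval=1 pt=96 "
        "! udpsink host=192.168.1.10 port=5000",
        &error);
    if (!pipeline) {
        g_printerr("Failed to build pipeline: %s\n", error->message);
        g_error_free(error);
        return 1;
    }

    gst_element_set_state(pipeline, GST_STATE_PLAYING);

    // Run until an error or end-of-stream is posted on the bus.
    GstBus* bus = gst_element_get_bus(pipeline);
    GstMessage* msg = gst_bus_timed_pop_filtered(
        bus, GST_CLOCK_TIME_NONE,
        static_cast<GstMessageType>(GST_MESSAGE_ERROR | GST_MESSAGE_EOS));

    if (msg) gst_message_unref(msg);
    gst_object_unref(bus);
    gst_element_set_state(pipeline, GST_STATE_NULL);
    gst_object_unref(pipeline);
    return 0;
}
```

On the receiving end, VLC or a matching rtph264depay GStreamer pipeline should be able to play the stream.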
I am working on a screen-capture-to-H.264-bitstream solution using the Intel Media SDK.
I read that the new 2nd-generation Intel processors have a hardware-accelerated encoder, so I am expecting the encode latency to drop enough to make it real-time.
Using the 32-bit version of ffmpeg for the screen capture and x264 for encoding, I get an end-to-end latency of 200 ms with the Pi. The Raspberry Pi has a hardware decoder, so I am guessing it does the decode in around 80 ms. When I used an Intel i5-520M and a 1st-gen i7 for the decoding, the end-to-end latency was 250-350 ms; after switching to the Raspberry Pi it went down to 150-200 ms.
How do I link the DirectShow screen capture filter to the Intel Media SDK input?
There is no documentation I can follow; if anyone can shed some light on this, it would help.
I had success with H.264 screen encoding using DirectX capture plus the H.264 hardware encoder in the Intel Media SDK:
Screen capture via DirectX: 55 ms
RGB4 -> NV12 conversion via Intel Media SDK / VPP: 1 ms
H.264 encoding via Intel Media SDK / hardware encoder: 7 ms
Refer to this link:
https://software.intel.com/en-us/forums/topic/358602
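For reference, here is a heavily simplified sketch of how the two Media SDK stages above (RGB4 -> NV12 VPP and the hardware H.264 encoder) are typically initialized with the mfxvideo++.h wrappers; the resolution, bitrate and rate-control values are my assumptions, and the capture loop and surface allocation are omitted:

```cpp
// msdk_encode_init.cpp - sketch of Intel Media SDK VPP + H.264 encoder setup.
// Assumes the Media SDK headers/libs (mfxvideo++.h) are available.
#include "mfxvideo++.h"

static mfxU16 align16(mfxU16 v) { return (v + 15) & ~15; }   // surfaces must be 16-aligned

int main() {
    const mfxU16 width = 1920, height = 1080;                 // assumed capture size

    MFXVideoSession session;
    mfxVersion ver = {{0, 1}};
    if (session.Init(MFX_IMPL_HARDWARE_ANY, &ver) != MFX_ERR_NONE)
        return 1;                                             // no hardware implementation found

    // VPP: convert the RGB4 (BGRA) screen grab into NV12 for the encoder.
    mfxVideoParam vppPar = {};
    vppPar.IOPattern = MFX_IOPATTERN_IN_SYSTEM_MEMORY | MFX_IOPATTERN_OUT_SYSTEM_MEMORY;
    vppPar.vpp.In.FourCC        = MFX_FOURCC_RGB4;
    vppPar.vpp.In.ChromaFormat  = MFX_CHROMAFORMAT_YUV444;
    vppPar.vpp.In.Width         = align16(width);
    vppPar.vpp.In.Height        = align16(height);
    vppPar.vpp.In.CropW         = width;
    vppPar.vpp.In.CropH         = height;
    vppPar.vpp.In.FrameRateExtN = 30;
    vppPar.vpp.In.FrameRateExtD = 1;
    vppPar.vpp.In.PicStruct     = MFX_PICSTRUCT_PROGRESSIVE;
    vppPar.vpp.Out              = vppPar.vpp.In;              // same geometry, different format
    vppPar.vpp.Out.FourCC       = MFX_FOURCC_NV12;
    vppPar.vpp.Out.ChromaFormat = MFX_CHROMAFORMAT_YUV420;

    // Encoder: hardware H.264 (AVC), tuned for speed.
    mfxVideoParam encPar = {};
    encPar.IOPattern             = MFX_IOPATTERN_IN_SYSTEM_MEMORY;
    encPar.mfx.CodecId           = MFX_CODEC_AVC;
    encPar.mfx.TargetUsage       = MFX_TARGETUSAGE_BEST_SPEED;
    encPar.mfx.RateControlMethod = MFX_RATECONTROL_VBR;
    encPar.mfx.TargetKbps        = 8000;
    encPar.mfx.FrameInfo         = vppPar.vpp.Out;            // encode what VPP produces

    MFXVideoVPP    vpp(session);
    MFXVideoENCODE encoder(session);
    if (vpp.Init(&vppPar) != MFX_ERR_NONE || encoder.Init(&encPar) != MFX_ERR_NONE)
        return 1;

    // Per-frame loop (allocate mfxFrameSurface1 buffers, copy the captured RGB
    // into them, RunFrameVPPAsync -> EncodeFrameAsync -> SyncOperation, write
    // out the bitstream) is omitted here.
    return 0;
}
```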
I receive raw H.264 NALUs from an IP camera (via Live555) and I want to decode them using hardware because FFmpeg is great but it's too slow (the camera sensor is large).
The only solution I see is to write the NALUs to some movie container file such as MPEG-4, and then read and decode that file using an AVAssetReader.
Am I off in the weeds? Is anyone having success decoding H.264 NALUs from a stream? Does anyone have any tips for writing NALUs to an MPEG-4 file? Other ideas?
Like Matt mentioned, there is no direct access to Apple's H.264 decoder.
However, I have had success with ffmpeg and H.264 decoding. As you mentioned, I built ffmpeg under the LGPL and was able to decode H.264 streams all the way up to a real-time HD stream with no latency, on both iPad and iPhone. Nothing fancy is required from ffmpeg; you can find plenty of standard decoding C++ code that will work just fine on iOS. Also, in my case the H.264 NALUs were delivered via RTP/RTSP in real time.
Also, if I were you I would run your app through Xcode Instruments to see where your bottleneck really is, but I would be highly surprised if it is in the ffmpeg decoding step. Hopefully this info helps.
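As an illustration of the kind of standard decoding code mentioned above, here is a minimal sketch using libavcodec's send/receive API (FFmpeg 4.x+, newer than what existed when this was written); feed_nalu() is a hypothetical function you would call with each Annex B NALU received from Live555:

```cpp
// h264_decode.cpp - minimal software H.264 decode of raw NALUs with libavcodec.
extern "C" {
#include <libavcodec/avcodec.h>
}
#include <cstdio>

static const AVCodec*  codec  = nullptr;
static AVCodecContext* ctx    = nullptr;
static AVFrame*        frame  = nullptr;
static AVPacket*       packet = nullptr;

bool init_decoder() {
    codec  = avcodec_find_decoder(AV_CODEC_ID_H264);
    ctx    = avcodec_alloc_context3(codec);
    frame  = av_frame_alloc();
    packet = av_packet_alloc();
    return codec && ctx && frame && packet && avcodec_open2(ctx, codec, nullptr) == 0;
}

// Call once per NALU received from Live555 (Annex B: prepend a 00 00 00 01
// start code if your source delivers bare NALUs without one).
void feed_nalu(const uint8_t* data, int size) {
    packet->data = const_cast<uint8_t*>(data);
    packet->size = size;
    if (avcodec_send_packet(ctx, packet) < 0)
        return;
    // One packet can yield zero or more decoded pictures.
    while (avcodec_receive_frame(ctx, frame) == 0) {
        std::printf("decoded frame %dx%d\n", frame->width, frame->height);
        // frame->data[0..2] now hold the YUV planes; hand them to your renderer.
    }
}
```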
Unfortunately, you cannot do this at present. Feel free to file a radar with Apple about wanting this sort of access to the hardware decoder. It'll certainly be resolved as a duplicate :-). I assume it is for licensing reasons that they can't give this sort of access to the hardware codec.
So, you're going to have to use a software decoder. Please be aware that if you're going to ship to the App Store then you need something with a non-GPL license (unless you want to open source your app as well).