Multiple live stream video publishers using FMS, Wowza, etc.?

I need to develop a web portal with multiple live stream publishers (up to 4), and many viewers, using RTMP.
The live video publishers are well known and always the same, so if I use FMS (I have some experience with Flash and Influxis), I would have no problem using FMLE for the video publishers. The problem is how to synchronize all 4 connections on the media server so they display properly on the client side. I have tested the single-connection live example that ships with FMS, and it works fine.
Video resolution is not an issue, since we don't mind a low resolution, 320x240 for example. Also, we need to develop the platform ourselves, without depending on external live-streaming platforms. Is there any tutorial or example to use as a starting point?
What would you suggest? Thanks!

OK, I have now found the solution and, I have to say, it was extremely easy. I'm writing it up in case someone else has the same problem.
In the end I solved it with Flash Media Live Encoder. You have to create 4 (in my case) video objects in your web page like the one below, replacing localhost with your hostname.
<object width='640' height='377' id='StrobeMediaPlayback' name='StrobeMediaPlayback' type='application/x-shockwave-flash' classid='clsid:d27cdb6e-ae6d-11cf-96b8-444553540000'>
  <param name='movie' value='swfs/StrobeMediaPlayback.swf' />
  <param name='quality' value='high' />
  <param name='bgcolor' value='#000000' />
  <param name='allowfullscreen' value='true' />
  <param name='flashvars' value='&src=rtmp://localhost/live/livestream&autoHideControlBar=true&streamType=live&autoPlay=true' />
  <embed src='swfs/StrobeMediaPlayback.swf' width='640' height='377' id='StrobeMediaPlayback' quality='high' bgcolor='#000000' name='StrobeMediaPlayback' allowfullscreen='true' pluginspage='http://www.adobe.com/go/getflashplayer' flashvars='&src=rtmp://localhost/live/livestream&autoHideControlBar=true&streamType=live&autoPlay=true' type='application/x-shockwave-flash'> </embed>
</object>
As you can see, the default stream name is "livestream"; you have to change it in every object so each one is different. Also make sure the "live" folder exists (installing FMS on localhost creates it by default, but on Influxis you have to create it manually).
Each video publisher then opens Flash Media Live Encoder and sets the "Stream" value under Output to the stream name of their respective video object.
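If it helps, here is a small sketch of building the flashvars value for each of the four publisher streams. The helper name and the stream names livestream1..livestream4 are hypothetical; it assumes the default "live" application on the media server:

```javascript
// Build the StrobeMediaPlayback flashvars string for one publisher stream.
// Stream names below are hypothetical; use whatever names your publishers
// configure in FMLE.
function buildFlashvars(host, streamName) {
  return '&src=rtmp://' + host + '/live/' + streamName +
         '&autoHideControlBar=true&streamType=live&autoPlay=true';
}

var publishers = ['livestream1', 'livestream2', 'livestream3', 'livestream4'];
var flashvarsList = publishers.map(function (name) {
  return buildFlashvars('localhost', name);
});
```

Each generated string would go into the flashvars param (and embed attribute) of its own player object.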
That's it! It works perfectly, with great resolution and great performance, better than expected. Hope it helps!

Related

Google TTS <par> tags not working the way as expected

I was looking into Google TTS and found the following example with a par tag:
https://cloud.google.com/text-to-speech/docs/ssml#par
In the example, the audio is built properly: there are two sentences and some sounds in the background. However, if you actually try this with their API or the console, it doesn't work. You only get the two sentences back, and none of the background audio is played. Link to the TTS console:
https://cloud.google.com/text-to-speech
This is the SSML that I was using:
<par>
  <media xml:id="question" begin="0.5s">
    <speak>Who invented the Internet?</speak>
  </media>
  <media xml:id="answer" begin="question.end+2.0s">
    <speak>The Internet was invented by cats.</speak>
  </media>
  <media begin="answer.end-0.2s" soundLevel="-6dB">
    <audio src="https://actions.google.com/sounds/v1/cartoon/cartoon_boing.ogg"/>
  </media>
  <media repeatCount="3" soundLevel="+2.28dB" fadeInDur="2s" fadeOutDur="0.2s">
    <audio src="https://actions.google.com/sounds/v1/animals/cat_purr_close.ogg"/>
  </media>
</par>
What am I doing wrong? Or is there something wrong with the TTS service itself?
There is certainly something strange about the TTS console, for starters. If you look at the JSON that it says should be used, you'll note that it has omitted the <par> tag. Inspecting the network traffic shows the same thing. I also noticed that the <speak> tag needed to be omitted.
If you try this as part of the Action Simulator Console, the output does work correctly. You can get to the Action Simulator by going to https://console.actions.google.com/, picking a project to work with, and going to the "Test" tab. In the simulator itself, you then select the "Audio" tab, enter the SSML (including the <speak> tag) into the editor, and press the "Update and Listen" button.
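For completeness, here is a sketch of how a REST request body might be assembled so that the <par> timeline is sent wrapped in a <speak> root. The voice name and audio encoding below are illustrative assumptions, not values from the question:

```javascript
// Build a Cloud Text-to-Speech synthesize request body, wrapping the
// <par> timeline in a <speak> root element before sending it.
function buildTtsRequest(parSsml) {
  return {
    input: { ssml: '<speak>' + parSsml + '</speak>' },
    voice: { languageCode: 'en-US', name: 'en-US-Wavenet-D' }, // illustrative
    audioConfig: { audioEncoding: 'MP3' }
  };
}

var body = buildTtsRequest(
  '<par><media xml:id="question" begin="0.5s">' +
  '<speak>Who invented the Internet?</speak></media></par>'
);
```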

How can I actually download/transfer a file found using UPnP?

I'm completely new to UPnP as a protocol, but I'm hoping to use it to transfer files from a Sony camera to an iOS app I'm working on. So far I have SSDP discovery set up: I can read the UPnP client's services, search through folders, and access file names. But the final hurdle I'm stuck on is how to actually download/transfer the files once I can list them!
What I end up getting is the below:
<item id="04_02_0624600856_000001_000001_000000" restricted="1" parentID="03_01_0624600856_000001_000000_000000">
  <dc:title>DSC05076.ARW</dc:title>
  <upnp:class>object.item.imageItem.photo</upnp:class>
  <dc:date>2018-08-23T12:24:21</dc:date>
  <res protocolInfo="http-get:*:image/jpeg:DLNA.ORG_PN=JPEG_SM;DLNA.ORG_CI=1">http://192.168.122.1:60151/SM_DSC05076.ARW?%2104%5f02%5f0624600856%5f000001%5f000001%5f000000%21http%2dget%3a%2a%3aimage%2fjpeg%3aDLNA%2eORG%5fPN%3dJPEG%5fSM%3bDLNA%2eORG%5fCI%3d1%21%21%21%21%21</res>
  <res protocolInfo="http-get:*:image/jpeg:DLNA.ORG_PN=JPEG_LRG;DLNA.ORG_CI=1">http://192.168.122.1:60151/LRG_DSC05076.ARW?%2104%5f02%5f0624600856%5f000001%5f000001%5f000000%21http%2dget%3a%2a%3aimage%2fjpeg%3aDLNA%2eORG%5fPN%3dJPEG%5fLRG%3bDLNA%2eORG%5fCI%3d1%21%21%21%21%21</res>
  <res protocolInfo="http-get:*:image/jpeg:DLNA.ORG_PN=JPEG_TN;DLNA.ORG_CI=1">http://192.168.122.1:60151/TN_DSC05076.ARW?%2104%5f02%5f0624600856%5f000001%5f000001%5f000000%21http%2dget%3a%2a%3aimage%2fjpeg%3aDLNA%2eORG%5fPN%3dJPEG%5fTN%3bDLNA%2eORG%5fCI%3d1%21%21%21%21%21</res>
</item>
I would (with my naive experience of simple HTTP APIs) then expect to simply be able to download the file in question by hitting:
http://192.168.122.1:60151/SM_DSC05076.ARW or similar (I'm assuming I have to change this URL slightly, as the file is listed as image/jpeg rather than RAW?).
Whatever combination I try of the full res URL, snipping bits, decoding the URL, etc., I always get a 404 response when trying to visit it. Is there something more complex I need to do here? Or something simple that I'm missing?
Thanks in advance!
The problem here was that I was using a URL from a previous session. It turns out that the URLs change between connection sessions, which is why I was getting a 404.
Lesson learned: UPnP is highly dynamic, and you can’t rely on caching images under their access MRL!
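In other words, the <res> URL has to be extracted freshly from the DIDL-Lite returned by a Browse call in the current session. A rough sketch (regex-based for brevity; a real app should use a proper XML parser):

```javascript
// Pull the first <res> download URL out of a DIDL-Lite item that was
// just browsed in this session -- never reuse a URL cached from a
// previous connection.
function firstResUrl(didlItem) {
  var match = /<res[^>]*>([^<]+)<\/res>/.exec(didlItem);
  return match ? match[1] : null;
}

var item =
  '<res protocolInfo="http-get:*:image/jpeg:DLNA.ORG_PN=JPEG_SM">' +
  'http://192.168.122.1:60151/SM_DSC05076.ARW?token</res>';
var url = firstResUrl(item);
```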

Wwise, Resonance Audio and Unity Integration. Configure Wwise Resonance Audio Plugin

I tried to get a response on GitHub, but with no activity about this issue there, I will ask here.
I have been following the documentation, and I am stuck: after importing the WwiseResonanceAudioRoom mixer effect on the bus in Wwise, I do not see anything in the properties. I am not sure if I am supposed to. Right after that part, the documentation says: "Note: If room properties are not configured, the room effects bus outputs silence." I was wondering if this was the case, and yes, it outputs silence. I even switched the effect to see if it would just pass audio, and it does, just not with the Room effect, so at least I know my routing is correct.
So now this leads up to my actual question: how do you configure the plugin? I know there is some documentation, but there is not a single tutorial or step-by-step guide for us non-code-savvy audio folk. I have spent the better half of my week trying to figure this out, because frankly, for the time being, this is the only audio spatialization plugin that features audio occlusion, obstruction, and propagation within Wwise.
Any help is appreciated,
Thank you.
I had Room Effects with Resonance Audio working in another project last year, under its former name, GVR. There are no properties on the Room Effect itself. These effect settings and properties reside in the Unity Resonance prefabs.
I presume you've followed the latter tutorial on Room Effects here:
https://developers.google.com/resonance-audio/develop/wwise/getting-started
Then what you need to do is add the Room Effect assets to your Unity project. The assets are found in the Resonance Audio zip package, next to the authoring and SDK files. Unzip the Unity package into your project, add a Room Effect to your scene, and you should then see the properties in the inspector of the room object.
Figured it out, thanks to Egil Sandfeld, here: https://github.com/resonance-audio/resonance-audio-wwise-sdk/issues/2#issuecomment-367225550
To elaborate: I already had the SDKs implemented, but I went ahead and replaced them anyway, and it worked!

Why is my SMIL file not working with RTMP in both Wowza and JW Player?

I am using Wowza Streaming Engine and JW Player to show the stream, and it is working fine. But I want to control the bitrate of the video so that users can watch without much buffering. So I searched on Stack Overflow and found the following link:
Bitrate JWplayer
Then I created my myVideo.smil file according to the above link and went to Wowza to test it. It works with MPEG-DASH
http://192.168.0.106:1935/vod/smil:myVideo.smil/manifest.mpd
and also with Adobe HDS
http://192.168.0.106:1935/vod/smil:myVideo.smil/manifest.f4m
but I don't know why it is not working with RTMP.
In the Test Player I put
server=rtmp://192.168.0.106:1935/vod and stream=smil:myVideo.smil
After pressing the start button I get "connected", a current bit rate of 0 kbps, and only a black screen.
I also tried this link in JW Player:
rtmp://192.168.0.106:1935/vod/smil:myVideo.smil
and it loads but doesn't show anything.
I also tried the approach from the link:
jwplayer("myElement").setup({
  file: "/assets/myVideo.smil",
  image: "/assets/myVideo.jpg",
  height: 360,
  width: 640
});
and it shows "Error loading stream: Manifest not found or invalid".
I don't know what I am missing and why it fails only with RTMP. Please help me.
Here is my sample myVideo.smil file:
<smil>
  <head>
    <meta base="rtmp://192.168.0.106:1935/vod/" />
  </head>
  <body>
    <switch>
      <video src="sample.mp4" height="720" system-bitrate="200000" width="1280" />
      <video src="sample.mp4" height="360" system-bitrate="80000" width="640" />
      <video src="sample.mp4" height="180" system-bitrate="30000" width="320" />
    </switch>
  </body>
</smil>
The problem lies in the fact that RTMP by itself is completely oblivious to multiple bitrates.
The way you would do this in JWPlayer is using an HTTP link to:
http://192.168.0.106:1935/vod/smil:MyVideo.smil/jwplayer.smil
This will instruct JWPlayer to use the multiple-bitrate SMIL received from that URL, connect to the given RTMP endpoint, play the first stream listed, and switch to a different bitrate as needed.
Basically, the way to construct the URL is to take the HLS/DASH/HDS URL and replace the last element (say, playlist.m3u8 for HLS) with jwplayer.smil.
But please note this only works for SMILs. If you try to access a similar URL for sample.mp4 (which would be http://192.168.0.106:1935/vod/mp4:sample.mp4/jwplayer.smil in this case), it won't work and you will most likely get a playback error.
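The URL rewrite described above can be sketched as follows (the helper name is hypothetical):

```javascript
// Replace the final path element of a Wowza manifest URL
// (e.g. playlist.m3u8, manifest.mpd, manifest.f4m) with jwplayer.smil.
function toJwplayerSmil(manifestUrl) {
  return manifestUrl.replace(/\/[^\/]+$/, '/jwplayer.smil');
}

var hls = 'http://192.168.0.106:1935/vod/smil:myVideo.smil/playlist.m3u8';
var smilUrl = toJwplayerSmil(hls);
// smilUrl === 'http://192.168.0.106:1935/vod/smil:myVideo.smil/jwplayer.smil'
```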

In Alfresco Share, how to refer to a nodeRef id in the custom config file?

I'm trying to extend the Share DocumentLibrary with a new action that provides a link to some URL based on the nodeRef id (through share-config-custom.xml):
<action id="blabla" type="link" label="label">
  <param name="page">../../alfresco/wcs/myPlugin/editor/{node.nodeRef.id}</param>
  <param name="target">_blank</param>
</action>
But Share does not interpret {node.nodeRef.id}.
It does interpret {node.nodeRef} correctly, but I don't need the full URI,
like: workspace://SpacesStore/158f0ed4-a575-40c2-a6ef-7e7ed386ba94
I just want the node ref id: 158f0ed4-a575-40c2-a6ef-7e7ed386ba94
Can anyone explain the logic behind this and suggest a solution? Thanks!
First of all, I assume you are asking about Alfresco 4.0. The way actions can be extended is completely new in 4.0, and most of us haven't used it yet.
The logic that resolves the placeholders is probably Java code in the Share webapp (I haven't found the exact location). node.nodeRef is a String, so you cannot call nodeRef.id on it. In my opinion you have two options:
You can keep type="link" and node.nodeRef, but link to a custom repository-side webscript which then generates the correct URL and forwards (HTTP status 301) to the correct destination.
You change the type to type="javascript" and implement a callback function in JavaScript. This will be called when the link is clicked and can build the correct URL. To include custom JavaScript you can use the dependencies in the Share config.
As far as I know the only documentation available for extending the new actions is:
http://blogs.alfresco.com/wp/mikeh/tag/4-0/
http://docs.alfresco.com/4.0/topic/com.alfresco.enterprise.doc/concepts/doclib-web-tier.html
Just use node.id, as seen in the JavaScript API Wiki.
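Assuming {node.id} resolves the same way as {node.nodeRef}, the action from the question would then become (sketch, untested):

```xml
<action id="blabla" type="link" label="label">
  <param name="page">../../alfresco/wcs/myPlugin/editor/{node.id}</param>
  <param name="target">_blank</param>
</action>
```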