Stream key in Azure Media Services live streaming - azure-media-services

I have created a channel and a program in Azure Media Services. Both are running and I have the ingest URL. But OBS Studio asks me for a "stream key". I tried providing "default" as suggested in "https://medium.com/#dsayed/live-streaming-with-microsoft-azure-eb6408d31ed", but it is not working. Please let me know if I am missing anything, or how to obtain the stream key in Azure.
Regards,
Navaneeth

The stream key is probably what Wirecast calls a "stream name"; you can put whatever you want ('default', 'mystream', etc.).
Please make sure to put the full RTMPS ingest URL in OBS, including the token generated by Media Services.
For example:
rtmps://02df50c7c69e4f138a25909axxxxxxxx.channel.media.azure.net:1935/live/99cf7a5638404205b32582xxxxxxxxxx
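If OBS's two fields are what trips you up, another way to fill them in is to split the ingest URL at the last '/': OBS joins "Server" and "Stream Key" with a '/' to build the publish URL, so the token segment can serve as the key. A minimal sketch, using the example URL above:

```python
# Map an Azure Media Services ingest URL onto OBS's "Server" / "Stream Key" fields.
ingest_url = (
    "rtmps://02df50c7c69e4f138a25909axxxxxxxx.channel.media.azure.net"
    ":1935/live/99cf7a5638404205b32582xxxxxxxxxx"
)

server, stream_key = ingest_url.rsplit("/", 1)
print("Server:    ", server)      # rtmps://...channel.media.azure.net:1935/live
print("Stream Key:", stream_key)  # the token generated by Media Services
```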

Related

Cannot create a batch pipeline to get data from ZohoCRM with HTTP plugin 1.2.1 to BigQuery. Returns Spark Program 'phase-1' failed

This is my first post here; I'm new to Data Fusion and have little to no coding skills.
I want to get data from ZohoCRM into BigQuery, with each ZohoCRM module (e.g. accounts, contacts, ...) becoming a separate table in BigQuery.
To connect to Zoho CRM I obtained a code, token, refresh token and everything else needed, as described here: https://www.zoho.com/crm/developer/docs/api/v2/get-records.html. I then ran a successful get-records request via Postman, as described there, and it returned the records from the Zoho CRM Accounts module as JSON.
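For reference, the successful Postman call corresponds roughly to this request (a minimal sketch; the token is a placeholder and the API domain depends on your Zoho data center):

```python
import requests

# Placeholder access token obtained via the OAuth flow in the Zoho docs above.
ACCESS_TOKEN = "1000.xxxxxxxx.yyyyyyyy"

resp = requests.get(
    "https://www.zohoapis.com/crm/v2/Accounts",   # domain depends on the data center
    headers={"Authorization": f"Zoho-oauthtoken {ACCESS_TOKEN}"},
    params={"page": 1, "per_page": 200},
)
resp.raise_for_status()
accounts = resp.json()["data"]   # list of Account records, as returned to Postman
```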
I thought it would all be fine and set the parameters in Data Fusion (DataFusion_settings_1 and DataFusion_settings_2 screenshots), and it validated fine. Then I previewed and ran the pipeline without deploying it. It failed with the info shown in the logs (logs_screenshot). I tried manually entering a few fields in the schema when the format was JSON, and I tried changing the format to CSV; neither worked. I also tried switching "Verify HTTPS Trust Certificates" on and off. It did not help.
I'd be really thankful for some help. Thanks.
Update, 2020-12-03
I got in touch with a Google Cloud account manager, who took my question to their engineers, and here is the info:
The HTTP plugin can be used to "fetch Atom or RSS feeds regularly, or to fetch the status of an external system"; it does not seem to be designed for APIs.
At the moment, a more suitable tool for data collected via APIs is Dataflow: https://cloud.google.com/dataflow
"Google Cloud Dataflow is used as the primary ETL mechanism, extracting the data from the API Endpoints specified by the customer, which is then transformed into the required format and pushed into BigQuery, Cloud Storage and Pub/Sub."
https://www.onixnet.com/insights/gcp-101-an-introduction-to-google-cloud-platform
So in the next few weeks I'll be looking at Dataflow.
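For anyone heading the same way, the Dataflow description quoted above boils down to a small Apache Beam pipeline that calls the API and writes the rows to BigQuery. A minimal sketch, assuming the same Zoho Accounts endpoint and placeholder credentials, project, and table names:

```python
import apache_beam as beam
import requests


class FetchAccounts(beam.DoFn):
    """Call the Zoho CRM API and emit one dict per record."""

    def __init__(self, access_token):
        self.access_token = access_token

    def process(self, _):
        resp = requests.get(
            "https://www.zohoapis.com/crm/v2/Accounts",
            headers={"Authorization": f"Zoho-oauthtoken {self.access_token}"},
        )
        resp.raise_for_status()
        for record in resp.json().get("data", []):
            # Keep only the fields present in the BigQuery schema below.
            yield {"id": record.get("id"), "Account_Name": record.get("Account_Name")}


with beam.Pipeline() as pipeline:
    (
        pipeline
        | "Start" >> beam.Create([None])                    # single element -> one fetch
        | "Fetch" >> beam.ParDo(FetchAccounts("1000.xxxxxxxx.yyyyyyyy"))
        | "Write" >> beam.io.WriteToBigQuery(
            "my-project:zoho.accounts",
            schema="id:STRING,Account_Name:STRING",
            write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
            create_disposition=beam.io.BigQueryDisposition.CREATE_IF_NEEDED,
        )
    )
```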
Can you please attach the complete logs of the preview run? Make sure to redact any PII. Also, what version of CDF are you using? Is the CDF instance private or public?
Thanks and Regards,
Sagar
Did you end up using Dataflow?
I am also experiencing the same issue with the HTTP plugin, but my temporary workaround has been to use Cloud Scheduler to periodically trigger a Cloud Function that fetches my data from the API and exports it as JSON to GCS, where Data Fusion can then access it.
My solution is of course non-ideal, so I am still looking for a way to use the Data Fusion HTTP plugin. I was able to make it work to get sample data from public API endpoints, but for a reason still unknown to me I can't get it to work for my actual API.
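For what it's worth, my workaround is roughly the following background Cloud Function (a minimal sketch; the API endpoint, token, and bucket name are placeholders), triggered by a Cloud Scheduler job via Pub/Sub:

```python
import json

import requests
from google.cloud import storage

API_URL = "https://api.example.com/v1/records"   # placeholder endpoint
API_TOKEN = "my-token"                           # placeholder credential
BUCKET = "my-datafusion-staging"                 # placeholder bucket


def fetch_to_gcs(event, context):
    """Background Cloud Function triggered by a Cloud Scheduler Pub/Sub message.

    Fetches the API payload and writes it to GCS as JSON, where Data Fusion
    (or anything else) can pick it up.
    """
    resp = requests.get(API_URL, headers={"Authorization": f"Bearer {API_TOKEN}"})
    resp.raise_for_status()

    blob = storage.Client().bucket(BUCKET).blob("exports/latest.json")
    blob.upload_from_string(json.dumps(resp.json()), content_type="application/json")
```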

Detect stream in data load editor in Qlik Sense

Is it possible to get stream-related information in the data load editor in a Qlik Sense app? I cannot create new apps. I can access other information like the app ID, app path, etc.
The stream that an application is published in is, to the best of my knowledge, not available through a system function within the load script. You can certainly pull that information from the Repository API inline. The obvious candidate for doing so would be the monitor_apps_REST_app data connection, which should be present on your system. You should be able to pass DocumentName() as a variablized WHERE clause into the REST connection pull and then store the stream name or ID.
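As an illustration of the same lookup done directly against the Repository API over HTTPS (rather than via the monitor_apps_REST_app connection in the load script), a minimal Python sketch, assuming certificate access to the QRS port; the server name and app name are placeholders:

```python
import requests

XRFKEY = "0123456789abcdef"  # any 16-character value, echoed in the header
APP_NAME = "My Dashboard"    # placeholder; in the load script this would be DocumentName()

resp = requests.get(
    "https://qlikserver.example.com:4242/qrs/app/full",
    params={"filter": f"name eq '{APP_NAME}'", "xrfkey": XRFKEY},
    headers={
        "X-Qlik-Xrfkey": XRFKEY,
        "X-Qlik-User": "UserDirectory=INTERNAL; UserId=sa_repository",
    },
    cert=("client.pem", "client_key.pem"),  # exported Qlik client certificates
    verify=False,                           # or the path to root.pem
)
resp.raise_for_status()
for app in resp.json():
    stream = app.get("stream") or {}        # empty for unpublished apps
    print(app["name"], "->", stream.get("name"), stream.get("id"))
```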
What is the ultimate use case?

Azure Batch support for Data Lake Store Linked Service

I am using a Data Factory pipeline with a custom activity (configured to run on Azure Batch) that has a Data Lake Store input dataset and output dataset. The Data Lake Store linked service uses service-to-service authentication (service principal) and works fine when used in a Copy activity through the Copy Wizard. But when it is used with a custom activity that tries to check whether a file is present in the data lake, the activity fails with the error "Authorization is required". Upon using an Azure Blob Store as the input and output datasets, the same custom activity works fine.
It seems like an issue with the Azure Batch compute node not being able to authorize against Data Lake Store. Please help if you have solved the above-mentioned problem.
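For context, the check inside the custom activity is essentially the following (a minimal sketch using the azure-datalake-store Python SDK rather than the .NET activity code; the tenant, client, store, and path values are placeholders):

```python
from azure.datalake.store import core, lib

# Service principal (service-to-service) credentials; placeholders.
token = lib.auth(
    tenant_id="my-tenant-guid",
    client_id="my-app-id",
    client_secret="my-app-secret",
)

adl = core.AzureDLFileSystem(token, store_name="mydatalakestore")

# The custom activity fails with "Authorization is required" on calls like this:
print(adl.exists("/input/somefile.csv"))
```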
I had this exact same issue about 3 weeks ago. I feel your pain!
This is a Microsoft bug!
After much trial and error and redeployments I raised a support ticket with Microsoft, who confirmed that service principal authentication for Data Lake Store currently only works with copy activities, not with custom activities.
This is the official response I got on Monday 10th April.
The issue happen because of a bug that custom activity's connector schema doesn't match the latest published connector schema. Actually, we notice the issue on custom activity and have plan to fix & deploy to prod in next 2 weeks.
Be aware that if you change your linked service back to use a session token etc., you'll also need to redeploy any pipelines that contain the custom activities. Otherwise you'll get another error, something like the following...
Access is forbidden, please check credentials and try again. Code: 'AuthenticationFailed' Message: 'Server failed to authenticate the request. Make sure the value of Authorization header is formed correctly including the signature.'
Hope this helps.

How to automatically create a new file based on an existing file within Google Cloud Storage

It's the first time I've used Google Cloud, so I might be asking the question in the wrong place.
 
An information provider uploads a new file to Google Cloud Storage every day.
The file contains the information for all my clients/departments.
I have to sort through the information and create a new file (or files) containing the relevant information for each department in my company, so that everyone only gets the information relevant to them (security).
I can't figure out what steps I need to follow to complete the task.
Can you help me?
You want a process that starts automatically and generates new files whenever you upload something to Google Cloud Storage.
The easiest way to handle this is using Object Change Notifications. You can set up Object Change Notifications per bucket, and this will send a POST request to a URL that you define.
You can then easily set up a server (or run it on App Engine) that executes an action based on the POST request that it receives.
There is an even simpler option (although still in alpha) named Cloud Functions. Cloud Functions is a serverless service that provides event-based microservices (e.g. 'do this' when a new file is uploaded to GCS). This means you only have to write the code that defines what needs to happen when a new file is uploaded, and Cloud Functions will take care of executing it when you upload a file to GCS. See this tutorial on using Cloud Functions with Google Cloud Storage.
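As a rough illustration of the Cloud Functions approach, a background function triggered by a new upload could split the file per department like this (a minimal Python sketch; the output bucket name, the 'department' column, and the CSV layout are assumptions about your data):

```python
import csv
import io
from collections import defaultdict

from google.cloud import storage

OUTPUT_BUCKET = "per-department-files"  # placeholder output bucket


def split_by_department(event, context):
    """Triggered when the provider uploads a new object to the source bucket."""
    client = storage.Client()
    source = client.bucket(event["bucket"]).blob(event["name"])
    rows = csv.DictReader(io.StringIO(source.download_as_text()))

    # Group rows by the (assumed) 'department' column.
    per_department = defaultdict(list)
    for row in rows:
        per_department[row["department"]].append(row)

    # Write one CSV per department; access can then be restricted per prefix/bucket.
    for department, dep_rows in per_department.items():
        out = io.StringIO()
        writer = csv.DictWriter(out, fieldnames=dep_rows[0].keys())
        writer.writeheader()
        writer.writerows(dep_rows)
        client.bucket(OUTPUT_BUCKET).blob(
            f"{department}/{event['name']}"
        ).upload_from_string(out.getvalue(), content_type="text/csv")
```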

Live Stream midroll ad injection in Wowza Streaming Engine

I haven't found any way to automate inserting an ad spot into an existing live stream without stopping the streams and/or using a Flash client to interact with Wowza.
The idea is that these ads can be randomly chosen and inserted into the stream programmatically, in an automated way.
Can someone please point me in the right direction of how to properly change sources on the fly?
Thanks!
The following articles may be of interest to you:
https://www.wowza.com/docs/how-to-switch-streams-using-stream-class-streams
https://www.wowza.com/docs/how-to-control-stream-class-streams-dynamically-modulestreamcontrol
https://www.wowza.com/docs/how-to-use-ipublishingprovider-api-to-publish-server-side-live-streams
I've previously created a custom module for Wowza that allows you to create an output stream from a live input stream, then control the output and switch between the live input stream and other live or on-demand streams.