Is there any way to set up an event hub so that it starts receiving events (from IoT Hub) from a specified time? Sometimes I need to make small changes to my code, and I don't want to repeat actions on data that was sent before I deployed my new event hub code. Maybe I should use something different to apply custom logic to my IoT Hub data, something that lets me process received data without re-running the same code on data I received before deploying my service?
You haven't specified which API you use, but here are two options:
If you are receiving the events directly with EventHubReceiver, there is a CreateReceiver() method overload which accepts a DateTime startingDateTimeUtc; see the API reference and the first sketch below.
If you are using EventProcessorHost, you can specify the initial timestamp offset as part of EventProcessorOptions.InitialOffsetProvider; see the docs and the second sketch below. I believe existing checkpoints will override this value, so you'd have to clean up the checkpoints in blob storage when deploying a new version.
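For the first option, a minimal sketch using the legacy Microsoft.ServiceBus.Messaging (WindowsAzure.ServiceBus) client; the connection string, event hub name and partition id are placeholders, and the exact types differ if you're on the newer Microsoft.Azure.EventHubs package:

```csharp
using System;
using System.Text;
using Microsoft.ServiceBus.Messaging;

class ReceiveFromTimestamp
{
    static void Main()
    {
        // Placeholders: use the IoT hub's Event Hub-compatible endpoint and name.
        var client = EventHubClient.CreateFromConnectionString(
            "<event-hub-compatible-connection-string>", "<event-hub-compatible-name>");

        // CreateReceiver overload that takes a starting DateTime (UTC):
        // only events enqueued after that instant are delivered.
        var receiver = client.GetDefaultConsumerGroup()
            .CreateReceiver("0", DateTime.UtcNow.AddMinutes(-5));

        EventData message;
        while ((message = receiver.Receive(TimeSpan.FromSeconds(10))) != null)
        {
            Console.WriteLine(Encoding.UTF8.GetString(message.GetBytes()));
        }
    }
}
```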
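For the second option, a sketch of wiring up InitialOffsetProvider on the same legacy API (names and connection strings are placeholders; as noted above, existing checkpoints in blob storage take precedence over this provider):

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.ServiceBus.Messaging;

class MyEventProcessor : IEventProcessor
{
    public Task OpenAsync(PartitionContext context) => Task.CompletedTask;

    public Task CloseAsync(PartitionContext context, CloseReason reason) => Task.CompletedTask;

    public async Task ProcessEventsAsync(PartitionContext context, IEnumerable<EventData> messages)
    {
        // ... your processing logic ...
        await context.CheckpointAsync();
    }
}

class RegisterProcessor
{
    static async Task Main()
    {
        var host = new EventProcessorHost(
            Environment.MachineName,                      // host name
            "<event-hub-compatible-name>",                // event hub path
            EventHubConsumerGroup.DefaultGroupName,       // consumer group
            "<event-hub-compatible-connection-string>",   // event hub connection string
            "<storage-connection-string>");               // storage for leases/checkpoints

        var options = new EventProcessorOptions
        {
            // Used only when no checkpoint exists for a partition:
            // start reading from this UTC timestamp.
            InitialOffsetProvider = partitionId => DateTime.UtcNow
        };

        await host.RegisterEventProcessorAsync<MyEventProcessor>(options);
    }
}
```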
I am using the Azure Communication Services APIs to start a group video/audio call in my Angular / .NET Core application. I am also using Azure.Communication.CallingServer to record the calls. I use an Azure Event Grid webhook on the Microsoft.Communication.RecordingFileStatusUpdated event so Azure can notify my API when the recording is available for download. All this works well and I'm able to download the recording stream.
The issue I'm having is mapping the recording file to the meeting record in my application database. The Event Grid event subscription is created at design time in Azure, and it does not seem able to pass any custom data. When the recording becomes available for download, can Event Grid send me custom data about the recording that I previously passed at runtime?
You can persist the call ID and/or recording ID (part of the StartRecording response) at the time you start the recording, and then map the RecordingFileStatusUpdated event back to those IDs by using the data in the event's subject.
Check out how this sample uses recordingData to persist the recording status.
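A rough sketch of that mapping, based on the beta Azure.Communication.CallingServer SDK; the in-memory dictionary stands in for whatever store your app uses, and the exact subject format (assumed here to contain ".../recordingId/{id}") should be verified against the events you actually receive:

```csharp
using System;
using System.Collections.Concurrent;
using System.Text.RegularExpressions;
using System.Threading.Tasks;
using Azure.Communication.CallingServer;

static class RecordingMap
{
    // recordingId -> meetingId; a database table in a real application.
    private static readonly ConcurrentDictionary<string, string> RecordingToMeeting = new();

    // Call when the meeting owner starts recording.
    public static async Task StartAndTrackAsync(
        CallingServerClient client, string serverCallId, Uri callbackUri, string meetingId)
    {
        var serverCall = client.InitializeServerCall(serverCallId);
        var result = await serverCall.StartRecordingAsync(callbackUri);

        RecordingToMeeting[result.Value.RecordingId] = meetingId;
    }

    // Call from the RecordingFileStatusUpdated webhook to find the meeting.
    public static string ResolveMeetingId(string eventSubject)
    {
        // Assumption: the subject contains a ".../recordingId/{id}" segment.
        var match = Regex.Match(eventSubject, "recordingId/(?<id>[^/]+)");
        return match.Success && RecordingToMeeting.TryGetValue(match.Groups["id"].Value, out var meetingId)
            ? meetingId
            : null;
    }
}
```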
I have a Google Cloud Function triggered by a Google Cloud Storage object.finalize event. When I deploy a new version of this function, I would like to run it for every existing file in the bucket (all of which have already been processed by the previous version of the function). Processing all the existing files in the bucket is a long-running task, so I don't think a Google Cloud Function that processes all the files in a single run is an option.
The best option I can see for now is to write a Google Cloud Function that I can trigger via HTTP, which lists all the files in the bucket and publishes one message per file via Google Pub/Sub, and then to process each of these messages with a slightly modified version of my initial Cloud Function that accepts a Pub/Sub event in place of the object.finalize storage event.
I think it can work but I was wondering if there was an easier way to perform this operation.
If the operation you're trying to perform may take longer than the maximum time a Cloud Function can run, you will need to split that operation into multiple steps. Your approach of using a Pub/Sub trigger for each individual file sounds like a valid way to do that.
One option might be to write a small program that lists all of the objects in a bucket and, for each object, posts a message to Cloud Pub/Sub that triggers your function in the same way a GCS change would.
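For instance, a small one-off console sketch using the Google.Cloud.Storage.V1 and Google.Cloud.PubSub.V1 .NET client libraries; the project, bucket and topic names are placeholders, and the message body is simply the object name, which the modified function would read instead of the storage event payload:

```csharp
using System;
using System.Threading.Tasks;
using Google.Cloud.PubSub.V1;
using Google.Cloud.Storage.V1;

class BackfillBucket
{
    static async Task Main()
    {
        // Placeholders: replace with your own project, bucket and topic.
        const string projectId = "my-project";
        const string bucketName = "my-bucket";
        const string topicId = "reprocess-objects";

        var storage = StorageClient.Create();
        var publisher = await PublisherClient.CreateAsync(
            TopicName.FromProjectTopic(projectId, topicId));

        // Publish one message per existing object; the Pub/Sub-triggered function
        // reads the object name from the message body instead of the storage event.
        foreach (var obj in storage.ListObjects(bucketName))
        {
            await publisher.PublishAsync(obj.Name);
        }

        await publisher.ShutdownAsync(TimeSpan.FromSeconds(15));
    }
}
```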
In the past, it was possible to set up an Azure alert on a single event for a resource, e.g. on a single Data Factory RunFinished event where the status is Failed*.
This appears to have been superseded by "Activity Log Alerts".
However, these alerts only seem to work either on a metric threshold (e.g. number of failures in 5 minutes) or on events related to the general administration of the resource (e.g. whether it has been deployed), not on the operations of the resource.
A threshold doesn't make sense for Data Factory, as a data factory may only run once a day; if a failure happens and then doesn't happen again X minutes later, that doesn't mean it has been resolved.
The activity event alerts don't seem to include things like failures.
Am I missing something?
Is it because this is expected to be done in OMS Log Analytics now? Or perhaps even in Event Grid later?
*N.B. it is still possible to create these alert types via ARM templates, but you can no longer see them in the portal.
The events you're describing are part of a resource type's diagnostic logs, which are not alertable in the same way the Activity Log is. I suggest routing the data to Log Analytics and setting up the alert there: https://learn.microsoft.com/en-us/azure/data-factory/monitor-using-azure-monitor
I have defined several Activities in IBM Data Connect (on Bluemix) and would like to chain them together, e.g. one for copying a Cloudant DB to dashDB, another for refining the copied data, and so on.
Can this be done? If yes, how?
Data Connect doesn't currently support chaining your activities together. However, you could make use of the current scheduling capabilities to arrange the activities to run in sequence. As the only trigger mechanism we currently have is time, their successful operation would require you to leave enough time for each activity to finish before the next one in the chain starts.
I will find out for you if we have the kind of feature you're after on our roadmap.
Regards,
Wesley -
IBM Bluemix Data Connect Engineering
You can also use the Data Connect API to do the orchestration. See the documentation here https://console.ng.bluemix.net/docs/services/dataworks1/index.html
Regards,
Hernando Borda
IBM Bluemix Data Connect Product Manager
I have one instance of the Orion context broker running and several other services receiving notifications from it through "ONCHANGE" subscriptions.
I also have a simple script that checks the existing subscriptions through GET /v2/subscriptions and then renews them as needed. However, this endpoint does not return the format (XML/JSON) in which the data is sent to each subscriber.
The problem is that different services require different formats, and without knowing the initial Accept header it is not possible to renew a subscription correctly, since the format is also updated when any of the update methods is called (POST /v1/updateContextSubscription or PUT /v1/contextSubscriptions/{subscriptionID}), defaulting to XML.
Is there a way to know the format of a subscription without accessing the Mongo database directly? Or is there an update method that does not change the format of the notifications set up initially?
XML is deprecated since Orion 0.23.0 (more info here). Thus, I recommend you adapt all the notification receptors to process only JSON and always update subscriptions using JSON.
Otherwise, your subscription-renewal program needs to keep track of which format is being used by each receptor (in a URL->format table) in order to choose the right one in each case, as in the sketch below.
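For illustration, a rough sketch of that bookkeeping; the receptor URLs, the one-month duration and the hard-coded table are placeholders, and renewal of XML subscriptions is only stubbed out since the equivalent NGSIv1 XML payload is omitted here:

```csharp
using System;
using System.Collections.Generic;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class SubscriptionRenewer
{
    // URL -> format table, maintained wherever the subscriptions were originally created.
    private static readonly Dictionary<string, string> FormatByReference = new()
    {
        ["http://service-a:1028/notify"] = "application/json",
        ["http://service-b:1028/notify"] = "application/xml",
    };

    private static readonly HttpClient Http = new() { BaseAddress = new Uri("http://orion:1026") };

    // Renew a subscription using the same format its receptor was subscribed with,
    // so Orion keeps sending notifications in that format.
    public static async Task RenewAsync(string subscriptionId, string referenceUrl)
    {
        var contentType = FormatByReference.TryGetValue(referenceUrl, out var format)
            ? format
            : "application/json"; // sensible default once XML is phased out

        if (contentType != "application/json")
        {
            // For XML receptors the equivalent updateContextSubscriptionRequest XML body
            // would be sent instead (omitted here; XML is deprecated anyway).
            throw new NotSupportedException($"Renew {subscriptionId} in XML manually.");
        }

        // NGSIv1 renewal payload: the subscription id plus a new duration.
        var body = $"{{\"subscriptionId\": \"{subscriptionId}\", \"duration\": \"P1M\"}}";
        var response = await Http.PostAsync(
            "/v1/updateContextSubscription",
            new StringContent(body, Encoding.UTF8, contentType));
        response.EnsureSuccessStatusCode();
    }
}
```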