Do subscriptions work with singleValueExtendedProperties? - rest

I am looking for a way to use the Graph API to sync calendar events, for example different types of meeting requests such as "Case Event" (court case) meetings, corporate meetings, and personal meetings. In this case in particular, we would like to get a notification when someone tries to delete a "Case Event" meeting, and prevent the delete.
We have a Java application that adds the different types of meeting requests to the calendar, and all of them have an event origin of {f19d3c30-0660-4f7f-96df-6dc78a686633}.
The following configuration works fine for a changeType of Created or Accepted:
"subscriptionConfiguration": {
"changeType": "Created,Accepted,Deleted",
"notificationUrl": "https://xxxxx/listen",
"resource": "me/events/?$filter=singleValueExtendedProperties/any(ep: ep/id eq 'String {XXYY1231-0660-5ty5-96df-6brca687744} Name event_origin' and ep/value eq null)",
.... },
The code above returns nothing when the changeType is Deleted.
The only reason we are using the singleValueExtendedProperties filter is to pick out only the events created from our Java application and act upon them. It works fine for "changeType": "Created,Accepted", but for Deleted it returns nothing; the filter seems to remove those notifications.
Is there another way we could filter the requests that does not require the singleValueExtendedProperties filter?
Can we think of other options that would be better than the workaround above?

While you should be able to get notified about the delete, both sync and notifications are "after the fact", so there is no way to prevent the delete. The closest you could get is to recreate the event, which would of course require you to keep an offline copy of the event so you can recreate it.
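To make that workaround concrete, here is a minimal C# sketch of the recreate step. It is only an illustration: the access-token plumbing and the saved event JSON (savedEventJson) are assumptions you would fill in from your own offline store, not anything from the question.

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

public static class EventRestorer
{
    // On receiving a "deleted" change notification, re-create the event from
    // an offline copy. The deleted event cannot be "undeleted", so this
    // produces a fresh copy with a new id.
    public static async Task RecreateEventAsync(string accessToken, string savedEventJson)
    {
        using (var client = new HttpClient())
        {
            client.DefaultRequestHeaders.Authorization =
                new AuthenticationHeaderValue("Bearer", accessToken);

            var content = new StringContent(savedEventJson, Encoding.UTF8, "application/json");
            var response = await client.PostAsync(
                "https://graph.microsoft.com/v1.0/me/events", content);
            response.EnsureSuccessStatusCode();
        }
    }
}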

Related

How to retrieve the GameplayEvent payload from a GameplayAbility trigger?

When using Unreal's GameplayAbilitySystem, a GameplayAbility can be triggered using a GameplayEvent.
When using UAbilityTask_WaitGameplayEvent, the GameplayEvent provides a payload of class FGameplayEventData.
Is it possible to retrieve such an event payload from within a GameplayAbility as well, after it has been activated by a trigger of type GameplayEvent?
A solution either in Blueprint or C++ would be fine.
It's been a long time, but I think I should add an answer in case someone else comes here. After doing what you did in your pictures, go to your ability, delete the default 'Event ActivateAbility', and override 'Event ActivateAbilityWithEvent' instead. Deleting the default is important: if you don't, the ability will fall back to it and the event version won't trigger.
Once that is done, sending your trigger tag will start the ability and call this new event, which will carry your payload and anything else you want to send. Hope that helps!

Facebook Messenger Bot Proactive/Push Notifications using Azure

I am building a bot for Facebook Messenger using the Microsoft Bot Framework. I am planning to use CosmosDB for state management and also as my backend data store. (I am not tied to CosmosDB and can use any other store if needed.)
I need to send daily/weekly proactive messages (push notifications) to users based on their time preference. I will capture their time preference when they first interact with the bot.
What is the best way to deliver these notifications?
As I will be storing these preferences in CosmosDB, I am thinking of using a CosmosDB trigger to create an Azure Function and scheduling it based on the user's time preference. This Azure Function will call my webhook, which will deliver the messages. If required, I will change the Function schedule when a user changes his/her preference.
My questions are:
Is this a good approach?
Are there any other alternatives (Notification Hubs?)
I should be able to set specific times for notifications (like at the top of the hour or something like that). Does it make sense to schedule an Azure Function to run at these hours rather than creating a function per user preference? (I could actually combine these two approaches, too.)
Thank you in advance.
First, I don't think there's any "right" answer to be given here; it's going to depend a lot on your domain's specific needs. Scale is going to play a major factor in the design: will you have 100 users? 10,000? 1 million? I'm going to assume you want to design for maximum scale up front, but it could be overkill.
That said, based on what you've described, I don't think a CosmosDB trigger is the solution to your problem, because that only fires when the preference data is created or updated. I assume that, from that point forward, your function needs to fire continuously at the time slot each user has opted into, correct?
So let's pretend you let people choose from the 24 hours in the day. A naïve approach would be to use a scheduled trigger that fires every hour, queries CosmosDB for all the documents where the preference is set to that particular hour, and sends out notifications from there. The problem is how you scale from there and how you deal with idempotency in the face of failures.
First off, a timer trigger only ever spins up one instance. If you just queried the CosmosDB documents and processed them one by one within that single trigger, you'd hit a ceiling relatively quickly on how many notifications you can scale to. Instead, you want that timer trigger to fan the notifications out to as many "worker" function instances as possible. The timer trigger can act as the orchestrator in the sense that it owns the query against CosmosDB and turns each document it finds for that particular notification time window into a message placed on a queue, to be processed by a separate function which scales out on its own.
There are actually a couple of ways you can accomplish this with Azure Functions; it really depends on how early an adopter of the technology you are comfortable being.
The first is what I would call the "manual" way: use the existing Azure Storage Queue extension by taking an IAsyncCollector<YourNotificationWorkerMessage> as a parameter to the timer function, bound to the worker queue, and pump the messages out through that. Then you write a second, companion function with a QueueTrigger bound to that same queue, which takes care of processing each message. This second function is where you get the scaling, enabling you to process all of the queued messages as quickly as possible based on whatever scaling parameters you choose to configure. This is the "simplest" approach; a rough sketch follows.
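As a sketch only, and under assumed names (the queue notification-work, the NotificationWorkItem type, and the stubbed Cosmos DB query and webhook call are all illustrative, not a definitive implementation), the manual approach might look like this:

using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public class NotificationWorkItem
{
    public string UserId { get; set; }
}

public static class NotificationFanOut
{
    // Orchestrating timer: queries for this hour's subscribers and fans each
    // one out as a queue message.
    [FunctionName("HourlyNotificationTimer")]
    public static async Task FanOut(
        [TimerTrigger("0 0 * * * *")] TimerInfo timer, // top of every hour
        [Queue("notification-work")] IAsyncCollector<NotificationWorkItem> queue,
        ILogger log)
    {
        int hour = DateTime.UtcNow.Hour;
        foreach (string userId in await QueryUserIdsForHourAsync(hour))
        {
            await queue.AddAsync(new NotificationWorkItem { UserId = userId });
        }
    }

    // Worker: scales out on its own, one queue message at a time.
    [FunctionName("NotificationWorker")]
    public static async Task Work(
        [QueueTrigger("notification-work")] NotificationWorkItem item,
        ILogger log)
    {
        await SendNotificationAsync(item.UserId);
        log.LogInformation("Notified {user}", item.UserId);
    }

    private static Task<IEnumerable<string>> QueryUserIdsForHourAsync(int hour)
        => Task.FromResult<IEnumerable<string>>(new List<string>()); // hypothetical Cosmos DB query

    private static Task SendNotificationAsync(string userId)
        => Task.CompletedTask; // hypothetical webhook call
}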
The second approach would be to adopt the newer Durable Functions extension. With that model, you don't have to think directly about creating a worker queue. You simply kick off a new instance of your orchestrator function from the timer function, and the orchestrator fans out the work by invoking N "concurrent" calls to an activity function, one per notification. It happens to distribute those calls using queues under the covers, but that's an implementation detail you no longer maintain yourself. Additionally, if delivering the notification requires more involved work and/or retry logic, you might consider using a sub-orchestration instead of a simple activity. Finally, another benefit of this approach is that you can "fan back in" to your main orchestrator function once all the notifications are delivered to do some follow-up work, even if that's simply logging that the notification cycle has completed for this hour.
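A hedged sketch of that shape, using the Durable Functions v2 extension's API (the function names and the stubbed activities are assumptions):

using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.DurableTask;

public static class NotificationOrchestration
{
    // Timer starter: kicks off one orchestration per hourly cycle.
    [FunctionName("HourlyStarter")]
    public static Task Start(
        [TimerTrigger("0 0 * * * *")] TimerInfo timer,
        [DurableClient] IDurableOrchestrationClient client)
        => client.StartNewAsync("NotificationOrchestrator", null, DateTime.UtcNow.Hour);

    [FunctionName("NotificationOrchestrator")]
    public static async Task RunOrchestrator(
        [OrchestrationTrigger] IDurableOrchestrationContext context)
    {
        int hour = context.GetInput<int>();
        var userIds = await context.CallActivityAsync<List<string>>("GetUsersForHour", hour);

        // Fan out: one concurrent activity invocation per notification.
        await Task.WhenAll(userIds.Select(id =>
            context.CallActivityAsync("SendNotification", id)));

        // Fan back in: everything is delivered; follow-up work (e.g. logging
        // that this hour's cycle completed) goes here.
    }

    [FunctionName("GetUsersForHour")]
    public static Task<List<string>> GetUsers([ActivityTrigger] int hour)
        => Task.FromResult(new List<string>()); // hypothetical Cosmos DB query

    [FunctionName("SendNotification")]
    public static Task Send([ActivityTrigger] string userId)
        => Task.CompletedTask; // hypothetical webhook call
}

Note the hour is passed in as orchestration input rather than read from DateTime.UtcNow inside the orchestrator; orchestrator code has to stay deterministic across replays.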
Now, the challenge with either of these approaches is dealing with failure while initially fetching the candidates for notification from CosmosDB: paging through the results and making sure you actually fan all of them out in an idempotent manner. You need to handle hiccups as you page, and you need to handle the fact that your whole function could be torn down mid-run and have to restart. Perhaps on the initial run of the 8AM notifications you got through page 273 out of 371 pages and then got hit with a complete network connectivity failure, or the VM your function was running on suffered a power failure. You could resume, but you'd need to know that you left off on page 273 and that you had processed the 27th record of that page, and start from there; otherwise you risk sending double notifications to your users. Maybe that's something you can accept, maybe it's not. Maybe you're OK with the 27 notifications on that page being duplicated as long as the first 272 pages aren't. Again, this is something you need to decide for your domain, but if you want to avoid this issue, your orchestrator function needs to track its progress to ensure it doesn't send out dupes (see the sketch below). I would say Durable Functions has a leg up here, as it comes with the ability to configure retries; maintaining the state of a particular run, though, is left up to the author in either approach.
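One hedged way to picture that progress tracking with Durable Functions: page through the candidates inside the orchestrator, one activity call per page. An orchestrator checkpoints after every awaited activity, so a replay after a crash skips pages that already completed instead of re-sending them. The function name and continuation-token shape here are assumptions:

using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.DurableTask;

public static class PagedNotifications
{
    [FunctionName("PagedNotificationOrchestrator")]
    public static async Task RunPaged(
        [OrchestrationTrigger] IDurableOrchestrationContext context)
    {
        string continuation = null;
        do
        {
            // Each awaited activity result is recorded in the orchestration
            // history; on replay after a failure, completed pages are skipped
            // rather than re-sent.
            continuation = await context.CallActivityAsync<string>(
                "SendNotificationsForPage", continuation);
        } while (continuation != null);
    }

    // Hypothetical activity: sends one page of notifications and returns the
    // continuation token for the next page, or null when done.
    [FunctionName("SendNotificationsForPage")]
    public static Task<string> SendPage([ActivityTrigger] string continuation)
        => Task.FromResult<string>(null); // stub
}

This still only gives at-least-once delivery within a page: if the activity dies mid-page, that page's records may be duplicated, which is exactly the trade-off described above. Record-level de-duplication inside the page activity remains your responsibility.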
I use proactive dialogs extensively with the Bot Framework and Messenger without any issue. During your Facebook approval process you simply need to inform them that you will be sending notifications through Messenger with your bot. As long as you use them to inform your users and stay away from promotional content, you should be fine.
I also use an Azure Function to trigger the proactive dialog from a custom controller endpoint.
Below is sample code for the Azure Function:
public static void Run(TimerInfo notificationTrigger, TraceWriter log)
{
    try
    {
        // Serialize the request object
        string timerInfo = JsonConvert.SerializeObject(notificationTrigger);
        // Create a request for the bot service with a security token
        HttpRequestMessage hrm = new HttpRequestMessage()
        {
            Method = HttpMethod.Post,
            RequestUri = new Uri(NotificationEndPointUrl),
            Content = new StringContent(timerInfo, Encoding.UTF8, "application/json")
        };
        hrm.Headers.Add("Authorization", NotificationApiKey);
        log.Info(JsonConvert.SerializeObject(hrm));
        // Call the service
        using (var client = new HttpClient())
        {
            Task task = client.SendAsync(hrm).ContinueWith((taskResponse) =>
            {
                HttpResponseMessage result = taskResponse.Result;
                var jsonString = result.Content.ReadAsStringAsync();
                jsonString.Wait();
                if (result.StatusCode != System.Net.HttpStatusCode.OK)
                {
                    // Surface whatever went wrong as an exception with details
                    throw new Exception($"AzureFunction - ERROR - HTTP {result.StatusCode}");
                }
            });
            task.Wait();
        }
    }
    catch (Exception ex)
    {
        // TODO: log
    }
}
Below is sample code for starting the proactive dialog:
public static async Task Resume<T, R>(string resumptionCookie) where T : IDialog<R>, new()
{
    // Deserialize the reference to the conversation
    ConversationReference conversationReference = JsonConvert.DeserializeObject<ConversationReference>(resumptionCookie);
    // Generate a message from bot to user
    var message = conversationReference.GetPostToBotMessage();
    var builder = new ContainerBuilder();
    using (var scope = DialogModule.BeginLifetimeScope(Conversation.Container, message))
    {
        // From a cold start the service is not yet authenticated with the
        // bot's Azure services, so we must trust the endpoint URL.
        if (!MicrosoftAppCredentials.IsTrustedServiceUrl(message.ServiceUrl))
        {
            MicrosoftAppCredentials.TrustServiceUrl(message.ServiceUrl, DateTime.MaxValue);
        }
        var botData = scope.Resolve<IBotData>();
        await botData.LoadAsync(CancellationToken.None);
        // This is our dialog stack
        var task = scope.Resolve<IDialogTask>();
        T dialog = scope.Resolve<T>(); // Resolve the dialog using Autofac
        try
        {
            task.Call(dialog.Void<R, IMessageActivity>(), null);
            await task.PollAsync(CancellationToken.None);
        }
        catch (Exception ex)
        {
            // TODO: log
        }
        finally
        {
            // Flush the dialog stack
            await botData.FlushAsync(CancellationToken.None);
        }
    }
}
Your dialog needs to be registered in Autofac.
Your resumptionCookie needs to be saved in your DB.
You might want to check Facebook's policy regarding proactive messages.
There's a 24-hour limit, but it might not be a blocker in your case:
https://developers.facebook.com/docs/messenger-platform/policy/policy-overview#standard_messaging

RESTful API and real life example

We have a web application (AngularJS and Web API) with quite simple functionality: it displays a list of jobs and allows users to select and cancel selected jobs.
We are trying to follow RESTful approach with our API, but that's where it gets confusing.
Getting jobs is easy - simple GET: /jobs
How should we cancel the selected jobs? Bear in mind that this is the only operation on jobs we need to implement. The easiest and most logical approach (to me) is to send the list of selected job IDs to the API (server) and do the necessary processing there. But that's not the RESTful way.
If we are to do it following the RESTful approach, it seems we need to send a PATCH request to jobs, with JSON similar to this:
PATCH: /jobs
[
    {
        "op": "replace",
        "path": "/jobs/123",
        "status": "cancelled"
    },
    {
        "op": "replace",
        "path": "/jobs/321",
        "status": "cancelled"
    }
]
That would require generating this JSON on the client, mapping it to some model on the server, parsing the "path" property to get the job ID, and then doing the actual cancellation. This seems very convoluted and artificial to me.
What is the general advice on this kind of operation? I'm curious what people do in real life when a lot of operations can't simply be mapped onto the RESTful resource paradigm.
Thanks!
If by cancelling a job you mean deleting it then you could use the DELETE verb:
DELETE /jobs?ids=123,321,...
If by cancelling a job you mean setting some status field to cancelled then you could use the PATCH verb:
PATCH /jobs
Content-Type: application/json
[ { "id": 123, "status": "cancelled" }, { "id": 321, "status": "cancelled" } ]
POST for Business Process
POST is often an overlooked solution in this situation. Treating resources as nouns is a useful and common practice in REST, and as such, POST is often mapped to the "CREATE" operation from CRUD semantics - however the HTTP Spec for POST mandates no such thing:
The POST method requests that the target resource process the representation enclosed in the request according to the resource's own specific semantics. For example, POST is used for the following functions (among others):
Providing a block of data, such as the fields entered into an HTML form, to a data-handling process;
Posting a message to a bulletin board, newsgroup, mailing list, blog, or similar group of articles;
Creating a new resource that has yet to be identified by the origin server; and
Appending data to a resource's existing representation(s).
In your case, you could use:
POST /jobs/123/cancel
and consider it an example of the first option - providing a block of data to a data-handling process - which is analogous to HTML forms using POST to submit the form.
With this technique, you could return the job representation in the body, and/or return a 303 See Other status code with the Location header set to /jobs/123.
Some people complain that this looks "too RPC", but there is nothing un-RESTful about it if you read the spec, and personally I find it much clearer than trying to find an arbitrary mapping from CRUD operations to real business processes.
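A hedged sketch of what such an endpoint could look like in Web API (the route, JobService, and the 303 redirect target are assumptions following the description above):

using System;
using System.Net;
using System.Net.Http;
using System.Web.Http;

public class JobCancelController : ApiController
{
    // POST /jobs/123/cancel - run the cancellation business process.
    [HttpPost]
    [Route("jobs/{id:int}/cancel")]
    public IHttpActionResult Cancel(int id)
    {
        JobService.Cancel(id); // hypothetical domain call

        // 303 See Other pointing back at the job resource, as described above.
        var response = Request.CreateResponse(HttpStatusCode.SeeOther);
        response.Headers.Location = new Uri($"/jobs/{id}", UriKind.Relative);
        return ResponseMessage(response);
    }
}

public static class JobService
{
    public static void Cancel(int id) { /* run the cancellation process */ }
}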
Ideally, if you are concerned with following the REST spec, the URI for the cancel operation should be provided to the client via a hypermedia link in your job representation. e.g. if you were using HAL, you'd have:
GET /jobs/123
{
    "id": 123,
    "name": "some job name",
    "_links": {
        "cancel": {
            "href": "/jobs/123/cancel"
        },
        "self": {
            "href": "/jobs/123"
        }
    }
}
The client could then obtain the href of the "cancel" rel link, and POST to it to effect the cancellation.
Treat Processes as Resources
Another option, depending on whether it makes sense in your domain, is to make a 'cancellation' a noun and associate data with it, such as who cancelled it, when it was cancelled, etc. This is especially useful if a job may be cancelled, reopened, and cancelled again (the history of changes could be useful business data), or if the act of cancelling is an asynchronous process that requires tracking the state of the cancellation request over time. With this approach, you could use:
POST /jobs/123/cancellations
which would "create" a job cancellation - you could then have operations like:
GET /jobs/123/cancellations/1
to return the data associated with the cancellation, e.g.
{
    "cancelledBy": "Joe Smith",
    "requestedAt": "2016-09-01T12:43:22Z",
    "status": "in process",
    "completedAt": null
}
and:
GET /jobs/123/cancellations
to return a collection of cancellations that have been applied to the job and their current status.
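For illustration only, a minimal Web API sketch of "creating" a cancellation; the CancellationStore and the returned body are assumptions:

using System.Web.Http;

public class JobCancellationsController : ApiController
{
    // POST /jobs/123/cancellations - "create" a cancellation for the job.
    [HttpPost]
    [Route("jobs/{jobId:int}/cancellations")]
    public IHttpActionResult Create(int jobId)
    {
        // Hypothetical: start the (possibly asynchronous) cancellation and
        // persist who requested it and when.
        int cancellationId = CancellationStore.Create(jobId, User.Identity.Name);

        // 201 Created with a Location the client can poll for status.
        return Created($"/jobs/{jobId}/cancellations/{cancellationId}",
                       new { status = "in process" });
    }
}

public static class CancellationStore
{
    public static int Create(int jobId, string requestedBy) => 1; // stub
}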
Example 1: Let's compare it with a real-world example. You go to a restaurant, sit at your table, and decide you want ABC. The waiter comes up and takes a note of what you want; you tell him you want ABC. So you are requesting ABC; the waiter takes your request to the kitchen and then serves you the food. In this case, the interface between you and the kitchen is the waiter. It's his responsibility to carry your request to the kitchen, make sure it gets done, and, once it is ready, come back to you with a response.
Example 2: Another relatable example is travel booking systems. For instance, take Kayak, one of the biggest online ticket-booking sites. You enter your destination, select dates, and click search, and what you get back are results from different airlines. How is Kayak communicating with all these airlines? The airlines must be exposing some level of information to Kayak, and all of that talking happens through APIs.
Example 3: Now open Uber. Once the site loads, it gives you the ability to log in or continue with Facebook or Google. In this case, Google and Facebook are also exposing some level of user information, under an agreement that already exists between Uber and Google/Facebook. That's why it lets you sign up with Google/Facebook.
PUT /jobs{/ids}/status "cancelled"
so for example
PUT /jobs/123,321/status "cancelled"
if you want to cancel multiple jobs. Be aware that the job ID must not contain the comma character.
https://www.rfc-editor.org/rfc/rfc6570#page-25

Paypal REST SDK - Billing Plans no longer retrievable after being made ACTIVE

So I'm using the PayPal PHP SDK from GitHub, http://paypal.github.io/PayPal-PHP-SDK/ . I've noticed some strange behavior which I don't understand.
Let's say I create a billing plan but don't touch it after creation, so that its state is simply CREATED. Everything is good: I can retrieve it from the list of plans. However, the moment I change the state to ACTIVE via a patch, I can see that it is in fact active, but only just once. Any subsequent attempts to list the plans no longer show that plan. What's going on? I'm literally copy-pasting the example source they give.
Edit: just to expand, I know the plan still exists, because I can subscribe users to it. Weirdly, the PayPal page where you click OK to subscribe is extremely non-verbose; it doesn't even say what the price is, just asks you to approve paying my store. And yet the Agreement object returned by PayPal, which includes the approval URL, has all this info. Weird.
If you are using the PayPal-PHP-SDK, you can pass more parameters to the Plan::all() method.
As shown in the List Plans sample code, you can pass the 'status' parameter:
try {
    // Get the list of all plans.
    // You can modify different params to change the returned list.
    // The explanation of each pagination parameter can be found at
    // https://developer.paypal.com/webapps/developer/docs/api/#list-plans
    $params = array('page_size' => '20', 'page' => '98', 'status' => 'ACTIVE');
    $planList = Plan::all($params, $apiContext);
} catch (Exception $ex) {
    ResultPrinter::printError("List of Plans", "Plan", null, $params, $ex);
    exit(1);
}
As shown, you can change the status, and the page along with page_size. This will get you the list of active plans.
By default, the list-plans call returns only plans with status CREATED, which is why your plan drops out of the default listing once you activate it.

Facebook Graph API event-id/comments?since=2014-02-01&until=2014-02-10, date filter has no effect

I am trying to fetch the comments made on a particular event by targeting this URL: https://graph.facebook.com/1466384840257158/comments
1. I am passing the user_access_token.
2. I have two comments at present on this event, both made on the same day (2014-03-29).
3. Now I try to pass a date range which should return an empty data result/object, like this: https://graph.facebook.com/1466384840257158/comments?since=2011-01-01&until=2014-01-10
4. This request has no effect; it still shows me the two comments made on the 29th.
5. I have tried the same kind of date range on my user-id/feed and it gave me an empty data object.
6. Finally, I tried event-id/feed (before trying the date filter) and it gave me the following error:
{
    "error": {
        "message": "An unexpected error has occurred. Please retry your request later.",
        "type": "OAuthException",
        "code": 2
    }
}
Could you please guide me on the date filter for that particular query (point 4), or suggest any other way to use a date filter on comments made on an event?
Comments use cursor-based pagination, so you cannot use since or until on the comments endpoint (those parameters do work, for example, on the feed endpoint).
To get the comments in a time range you have to fetch all comments from now back to the start of the range, for example with https://graph.facebook.com/1466384840257158/comments?filter=stream&limit=1000 plus paging (filter=stream orders the results by timestamp).
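A hedged C# sketch of that approach: walk the pages and stop once comments older than the cutoff appear. The event ID, token, page size, and the newest-first ordering of filter=stream are assumptions based on the answer above; the data/paging.next shape is the Graph API's cursor-pagination format:

using System;
using System.Collections.Generic;
using System.Net.Http;
using System.Threading.Tasks;
using Newtonsoft.Json.Linq;

public static class CommentFetcher
{
    // Pages through /comments?filter=stream (assumed newest-first) and keeps
    // comments created after 'since', stopping once older ones are reached.
    public static async Task<List<JToken>> GetCommentsSinceAsync(
        string eventId, string accessToken, DateTime since)
    {
        var results = new List<JToken>();
        string url = $"https://graph.facebook.com/{eventId}/comments"
                   + $"?filter=stream&limit=100&access_token={accessToken}";
        using (var client = new HttpClient())
        {
            while (url != null)
            {
                var page = JObject.Parse(await client.GetStringAsync(url));
                foreach (var comment in page["data"])
                {
                    var created = (DateTime)comment["created_time"];
                    if (created < since) return results; // older than cutoff: stop
                    results.Add(comment);
                }
                url = (string)page["paging"]?["next"]; // cursor to the next page
            }
        }
        return results;
    }
}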
Using since/until for comments on a group
If you want to use since and until for comments, it is not possible directly on a group. So, first apply them to the statuses (feed), and then get the comments for each status.
This works for me:
{group_id}/?fields=feed.since(08/25/2016).until(08/31/2016){from,comments{from,message}}
Why don't you first try to filter by notifications? Notifications allow you to add parameters like since. For example (using Facebook Pages):
https://graph.facebook.com/PAGEID?fields=notifications.since(2015-3-31 00:00:00).limit(250).include_read(true)&{id,created_time,updated_time,unread,object,link}&access_token=ACCESSTOKEN
Once you have the JSON data, loop through it, get the ID, and send a second request, but this time using the PAGEID_POSTID edge. Something like this:
https://graph.facebook.com/PAGEID_POSTID/comments?fields=id,from{name,id},message,can_remove,created_time&limit=1000
Voilà! There's no need to read every comment!
Note 1: A Page access token is required, along with the manage_pages permission.
Note 2: Use the include_read parameter to get all the notifications, even those already read.
Note 3: In the second request, use "filter=stream" to order the posts and get the comments made in the name of your Page.
Note 4: Don't forget to control the asynchronicity once you loop!
Note 5: Notifications duplicate posts; use an array to avoid reading the same post more than once.
I do not know if it's too late, but yes, it works in Graph API version 3.3.
For example, if you want to get the comments on a post of a Facebook Page, you can do it like this:
You have to use a Page access token.
The GET Graph request: post_id/comments?since=some_date