Get the names of triggered rules executed on Decision Server from the client side (Red Hat Decision Manager) - drools

I am using the REST API to execute rules on Decision Server (Red Hat Decision Manager 7.2) using a stateless KIE session. I'm currently getting the number of triggered rules, but I also want to get the names of those rules. Is this possible?
// Configure the KIE Server client
KieServicesConfiguration conf = KieServicesFactory.newRestConfiguration(URL, USER, PASSWORD);
KieServicesClient kieServicesClient = KieServicesFactory.newKieServicesClient(conf);
RuleServicesClient ruleServicesClient = kieServicesClient.getServicesClient(RuleServicesClient.class);

// Build the batch command: insert the facts, then fire all rules
KieCommands kieCommands = KieServices.Factory.get().getCommands();
List<GenericCommand<?>> commands = new ArrayList<GenericCommand<?>>();
commands.add((GenericCommand<?>) kieCommands.newInsert(applicant, "applicant"));
commands.add((GenericCommand<?>) kieCommands.newInsert(loan, "loan"));
commands.add((GenericCommand<?>) kieCommands.newFireAllRules("numberOfFiredRules"));
BatchExecutionCommand batchCommand = kieCommands.newBatchExecution(commands, "default-stateless-ksession");

// Execute on the container and read the result
ServiceResponse<ExecutionResults> executeResponse =
        ruleServicesClient.executeCommandsWithResults("loan-application_1.2.0", batchCommand);
System.out.println("Number of fired rules: " + executeResponse.getResult().getValue("numberOfFiredRules"));

You have to use an AgendaEventListener to keep track of the rules that were executed. By implementing the org.kie.api.event.rule.AgendaEventListener interface you can capture these details.

To know which rules are triggered, I added an action column with custom code (an Action BRL fragment) that writes the rule name into one of the fields of my fact. You can get it from rule.name.
Example: myFact.logMyRuleName(rule.name)

Related

Need Yii2 Equivalent of Zend_Session_Namespace

I am currently migrating an old Zend Framework 1.1 website and need a replacement for its uses of Zend_Session_Namespace.
Does one exist for Yii2? Alternatively, is there a plugin or something similar to add this functionality?
Edit:
Specifically, I need the ability to set expiry timeouts and hop limits like Zend has.
Thank you.
UPDATE
The info you added in the edit was never mentioned earlier and makes your question too broad; you might create a separate question for that.
By default, session data is stored in files. The implementation locks a file from the moment a session is opened until it is closed, either by session_write_close() (in Yii this can be done as Yii::$app->session->close()) or at the end of the request. While the session file is locked, all other requests that try to use the same session are blocked, i.e. they wait for the initial request to release the session file. This can work for development or small projects, but when it comes to handling massive concurrent requests, it is better to use more sophisticated storage, such as a database.
Zend_Session_Namespace instances provide the primary API for manipulating session data in the Zend Framework, and namespaces are used to segregate all session data. If you are converting the script to the Yii2 framework, you might need to look into https://www.yiiframework.com/doc/api/2.0/yii-web-session
A simple example comparing the two functionalities:
Zend Framework 1.1 Counting Page Views
$defaultNamespace = new Zend_Session_Namespace('Default');
if (isset($defaultNamespace->numberOfPageRequests)) {
    // this will increment for each page load.
    $defaultNamespace->numberOfPageRequests++;
} else {
    $defaultNamespace->numberOfPageRequests = 1; // first time
}
echo "Page requests this session: ",
    $defaultNamespace->numberOfPageRequests;
Yii2 Framework Counting Page Views
public function actionIndex()
{
    $session = new \yii\web\Session();
    $session->open();
    $visits = $session->get('visits', 0);
    $visits = $visits + 1;
    $session->set('visits', $visits);
    return "Total visits $visits";
}

Facebook Messenger Bot Proactive/Push Notifications using Azure

I am building a bot for Facebook Messenger using the Microsoft Bot Framework. I am planning to use CosmosDB for state management and also as my backend data store. (I am not tied to CosmosDB and can use any other store if needed.)
I need to send daily/weekly proactive messages (push notifications) to users based on their time preference. I will be capturing their time preference when they first interact with the bot.
What is the best way to deliver these notifications?
As I will be storing these preferences in CosmosDB, I am thinking of using a CosmosDB trigger to create an Azure Function and schedule it based on the user's time preference. This Azure Function will make a call to my webhook, which will deliver these messages. If required, I will change the Function schedule when a user changes his/her preference.
My questions are:
Is this a good approach?
Are there any other alternatives (Notification Hubs?)
I should be able to set specific times for notifications (like at the top of the hour or something like that). Does it make sense to schedule an Azure Function to run at these hours rather than creating a function based on user preference? (I could actually combine these two approaches too.)
Thank you in advance.
First, I don't think there's any "right" answer to be given here; it's going to depend a lot on your domain's specific needs. Scale is going to play a major factor in the design of this. Will you have 100 users? 10000 users? 1mil users? I'm going to assume you want to design for maximum scale up front, but it could be overkill.
First, based on what you've described, I don't think a CosmosDB trigger is necessarily the solution to your problem because that's only going to fire when the preference data is created/updated. I assume that, from that point forward, your function needs to continuously fire at the time slot they've opted into, correct?
So let's pretend you let people choose from the 24hrs in the day. A naïve approach would be to simply use a scheduled trigger that fires up every hour, queries the CosmosDB for all the documents where the preference is set to that particular hour and then begins sending out notifications from there. The problem is how you scale from there and deal with issues of idempotency in the face of failures.
First off, a timer trigger only ever spins up one instance. If you were to just go query the CosmosDB documents and start processing them one by one in the scope of that single trigger, you'd hit a ceiling relatively quickly on how many notifications you can scale to. Instead what you'd want to do is use that timer trigger to fan out the notifications to as many "worker" function instances as possible. The timer trigger can act as the orchestrator in the sense that it can own the query against the CosmosDB and then turn each document result it finds for that particular notification time window into a message that it places on a queue to be processed by a separate function which will scale out on its own.
There are actually a couple of ways you can accomplish this with Azure Functions; it really depends on how early an adopter of technology you are comfortable with being.
The first is what I would call the "manual" way, which would be done by simply using the existing Azure Storage Queue extension: take an IAsyncCollector<YourNotificationWorkerMessage> as a parameter to the timer function that's bound to the worker queue and pump out the messages through that. Then you write a second companion function which uses a QueueTrigger, bind it to that same queue, and it will take care of processing each message. This second function is where you get the scaling, letting you process all of the queued messages as quickly as possible based on whatever scaling parameters you choose to configure. This is the "simplest" approach.
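Here is a minimal sketch of that first approach, assuming the standard Azure Storage Queue bindings; YourNotificationWorkerMessage, the "notification-work" queue name, and the QueryUsersForHourAsync/DeliverNotificationAsync helpers are placeholders for your own types and data access, not an existing API:
// Timer-triggered "orchestrator": runs at the top of every hour, queries CosmosDB for
// users who opted into this hour, and fans each one out onto a storage queue.
[FunctionName("FanOutNotifications")]
public static async Task FanOut(
    [TimerTrigger("0 0 * * * *")] TimerInfo timer,
    [Queue("notification-work")] IAsyncCollector<YourNotificationWorkerMessage> queue,
    ILogger log)
{
    int hour = DateTime.UtcNow.Hour;
    // Placeholder for the CosmosDB query returning users whose preference matches this hour
    foreach (var user in await QueryUsersForHourAsync(hour))
    {
        await queue.AddAsync(new YourNotificationWorkerMessage { UserId = user.Id, Hour = hour });
    }
}

// Queue-triggered "worker": scales out on its own and delivers one notification per message.
[FunctionName("SendNotification")]
public static async Task Send(
    [QueueTrigger("notification-work")] YourNotificationWorkerMessage message,
    ILogger log)
{
    // Placeholder for the call to your webhook / bot endpoint
    await DeliverNotificationAsync(message);
}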
The second approach would be to adopt the newer Durable Functions extension. With that model, you don't have to directly think about creating a worker queue. You simply kick off a new instance of your orchestrator function from the timer function, and the orchestrator fans out the work by invoking N "concurrent" calls to an activity function for each notification. Now, it happens to distribute those calls using queues under the covers, but that's an implementation detail that you no longer need to maintain yourself. Additionally, if delivering the notification requires more involved work and/or retry logic, you might actually consider using a sub-orchestration instead of a simple activity. Finally, another added benefit of this approach is that you can "fan back in" to your main orchestrator function once all the notifications are delivered to do some follow-up work... even if that's simply some kind of event logging that the notification cycle has completed for this hour.
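And a rough sketch of the Durable Functions variant, assuming the Durable Functions 2.x bindings; the GetUsersForHour, SendNotification, and LogCycleCompleted activity functions are hypothetical names standing in for your own CosmosDB query, delivery call, and follow-up work:
// The timer function only starts an orchestration; Durable Functions distributes the work.
[FunctionName("HourlyNotificationTimer")]
public static async Task StartCycle(
    [TimerTrigger("0 0 * * * *")] TimerInfo timer,
    [DurableClient] IDurableOrchestrationClient starter,
    ILogger log)
{
    string instanceId = await starter.StartNewAsync("NotificationOrchestrator", null);
    log.LogInformation($"Started notification orchestration {instanceId}");
}

[FunctionName("NotificationOrchestrator")]
public static async Task Orchestrate(
    [OrchestrationTrigger] IDurableOrchestrationContext context)
{
    // CurrentUtcDateTime is the replay-safe clock inside an orchestrator.
    int hour = context.CurrentUtcDateTime.Hour;

    // Activity that owns the CosmosDB query for this hour (hypothetical name).
    var users = await context.CallActivityAsync<List<string>>("GetUsersForHour", hour);

    // Fan out: one activity call per notification, distributed via queues by the framework.
    var sends = users.Select(u => context.CallActivityAsync("SendNotification", u));
    await Task.WhenAll(sends);

    // Fan back in: follow-up work once every notification for this hour is delivered.
    await context.CallActivityAsync("LogCycleCompleted", hour);
}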
Now, the challenge with either of these approaches is actually dealing with failure when initially fetching the candidates for notification from CosmosDB, paging through the results, and making sure you actually fan all of them out in an idempotent manner. You need to deal with possible hiccups as you page, and you need to deal with the fact that your whole function could be torn down and you might have to restart. Perhaps on the initial run of the 8AM notifications you got through page 273 out of 371 pages and then you got hit with a complete network connectivity failure, or the VM your function was running on suffered a power failure. You could resume, but you'd need to know that you left off on page 273, that you actually processed the 27th record on that page, and that you should start from there. Otherwise, you risk sending double notifications to your users. Maybe that's something you can accept, maybe it's not. Maybe you're OK with the 27 notifications on that page being duplicated as long as the first 272 pages aren't. Again, this is something you need to decide for your domain, but if you want to avoid this issue, your orchestrator function will need to track its progress to ensure that it doesn't send out dupes. Again, I would say Durable Functions has a leg up here as it comes with the ability to configure retries. Maintaining the state of a particular run is left up to the author in either approach, though.
I use proactive dialogs extensively with the Bot Framework and Messenger without any issue. During your Facebook approval process you simply need to inform them that you will be sending notifications through Messenger with your bot. Usually, if you use it to inform your users and stay away from promotional content, you should be fine.
I also use an Azure Function to trigger the proactive dialog from a custom controller endpoint.
Below is sample code for the Azure Function:
public static void Run(TimerInfo notificationTrigger, TraceWriter log)
{
    try
    {
        //Serialize request object
        string timerInfo = JsonConvert.SerializeObject(notificationTrigger);
        //Create a request for bot service with security token
        HttpRequestMessage hrm = new HttpRequestMessage()
        {
            Method = HttpMethod.Post,
            RequestUri = new Uri(NotificationEndPointUrl),
            Content = new StringContent(timerInfo, Encoding.UTF8, "application/json")
        };
        hrm.Headers.Add("Authorization", NotificationApiKey);
        log.Info(JsonConvert.SerializeObject(hrm));
        //Call service
        using (var client = new HttpClient())
        {
            Task task = client.SendAsync(hrm).ContinueWith((taskResponse) =>
            {
                HttpResponseMessage result = taskResponse.Result;
                var jsonString = result.Content.ReadAsStringAsync();
                jsonString.Wait();
                if (result.StatusCode != System.Net.HttpStatusCode.OK)
                {
                    //Throw whatever problem arises as an exception with details
                    throw new Exception($"AzureFunction - ERROR - HTTP {result.StatusCode}");
                }
            });
            task.Wait();
        }
    }
    catch (Exception ex)
    {
        //TODO log
    }
}
Below is sample code for starting the proactive dialog:
public static async Task Resume<T, R>(string resumptionCookie) where T : IDialog<R>, new()
{
    //Deserialize reference to conversation
    ConversationReference conversationReference = JsonConvert.DeserializeObject<ConversationReference>(resumptionCookie);
    //Generate message from bot to user
    var message = conversationReference.GetPostToBotMessage();
    var builder = new ContainerBuilder();
    using (var scope = DialogModule.BeginLifetimeScope(Conversation.Container, message))
    {
        //From a cold start the service is not yet authenticated with dev bot azure services
        //We thus must trust endpoint url.
        if (!MicrosoftAppCredentials.IsTrustedServiceUrl(message.ServiceUrl))
        {
            MicrosoftAppCredentials.TrustServiceUrl(message.ServiceUrl, DateTime.MaxValue);
        }
        var botData = scope.Resolve<IBotData>();
        await botData.LoadAsync(CancellationToken.None);
        //This is our dialog stack
        var task = scope.Resolve<IDialogTask>();
        T dialog = scope.Resolve<T>(); //Resolve the dialog using autofac
        try
        {
            task.Call(dialog.Void<R, IMessageActivity>(), null);
            await task.PollAsync(CancellationToken.None);
        }
        catch (Exception ex)
        {
            //TODO log
        }
        finally
        {
            //flush dialog stack
            await botData.FlushAsync(CancellationToken.None);
        }
    }
}
Your dialog needs to be registered in Autofac.
Your resumptionCookie needs to be saved in your db.
You might want to check the Facebook policy regarding proactive messages.
There's a 24h limit, but it might not be a blocker in your case:
https://developers.facebook.com/docs/messenger-platform/policy/policy-overview#standard_messaging

Azure Batch - Setting custom user identity for tasks

I am using the Azure Batch C# Client API 6.1. I am trying to have all my tasks run under the same user identity.
I am setting a custom user identity as below, as per the MSDN documentation.
var task = new CloudTask("{guid}", "command string")
{
    DisplayName = "display name",
    UserIdentity = new UserIdentity("customUserid")
};
However, when the job runs, the task executes under a random user account.
Would anyone know how to make it work, or whether it is even supported by the backend Azure Batch service?
Thanks in advance
In order to use named user accounts, you need to first specify a list of UserAccount on your CloudPool during creation.
pool.UserAccounts = new List<UserAccount>
{
    new UserAccount("myadminaccount", "adminpassword", ElevationLevel.Admin),
    new UserAccount("mynonadminaccount", "nonadminpassword", ElevationLevel.NonAdmin),
};
You will then be able to execute tasks assigned to this pool with UserIdentity properties as you have in your example.
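For illustration, a task bound to one of the named accounts defined above could look like this (the task id and command line are placeholders):
// The UserIdentity name must match a UserAccount defined on the pool at creation time.
var task = new CloudTask("mytask", "cmd /c echo hello")
{
    DisplayName = "runs as the pool's named account",
    UserIdentity = new UserIdentity("mynonadminaccount")
};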
Unfortunately the MSDN documentation for this feature is lagging behind currently, but should be updated soon.

Transaction problem with Entity Framework 4

I'm trying to implement a transaction with Entity Framework 4. From what I've read, the code below is correct. SaveChanges works fine, but as soon as I hit the first ExecuteFunction call I get the following exception:
The underlying provider failed on Open. ---> System.Transactions.TransactionManagerCommunicationException: Network access for Distributed Transaction Manager (MSDTC) has been disabled. Please enable DTC for network access in the security configuration for MSDTC using the Component Services Administrative tool.
I've logged on to the database server and I don't see a service called Distributed Transaction Manager, but I do see Distributed Transaction Coordinator and it is started. I'm not sure what I need to change to allow this to work. Does anyone know? Thanks.
Here's the code.
using (var h = new WhaleEntities(ConnectionHelper.DBConnectString))
{
    using (TransactionScope ts = new TransactionScope())
    {
        h.Sites.AddObject(s);
        h.SaveChanges(SaveOptions.DetectChangesBeforeSave);
        retval = s.SiteID;
        h.ExecuteFunction("UpdateSiteInterfaceList", new ObjectParameter("pSiteID", retval), new ObjectParameter("pList", "10"));
        h.ExecuteFunction("UpdateSiteInterfaceRequiredList", new ObjectParameter("pSiteID", retval), new ObjectParameter("pList", "Email"));
        h.ExecuteFunction("UpdateSiteInterfaceAlwaysShownList", new ObjectParameter("pSiteID", retval), new ObjectParameter("pList", "10"));
        h.ExecuteFunction("UpdateSiteInterfaceAlwaysRequiredList", new ObjectParameter("pSiteID", retval), new ObjectParameter("pList", "Email"));
        ts.Complete();
        //changes must be accepted manually once transaction succeeds.
        h.AcceptAllChanges();
    }
}
See here: How do I enable MSDTC on SQL Server?

What is the best way to log errors in Zend Framework 1?

We built an app in Zend Framework (v1) and have not done much to set up error reporting and logging. Is there any way we could get some level of error reporting without too much change to the code? Is there an ErrorHandler plugin available?
The basic requirement is to log errors that happen within the controllers, missing controllers, malformed URLs, etc.
I also want to be able to log errors within my controllers. Will using the error controller here help me identify and log errors within my controllers? How best to do this with minimal changes?
I would use Zend_Log with the following strategy.
If you are using Zend_Application in your app, there is a resource for logging. You can read more about the resource here.
My advice would be to choose between writing to a db or a log file stream. Write your log to a db if you plan on having some sort of web interface to it; if not, a flat file will do just fine.
You can set up logging to a file with this simple example:
resources.log.stream.writerName = "Stream"
resources.log.stream.writerParams.stream = APPLICATION_PATH "/../data/logs/application.log"
resources.log.stream.writerParams.mode = "a"
resources.log.stream.filterName = "Priority"
resources.log.stream.filterParams.priority = 4
Also, I would suggest sending critical errors to an email account that is checked regularly by your development team. The company I work for sends them to errors@companyname.com, which forwards to all of the developers from production sites.
From what I understand, you can't set up a Mail writer via a factory, so the resource won't do you any good, but you can probably set it up in your ErrorController or Bootstrap.
$mail = new Zend_Mail();
$mail->setFrom('errors@example.org')
     ->addTo('project_developers@example.org');
$writer = new Zend_Log_Writer_Mail($mail);
// Set subject text for use; a summary of the number of errors is appended to the
// subject line before sending the message.
$writer->setSubjectPrependText('Errors with script foo.php');
// Only email warning level entries and higher.
$writer->addFilter(Zend_Log::WARN);
$log = new Zend_Log();
$log->addWriter($writer);
// Something bad happened!
$log->error('unable to connect to database');
// On writer shutdown, Zend_Mail::send() is triggered to send an email with
// all log entries at or above the Zend_Log filter level.
You will need to do a little work on the above example, but the optimal solution would be to grab the log resource in your bootstrap file and add the email writer to it, instead of creating a second log instance.
You can use Zend_Controller_Plugin_ErrorHandler. As you can see on the documentation page, there is an example that checks for a missing controller/action and shows you how to set the appropriate headers.
You can then use Zend_Log to log your error messages to disk/db/mail.