How to make Keycloak 20.0.1 send an e-mail when a user is blocked due to too many failed login attempts? - keycloak

I want Keycloak to send an e-mail to a user whenever that user is blocked due to too many failed login attempts (see Realm Settings -> Security defenses -> Brute force detection).
The event in question has the following properties:
Error (org.keycloak.events.Event#getError) = user_temporarily_disabled
Type (org.keycloak.events.Event#getType) = LOGIN_ERROR
How can I do that, i.e. make Keycloak send an e-mail to the user when such an event occurs?
Known ways to implement it
One obvious way is to write a class that implements the org.keycloak.events.EventListenerProvider interface, detect the event in its onEvent method, and trigger the e-mail from some custom server (i.e. send a request to that server, which then contacts an SMTP server).
The second is a variation: detect the event in the same method and somehow make Keycloak itself send the e-mail using Keycloak's SMTP settings ("Realm settings -> Email -> Connection & Authentication").
The screenshot in this answer made me think (possibly wrongly) that there may be a way to make Keycloak send emails upon the occurrence of certain events "out of the box", i.e. without writing custom event listeners.
Update 1: If someone else wants to do this, I recommend looking at this answer. The code below worked for me.
// Inside the custom EventListenerProvider; "session" (KeycloakSession) and
// "model" (RealmProvider) are fields injected by the listener's factory.
RealmModel realm = this.model.getRealm(event.getRealmId());
UserModel user = this.session.users().getUserById(event.getUserId(), realm);
if (user != null && user.getEmail() != null) {
    org.keycloak.email.DefaultEmailSenderProvider senderProvider =
            new org.keycloak.email.DefaultEmailSenderProvider(session);
    try {
        // Uses the SMTP settings configured under Realm settings -> Email.
        senderProvider.send(realm.getSmtpConfig(), user, "test", "body test",
                "html test");
    } catch (EmailException e) {
        // Do not let a mail failure break the login flow; log it instead.
        e.printStackTrace();
    }
}

Keycloak does indeed support sending emails for events out of the box. However, it can only be configured per event type (LOGIN_ERROR), not per the more specific error (user_temporarily_disabled).
For this, you will need to implement your own EventListenerProvider, but it should be easy to borrow heavily from Keycloak's existing EmailEventListenerProvider, which you can find here: https://github.com/keycloak/keycloak/blob/main/services/src/main/java/org/keycloak/events/email/EmailEventListenerProvider.java
In there, you'd change the check in onEvent(Event event) (line 59 of that file) to test your two conditions (event type and error), rather than checking against a list of configured event types. Your event will be added to the currently running transaction, and when the transaction ends (in success or error), Keycloak will send an email via the SMTP settings configured in the realm.
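A minimal sketch of that change (hedged: it assumes you have copied the rest of the class from EmailEventListenerProvider, and it elides the part that queues the e-mail):

@Override
public void onEvent(Event event) {
    // Match the two conditions from the question instead of checking
    // against a configured list of event types.
    if (event.getType() == EventType.LOGIN_ERROR
            && "user_temporarily_disabled".equals(event.getError())) {
        // ...queue the e-mail so it is sent when the transaction
        // completes, exactly as the copied class already does...
    }
}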
If you want to customize the template and subject line of the email, you'll have to provide your own Freemarker templates in src/main/resources/theme-resources/templates/{html,text}. Both the html and text folders need to contain an .ftl file of the same name. Message keys for use in the template and the subject go in src/main/resources/messages/messages_{en,fr,de,...}.properties files.
With the template and messages configured, you can use one of the two send(...) methods available on the EmailTemplateProvider class.
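For illustration, a hedged sketch of such a call; the message key, template name, and attribute map are made up for this example, so check the EmailTemplateProvider interface of your Keycloak version for the exact signatures:

import java.util.Map;
import org.keycloak.email.EmailException;
import org.keycloak.email.EmailTemplateProvider;

// "accountDisabledSubject" must exist in messages_en.properties, and
// account-disabled.ftl in both templates/html and templates/text
// (both names are illustrative).
try {
    session.getProvider(EmailTemplateProvider.class)
            .setRealm(realm)
            .setUser(user)
            .send("accountDisabledSubject", "account-disabled.ftl",
                    Map.of("username", user.getUsername()));
} catch (EmailException e) {
    // Log and continue; do not break the login flow.
}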

Related

Facebook Messenger Bot Proactive/Push Notifications using Azure

I am building a bot for Facebook Messenger using the Microsoft Bot Framework. I am planning to use CosmosDB for state management and also as my backend data store. (I am not tied to CosmosDB and can use any other store if needed.)
I need to send daily/weekly proactive messages (push notifications) to users based on their time preference. I will capture their time preference when they first interact with the bot.
What is the best way to deliver these notifications?
As I will be storing these preferences in CosmosDB, I am thinking of using a CosmosDB trigger to create an Azure Function and schedule it based on the user's time preference. This Azure Function will call my webhook, which will deliver the messages. If required, I will change the Function schedule when a user changes his/her preference.
My questions are:
Is this a good approach?
Are there any other alternatives (Notification Hubs)?
I should be able to set specific times for notifications (like at the top of the hour or something like that). Does it make sense to schedule an Azure Function to run at these fixed hours rather than creating a function based on each user's preference? (I could actually combine these two approaches, too.)
Thank you in advance.
First, I don't think there's any "right" answer to be given here; it's going to depend a lot on your domain's specific needs. Scale is going to play a major role in the design of this. Will you have 100 users? 10,000? 1 million? I'm going to assume you want to design for maximum scale up front, but that could be overkill.
Based on what you've described, I don't think a CosmosDB trigger is the solution to your problem, because that will only fire when the preference data is created/updated. I assume that, from that point forward, your function needs to fire continuously at the time slot each user has opted into, correct?
So let's pretend you let people choose from the 24hrs in the day. A naïve approach would be to simply use a scheduled trigger that fires up every hour, queries the CosmosDB for all the documents where the preference is set to that particular hour and then begins sending out notifications from there. The problem is how you scale from there and deal with issues of idempotency in the face of failures.
First off, a timer trigger only ever spins up one instance. If you were to just go query the CosmosDB documents and start processing them one by one in the scope of that single trigger, you'd hit a ceiling relatively quickly on how many notifications you can scale to. Instead what you'd want to do is use that timer trigger to fan out the notifications to as many "worker" function instances as possible. The timer trigger can act as the orchestrator in the sense that it can own the query against the CosmosDB and then turn each document result it finds for that particular notification time window into a message that it places on a queue to be processed by a separate function which will scale out on its own.
There are actually a couple of ways you can accomplish this with Azure Functions; it really depends on how early an adopter of technology you are comfortable with being.
The first is what I would call the "manual" way: use the existing Azure Storage Queue extension, take an IAsyncCollector<YourNotificationWorkerMessage> bound to the worker queue as a parameter of the timer function, and pump the messages out through that. Then you write a second companion function with a QueueTrigger bound to that same queue, which takes care of processing each message. This second function is where you get the scaling, processing all of the queued messages as quickly as possible based on whatever scaling parameters you choose to configure. This is the "simplest" approach.
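To make the fan-out concrete, here is a rough sketch in Java using the Azure Storage Queue SDK directly, rather than the C# binding described above; the queue name, the connection setting, and the idea of passing user ids as messages are illustrative assumptions:

import com.azure.storage.queue.QueueClient;
import com.azure.storage.queue.QueueClientBuilder;
import java.util.List;

public class NotificationFanOut {
    // Called from the hourly timer function after querying CosmosDB for the
    // users who opted into this hour. One small message per user keeps each
    // worker invocation cheap and independently retryable.
    public void fanOut(List<String> userIdsForThisHour) {
        QueueClient queue = new QueueClientBuilder()
                .connectionString(System.getenv("AzureWebJobsStorage"))
                .queueName("notification-work-items") // illustrative name
                .buildClient();
        for (String userId : userIdsForThisHour) {
            queue.sendMessage(userId);
        }
    }
}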
The second approach would be to adopt the newer Durable Functions extension. With that model, you don't have to think about creating a worker queue directly. You simply kick off a new instance of your orchestrator function from the timer function, and the orchestrator fans out the work by invoking N "concurrent" calls to an activity function for each notification. It happens to distribute those calls using queues under the covers, but that's an implementation detail you no longer need to maintain yourself. Additionally, if delivering the notification requires more involved work and/or retry logic, you might consider using a sub-orchestration instead of a simple activity. Finally, another benefit of this approach is that you can "fan back in" to your main orchestrator function once all the notifications are delivered to do some follow-up work... even if that's simply logging that the notification cycle has completed for this hour.
Now, the challenge with either of these approaches is dealing with failure in initially fetching the candidates for notification from CosmosDB, paging through the results, and making sure you actually fan all of them out in an idempotent manner. You need to deal with possible hiccups as you page, and with the fact that your whole function could be torn down mid-run and have to restart. Perhaps on the initial run of the 8AM notifications you got through page 273 out of 371 pages and then were hit with a complete network connectivity failure, or the VM your function was running on suffered a power failure. You could resume, but you'd need to know that you left off on page 273, and that you had actually processed the 27th record of that page, and start from there. Otherwise, you risk sending duplicate notifications to your users. Maybe that's something you can accept, maybe it's not. Maybe you're OK with the 27 notifications on that page being duplicated as long as the first 272 pages aren't. Again, this is something you need to decide for your domain, but if you want to avoid the issue, your orchestrator function will need to track its progress to ensure that it doesn't send out dupes. I would say Durable Functions has a leg up here, as it comes with the ability to configure retries; maintaining the state of a particular run, though, is left up to the author in either approach.
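A minimal sketch of that progress tracking, with page-level granularity (the checkpoint store and the page representation are illustrative abstractions, not a real API):

import java.util.List;
import java.util.function.Consumer;

// Persisting a checkpoint per page bounds how many duplicate notifications
// a crash can cause to at most the size of one page.
interface CheckpointStore {
    int lastCompletedPage(String runId); // -1 for a fresh run
    void markPageComplete(String runId, int pageIndex);
}

class FanOutRun {
    void run(String runId, List<List<String>> pages,
             CheckpointStore store, Consumer<String> enqueue) {
        for (int p = store.lastCompletedPage(runId) + 1; p < pages.size(); p++) {
            pages.get(p).forEach(enqueue);    // may partially repeat after a crash
            store.markPageComplete(runId, p); // commit progress once the page is done
        }
    }
}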
I use proactive dialogs extensively with the Bot Framework and Messenger without any issue. During the Facebook approval process you simply need to inform them that your bot will send notifications through Messenger. Usually, if you use them to inform your users and stay away from promotional content, you should be fine.
I also use an Azure Function to trigger the proactive dialog from a custom controller endpoint.
Below is sample code for the Azure Function:
public static void Run(TimerInfo notificationTrigger, TraceWriter log)
{
    try
    {
        // Serialize the timer info as the request payload
        string timerInfo = JsonConvert.SerializeObject(notificationTrigger);
        // Create a request for the bot service with a security token
        HttpRequestMessage hrm = new HttpRequestMessage()
        {
            Method = HttpMethod.Post,
            RequestUri = new Uri(NotificationEndPointUrl),
            Content = new StringContent(timerInfo, Encoding.UTF8, "application/json")
        };
        hrm.Headers.Add("Authorization", NotificationApiKey);
        log.Info(JsonConvert.SerializeObject(hrm));
        // Call the service
        using (var client = new HttpClient())
        {
            Task task = client.SendAsync(hrm).ContinueWith((taskResponse) =>
            {
                HttpResponseMessage result = taskResponse.Result;
                var jsonString = result.Content.ReadAsStringAsync();
                jsonString.Wait();
                if (result.StatusCode != System.Net.HttpStatusCode.OK)
                {
                    // Surface whatever problem occurred as an exception with details
                    throw new Exception($"AzureFunction - ERROR - HTTP {result.StatusCode}");
                }
            });
            task.Wait();
        }
    }
    catch (Exception ex)
    {
        // TODO: log the exception
    }
}
Below is sample code for starting the proactive dialog:
public static async Task Resume<T, R>(string resumptionCookie) where T : IDialog<R>, new()
{
    // Deserialize the reference to the conversation
    ConversationReference conversationReference = JsonConvert.DeserializeObject<ConversationReference>(resumptionCookie);
    // Generate a message from bot to user
    var message = conversationReference.GetPostToBotMessage();
    var builder = new ContainerBuilder();
    using (var scope = DialogModule.BeginLifetimeScope(Conversation.Container, message))
    {
        // From a cold start the service is not yet authenticated with the bot's
        // Azure services, so we must trust the endpoint URL.
        if (!MicrosoftAppCredentials.IsTrustedServiceUrl(message.ServiceUrl))
        {
            MicrosoftAppCredentials.TrustServiceUrl(message.ServiceUrl, DateTime.MaxValue);
        }
        var botData = scope.Resolve<IBotData>();
        await botData.LoadAsync(CancellationToken.None);
        // This is our dialog stack
        var task = scope.Resolve<IDialogTask>();
        // Resolve the dialog using Autofac
        T dialog = scope.Resolve<T>();
        try
        {
            task.Call(dialog.Void<R, IMessageActivity>(), null);
            await task.PollAsync(CancellationToken.None);
        }
        catch (Exception ex)
        {
            // TODO: log the exception
        }
        finally
        {
            // Flush the dialog stack
            await botData.FlushAsync(CancellationToken.None);
        }
    }
}
Your dialog needs to be registered in Autofac.
Your resumptionCookie needs to be saved in your database.
You might want to check Facebook's policy regarding proactive messages. There is a 24-hour limit, but it may not be a blocker in your case:
https://developers.facebook.com/docs/messenger-platform/policy/policy-overview#standard_messaging

When using MDA, should you differentiate between idempotent and non-idempotent event handlers?

The question assumes the use of Event Sourcing.
When rebuilding current state by replaying events, event handlers should be idempotent. For example, when a user successfully updates their username, a UsernameUpdated event might be emitted, the event containing a newUsername string property. When rebuilding current state, the appropriate event handler receives the UsernameUpdated event and sets the username property on the User object to the newUsername property of the UsernameUpdated event object. In other words, the handling of the same message multiple times always yields the same result.
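Rendered as a minimal sketch (names taken from the example above), the handler is a pure assignment, so replaying it any number of times leaves the same state:

class UsernameUpdated {
    private final String newUsername;
    UsernameUpdated(String newUsername) { this.newUsername = newUsername; }
    String getNewUsername() { return newUsername; }
}

class User {
    private String username;

    // Applying the same event twice sets the same value twice: idempotent.
    void apply(UsernameUpdated event) {
        this.username = event.getNewUsername();
    }
}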
However, how does such an event handler work when integrating with external services? For example, if the user wants to reset their password, the User object might emit a PasswordResetRequested event, which is handled by a portion of code that issues a 3rd party with a command to send an SMS. Now when the application is rebuilt, we do NOT want to re-send this SMS. How is this situation best avoided?
There are two kinds of messages involved in the interaction: commands and events.
I do not regard the system messages in a messaging infrastructure as the same thing as domain events. Command-message handling should be idempotent; event handlers typically would not need to be.
In your scenario I could tell the aggregate root 100 times to update the user name:
public UserNameChanged ChangeUserName(string username, IServiceBus serviceBus)
{
    if (_username.Equals(username))
    {
        return null;
    }
    serviceBus.Send(new SendEMailCommand(/* data */));
    return On(new UserNameChanged { UserName = username });
}

public UserNameChanged On(UserNameChanged @event)
{
    _username = @event.UserName;
    return @event;
}
The above code results in a single event, so reconstituting the aggregate would not produce any duplicate processing. Even if we had 100 UserNameChanged events the result would still be the same, since the On method does not perform any processing. I guess the point to remember is that the command side does all the real work and the event side is used only to change the state of the object.
The above isn't necessarily how I would implement the messaging but it does demonstrate the concept.
I think you are mixing two separate concepts here. The first is reconstructing an object, where the handlers are all internal methods of the entity itself. Sample code from the Axon framework:
public class MyAggregateRoot extends AbstractAnnotatedAggregateRoot {

    @AggregateIdentifier
    private String aggregateIdentifier;
    private String someProperty;

    public MyAggregateRoot(String id) {
        apply(new MyAggregateCreatedEvent(id));
    }

    // Constructor needed for reconstruction
    protected MyAggregateRoot() {
    }

    @EventSourcingHandler
    private void handleMyAggregateCreatedEvent(MyAggregateCreatedEvent event) {
        // Make sure the identifier is always initialized properly
        this.aggregateIdentifier = event.getMyAggregateIdentifier();
        // do something with someProperty
    }
}
Surely you wouldn't put code that talks to an external API inside an aggregate's method.
The second is replaying events on a bounded context, which could cause the problem you are talking about; depending on your case you may need to divide your event handlers into clusters.
See the Axon framework's documentation on this point to get a better understanding of the problem and the solution they went with:
Replaying Events on a Cluster
TL;DR: store the SMS identifier within the event itself.
A core principle of event sourcing is "idempotency". Events are idempotent, meaning that processing them multiple times will have the same result as if they were processed once. Commands are "non-idempotent", meaning that the re-execution of a command may have a different result for each execution.
The fact that aggregates are identified by UUIDs (with a very low chance of duplication) means that the client can generate the UUIDs of newly created aggregates. Process managers (a.k.a. "sagas") coordinate actions across multiple aggregates by listening to events in order to issue commands, so in this sense the process manager is also a "client". Because the process manager issues commands, it cannot be considered "idempotent".
One solution I came up with is to include the UUID of the soon-to-be-created SMS in the PasswordResetRequested event. This allows the process manager to only create the SMS if it does not yet already exist, hence achieving idempotency.
Sample code below (C++ pseudo-code):
// The event indicating a password reset was successfully requested.
class PasswordResetRequested : public Event {
public:
    PasswordResetRequested(const Uuid& userUuid, const Uuid& smsUuid, const std::string& passwordResetCode);

    const Uuid userUuid;
    const Uuid smsUuid;
    const std::string passwordResetCode;
};

// The user aggregate root.
class User {
public:
    PasswordResetRequested requestPasswordReset() {
        // Realistically, the password reset functionality would have its own
        // class with functionality like checking request timestamps,
        // generating the random code, etc.
        Uuid smsUuid = Uuid::random();
        passwordResetCode_ = generateRandomString();
        return PasswordResetRequested(userUuid_, smsUuid, passwordResetCode_);
    }

private:
    Uuid userUuid_;
    std::string passwordResetCode_;
};

// The process manager (aka "saga") for handling password resets.
class PasswordResetProcessManager {
public:
    void on(const PasswordResetRequested& event) {
        if (!smsRepository_.hasSms(event.smsUuid)) {
            smsRepository_.queueSms(event.smsUuid, "Your password reset code is: " + event.passwordResetCode);
        }
    }
};
There are a few things to note about the above solution:
Firstly, while the probability that two SMS UUIDs collide is very low, it can actually happen, which could cause several issues.
Communication with the external service could be prevented. For example, if user "bob" requests a password reset that generates an SMS UUID of "1234", and (perhaps two years later) user "frank" requests a password reset that generates the same SMS UUID "1234", the process manager will not queue frank's SMS because it thinks it already exists, so frank will never see it.
The read model could report incorrectly. Because of the duplicate UUID, the read side may display the SMS sent to "bob" when "frank" is viewing the list of SMSes the system sent him. And if the duplicate UUIDs were generated in quick succession, "frank" might even be able to reset "bob"'s password.
Secondly, moving the SMS UUID generation into the event means you must make the User aggregate aware of the PasswordResetProcessManager's functionality (though not of the process manager itself), which increases coupling. However, the coupling here is loose, in that the User is unaware of how to queue an SMS, only that an SMS should be queued. If the User class were to send the SMS itself, you could run into the situation in which the SmsQueued event is stored while the PasswordResetRequested event is not, meaning that the user would receive an SMS but the generated password reset code was never saved on the user, and so entering the code would not reset the password.
Thirdly, if a PasswordResetRequested event is generated but the system crashes before the PasswordResetProcessManager can create the SMS, then the SMS will eventually be sent, but only when the PasswordResetRequested event is re-played (which might be a long time in the future). E.g., the "eventual" part of eventual consistency could be a long time away.
The above approach works (and I can see that it should also work in more complicated scenarios, like the OrderProcessManager described here: https://msdn.microsoft.com/en-us/library/jj591569.aspx). However, I am very keen to hear what other people think about this approach.

Redemption: RDOMail accessing attachments and moving mail

I am trying to access the attachments of an RDOMail object. When I either search for a specific item using LINQ or just try to iterate through the list with a foreach, Outlook freezes and no exception is thrown.
Also, when I try to move the RDOMail to another folder, Outlook freezes and no exception is thrown.
I can accomplish both of these things using just the Outlook.MailItem.
Anyone have any ideas?
void store_OnNewMail(string entryId)
{
    RDOMail mail = _store.GetMessageFromID(entryId);
    RDOAttachment protocolAttachment = mail.Attachments.Cast<RDOAttachment>()
        .SingleOrDefault(attach => attach.FileName == "protocol.id");
    mail.Move(_hiddenDeliveryTrustFolder);
}
My guess is that the IMAP4 message available at the time of the NewMail event is just an envelope (header only, no body or attachments). When you access the attachments, the IMAP4 provider attempts to connect to the IMAP4 server to retrieve the data, but the call blocks because of a critical section held while the event is raised.
Try to bypass the IMAP4 provider layer:
RDOStore unwrappedStore = rSession.Stores.UnwrapStore(_store);
RDOMail mail = unwrappedStore.GetMessageFromID(entryId);
You can also save the message entry id in a variable and start a timer (use the one from the Forms namespace). When the timer event fires (you will be out of the NewMail event by then), you can open the message and process it.

Smack service discovery without login gives bad-request(400)

I am trying to discover the items that a pubsub service provides. When I log into the target server, I get the response successfully, but when I connect without logging in, I get a bad-request error.
This is the code:
ConnectionConfiguration config = new ConnectionConfiguration(serverAddress, 5222);
config.setServiceName(serviceName);
connection = new XMPPConnection(config);
connection.connect();
connection.login(userName, password); // !!! when I remove this line, the bad-request error is received

ServiceDiscoveryManager discoManager = ServiceDiscoveryManager.getInstanceFor(connection);
DiscoverItems items;
try {
    items = discoManager.discoverItems("pubsubservice." + serverName);
} catch (XMPPException e) {
    e.printStackTrace();
}
Is there a way to discover items when the user is not logged in, but the connection is established?
No, you must authenticate to send stanzas to any JID in XMPP (otherwise they would not be able to reply to you, since they wouldn't know your address).
Perhaps one option for you is anonymous authentication. Most servers support it, and it creates a temporary account on the server for you, with a temporary JID. You don't need a password, and login time is quick.
@MattJ is correct, and you could try using anonymous login. That will get you part way there.
Your current request will only get you the nodes, though; after that you would have to get the items for each node. It would be simpler to use PubSubManager to get the information you want, since it provides convenience methods for accessing/using all things pubsub.
Try the documentation here; the getAffiliations() method is what you are looking for.
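A hedged sketch of that route, using the Smack 3.x-era pubsub API to match the code in the question (the service address here follows the default naming mentioned below):

import org.jivesoftware.smackx.packet.DiscoverItems;
import org.jivesoftware.smackx.pubsub.PubSubManager;

// After connection.connect() (and anonymous login, if the server
// requires some form of authentication):
PubSubManager manager = new PubSubManager(connection, "pubsub." + serverName);
// Discover the top-level nodes the service exposes.
DiscoverItems nodes = manager.discoverNodes(null);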
BTW, I believe the typical default service name for pubsub is pubsub, not pubsubservice. At least this is the case for Openfire.

Changing the From field for an email activity in a plug-in

When an email is sent to a queue and there is a contact associated with the "From" address in CRM, the system automatically fills in the "From" field with that contact when the email is promoted to an email activity. However, if a user with the same email address also exists in CRM, the system always picks the system user instead of the contact. I need to override this behaviour to ALWAYS pick the contact if one with that email exists.
I created a post-operation plug-in (I tried a pre-operation plug-in, too) on the Create event of email, trying to override the From field. The problem is, it does not work. When I debug the plug-in, it goes quietly past the assignment without any errors, and then the same plug-in fires for the same email again. And again. And again.
When I instead create a new email and use the same ActivityList[] I was trying to use on the entity that triggered the event, it works. It seems the problem is that CRM does not allow changing the From field from a plug-in, or am I doing something wrong? If it is a limitation enforced by CRM, is there a way around it?
My code is below:
var email = ((Entity)context.InputParameters["Target"]).ToEntity<Email>();
...
var oldFrom = ((EntityCollection)email.Attributes["from"]).Entities;
List<ActivityParty> newFrom = new List<ActivityParty>();
foreach (Entity party in oldFrom)
{
    EntityReference entRef = (EntityReference)party.Attributes["partyid"];
    if (entRef.LogicalName == SystemUser.EntityLogicalName)
        user = userLogic.Get(new Guid(entRef.Id.ToString()));
    if (user == null) return;
    string emailAddress = user.InternalEMailAddress;
    Contact contact = contactLogic.LookupPASIndividual("", emailAddress);
    if (contact != null)
    {
        newFrom.Add(new ActivityParty() { PartyId = new EntityReference(Contact.EntityLogicalName, contact.ContactId.Value) });
    }
    else
    {
        return;
    }
}
email.From = newFrom;
Update: I registered the plug-in on pre-validation now, and it is not triggered when an email activity is created by the router; it IS triggered when a user creates an email in CRM, though...
The problem is that you aren't actually changing the email that is being processed.
var email = ((Entity)context.InputParameters["Target"]).ToEntity<Email>();
This line converts the record that is currently being processed into a new object of type Email. You then modify that copy, which is not in the scope of the operation. You have to modify the From field of the Target entity itself (either directly or by writing your copy back).
For the processing stages, take a look at the Event Execution Pipeline. Pre-validation is too early for your task. I'm not quite sure when the address resolution is done, but I would try to do your conversion in Pre-Create.
I ended up using a workaround: I created an async post-event plug-in that associates the email activity with the contact if a contact with the same email exists, leaving the user associated with the email in the "From" field.