CRM 2011: How to get the step ID in which the plugin is executing

In CRM 2011, inside the Execute method of a plugin, how can I find the ID of the registered step that is executing? For instance, I have two steps registered on the pre-create of an account, so the Execute method runs twice, once for each step. Inside Execute I need to know the step ID of the step that is actually running, and I can't find it in the context.
UPDATE:
I'm updating here to explain the scenario, because the comments don't give me enough characters. So, the scenario:
I have a solution for autonumbering entities that enables users to format their numbers the way they want.
For that I have an entity (autonumber) where they configure the format, the entity, and the field they want to number. Every time an autonumber record is created, it dynamically creates and registers a step in the pre-operation of the Create message of the entity to be numbered, for example the account.
When that step executes, it loads the autonumber record to know how to number the account field.
The created step must be linked to the autonumber record, so the autonumber entity has an attribute to store the ID of the step. This attribute is filled in on the pre-create of the autonumber entity, when the step is created.
This link attribute allows the step to be unregistered when the user deletes the autonumber record, because the solution knows exactly which step to unregister. It also lets the user set the order in which the step executes if more plugins are registered on the account.
The problem I had was when I wanted to number two or more attributes of the same entity. In that case the users would create, let's say, two autonumber records in order to number two fields of the account, leaving two steps registered on the account. When the account is created, one step numbers one field and the other step numbers the other field. That's why I need to know the ID of the step that is executing, so I can load the right autonumber record.
Sorry for the tedious explanation; this scenario is a bit complex and I'm not sure I was clear enough, but if you want I'll try to clarify further.

The OwningExtension property available on the IPluginExecutionContext will return an EntityReference to the SdkMessageProcessingStep, which should provide all the information you need.
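For example, a minimal sketch (assuming the usual Microsoft.Xrm.Sdk usings; the autonumber entity and attribute names are just placeholders for the asker's scenario):

public void Execute(IServiceProvider serviceProvider)
{
    var context = (IPluginExecutionContext)serviceProvider
        .GetService(typeof(IPluginExecutionContext));

    // OwningExtension references the sdkmessageprocessingstep
    // that caused this execution.
    Guid stepId = context.OwningExtension.Id;

    // The step id could then be used to find the matching autonumber
    // record, e.g. (hypothetical entity/attribute names):
    // var query = new QueryByAttribute("new_autonumber");
    // query.AddAttributeValue("new_stepid", stepId.ToString());
}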
What are you trying to achieve by registering the same plugin twice for the same Message and Stage? I'm struggling to think of a valid scenario.

You can get the name of the message from the context. Usually, I do something similar to this.
public void Execute(IServiceProvider serviceProvider)
{
    // Standard boilerplate to obtain the execution context.
    IPluginExecutionContext context
        = (IPluginExecutionContext)serviceProvider
            .GetService(typeof(IPluginExecutionContext));

    // Dispatch on the message that triggered this execution.
    switch (context.MessageName)
    {
        case "Create":   ExecuteCreate();   break;
        case "Retrieve": ExecuteRetrieve(); break;
        case "Update":   ExecuteUpdate();   break;
        case "Delete":   ExecuteDelete();   break;
        default: ExecuteFunctionality(context.MessageName); break;
    }
}
Then, of course, you need to implement those methods too. Usually I also have a private field that holds the reference to the context; it's good to be able to access it easily when the need arises. And you can (and should) check whether the message is supported by your plug-in, whether there's a Target, and whether it's of the right entity type. Stuff like that.

Should the natural or surrogate key be returned in an API?

First time I think about it...
Until now, I have always used the natural key in my APIs. For example, in a REST API for dealing with entities, the URL would be like /entities/{id}, where id is a natural key known to the user (the ID is passed in the POST request that creates the entity). After the entity is created, the user can use multiple methods (GET, DELETE, PUT...) to manipulate it. The entity also has a surrogate key generated by the database.
Now, think about the following sequence:
A user creates entity with id 1. (POST /entities with body containing id 1)
Another user deletes the entity (DELETE /entities/1)
The same other user creates the entity again (POST /entities with body containing id 1)
The first user decides to modify the entity (PUT /entities/1 with body)
Before step 4 is executed, there is still an entity with id 1 in the database, but it is not the same entity created during step 1. The problem is that step 4 identifies the entity to modify based on the natural key which is the same for the deleted and new entity (while the surrogate key is different). Therefore, step 4 will succeed and the user will never know it is working on a new entity.
I generally also use optimistic locking in my applications, but I don't think it helps here. After step 1, the entity's version field is 0. After step 3, the new entity's version field is also 0. Therefore, the version check won't help. Is this the right case to use a timestamp field for optimistic locking?
Is the "good" solution to return the surrogate key to the user? That way, the user always provides the surrogate key to the server, which can use it to ensure it is working on the same entity and not on a new one.
Which approach do you recommend?
It depends on how you want your users to use your API.
REST APIs should try to be discoverable. So if there is benefit in exposing natural keys in your API because it will allow users to modify the URI directly and get to a new state, then do it.
A good example is categories or tags. We could have the following URIs:
GET /some-resource?tag=1 // returns all resources tagged with 'blue'
GET /some-resource?tag=2 // returns all resources tagged with 'red'
or
GET /some-resource?tag=blue // returns all resources tagged with 'blue'
GET /some-resource?tag=red // returns all resources tagged with 'red'
There is clearly more value to a user in the second group, as they can see that the tag is a real word. This then allows them to type ANY word in there to see what's returned, whereas the first group does not allow this: it limits discoverability.
A different example would be orders:
GET /orders/1 // returns order 1
or
GET /orders/some-verbose-name-that-adds-no-meaning // returns order 1
In this case there is little value in adding a verbose name to the order to make it discoverable. A user is more likely to want to view all orders first (or a subset), filter by date or price etc., and then choose an order to view.
GET /orders?orderBy={date}&order=asc
Additional
After our discussion over chat, your issue seems to be with versioning and how to manage resource locking.
If you allow resources to be modified by multiple users, you need to send a version number with every request and response. The version number is incremented when any changes are made. If a request sends an older version number when trying to modify a resource, throw an error.
In the case where you allow the same URIs to be reused, there is a potential for conflict, as the version number always begins at 0. In this case, you will also need to send over a GUID (surrogate key) along with the version number. Or don't use natural URIs (see the original answer above to decide when to do this or not).
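As a rough sketch of that check (types and names here are illustrative, not from any particular framework):

// The client sends both the surrogate key and the version it last saw.
public bool TryUpdate(string naturalId, Guid surrogateKey, int version, string newData)
{
    var current = store.FindByNaturalId(naturalId); // hypothetical data store
    if (current == null)
        return false; // nothing at this URI any more

    // Same URI but a different surrogate key means the resource was
    // deleted and recreated; the caller is working on a stale object.
    if (current.SurrogateKey != surrogateKey || current.Version != version)
        return false; // reject, e.g. with 409 Conflict or 412 Precondition Failed

    current.Data = newData;
    current.Version++;
    store.Save(current);
    return true;
}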
Another option is to disallow reuse of URIs. This really depends on the use case and your business requirements. It may be fine to reuse a URI, as conceptually it means the same thing. An example would be a folder on your computer: deleting the folder and recreating it is the same as emptying it. Conceptually the folder is the same 'thing', but with different properties.
A user account is probably an area where reusing URIs is not a good idea. If you delete an account /accounts/u1, that URI should be marked as deleted, and no other user should be able to create an account with the username u1. Conceptually, a new user using the same URI is not the same as the previous user using it.
It's interesting to see people trying to rediscover solutions to known problems. This issue is not specific to a REST API - it applies to any indexed storage. The only solution I have ever seen implemented is: don't re-use surrogate keys.
If you are generating your surrogate keys at the client, use UUIDs or split sequences, but for preference generate them server-side.
Also, you should never use surrogate keys to de-reference data if a simple natural key exists in the data. Indeed, even if the natural key is a compound one, you should consider very carefully whether to expose a surrogate key in the API.
You mentioned the possibility of using a timestamp as your optimistic locking mechanism.
Depending on how strictly you're following RESTful principles, the Entity returned by the POST will contain an "edit self" link; this is the URI against which a DELETE or PUT can be performed.
Taking your steps above as an example:
Step 1
User A does a POST of Entity 1. The returned Entity object will contain a "self" link indicating where updates should occur, like:
/entities/1/timestamp/312547124138
Step 2
User B gets the existing Entity 1, with the above "self" link, and performs a DELETE to that timestamp versioned URI.
Step 3
User B does a POST of a new Entity 1, which returns an object with a different "self" link, e.g.:
/entities/1/timestamp/312547999999
Step 4
User A, with the original Entity that they obtained in Step 1, tries doing a PUT to the "self" link on their object, which was:
/entities/1/timestamp/312547124138
...your service will recognise that, although Entity 1 does exist, User A is trying a PUT against a version which has since become stale.
The service can then perform the appropriate action. Depending how sophisticated your algorithm is, you could either merge the changes or reject the PUT.
I can't remember the appropriate HTTP status code that you should return following a PUT to a stale version... It's not something I've implemented in the REST framework that I work on, although I have planned to enable it in the future. It might be that you return a 410 ("Gone").
Step 5
I know you don't have a step 5, but...! User A, upon finding their PUT has failed, might re-retrieve Entity 1. This could be a GET to their (stale) version, i.e. a GET to:
/entities/1/timestamp/312547124138
...and your service would return a redirect to GET from either a generic URI for that object, e.g.:
/entities/1
...or to the specific latest version, i.e.:
/entities/1/timestamp/312547999999
They can then make the changes intended in Step 4, subject to any application-level merge logic.
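A rough sketch of the stale-version check described above, assuming the timestamp has already been parsed out of the request URI (the store and its methods are illustrative):

public HttpStatusCode HandlePut(int entityId, long requestTimestamp, string body)
{
    var current = store.Get(entityId); // hypothetical data store
    if (current == null)
        return HttpStatusCode.NotFound; // 404: no such entity at all

    if (current.Timestamp != requestTimestamp)
        return HttpStatusCode.Gone; // 410: the client's version is stale

    store.Update(entityId, body); // timestamps match: apply the update
    return HttpStatusCode.OK;
}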
Hope that helps.
Your problem can be solved either by using ETags for versioning (a record can only be modified if the current ETag is supplied) or by soft deletes (so the deleted record still exists, but with a trashed flag which is reset by a PUT).
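A minimal sketch of the soft-delete variant (field names are illustrative); the point is that the version number survives the delete, so a stale PUT can still be detected:

public void Delete(int id)
{
    var entity = store.Get(id); // hypothetical data store
    entity.Trashed = true;      // keep the row; the version number survives
    store.Save(entity);
}

public void Put(int id, int expectedVersion, string body)
{
    var entity = store.Get(id);
    if (entity.Version != expectedVersion)
        throw new InvalidOperationException("Stale version"); // reject, e.g. 409/412

    entity.Trashed = false; // a PUT resurrects a soft-deleted record
    entity.Data = body;
    entity.Version++;
    store.Save(entity);
}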
It sounds like you might also benefit from a batch endpoint and using transactions.

Accessing multi-level fields in a CRM 2011 Workflow

Sorry if this is sort of confusing, because I'm not sure how to word it. I am trying to create a workflow that runs against Accounts in Microsoft CRM 2011. One part of this workflow requires me to retrieve a field contained in the Business Unit of the User in the Account's "Created By" field. However, the workflow will only allow me to access the Business Unit itself, not any of its fields.
I'm wondering if there is a simple trick or work-around that will allow me access to this data.
Thanks!
For reference, the Account has a User, who has a Business Unit, and the Business Unit has a field I need to access. CRM, however, doesn't let me go more than two levels deep when accessing fields.
Clunky but do-able if you accept a bit of denormalisation (temporarily or otherwise). I'll assume for the sake of example you want to get at the "cost centre" field from the BU.
Add a field on the User entity to temporarily hold the value from the BU (so make it the same type and length, text(100) in this case), and optionally put it on the form.
Create a child workflow for the User entity to update the user with the "cost centre" value from their BU. Make it only available to run as a child, not on demand or anything else. Activate it.
In your Account workflow, add a step to call the child workflow against the relevant user (eg Created By in your case).
Add a step to wait until the new cost centre field on the user record contains data.
Now do whatever you need to with the value from the user record, such as update the Account, or do some branched logic.
Whatever you do, once you have used the value, clear the field on the user record, or do this as the last step of the workflow.
Now, since Users don't change BU very often, you might actually just go ahead and keep that value on the User record permanently, and instead of a child workflow, simply run this on create of a new user, or on change of BU, and store the value permanently on the User record. Yes, it is 'denormalised' and not purest SQL design, but then you don't need a child workflow, you don't need a wait state and you don't have to clear the value at the end, or worry about what happens when two Accounts need to run their workflow at the same time. I include the more general approach above as this might apply to other records which do change their parent quite often.
Just an additional thought - you can access the "owning business unit" of the Account, but this will be the BU of the Owning User rather than the Created By. Is your business process such that these would normally be the same person? (eg users only have Create privilege to "user owned" depth, so can only create records they own).
If so, then you could get at the BU directly from the Account, and then any fields on it too (in a condition or to update the Account).
An alternative which is less ideal but takes a similar approach: add a relationship from Account to BU (eg "created BU"). Now you can update the Account with this by referring to the Created By user's BU, then in the next step reference this value from the Account. This is again denormalised, and less preferable since the number of Accounts is far greater than the number of Users, so the level of duplicated information is much higher.
You can't go deeper with the standard steps of a workflow.
The solution is to create a custom workflow activity; you can start from this article:
http://msdn.microsoft.com/en-us/library/gg328515.aspx
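A minimal sketch of such a custom workflow activity for this scenario; the "new_costcentre" field and the argument names are assumptions for illustration:

using System.Activities;
using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Sdk.Query;
using Microsoft.Xrm.Sdk.Workflow;

public class GetCreatedByBusinessUnitField : CodeActivity
{
    [Input("User")]
    [ReferenceTarget("systemuser")]
    public InArgument<EntityReference> User { get; set; }

    [Output("Cost Centre")]
    public OutArgument<string> CostCentre { get; set; }

    protected override void Execute(CodeActivityContext context)
    {
        var workflowContext = context.GetExtension<IWorkflowContext>();
        var factory = context.GetExtension<IOrganizationServiceFactory>();
        var service = factory.CreateOrganizationService(workflowContext.UserId);

        // Hop 1: user -> business unit.
        var user = service.Retrieve("systemuser", User.Get(context).Id,
            new ColumnSet("businessunitid"));
        var buRef = user.GetAttributeValue<EntityReference>("businessunitid");

        // Hop 2: business unit -> the field the workflow can't normally reach.
        var bu = service.Retrieve("businessunit", buRef.Id,
            new ColumnSet("new_costcentre"));
        CostCentre.Set(context, bu.GetAttributeValue<string>("new_costcentre"));
    }
}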

CQRS - When a command cannot resolve to a domain

I'm trying to wrap my head around CQRS. I'm drawing from the code example provided here. Please be gentle; I'm very new to this pattern.
I'm looking at a logon scenario. I like this scenario because it's not really demonstrated in any examples I've read. In this case I do not know what the aggregate id of the user is, or even if there is one, as all I start with is a username and password.
In the fohjin example, events are always fired from the domain (if needed) and the command handler calls some method on the domain. However, if a user logon is invalid, I have no domain object to call anything on. Also, most, if not all, of the base Command/Event classes defined in the fohjin project pass around an aggregate id.
In the case of the event LogonFailure I may want to update a LogonAudit report.
So my question is: how to handle commands that do not resolve to a particular aggregate? How would that flow?
public void Execute(UserLogonCommand command)
{
    // User looked up by username somehow; should I query the report
    // database to resolve the username to an id?
    User user = null;

    if (user == null || user.Password != command.Password)
    {
        // What to do here? I want to raise an event somehow that
        // doesn't target a specific user.
    }
    else
    {
        user.LogonSuccessful();
    }
}
You should take into account that in most cases CQRS and DDD are suitable for just some parts of a system. It is very uncommon to model an entire system with CQRS concepts - it fits best in the parts with a complex business domain, and I wouldn't call logging a user in a particularly complex business scenario. In fact, in most cases it's not business-related at all. The actual business domain starts when the user is already identified.
Another thing to remember is that, due to eventual consistency, it is extremely beneficial to check as much as we can using only the query side, without even creating any commands/events.
Assuming, however, that the information about successful/failed user log-ins is meaningful, I'd model your scenario with the following steps:
User provides name and password
Name/password is validated against some kind of query database
If the provided credentials are valid, RegisterValidUserCommand(userId) is executed, which results in the proper event
If the provided credentials are not valid, RegisterInvalidCredentialsCommand(providedUserName) is executed, which results in the proper event
The point is that checking user credentials is not necessarily part of the business domain.
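A sketch of that flow in code (the command classes are the ones suggested above; the bus, query-side lookup, and Hash helper are illustrative assumptions):

public void HandleLogonRequest(string userName, string password)
{
    // Validation happens purely on the query side; no aggregate is loaded.
    var user = queryDatabase.FindByUserName(userName); // hypothetical read-side lookup

    if (user != null && user.PasswordHash == Hash(password))
        bus.Send(new RegisterValidUserCommand(user.Id));
    else
        bus.Send(new RegisterInvalidCredentialsCommand(userName));
}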
That said, there is another related concept: not every command or event needs to be business-related, so it is possible to handle commands and events that don't need aggregates to be loaded.
For example, say you want to change data that is informational only and in no way affects the business concepts of your system, like information about a person's sex (once again, assuming it has no business meaning).
In that case, when you handle SetPersonSexCommand, there's actually no need to load an aggregate, as that information doesn't even have to live on an entity; instead you create a PersonSexSetEvent, register it, and publish it so the query side can project it to the screen/report.
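In code, such a handler might look like this (again a sketch; the eventStore and eventBus infrastructure names are assumptions):

public void Execute(SetPersonSexCommand command)
{
    // No aggregate is loaded; the event is created, stored, and published directly.
    var @event = new PersonSexSetEvent(command.PersonId, command.Sex);
    eventStore.Append(@event); // register it
    eventBus.Publish(@event);  // let the query side project it to the screen/report
}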

How to get a list of aggregates using JOliver's CommonDomain and EventStore?

The repository in CommonDomain only exposes GetById(). So what should I do if my handler needs a list of Customers, for example?
Taking your question at face value: if you needed to perform operations on multiple aggregates, you would just provide the IDs of each aggregate in your command (which the client would obtain from the query side), then get each aggregate from the repository.
However, looking at one of your comments in response to another answer, I see that what you are actually referring to is set-based validation.
This very question has raised quite a lot of debate about how to do this, and Greg Young has written a blog post on it.
The classic question is: how do I check that the username hasn't already been used when processing my CreateUserCommand? I believe the suggested approach is to assume that the client has already done this check by asking the query side before issuing the command. When the user aggregate is created, the UserCreatedEvent will be raised and handled by the query side. There, the insert will fail (either because of a check or a unique constraint in the DB), and a compensating command would be issued, which would delete the newly created aggregate and perhaps email the user telling them the username is already taken.
The main point is, you assume that the client has done the check. I know this approach is difficult to grasp at first - but it's the nature of eventual consistency.
Also you might want to read this other question which is similar, and contains some wise words from Udi Dahan.
In the classic event sourcing model, queries like "get all customers" would be carried out by a separate query handler, which listens to all events in the domain and builds a query model to satisfy the relevant questions.
If you need to query customers by last name, for instance, you could listen to all customer created and customer name change events and just update one table of last-name to customer-id pairs. You could hold other information relevant to the UI that is showing the data, or you could simply hold IDs and go to the repository for the relevant customers in order to work further with them.
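A bare-bones sketch of such a projection (the event types and their shapes are assumptions):

using System;
using System.Collections.Generic;
using System.Linq;

public class CustomerLastNameProjection
{
    // The last-name lookup table: customer id -> last name.
    private readonly Dictionary<Guid, string> lastNames = new Dictionary<Guid, string>();

    public void Handle(CustomerCreatedEvent e) { lastNames[e.CustomerId] = e.LastName; }
    public void Handle(CustomerNameChangedEvent e) { lastNames[e.CustomerId] = e.NewLastName; }

    public IEnumerable<Guid> FindIdsByLastName(string lastName)
    {
        return lastNames.Where(p => p.Value == lastName).Select(p => p.Key);
    }
}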
You don't need a list of customers in your handler. Each aggregate MUST be processed in its own transaction. If you want to show this list to the user, just build an appropriate view.
Your command needs to contain the id of the aggregate root it should operate on.
This id will be looked up by the client sending the command, using a view in your read model. This view will be populated with data from the events that your AR emits.

Accessing a class that is related to two other classes

Given the following tables: User, Trial, UserTrial. A User has multiple Trials; a Trial does not internally map to any Users and contains details about the trial (name, description, settings); and a UserTrial contains information specific to an instance of a User's trial (an expiration date, for example). What would be the proper way for the controller of an MVC application to access data about a UserTrial?
Additional Details
There is no ORM
Each class is dual-purposed: it can create new, or load existing, Users, Trials, or UserTrials. The constructor loads data when passed an ID, and the object is persisted with the method ->save().
It would seem that there are two options:
Option 1:
User.SetTrial()
User.GetUserTrial()
Option 2:
UserTrial.SetUser()
UserTrial.SetTrial()
UserTrial.GetSomeData()
Which is the most appropriate usage?
I don't think your option 1 will work, because if each User can have multiple Trials, then you'd need something like User.AddTrial(Trial), User.RemoveTrial(Trial), User.GetUserTrials().
Which design option you choose depends on whether you want to make UserTrial objects "first class" or not. Do you want to consider Users and Trials to be the primary objects with UserTrial objects just glue to hold the relations, or do you want UserTrial objects to be primary as well? If the former, you'll want something like your option 1; if the latter, you'll want something like your option 2.
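To make the difference concrete, here is a sketch of option 2 with UserTrial as a first-class object (C# is used for illustration; the member names are assumptions, and the question's constructor/->save() convention is mirrored):

using System;

public class UserTrial
{
    public User User { get; private set; }
    public Trial Trial { get; private set; }
    public DateTime ExpirationDate { get; set; }

    // Load an existing UserTrial by id, per the constructor convention in the question.
    public UserTrial(int id) { /* load fields from storage */ }

    // Or create a new one from its two related objects.
    public UserTrial(User user, Trial trial)
    {
        User = user;
        Trial = trial;
    }

    public void Save() { /* persist via the app's data access code */ }
}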