Add a unique custom field in Azure DevOps

Can I add a unique custom field to a work item?
So that when a new work item is added, a validation error occurs if a previously added work item already contains that value.
I've tried the "Rules" section of work item customization, but without success.

There is no built-in rule to enforce uniqueness. The only field that is guaranteed to be unique is the work item ID.
It is possible to create a custom control that uses the REST API to query whether the contents of a field are unique and have it enforce that uniqueness. But that approach has a few caveats. The rule will only be enforced in the UI; other experiences (like bulk changes, Excel, etc.) won't trigger this validation. Direct manipulation through the REST API won't either. And I would expect concurrency problems when you venture in this direction.
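As a rough illustration of that custom-control idea, here is a hedged sketch of the duplicate check it could perform: post a WIQL query to the work item tracking REST API and treat any hit as a duplicate. The organization, project, field reference name (Custom.MyUniqueField), and personal access token are placeholders, and a real extension would run this as TypeScript inside the control rather than as standalone Java.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Base64;

public class UniquenessCheck {
    // Returns true if some existing work item already carries this value.
    // Org, project and field name below are hypothetical placeholders.
    public static boolean valueAlreadyUsed(String value) throws Exception {
        String wiql = "{\"query\": \"SELECT [System.Id] FROM WorkItems"
                + " WHERE [Custom.MyUniqueField] = '" + value.replace("'", "''") + "'\"}";
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://dev.azure.com/my-org/my-project/_apis/wit/wiql?api-version=6.0"))
                .header("Content-Type", "application/json")
                .header("Authorization", "Basic " + Base64.getEncoder()
                        .encodeToString((":" + System.getenv("ADO_PAT")).getBytes()))
                .POST(HttpRequest.BodyPublishers.ofString(wiql))
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        // The response carries a "workItems" array; any entry means the value is taken.
        return response.body().contains("\"id\":");
    }
}

Even then, as noted above, this only guards the UI path, and two simultaneous saves can still race past the check.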

Related

Uber Cadence: want to store an object inside a workflow

I want to store an object inside a workflow and then retrieve it through the Cadence API.

ListOpenWorkflowExecutionsRequest listOpenWorkflowExecutionsRequest = new ListOpenWorkflowExecutionsRequest();
listOpenWorkflowExecutionsRequest.setDomain(DOMAIN);
listOpenWorkflowExecutionsRequest.setStartTimeFilter(startTimeFilter);
ListOpenWorkflowExecutionsResponse response =
    cadenceService.ListOpenWorkflowExecutions(listOpenWorkflowExecutionsRequest);

I am open to any solution.
Use the QueryWorkflowExecution API to retrieve information from a single workflow.
The list API is used to get lists of workflows without querying them directly. You can attach custom information (called memo) to a visibility record that is returned by a list API. Use WorkflowOptions.memo property to add it.
The memo is not indexable. If you want the ability to index on custom attributes, use the Search Attributes feature. Another feature of search attributes is that they are updatable from the workflow code using the upsertSearchAttributes API. So, for example, if the workflow code updates the "state" attribute on each state transition, it becomes possible to find all the workflows in a given state. Also, all the search attributes are returned by the list API, so their values can be shown in the UI list view even if they are not part of the search predicate. Note that this requires the Elasticsearch cluster integration to be enabled.
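A hedged sketch of both mechanisms with the Cadence Java client follows; the class, attribute keys and values are illustrative, and the setMemo/setSearchAttributes/upsertSearchAttributes calls are written as I recall them from the client library, so verify them against your client version.

import com.uber.cadence.client.WorkflowOptions;
import com.uber.cadence.workflow.Workflow;
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

public class VisibilityExample {

    // When starting a workflow: attach a memo (returned by list APIs but not
    // indexable) and a search attribute (indexable, needs Elasticsearch).
    static WorkflowOptions optionsWith(Object myObject) {
        Map<String, Object> memo = new HashMap<>();
        memo.put("myObject", myObject); // any value the client's serializer can handle

        Map<String, Object> searchAttributes = new HashMap<>();
        searchAttributes.put("state", "CREATED");

        return new WorkflowOptions.Builder()
                .setMemo(memo)
                .setSearchAttributes(searchAttributes)
                .build();
    }

    // Inside the workflow implementation, on each state transition:
    static void markShipped() {
        Workflow.upsertSearchAttributes(Collections.singletonMap("state", "SHIPPED"));
    }
}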

REST new ID with DDD Aggregate

This question seemed foolish to me at first sight, but then I realized that I don't have a proper answer yet, and interestingly I also didn't find a good explanation for it in my searches.
I'm new to Domain-Driven Design concepts, so even if the question is basic, feel free to add any considerations to it.
I'm designing a REST API to configure Server Instances, and I came up with an Aggregate called Instance that contains a List of Configurations; only one specific Configuration will be active at a given time.
To add a Configuration, one would call an endpoint POST /instances/{id}/configurations with the body containing the desired configuration. In response, if all is okay, it would receive an HTTP 204 with a Location header containing the new Configuration ID.
I'm planning to have only one Controller, InstanceController, that would call an InstanceService that would manipulate the Instance Aggregate and then store it to the Repo.
Since the IDs are generated by the repository, if I call Instance.addConfiguration and then InstanceRepository.store, how would I get the ID of the newly created configuration? I mean, it's a List, so it's not as trivial as calling Instance.configuration.identity.
One option would be to implement a method in Instance like getLastAddedConfiguration, but this seems really brittle.
What is the general approach in this situation?
the IDs are generated by the repository
You could remove this extra complexity. Since Configuration is an entity of the Instance aggregate, its ID only needs to be unique inside the aggregate, not across the whole application. Therefore, the easiest approach is for the Aggregate to assign the ConfigurationId in the Instance.addConfiguration method (the aggregate can easily ensure the uniqueness of the new ID). This method can return the new ConfigurationId (or the whole object with the ID, if necessary).
What is the general approach in this situation?
I'm not sure about the general approach, but in my opinion, the sooner you create the IDs the better. For Aggregates, you'd create the ID before storing it (maybe a GUID); for entities, the Aggregate can create it at the moment of creating/adding the entity. This allows you to perform other actions (e.g. publishing an event) using these IDs without having to store and retrieve them from the DB, which would necessarily have an impact on how you implement and use your repositories, and that is not ideal.
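As a minimal sketch of the aggregate assigning entity IDs itself (all names here are assumed, not taken from the question):

import java.util.ArrayList;
import java.util.List;

public class Instance {
    private final List<Configuration> configurations = new ArrayList<>();
    private long nextConfigurationId = 1; // uniqueness only matters inside this aggregate

    // The aggregate hands out the ID, so the caller gets it back immediately
    // and the controller can build the Location header before anything is stored.
    public ConfigurationId addConfiguration(String settings) {
        ConfigurationId id = new ConfigurationId(nextConfigurationId++);
        configurations.add(new Configuration(id, settings));
        return id;
    }
}

class ConfigurationId {
    final long value;
    ConfigurationId(long value) { this.value = value; }
}

class Configuration {
    final ConfigurationId id;
    final String settings;
    Configuration(ConfigurationId id, String settings) { this.id = id; this.settings = settings; }
}

One thing to keep in mind: when the Instance is rehydrated from storage, nextConfigurationId must be restored as the highest existing ID plus one.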

How to properly access children by filtering parents in a single REST API call

I'm rewriting an API to be more RESTful, but I'm struggling with a design issue. I'll explain the situation first and then my question.
SITUATION:
I have two sets of resources, users and items. Each user has a list of items, so the resource path would look something like this:
api/v1/users/{userId}/items
Also, each user has an isPrimary property, but only one user can be primary at a time. This means that if you want to get the primary user, you'd do something like this:
api/v1/users?isPrimary=true
This should return a single "primary" user.
I have a client of my API that wants to get the items of the primary user, but can't make two API calls (one to get the primary user and a second to get the items of that user, using the userId). Instead, the client would like to make a single API call.
QUESTION:
How should I go about designing an API that fetches the items of a single user in only one API call, when all the client has is the isPrimary query parameter for the user?
MY THOUGHTS:
I think I have a few options:
Option 1) api/v1/users?isPrimary=true will return the list of items along with the user data.
I don't like this one, because I have other API clients that call api/v1/users or api/v1/users?isPrimary=true to only get and parse through user data NOT item data. A user can have thousands of items, so returning those items every time would be taxing on both the client and the service.
Option 2) api/v1/users/items?isPrimary=true
I also don't like this one, because it's ugly and not really RESTful, since there is no {userId} in the path and isPrimary isn't a property of items.
Option 3) api/v1/users?isPrimary=true&isShowingItems=true
This is like the first one, but I use another query parameter to flag whether or not to show the items belonging to the user in the response. The problem is that the query parameter is misleading because there is no isShowingItems property associated with a user.
Any help that you all could provide will be greatly appreciated. Thanks in advance.
There's no real standard solution for this, and all of your solutions are, to my mind, valid. So my answer will be a bit subjective.
Have you looked at HAL for your API format? HAL has a standard way to embed data from one resource into another (using _embedded), and this sounds like a pretty valid use case for it.
The server can decide whether to embed the items based on a number of criteria, but one cheap solution might be to just add a query parameter like ?embed=items
Even if you don't use HAL, you could conceptually still copy this behavior, or adopt only the _embedded convention. At least that reuses an existing idea instead of building something new.
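For illustration, a hedged JAX-RS sketch of that conditional embed (the resource class, repositories, and entity types are all assumed, not from the answer):

import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.QueryParam;
import javax.ws.rs.core.Response;

@Path("/v1/users")
public class UserResource {

    private final UserRepository userRepository;   // assumed collaborators
    private final ItemRepository itemRepository;

    UserResource(UserRepository users, ItemRepository items) {
        this.userRepository = users;
        this.itemRepository = items;
    }

    @GET
    public Response getUsers(@QueryParam("isPrimary") boolean isPrimary,
                             @QueryParam("embed") String embed) {
        User user = userRepository.findPrimary();
        Map<String, Object> body = new LinkedHashMap<>();
        body.put("id", user.getId());
        body.put("isPrimary", true);
        if ("items".equals(embed)) {
            // HAL-style convention: embedded resources live under "_embedded",
            // so the potentially huge item list is only built when asked for.
            List<Item> items = itemRepository.findByUserId(user.getId());
            body.put("_embedded", Map.of("items", items));
        }
        return Response.ok(body).build();
    }
}

A client wanting the one-call behaviour would then issue GET /v1/users?isPrimary=true&embed=items, while existing clients calling GET /v1/users?isPrimary=true keep getting the lightweight response.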
Aside from that practical solution, there is nothing un-RESTful about exposing data at multiple endpoints. So if you created a resource like:
/v1/primary-user-with-items
Then this might be ugly and inconsistent with the rest of your API, but it is not inherently 'not RESTful' (sorry for the double negative).
You could include a List<User.Fieldset> parameter called fieldsets, and then include things if they are specified in fieldsets. This has the benefit that you can reuse the pattern by adding fieldsets onto any object in your API that has fields you might wish to include.
api/v1/users?isPrimary=true&fieldsets=items

Master Data Services - Domain based attributes

We are using Master Data Services as an MDM solution for our SQL Server BI environment. I have an entity containing a first name and a last name, and I have created a business rule that concatenates these two fields to form a full name, which is then stored in the "name" system field of the entity.
I use this as a domain-based entity in another entity. The user can then see the full name before linking it as an attribute in the second entity.
I want to be able to restrict users from capturing data in the first entity against the name attribute, because the business rule deals with the logic to populate this attribute. I have read that there are two ways to do this:
Set the display width of the attribute to zero. This does not seem to work; the Explorer version still shows a narrow version of the field in the rows, and the user can still edit the field in the detail pane.
Use security to make the attribute read-only. I have tried different combinations of this, but it seems that you cannot use this functionality for the name field (a system field).
This seems like pretty basic functionality that I require, and it seems there is no clear-cut way to do it in MDS.
Any assistance will be appreciated.
Thanks
We do exactly the same thing.
I tested it, and whether you create a new member or edit an existing one, the business rule simply overwrites the manually entered value in the name attribute.
Is there a specific business reason why you need to restrict data input in the name field? If it is for UX reasons, you can change the display name of the name attribute to something like 'Don't populate', or alternatively make it a '.', so the users won't know what to input.

Should the natural or surrogate key be returned in an API?

First time I think about it...
Until now, I have always used the natural key in my API. For example, in a REST API for dealing with entities, the URL would be like /entities/{id}, where id is a natural key known to the user (the ID is passed to the POST request that creates the entity). After the entity is created, the user can use multiple commands (GET, DELETE, PUT...) to manipulate it. The entity also has a surrogate key generated by the database.
Now, think about the following sequence:
A user creates entity with id 1. (POST /entities with body containing id 1)
Another user deletes the entity (DELETE /entities/1)
The same other user creates the entity again (POST /entities with body containing id 1)
The first user decides to modify the entity (PUT /entities/1 with body)
Before step 4 is executed, there is still an entity with id 1 in the database, but it is not the same entity created during step 1. The problem is that step 4 identifies the entity to modify based on the natural key, which is the same for the deleted and new entities (while the surrogate key is different). Therefore, step 4 will succeed and the user will never know they are working on a new entity.
I generally also use optimistic locking in my applications, but I don't think it helps here. After step 1, the entity's version field is 0. After step 3, the new entity's version field is also 0. Therefore, the version check won't help. Is this the right case for using a timestamp field for optimistic locking?
Is the "good" solution to return surrogate key to the user? This way, the user always provides the surrogate key to the server which can use it to ensure it works on the same entity and not on a new one?
Which approach do you recommend?
It depends on how you want your users to use your API.
REST APIs should try to be discoverable. So if there is benefit in exposing natural keys in your API because it will allow users to modify the URI directly and get to a new state, then do it.
A good example is categories or tags. We could have the following URIs:
GET /some-resource?tag=1 // returns all resources tagged with 'blue'
GET /some-resource?tag=2 // returns all resources tagged with 'red'
or
GET /some-resource?tag=blue // returns all resources tagged with 'blue'
GET /some-resource?tag=red // returns all resources tagged with 'red'
There is clearly more value to a user in the second group, as they can see that the tag is a real word. This then allows them to type ANY word in there to see what's returned, whereas the first group does not allow this: it limits discoverability.
A different example would be orders
GET /orders/1 // returns order 1
or
GET /orders/some-verbose-name-that-adds-no-meaning // returns order 1
In this case there is little value in adding a verbose name to the order to make it discoverable. A user is more likely to want to view all orders first (or a subset), filter by date or price etc., and then choose an order to view:
GET /orders?orderBy={date}&order=asc
Additional
After our discussion over chat, your issue seems to be with versioning and how to manage resource locking.
If you allow resources to be modified by multiple users, you need to send a version number with every request and response. The version number is incremented when any changes are made. If a request sends an older version number when trying to modify a resource, throw an error.
In the case where you allow the same URIs to be reused, there is a potential for conflict, as the version number always begins at 0. In this case, you will also need to send over a GUID (surrogate key) as well as a version number. Or don't use natural URIs (see the original answer above to decide when to do this or not).
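A minimal sketch of that check, with all names assumed: the client echoes back the version (and, where URIs are reused, the GUID) it last saw, and the server rejects stale writes.

public class OptimisticUpdate {

    // Throws if the client is working from a stale copy; otherwise applies
    // the change and bumps the version for the next writer.
    static void update(VersionedRecord current, String clientGuid, long clientVersion, String newData) {
        if (!current.guid.equals(clientGuid)) {
            // Same natural URI, but the record was deleted and recreated in between.
            throw new IllegalStateException("Record was replaced; reload before editing");
        }
        if (current.version != clientVersion) {
            throw new IllegalStateException("Stale version; reload before editing"); // map to HTTP 409
        }
        current.data = newData;
        current.version++;
    }
}

class VersionedRecord {
    String guid;  // surrogate key, unique per incarnation of the record
    long version; // incremented on every modification
    String data;
}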
There is another option, which is to disallow reuse of URIs. This really depends on the use case and your business requirements. It may be fine to reuse a URI, as conceptually it means the same thing. An example would be if you had a folder on your computer: deleting the folder and recreating it is the same as emptying the folder. Conceptually the folder is the same 'thing', but with different properties.
User account is probably an area where reusing URIs is not a good idea. If you delete an account /accounts/u1, that URI should be marked as deleted, and no other user should be able to create an account with username u1. Conceptually, a new user using the same URI is not the same as when the previous user was using it.
It's interesting to see people trying to rediscover solutions to known problems. This issue is not specific to a REST API; it applies to any indexed storage. The only solution I have ever seen implemented is: don't re-use surrogate keys.
If you are generating your surrogate key at the client, use UUIDs or split sequences; but for preference, do it server-side.
Also, you should never use surrogate keys to de-reference data if a simple natural key exists in the data. Indeed, even if the natural key is a compound entity, you should consider very carefully whether to expose a surrogate key in the API.
You mentioned the possibility of using a timestamp for your optimistic locking.
Depending how strictly you're following RESTful principles, the Entity returned by the POST will contain an "edit self" link; this is the URI to which a DELETE or PUT can be sent.
Taking your steps above as an example:
Step 1
User A does a POST of Entity 1. The returned Entity object will contain a "self" link indicating where updates should occur, like:
/entities/1/timestamp/312547124138
Step 2
User B gets the existing Entity 1, with the above "self" link, and performs a DELETE to that timestamp versioned URI.
Step 3
User B does a POST of a new Entity 1, which returns an object with a different "self" link, e.g.:
/entities/1/timestamp/312547999999
Step 4
User A, with the original Entity that they obtained in Step 1, tries doing a PUT to the "self" link on their object, which was:
/entities/1/timestamp/312547124138
...your service will recognise that, although Entity 1 does exist, User A is trying a PUT against a version which has since become stale.
The service can then perform the appropriate action. Depending how sophisticated your algorithm is, you could either merge the changes or reject the PUT.
I can't remember the appropriate HTTP status code that you should return following a PUT to a stale version... It's not something that I've implemented in the REST framework that I work on, although I have planned to enable it in future. It might be that you return a 410 ("Gone"); 409 ("Conflict") and 412 ("Precondition Failed") are also commonly used for this.
Step 5
I know you don't have a step 5, but...! User A, upon finding their PUT has failed, might re-retrieve Entity 1. This could be a GET to their (stale) version, i.e. a GET to:
/entities/1/timestamp/312547124138
...and your service would return a redirect to GET from either a generic URI for that object, e.g.:
/entities/1
...or to the specific latest version, i.e.:
/entities/1/timestamp/312547999999
They can then make the changes intended in Step 4, subject to any application-level merge logic.
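A hedged JAX-RS sketch of that redirect behaviour (the entity type, repository, and getter names are assumed, not from the answer):

import java.net.URI;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.core.Response;

@Path("/entities")
public class EntityResource {

    private final EntityRepository repository; // assumed collaborator

    EntityResource(EntityRepository repository) {
        this.repository = repository;
    }

    // GET on a timestamp-versioned URI: serve the entity if that version is
    // still current, otherwise redirect the client to the latest version.
    @GET
    @Path("/{id}/timestamp/{ts}")
    public Response get(@PathParam("id") long id, @PathParam("ts") long ts) {
        StoredEntity current = repository.find(id);
        if (current == null) {
            return Response.status(Response.Status.NOT_FOUND).build();
        }
        if (current.getTimestamp() != ts) {
            URI latest = URI.create("/entities/" + id + "/timestamp/" + current.getTimestamp());
            return Response.seeOther(latest).build(); // 303: stale version, go to the current one
        }
        return Response.ok(current).build();
    }
}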
Hope that helps.
Your problem can be solved either by using ETags for versioning (a record can only be modified if the current ETag is supplied) or by soft deletes (so the deleted record still exists, but with a trashed bool which is reset by a PUT).
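For the ETag route, a hedged JAX-RS sketch (entity and repository names assumed): the server derives the ETag from the stored version and honours the client's If-Match header, so a PUT from a stale copy fails.

import javax.ws.rs.HeaderParam;
import javax.ws.rs.PUT;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.core.EntityTag;
import javax.ws.rs.core.Response;

@Path("/entities")
public class EntityUpdateResource {

    private final EntityRepository repository; // assumed collaborator

    EntityUpdateResource(EntityRepository repository) {
        this.repository = repository;
    }

    @PUT
    @Path("/{id}")
    public Response update(@PathParam("id") long id,
                           @HeaderParam("If-Match") String ifMatch,
                           String newData) {
        StoredEntity current = repository.find(id);
        if (current == null) {
            return Response.status(Response.Status.NOT_FOUND).build();
        }
        String currentEtag = "\"" + current.getVersion() + "\"";
        if (ifMatch == null || !ifMatch.equals(currentEtag)) {
            // The client's copy is stale (or it sent no precondition at all).
            return Response.status(Response.Status.PRECONDITION_FAILED).build(); // 412
        }
        current.setData(newData);
        current.setVersion(current.getVersion() + 1);
        return Response.ok()
                .tag(new EntityTag(String.valueOf(current.getVersion())))
                .build();
    }
}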
It sounds like you might also benefit from a batch endpoint and from using transactions.