How can I pull more (or ideally, all) of the updates via the PROJ object?

A search on the PROJ object for updates seems to be capped at 20, even though more updates exist. Here is an example:
https://[domain].attask-ondemand.com/attask/api/v4.0/proj/search?method=GET&fields=updates:styledMessage&ID=[guid]
Conversely, by searching the NOTE object using topNoteObjCode = PROJ and topObjID = [guid], all of the notes are retrieved.
Anyone know a trick to pull more (or ideally, all) of the updates via the PROJ object?
Regards,
Doug

I am working on something similar at the moment, and it does not appear possible to pull any more or fewer than 20 updates using the project->Updates collection. I suspect that is because it is a collection and you cannot pass arguments in.
For items like this I end up simply passing an array of the IDs I'm looking for into the updates object as an "in" search against the refObjCode field. There is an unpublished limit to the number of IDs you can pass at a time; I think it is around 130, but I always batch at 100.
It's a bit of a pain sorting the resulting list of updates back into the list of projects or tasks.
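For example, a batched search against the NOTE object might look like the following sketch, using the question's placeholders. The _Mod=in modifier is standard Workfront search syntax, but exactly which reference field the IDs belong in is an assumption:
https://[domain].attask-ondemand.com/attask/api/v4.0/note/search?refObjCode=PROJ&refObjID=[guid1]&refObjID=[guid2]&refObjID_Mod=in&fields=styledMessage
Each repeated refObjID value adds one project ID to the batch, so a batch of 100 is just 100 repeated parameters.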

Related

Getting state change for work items by sprint not date from Azure DevOps

I'm trying to collect the state of work items as it was in a particular iteration. I can get the present state of the work items with a query to the REST API like this one:
https://analytics.dev.azure.com/{organisation}/{project}/_odata/v2.0/WorkItems?$expand=Iteration
It does give me the creation, activation, and completion dates, but I need to know which iteration it was created, activated, and completed in.
The solution turned out to be very simple; it had simply eluded me when searching.
https://analytics.dev.azure.com/{organisation}/{project}/_odata/v2.0/WorkItemRevisions?$expand=Iteration
returns all revisions of the work items, including which iteration they were assigned to when each change happened.
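For example, a query along these lines (a sketch assuming the standard Analytics column names; the work item id is a placeholder) returns every revision of one work item together with the iteration it belonged to at that point:
https://analytics.dev.azure.com/{organisation}/{project}/_odata/v2.0/WorkItemRevisions?$filter=WorkItemId eq {id}&$select=WorkItemId,State,ChangedDate&$expand=Iteration($select=IterationPath)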
I need to know which iteration it was created, activated and completed in
For this requirement, you can try this REST API:
GET https://{instance}/{collection}/{project}/_apis/wit/workItems/{id}/revisions?api-version=5.0
With this REST API you can list all the revisions of the work item, and in each revision you can see the iteration path and the state of the work item at that time.
The downside is that it only works on one work item at a time; if you want to collect all the work items, it will be a bit cumbersome.
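If you do go that route, a minimal PowerShell sketch would loop over the ids and call the revisions endpoint for each; the PAT, the {instance}/{collection}/{project} placeholders, and the id list below are all hypothetical:

$pat = "{personal-access-token}"
$headers = @{ Authorization = "Basic " + [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes(":$pat")) }
foreach ($id in 1, 2, 3) {   # hypothetical work item ids
    $url = "https://{instance}/{collection}/{project}/_apis/wit/workItems/$id/revisions?api-version=5.0"
    $revisions = (Invoke-RestMethod -Uri $url -Headers $headers).value
    foreach ($rev in $revisions) {
        # Each revision records the state and iteration path as of that change
        "{0} -> {1}" -f $rev.fields.'System.State', $rev.fields.'System.IterationPath'
    }
}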

PowerApps datasource to overcome 500 visible or searchable items limit

For PowerApps, what data sources other than SharePoint lists are accessible via PowerShell?
There are actually two issues that I am dealing with. The first is dynamic updating and the second is the 500 item limit that SharePoint lists are subject to.
I need to dynamically update my data source, which I am currently doing with PowerShell. My data source is not static, and updating records by hand is time-consuming and error-prone. The driving force behind my question is that the SharePoint list view threshold is 5,000 records, yet you are limited to 500 visible and searchable records when using SharePoint lists in the Gallery view, and my data source contains more than 500 but fewer than 1,000 records. Any items beyond the 500th record that should match the filter criteria will not be found. So SharePoint lists are not an option for me until that limitation is remediated.
Reference: https://powerapps.microsoft.com/en-us/tutorials/function-filter-lookup/
To your first question: PowerShell can be used for almost anything on the Microsoft stack. You could use SQL Server, Dynamics 365, SharePoint, Azure, and in the future there will be an SDK for the Common Data Service. There are a lot of connectors, and PowerShell can work with a good majority of them.
Take note that working with these data structures through PowerShell is independent of PowerApps; PowerApps just takes the data that the data connector gives it. If you have something updating the data in the background (PowerShell, a cron job, etc.) and want the list to stay current, you can use a Timer control and a Refresh function on your data source to update the list every ~5-20 seconds.
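As a sketch, assuming a hypothetical data source named Inventory, the Timer control's properties might look like:

Duration: 10000
Repeat: true
AutoStart: true
OnTimerEnd: Refresh(Inventory)

Here 10000 is in milliseconds, so the data would be refreshed every 10 seconds.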
To your second question about SharePoint: an article came out around the time you asked this about working with large lists. I wouldn't say it completely solves your problem, but it states that using the "Filter" function on basic column types may work for you:
...if you’d like to filter the set of items that you are showing in the gallery control, you will make use of a “Filter” expression, rather than the “Search” expression, which is the default that existing apps used. With our changes, SharePoint connector now supports “equals” type of queries on columns that support filtering (Single line of text, choice, numbers, dates and people), so make sure that the columns and the expressions you use are supported and watch for the same warning to avoid reverting back to the top 500 items.
It also notes that if you want to pull from a list larger than the 5k threshold, you need to use indexed columns. I have not fully tested this yet, but it seems it could solve your problem.
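In formula terms, the delegable pattern looks something like this, where the Inventory list and its Status column are hypothetical (Status is assumed to be a single line of text column):

Filter(Inventory, Status = "Active")

An equals-style Filter like this is pushed down to SharePoint, so it can match items beyond the 500th record, whereas a Search(Inventory, TextInput1.Text, "Status") expression is not delegated and only inspects the first 500 items.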

What is the proper way to keep track of updates in progress using MongoDB?

I have a collection with a bunch of documents representing various items. Once in a while, I need to update item properties, but the update takes some time. When properties are updated, the item gets a new timestamp for when it was modified. If I run updates one at a time, then there is no problem. However, if I want to run multiple update processes simultaneously, it's possible that one process starts updating the item, but the next process still sees the item as needing an update and starts updating it as well.
One solution is to mark the item as soon as it is retrieved for update (findAndModify), but it seems wasteful to add a whole extra field to every document just to keep track of items currently being updated.
This should be a very common issue. Maybe there are some built-in functions that exist to address it? If not, is there a standard established method to deal with it?
I apologize if this has been addressed before, but I am having a hard time finding this information. I may just be using the wrong terms.
You could use db.currentOp() to check if an update is already in flight.
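For what it's worth, the marking approach described in the question can be made atomic with a single findAndModify call, so two workers can never claim the same item. A minimal shell sketch, assuming a hypothetical items collection with needsUpdate and lockedAt fields:

db.items.findAndModify({
    query:  { needsUpdate: true, lockedAt: { $exists: false } },  // only unclaimed items
    update: { $set: { lockedAt: new Date() } },                   // claim it atomically
    new: true
})

A null result means another process already claimed the item; otherwise, do the work, then $unset lockedAt and set the new modified timestamp when finished.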

Detect when a record is being cloned in trigger

Is there a way to detect that a record being inserted is the result of a clone operation in a trigger?
As part of a managed package, I'd like to clear out some of the custom fields when Opportunity and OpportunityLineItem records are cloned.
Or is a trigger not the correct place to prevent certain fields being cloned?
I had considered creating dedicated code to invoke sObject.Clone() and excluding the fields that aren't required. This doesn't seem like an ideal solution for a managed package as it would also exclude any other custom fields on Opportunity.
In the Winter '16 release, Apex added two new methods that let you detect whether a record is being cloned and from which source record ID. You can use these in your triggers.
isClone() - Returns true if an entity is cloned from something, even if the entity hasn’t been saved.
getCloneSourceId() - Returns the ID of the entity from which an object was cloned.
https://developer.salesforce.com/docs/atlas.en-us.apexcode.meta/apexcode/apex_methods_system_sobject.htm#apex_System_SObject_getCloneSourceId
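A minimal trigger sketch using those methods; the custom field being cleared is a hypothetical stand-in for your managed-package fields:

trigger ClearFieldsOnClone on Opportunity (before insert) {
    for (Opportunity opp : Trigger.new) {
        // True only when this record originated from a clone operation
        if (opp.isClone()) {
            // Available if you need to read anything from the source record
            Id sourceId = opp.getCloneSourceId();
            opp.My_Custom_Field__c = null; // hypothetical field to clear
        }
    }
}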
One approach, albeit kind of kludgy, would be to create a new field, say original_id__c, which gets populated when blank (by a workflow or trigger, depending on your preference for the order of execution) with the Salesforce ID of the record. For new records this field will match the standard Salesforce ID; for cloned records it won't. There are a number of variations on when, how, and what to populate the field with, but the key is to give yourself your own hook to differentiate new and cloned records.
If you're only looking to control the experience for the end user (as opposed to a developer extending your managed package), you can override the standard Clone button with a custom page that clears the values for a subset of fields using URL hacking. There are some caveats, namely that the field must be editable and visible on the page layout for the user who clicked the Clone button. As of this writing I don't believe you can package standard button overrides, but the list of what's possible changes with every release.
You cannot detect a clone operation inside the trigger; it is treated as an "Insert" operation.
You can still use dedicated code to invoke sObject.Clone() and exclude the fields that aren't required. To make sure you cover every field, use the sObject describe information to get hold of all fields for that object, then exclude the ones that are not required, as sketched below.
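A sketch of that describe-based approach, where the excluded field names and the source record ID are placeholders:

// Query every createable field, clone the record, then blank the excluded ones
Set<String> excluded = new Set<String>{ 'Custom_A__c', 'Custom_B__c' }; // placeholders
Id sourceId = '006xx0000000000'; // placeholder: the record to clone
List<String> fieldNames = new List<String>();
for (Schema.SObjectField f : Schema.SObjectType.Opportunity.fields.getMap().values()) {
    Schema.DescribeFieldResult d = f.getDescribe();
    if (d.isCreateable()) {
        fieldNames.add(d.getName());
    }
}
Opportunity source = Database.query(
    'SELECT ' + String.join(fieldNames, ',') + ' FROM Opportunity WHERE Id = :sourceId');
Opportunity copy = source.clone(false, true); // drop the Id, deep-copy the field values
for (String name : excluded) {
    copy.put(name, null);
}
insert copy;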
Hope this makes sense!
Anup

Meteor batch update

I'm using Meteor. I'm wondering if there's a shorthand way to do batch updates before the DOM is updated.
For instance, I want to update several records at once:
Collection.update(id1,{..})
Collection.update(id2,{..})
Collection.update(id3,{..})
The problem is that the 3 items are updated separately, so in my case the DOM was redrawn 3 times instead of once (with all 3 updated records).
Is there a way to hold off the ui updating until all of them are updated?
Mongo's update can modify more than one document at a time. Just give it a selector that matches more than one document, and set the multi option. In your case, that's just a list of IDs, but you can use any selector.
Collection.update({_id: {$in: [id1, id2, id3]}}, {...}, {multi:true});
This will run a single DB update and a single redraw.
Execute them on the server instead; that way they can run synchronously and are less likely to cause multiple DOM updates on the client.
See the first two and the last of the interesting code bits in the Meteor docs, which explain how to protect your clients from messing with the database, as well as how to define methods on the server and call them from the client.
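A minimal sketch of that server-method approach; the method name, the collection, and the $set payload are hypothetical:

// On the server: one method call performs all the updates in a single round trip
Meteor.methods({
  batchUpdate: function (ids, changes) {
    // In real code, validate ids and changes first (e.g. with check())
    Collection.update({ _id: { $in: ids } }, { $set: changes }, { multi: true });
  }
});

// On the client
Meteor.call('batchUpdate', [id1, id2, id3], { status: 'done' });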