CRM 2011 RU13: The workflow cannot be published or unpublished by someone who is not its owner - workflow

I have created and added some workflows to CRM 2011 RU13 through the UI.
Through no fault of my own, my development environment is completely air-gapped from my production environment.
I added these workflows to my solution, exported the solution as managed, and gave it to the production admin.
When he deploys it, it fails with this message:
The workflow cannot be published or unpublished by someone who is not its owner
How do I fix this? There is no way to not give workflows an owner, or to say that the owner is the solution.

The production admin gets that message because he is not the owner (inside the target CRM environment) of one or more active workflows included in your solution.
This happens in these situations:
The first time your solution is imported, it is USER_A who performs the operation, and all the workflows are automatically assigned to him. If USER_B later tries to import an updated version of the solution, he gets the error message because he is not the owner of the workflow(s).
The first time your solution is imported, it is USER_A who performs the operation, and all the workflows are automatically assigned to him. Meanwhile one or more workflows are reassigned to USER_C. If USER_A later tries to import an updated version of the solution, he gets the error message because he is no longer the owner of the workflow(s).
Before a workflow can be updated it must first be deactivated, and only the owner can deactivate a workflow. This is by design.
In your case the production admin must be the owner of the processes: he can temporarily assign the workflows to himself, import the solution, and afterwards assign them back to the right user (assuming he has the rights to do so).
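For reference, the temporary reassignment and later reactivation can also be scripted. A minimal sketch against the CRM 2011 SDK, assuming an authenticated IOrganizationService ('service') for the target organization and placeholder GUIDs you would look up first:

using System;
using Microsoft.Crm.Sdk.Messages;
using Microsoft.Xrm.Sdk;

class WorkflowOwnershipHelper
{
    public static void Reassign(IOrganizationService service,
                                Guid workflowId, Guid newOwnerId)
    {
        // Assign the workflow, e.g. temporarily to the importing admin,
        // or back to the original owner after the import.
        service.Execute(new AssignRequest
        {
            Assignee = new EntityReference("systemuser", newOwnerId),
            Target = new EntityReference("workflow", workflowId)
        });
    }

    public static void Activate(IOrganizationService service, Guid workflowId)
    {
        // Re-activate after the import; only the owner can do this, so
        // run it as (or impersonating) the owner.
        service.Execute(new SetStateRequest
        {
            EntityMoniker = new EntityReference("workflow", workflowId),
            State = new OptionSetValue(1),  // workflow statecode 1 = Activated
            Status = new OptionSetValue(2)  // workflow statuscode 2 = Activated
        });
    }
}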

A couple of additional points for clarity for the OP:
The owner of the workflows in your dev environment is not relevant; the importing user in prod will become the owner (this does not contradict Guido's answer, I'm just making sure you don't follow a red herring). It is quite right for there to be an "air gap" between dev and prod.
If you know which workflows are in your solution, assign those in prod to yourself, then import, and then, if and only if you need to, reassign them to the original owner(s). (A sketch for finding them follows below.)
You may not need to if that owner is just an equivalent system-admin user, but if it is a special user (e.g. "Workflow daemon", so users can see why it updated their records) you will want to re-assign.
Note that after re-assigning them, that user has to activate the workflows. You cannot activate a workflow in someone else's name (or users could write workflows to run as admins and elevate their privileges).
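If you are unsure which workflows the solution carries, they can be listed via the solutioncomponent entity. A hedged sketch, assuming component type 29 identifies workflows and using "MySolution" as a placeholder unique name:

using System;
using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Sdk.Query;

static void ListSolutionWorkflows(IOrganizationService service)
{
    // Fetch the object IDs of all workflow components in the solution.
    var query = new QueryExpression("solutioncomponent")
    {
        ColumnSet = new ColumnSet("objectid")
    };
    query.Criteria.AddCondition("componenttype", ConditionOperator.Equal, 29);

    // Join to the parent solution record by its unique name.
    LinkEntity link = query.AddLink("solution", "solutionid", "solutionid");
    link.LinkCriteria.AddCondition("uniquename", ConditionOperator.Equal, "MySolution");

    foreach (Entity component in service.RetrieveMultiple(query).Entities)
    {
        Guid workflowId = component.GetAttributeValue<Guid>("objectid");
        // ...reassign each workflow to yourself here, as in the sketch above.
    }
}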
If the workflows have not actually been changed in this version of your solution, take them out of the solution and ignore them. Often I find that a workflow has been written, carried across to production in the original "go live", and is then working perfectly fine, but is left in the solution, which is constantly updated and re-published (i.e. exported / imported).
Personally I often have a "go live" solution (or more than one, but that's a different thread...) and then we start all over again with a new solution which only contains incremental changes thereafter. This means that all your working workflows, plugins, web resources etc. do not appear in that solution, which avoids confusion as to versions, reduces solution bloat, and avoids this problem of workflow ownership. If a workflow is actually updated, then you need to deal with the import, but don't make this a daily occurrence for unrelated changes.

Related

How to make a deployment with different parallel approvals?

I want to make a pipeline (or Azure workflow of any kind) that deploys a snapshot of my software somewhere for User Acceptance Tests, has people approve that User Acceptance Test, and after successful approvals deploys the very same snapshot to the final production environment.
Having "Approvals & Checks" configured for an environment and making a YAML pipeline with a "Production" stage assigned to that environment solves the problem.
But only if I had one group of approvers.
In my case there need to be 3 groups of approvers, each with different rules, that should be able to run their approval processes at the same time (to make it as fast as possible):
Change Advisory Board: when more than 50% of the assigned Change Advisory Board members give approval
Tech Review: when any one of the assigned people gives approval
Design Review: when any one of the assigned people gives approval
Any idea how to do that?
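One way this might be modelled (an untested sketch): give each approval group its own environment with its own "Approvals & Checks" rules (e.g. a minimum number of approvers for the CAB), run three lightweight gate stages against those environments in parallel, and make the production stage depend on all three. The environment names below are placeholders, and the actual approval rules are configured per environment in the UI, not in YAML:

# Untested sketch: one placeholder environment per approval group.
stages:
- stage: UAT
  jobs:
  - deployment: DeployToUAT
    environment: uat
    strategy:
      runOnce:
        deploy:
          steps:
          - script: echo "deploy the snapshot to UAT here"

- stage: CABApproval
  dependsOn: UAT
  jobs:
  - deployment: CABGate
    environment: approval-cab      # approvers: CAB, with a minimum approver count
    strategy:
      runOnce:
        deploy:
          steps:
          - script: echo "CAB approved"

- stage: TechReview
  dependsOn: UAT
  jobs:
  - deployment: TechGate
    environment: approval-tech     # approvers: tech reviewers, any one suffices
    strategy:
      runOnce:
        deploy:
          steps:
          - script: echo "tech review approved"

- stage: DesignReview
  dependsOn: UAT
  jobs:
  - deployment: DesignGate
    environment: approval-design   # approvers: design reviewers, any one suffices
    strategy:
      runOnce:
        deploy:
          steps:
          - script: echo "design review approved"

- stage: Production
  dependsOn: [CABApproval, TechReview, DesignReview]
  jobs:
  - deployment: DeployToProd
    environment: production
    strategy:
      runOnce:
        deploy:
          steps:
          - script: echo "deploy the same snapshot to production here"

Because the three gate stages each depend only on UAT, they run in parallel, and Production starts only after all three environments have been approved.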

Prevent users from creating new work items in Azure DevOps

I've been looking at organisation and project settings but I can't see a setting that would prevent users from creating work items in an Azure DevOps project.
I have a number of users who refuse to follow the guidelines we set out for our projects so I'd like to inconvenience them and the wider project team so that they find it better to follow the guidelines than not - at the moment we've got one-word user stories and/or tasks with estimates of 60-70 hours which isn't reflective of the way that we should be planning.
I'd still want them to be able to edit the stories or tasks and move statuses, but that initial creation should be off-limits for them (for a time at least). Is there a way to do this?
The Azure DevOps Aggregator project allows you to write simple scripts that get triggered when a work item is created or updated. It uses a service hook to trigger when such an event occurs and abstracts most of the API specific stuff away, providing you with an instance of the work item to directly interact with.
You can't block the creation or update with such a policy (Azure DevOps informs the aggregator too late in the creation process to do so), but you can revert changes, close the work item, etc. There are also a few utility functions to send email.
You need to install the aggregator somewhere; it can be hosted in Azure Functions, and we provide a Docker container you can spin up anywhere you want. Then link it to Azure DevOps using a PAT token with sufficient permissions and write your first policy.
A few sample rules can be found in the aggregator docs.
store.DeleteWorkItem(self);
should put the work item in the Recycle Bin in Azure DevOps. You can create a code snippet around it that checks the creator of the work item (self.CreatedBy.Id) against a list of known bad identities.
Be mindful that when Azure DevOps creates a new work item, the Created and Updated events may fire in rapid succession (this is caused by the mechanism that sets the backlog order on work items), so you may need to work out what metadata tells you a work item should be deleted. I generally check for a low revision number (say, < 5) and that the last few revisions didn't change any field other than Backlog Priority.
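Putting those pieces together, a rule might look something like the sketch below. This is a hedged illustration only: the identity GUIDs are placeholders, and the exact property names exposed on self (e.g. the revision counter) should be verified against the aggregator docs.

// Illustrative aggregator rule sketch (C# rule syntax).
var blockedCreators = new[]
{
    "11111111-1111-1111-1111-111111111111",  // placeholder identity IDs
    "22222222-2222-2222-2222-222222222222"
};

// A low revision count suggests a freshly created work item (see the
// note above about Created/Updated firing in rapid succession).
bool isBlocked = Array.IndexOf(blockedCreators, self.CreatedBy.Id.ToString()) >= 0;

if (self.Revision < 5 && isBlocked)
{
    // Sends the work item to the Recycle Bin rather than destroying it.
    store.DeleteWorkItem(self);
}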
I'd still want them to be able to edit the stories or tasks and move statuses, but that initial creation should be off-limits for them (for a time at least). Is there a way to do this?
I am afraid there is no out-of-the-box setting to do this.
That is because the current permission settings for work items are not yet granular enough to cover this scenario.
The closest setting is here:
Project Settings -> Team configuration -> Area -> Security:
Set the work-item permission there to Deny. It will prevent users from creating new work items, but it will also prevent them from modifying existing ones.
For your request, you could add your request for this feature on our UserVoice site (https://developercommunity.visualstudio.com/content/idea/post.html?space=21), which is our main forum for product suggestions.

Default permissions come back on AzDevOps queues

On-premises Azure DevOps Server 2019, version Dev17.M153.5. I have restricted default access rights to agent queues in every single project in every single collection: removed the default set (Release Admins/Build Admins/Project Admins) and added some other entries (Server Admins).
Now, every once in a while, intermittently and with no pattern that I can see, those three permissions come back automagically. On different projects, with no human action (all the humans who have the rights for that have been told), those three lines with the Administrator role reappear on the default agent queue ACL.
Is that a known behavior in AzDevOps? Any way to opt out?
EDIT: here's what it looks like. The first three lines don't belong.
EDIT: as per the advice, I tried to track it down using the activity log. I went and made a dummy change to default queue security elsewhere. There was a log record with the command SecurityRoleAssignments.SetRoleAssignments. I then filtered the activity log on the collection where the permissions had reverted and searched for the same command. No instances. The log ends around 7/14, which is likely before the event.
This should be caused by permission inheritance. By default, the Inheritance option is turned on and the following groups are added to the Administrator role of 'All agent pools': Build Administrators, Release Administrators, Project Administrators.
If we turn off the Inheritance option, we can remove the default permission groups (Release Admins/Build Admins/Project Admins).
If we turn Inheritance back on, the groups are inherited again and the default permissions come back. Please check the option and confirm that inheritance is always off, and also confirm with all the humans who have the rights to update the option.
Update 1
Log in to {Azure DevOps Server URL}/_oi/_diagnostics/activityLog; there we can see the Activity Log and check who added the permission groups.
Installed Azure DevOps 2020. A couple of weeks in, there has been no such behavior.
I'm concluding it was a bug in AzDevOps 2019 all along that they've quietly fixed.

My Main branch in TFS just disappeared - why?

Our Main branch was apparently just deleted and there's no record of why. (The branch still appears in Source Control Explorer; when I view the history of the branch it's empty.) When I get latest on the branch it deletes everything locally. We have numerous child branches that all appear to be fine, but Main is now empty with no record of how or why. Does anybody have any idea how we can figure out what happened and recover it? We have a child branch that should be a duplicate, so we should be OK, but we'd really like to figure out what happened!
What may have happened
There are a few things I can think of; the most logical in this case is that someone issued a tf destroy $/project/Branch/* /recursive, which would have the observed effect.
It could also be that someone has renamed the branch; that would not be visible in the history per se, unless you turn on the "Show Deleted Items" option in the Team Foundation Source Control options.
Your Application Tier's version control cache may have become corrupted. The chance of this happening is very slim, but it may have caused this. Ensure you have a good backup of your databases even if this seems to be the case; if it isn't, you're going to need the database backup, and the older it is, the less likely it is that data marked for deletion will still be there.
How can you find out what happened?
Check the tbl_command table in the Project Collection database, or access the hidden _oi activity log page on the web access server. You may be able to find the command that caused the deletion.
If that doesn't tell you, analyze the transaction logs of the SQL Server (if your server is configured to keep these).
What to do now?!
Make a backup of your TFS server, or secure the backups you have if you haven't done so already.
If the version control cache is the culprit, clearing it (on the Application Tier machines) may solve your problem; the cache location is shown in the TFS Admin Console.
The best way to go about this is to stop the TFS server temporarily and then delete the contents of that folder.
There seem to be a few ways out:
Forget about it; take the contents of the most up-to-date branch and use that to repopulate the missing data. Just add the files to the empty folder, check them in, and then re-merge all other branches and resolve all conflicts.
Pro: Fast
Con: you lose history, and resolving conflicts will be a horrible task.
Restore the project collection database to a previous point in time (warning! this may require restoring all project collections to a previous point in time)
Pro: You get all your history back
Con: You lose changes made since the last known good backup; it takes a lot of work and will impact all projects in the same collection, possibly all projects on the same server.
Restore the whole server to a temporary server and restore the collection with the missing data to the last known good configuration. Use a tool like OpsHub or the Team Foundation Migration Toolkit to replay the changes since the disaster.
Pro: You get back to the most up to date point in time
Con: Takes a lot of time and expertise in TFS Migration
Restore the collection database and use the transaction logs to replay as many of the changes to the collection as possible, skipping the transactions that perform the destroy. Be careful though: usually the destroy action only marks files as deleted, and a background job does the actual deletion.
Pro: You get back to the most up to date point in time
Con: Takes a lot of time and expertise in SQL
Contact Microsoft Support and get a field expert in the house. They may be able to restore the deletion if it was done without immediately triggering the cleanup job.
Pro: You will get back into the best state possible
Con: it will be costly
Whatever you do, make sure you have a backup of your current situation, that allows you to try different tactics, should your first attempts fail.
Consider splitting the project collection to allow other projects to continue working. You will end up in a situation where this one project sits in an isolated Project Collection of its own, but it will allow you to move forward quickly.
OK - this is one for the record books, because inexplicably the project reappeared later in the day. All of its history is back as well. I would have thought that perhaps the DBAs here did a database restore, but that's not possible since all of the check-ins that have been happening all day are still there.
So if this happens to you in the future, just cross your fingers and wait a few hours!
p.s. I did look in the SQL logs but couldn't find anything. Bizarre!

Automatically triggering merge activity after remote on-site (custom) development?

In our office, the software we create is sent to our client's office along with an engineer and a laptop. They modify the code at the customer site, based on the customer requests, and deploy the exe.
When the engineer returns to the office, the changed/latest code is not updated to the server, thereby causing us all sorts of problems in the source code on the development boxes and laptops.
I tried to use a version control system like SVN, but sometimes the engineer forgets to commit the latest code to the SVN server. Is there an automatic way so that, when the laptop connects to the domain, the version control system checks for changes and prompts the user to update the code on the server, or updates it automatically?
I think that the key to this is to require the on-site engineers to use a VCS at the customer site, and to make it a condition of their continued employment that the code at the customer site is in fact reloaded into the VCS on return to the office. You could say that the engineers sent on-site need to be trained in their duties, and they should be held accountable for not doing the complete job - the job isn't finished until the paperwork is done (where 'paperwork' in this context includes updating the source repositories with the customer's custom adaptations of the software).
It seems to me that it might be better to use a DVCS such as Git or Mercurial rather than SVN in this context. However, you should be able to work with SVN if the laptop dispatched to the customer has a suitable working copy created for the customization work.
That said, the question is "can we make this easier and more nearly automatic?" In part, that might depend on your infrastructure; it also might depend on Windows capabilities about which I'm clueless. There might be a way to get a particular program to run when the laptop connects to a new domain. An alternative (Unix-ish) approach would be to use a regularly scheduled job that runs, say, every hour and checks whether it is on the home domain and whether there are changes that should be submitted to the main repository.
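As a rough illustration of that scheduled-job idea, here is a sketch of a small helper a Windows scheduled task could run. The domain name and working-copy path are placeholders, and it assumes svn.exe is on the PATH with credentials cached:

using System;
using System.Diagnostics;

class SvnSync
{
    static void Main()
    {
        // Placeholders: the office domain and the on-site working copy.
        if (!Environment.UserDomainName.Equals("OFFICEDOMAIN",
                StringComparison.OrdinalIgnoreCase))
            return; // not back on the home domain yet

        // 'svn status' prints nothing when the working copy is clean.
        string status = Run("svn", "status C:\\work\\project");
        if (string.IsNullOrWhiteSpace(status))
            return; // nothing to push back

        // Either prompt the engineer here, or commit directly:
        Run("svn", "commit C:\\work\\project -m \"Auto-sync of on-site changes\"");
    }

    static string Run(string exe, string args)
    {
        var psi = new ProcessStartInfo(exe, args)
        {
            RedirectStandardOutput = true,
            UseShellExecute = false
        };
        using (var p = Process.Start(psi))
        {
            string output = p.StandardOutput.ReadToEnd();
            p.WaitForExit();
            return output;
        }
    }
}

Whether it commits automatically or merely nags is a policy choice; nagging is safer, since half-finished on-site work probably shouldn't land on trunk unreviewed.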