How to specify the integration service name when importing a workflow into a versioned repository from one environment to another

Whenever I try to import a workflow from the Dev/Test environment of a versioned repository into the Production environment, which is also versioned, I get an option asking whether I want to check in or continue without checking in. What happens if I do not check in and continue? Will all the objects used in the workflow be left unchecked in, or just the workflow itself? I am asking because it would be double work if all the objects used by the workflow, including the workflow, were left checked out: I would have to go through the objects one by one to check them in. If I choose the check-in option for the workflow, then after the import the integration service is left blank, and when I run the workflow it fails with an "Integration service not mentioned" error. To work around this, I usually check out the workflow right after importing it, just to set the integration service name. I do not think this is good practice. Any advice on this will be greatly appreciated.
Thanks
Dhruuv.

Will all the objects used in the workflow be left unchecked in, or is it just the workflow?
The objects that will be left in the checked out state are:
the workflow,
the new objects (i.e. they were not present in the Prod repository before the import from Dev/Test),
the modified objects (i.e. they were present in the Prod repository but were overwritten because you chose the Replace option).
I have to go one by one to check in the objects
You don't have to check in every individual object - in Repository Manager, open the Versioning menu and choose the Find Checkouts... option. All the checked-out objects will be listed; you can select them and check them all in at once.

Related

How do I change values in a file after publishing to GitHub?

I have a Python project that I am looking to publish on GitHub. This project has a couple of variables in one of its files whose values need to be obfuscated, e.g. an API key, a user/password, etc.
My test code has those variables filled with my own data, but I want to boilerplate them when I push changes, for obvious reasons.
Would I be on the right track looking at a GitHub action to accomplish this? If so, any pointers towards what kind of action I should be looking for that is most appropriate for this kind of task?
You should look into dotenv, which lets you keep a .env file or set OS environment variables to pull that private information into your code without having it set directly. One good tool for this is pydantic's BaseSettings, which you can install via:
pip install pydantic[dotenv]
One nice thing I like about pydantic is that it works either way: you can have a .env file or plain environment variables.
If you have Continuous Integration (CI), you can add GitHub Secrets, which can be pulled into your test runs as private API keys. You'll need to reference them properly with GitHub contexts.
Don't put those values in your code. Read them from an environment variable or from a file instead. Then whoever wants to use your project only needs to provide said env vars or said file. Check Keep specific git branch offline.
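A minimal standard-library sketch of that advice; MYAPP_API_KEY is a hypothetical variable name chosen for this example, not something your project defines:

```python
import os

def load_api_key():
    """Read the secret from the environment instead of hard-coding it.

    MYAPP_API_KEY is a hypothetical variable name for this sketch.
    """
    key = os.environ.get("MYAPP_API_KEY")
    if key is None:
        # Fail loudly so users know what they need to provide.
        raise RuntimeError("Set the MYAPP_API_KEY environment variable before running.")
    return key
```

Anyone cloning the repo then sets the variable (or a .env file loaded by a tool like dotenv) locally, and nothing secret ever lands in a commit.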
GitHub Actions seems overcomplicated for this, IMO. Keep in mind that if you have already made a commit with those to-be-obfuscated variables, they will remain visible to anyone browsing those commits.

Avoid updation of cloned testcase when master copy is updated in VSTS

I have a Master Copy of test cases, which I will be cloning for every testing cycle.
Changes to the Master Copy test cases get reflected in the already cloned test cases. How can I avoid this?
Avoid updation of cloned testcase when master copy is updated in VSTS
AFAIK, Azure DevOps offers two types of copy operations for test suites and test cases: copy and clone. Copy uses a mechanism called shallow copy, which simply creates a reference to the artifact; if any amendment is made to the artifact, it is reflected in all of its references. Clone uses a mechanism called deep copy; the new artifact has no reference back to its origin and is not affected by any updates made to the original artifact.
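The same shallow-vs-deep distinction exists in most programming languages; a small Python analogy (the test-case dictionary here is purely illustrative):

```python
import copy

# A "master" artifact containing a mutable inner structure.
master = {"steps": ["open app", "log in"]}

shallow = copy.copy(master)     # like Copy: the inner list is shared
deep = copy.deepcopy(master)    # like Clone: a fully independent duplicate

master["steps"].append("check dashboard")

# The shallow copy sees the change; the deep copy does not.
print(len(shallow["steps"]))  # 3
print(len(deep["steps"]))     # 2
```

That is exactly why edits to the Master Copy bleed into copied (referenced) test cases but not into cloned ones.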
So you need to create a clone of the test case. To do that, you can use the following steps:
Go to the test case and right-click on it.
Click the Create copy and add to suite option in the list.
Make changes if you want (here you can modify the test case independently, without affecting the original).
Click Save and Close.
Now drag and drop the test case into other suites.
You could check this document for some more details.
Hope this helps.

Azure DevOps Projects List GET not returning a project

When running the Azure DevOps Projects List GET, one particular Project is excluded from the results. I cannot find any different settings. I am the admin of it. I can add new projects, and there were projects I created before it, that all show up in the results. It's the API call as documented here: https://learn.microsoft.com/en-us/rest/api/azure/devops/core/projects/list?view=azure-devops-rest-5.0
I cannot find an explanation of why a project would be excluded from the results aside from Project State, which I have already ruled out.
I've already tried running the GET API in the browser, and it does not return the missing project. I have tried creating another project in the same manner, and it appears in the results. I have added an argument for Project State = All, and that does not improve the outcome.
Under what circumstances is a project excluded from these results as a standard (undocumented) constraint?
edit: I am a project admin, and I have access to the default project team. I have tried recycling things in the background by changing the Project Name back and forth, and having myself removed as Admin and added back in, with no change in the API response.
edit: It seems like the more important question is how to force Azure DevOps to cycle the 'lastUpdateDate', when it's currently set to a non-date.
This may be because the call returns at most 100 entries per page. You either need to use the continuationToken to fetch the next 100 entries, and so on, or you can use the $top query parameter:
https://-collection_url-/_apis/projects?api-version=5.0&$top=200
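The continuation-token loop can be sketched as follows; fetch_page is a hypothetical stand-in for an authenticated GET against .../_apis/projects that returns the parsed JSON body together with the x-ms-continuationtoken response header:

```python
def list_all_projects(fetch_page):
    """Collect projects across pages until no continuation token is returned.

    fetch_page(token) is assumed to perform the HTTP GET (passing token as
    the continuationToken query parameter when it is not None) and return
    (parsed_json_body, next_token_or_None).
    """
    projects = []
    token = None
    while True:
        body, token = fetch_page(token)
        projects.extend(body["value"])   # "value" holds the project list
        if not token:                    # header absent -> last page reached
            return projects
```

Each iteration feeds the previous response's token back as the continuationToken query parameter until the header is no longer present.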

Is it possible to check in VSTS whether a build was started manually or not?

I'm trying to execute slightly different build steps in VSTS depending on how the build was started: automatically or manually.
I'm especially interested in accessing that information from a PowerShell script, but so far I have not been able to find a suitable solution or workaround.
Has anyone faced a similar requirement before? How did you solve it? I would appreciate your help!
It seems you want to know whether a build was triggered by TFS or started manually.
There is no such built-in feature for vNext builds at the moment. You could submit your suggestion on UserVoice at this link; the TFS product team is listening to your voice there.
As a workaround, either use two build definitions with different version patterns, or manually add a specific tag after a manually started build finishes. Using tags, you can set labels on a build to distinguish manual from automatic builds. But this is a manual action; it would be better if it could be done automatically.
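If you do want to automate the tagging workaround, builds can be tagged through the REST API with a PUT to .../_apis/build/builds/{buildId}/tags/{tag} (empty body). A sketch of just the URL construction; the collection URL and project name below are placeholders, and authentication (e.g. a PAT) is left out:

```python
from urllib.parse import quote

def build_tag_url(collection_url, project, build_id, tag, api_version="5.0"):
    """URL for the Azure DevOps 'Add build tag' PUT request.

    collection_url and project are caller-supplied placeholders here;
    the tag and project name are percent-encoded for safety.
    """
    return (
        f"{collection_url}/{quote(project)}/_apis/build/builds/"
        f"{build_id}/tags/{quote(tag)}?api-version={api_version}"
    )
```

A post-build script could call this endpoint to tag manual builds, removing the human step.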
It seems I've managed to find an option that allows determining whether a build was triggered automatically or manually.
All builds started manually have the actual user in the $Env:BUILD_QUEUEDBY variable, while automatic builds have a system account there. In my case the value was [********]\Project Collection Service Accounts.
I don't know how reliable it is, but for me so far following code did the job:
# Identifying who triggered the build
$OwnerId = $Env:BUILD_QUEUEDBY
$OwnerId = $OwnerId.ToUpper()
if ($OwnerId.EndsWith("PROJECT COLLECTION SERVICE ACCOUNTS"))
{
    Write-Host "Build was triggered automatically. Resulting files considered 'BETA'"
}
else
{
    Write-Host "Build was triggered manually. Resulting files considered 'STABLE'"
}

TFS "The item ... may not be cloaked because it does not have a mapped parent."

I'm working with TFS 2013 via the TFS Server plug-in for Eclipse (Team Explorer Everywhere v14.0.1).
We have two branches of a project, a master and a release. Each has a DEV folder (containing application.properties, logback.xml, etc.) in which environment configuration is maintained. The folder was initially placed in source control so that new members could pull everything they need in one shot. Now, however, this seems burdensome to established team members: when trying to switch between branches, the following error appears:
The item $/projectName/project-branchName/src.../DEV may not be
cloaked because it does not have a mapped parent.
To switch branches, one must uncloak any cloaked folders to continue. I have since deleted the folders that contain dev configurations or that should stay out of source control.
However, I'd like to know is there another way to resolve this?
Are you trying to use and edit a single workspace when trying to "switch" branches?
If so, the recommended approach is to use two different workspaces, one for each branch, and then switch between workspaces.
The reason for this can be best illustrated with an example:
Imagine your workspace contains two mappings:
map $/projectName/project-branchName/src some-local-path
cloak $/projectName/project-branchName/src.../DEV
and by “switching” you mean that you edit the workspace mappings and change project-branchName from master to release or vice versa. This is a typical catch-22. If you change the branch name in the first mapping first, you immediately get an error because the second mapping tries to cloak a folder in the old branch, which is not mapped anymore. If you first change the branch name in the second mapping, you get an error because the mapping tries to cloak a folder in the new branch, which is not mapped yet.
Not sure this fully answers your question. If not, please feel free to provide a little more explanation of what you are attempting to do and we'll see if we can better assist!
Thanks!