Archive artifacts in Azure DevOps

Before archiving artifacts, I am looking for the users of those artifacts.
Published artifacts in feeds can be used or downloaded by others. Is there a possibility to find out who those users have been?

is there a possibility to find out who those users have been
I am afraid there is no way to achieve this.
Packages in a feed can be used or downloaded, but those operations only read the package and do not require write permissions.
In other words, it is like cloning a repo locally without modifying it: nothing in the repo changes, so there is nothing to record. Only changes made to the repo produce a commit record.
The same is true for packages in an artifact feed: only when a package is modified is the information about who changed it recorded.
So there is no way to fulfil this request.

Related

Is there an equivalent of GitHub releases in Azure DevOps?

Simply speaking, does Azure Devops have something that works very close to GitHub releases?
I would like to publish artifacts that are created during an Azure DevOps pipeline so that they can be easily viewed and downloaded afterwards in a central location. The closest equivalent to what I'm looking for is how GitHub releases work, where there is a web page listing all the versions of the repository and the assets that can be downloaded for each version.
It seems to me that published artifacts within Azure DevOps pipelines are always tied to the run of the pipeline, and there isn't an easy way to see one list of artifacts that have been created in a historical view like GitHub releases provides, but maybe I'm missing something.
Azure Artifacts does not meet my needs because it is tied to particular packaging formats and is meant to be used for developer tooling.
I would like to publish artifacts that are created during an Azure DevOps pipeline so that they can be easily viewed and downloaded afterwards in a central location.
As a workaround, you can switch the artifact publish location to "A file share" in the Publish Pipeline Artifacts task and then specify your network drive folder path.
Specify the path to the file share where you want to copy the files. The path must be a fully qualified path or a valid path relative to the root directory of your repository. Publishing artifacts from a Linux or macOS agent to a file share is not supported.
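For illustration only, a YAML sketch of that workaround might look like the following; the share path and artifact name are placeholders, not values from the question:

steps:
- task: PublishPipelineArtifact@1
  inputs:
    targetPath: '$(Build.ArtifactStagingDirectory)'
    artifact: 'drop'
    publishLocation: 'filepath'                                 # the "A file share" option
    fileSharePath: '\\myfileshare\drops\$(Build.BuildNumber)'   # placeholder UNC path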
Besides that, there is no other built-in hosting; I am afraid Azure Artifacts is the closest thing to what you need. Since your concern is that it is tied to particular packaging formats, you can consider using Universal Packages.

Azure Data Factory deployment automation from multiple branches

I want to create an automated deployment pipeline for Azure Data Factory.
For one stream of development we can configure it using this doc:
https://learn.microsoft.com/en-us/azure/data-factory/continuous-integration-deployment
But when it comes to deploying to two different test data factories for parallel feature development (in two different branches), it does not work, because the adf_publish branch that gets generated is specific to only one data factory.
Currently we are doing the deployment using PowerShell scripts and passing the list of objects that need to be deployed.
Our repo is in Azure DevOps.
I tried:
Linking the repo to multiple data factories, but that caused issues, apparently when finding the deltas to publish.
Creating forks of the repo instead of branches so that adf_publish can be separate for every data factory, but this approach does not work when there is a conflict that needs a manual merge, because the testing then has to be repeated instead of moving on to prod.
Adf_publish gets generated whenever you publish. Publishing takes whatever you have in your repo and updates the data factory with it.
To develop multiple features in parallel, you just need to use "Save". Save commits your changes to the branch you are actually working on; other branches do the same. Whenever you want to publish, you first make a pull request from your branch to master, then publish. Any merge conflict should be resolved while merging everything into the master branch. Then just publish; there shouldn't be any conflicts, and adf_publish will be generated after that.
Hope this helped!
A Git repository can be associated with only one data factory, and you are only allowed to publish to the Data Factory service from your collaboration branch. Check this.
It seems there is no direct and easy way to accomplish this. If you fork the repo as a workaround, you may have to resolve the conflicts before merging, as @Martin suggested.
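As a rough sketch of that flow (not taken from either answer, and every name below is a placeholder): a separate pipeline can be triggered by the adf_publish branch and deploy the generated ARM template to a given test data factory.

trigger:
  branches:
    include:
    - adf_publish

steps:
- task: AzureResourceManagerTemplateDeployment@3
  inputs:
    deploymentScope: 'Resource Group'
    azureResourceManagerConnection: 'my-service-connection'      # placeholder
    subscriptionId: '<subscription-id>'                          # placeholder
    resourceGroupName: 'rg-datafactory-test'                     # placeholder
    location: 'West Europe'                                      # placeholder
    csmFile: 'MyFactory/ARMTemplateForFactory.json'              # generated in adf_publish
    csmParametersFile: 'MyFactory/ARMTemplateParametersForFactory.json'
    overrideParameters: '-factoryName "df-test"'                 # placeholder factory name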

Azure Pipelines: Store git submodules as artifacts and only build as needed

We have a project written in C that depends on several libraries as git submodules. We built an Azure Pipeline to build it, using multiple containers targeting multiple environments.
The challenge is that the build takes more time than we'd like, partly because of the fact that the submodules are being recompiled every time, even though they do not change.
What I'm looking for is a way to build the submodules only when needed, store them as artifacts, and have the main build consume them.
As far as I understand, I can set up builds for the submodules' repos that poll for changes, but I want my product to depend on specific commits of the submodules, i.e. I'm not always taking the latest submodule version.
So I'm looking to trigger a submodule build whenever we switch to a new commit. Can this be achieved in Azure Pipelines? What would be the best way to manage the artifacts (e.g. store the commit ID as part of the artifact name)?
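As a sketch of the idea in the question (all paths and names are hypothetical): each submodule repo could get its own small pipeline that compiles the library and publishes it as an artifact whose name carries the commit SHA, so the main build can fetch exactly the version it pins.

trigger:
  branches:
    include:
    - main                                     # placeholder branch name

steps:
- script: make                                 # placeholder build step for the C library
  displayName: 'Build submodule'
- task: PublishPipelineArtifact@1
  inputs:
    targetPath: 'build'                        # placeholder output folder
    artifact: 'libfoo-$(Build.SourceVersion)'  # commit SHA baked into the artifact name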

Artifact feed in Azure DevOps. Not sure how it is meant to work

I have set up a feed on Azure DevOps.
I now have a nuget.config in my solution with the package sources configured with my feed.
Question
Given that my mobile solution contains projects that will output NuGet packages, why are those packages not appearing in my feed?
When building the app in App Center I was expecting all the dependencies and NuGet packages to appear in the artifact feed automatically, but only 2 did.
Do you have to have a pipeline to push packages to the feed?
Can just building a solution be enough for all the dependencies in the solution to be pushed to the artifact feed? Hope that makes sense.
I have looked at all the Microsoft docs and it's not clear!
Any suggestions on how feeds are meant to work, apart from pushing packages yourself via either nuget push or a pipeline?
The packages don't get published to your feed unless you explicitly publish them to the feed. There's no mechanism that detects that you've created packages and automatically publishes them.
Use the NuGet task with the push option and you can choose the artifact feed to which your packages are published.
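For example, a push step in YAML might look like the following sketch, assuming a feed named 'MyFeed' (the feed name and package path are placeholders):

steps:
- task: NuGetCommand@2
  inputs:
    command: 'push'
    packagesToPush: '$(Build.ArtifactStagingDirectory)/**/*.nupkg'   # placeholder package path
    nuGetFeedType: 'internal'                                        # push to a feed in this organization
    publishVstsFeed: 'MyFeed'                                        # placeholder feed name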

How do you increment the version number using Travis CI?

The project that I am working on is a jQuery plugin. I have managed to get Travis CI to build a test project using Gulp/NodeJS successfully. Now I am trying to work out what workflow to use to bump the version number.
In TeamCity and MyGet there is a setting in the CI server to form a version number pattern that auto increments on each build, which can be used by the build script to update versions in the deployment files and to label the Git repo. However, in the free version of Travis CI, there doesn't seem to be an option for versioning at all.
I have read several articles on continuous deployment with Travis CI, here, here, and here, but none of them even broach the topic of versioning. Obviously, the version needs to be changed for the release. So what am I missing here?
Another problem I noted when going through the documentation is that it mentioned that Travis CI is not able to update the GitHub repository. Doesn't that basically mean it won't be able to create a Git tag?
If there is no way to version from Travis CI, then what is the typical workflow for the release process for such a plugin? Is the versioning always done manually? If so, how could there be "continuous deployment"?
Before it starts running the instructions in your .travis.yml file, Travis will set a bunch of environment variables (in the VM that is building your project) with various bits of information about your build, such as what branch is being built and so on.
You probably want one of these:
TRAVIS_BUILD_NUMBER: The number of the current build (for example, “4”).
TRAVIS_JOB_NUMBER: The number of the current job (for example, “4.1”).
But it's going to be very difficult to do anything sensible if you don't have control of the repository, because you'll need to upload a .travis.yml file into the root of your source code folder, otherwise Travis won't know what to do.
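As a small illustration (assuming an npm/Gulp project, which is not spelled out in the answer), you could stamp the build number into the package version from .travis.yml without tagging the repo:

language: node_js
node_js:
  - "node"
script:
  # write e.g. 1.0.<build number> into package.json, without creating a git tag
  - npm version "1.0.$TRAVIS_BUILD_NUMBER" --no-git-tag-version
  - npm test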
Use bumped for release versioning. When you're satisfied with the changes in master, run:
bumped release <major|minor|patch>
After you push the changes, either directly or through a release PR, you can check for the presence of new tags in Travis CI and publish the package to the registry automatically.
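For instance, if the package is published to npm (an assumption; the email and encrypted token below are placeholders produced by travis encrypt), a tag-triggered deploy section in .travis.yml could look like:

deploy:
  provider: npm
  email: you@example.com              # placeholder
  api_key:
    secure: "<encrypted npm token>"   # placeholder from travis encrypt
  on:
    tags: true                        # only publish builds for pushed tags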
If every pull request ends up with your end user without you thinking about the impact of the changes, then your version numbers have no meaning.
You don't give your users a way to know whether a release is a major change that breaks compatibility or a bug fix, and you don't allow them to take updates without worrying about backward compatibility.
Currently, the commit ID is your version number.
If you want to give meaning to your version numbers, then you have to think about the impact of your pull requests on the end user (http://semver.org/). You have to choose a version number for a specific PR or a group of PRs.
So basically, since you have to 'think' of a certain version number for a specific version that you want to deliver, you can't fully automate this process.
Release/tag creation is the way to go : )
You can accomplish this by setting up a script that creates a ~/.netrc file to access the repository. In this file you can specify something like:
machine github.com
login <github-access-token>
password x-oauth-basic
Instead of putting in your credentials, you pass a GitHub access token. You can use travis encrypt to register it in the .travis.yml file and export the variable for your script's use. From there, in your script, you can issue regular git commands such as:
git add <some file>
git commit -m "This is $TRAVIS_BUILD_NUMBER"
git push origin <branch>
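Putting those pieces together, a hypothetical after_success section (assuming a GITHUB_TOKEN environment variable registered via travis encrypt; the tag scheme is just an example) could write the ~/.netrc and push a tag named after the build number:

after_success:
  - echo -e "machine github.com\n  login $GITHUB_TOKEN\n  password x-oauth-basic" > ~/.netrc
  - git config user.name "Travis CI"                 # placeholder committer identity
  - git config user.email "builds@travis-ci.org"
  - git tag -a "v1.0.$TRAVIS_BUILD_NUMBER" -m "Build $TRAVIS_BUILD_NUMBER"
  - git push origin "v1.0.$TRAVIS_BUILD_NUMBER"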