Does Azure Artifacts upstream behaviour keep security patches from being consumed?

I am referring to the construction of the set of available packages as documented here.
Consider the setup at the very bottom, i.e. Fabrikam builds on Contoso, which builds on AdventureWorks. According to the documentation, Fabrikam can only pull packages from Contoso that Contoso itself has already pulled from AdventureWorks.
Now suppose that AdventureWorks creates a security update for one of its packages and makes it available immediately. As per the documentation, Fabrikam will not see this new package version unless Contoso pulls it first. That would mean that the security of Fabrikam also hinges on the response time of Contoso. If Contoso never pulls the new package version, Fabrikam is never going to get it.
Am I misinterpreting the documentation? This seems like a blatant security problem to me.
I would expect that new package versions can be pulled through Contoso regardless of whether Contoso has pulled them itself first.

You are correct: Azure Artifacts upstreams are not transitive.
If it's possible and reasonable to do so, Fabrikam could also directly upstream to AdventureWorks.

Related

How to delete old snapshot artifacts from GitHub packages

I have a GitHub workflow which builds and deploys a snapshot version of a library as a GitHub package, e.g. mycompany.mytool.1.0.0-SNAPSHOT.jar. Whenever I make a new build and deploy, a new asset is created instead, e.g. mycompany.mytool.1.0.0-20210723.145233-1.jar, which is then associated with the SNAPSHOT version. This all seems to work, and I can access mycompany.mytool.1.0.0-SNAPSHOT.jar without a problem.
My question now is: how can I get rid of all these older versions of this jar? Actually I just want to keep the latest version. I can delete them manually via the web interface, but that is a more than awkward task which I would somehow like to automate.
This is not possible as of this writing. GitHub staff member Jamie Cansdale wrote this in their community forum:
SNAPSHOT versions are exposed as artifacts inside a regular version. There isn’t an API for cleaning up artifacts, only whole versions.
(source)
This means that a single SNAPSHOT version (like 1.0.0-SNAPSHOT) will accumulate all the builds you make, and all the artifacts will show up on the Assets list to the right of the web page.
The only practical solution I can think of is to delete the whole version from a script before publishing each build's artifacts (a sketch of such a script follows below). Then you'd have the effect of a single set of artifacts stored under the 1.0.0-SNAPSHOT version name.
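If you script it, a minimal Python sketch could look like the following; it keeps only the most recent version and deletes the rest. It assumes the GitHub Packages REST API endpoints for listing and deleting package versions, a token with the delete:packages scope in GITHUB_TOKEN, and placeholder organisation/package names (adjust for user-owned packages and pagination as needed):

    # Rough sketch, not an official tool: keep only the newest version of a Maven
    # package on GitHub Packages and delete the rest. Assumes the GitHub Packages
    # REST API list/delete endpoints; ORG and PACKAGE are placeholders.
    import os
    import requests

    ORG = "mycompany"                      # hypothetical organisation
    PACKAGE = "mycompany.mytool"           # hypothetical package name
    API = f"https://api.github.com/orgs/{ORG}/packages/maven/{PACKAGE}/versions"
    HEADERS = {
        "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
        "Accept": "application/vnd.github+json",
    }

    versions = requests.get(API, headers=HEADERS).json()
    versions.sort(key=lambda v: v["created_at"], reverse=True)   # newest first

    for version in versions[1:]:           # keep the newest, delete the rest
        resp = requests.delete(f"{API}/{version['id']}", headers=HEADERS)
        print(f"deleted version {version['name']}: HTTP {resp.status_code}")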
However this solution is not ideal: public package versions cannot be deleted if they are popular enough (probably to avoid squatting attacks):
If the package is public and the package version has more than 5,000 downloads, you cannot delete the package version. In this scenario, contact GitHub support for further assistance.

How to Accomplish This Branching and Deployment Strategy Using TeamCity and Octopus

I have been researching and am trying to figure out the best branching and deployment strategy to accomplish the requirements below. Maybe I’m missing something but it is more complicated than it seems. Ideally, we’d just have one permanent branch, ‘master’, that could have specific commits tagged to mark releases to production.
Our current strategy is based on Git Flow and has permanent branches ‘master’ (only has releases to production) and ‘develop’. The primary thing that complicates using a multiple permanent-branches model is the concept of “promoting” the same build from the staging environment to production. Currently, this needs to be done in a separate source code branch (deployments to staging come from ‘develop’, deployments to prod come from ‘master’).
Tools: Git (VSTS), TeamCity, Octopus Deploy
Requirements (feature and hotfix lifecycles):
All code is reviewed via pull requests (enforced via branch policies)
All code gets deployed to a staging environment for testing
We can quickly go back to any snapshot of code that was deployed previously
If testing is successful, then the same build can be “promoted” from our staging environment to production (no need to build again)
Features accumulate over time before pushing out to production as a single release. Hotfixes have to be able to go through without getting caught up in the "all or nothing" next regular release.
I like the idea of having one permanent branch with tags (re: The master/develop split is redundant, http://endoflineblog.com/gitflow-considered-harmful), but having additional permanent branches may better facilitate deploying to different lifecycles/versions (feature and hotfix) to Octopus.
I have been wrestling with how best to pull this off and I may be over complicating things. Any feedback is appreciated.
It seems you have a number of questions and they are quite broad... I'll add some comments to each of your requirements as a conversation starter, but this whole thread might get blocked by moderators as it is definitely not the style of questions SO was made for.
All code is reviewed via pull requests (enforced via branch policies)
I haven't looked at VSTS for ages, but I'd expect it already supports branch policies and pull requests, so I'm not sure there's anything you need here other than configuring the settings in your repositories.
In case VSTS does not support that, you might consider moving to a tool that does, e.g. Bitbucket or GitHub. Both of these have an on-premises version in case you can't (or don't want to) use the cloud-hosted version.
All code gets deployed to a staging environment for testing
You achieve that by setting up lifecycles in Octopus Deploy, to make sure deployments/promotions follow the sequence you want.
We can quickly go back to any snapshot of code that was deployed previously
You already have source control, so all you need now is traceability from the code that is deployed in an environment back to the deployment version in Octopus Deploy, the build job in TeamCity, and the branch and exact commit in your source control.
There are a few things you can do to achieve that:
1. Define a versioning scheme that works for you. I like to use semantic versioning: "Major" and "Minor" are defined by the developers, and the "Patch" is the auto-incremented number from TeamCity (%build.number%). Every git push builds the code and generates a unique build version (%major%.%minor%.%build.number%).
2. As part of the build steps in TeamCity, before you compile the code, make sure your source files are patched with the version number assigned by each build, the commit hash from your source control, and the branch name (a sketch of this step follows below). E.g. if you are using .NET, make sure all the AssemblyInfo.cs files are updated with that version, so that the version is embedded in the binaries. This allows anyone to query the version by looking at the properties of the binary files, and also allows you to display the app version in the app itself (e.g. status bar, footer, caption, about box, etc.).
3. Have TeamCity tag your source control with the version number of every build, so you can quickly see it in your source control history. You probably only want to do that for the master branch, though, since that is the branch you care about.
4. Have Octopus tag your source control with the deployment version number and the environment name, so that you can quickly see (from your source control) what got deployed where.
Steps 1 and 2 are the most important ones, really. 3 and 4 are just nice-to-have. Most of the time you'll just open the app in the environment, check the commit hash in the "About", and do a git checkout to that commit hash...
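As an illustration of step 2, here's a minimal Python sketch of a pre-compile build step that stamps the version, commit and branch into the AssemblyInfo.cs files. It assumes TeamCity's standard BUILD_NUMBER and BUILD_VCS_NUMBER environment variables; BRANCH_NAME is a placeholder parameter you would have to pass in yourself:

    # Minimal sketch of a pre-compile TeamCity step that stamps build metadata into
    # AssemblyInfo.cs files. BUILD_NUMBER and BUILD_VCS_NUMBER are standard TeamCity
    # environment variables; BRANCH_NAME is a placeholder you would pass in yourself
    # (e.g. from %teamcity.build.branch%).
    import os
    import re
    from pathlib import Path

    version = os.environ["BUILD_NUMBER"]        # the build version assigned by TeamCity
    commit = os.environ["BUILD_VCS_NUMBER"]     # commit hash of the sources being built
    branch = os.environ.get("BRANCH_NAME", "unknown")

    for path in Path(".").rglob("AssemblyInfo.cs"):
        text = path.read_text()
        # Overwrite the assembly version attributes with the build version.
        text = re.sub(r'AssemblyVersion\("[^"]*"\)', f'AssemblyVersion("{version}")', text)
        text = re.sub(r'AssemblyFileVersion\("[^"]*"\)', f'AssemblyFileVersion("{version}")', text)
        # Record commit and branch so they are visible in the binary's properties.
        text = re.sub(r'AssemblyInformationalVersion\("[^"]*"\)',
                      f'AssemblyInformationalVersion("{version}+{branch}.{commit}")', text)
        path.write_text(text)
        print(f"patched {path} with {version} ({branch} @ {commit[:8]})")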
If testing is successful, then the same build can be "promoted" from our staging environment to production (no need to build again)
Again, Octopus Deploy lifecycles cover this; just make sure anything that differs between environments is defined in the application's configuration file, which is updated during the Octopus deployment using environment-specific variables.
In terms of branch workflow, this last requirement makes it mandatory to merge changes into master (or whatever your "production" branch is) before the deployment lifecycle can begin.

Nuget Gallery with multiple feeds

I recently installed NuGet Gallery (https://github.com/NuGet/NuGetGallery) as a repository. Ideally I would like to create multiple feeds so that I could differentiate the NuGet packages that will be reused in other projects (DLLs, contracts, etc.) from the packages we use to deploy our projects to the production environment.
I know I can achieve this by creating multiple instances of NuGet Gallery, but this seems a bit of an overkill to me; it would mean two websites and two databases. I am also aware that MyGet provides this functionality, but I will not be able to get approval for the purchase. I am also aware that TeamCity contains its own feed server, but it doesn't allow this multiple-feed scenario, nor does it perform well enough to be used at a large scale.
In a nutshell, the ideal deployment scenario would be as follows:
TeamCity generates a deployment package or a dll/contract package, depending on the build scenario.
TeamCity publishes deployment packages to a NuGet Gallery deploy feed (say: nugetgallery.server.com/deploy/api/v2).
TeamCity publishes dll/contract packages to a NuGet Gallery dev feed (say: nugetgallery.server.com/dev/api/v2).
Octopus always searches for packages in nugetgallery.server.com/deploy/api/v2.
Devs / TeamCity search for packages in nugetgallery.server.com/dev/api/v2.
This way I keep things clean, and I can even go as far as creating a third type of feed that only contains release packages, so that I can be sure nothing would ever be deployed to production if it wasn't on that feed. A rough sketch of the publish step under this scheme is shown below.
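For illustration, this is roughly what I have in mind for the TeamCity publish step; the feed URLs, the API key variable and the package-naming convention are placeholders of mine, not anything NuGet Gallery prescribes:

    # Sketch of a TeamCity publish step that routes packages to different feeds by
    # package type. Feed URLs mirror the hypothetical layout above; NUGET_API_KEY
    # and the ".Deploy." naming convention are placeholders.
    import os
    import subprocess
    from pathlib import Path

    FEEDS = {
        "deploy": "https://nugetgallery.server.com/deploy/api/v2/package",
        "dev": "https://nugetgallery.server.com/dev/api/v2/package",
    }
    api_key = os.environ["NUGET_API_KEY"]

    def push(package: Path, feed: str) -> None:
        # Shell out to nuget.exe, assumed to be on the build agent's PATH.
        subprocess.run(
            ["nuget", "push", str(package), "-Source", FEEDS[feed], "-ApiKey", api_key],
            check=True,
        )

    for nupkg in Path("artifacts").glob("*.nupkg"):
        push(nupkg, "deploy" if ".Deploy." in nupkg.name else "dev")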
I might have missed some fundamental approach, so alternatives to this one I picked are welcome.
As I couldn't find anything relevant, I ultimately gave up and went with the two-server solution. I struggled a lot to find any documentation on what functionality NuGet Gallery really has.
Right now we have something like deploy-nuget.server.com and dev-nuget.server.com: separate URLs, IIS instances, SQL instances and folder locations.
For someone who might look into this in the future, one of the solutions that could work is to make a private repository based on the user. Unfortunately, in my case that would not be enough, as I would also want the packages to be stored in different locations so we could enforce different backup policies based on the type of package. Another option would be to fork the project, but from my previous experience this never ends well, as sooner rather than later you will want to upgrade and your custom changes will have to be sorted out somehow.
I understand this is not the idea behind NuGet Gallery, as you are not supposed to delete packages. But we do have some space constraints, so eventually we will remove certain deployment packages that were created for QA environments and which we obviously don't care about anymore.
You can try ProGet. Using this server, you can easily manage multiple NuGet feeds.
It also provides a free edition which supports all features.

Promoting several modules (integration -> milestone) in ivy

Ivy is great for managing dependencies, but it isn't meant to handle the entire software lifecycle across many modules. That said, it does have several features that seem to support it (such as the status and branch attributes), and the ivy best practices blurb alludes to being able to promote integration revisions to milestone or release, "with some work".
Unfortunately I haven't found definitive guidance on how to manage the dev -> test -> deploy cycle. Here are some things I want to achieve:
(Given that devs typically work across many modules in a local workspace)
Dev can locally publish changes to a module, so that other modules in the workspace can get the updated artifacts.
Dev can designate a version as "ready to deploy to test" with one command.
Tester can designate a version as "ready for prod" with one command.
Dev can rebuild any version from source and the appropriate dependencies are picked up correctly (aka repeatable builds).
Some things I'm fairly clear about are:
The revision status should be used to denote whether that revision is meant only for development, is ready for testing, or is ready for production
The branch attribute should be sufficient to handle different project branches
Here is what I'm grappling with:
How to promote integration builds
Say I have these modules checked out in my workspace, with the following dependencies between them:
a-7.1 depends on b-3.3 and c-2.0
b-3.3 depends on e-1.0
c-2.0 depends on d-1.1
d-1.1 depends on e-1.0
Now I'm happy with module a, and decide to publish a milestone using the checked-out versions in my workspace. What needs to happen in the repo is:
e-1.0-RC1 gets published
d-1.1-RC2 gets published, referencing e-1.0-RC1 as a dependency
c-2.0-RC1 gets published, referencing d-1.1-RC2 as a dependency
b-3.3-RC1 gets published, referencing e-1.0-RC1 as a dependency
Finally, a-7.1-RC2 gets published, referencing c-2.0-RC1 and b-3.3-RC1 as dependencies.
If I try to roll my own for this, I'd probably end up doing some workspace management, ivy.xml find & replace, etc. Before I open that can of worms, I'd like to get some opinions. What's the best way to tackle this?
You can use recursive delivery to publish modules and their dependencies with a higher status.
Using your example:
e-1.0-RC1 gets published with an integration status
d-1.1-RC2 gets published with an integration status, referencing e-1.0-RC1 as a dependency
c-2.0-RC1 gets published with an integration status, referencing d-1.1-RC2 as a dependency
b-3.3-RC1 gets published with an integration status, referencing e-1.0-RC1 as a dependency
a-7.1-RC2 gets published with an integration status, referencing c-2.0-RC1 and b-3.3-RC1 as dependencies.
Finally, you decide to promote a-7.1-RC2 to a milestone status, so you do a recursive delivery (use the delivertarget attribute). This will recursively call the delivertarget for each dependency that has a status lower than milestone and publish it with a milestone status.
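Ivy's deliver task does that recursion for you; conceptually, though, it just walks the dependency graph and re-publishes anything whose status is below the target. A rough Python sketch of the idea, using the modules from this example (this only illustrates the concept, it is not the Ivy API):

    # Conceptual sketch only: mimic what recursive delivery does (walk the
    # dependency graph and promote anything below the target status).
    STATUS_ORDER = ["integration", "milestone", "release"]

    # Dependency graph and current statuses from the example above.
    deps = {
        "a-7.1-RC2": ["b-3.3-RC1", "c-2.0-RC1"],
        "b-3.3-RC1": ["e-1.0-RC1"],
        "c-2.0-RC1": ["d-1.1-RC2"],
        "d-1.1-RC2": ["e-1.0-RC1"],
        "e-1.0-RC1": [],
    }
    status = {module: "integration" for module in deps}

    def promote(module: str, target: str) -> None:
        # Promote dependencies first, then the module itself, skipping anything
        # that already has the target status (or higher).
        for dep in deps[module]:
            promote(dep, target)
        if STATUS_ORDER.index(status[module]) < STATUS_ORDER.index(target):
            status[module] = target
            print(f"published {module} as {target}")

    promote("a-7.1-RC2", "milestone")   # dependencies are promoted before a itself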
The nice thing about this is that you don't need (or want) to have each project checked out in your workspace, just a. This also means that it's much easier to create a deployment pipeline and have your CI server:
run unit tests for a,
build a,
publish a as integration,
deploy a to a System Test environment,
run some System Tests
promote a from integration to milestone (which promotes its dependencies)
deploy a to an Acceptance Test environment,
run some Acceptance Tests
promote a from milestone to release (which promotes its dependencies)
deploy a to production (or upload it to a download site)
At no time does the pipeline need to access the dependent projects, and, since the recursive delivery is generic, when you add or remove dependencies (via your ivy.xml files) you don't need to change anything in your pipeline.
I've marked this answer as a community wiki. Anyone else care to expand on it or correct anything I got wrong?
How do you do this line:
promote a from milestone to release (which promotes its dependencies)
I was planning on doing a retrieve and publish. Is there a better way?

How do you keep track of what you have released in production?

Typically, a deploy to production does not involve just a mere source code update (build), but requires a lot of other important tasks, for example:
Db scripts
Configuration files (different between test and production)
Batch jobs to schedule
Executables to move to the correct path
Etc. etc.
In our company we just send an email to a "Release email address" describing the tasks in order: which changeset needs to be published (TFS), which SPs need to be updated, which db scripts to run, and so on.
I believe there's no magic tool that does these tasks automagically, in order and with rollback included; but there's probably something better than email that helps keep track of releases in production.
Do you have any tools to suggest or practices to share?
When multiple tasks are required to support a full project deployment (and that's frequently the case, in my experience), I'd suggest using a build/deployment tool. I've used Ant in the past with great success, but know others who swear by Capistrano, Maven and others.
Using Ant, I wrote a script that would (a rough sketch of the equivalent steps follows the list):
Pull the specific revision I wanted from my VCS
Create a tarball of the target directory on the remote machine (in case a rollback was required)
Create a MySQL dump file of the database (also for rollback purposes)
Delete the remote directory and SSH the new content just pulled from the VCS
Perform various other logistical operations (setting file perms, ownership, etc.)
Create a release branch on the VCS itself
Create a tag with the appropriate version information so I always had a snapshot of the code base at that moment of deployment.
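Not my original Ant script, but a rough Python transcription of those steps, purely to show the overall shape; every host, path and URL is a placeholder, and it assumes an SVN repository and a MySQL database just for illustration:

    # Rough transcription of the deployment steps above. Every host, path and URL
    # is a placeholder; SVN and MySQL are assumed purely for illustration.
    import subprocess
    from datetime import datetime

    def run(cmd: str) -> None:
        # Run a shell command and abort the deployment if it fails.
        subprocess.run(cmd, shell=True, check=True)

    REPO = "https://svn.example.com/repo"            # placeholder repository
    remote = "deploy@www.example.com"                # placeholder host
    target = "/var/www/app"                          # placeholder target directory
    revision = "1234"                                # revision to deploy
    stamp = datetime.now().strftime("%Y%m%d%H%M%S")

    run(f"svn export -r {revision} {REPO}/trunk build")                              # 1. pull the revision
    run(f"ssh {remote} 'tar czf /backups/app-{stamp}.tar.gz {target}'")              # 2. tarball for rollback
    run(f"ssh {remote} 'mysqldump appdb > /backups/appdb-{stamp}.sql'")              # 3. db dump for rollback
    run(f"ssh {remote} 'rm -rf {target}'")                                           # 4. replace the remote
    run(f"scp -r build {remote}:{target}")                                           #    content
    run(f"ssh {remote} 'chown -R www-data:www-data {target}'")                       # 5. perms/ownership
    run(f"svn copy {REPO}/trunk {REPO}/branches/release-{stamp} -m 'release branch'")  # 6. release branch
    run(f"svn copy {REPO}/trunk {REPO}/tags/release-{stamp} -m 'release tag'")         # 7. version tag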
Hope that helps some. I've written a few blog posts about this that may (or may not) be useful. They're dated now, but the general information should still be solid enough.
Introductory thoughts
Details of how I use Ant for deploying--including scripts
You might be interested in the Team Foundation Build Recipes website, which showcases some build scripts developed using the SDC Tasks Library and the MSBuildTasks library.
How about something like SVN? You can put all of your code in a repository, and then when you are ready to release to production, bring your stuff over from test. Then you'll have very specific revisions with information on what happened. SVN keeps track of all of it.