We have already implemented SonarQube on an Azure VM. Due to cost and maintenance overhead, we are thinking about moving to SonarCloud. We understand the basic differences from a pricing and LOC point of view; feature-wise, I am looking for the major differences.
Question: We have observed that in SonarQube we can set the new code definition on any branch, but I could not find the same in SonarCloud: it only allows me to set new code on the master/default branch, and the other branches have no option to set it. Also, if I set the new code definition to "previous version" or "specific analysis", the build fails for every branch except master/default.
How can we set a new code rule for branches?
For SonarCloud, the prime focus is on the developer’s workflow and bringing value to the development team within their existing ALM environment. Thus, the ‘Enterprise’ use-case is not currently addressed by SonarCloud.
SonarCloud is hosted by SonarSource on AWS, and your code is stored in a secured private subnet; whether that is acceptable is your choice (we prefer to keep our code in our own infrastructure).
For Enterprise use-cases, explore SonarQube
SonarQube Enterprise Edition (EE) includes a few management features that may be valuable to your organization. SonarQube, along with a supported database, is installed on your own on-site servers or in a self-managed cloud environment.
SonarQube is an enterprise-ready application with all the required configuration options, whereas SonarCloud is missing enterprise-level functions such as authentication, portfolios, global settings, branch-level new code settings, and housekeeping. That is why you are unable to set the new code analysis at the branch level.
SonarCloud is meant for smaller projects that focus on master, where your application is built and deployed only from the master/release branch; the rest of the branches are treated as short-lived branches, for which there are far fewer options to play with.
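For comparison, if you stay on SonarQube (Developer Edition or above, which adds branch analysis), the new code definition can be set per branch, either from the project's New Code settings page or through the web API. Below is a minimal sketch of my own, assuming SonarQube 8.x or later, a hypothetical project key and branch name, and a user token with administer permission on the project; verify the endpoint and parameters against your server's /web_api documentation.

# Sketch: set the New Code definition for one branch via the SonarQube web API.
# Assumptions (not from the original post): SonarQube 8.x+ with branch analysis
# (Developer Edition or above), the api/new_code_periods/set endpoint, and a
# user token with project administration rights -- confirm on your own server.
import requests

SONAR_URL = "https://sonarqube.example.com"  # hypothetical server URL
TOKEN = "squ_xxxxxxxxxxxxxxxx"               # hypothetical user token

response = requests.post(
    f"{SONAR_URL}/api/new_code_periods/set",
    auth=(TOKEN, ""),                        # token as username, empty password
    data={
        "project": "my-project",             # hypothetical project key
        "branch": "develop",                 # branch to configure
        "type": "NUMBER_OF_DAYS",            # or PREVIOUS_VERSION / SPECIFIC_ANALYSIS / REFERENCE_BRANCH
        "value": "30",
    },
)
response.raise_for_status()
print("New Code definition updated for branch 'develop'")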
In terms of licensing and services, both products cover essentially the same languages (SonarCloud doesn't support PL/I, RPG or VB6) and support similar functions. SonarCloud additionally provides the hosted server and database to store your code and analysis results, yet even with those services it is still much cheaper than SonarQube.
You can go through the links below for more details.
https://blog.sonarsource.com/sq-sc_guidance
https://sonarcloud.io/documentation/user-guide/new-code/
I have a software project which is currently hosted on BitBucket. I would like to implement a CI/CD pipeline which would have to run on local agents for build/test/deploy. The runners would also have to be compatible with Windows 7/10 (x86/x64) and Linux (x86/x64/arm64/armv7). I am pretty new to DevOps, but after a thorough search I came up with two options: GitHub and GitLab. Can you tell me which one would be better, with some advantages/disadvantages of each? Thanks a lot.
My recommendation would be to go with GitLab, for some of the following reasons.
GitLab CI has been on the market for much longer than GitHub Actions, which was announced in November 2019; you can see some feature comparisons on the GitLab blog here.
When you are getting started, it is much easier to navigate the GitLab GUI to configure all the tools you need for DevOps, compared with GitHub's GUI, which is somewhat harder to navigate due to the number of other tools available on GitHub.
In addition, GitLab is primarily focused on improving DevOps, and as a result they have integrated a number of features over time that make the entire DevOps process much smoother than on GitHub, which only got started in 2019.
Also, there are plenty of templates available to get you started on GitLab, which is not the case on GitHub. Plus, these templates cover a wide range of languages, which I am sure will cover your project requirements.
Ease of access to CI within GitLab: in addition to an easy-to-navigate GUI, GitLab has all the tools necessary for DevOps bundled in one location, so every DevOps feature you need is accessible in one place; on top of that, they have a YAML template available that can help you get started quickly.
Finally, there are many more features within GitLab, largely because it has been on the market since 2011 or 2012, compared to GitHub Actions, which arrived in 2019.
There are, however, some major similarities that I would also like to point out, which I believe could make your transition easier, or help in case you want to try out both tools and judge for yourself.
Both GitHub Actions and GitLab CI are built-in tools.
Both GitHub and GitLab use the same Git commands, so there will be no learning curve for you in terms of managing and collaborating on changes to your project.
To promote CI/CD for Analysis Services tabular models (SSAS, Azure AS or Power BI datasets), I always recommend that people use Tabular Editor with Azure DevOps. One popular feature of Tabular Editor's CLI is that it can perform a schema check, in which Tabular Editor connects to the data source defined within the tabular model in order to validate the partition queries against the actual columns specified in the model metadata.
Microsoft recommends the use of the MSOLEDBSQL provider (1,2) for SQL Server-based data sources. Unfortunately, this provider is not available on Microsoft-hosted build agents in Azure Pipelines (neither vs2017-win2016 nor windows-2019).
Unfortunately, the installer for MSOLEDBSQL requires admin permission, so I don't think that we can install the driver as part of our pipeline.
One workaround is to use Tabular Editor's scripting functionality to temporarily change the data source to use for example the SQLNCLI provider when performing the schema check. However, it feels like the missing MSOLEDBSQL driver on the build agents is an oversight on Microsoft's part, especially considering that they're recommending the use of this driver for production purposes.
Is there any way we can have the MSOLEDBSQL driver available on a Microsoft-hosted Windows-based build agent?
Is there any way we can have the MSOLEDBSQL driver available on a Microsoft-hosted Windows-based build agent?
The Virtual Environments repo contains the source used to create the virtual environments for GitHub Actions hosted runners, as well as the VM images of Microsoft-hosted agents used for Azure Pipelines. To file bug reports, or to request that tools be added or updated, please open an issue using the appropriate template.
So I think you can open a Tool Request here using the given template; the team there will then consider and review your feedback.
In addition: as a temporary workaround, you can consider installing a self-hosted agent on your local machine, so that you can run the pipeline in your local environment (with more control to install the dependent software needed for your build and deployment).
GitHub announced an upcoming feature, GitHub Actions.
I'm convinced of the benefits of CI tools like Jenkins for automated building and testing, which is what GitHub Actions is aimed to be used for in the future.
Having a repository on GitHub and using an external CI tool has the huge benefit of allowing you to move the repository to another Git hosting platform (or even local) without rewriting the whole CI process. With GitHub Actions, you're more or less tied to the GitHub ecosystem.
I assume the integration of GitHub Actions will be smoother in the native environment, but are there any other advantages or disadvantages besides that?
I've been working with GitHub actions full time for a couple of months now.
It's still early days (June 2019), but here's my list:
Advantages:
GitHub actions are just consecutive docker runs. Very easy to reason about and debug.
Reproducing the build environment for container-based Travis is possible, but more difficult. On GitHub actions it's just a docker build and docker run away.
The individual actions in a workflow are isolated by default.
You can use a completely different computing environment for, say, compilation and testing.
Travis CI (and I think other "traditional" CI) would run all "stages" (~ actions) in the same computing environment.
Again, GitHub actions are much easier to reason about and debug.
The main.workflow spec (a subset of HCL and really just a directed acyclic graph) is open source.
The whole thing is a pretty thin wrapper around Docker anyway, so platform lock-in is arguably minimal.
There are already open-source reimplementations of GitHub actions, such as act for local testing.
You have ready access to the GitHub API with (somewhat limited) authentication out of the box.
There might be a vibrant community (marketplace?) where people can share actions.
For example, I'm reusing deploy actions built by different people in different ecosystems.
A directed acyclic graph (DAG) and the visual editor for main.workflows is perhaps a good way to model CI/CD in particular and workflows in general.
Takes some getting used to, but generalises well.
GitHub actions can do a whole lot more than just CI! You've got basically the whole API at your fingertips as inputs and outputs.
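As a small illustration of that last point, a step can talk to the GitHub REST API directly. This is only a sketch of my own, assuming the workflow exposes the GITHUB_TOKEN secret to the step as an environment variable (alongside the standard GITHUB_REPOSITORY variable); the endpoint and field shown are just an example.

# Sketch: calling the GitHub REST API from inside an action step.
# Assumes GITHUB_TOKEN has been passed into the step's environment and that
# GITHUB_REPOSITORY (e.g. "owner/name") is set, as it is for workflow runs.
import os
import requests

token = os.environ["GITHUB_TOKEN"]
repo = os.environ["GITHUB_REPOSITORY"]

response = requests.get(
    f"https://api.github.com/repos/{repo}",
    headers={
        "Authorization": f"token {token}",
        "Accept": "application/vnd.github.v3+json",
    },
)
response.raise_for_status()
print("Open issues:", response.json()["open_issues_count"])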
Disadvantages:
GitHub actions (still?) have some surprisingly fundamental limitations at this point (June 2019).
No native caching.
You get image and layer caching (it's complicated), but nothing else.
For build artefacts, you have to roll your own cache (via AWS, Azure, etc.), which can be a lot of work. (You can see a hacky setup here.)
Surprisingly, no support for pull requests from forks.
It's again a bit complicated, and understandable from a security standpoint, but it's currently not possible to run actions a) against the secrets of the receiving repo of a fork PR (base), and/or b) against the would-be merge result of a fork PR (which is what Travis does).
For a workflow that involves forks, that makes GitHub actions largely unusable as a CI/CD tool.
Single platform: it's just whatever you can run inside docker, so some Linux distro. That seems unlikely to change, but might be an acceptable limitation.
You can always add an action to call other cross-platform CI/CD services.
The documentation is still pretty sparse.
Not much in the way of best practices or scaffolding.
The quality and breadth of published GitHub actions (at least on the marketplace) is still pretty low / limited.
We'll see whether that takes off.
No great way to unit-test actions. (I hacked something together, but I'm not too sure about it).
Having a repository on GitHub and using an external CI tool has the huge benefit of allowing you to move the repository to another Git hosting platform (or even local) without rewriting the whole CI process.
With GitHub Actions, you're more or less tied to the GitHub ecosystem.
Yes, and starting November 2019, slightly less so:
See Joe Bourne's announcement "Self-hosted runners for GitHub Actions is now in beta".
You can have self-hosted runners, which means:
Your environment, your tools,
Any size machine or configuration,
Secure access and networking,
Large workload support.
To support using self-hosted runners in your workflows, we’ve expanded the experience of using the runs-on key.
When registering your self-hosted runners, they’re each given a read-only label self-hosted which you can use with runs-on.
Here’s an example:
# Use Any available Self-hosted runners connected to repo
runs-on: self-hosted
See the documentation at "Hosting your own runners".
I'm a developer who's transitioning into DevOps. By observation, I've noticed that a lot of dev shops have started using Octopus Deploy and Azure DevOps Services (AzDo, formerly VSTS), or they are starting new projects to set up DevOps CI/CD pipelines and they spec to use both tools.
I've been through some quick training for both tools and though they aren't perfectly the same, AzDo seems to offer all of the same features as Octopus Deploy.
So, my question is: if a company is already using AzDo for much of their version control, or anything CI/CD pipeline-related, why would you use Octopus? What benefit does using Octopus for your builds and deploys offer over AzDo?
Note, I am very, very new to DevOps. I'm just asking because at the "10,000-foot view" there doesn't seem to be any reason for Octopus if you're already using AzDo. I mention Octopus Deploy by name because I see it come up frequently; however, I assume there could be other tools that serve the same purpose of automated building and deploying that might also integrate with AzDo. AzDo offers build and deploy built into one product, so why split out the work?
Let me preface why I like to both build and deploy with VSTS:
Same permissioning end to end
Line of sight across the build and deployment, end to end
Reasons I favor Octopus Deploy over VSTS Release:
Ability to upload packages/artifacts
External ones that may be one-off packages to be deployed for a specific release
Target Definition
When you create the targets or servers you are deploying to, you can add a target to one or multiple environments and assign tags/roles to it. What does this mean? More flexible server definition: rather than strictly assigning agents to a pool or servers to a deployment group, you can allow a target to span multiple environments (i.e. a testing server that spans your Dev and Test environments and only gets triggered on steps that are defined for that role). I realize you can accomplish similar things in VSTS, but in my opinion it's far more cumbersome.
Variable Definition
Variables can be grouped at a global level and grouped by a specific pipeline/process (that part is similar to VSTS). Variables can also be grouped or scoped by environments or roles (above), so you can have different variable values per role per environment; both super granular and flexible. A place this comes in handy is if you have a backend server with a connection string and maybe two content delivery nodes (role: content delivery) that get slightly different values than the backend server. At the moment, I do not know (other than creating new environments) how one would accomplish the same in VSTS.
Process Definition
All of the above comes together in the process definition of Octopus Deploy. The super flexible and granular variable and target definitions allow you to focus on the actual deployment process rather than getting hung up on the nuances of the UI and its limitations. One example would be defining a process where step one takes a node out of a load balancer from a central server, step two deploys code to delivery node one, step three puts it back into the load balancer, step four takes node two out of the load balancer from a central server, step five deploys code to node two, and the last step puts it back into the load balancer. I realize it's a very simple hypothetical, but within Octopus Deploy it's one steady process filtered to execute on specific roles, whereas within VSTS you would have to break that down into different agent phases and probably pipelines.
The above are really the biggest points I see for using Octopus Deploy over VSTS Release. Now why would someone use VSTS to build and OD to release/deploy? There are a lot of different factors that go into it. Some are corporate drivers, like having an enterprise Git client whose permissions are handled through MSDN. Sometimes it's a project management driver of having work items tied tightly to commits and builds, but with the added flexibility that OD brings to the table for free/minimal cost.
Hoping this helps shine a little light on why some people are crossing streams and using both VSTS and OD.
A lot of good points have been made already, but it really comes down to what you need. I would venture a lot of us started using Octopus before Release Management was really a thing.
We use VSTS for all our source control and builds and then all our deployments are handled through Octopus.
When we started evaluating tools, VSTS had nothing for deployments. Even now, they are still playing catch up to Octopus in feature set.
If you are doing true multi-tenanted and multi-environment deployments, I don't think VSTS really compares. We are using Octopus with around 30 tenants, some on Azure, some on premise. We deploy a mix of web and desktop apps. We are even using Octopus to deploy some legacy VB6 and winforms applications.
Multi-Tenancy (critical for us)
VSTS added Deployment Groups a while ago which sound pretty similar to Octopus Environments before multi-tenancy was implemented. Before Octopus had true multi-tenancy (it's been around a while now), people would work around it by creating different environments per tenant, like "CustomerA - Dev", "CustomerA - Prod", etc. Now you just have your Dev/Test/Prod environments and each tenant can have variables scoped to those individual environments.
Support
Documentation is excellent and it's really easy to get up and running.
The few times I've needed to contact someone at Octopus, they've answered very quickly and knowledgeably.
Usability
Having the Octopus dashboard give us an overview of all our projects is amazing. I don't know of any way to do this in VSTS without going into each individual project.
Octopus works great on a mobile device for checking deployment status and even starting new deployments.
Community
Octopus works with their customers to understand what they want; they often release draft RFCs and have several times completely changed course based on customer feedback.
If we know what sort of applications you are deploying, and to what kinds of environments, we would be able to better tailor our responses.
The features you see today in VSTS weren't there a few years ago, so there might be an historical reason.
But I want to state here some non-opinionated reasons that may lead an organization to opt for separate tools instead of one.
Separate responsibility and access levels
Multiple CI tools are in use across dev teams (orgs that are also using Jenkins, TeamCity, or something else) and there is a need to standardize and control deployments
An org needs a feature available only in Octopus (maybe Multi-tenancy)
Octopus does a great job of focusing on deployments. Features reach Octopus before VSTS, and support is local and responsive. That, and you never run out of build/release minutes!
Seriously though, I just like to support smaller companies where possible and if all features were equal, I'd still pick them.
The big reason in the past was that TFS on-prem and early VSTS did NOT support non-Microsoft (non-.NET) code very well, if at all. You could utilize the source control and work item features of TFS and then use Octopus/Jenkins etc. as the build/release parts to cover code that TFS didn't really know what to do with.
Also, the release pipelines used to be very simplistic and not that useful, whereas the other products were all plugin-based and could do (almost) anything you needed them to. Most of that has changed, so VSTS is much better at working with non-Microsoft code bases than it used to be. Over time, integrations get created inside a company's walls, and undoing those decisions can be more painful than just having "too many" tools. Also, I feel there are just more people out there familiar with those tools, since they have been mature longer and cover a larger part of the development world than VSTS has in the past.
To fully implement CD you need both. VSTS runs tests and is a build server. OD isn’t. VSTS is light on sophisticated application installations. And if you are provisioning environments, IaC style, you need Terraform in addition. Don’t try to shoehorn everything into a single tool. DevOps requires a whole ecosystem. The reasons are not historical.
I'm advocating using Visual Studio Team Services for our source control solution, and have actually started doing so. However, my manager, who is somewhat apprehensive when it comes to cloud-hosted storage and services, wants to know what our contingency plan is in the event of Team Services ceasing to be accessible for whatever reason.
I've pointed out that we have our source code on our developers' computers, in their mapped workspaces, but admittedly if we ended up with just that and no access to Team Services we'd certainly be in a bit of a bind. They might all be working on different parts of the same solution, and we wouldn't be able to check all of their changes back into the central repository or merge changes made in separate branches. We also wouldn't have access to the comments associated with previous check-ins, or our backlog, tests, etc.
So, the question is, is there a way to backup everything that we're hosting in Team Services so that, in the event of something going wrong, we'd be able to restore all of that to a locally-hosted installation of TFS (or somewhere else)?
I'm a bit late to the party, but we developed a Team Services backup tool. We run it as a scheduled task once a night, and it simply clones all our repositories to disk.
Taken from this blog:
We use the VSO REST API to query our VSO account and get all the data we need. Since in VSO you can only have one Team Project Collection, we retrieve all the team projects of the default collection. Each of these team projects can have multiple repositories that need to be backed up. A folder is created for each team project and saved to a location on disk that can be configured in the app.config. When the team project folder is created, the task loops over each repository in the team project and creates folders for each repository.
You can also fork it on GitHub here
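For anyone who prefers to roll their own version of this, here is a rough sketch of the same loop against what is now the Azure DevOps REST API. The organisation URL, backup folder and API version are placeholders of mine, and it assumes a personal access token with read access plus Git credentials already configured for the clones; the real tool linked above is more complete.

# Sketch of the backup loop described in the quote above (my own illustration).
# Placeholders: organisation URL, backup folder, api-version. Requires a PAT
# with read access and Git credentials configured for HTTPS clones.
import os
import subprocess
import requests

ORG_URL = "https://dev.azure.com/your-org"   # hypothetical organisation/account
PAT = os.environ["AZDO_PAT"]                 # personal access token
BACKUP_ROOT = r"D:\Backups\TeamServices"     # configurable backup location

auth = ("", PAT)                             # PAT goes in the password slot

# 1. Enumerate the team projects in the account.
projects = requests.get(
    f"{ORG_URL}/_apis/projects?api-version=6.0", auth=auth
).json()["value"]

for project in projects:
    project_dir = os.path.join(BACKUP_ROOT, project["name"])
    os.makedirs(project_dir, exist_ok=True)

    # 2. Enumerate the Git repositories in each team project.
    repos = requests.get(
        f"{ORG_URL}/{project['id']}/_apis/git/repositories?api-version=6.0",
        auth=auth,
    ).json()["value"]

    # 3. Mirror-clone each repository into its own folder.
    for repo in repos:
        target = os.path.join(project_dir, repo["name"] + ".git")
        subprocess.run(
            ["git", "clone", "--mirror", repo["remoteUrl"], target], check=True
        )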
There's no out-of-the-box backup ability.
Now, if you are only referring to source control, and not work items, pull requests, builds, test plans or anything else that the service offers, then I'd suggest you migrate your code over to git.
With Git, every developer will have a complete copy of the source repository, including all history and commit comments. From there, it's a simple task to push the Git repository to a different Git hoster (such as Bitbucket or GitHub) and make that your new centrally hosted Git repository.
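As a rough sketch of what that push looks like (the remote URLs below are placeholders of mine, and it assumes git is installed and credentials for both remotes are already configured), a mirror clone plus a push of branches and tags is all it takes:

# Sketch: keep a second hoster in sync as a cold-standby copy of a repository.
# The two remote URLs are hypothetical; credentials are assumed to be handled
# by a credential manager or embedded in the URLs.
import subprocess

SOURCE = "https://dev.azure.com/your-org/your-project/_git/my-repo"  # placeholder
BACKUP = "https://github.com/your-org/my-repo.git"                   # placeholder

subprocess.run(["git", "clone", "--mirror", SOURCE, "my-repo.git"], check=True)
subprocess.run(["git", "-C", "my-repo.git", "remote", "add", "backup", BACKUP], check=True)
subprocess.run(["git", "-C", "my-repo.git", "push", "backup", "--all"], check=True)   # branches
subprocess.run(["git", "-C", "my-repo.git", "push", "backup", "--tags"], check=True)  # tags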
On a historical note, Visual Studio Team Services at one point offered a data export for a period of time. You might want to add a vote or three to this related UserVoice idea to help raise the importance of the feature with Microsoft.
Side comment: The business risks in using Visual Studio Team Services will come from either Microsoft shutting down the Visual Studio Team Services service or that the underlying Azure infrastructure has such a catastrophic failure that your Visual Studio Team Services account is unrecoverable. Both of those are extremely low risk, and very likely lower than the risks you'd have running TFS on-premises, in your own data centre, unless of course, your infrastructure and staff are better than Microsoft's :-)
Not a full VSTS backup in the sense of a restore of the service, but you can take a full zip from the root down using the Code website. Right-click the root folder and it has a zip download option. Pretty neat feature.
The easiest way to back up everything is to use something like the TFS Integration Platform to periodically pull off all your data into an on-premises TFS solution. I've set this up using an Azure VM that we turned off when we weren't actively backing up, which makes it really low cost. For more info on using the TFS IP with Team Services, see this: http://nakedalm.com/migration-from-tf-service-to-tf-server-with-the-tfs-integration-platform/