I am pretty new to the development community and specifically to DevOps practices. As part of a project we are trying to integrate SonarQube with GitLab. I did some R&D on SonarQube and GitLab CI (Continuous Integration), and it looks like a plugin has been released for GitHub and SonarQube, but not for GitLab.
How realistic is it to configure GitLab with SonarQube to inspect code quality on every merge request, and what is the best practice for integrating these two pieces?
Thanks
You don't really need a plugin.
Add something like this to your .gitlab-ci.yml:
stages:
  - build

build_master:
  image: maven
  stage: build
  artifacts:
    paths:
      - target/*.jar
  script:
    - mvn package sonar:sonar -Dsonar.host.url=https://sonar.yourdomain.tld/
  only:
    - master
Every push to master will then be analyzed!
(This example is for a Java project...)
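If your SonarQube server requires authentication, the same job can pass a token from a CI/CD variable instead of committing credentials. This is a sketch: the variable name SONAR_TOKEN is an assumption, to be defined as a masked variable under Settings > CI/CD > Variables in GitLab:

```yaml
build_master:
  image: maven
  stage: build
  script:
    # $SONAR_TOKEN is assumed to be a masked CI/CD variable, not stored in the repo
    - mvn package sonar:sonar -Dsonar.host.url=https://sonar.yourdomain.tld/ -Dsonar.login=$SONAR_TOKEN
  only:
    - master
```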
Currently there are (as far as I am aware) two community-driven plugins which aim to provide merge-request analysis and integrate with GitLab.
Both of them are currently going through the Feedback phase for their next release and both aim to land into the Update Center with that release.
(No longer supported due to deprecations in SonarQube) https://git.johnnei.org/Johnnei/sonar-gitlab-plugin | RFF for 0.2.0
https://gitlab.talanlabs.com/gabriel-allaigre/sonar-gitlab-plugin | RFF for 2.0.0
With both you're able to run a build that will comment in GitLab on newly found violations. Both are heavily inspired by SonarSource's GitHub plugin.
However, I'm not in a position to advise you on which of the two to use, as I'm the developer of the first and thus biased.
I had the same requirement, and here is how I implemented it:
Create a shared runner without specifying any tags.
Create a .gitlab-ci.yml file with the following contents:
variables:
  SONAR_URL: "http://your_sonar_url"
  SONAR_LOGIN: "sonar_user_id"
  SONAR_PASSWORD: "sonar_password"

sonarqube_master_job:
  stage: test
  only:
    - master
  image: maven:3.3.9-jdk-8-alpine
  script:
    - mvn --batch-mode verify sonar:sonar -Dsonar.host.url=$SONAR_URL -Dsonar.login=$SONAR_LOGIN -Dsonar.password=$SONAR_PASSWORD
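A hedged aside: rather than committing SONAR_LOGIN and SONAR_PASSWORD in the variables: block, you can define them as masked CI/CD variables in the GitLab UI (Settings > CI/CD > Variables) and keep only the non-secret URL in the file. The job itself stays the same, e.g.:

```yaml
variables:
  SONAR_URL: "http://your_sonar_url"   # non-secret, fine to commit

sonarqube_master_job:
  stage: test
  only:
    - master
  image: maven:3.3.9-jdk-8-alpine
  script:
    # SONAR_LOGIN / SONAR_PASSWORD come from masked CI/CD variables, not this file
    - mvn --batch-mode verify sonar:sonar -Dsonar.host.url=$SONAR_URL -Dsonar.login=$SONAR_LOGIN -Dsonar.password=$SONAR_PASSWORD
```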
If you create a runner with specific tags, you need to list those tags in the .gitlab-ci.yml file.
You can find more information on adding tags at this link: https://forum.gitlab.com/t/activated-specific-runner-is-not-working/7002
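For example, if the runner was registered with a tag, the job needs a matching tags: entry (the tag name below is a placeholder):

```yaml
sonarqube_master_job:
  tags:
    - my-sonar-runner   # placeholder: must match a tag on your registered runner
```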
There could be a new alternative (to SonarQube) with GitLab 13.3 (August 2020)
It does not cover everything that SonarQube addresses, but it can handle the security side of static code analysis, for multiple languages.
SAST security analyzers available for all
We want to help developers write better code and worry less about common security mistakes. Static Application Security Testing (SAST) helps prevent security vulnerabilities by allowing developers to easily identify common security issues as code is being committed and mitigate proactively. As part of our community stewardship commitment we have made all 15 of our open source based SAST analyzers available in every GitLab tier. This allows ALL GitLab users developing in any of our 18 supported languages and frameworks to leverage GitLab SAST in their projects.
Getting started is as easy as using our new guided SAST configuration experience, enabling Auto DevOps, or adding the SAST configuration template to your gitlab-ci.yml file. Customers not on the Ultimate tier can interact with the generated SAST vulnerability report by downloading the SAST job artifact. We’ve also updated our docs with details about the tier breakdown for all our SAST features.
See Documentation and Issue.
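As a minimal sketch, enabling GitLab SAST via the configuration template mentioned above is a one-line include in .gitlab-ci.yml (template name taken from GitLab's managed templates; SAST jobs attach to the test stage by default):

```yaml
include:
  - template: SAST.gitlab-ci.yml   # GitLab-managed SAST template

stages:
  - test   # the generated SAST jobs run in the test stage
```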
And (not free, as opposed to the previous section):
Guided SAST configuration experience
GitLab’s Static Application Security Testing (SAST) now supports a new guided configuration experience. Enabling SAST is now as simple as two clicks. We believe that security is a team effort and this configuration experience makes it easier for non-CI experts to get started with GitLab SAST. The tool helps a user create a merge request to enable SAST scanning while leveraging best configuration practices like using the GitLab-managed SAST.gitlab-ci.yml template and properly overriding template settings.
With GitLab SAST covering 18 languages across 14 analyzers, there are many SAST configuration options, and it can be hard to understand and set up. This new guided SAST configuration experience helps anyone get started with SAST, and it lays the foundation for us to introduce new configuration options like custom rulesets and more. We also intend to expand this guided experience to our other security scanning tools.
See Documentation and Issue.
See also GitLab 13.5 (October 2020)
Customizing SAST & Secret Detection rules
GitLab Static Application Security Testing (SAST) and Secret Detection now support customizing detection rules. This allows GitLab users to change the vulnerability detection defaults to tailor results to their organization’s preferences. SAST custom rulesets allow you to exclude rules and modify the behavior of existing rules. Secret Detection now supports disabling existing rules and adding new regex patterns that allow the detection of any type of custom secret.
Custom rulesets can be defined by adding a new file to the .gitlab folder named sast-ruleset.toml or secret-detection-ruleset.toml containing customizations written in the correct notation. You can learn more about this file format and see examples in our documentation for SAST custom rulesets and Secret Detection custom rulesets. We intend to provide additional support for importing custom rulesets in .gitlab-ci.yml files in the future.
See Documentation and Epic.
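A sketch of what a .gitlab/sast-ruleset.toml disabling a single rule can look like. The analyzer name and identifier below are illustrative assumptions; check the SAST custom rulesets documentation for the exact schema of your analyzer:

```toml
# .gitlab/sast-ruleset.toml -- illustrative example, not verified
# against a specific analyzer version
[eslint]
  [[eslint.ruleset]]
    disable = true
    [eslint.ruleset.identifier]
      type  = "eslint_rule_id"   # identifier type (assumed)
      value = "no-eval"          # rule to disable (assumed)
```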
Below is how I did it for an MVP.
.gitlab-ci.yml
stages:
  - sonarqube_test

sonarqube_test:
  tags:
    - your-tag-attached-to-gitlab-runner
  stage: sonarqube_test
  script:
    - .cicd/sonarqube.sh
The sonarqube.sh file:
#!/bin/bash
#
# Args: deploy.sh
#
cd ~
wget https://binaries.sonarsource.com/Distribution/sonar-scanner-cli/sonar-scanner-cli-3.3.0.1492-linux.zip
unzip sonar-scanner-cli-3.3.0.1492-linux.zip
rm sonar-scanner-cli-3.3.0.1492-linux.zip
chmod 777 sonar-scanner-3.3.0.1492-linux/conf/sonar-scanner.properties
echo 'sonar.host.url=http://<your_sonarqube_server_url>' >> sonar-scanner-3.3.0.1492-linux/conf/sonar-scanner.properties
chmod +x sonar-scanner-3.3.0.1492-linux/bin/sonar-scanner
sonar-scanner-3.3.0.1492-linux/bin/sonar-scanner \
-Dsonar.projectKey=<project_name> \
-Dsonar.sources=. \
-Dsonar.host.url=http://<your_sonarqube_server_url> \
-Dsonar.login=<token_from_gitlab_UI>
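As a small hardening sketch for a script like the one above: fail fast if required configuration is missing, instead of letting sonar-scanner fail later with a less obvious error. The variable names (SONAR_HOST_URL, SONAR_PROJECT_KEY, SONAR_TOKEN) are assumptions for illustration:

```shell
#!/bin/sh
# check_sonar_env: verify the configuration the scanner needs is present
# before downloading or invoking anything. The variable names checked
# here are assumptions; adapt them to your own CI variables.
check_sonar_env() {
  for var in SONAR_HOST_URL SONAR_PROJECT_KEY SONAR_TOKEN; do
    eval "val=\${$var}"
    if [ -z "$val" ]; then
      echo "ERROR: $var is not set" >&2
      return 1
    fi
  done
  return 0
}
```

The CI script can then run `check_sonar_env || exit 1` before the download and scan steps.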
Related
I have apex code stored in the master branch of my remote repository in GitHub and would like to deploy it directly into my Salesforce Org.
Is there a way to setup a "pipeline" in GitHub with Salesforce.com in order to facilitate a direct deployment into Salesforce?
Edited on 22nd-Oct-2022: is there a way to setup a button on GitHub that I can click on when I want to deploy changes (delta) from the master branch to a related Salesforce DEV Org?
Learn from Salesforce themselves. The https://github.com/trailheadapps/lwc-recipes repo (all the LWC demos mentioned in the documentation) contains actions to run code-quality tools (prettier, lint), unit tests (Apex, Jest), deploy to a scratch org, assign a permission set, run all tests, delete the scratch org, report on code coverage, and more.
Lots of goodies there. It might be too much if you're just after a plain deploy to prod, but it's good to know what toys are out there.
We have already implemented SonarQube on an Azure VM. Due to cost and maintenance overhead, we are thinking about moving to SonarCloud. We understand the basic differences from a pricing and LOC point of view; I am looking for the major feature-wise differences.
Question: In SonarQube we can set the new-code definition on any branch, whereas in SonarCloud I only find that option on the master/default branch; the other branches have no option to set it. Also, if I set "previous version" or a specific analysis, the build fails for all branches except master/default.
How can we set a new-code definition for branches?
For SonarCloud, the prime focus is on the developer’s workflow and bringing value to the development team within their existing ALM environment. Thus, the ‘Enterprise’ use case is not currently addressed by SonarCloud.
SonarCloud is hosted by SonarSource in AWS, and your code is stored in a private subnet, which is secured, but it remains your choice (we prefer to keep our code in our own infrastructure).
For Enterprise use cases, explore SonarQube.
SonarQube Enterprise Edition (EE) includes a few management features that may be valuable to your organization. SonarQube, along with a supported database, is installed on your own on-site servers or in a self-managed cloud environment.
SonarQube is an enterprise-ready application with all the required configuration options, whereas SonarCloud is missing enterprise-level functions such as authentication, portfolios, global settings, branch & new-code settings, and housekeeping; that is why you are unable to set the new-code definition at the branch level.
SonarCloud is meant for smaller projects focused on master, where your application is built and deployed using only the master/release branch, and the rest of the branches are treated as short-lived branches, for which there are few options to configure.
In terms of licensing and services, both products cover essentially the same languages (SonarCloud doesn’t support PL/I, RPG or VB6) and support similar functions. SonarCloud additionally provides the server and database to store your code and results, and even with those services included it is still much cheaper than SonarQube.
You can go through the links below for more details.
https://blog.sonarsource.com/sq-sc_guidance
https://sonarcloud.io/documentation/user-guide/new-code/
I have a software project which is currently hosted on Bitbucket. I would like to implement a CI/CD pipeline which would have to run on local agents for build/test/deploy. The runners would also have to be compatible with Windows 7/10 (x86/x64) and Linux (x86/x64/arm64/armv7). I am pretty new to DevOps, but after a thorough search I came up with two options: GitHub and GitLab. Can you tell me which one would be better, with some advantages/disadvantages of each? Thanks a lot.
My recommendation would be to go with GitLab, for some of the following reasons:
GitLab CI has been on the market much longer than GitHub Actions, which only became generally available in November 2019; you can see a feature comparison on the GitLab blog here.
When you are getting started, it is much easier to navigate the GitLab GUI to configure all the tools you need for DevOps, compared to GitHub's somewhat harder-to-navigate GUI, due to the number of other tools available on GitHub.
In addition, GitLab is primarily focused on improving DevOps, and as a result it has integrated a number of features over time aimed at making the entire DevOps process smoother, whereas GitHub Actions only started out in 2019.
Also, there are plenty of templates available to get you started on GitLab, which is not the case on GitHub. These templates cover a wide range of languages, which I am sure will cover your project requirements.
As for ease of access to CI within GitLab: in addition to an easy-to-navigate GUI, GitLab has all the tools necessary for DevOps bundled in one location, so every DevOps feature you need is accessible in one place, and there are YAML templates available to help you get started quickly.
Finally, there are more features within GitLab, largely because it has been on the market since 2011-2012, compared to GitHub Actions' 2019 debut.
There are, however, some major similarities that I would also like to point out, which I believe could make your transition easier, or help in case you want to try out both tools and judge for yourself:
Both GitHub Actions and GitLab CI are built-in tools.
Both GitHub and GitLab use the same Git commands, so there will not be a learning curve for you in terms of managing and collaborating on changes to your project.
I'm new to the CI/CD world and would like to implement these workflows in my development process.
I would like to understand how to properly build a build-and-release pipeline that manages Dev, Test and Prod environments when those environments have slight differences.
I'm building an ASP.NET Core app. The code is hosted in Azure DevOps, which I will also use for build and release. For the client-side code (JS and CSS) I use TypeScript and SASS, compiled to JS and CSS via npm scripts.
In the Dev environment I want to deploy the non-minified JS and CSS plus the sourcemap files; in the Test environment I want the minified JS and CSS plus the sourcemap files; in the Prod environment I want only the minified versions of my CSS and JS.
This case is intended only as a practical example; I would like to understand the general rule, which I can apply regardless of the kind of app or of the hosting, build and release platforms.
As an additional note, I understand this case is pretty trivial and could be managed easily without much ceremony, but I would like to understand the guidelines and best practices, and then I will choose what is appropriate to my particular case and adapt those guidelines and best practices accordingly.
Now I can choose between different options:
1. I can manage the differences at the build stage:
  1.1. one build pipeline which produces the "standard" client code, the sourcemaps and the minified versions, and deploys the same artifacts to Dev, Test and Prod;
  1.2. different build pipelines for different environments;
  1.3. one build pipeline using conditional tasks.
2. I can manage the differences at the release stage:
  2.1. build the code using option 1.1 and then exclude the files I don't need in the release pipeline;
  2.2. build only the server-side code in the build pipeline and compile the client-side code during the release pipeline;
  2.3. compile the standard version of the JS and CSS files in the build pipeline, and produce the sourcemaps or minify the JS and CSS in the release pipelines.
I don't like option 1.1 because I don't want useless files spread all over the place, and it adds extra steps to the build pipeline that aren't necessary.
Options 1.2 and 1.3 add some complexity to the build pipelines.
With options 2.x we have "incomplete" builds, because the output of the build process lacks some artifacts required by the deploy environment.
To me, not knowing the guidelines and best practices for CI and CD workflows, it seems the most appropriate is option 1.3 or 2.3.
If I'm not wrong, the question now becomes:
Is it acceptable to have build pipelines that produce artifacts which are not entirely shippable because they don't meet the requirements of every deploy environment (such as needing the sourcemaps in the Dev environment)?
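To make option 1.3 concrete, here is the kind of conditioned pipeline I have in mind, as a sketch. The parameter and npm script names (build, build:min) are placeholders, not an existing setup:

```yaml
# azure-pipelines.yml sketch -- npm script names are assumptions
parameters:
  - name: environment
    type: string
    default: Dev
    values: [Dev, Test, Prod]

steps:
  - script: npm ci
    displayName: Install dependencies

  # Dev: non-minified output plus sourcemaps
  - script: npm run build -- --sourcemap
    displayName: Build (Dev)
    condition: eq('${{ parameters.environment }}', 'Dev')

  # Test: minified output plus sourcemaps
  - script: npm run build:min -- --sourcemap
    displayName: Build (Test)
    condition: eq('${{ parameters.environment }}', 'Test')

  # Prod: minified output only
  - script: npm run build:min
    displayName: Build (Prod)
    condition: eq('${{ parameters.environment }}', 'Prod')
```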
Ciao Leoni,
I've been a release manager for a number of years, and I understand your pain. In the system I worked on, the sequence was something like this:
1. from the development domain to a staging server;
2. from the staging server to a penetration & vulnerability testing environment;
3. from the testing domain to the SaaS production domain and DML repository;
4. from the production domain to an escrow and installed cut.
My recommendation is that all tidying up, such as removal of developers' backup routines (named following a strict convention) and minification, is done on the staging server. We allowed minor bug fixes to be applied to the staging-server code, and then 'fix pack' releases were cut. Once the code was in the penetration & vulnerability testing environment, our practice was that the code itself must not change: only the security settings between domains and for the escrow/installed release.
Once a documented process is agreed to, it's easy for people to use it as a check sheet. Your processes may need to be different from what I've outlined above, and they should be expected to be refined over time. I know many people who do not like documented procedures, but I've documented some benefits here:
http://www.esm.solutions/wp/change-management/
A presto, Robert
I am using Sitecore 6.6.0, and we have multiple environments:
Local
DEV
QA
PROD
I have to deploy a few changes directly from Local to Prod (don't ask me why directly to PROD; even if it were to QA, my question would remain the same). What I am doing is creating a package on my local with all the items, and separately creating the folder structure for all files related to the fix, and deploying that to PROD.
There is always a chance of human error, since I have to remember all the items and files associated with a fix. So is there a better, automated way that will not skip any changed items or files?
On another note, I am using Bitbucket for source-controlling the Sitecore code; what about the Sitecore databases? Most Sitecore development lives in the databases. What is the best approach to source-control the Sitecore DBs?
Update
Installed packages from NuGet.
After installing Unicorn from NuGet, along with unicorn.default.config, I get the following error:
Attempt by method 'Unicorn.Data.DataProvider.UnicornDataProvider..ctor(Unicorn.Data.ITargetDataStore, Unicorn.Data.ISourceDataStore, Unicorn.Predicates.IPredicate, Rainbow.Filtering.IFieldFilter, Unicorn.Data.DataProvider.IUnicornDataProviderLogger, Unicorn.Data.DataProvider.IUnicornDataProviderConfiguration, Unicorn.Predicates.PredicateRootPathResolver)' to access method 'System.Action`1<System.__Canon>..ctor(System.Object, IntPtr)' failed.
Further, after following the ReadMe on GitHub, when I do a sync on site/unicorn.aspx:
[P] Auto-publishing of synced items is beginning.
ERROR: Method not found: 'Sitecore.Publishing.Pipelines.Publish.PublishResult Sitecore.Publishing.Publisher.PublishWithResult()'. (System.MissingMethodException)
at Unicorn.Publishing.ManualPublishQueueHandler.PublishQueuedItems(Item triggerItem, Database[] targets, IProgressStatus progress)
at Unicorn.Pipelines.UnicornSyncEnd.TriggerAutoPublishSyncedItems.Process(UnicornSyncEndPipelineArgs args)
at (Object , Object[] )
at Sitecore.Pipelines.CorePipeline.Run(PipelineArgs args)
at Unicorn.ControlPanel.SyncConsole.Process(IProgressStatus progress)
Solution:
For older Sitecore versions (pre-7.2, IIRC) you need to disable the auto-publish config file, as it relies on a method added later by Sitecore.
https://github.com/kamsar/Unicorn/issues/103
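Disabling a Sitecore patch file amounts to renaming it, since Sitecore only loads App_Config/Include files ending in .config. The file name Unicorn.AutoPublish.config below is an assumption; check your App_Config/Include folder for the actual name. (The mkdir/touch lines just simulate the shipped file so the rename can be demonstrated locally.)

```shell
#!/bin/sh
# Rename the auto-publish patch so Sitecore ignores it.
# File name is an assumption -- verify it in App_Config/Include.
cfg="App_Config/Include/Unicorn/Unicorn.AutoPublish.config"
mkdir -p "$(dirname "$cfg")"   # simulate the folder locally
touch "$cfg"                   # simulate the shipped patch file
mv "$cfg" "$cfg.disabled"      # Sitecore only loads *.config files
```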
In order to track the database changes you are making, you will first need to install software that can serialize your changes and store them in source control. Team Development for Sitecore (TDS) and Unicorn are the two most popular options.
You will also want to make sure you have your own local database where you are making your changes so you can isolate those changes from your QA, PROD, etc. allowing you to maintain the same level of isolation you do for developing code.
Automation of this process helps reduce the human error you mention for the deployment by introducing a repeatable and known process. Here are a few blogs that can help you get started:
Jason Bert - Continuous Deployment (Git/TDS/TeamCity)
Jason St-Cyr - Automating with TeamCity and TFS (TFS/TDS/Team Build)
Andrew Lansdowne - Auto deploy Sitecore Items using Unicorn and TeamCity (Unicorn/TeamCity)
Brian Beckham - TDS and Build Configurations
You may also want to look into configuration transforms to support different values in your Sitecore Include patch files. The SlowCheetah plugin will let you create the transforms in Visual Studio (it might be built into Visual Studio 2015 now...). TDS can pick up those transforms automatically and execute them on the build server for you, or you can do it with Visual Studio itself to create published packages.
For Sitecore versioning and deployment, Unicorn is also a good option.
https://github.com/kamsar/Unicorn
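As an illustration of how Unicorn decides what to serialize: its include configuration patch uses predicates. The configuration name and item paths below are placeholders, and the exact element names should be checked against the configs shipped with Unicorn:

```xml
<!-- App_Config/Include/Unicorn/Unicorn.MyProject.config (illustrative sketch) -->
<configuration xmlns:patch="http://www.sitecore.net/xmlconfig/">
  <sitecore>
    <unicorn>
      <configurations>
        <configuration name="MyProject">
          <predicate type="Unicorn.Predicates.SerializationPresetPredicate, Unicorn" singleInstance="true">
            <!-- Item paths are placeholders for your own templates/renderings -->
            <include name="Templates" database="master" path="/sitecore/templates/MyProject" />
            <include name="Renderings" database="master" path="/sitecore/layout/Renderings/MyProject" />
          </predicate>
        </configuration>
      </configurations>
    </unicorn>
  </sitecore>
</configuration>
```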
Cheers,
Bo