How to analyze code from a GitHub repository in SonarQube

How do I analyze code from a GitHub repository in SonarQube?
Please suggest a step-by-step procedure.

GitHub is a source repository. It's sort of like a database: it just sits there and waits for requests; in a way it doesn't really "do" anything. If you need to "do" something with it, like build executables or run SonarQube scans on the code, you have to use other resources like Travis or Jenkins to run scripts that build the executables and run the SonarQube scans.
Depending on the programming language your code uses, there are different tools for running the SonarQube scan. If your code is Java and you're using Maven, you can use the "sonar:sonar" goal, along with several properties, to run the scan. For anything else, use the "sonar-scanner" tool, again with several properties set.
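For example, a Maven project might be analyzed with something like the following (the server URL and token are placeholders for your own SonarQube instance, so treat this as a sketch rather than an exact recipe):
mvn clean verify sonar:sonar -Dsonar.host.url=http://localhost:9000 -Dsonar.login=<your-token>
For non-Maven projects, the equivalent is running sonar-scanner from the project root with the same kind of properties set in a sonar-project.properties file or passed on the command line.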
Note that the SonarQube scan is almost always run as part of the build process, because the scan needs some of the artifacts produced by the build for its analysis. For instance, it is typical to run the SonarQube scan after the unit tests, so SonarQube can see the resulting code coverage.
At this point, I can't really give you a step-by-step procedure. There are many pieces you're going to have to assemble, and that will require some choices on your part.

Related

How to run a code analysis locally from the command line using a quality profile on the SonarQube server?

On my SonarQube server I have 2 quality profiles (1 for C# and 1 for JS).
How do I run a code analysis from the command line locally using them (keeping them on the server, without using tools like SonarLint), or using a gulp task?
Take a look at the SonarQube documentation on analyzing source code (current link: https://docs.sonarqube.org/latest/analysis/overview/); there you will find scanners for a lot of different environments and languages.
You just need to configure them properly, which is also covered in the SonarQube docs.
Use the command line scanner. It will run an analysis locally from your command line.
Make sure that you set sonar.host.url in sonar-project.properties, so that the correct quality profiles are taken into account.
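A minimal sonar-project.properties for this might look like the following sketch (the key, name, and server URL are placeholders; adjust the source path to your repository layout):
sonar.projectKey=my-project
sonar.projectName=My Project
sonar.projectVersion=1.0
sonar.sources=src
sonar.host.url=http://my-sonarqube-server:9000
Running sonar-scanner from the directory containing this file then uses the quality profiles configured on that server for C# and JS.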

How do I analyze .net project from command-line?

1) There are a couple of ways to analyze .NET projects from the command line, such as SonarQube.Scanner.MSBuild, sonar-runner, or sonar-scanner.
2) I started with sonar-runner and initially it worked fine for C# and JavaScript, running the analysis twice - once for C# and once for JavaScript.
3) Now, when running the analysis for a JavaScript project (with jquery, require.js and bootstrap.js files), it throws an error: "parser error", "Error during sonar runner execution. Unable to execute sonar. Caused by: Java heap space". I tried increasing the heap size in the sonar.properties file, but it didn't help.
4) So I started analyzing projects with SonarQube.Scanner.MSBuild. It worked, but there is no option to specify the language (or I don't know it), and because of this I'm not able to run the analysis for languages other than C# and JavaScript (such as PL/SQL and Swift, for which I have licence keys).
Could anyone suggest the best way to analyze projects in different languages from the command line?
1) SonarQube Scanner for MSBuild is recommended for the analysis of .NET projects. Why? The analysis configuration for such a project is extremely difficult to write correctly by hand, and the Scanner for MSBuild takes care of all the details for you.
2) Yay.
3) By the time the scanner reads your properties file, the process has already been started and its heap space set. You need to set the new value before the process starts: on the command line or in the environment.
4) You have projects that contain C#, JavaScript, PL/SQL, and Swift?! If the answer were "no", I'd advise you to use the right tool for the job: analyze your .NET projects with the SonarQube Scanner for MSBuild and the other projects with the plain/default SonarQube Scanner. Since I know from the comments that the answer is "yes", I'll advise you to stick with the SonarQube Scanner for MSBuild for the reasons cited in #1.
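As a sketch, the usual Scanner for MSBuild flow is a begin step, the normal build, and an end step; the executable name varies by scanner version, and the project key, solution name, server URL, and token below are placeholders:
SonarQube.Scanner.MSBuild.exe begin /k:"my-project-key" /d:sonar.host.url="http://localhost:9000" /d:sonar.login="<token>"
MSBuild.exe MySolution.sln /t:Rebuild
SonarQube.Scanner.MSBuild.exe end /d:sonar.login="<token>"
Other languages found in the analyzed projects are picked up according to the plugins installed on the server.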

Jacoco code coverage for remote machine

I tried to find the answer to this but could hardly find it anywhere. I am doing API testing; in the process I need to call the REST API from my local machine. The local machine contains the Maven project and a framework to call the respective REST API.
I need to check the code coverage of the remote REST API and produce a report based on that coverage. Please help: how do I do that?
Note: I found this link useful, but it does not elaborate clearly on what to do:
http://eclemma.org/jacoco/trunk/doc/agent.html
You will probably have to do a bit of copying files around, depending on the way you run the tests.
JaCoCo runs as a Java agent, so you usually add the -javaagent parameter mentioned in the docs you linked to the start script of your application server:
-javaagent:[yourpath/]jacocoagent.jar=[option1]=[value1],[option2]=[value2]
so it would look like:
java -javaagent:[yourpath/]jacocoagent.jar -jar myjar.jar
With Tomcat you can add the "-javaagent" part to the JAVA_OPTS or CATALINA_OPTS environment variables. It should be similar for other servers.
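For example, on a Linux Tomcat this could look roughly like the line below; the agent path, output file location, and package filter are placeholders for your own setup:
export CATALINA_OPTS="-javaagent:/opt/jacoco/jacocoagent.jar=destfile=/var/tmp/jacoco.exec,includes=com.mycompany.*"
Here destfile controls where the .exec file is written and includes restricts recording to the packages you care about.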
This will create the jacoco*.exec files. You need to copy those back to your build or CI server to show the results (for example, if you use Sonar you need those files before running the Sonar reporter). It's important to include only the packages you're interested in.
You can also create one jacoco.exec file per test flavour (jacoco.exec for unit tests, jacoco-it.exec for integration tests, jacoco-at.exec for application tests).
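If SonarQube consumes the reports, the scanner is pointed at those files via analysis properties; depending on your SonarQube version this is something like the following (paths are placeholders):
sonar.jacoco.reportPaths=target/jacoco.exec,target/jacoco-it.exec
Newer SonarQube versions expect XML reports instead, via sonar.coverage.jacoco.xmlReportPaths.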
And I would not mix coverage with performance testing - just to mention that too.
There are some examples on Stack Overflow for JBoss.

Incremental Build with MSBuild.exe

I'm building a Visual Studio 2010 solution through Python with a call to subprocess. When called directly from the command line, devenv.com takes ~15 seconds to start. But when called from Python this jumps up to ~1.5 minutes.
Naturally I'm hoping to remove that dead time from our build, so I decided to test out MSBuild.exe (from .NET 4). MSBuild.exe appears to start instantly. But... it seems to do a full build every time and not an incremental one.
The command I'm using is
"C:\Windows\Microsoft.NET\Framework\v4.0.30319\MSBuild.exe" "C:\path\to\my\project.sln" /target:build /maxcpucount:8 /property:Configuration=Release
It seems like this should support an incremental build, but I've seen posts online indicating that MSBuild may not be able to support an incremental build like this.
Is this possible? If so what am I doing wrong?
Update:
I've read into this a bit more. Based on
http://msdn.microsoft.com/en-us/library/ms171483.aspx
and
http://www.digitallycreated.net/Blog/67/incremental-builds-in-msbuild-and-how-to-avoid-breaking-them
It seems like I need the Inputs and Outputs attributes set in my .vcxproj files. Checking my files, these are indeed missing.
When would they be generated? Most of my .vcxproj files were converted over from Visual Studio 2008, but I also generated a new project, and it is missing the Inputs and Outputs attributes as well.
Does VS2010 not create projects with these attributes?
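For reference, MSBuild's incremental build works at the target level: a target declares Inputs and Outputs, and the engine skips it when all outputs are newer than the inputs. A custom target might declare them roughly like this (the item and property names here are made up for illustration):
<Target Name="GenerateFiles" Inputs="@(SourceTemplates)" Outputs="@(SourceTemplates->'$(OutDir)%(Filename).h')">
  <!-- transformation steps would go here -->
</Target>
As far as I know, the VS2010 C++ build relies on file tracking (.tlog files) inside the standard Microsoft.Cpp targets rather than on attributes in the .vcxproj itself, which may be why you don't see them in your project files.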
Update: We've since upgraded to VS 2013. Now MSBuild supports incremental builds. We never got to the bottom of the VS 2010 issue.
I think the claim that incremental builds are not supported is false. According to official sources (Managed Incremental Build), this feature was included in VS2010 SP1:
We first introduced the managed incremental build feature in VS2008.
In VS2010, we were not able to re-implement the managed incremental
build feature with the build system moving to MSBuild. We received
strong customer requests for this feature. As a result, we
re-implemented this feature and it is included in VS2010 SP1.
Other solutions I found on the web:
Projects should build incrementally already (just make sure that you
do Build instead of Rebuild). The best way to check if incremental
building works is to run the build from the command line. The second
time you build it should take almost no time.
If things are still getting rebuilt, then perhaps you've modified your projects in some way that's messing up the build order. Looking at the build logs (via the /v option) can help you pinpoint what's going on.
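For instance, a detailed log showing which targets are skipped as up to date can be produced with something like this (the solution path is a placeholder):
MSBuild.exe MySolution.sln /target:Build /verbosity:detailed > build.log
Targets whose outputs are newer than their inputs are reported as skipped, which is a quick way to confirm the incremental build is working.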
Another reason that can cause problems with incremental builds is the GenerateResource.TrackFileAccess property. From its documentation: "This API supports the .NET Framework infrastructure and is not intended to be used directly from your code. Gets or sets a switch that specifies whether we should be tracking file access patterns."

FitNesse Test History with Version Control

I'm starting some automated acceptance testing for our company, and have decided to use FitNesse.
I want to have FitNesse under source control - that is the FitNesse executable + plugins, the wiki pages and the test fixture source code.
Then anyone can get all they need from source control to build and run the acceptance tests locally. Including a Continuous Integration server.
I have read that page versioning can be turned off using the -e 0 parameter. Then we don't have ZIP files in the FitNesse root folder under source control - nice.
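For reference, I believe that looks something like the line below when starting the standalone server (the jar name and port are just whatever we would use locally):
java -jar fitnesse-standalone.jar -p 8080 -e 0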
But what about Test History? Do I want the history of locally ran tests to be checked in? And when someone gets the latest version, do they want their local test history to be overwritten?
I'm very grateful to anyone who can share their experiences of using FitNesse in similar scenarios to that described above.
Why not clear all test history before check-in?
In my current project, there are only two required operations for creating test cases in FitNesse: drawing the table (editing the page) and developing the API (output as .dll files).
We also developed a tool for triggering FitNesse test runs from remote machines automatically. After the tests finish, we get the results by processing the output Excel files.
The structure of our svn:
-SVN
--FitNess
--- TestLib
--- FitNesse
---- FitNesseBin
---- TestCases
[Update]
Test fixture code should be finished and frozen before testers start writing test cases and running them. Certainly, when the test fixture needs bug fixes or enhancements, the code can still be changed. In my team, we ask different roles to handle different tasks: developers provide the API for testing, senior QA wraps the API in test fixtures, and QA writes the tables/wiki. Each role only takes its assigned parts. Before making a modification, a team member should update their local copy of FitNesse and check out the file, and only check in the modified, checked-out file.
Disabling and auto-purging test history is still a valid requirement for those integrating FitNesse into a standard build (e.g. Maven) and for running FitNesse builds locally, despite the .svnignore/.gitignore options.
The test history slows down the finalisation of a test run, and once it's purged after a few runs you will certainly notice the difference.
Uncle Bob mentioned here (http://tech.groups.yahoo.com/group/fitnesse/message/14306) that he was working on an option to keep only the test history for the latest run (you always need at least one so you can show the results), but I cannot see such an option in the code. I got a Python error trying to reply to the post, so unfortunately no answer on that option :-(