Today I want to back up Sonar, but I am not sure how to back it up to GitLab using Sonar plugins or manually. Could you please help me with some instructions? What do I need to back up in Sonar for its configuration and its data?
We want to use jBPM 7.3 for business rules development and testing. However, for execution we will run the KieBase in our own application, which is in Java. For this we require access to the jBPM Maven repository.
How can I fetch a kjar from the jBPM Maven repository in my development environment using pom.xml? In other words, can I access the jBPM 7.3 Maven repository, and if so, what is the repository URL?
jBPM 7.3 doesn't provide a backup option for the Git repository and the Maven artifact repository. Is there any recommendation, and if so, how do I set that up? Or is it not required because jBPM takes care of this automatically? (jBPM doesn't provide any setting option for the Maven and Git repositories.)
Please let me know if more detail is required.
This section of the official documentation shows you how to clone a project from the Kie Workbench into your local environment and run some tests using Maven. It also shows you how to get the repository's URL.
As you can see, each project in the Kie Workbench belongs to a single Git repository containing a Maven project inside.
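To consume a kjar built in the workbench from your own application, the repositories can be declared in your application's pom.xml. This is only a sketch: both URLs below are typical defaults (the JBoss public repository for released jBPM/KIE artifacts, and the workbench's built-in repository for kjars you build there), and the host, port, and context path must be checked against your own installation.

```xml
<repositories>
  <!-- JBoss public repository: hosts released jBPM/KIE artifacts -->
  <repository>
    <id>jboss-public</id>
    <url>https://repository.jboss.org/nexus/content/groups/public/</url>
  </repository>
  <!-- Workbench's built-in Maven repository (host/port/context are
       assumptions; verify against your deployment) -->
  <repository>
    <id>workbench</id>
    <url>http://localhost:8080/business-central/maven2/</url>
  </repository>
</repositories>
```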
As far as I know, there is no automatic backup feature. There is another part of the documentation that explains how to configure a VFS Cluster. You can use this as a way of keeping 2 git repositories in sync. But that is probably too much work. For a simple backup mechanism you can create a CRON job that keeps a backup repository always in sync with the repository in your Kie Workbench.
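The CRON-based approach above can be sketched with git's mirror support. In the sketch below, the two local bare repositories stand in for the real workbench repository URL (e.g. something like ssh://user@workbench-host:8001/myrepo) and your backup remote; both are assumptions to replace with your own values.

```shell
# Stand-ins for the workbench repo and the backup remote (assumptions).
WORKDIR=$(mktemp -d)
git init --bare "$WORKDIR/source.git"
git init --bare "$WORKDIR/backup.git"

# One-time setup: take a mirror clone of the workbench repository and
# point its push URL at the backup remote.
git clone --mirror "$WORKDIR/source.git" "$WORKDIR/mirror.git"
git -C "$WORKDIR/mirror.git" remote set-url --push origin "$WORKDIR/backup.git"

# This pair of commands is what the CRON job would re-run, e.g. hourly:
#   0 * * * * git -C /backups/mirror.git fetch -p origin && git -C /backups/mirror.git push --mirror
git -C "$WORKDIR/mirror.git" fetch -p origin
git -C "$WORKDIR/mirror.git" push --mirror
```

Because the clone is a mirror, all branches and tags are copied, and deleted refs are pruned on each sync.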
Hope it helps,
I have an Ubuntu staging server where I have installed Apache, PHP, MySQL, Git, and Composer. I have a private Git repository set up on Bitbucket; the project is already cloned to the staging server and to my local development machine. The Laravel setup is working perfectly fine on both machines.
What I am currently doing is, whenever there is an update to the Git repository, I log in to the staging server, pull the latest code from the Git repository, and run composer install, npm install, and bower install.
I want to automate this process with the Capistrano tool. I checked the tutorials online, but all of them clone the repository whenever I issue a deploy command and create a fresh installation every time. Can't Capistrano help me work on the existing folder that is already set up?
The basic premise of Capistrano is that a fresh installation is created on each deploy, so that there is not much to be done in terms of initial setup. If you'd rather use a different mechanism, a different tool may work better for you. In that case, you could write a script using SSHKit directly (fairly advanced), or use a Makefile or some other tool to automate your process.
If you do want to make Capistrano work on its own terms, look into how linked_dirs and linked_files work. They allow you to have files (e.g. config files, log dirs, etc.) that live outside the deployment directory and are therefore shared between deploys.
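For a Laravel app, a hedged sketch of such a configuration might look like the following (the file and directory names are typical for Laravel but are examples; adjust them to your project):

```ruby
# config/deploy.rb -- sketch only.
# .env and the writable/vendor directories live outside each release
# directory and are symlinked into every new deploy, so they survive
# fresh installations.
append :linked_files, ".env"
append :linked_dirs, "storage", "vendor", "node_modules", "bower_components"
```

With this in place, composer install, npm install, and bower install only repopulate the shared directories when dependencies actually change.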
I want to use the TFVC plugin with Sonar.
I have copied the file sonar-scm-tfvc-plugin-2.0.jar in Sonar\extensions\plugins.
I use the following configuration in sonar.properties:
sonar.scm.enabled=true
sonar.scm.provider=tfvc
sonar.tfvc.username=my Tfs UserAccount
sonar.tfvc.password.secured=My TFS password
When I run a Sonar analysis on the command line c:sonar.net-runner.cmd,
the analysis is successful.
But on the web side, none of the issues are assigned.
Is there something wrong?
SonarQube version 5.x+ will automatically assign an issue to the last committer on the line if:
It is a new issue that has been introduced since the last analysis
It was possible to match the SCM user to a SonarQube user
So, if you did an initial analysis of your project, then enabled the SonarQube SCM TFVC plugin and re-ran the analysis, none of the issues are new, and so it is expected that all of them stay unassigned.
Start by verifying that the SCM data from TFVC is properly imported into SonarQube.
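If the blame data did not make it into the first analysis, one option is to force the scanner to reload all SCM data on the next run, with debug logging enabled so you can see the blame commands being executed. This is a sketch: sonar.scm.forceReloadAll is a standard scanner property, but verify it against your SonarQube and plugin versions.

```shell
# Re-run from the project's base directory, forcing a full reload of
# SCM (blame) data; -X enables debug output.
sonar-runner -X -Dsonar.scm.forceReloadAll=true
```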
I am doing a PowerShell DSC POC. I configured the pull server and one client machine. It is working fine, and I am very happy with the PowerShell DSC feature.
Now I want to integrate this feature with our continuous integration process. We are using Nolio for MSI deployment and for the other configuration. For now I want to use DSC only for configuration, and Nolio will continue to handle deployment (to reduce the migration complexity). Later we plan to replace Nolio with DSC, including deployment. Here are my questions.
1) We have monthly releases. As per my understanding, I need to install the MSI (which deploys websites) on all machines, including the pull server and the nodes. Then I will apply the configuration settings via the pull server configuration. Once the pull server is configured, how can I do the second deployment? Will the pull server cause any problem at deployment time, such as reverting the installed files to the old configuration? Is there any way to suspend the pull server settings during deployment?
2) If I want to install the MSI from DSC as well, I am planning to do it like this:
Change the pull server configuration to install the MSI instead of the other configuration settings.
Install the MSI on the pull server and all node machines.
Do all other configuration on the pull server.
Change the configuration so the nodes apply the pull server configuration again instead of the MSI install.
Is this a good process?
Could anyone please help me achieve this? Please share any other best practices you may have.
Thanks in advance.
I want to enable Sonar with Git, but is it necessary to first pull the project from the Git repository using Hudson or something else, so that Sonar can then analyse the code periodically on Hudson? Am I right that my steps would be:
1. Pull the project from Git using Hudson.
2. Sonar on Hudson will analyse the code and send the updates?
Or can we use Git + Sonar directly? How does that work? Can anybody guide me to get it working?
Yes, you first need to pull your project from GitHub, and then launch a Sonar analysis on your local copy (Sonar needs the files to exist on the file system to be able to analyse them).
So you can pull your project manually, or obviously by using a CI server like Jenkins/Hudson.
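A minimal manual run might look like the following sketch. The repository URL is a placeholder, and a sonar-project.properties file (defining sonar.projectKey, sonar.sources, etc.) is assumed to exist in the project root.

```shell
# Clone (or pull) the project so the files exist on disk, then analyse
# the working copy with the Sonar runner.
git clone https://github.com/your-org/your-project.git
cd your-project
sonar-runner
```

A CI job like Jenkins/Hudson simply automates these same two steps: a checkout step followed by a Sonar analysis step.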
The good news is that yesterday (2015-07-08) SonarQube launched a GitHub plugin: every time a pull request is submitted, the CI system launches a SonarQube preview analysis.
Reference:
http://www.sonarqube.org/github-pull-request-analysis-helps-fix-the-leak/