Documentation Generation is disabled - doxygen

I did everything specified in the tutorial for the Doxygen Plugin.
Here are the Doxygen entries in the sonarqube-4.5.1/conf/sonar.properties file:
# Doxygen
sonar.doxygen.generateDocumentation=enable
sonar.doxygen.deploymentPath=D:\Downloads\sonarqube-4.5.1\web
sonar.doxygen.deploymentUrl=http://localhost:9000/sonar/documentation
The output of the SonarQube Runner:
16:07:16.265 INFO - ANALYSIS SUCCESSFUL
16:07:16.266 DEBUG - Post-jobs : org.sonar.plugins.doxygen.DoxygenPostJob#28bda649
16:07:16.266 INFO - Executing post-job class org.sonar.plugins.doxygen.DoxygenPostJob
16:07:16.271 INFO - === SUPPRESS PREVIOUS GENERATION ===
16:07:16.395 INFO - === DOXYGEN EXECUTION ===
16:07:16.396 INFO - ### Generating configuration ###
16:07:16.427 INFO - ### Generating documentation ###
Also, in the specified \web folder there is a documentation folder which seems to contain the correct Doxygen output.
Yet I keep getting this "Documentation Generation is disabled." message in the SonarQube web interface.
UPDATE
This is what my sonar-project.properties file contains now (using a Unix-style path):
#Doxygen
sonar.doxygen.generateDocumentation=enable
sonar.doxygen.deploymentPath=/Downloads/sonarqube-4.5.1/web
sonar.doxygen.deploymentUrl=http://localhost:9000/sonar/documentation
The output remains the same; same issue.
What do I need to do in order to see the documentation in the web server interface?
This seems to be a server linkage problem, because the documentation is being generated correctly, at this location: /Downloads/sonarqube-4.5.1/web/documentation.
I have also found this content:
core,true,sonar-core-plugin-4.5.1.jar|9289fc1067c31372c0b020aa01163087
emailnotifications,true,sonar-email-notifications-plugin-4.5.1.jar|bb35818e4a655a3ba2cff2afc65a296b
findbugs,false,sonar-findbugs-plugin-2.4.jar|bb0bf263ef1e0d56f569878732060cc9
java,false,sonar-java-plugin-2.4.jar|a105d018165ddeb2c0f5074100768660
cpd,true,sonar-cpd-plugin-4.5.1.jar|e11ff5066c9e2308036838510d87a6fe
dbcleaner,true,sonar-dbcleaner-plugin-4.5.1.jar|a444b3b4571791e1cde146ffa5132ee4
design,true,sonar-design-plugin-4.5.1.jar|0c6476994a44904307cfa8b8a08bbddd
doxygen,false,sonar-doxygen-plugin-0.1.jar|d86e1ab81c3ac34e6b31aa1da28d7f72
l10nen,true,sonar-l10n-en-plugin-4.5.1.jar|c21d53f67901cf6df3da1b4dd48a441b
in sonarqube-4.5.1\web\deploy\plugins\index.txt.
It looks like doxygen has false associated with it. If I edit it (to true) and restart the server, nothing changes; the file is overwritten by the sonar-runner.

sonar.doxygen.generateDocumentation is a project property, not a server property. You have to set it in your "sonar-project.properties" file if you run your analysis with the SonarQube Runner or in your pom.xml file if you run the analysis with Maven.

Here is how I solved this:
Stopped the SonarQube server.
Replaced the old sonar-doxygen-plugin-0.1.jar in /Downloads/sonarqube-4.5.1/extensions/plugins with the updated Doxygen plugin from https://github.com/SonarCommunity/sonar-doxygen/releases/download/1.0/sonar-doxygen-plugin-1.0-SNAPSHOT.jar.
Commented out the old Doxygen configuration entries in the project's sonar-project.properties file and replaced them with the following entries:
sonar.doxygen.url=http://localhost:8000/
sonar.doxygen.enable=true
Used a simple Python script to serve the documentation HTML at that address (http://localhost:8000/).
Restarted the SonarQube server.
Ran sonar-runner.bat again.
The documentation is in place now.

Related

JPAM Configuration for Apache Drill

I'm trying to configure PLAIN authentication based on JPAM 1.1 and am going crazy, since it doesn't work after checking my syntax and settings many times. When I start Drill with cluster-id and zk-connect only, it works, but with both options for PLAIN authentication it fails. Since I started with pam4j and tried JPAM later on, I kept JPAM for this post. In general I don't have any preference; I just want to get it done. I'm running Drill on CentOS in embedded mode.
I've done everything required according to the official documentation:
I downloaded JPAM 1.1, uncompressed it and put libjpam.so into a specific folder (/opt/pamfile/)
I've edited drill-env.sh with:
export DRILLBIT_JAVA_OPTS="-Djava.library.path=/opt/pamfile/"
I edited drill-override.conf with:
drill.exec: {
  cluster-id: "drillbits1",
  zk.connect: "local",
  impersonation: {
    enabled: true,
    max_chained_user_hops: 3
  },
  security: {
    auth.mechanisms: ["PLAIN"],
  },
  security.user.auth: {
    enabled: true,
    packages += "org.apache.drill.exec.rpc.user.security",
    impl: "pam",
    pam_profiles: [ "sudo", "login" ]
  }
}
It throws the following error:
Error: Failure in starting embedded Drillbit: org.apache.drill.exec.exception.DrillbitStartupException: Problem in finding the native library of JPAM (Pluggable Authenticator Module API). Make sure to set Drillbit JVM option 'java.library.path' to point to the directory where the native JPAM exists.:no jpam in java.library.path (state=,code=0)
I've run that *.sh file by hand to make sure the necessary path is exported, since I don't know whether Drill expects that. The path to libjpam should now be known. I've started sqlline with sudo, et cetera. No chance. The documentation doesn't help; I don't get why it's so bad and, in my opinion, incomplete. Sadly, there is zero explanation of how to troubleshoot or configure basic user authentication in detail.
Or do I have to do something that isn't documented but is expected? Are there any prerequisites concerning PLAIN authentication that aren't mentioned by Apache Drill itself?
Try changing:
export DRILLBIT_JAVA_OPTS="-Djava.library.path=/opt/pamfile/"
to:
export DRILL_JAVA_OPTS="$DRILL_JAVA_OPTS -Djava.library.path=/opt/pamfile/"
It works for me.

Why is the DreamFactory 2.1.1 LOG_LEVEL .env parameter ignored?

Since I upgraded my DreamFactory DSP from 2.0.2 to 2.1.1-2, some configuration parameters seem to be ignored!
DF_LOG_LEVEL is one of them: even if I change it, the value stays at WARNING, which is defined as the default in config/df.php.
Here is part of my .env file:
##------------------------------------------------------------------------------
## DreamFactory Settings
##------------------------------------------------------------------------------
## LOG Level. This is hierarchical and goes in the following order.
## DEBUG -> INFO -> NOTICE -> WARNING -> ERROR -> CRITICAL -> ALERT -> EMERGENCY
## If you set log level to WARNING then all WARNING, ERROR, CRITICAL, ALERT, and EMERGENCY
## will be logged. Setting log level to DEBUG will log everything. Default is WARNING.
DF_LOG_LEVEL=DEBUG
(I've checked that there is no other line about LOG_LEVEL in my .env file.)
Here is the config/df.php section about LOG_LEVEL (where the default is WARNING):
'version' => '2.1.1',
// General API version number, 1.x was earlier product and may be supported by most services
'api_version' => '2.0',
// Name of this DreamFactory instance. Defaults to server name.
'instance_name' => env('DF_INSTANCE_NAME', gethostname()),
// Log level
'log_level' => env('DF_LOG_LEVEL', 'WARNING'),
When I change DF_LOG_LEVEL to another value in my .env file, even after restarting my server, nothing changes in my log file, and in the Admin section under Config/System Info I still have:
DreamFactory Instance
Admin Application Version: 2.1.5
DreamFactory Version: 2.1.1
System Database: mysql
Install Path: /opt/df2/apps/dreamfactory/htdocs/
Log Path: /opt/df2/apps/dreamfactory/htdocs/storage/logs/
Log Mode: single
Log Level: WARNING
I have noticed the same trouble with other parameters, like DF_ALLOW_FOREVER_SESSIONS=true, which is also no longer effective.
Any help or suggestions?
After making changes to the .env file in DreamFactory, it is recommended to issue the following commands (from the htdocs folder or the DF2 installation directory) so that the changes are read from the .env file:
php artisan config:clear
php artisan cache:clear

Jetty Web Server unable to start "java.io.IOException: cannot read file:.."

2015-04-08 12:56:30 Commons Daemon procrun stderr initialized
java.io.IOException: Cannot read file: C:\Streem\web\modules\annotations.mod
at org.eclipse.jetty.start.Modules.registerModule(Modules.java:549)
at org.eclipse.jetty.start.Modules.registerAll(Modules.java:486)
at org.eclipse.jetty.start.Main.processCommandLine(Main.java:608)
at org.eclipse.jetty.start.Main.main(Main.java:111)
I checked that the installed Java version is 1.7.0_25 and that npn-1.7.0_25.mod exists under web\modules\protonego-impl\.
I am using jetty-9.2.5.v20141112 on a Windows 2008 R2 server.
Does annotations.mod need something special in this case?
We found the same problem on Windows Server 2008. It happens when Jetty is trying to read the module configuration files and is due to a fault in the check for readability.
In the Jetty source file FS.java, at line 39, a check is made using java.nio to see if the file is readable:
public static boolean canReadFile(Path path)
{
    return Files.exists(path) && Files.isRegularFile(path) && Files.isReadable(path);
}
The call to isReadable is slow and fails; see also:
http://mail.openjdk.java.net/pipermail/nio-discuss/2012-July/000672.html
The file itself is in fact readable and can be successfully read from Java, but the isReadable incorrectly returns false.
There are two possible workarounds:
Upgrade to Java 8
Remove the check for isReadable from the Jetty source, as sketched below (in any case, if the file isn't readable, the subsequent read will fail with an exception).
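For the second workaround, here is a minimal sketch of what the modified method in org.eclipse.jetty.start.FS could look like; this is only an illustration against the snippet above, not an official patch:
public static boolean canReadFile(Path path)
{
    // Drop the Files.isReadable(path) call, which can incorrectly return false
    // on Windows Server 2008 with Java 7; a genuinely unreadable file will
    // still fail later with an exception when Jetty actually reads it.
    return Files.exists(path) && Files.isRegularFile(path);
}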
(See also similar question unable to start jetty service through command in window 7)
This is a fundamental I/O error: something prevented Jetty from reading that file.
Try some basic troubleshooting ...
File permissions?
Windows File Locking issue? (a different process has that file open?)

Error running hadoop application in Eclipse on Windows

I'm trying to set up an Eclipse environment for developing and debugging Hadoop. I'm following Tom White's Hadoop: The Definitive Guide, 3rd ed. What I would like to do is get the MaxTemperature app working locally on Windows within Eclipse before moving it to my Hortonworks sandbox VM. The comment on page 158 about using the local job runner seems to be what I want. I don't want to set up a full Hadoop installation on Windows. I'm hoping that with the right config params I can convince it to run as a Java application inside Eclipse.
Windows: 7
Eclipse: Luna
Hadoop: 2.4.0
JDK: 7
When I set the Run configuration for MaxTemperatureDriver (Source code on page 157) to
inputfile outputdir foo (deliberate bogus 3rd parameter)
I get the usage message, so I know I'm running my program with those params.
If I remove the bogus third param, I get:
Exception in thread "main" java.io.IOException: Cannot initialize Cluster. Please check your configuration for mapreduce.framework.name and the correspond server addresses.
at org.apache.hadoop.mapreduce.Cluster.initialize(Cluster.java:120)
at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:82)
at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:75)
at org.apache.hadoop.mapreduce.Job$9.run(Job.java:1255)
at org.apache.hadoop.mapreduce.Job$9.run(Job.java:1251)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
at org.apache.hadoop.mapreduce.Job.connect(Job.java:1250)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1279)
at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1303)
at mark.MaxTemperatureDriver.run(MaxTemperatureDriver.java:52)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
at mark.MaxTemperatureDriver.main(MaxTemperatureDriver.java:56)
I've tried inserting -conf but it seems to be ignored. There is no error message if I specify a nonexistent path.
I've tried inserting -fs file:/// -jt local, but it makes no difference
I've tried inserting -D mapreduce.framework.name=local
I've tried specifying the input and output with the file: format
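For reference, those options correspond to settings on the job's Configuration object. Here is a minimal sketch of applying them directly in a driver, assuming the standard Hadoop 2.x API; the class name is hypothetical and this is not the book's MaxTemperatureDriver:
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class LocalRunnerSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Equivalent of -D mapreduce.framework.name=local -fs file:/// -jt local
        conf.set("mapreduce.framework.name", "local");
        conf.set("fs.defaultFS", "file:///");
        conf.set("mapreduce.jobtracker.address", "local");

        Job job = Job.getInstance(conf, "max-temperature-local");
        job.setJarByClass(LocalRunnerSketch.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        // The mapper and reducer classes from the book's example would be set here.
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
Note that LocalClientProtocolProvider still has to be discoverable on the classpath (it ships in the hadoop-mapreduce-client-common jar), which may be the real issue shown in the debug output further down.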
Note: I'm not asking how to configure Eclipse to connect to a remote Hadoop installation. I want the application to run within Eclipse.
Is this possible? Any ideas?
Additional info:
I turned on debugging. I saw:
582 [main] DEBUG org.apache.hadoop.mapreduce.Cluster - Trying ClientProtocolProvider : org.apache.hadoop.mapred.YarnClientProtocolProvider
583 [main] DEBUG org.apache.hadoop.mapreduce.Cluster - Cannot pick org.apache.hadoop.mapred.YarnClientProtocolProvider as the ClientProtocolProvider - returned null protocol
I'm wondering not why YarnClientProtocolProvider failed, but why it didn't try LocalClientProtocolProvider.
New info:
It seems that this is an issue with Hadoop 2.4.0. I recreated my environment with Hadoop 1.2.1, followed the instructions in
http://gerrymcnicol.com/index.php/2014/01/02/hadoop-and-cassandra-part-4-writing-your-first-mapreduce-job/
added the Windows hack from
http://bigdatanerd.wordpress.com/2013/11/14/mapreduce-running-mapreduce-in-windows-file-system-debug-mapreduce-in-eclipse
and it all started working.
The following blog will be useful:
Running mapreduce in Windows filesystem

FATAL org.apache.hadoop.conf.Configuration - error parsing conf file: org.xml.sax.SAXParseException

I'm trying to run Pig locally, installed using Homebrew, to test a script. However, I get the following error when I attempt to run a simple dump from the interactive prompt pig -x local:
2012-07-16 23:20:40,447 [Thread-7] INFO org.apache.pig.backend.hadoop.executionengine.util.MapRedUtil - Total input paths (combined) to process : 1
[Fatal Error] :63:85: Character reference "&#2" is an invalid XML character.
2012-07-16 23:20:40,688 [Thread-7] FATAL org.apache.hadoop.conf.Configuration - error parsing conf file: org.xml.sax.SAXParseException: Character reference "&#2" is an invalid XML character.
The same load/dump works fine on Elastic MapReduce.
I can't find any XML config files, and I've tried with both versions 0.9.2 and 0.10.0.
What am I missing?
Edit: I just checked a direct download (vs. Homebrew) and it doesn't seem to work either.
You should check that your Hadoop configuration files have correct configuration data.
Have a look in your hadoop/conf directory.
Have a look inside:
hdfs-site.xml
mapred-site.xml
core-site.xml
I finally worked out what the problem was. I ended up having to use dtruss -p on the pig/java process. This revealed a temporary directory and dynamically generated XML files. Once the temporary directory was discovered, it all fell quickly into place.
It was picking up the proxy excludes from my network connections, which had, as far as I can tell, &#2 (http://www.fileformat.info/info/unicode/char/02/index.htm) embedded in it. How this invalid value came to be in my network preferences in the first place, I haven't the faintest clue.
The value was then being pulled into dynamically generated files, for example /tmp/hadoop-vertis/mapred/staging/vertis-1005847898/.staging/job_local_0001/job.xml.
The offending lines:
<property><name>ftp.nonProxyHosts</name><value>localhost|*.localhost|127.0.0.1|h|*.h</value></property>
<property><name>socksNonProxyHosts</name><value>localhost|*.localhost|127.0.0.1|h|*.h</value></property>
<property><name>http.nonProxyHosts</name><value>localhost|*.localhost|127.0.0.1|h|*.h</value></property>
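As an aside, the line and column reported in the [Fatal Error] message (63:85 above) can be reproduced against a copy of such a generated job.xml using the JDK's SAX parser; this is a minimal sketch, and the file path is hypothetical:
import java.io.File;
import javax.xml.parsers.SAXParserFactory;
import org.xml.sax.SAXParseException;
import org.xml.sax.helpers.DefaultHandler;

public class FindBadXmlChar {
    public static void main(String[] args) throws Exception {
        // Point this at a copy of the dynamically generated config (hypothetical path).
        File conf = new File(args.length > 0 ? args[0] : "job.xml");
        try {
            SAXParserFactory.newInstance().newSAXParser().parse(conf, new DefaultHandler());
            System.out.println("No parse errors found in " + conf);
        } catch (SAXParseException e) {
            System.out.println("Invalid XML at line " + e.getLineNumber()
                    + ", column " + e.getColumnNumber() + ": " + e.getMessage());
        }
    }
}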