I would like to change the appearance of the log file generated by CCNet. It is useful that the error messages are separated from the original log messages, but when debugging it is tricky to see when an error actually happened. Our PowerShell script runs for 6-8 hours and creates about 38k lines in the log file, so I would really appreciate a way to list the errors inline with the other lines in the log file. Additionally, it would be nice if all the errors still appeared separately as well.
So far I have not found much documentation explaining how to change the log file output...
Simon
Not sure how this is logged, but in the end, logs produced during the build are put into the build log file, which you will find in the artifacts folder.
These logs are then transformed into HTML output using XSL transforms. If none of the built-in reports is useful to you, you can create a custom XSL file and plug it in; see the dashboard.config file, where the following section allows adding additional XSL transforms:
<buildReportBuildPlugin>
<xslReportBuildPlugin description="MSBuild Log" actionName="MSBuildBuildReport" xslFileName="xsl\MSBuild4Log.xsl"/>
...
If you know what the error messages are going to be, you can parse them with an XSL file and generate some HTML that will show up in the build emails. The following goes in ccservice.exe.config:
<xslFiles>
<file name="c:\path\to\custom_errors.xsl"/>
</xslFiles>
custom_errors.xsl is an XSL file that finds the error messages in the raw build log XML and generates HTML from them. This HTML will show up in the build emails. You have to create custom_errors.xsl yourself. It is a significant amount of work to get working the first time, especially if you are new to XML/XSL/HTML/CSS. If you undertake this, I suggest doing all the testing outside of CCNet, using an XSL transformer with a sample CCNet build log as input. Be aware that CCNet uses a CSS file to style the HTML; you can edit that too.
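To give an idea of the shape of such a stylesheet, here is a minimal sketch. The XPath is an assumption: it supposes your raw build log marks errors with a level="Error" attribute on message elements, so inspect your own build log XML and adjust the select expressions accordingly.

<?xml version="1.0" encoding="utf-8"?>
<!-- Hypothetical custom_errors.xsl: extracts error messages from the raw
     build log and renders them as a simple HTML list. Adjust the XPath
     to match the structure of your actual build log XML. -->
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:output method="html"/>
  <xsl:template match="/">
    <div class="error-report">
      <h3>Errors</h3>
      <ul>
        <xsl:for-each select="//message[@level='Error']">
          <li><xsl:value-of select="."/></li>
        </xsl:for-each>
      </ul>
    </div>
  </xsl:template>
</xsl:stylesheet>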
Note that you have to restart the CCNet service after editing ccservice.exe.config.
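For example, from an elevated command prompt (CCService is the default service name; yours may differ):

net stop CCService
net start CCService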
I'm sorry if this doesn't have enough information. I don't typically ask for help online like this.
I'm using DITA Open Toolkit 3.4 on Windows. I generated a plugin called "vcr2" using Jarno's (very excellent and helpful) PDF Plugin Generator and then made a handful of customizations. The plugin uses the pdf2 plugin as a base. When I try to use the vcr2 plugin, my images are not working. I've tracked the problem down to malformed image filenames in the image's href attribute.
For example:
In my source file (a DITA Task), the markup for one of my images looks like this:
<image href="MyRemindersChooseReminder.png"/>
If I run a transform with the pdf2 plugin, the images work fine. In the merged stage1.xml file in the Temp folder, the XML for that same image looks like this:
<image class="- topic/image " href="df2d132af27436c59c5c8c4282e112d62bec8201.png" placement="inline" xtrc="image:1;10:66" xtrf="file:/V:/Vasont/Extract/t12340879-minimal/t12340879.xml"/>
It is processed into a file Topic.fo, and looks like this:
<fo:external-graphic
src="url('file:/V:/Vasont/Extract/t12340879-minimal/MyRemindersChooseReminder.png')"/>
Everything works fine and the image looks fine.
If I run the same file through my 'vcr2' plugin, which just calls the same pdf2 plugin with some overrides, all the images get broken:
stage1.xml
<image class="- topic/image " href="df2d132af27436c59c5c8c4282e112d62bec8201.png" placement="inline" xtrc="image:1;10:66" xtrf="file:/V:/Vasont/Extract/t12340879-minimal/t12340879.xml"/>
Topic.fo
<fo:external-graphic
src="url('file:/V:/Vasont/Extract/t12340879-minimal/df2d132af27436c59c5c8c4282e112d62bec8201.png')"
/>
As I track this down further, it appears that somewhere in the map-reader Ant task, this filename gets changed to that cryptic string of pseudo-hexadecimal. I think later on it's supposed to be changed back or resolved to a complete URI or something.
So, the two-part question is: Why does Open Toolkit change my filenames, and what's supposed to change them back?
DITA-OT's preprocess uses hashes for temporary filenames because that frees the code from having to deal with directory structures. This enables preprocess to work in so-called "map-first" mode, where it first processes all DITA map resources and only then starts to process DITA topic and image resources.
The preprocess has a step called clean-preprocess that can rewrite the temporary file names to match the source resource file names. However, this rewrite operation is disabled for PDF output because the original file names are not used for anything in that output type.
I have Allure reporting set up for my C# Selenium framework, and everything is working fine, but I have noticed something that bothers me and that I'd like to change. In every single test there is always an attachment called "console output" that is empty and 0 KB in size. My question is: is there any way to remove or disable this?
I'm guessing this is the confluence of two minor bugs, one in NUnit and one in Allure.
On the NUnit side, the XML that is created for a test result contains an <output> element to hold the text output by the test. It sounds as if an empty element is produced when there is no output. You can check whether this is the case with your version of NUnit by examining the XML output.
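For illustration, the relevant fragment of an NUnit 3 result file looks roughly like this (a sketch from memory, not verbatim output); the empty <output> element is what ends up as the 0 KB attachment:

<test-case id="0-1001" name="MyTest" fullname="Tests.MyTest" result="Passed">
  <output><![CDATA[]]></output>
</test-case>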
On the Allure side, an empty element could be ignored, but apparently it isn't.
Either or both of these should be reported to the respective projects.
In the SuiteCloud Eclipse IDE for NetSuite, what is the Ignore List setting under Preferences > NetSuite > Validation? Is it a single file that behaves like, say, a .gitignore? Or is it an explicit list of files to ignore?
I suspect this setting is why Eclipse is always building libraries and other files I've explicitly told it not to in my NetSuite projects.
Can anyone provide some clarity on the usage of this field?
Attempt 1
I tried setting this preference to a single file with the following contents:
**/*.min.js
**/*.lib.js
**/docs/**
**/Third Party/**
**/node_modules/**
**/bower_components/**
**/*jquery*
**/*moment*
**/*lodash*
But that does not seem to work as expected. Files that should be caught by these patterns are still validated. One of them in particular (docstrap.lib.js) crashes the entire IDE every single time the SuiteScript validator encounters it.
Attempt 2
I tried to put a similar string of patterns directly into the field itself:
**/*.min.js,**/*.lib.js,**/docs/**,...
but this just yields an error directly in the dialog itself: Value must be an existing file
Attempt 3
Created a new SuiteScript project with only blanket.min.js in the project root. Added an ignore file with the following contents:
/blanket.min.js
./blanket.min.js
*blanket.min.js
blanket.min.js
"blanket.min.js"
*blanket*
**/blanket*
*/blanket*
.\blanket.min.js
**\blanket*
*\blanket*
\blanket.min.js
\blanket*
.\blanket*
C:\Development\Projects\validator-test\blanket.min.js
C:/Development/Projects/validator-test/blanket.min.js
blanket.min.js still gets validated. I'm completely lost as to how this ignore file should be formatted.
The ignore list is used by the SuiteCloud IDE to avoid flagging errors in the IDE for non-standard script IDs passed to SuiteScript 1.0 APIs.
As an example...
nlapiLoadRecord('customrecord_foo', 1);
Since customrecord_foo is a non-standard record, it will be marked as an error by the IDE.
To tell the IDE to ignore customrecord_foo, the ignore list can be used.
It's a text file, with one script id per line.
customrecord_foo
customrecord_bar
The non-standard script IDs specified in the ignore list file will not be flagged as errors by the IDE.
We have several applications that use log4j for logging. I need to get a log4j parser working so we can combine multiple log files and run automated analysis on them. I'm not looking to reinvent the wheel, so can someone point me to a decent pre-existing parser? I do have the log4j conversion pattern if that helps.
If not, we'll have to roll our own.
I didn't realize that Log4J ships with an XML appender.
The solution was: specify an XML appender in the logging configuration file, include the output XML file as an entity in a well-formed wrapper XML file, then parse the XML using your favorite technique.
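A sketch of that setup for Log4j 1.x (the file names here are placeholders). First the appender, in log4j.xml:

<!-- Writes each event as a log4j:event XML fragment (hypothetical file name) -->
<appender name="XML" class="org.apache.log4j.FileAppender">
  <param name="File" value="app.log.xml"/>
  <layout class="org.apache.log4j.xml.XMLLayout"/>
</appender>

The output is a stream of <log4j:event> fragments, not a well-formed document, so wrap it via an external entity before parsing:

<?xml version="1.0"?>
<!DOCTYPE log4j:eventSet [<!ENTITY data SYSTEM "app.log.xml">]>
<log4j:eventSet xmlns:log4j="http://jakarta.apache.org/log4j/">
  &data;
</log4j:eventSet>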
The other methods had the following limitations:
Apache Chainsaw - not automated enough
JDBC - poor performance in a high-performance distributed app
You can use OtrosLogViewer with batch processing. You have to:
Define your log format; you can use the Log4j pattern layout parser or Log4j XMLLayout
Create a Java class that implements LogDataParsedListener. The method public void logDataParsed(LogData data, BatchProcessingContext context) will be called for every parsed log event (see the sketch after this list)
Create a jar
Run OtrosLogViewer, specifying your log-processing jar, the LogDataParsedListener implementation, and the log files
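A minimal sketch of such a listener; the import paths are from memory and may differ between OtrosLogViewer versions, so check them against the version you use:

import pl.otros.logview.LogData;
import pl.otros.logview.batch.BatchProcessingContext;
import pl.otros.logview.batch.LogDataParsedListener;

public class PrintingListener implements LogDataParsedListener {
    // Called once for every parsed log event; put your analysis here.
    @Override
    public void logDataParsed(LogData data, BatchProcessingContext context) {
        System.out.println(data.getDate() + " " + data.getLevel() + " " + data.getMessage());
    }
}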
What you are looking for is called SawMill, or something like it.
Log4j log files aren't really suitable for parsing; they're too complex and unstructured. There are third-party tools that can do it, I believe (e.g. Sawmill).
If you need to perform automated, custom analysis of the logs, you should consider logging to a database and analysing that. Log4j ships with the JDBCAppender, which appends all messages to a database of your choice, but it has performance implications, and it's a bit flaky. There are other, similar alternatives on the interweb, though (like this one).
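For reference, a minimal sketch of wiring up the JDBCAppender in log4j.properties; the connection details and table layout are placeholders to adapt to your own schema:

# Hypothetical database, credentials and table - adapt to your setup
log4j.appender.DB=org.apache.log4j.jdbc.JDBCAppender
log4j.appender.DB.URL=jdbc:mysql://localhost/logs
log4j.appender.DB.driver=com.mysql.jdbc.Driver
log4j.appender.DB.user=loguser
log4j.appender.DB.password=secret
log4j.appender.DB.sql=INSERT INTO LOGS (ts, logger, level, message) VALUES ('%d', '%c', '%p', '%m')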
You can use Log4j's Chainsaw V2 to process the various log files and collect them into one table, and either output those events as XML or use Chainsaw's built-in expression-based filtering, searching & colorizing support to slice & dice the logs.
Steps:
- Start Chainsaw V2
- Create a chainsaw configuration file by copying the example configuration file available from the Welcome tab - define one LogFilePatternReceiver 'plugin' entry for each log file that you want to process
- Start Chainsaw with that configuration
- Each log file will end up as a separate tab in the UI
- Pause the chainsaw-log tab and clear the events from that tab
- Create a new tab which aggregates the events from the various tabs by going to the 'view, create custom expression logpanel' menu item and entering 'level >= DEBUG' in the box. It will create a new tab containing events from all of the tabs with level >= DEBUG (which is why you cleared the chainsaw-log tab).
You can get an overview of the expression syntax used to filter, colorize and search from the tutorial (available from the Help menu).
If you don't want to use Chainsaw, you can do something similar: start a simple app that doesn't log itself, but loads a log4j.xml config file with the 'plugin' entries you defined for the Chainsaw configuration, plus a FileAppender with an XMLLayout - all of the events received by the 'receivers' will be sent to that single appender.
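A sketch of such a log4j.xml, assuming Log4j 1.x with the companion receivers on the classpath; the file paths, logFormat and timestampFormat are placeholders you would adapt to your own conversion pattern:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE log4j:configuration SYSTEM "log4j.dtd">
<log4j:configuration xmlns:log4j="http://jakarta.apache.org/log4j/">
  <!-- One receiver per log file to process (hypothetical path and format) -->
  <plugin name="app1Receiver" class="org.apache.log4j.varia.LogFilePatternReceiver">
    <param name="fileURL" value="file:///logs/app1.log"/>
    <param name="logFormat" value="TIMESTAMP LEVEL [LOGGER] MESSAGE"/>
    <param name="timestampFormat" value="yyyy-MM-dd HH:mm:ss,SSS"/>
    <param name="name" value="app1"/>
  </plugin>
  <!-- Everything the receivers pick up goes to one XML file -->
  <appender name="xmlOut" class="org.apache.log4j.FileAppender">
    <param name="file" value="combined-log.xml"/>
    <layout class="org.apache.log4j.xml.XMLLayout"/>
  </appender>
  <root>
    <level value="debug"/>
    <appender-ref ref="xmlOut"/>
  </root>
</log4j:configuration>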
I've set up some NUnit tests to validate the statistical formulas within my .NET 2.0 application, and for company records I need to have a printed copy of this output. Is anyone aware of any commands in NUnit to automatically print the XML to the default printer?
If printing isn't possible, saving to a folder may work for us.
Thanks in advance
The NUnit console runner automatically writes the results as XML. To choose your own name for the XML file, this is what you need to do:
nunit-console /xml:someFileNameHere.xml yourFileWithNUnitTestsHere.dll
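NUnit itself has no command to print the results, but once the XML file is on disk you could send it to the default printer yourself. For example, with Windows PowerShell (Out-Printer is available in Windows PowerShell, though not in PowerShell Core):

Get-Content someFileNameHere.xml | Out-Printer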