How do I format Jenkins build server emails so that the content is not all on one line? - email

I have used Hudson in the past and was very happy with it; it seemed to work well.
I recently installed Jenkins and set up the Editable Email Notification (email-ext) plugin.
Jenkins Version: 1.513
Email-ext plugin version: 2.28
Unfortunately, when I try to add other tokens or override the default email, it just appends all the tokens to the same line.
This is confusing. I have the email set up for HTML.
Any hints on how to format this more nicely?
The default email (not the editable one) works OK, but I would like more useful information.
Unfortunately, the format of this email makes it close to useless.
Here is my editable content:
$BUILD_TAG
$BUILD_ID
$SVN_REVISION
$CHANGES
$CAUSE
$DEFAULT_CONTENT
$WARNINGS_NEW
$WARNINGS_COUNT
Here is the email received:
jenkins-DotNet-43 2013-05-13_16-09-40 7481 [kevin] -help layout Started by an SCM change DotNet - Build # 43 - Successful: Check console output at http://[buildserver]:8080/job/DotNet/43/ to view the results. [kevin] -help layout Started by an SCM change [...truncated 142 lines...] CopyFilesToOutputDirectory: Copying file from "obj\Release\Model.Wpf.dll" to "bin\Release\Model.Wpf.dll". Model.Wpf -> C:\Jenkins.jenkins\jobs\DotNet\workspace\dotnet\Messenger\Model\Model.Generic\bin\Release\Model.Wpf.dll Copying file from "obj\Release\Model.Wpf.pdb" to "bin\Release\Model.Wpf.pdb". Done Building Project "C:\Jenkins.jenkins\jobs\DotNet\workspace\dotnet\Messenger\Model\Model.Ge
EDIT
Note: when I put <br> entries between items, they are separated by line breaks in the email. Unfortunately, within the tokens themselves (like the change list) there are NO line separators: for example, multiple commits are all listed on one line.
The content is there, but it is difficult to decipher. It seems there is a bug in the mail plugin or some other related system.

You already noticed that you need to actually use HTML line breaks between tokens so they don't show up on the same line, so I'll just answer the part about the multiple change log entries on the same line.
From the Content Token Reference, bold emphasis mine:
${CHANGES, showPaths, showDependencies, format, pathFormat}
Displays the changes since the last build.
showDependencies - if true, changes to projects this build depends on are shown.
Defaults to false.
showPaths - if true, the paths modified by a commit are shown.
Defaults to false.
format - for each commit listed, a string containing %X, where %X is one of %a for author, %d for date, %m for message, %p for paths,
or %r for revision. Not all revision systems support %d and %r. If
specified, showPaths is ignored.
Defaults to "[%a] %m\n".
pathFormat - a string containing %p to indicate how to print paths.
Defaults to "\t%p\n".
The unparameterized ${CHANGES} token is set up for display in a plain text email. You need to configure it so it displays properly in an HTML environment.
Example: <ul>${CHANGES, format="<li>[%a] %m</li>"}</ul>
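Extending that idea to the whole editable content from the question, the template could look something like the following. This is only a sketch, assuming the email's Content Type is set to HTML (text/html); the section layout is illustrative, and the tokens are the ones the question already uses:

```html
<p>
  $BUILD_TAG<br/>
  $BUILD_ID<br/>
  Revision: $SVN_REVISION<br/>
  $CAUSE
</p>
<h3>Changes</h3>
<ul>${CHANGES, format="<li>[%a] %m</li>"}</ul>
<h3>Warnings</h3>
<p>Total: $WARNINGS_COUNT (new: $WARNINGS_NEW)</p>
<p>$DEFAULT_CONTENT</p>
```

The key points are the explicit <br/> tags between plain tokens and the format argument on ${CHANGES}, which replaces the plain-text default of "[%a] %m\n" with HTML list items.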

One may try
mimeType: 'HTML/text'
with the emailext plugin and use the HTML <br> tag for new lines.
Surprisingly, mimeType: 'text/html' didn't work in my case, whereas mimeType: 'HTML/text' did.
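In a Pipeline job, that parameter sits on the emailext step. A rough sketch (the subject, body, and recipient are placeholders; the mimeType value documented by the plugin is 'text/html', though the answer above reports needing 'HTML/text' in their setup):

```
emailext(
    subject: "Build ${currentBuild.fullDisplayName}",
    body: "Result: ${currentBuild.currentResult}<br>" +
          "See: ${env.BUILD_URL}",
    mimeType: 'text/html',     // 'HTML/text' reportedly needed in some setups
    to: 'team@example.com'     // placeholder recipient
)
```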

Related

Dataprep import dataset does not detect headers in first row automatically

I am importing a dataset from Google Cloud Storage (parameterized) into Dataprep. So far this has worked perfectly fine, and one of the features I liked is that it auto-detects that the first row in my (application/octet-stream) .csv file contains my headers.
However, today I tried to import a new dataset and it did not detect the headers, but auto-assigned column1, column2...
What has changed, and why is this the case? I have checked the auto-detect box and use UTF-8:
While the auto-detect option is usually pretty good, there are times that it fails for numerous reasons. I've specifically noticed this when the field names contain certain characters (e.g. comma, invisible characters like zero-width-non-joiners, null bytes), or when multiple different styles of newline delimiters are used within the same file.
Another case I saw this is when there were more columns of data than there were headers.
As you already hit on, you can use the following snippet to do mostly the same thing:
rename type: header method: filter sanitize: true
. . . or make separate recipe steps to convert the first row to header and then bulk-rename to your own liking.
More often than not, however, I've found that when auto-detect fails on a previously working file, it tends to be a sign of some sort of issue with the source file. I would look for mismatched data, as well as misplaced commas within the output, as well as comparing the header and some data rows to the original source using a plaintext editor.
When all else fails, you can try a CSV validator . . . but in my experience they tend to be incredibly opinionated when it comes to the formatting options of the file, so depending on the system generating the CSV, it could either miss errors or give false positives. I have had two experiences where auto-detect failed for no apparent reason on perfectly clean files, so it is possible that the process was just skipped for some reason.
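If you want to sanity-check the file yourself before re-uploading, a quick script along these lines can flag the two failure modes mentioned above: mixed newline styles and rows whose column count doesn't match the header row. This is just a diagnostic sketch, not anything Dataprep-specific:

```python
import csv
import io


def diagnose_csv(data: bytes) -> dict:
    """Flag mixed newline styles and rows whose width differs from the header."""
    crlf = data.count(b"\r\n")
    lf_only = data.count(b"\n") - crlf   # bare LFs not part of a CRLF pair
    cr_only = data.count(b"\r") - crlf   # bare CRs not part of a CRLF pair
    styles_used = sum(1 for n in (crlf, lf_only, cr_only) if n)

    rows = list(csv.reader(io.StringIO(data.decode("utf-8"))))
    header_len = len(rows[0]) if rows else 0
    # Rows (1-based line numbers) whose field count differs from the header.
    mismatched = [i for i, row in enumerate(rows[1:], start=2)
                  if len(row) != header_len]

    return {
        "mixed_newlines": styles_used > 1,
        "header_columns": header_len,
        "mismatched_rows": mismatched,
    }
```

Running it over the raw bytes of the file (e.g. `diagnose_csv(open("data.csv", "rb").read())` with a hypothetical path) gives you a quick yes/no on the most common structural problems before blaming the auto-detect.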
It should also be noted that if you have a structured file that was correctly detected but want to revert it, you can go to the dataset details, select the "..." (More) button, and choose "Remove structure..." (I'm hoping that one day they'll let you do the opposite when you want to add structure to a raw dataset or work around bugs like this!)
Best of luck!
Can be resolved as a transformation within a Flow:
rename type: header method: filter sanitize: true

tSendMail - New Line Trouble

I am trying to create an email with some job status information, which I wish to put across multiple lines. However, whatever I do, I get the output on one line. I have changed the MIME type to HTML, and used "\n", "\r", "\r\n", and the String object's newline. Nothing seems to work.
I noticed that these characters do get processed, even though the outcome isn't as expected: I don't see them in the email body, which suggests that the text processor accepts them. It just doesn't process them the way it should. Am I seeing a bug in the component?
I am on Talend Open Studio 7.0.1, on an Ubuntu 16.04.4 VM, on a Windows 10 system (if that helps).
The HTML <br> tag works.
I tried it earlier, but it looks like I didn't structure my HTML tags well, so it failed. I did it again from the start and got it right.
Guess what: the more you try, the more you learn. :)

Publish an R code in github

I have written some code in R that I have compiled into a package. Unfortunately,
I am not able to publish it as a package such that any user may download it with the
install_github() function.
Kindly help.
I have shared the path for the repository below.
https://github.com/Kagereki/RPerio
When I try to install your package I get the following error:
Error: Line starting 'Population-Based Sur ...' is malformed!
The specification for DESCRIPTION files states that
DESCRIPTION uses a simple file format called DCF, the Debian control format. You can see most of the structure in the simple example below. Each line consists of a field name and a value, separated by a colon. When values span multiple lines, they need to be indented:
Description: The description of a package is usually long,
spanning multiple lines. The second and subsequent lines
should be indented, usually with four spaces.
Inspecting your DESCRIPTION file shows that its Description field is indeed formatted incorrectly, with the second line not indented:
Description: A collection of tools for Case Definitions and prevalences in
Population-Based Surveillance of Periodontitis.
The functionality is experimental and functions are likely to ...
Note that this line begins with "Population-Based Sur ...", as suggested by the error message.
Make sure your DESCRIPTION is properly formatted and see if that fixes things.
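Concretely, the fix is to indent the continuation lines of the Description field (four spaces is conventional). Based on the text quoted above, the corrected field would look roughly like this (the trailing "..." reflects the truncation in the quoted file, not actual content):

```
Description: A collection of tools for Case Definitions and prevalences in
    Population-Based Surveillance of Periodontitis.
    The functionality is experimental and functions are likely to ...
```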
You should be able to use the check() function from devtools to make sure that everything is working locally before you push up a new version.

Fail2ban: add more info to email notifications

I'd like to append the relevant fail2ban log entry to the notification email I already receive for any given incident.
Does anybody know how this can be done?
It depends on what information you would like. You may edit the actionban segment of the appropriate action.d configuration file: copy the .conf version to a .local version (which overrides the .conf version, as per the fail2ban documentation) and edit it to include whatever information you would like. For example, I have personally amended my sendmail-whois.conf (the main sendmail action I use; you could do likewise with sendmail.conf if you use that instead) by copying it to sendmail-whois.local, which I then edited to include the server hostname on the 'From:' line.
You could also include commands to be executed, with their output passed into the email to be sent, as long as you follow the correct syntax and fully qualify the path to the relevant commands. For example, you will see that the sendmail-whois action configuration contains the following line within the actionban segment:
`/usr/bin/whois <ip>`\n
Note, as mentioned above, that the full path to the relevant command is included (in this case, for whois), and the entire command with its options must be delimited by backquotes. The \n at the end of the line indicates that a new line be printed after this one in the output.
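Putting that together, a .local override might look roughly like the following. This is a sketch based on the stock sendmail-whois action, trimmed for brevity; the grep line is an illustrative addition (it pulls matching entries from the fail2ban log), and the log path may differ on your system:

```
# /etc/fail2ban/action.d/sendmail-whois.local (sketch, not the full file)
[Definition]
actionban = printf %%b "Subject: [Fail2Ban] <name>: banned <ip>
            From: Fail2Ban <<sender>>
            To: <dest>\n
            The IP <ip> has just been banned by Fail2Ban after
            <failures> attempts against <name>.\n
            Here is more information about <ip>:\n
            `/usr/bin/whois <ip>`\n
            Relevant log lines:\n
            `/usr/bin/grep '<ip>' /var/log/fail2ban.log`\n
            Regards,\nFail2Ban" | /usr/sbin/sendmail -f <sender> <dest>
```

The <ip>, <name>, <failures>, <sender>, and <dest> placeholders are substituted by fail2ban when the action fires.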
Hope that clarifies things for you!

How can we identify a Notepad file?

How can we identify Notepad files created on two different computers? Is there any way to get information about which computer a file was created on, or whether it was created on XP or Linux?
If you right click on the file, you should be able to see the permissions and attributes of the file.
Check the line endings. Under GNU/Linux, lines end with \n (ASCII 0x0A), while under Microsoft Windows it is \r\n (ASCII 0x0D 0x0A).
Wikipedia: https://en.wikipedia.org/wiki/Newline
I found this: http://bit.ly/J258Mr
It is for identifying a Word document, but some of the info is relevant:
To see on which computer the document had been created, open the Word
document in a hex editor and look for "PID_GUID". This is followed by
a globally unique identifier that, depending upon the version of Word
used, may contain the MAC address of the system on which the file was
created.
Checking the user properties (as already mentioned) is a good way to
see who the creator of the original file was...so, if the document was
not created from scratch and was instead originally created on another
system, then the user information will be for the original file.
Another way to locate the "culprit" in this case is to parse the
contents of the NTUSER.DAT files for each user on each computer. While
this sounds like a lot of work, it really isn't...b/c you're only
looking for a couple of pieces of information. Specifically, you're
interested in the MRU keys for the version of Word being used, as well
as perhaps the RecentDocs keys.
The one thing I can think of off the top of my head is inspecting the newline characters in your file (I'm assuming your files do have multiple lines). If the file was generated using Windows, then a newline is the combination of carriage return and line feed characters (CR+LF), whereas a simple line feed (LF) is a hint that the file was generated on a Linux machine.
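The check above can be automated. A minimal sketch that classifies a file's line endings from its raw bytes (a heuristic only; editors can rewrite line endings, so this indicates where the file was last saved, not necessarily where it was created):

```python
def detect_line_endings(data: bytes) -> str:
    """Classify line endings: 'CRLF' (Windows), 'LF' (Unix), 'mixed', or 'none'."""
    crlf = data.count(b"\r\n")
    lf_only = data.count(b"\n") - crlf   # bare LFs not part of a CRLF pair
    if crlf and lf_only:
        return "mixed"
    if crlf:
        return "CRLF"
    if lf_only:
        return "LF"
    return "none"
```

Call it on the file's bytes, e.g. `detect_line_endings(open("notes.txt", "rb").read())` (the path is hypothetical); note the file must be opened in binary mode, since text mode would translate the line endings before you can inspect them.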
Right-click on the file --> Details. You can see the computer name where it was created and the date.