How to get particular error logs from the diagnostic log (db2diag.log) in DB2

I am trying to use the db2diag command to get all the log records captured in the diagnostic log that contain a particular SQLCODE. Can anyone help me with the command?

Use the db2diag command and filter the DATA section for "sqlcode" followed by the particular code:
db2diag -g 'data:=sqlcode: -1063' would search for SQLCODE -1063 (message SQL1063N).
The full documentation of the db2diag tool describes further options; with them you can format the output and extract only the parts of the log records you need.
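For example, a sketch that limits the search to recent records and saves the matches to a file (the -H time-window option is an assumption to verify against the db2diag documentation for your DB2 version):

# only look at records from the last 3 days and keep the matching records
db2diag -g 'data:=sqlcode: -1063' -H 3d > sqlcode_1063.log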

Related

Log assertion result in non-GUI mode [JMeter]

How can I log assertion results to a CSV file in non-GUI mode?
I tried the command:
jmeter -n -t user.jmx -l D:/Reports/TestReport.csv -e -o D:/Reports/htmlReport/ -j Reports/jmeter.log
An assertion is present in my .jmx file, but its result is not logged to any file.
What exactly do you want to "log" and how?
By default JMeter logs assertion failure messages into the .jtl results file; there is a failureMessage column where all assertion failures go.
If you don't see this failureMessage column in the .jtl results file, most probably you (or somebody else) modified the default results file configuration. To get the column back, add the following line to the user.properties file:
jmeter.save.saveservice.assertions=true
and upon JMeter restart you will start seeing assertion results in your .jtl file.
More information:
Configuring JMeter
Apache JMeter Properties Customization Guide
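A minimal user.properties sketch that keeps assertion information in a CSV results file might look like this (the two additional property names are assumptions based on the standard JMeter results-file settings, so check them against your JMeter version):

# keep assertion results and their failure messages in the CSV results file
jmeter.save.saveservice.assertions=true
jmeter.save.saveservice.assertion_results_failure_message=true
jmeter.save.saveservice.output_format=csv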

Perl script for bulk-adding users to a group after reading them from a CSV file

I'm attempting to bulk-add users to a group using a CSV file, and I'm running into a few frustrating errors that I can't seem to find explained anywhere online:
useradd: invalid shell '-d/home/jbower11' --(for all users in the list)
Use of uninitialized value $fields[0] in concatenation (.) or string at csvreader.pl line 14, line 6. --(for all users in the list)
useradd: invalid shell '-d/home/' --(An additional error that pops up after the script has run.)
system("useradd -gstudents -c $fields[0],$fields[1] -s -d/home/$fields[2] -m $fields[2]");
The -s option specifies the new user's login shell, but you give it no value, so the next argument, -d/home/$fields[2], is consumed as the shell name, and that is not a valid shell.
There is also a blank line at the end of your data file which you are not discarding; that is what produces the final invalid shell '-d/home/' error.
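A corrected invocation would give -s a real shell and keep -d separate from its value. A sketch of the command the system() call should end up running, with placeholder values (Doe, John, jdoe and /bin/bash stand in for the CSV fields and your chosen shell):

# placeholders: "Doe,John" is the GECOS comment, jdoe is the user name
useradd -g students -c "Doe,John" -s /bin/bash -d /home/jdoe -m jdoe

In the script itself, skip empty lines before splitting them into fields so the trailing blank line never reaches useradd.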
Please use the passive facilities available on the internet to diagnose your problems before resorting to personal assistance, and make an attempt to format your questions properly if you really do need help.

Can we give two files as input while using JasperStarter

I am using JasperStarter to create PDFs from several .jrprint files and then print them using JasperStarter functions.
I want to create one single PDF file from all the .jrprint files.
If I give a command like:
jasperstarter pr a.jprint b.jprint -f pdf -o rep
It does not recognise the files after the first input file.
Can we create one single output file with many input jasper/jrprint files?
Please help.
Thanks,
Oshin
Looking at the documentation, this is not possible:
The command process (pr)
The command process is for processing a report.
Compare this with the description of the compile command:
The command compile (cp)
The command compile is for compiling one report or all reports in a directory.

Logging a PostgreSQL session in a text file

I am trying to log a complete session in psql into a .txt file. The command given to me was initially this:
psql db_name| tee file_name.txt
However, my SSH client does nothing until I quit psql. It does not seem to recognize any command; no matter what I write, nothing happens, as if I were typing into a document. So far only \q is recognised, which lets me get out of it. Any idea what is happening? How am I supposed to write queries if the shell will not read anything? I also tried the following (before connecting to the database):
script filename.txt
It does show the message "Script started, file is filename.txt", but I don't know where this file is stored or how to retrieve it.
Any help with the above will be welcome and really appreciated! Thanks a lot :)
There is an option to psql for logging queries and results:
-L filename
--log-file filename
Write all query output into file filename, in addition to the normal output destination.
Try this:
psql db_name -L file_name.txt
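A minimal sketch of a logged session (db_name and the query are placeholders):

# start psql with a log file
psql db_name -L file_name.txt
# inside psql, run queries as usual (e.g. SELECT now();) and leave with \q
# the log file then holds the query output in addition to what was shown on screen
cat file_name.txt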

What to look for in a bad pg_dump log

We want to programmatically detect errors in cron-scheduled pg_dump runs.
Apart from checking whether or not the log file ends with "pg_dump: saving database definition":
What other tell-tale strings are there to grep for in order to programmatically check if the dump is OK?
grep the output for mentions of "ERROR:".
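A sketch of such a check, assuming the cron job redirects pg_dump's stderr to a log file (the database name, paths, and mail recipient are placeholders):

pg_dump mydb > /backups/mydb.sql 2> /backups/mydb.log
status=$?
# a non-zero exit code or an ERROR: line in the log marks the dump as suspect
if [ "$status" -ne 0 ] || grep -q "ERROR:" /backups/mydb.log; then
    echo "pg_dump reported a problem, see /backups/mydb.log" | mail -s "pg_dump failure" admin@example.com
fi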