ScalaTest in IntelliJ does not print out console messages - scala

I am running Spark tests that use ScalaTest. They are very chatty on the command line when run with the following command (as an aside, the -Dtest= option is apparently ignored: all core tests are being run):
mvn -Pyarn -Phive test -pl core -Dtest=org.apache.spark.MapOutputTrackerSuite
There are thousands of lines of output, here is a taste:
17:03:30.251 INFO org.apache.spark.scheduler.TaskSetManager: Finished TID 4417 in 23 ms on localhost (progress: 4/4)
17:03:30.252 INFO org.apache.spark.scheduler.TaskSchedulerImpl: Removed TaskSet 38.0, whose tasks have all completed, from pool
17:03:30.252 INFO org.apache.spark.scheduler.DAGScheduler: Completed ResultTask(38, 3)
17:03:30.252 INFO org.apache.spark.scheduler.DAGScheduler: Stage 38 (apply at Transformer.scala:22) finished in 0.050 s
17:03:30.288 INFO org.apache.spark.ui.SparkUI: Stopped Spark web UI at http://localhost:4041
17:03:30.289 INFO org.apache.spark.scheduler.DAGScheduler: Stopping DAGScheduler
However, in IntelliJ only the tests' pass/fail results are printed. How can I view the same chatty INFO-level output as on the command line?

The log4j.properties file was not on the classpath. The way I fixed this:
(a) create a log4j.properties file inside the test/resources folder;
(b) the following log4j.properties content worked for me:
# Set everything to be logged to the console
log4j.rootCategory=DEBUG, console
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.target=System.err
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{1}: %m%n
# Ignore messages below warning level from Jetty, because it's a bit verbose
# Settings to quiet third party logs that are too verbose
log4j.logger.org.eclipse.jetty=WARN
log4j.logger.org.eclipse.jetty.util.component.AbstractLifeCycle=ERROR
log4j.logger.org.apache.spark.repl.SparkIMain$exprTyper=INFO
log4j.logger.org.apache.spark.repl.SparkILoop$SparkILoopInterpreter=INFO
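If getting the file onto the test classpath is awkward, the same configuration can also be applied programmatically before the suite runs. A minimal sketch using the log4j 1.x API (the TestLogging object and its init method are hypothetical names, not part of Spark or ScalaTest; it mirrors the properties file above):
import org.apache.log4j.{ConsoleAppender, Level, Logger, PatternLayout}

object TestLogging {
  def init(): Unit = {
    // Mirror log4j.rootCategory=DEBUG, console
    val root = Logger.getRootLogger
    root.setLevel(Level.DEBUG)
    root.addAppender(new ConsoleAppender(
      new PatternLayout("%d{yy/MM/dd HH:mm:ss} %p %c{1}: %m%n"),
      ConsoleAppender.SYSTEM_ERR))
    // Quiet the same noisy third-party logger as in the properties file
    Logger.getLogger("org.eclipse.jetty").setLevel(Level.WARN)
  }
}
Call TestLogging.init() from a beforeAll block or a shared fixture so it runs before any Spark code logs anything.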

Related

SAP WebIDE MTA new build has failed for the last few days with a weird zipping issue

Our SAP MTA project has failed to build for the past day or two using the new Cloud MTA Build Tool option, with the following strange error (I have replaced our actual project name with 'xxx'):
11:00:22 (Executor) [2020-02-06 10:00:22] INFO generating the MTA archive...
11:00:35 (Executor) [2020-02-06 10:00:35] INFO the MTA archive generated at: /projects/xxx/mta_archives/xxx_cloud_v1_1.0.0.mtar
11:00:35 (Executor) [2020-02-06 10:00:35] INFO cleaning temporary files...
11:00:35 (Executor) /usr/local/scripts/mbt/webide_mbt_build.sh: line 105: [: mta_archives/xxx_cloud_v1: binary operator expected
11:00:35 (Executor) zip warning: name not matched: mta_archives/xxx_cloud_v1
11:00:35 (Executor) zip warning: name not matched: true_1.0.0.mtar
11:00:35 (Executor)
11:00:35 (Executor) zip error: Nothing to do! (try: zip -r mta_archives/mta_archives.zip . -i mta_archives/xxx_cloud_v1 true_1.0.0.mtar)
11:00:35 (Executor) ERROR:The build of project xxx_cloud_v1 true failed, (Error Code=12, Error Msg=Failed to compress the mta_archives/xxx_cloud_v1 true_1.0.0.mtar source file to the mta_archives/mta_archives.zip .zip file.)
11:00:35 (Executor) ERROR:Function call stack
11:00:35 (Executor) exitOnError
11:00:35 (Executor) zipFile
11:00:35 (Executor) main
It seems the script gets confused about the archive name and inserts a space and the value true into it, causing the zip command to fail. The '[: mta_archives/xxx_cloud_v1: binary operator expected' message is the classic symptom of an unquoted variable being word-split inside a shell [ ] test (for example, [ -f $archive ] with archive='a b' fails the same way), which suggests the script's archive-name variable now contains that space.
I have rechecked our MTA.yaml file but don't see anything weird with it.
Other test projects build just fine, so it has to be something with either our project workspace and/or the script?
If I try the older build tool option instead, I get a similar script issue:
11:19:56 (Executor) The "Task for mta build" process started.
11:19:59 (Executor) Starting process: "cd /projects/xxx_cloud_v1; webide_mta_build.sh"
11:19:59 (Executor) Incorrect command line syntax
11:19:59 (Executor) SAP Multitarget Application Archive Builder 1.1.20
We are on SAP Web IDE Full-Stack - Version: 200116. Production env.
Thanks,
Steven
I have been going through all the changes we made on the project over the last couple of days and found the potential source of the issue - although I cannot explain why this is happening.
So for future reference - here is what I found:
We use quite a few properties in the MTA.yaml file, and a couple of new ones were added:
properties:
  XS_APP_LOG_LEVEL: debug
  CUSTOM_PROP_1: true
  CUSTOM_PROP_2: true
It seems that one of those properties is causing the issue, although there is nothing strange about it: it has a boolean value like many others we have.
I first thought it was too long, or that the combined length exceeded some script variable limit, but I can't come to a conclusive answer.
Will be checking with our SAP contacts for further clarification.
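(A note for future readers, offered as an assumption rather than a confirmed fix: in YAML an unquoted true is parsed as a boolean, not a string. If the build script stringifies descriptor values somewhere along the way, quoting them, e.g. CUSTOM_PROP_1: 'true', forces them to be plain strings and may sidestep whatever boolean handling inserts the stray value into the archive name.)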

Perl script not working through the EPIC plug-in in Eclipse

I have the Perl script below in Eclipse:
#!/usr/bin/perl
use Selenium::Remote::Driver;

print "Hello, World!\n";
my $name = "king";
print "Hello, $name!\n";

# Connects to a Selenium server on the default http://localhost:4444/wd/hub
my $driver = new Selenium::Remote::Driver('browser_name' => 'firefox');
$driver->get('http://www.google.com');
print $driver->get_title();
$driver->quit();
Output :
Hello, World!
Hello, king!
Selenium server did not return proper status at (eval 91) line 64.
Why am I getting the error message
"Selenium server did not return proper status at (eval 91) line 64"?
The browser does not start. Kindly help if anyone knows the solution.
Your code looks correct. Add use strict; use warnings; at the top, after the shebang, and make sure the Selenium standalone server is running.
So the steps would be:
Run the Selenium server: java -jar selenium-server-standalone-2.44.0.jar
and observe output like the following:
➤ java -jar selenium-server-standalone-2.44.0.jar
15:18:56.677 INFO - Launching a standalone server
15:18:56.900 INFO - Java: Oracle Corporation 25.40-b25
15:18:56.900 INFO - OS: Windows 7 6.1 x86
15:18:56.914 INFO - v2.44.0, with Core v2.44.0. Built from revision 76d78cf
15:18:57.174 INFO - RemoteWebDriver instances should connect to: http://127.0.0.1:4444/wd/hub
15:18:57.175 INFO - Version Jetty/5.1.x
15:18:57.176 INFO - Started HttpContext[/selenium-server,/selenium-server]
15:18:57.325 INFO - Started org.openqa.jetty.jetty.servlet.ServletHandler@af7cc2
15:18:57.325 INFO - Started HttpContext[/wd,/wd]
15:18:57.325 INFO - Started HttpContext[/selenium-server/driver,/selenium-server/driver]
15:18:57.325 INFO - Started HttpContext[/,/]
15:18:57.329 INFO - Started SocketListener on 0.0.0.0:4444
15:18:57.329 INFO - Started org.openqa.jetty.jetty.Server@133314b
Then run your script.
I tested this with selenium-server-standalone-2.44.0 and Firefox 33.0; it worked fine.
Update: according to the bug mentioned here, you must be using Selenium 2.42 or greater.

How to reduce the verbosity of Spark's runtime output?

How can I reduce the amount of trace info the Spark runtime produces?
The default is too verbose.
How can I turn it off, and turn it back on when I need it?
Thanks
Verbose mode
scala> val la = sc.parallelize(List(12,4,5,3,4,4,6,781))
scala> la.collect
15/01/28 09:57:24 INFO SparkContext: Starting job: collect at <console>:15
15/01/28 09:57:24 INFO DAGScheduler: Got job 3 (collect at <console>:15) with 1 output
...
15/01/28 09:57:24 INFO Executor: Running task 0.0 in stage 3.0 (TID 3)
15/01/28 09:57:24 INFO Executor: Finished task 0.0 in stage 3.0 (TID 3). 626 bytes result sent to driver
15/01/28 09:57:24 INFO DAGScheduler: Stage 3 (collect at <console>:15) finished in 0.002 s
15/01/28 09:57:24 INFO DAGScheduler: Job 3 finished: collect at <console>:15, took 0.020061 s
res5: Array[Int] = Array(12, 4, 5, 3, 4, 4, 6, 781)
Silent mode (expected)
scala> val la = sc.parallelize(List(12,4,5,3,4,4,6,781))
scala> la.collect
res5: Array[Int] = Array(12, 4, 5, 3, 4, 4, 6, 781)
Spark 1.4.1
sc.setLogLevel("WARN")
From comments in source code:
Valid log levels include: ALL, DEBUG, ERROR, FATAL, INFO, OFF, TRACE, WARN
Spark 2.x
sparkSession.sparkContext.setLogLevel("WARN")
(In the Java API the accessor is a method call, sparkSession.sparkContext().setLogLevel("WARN"); the version ranges sometimes quoted for the two spellings actually reflect the Scala vs Java API, not a Spark version change.)
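For context, a minimal self-contained sketch showing where the call fits in an application; the app name and master here are placeholders, not anything from the original answer:
import org.apache.spark.sql.SparkSession

object QuietApp {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("quiet-app")   // placeholder name
      .master("local[*]")     // placeholder master
      .getOrCreate()

    spark.sparkContext.setLogLevel("WARN")  // silence INFO chatter
    val la = spark.sparkContext.parallelize(List(12, 4, 5, 3, 4, 4, 6, 781))
    println(la.collect().mkString(", "))    // runs without the INFO noise
    spark.sparkContext.setLogLevel("INFO")  // turn it back on when needed

    spark.stop()
  }
}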
Quoting from the 'Learning Spark' book:

You may find the logging statements that get printed in the shell distracting. You can control the verbosity of the logging. To do this, you can create a file in the conf directory called log4j.properties. The Spark developers already include a template for this file called log4j.properties.template. To make the logging less verbose, make a copy of conf/log4j.properties.template called conf/log4j.properties and find the following line:
log4j.rootCategory=INFO, console
Then lower the log level so that we only show WARN messages and above by changing it to the following:
log4j.rootCategory=WARN, console
When you re-open the shell, you should see less output.
Logging configuration at the Spark app level
With this approach, no code change is needed in the application for use on a cluster.
Let's create a new log4j.properties file from log4j.properties.template, then change the verbosity with the log4j.rootCategory property.
Say we need to see only the ERRORs of a given jar; then set log4j.rootCategory=ERROR, console.
The spark-submit command would be:
spark-submit \
... # other Spark props go here
--files prop/file/location \
--conf 'spark.executor.extraJavaOptions=-Dlog4j.configuration=prop/file/location' \
--conf 'spark.driver.extraJavaOptions=-Dlog4j.configuration=prop/file/location' \
jar/location \
[application arguments]
Now you will see only the logs that are categorised as ERROR.
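One caveat from general log4j 1.x behaviour (not from the original answer): -Dlog4j.configuration expects a URL or a classpath resource, so a local file often needs a file: prefix, e.g. -Dlog4j.configuration=file:log4j.properties once --files has shipped the file into each container's working directory.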
Plain Log4j way without Spark (but needs a code change)
Set logging to ERROR level for the org and akka packages:
import org.apache.log4j.{Level, Logger}
Logger.getLogger("org").setLevel(Level.ERROR)
Logger.getLogger("akka").setLevel(Level.ERROR)
If you are invoking a command from a shell, there is a lot you can do without changing any configuration. That is by design.
Below are a couple of Unix examples using pipes, but you could do similar filtering in other environments.
To completely silence the log (at your own risk):
Pipe stderr to /dev/null, i.e.:
run-example org.apache.spark.examples.streaming.NetworkWordCount localhost 9999 2> /dev/null
To ignore INFO messages
run-example org.apache.spark.examples.streaming.NetworkWordCount localhost 9999 | awk '{if ($3 != "INFO") print $0}'

How to use a Hadoop streaming input parameter for a MATLAB shell script

I want to execute my MATLAB code via Hadoop streaming. My question is how to use the Hadoop streaming input parameter value as the input to my MATLAB script. For example:
This is my MATLAB file imreadtest.m (simple code):
rgbImage = imread('/usr/new.jpg');
imwrite(rgbImage,'/usr/OT/testedimage1.jpg');
My shell script is:
#!/bin/sh
matlabbg imreadtest.m -nodisplay
Normally this works well on my Ubuntu machine (not in Hadoop). I have stored these two files in HDFS using Hue. Now my MATLAB script looks like this (imrtest.m):
rgbImage = imread(STDIN);
imwrite(rgbImage,STDOUT);
My shell script is (imrtest.sh):
#!/bin/sh
matlabbg imrtest.m -nodisplay
I tried to execute this with Hadoop streaming:
hadoop#xxx:/usr/local/master/hadoop$ $HADOOP_HOME/bin/hadoop jar $HADOOP_HOME/hadoop-streaming.jar -mapper /usr/OT/imrtest.sh -file /usr/OT/imrtest.sh -input /usr/OT/testedimage.jpg -output /usr/OT/opt
But I got an error like this:
packageJobJar: [/usr/OT/imrtest.sh, /usr/local/master/temp/hadoop-unjar4018041785380098978/] [] /tmp/streamjob7077345699332124679.jar tmpDir=null
14/03/06 15:51:41 WARN snappy.LoadSnappy: Snappy native library is available
14/03/06 15:51:41 INFO util.NativeCodeLoader: Loaded the native-hadoop library
14/03/06 15:51:41 INFO snappy.LoadSnappy: Snappy native library loaded
14/03/06 15:51:41 INFO mapred.FileInputFormat: Total input paths to process : 1
14/03/06 15:51:42 INFO streaming.StreamJob: getLocalDirs(): [/usr/local/master/temp/mapred/local]
14/03/06 15:51:42 INFO streaming.StreamJob: Running job: job_201403061205_0015
14/03/06 15:51:42 INFO streaming.StreamJob: To kill this job, run:
14/03/06 15:51:42 INFO streaming.StreamJob: /usr/local/master/hadoop/bin/hadoop job -Dmapred.job.tracker=slave3:8021 -kill job_201403061205_0015
14/03/06 15:51:42 INFO streaming.StreamJob: Tracking URL: http://slave3:50030/jobdetails.jsp?jobid=job_201403061205_0015
14/03/06 15:51:43 INFO streaming.StreamJob: map 0% reduce 0%
14/03/06 15:52:15 INFO streaming.StreamJob: map 100% reduce 100%
14/03/06 15:52:15 INFO streaming.StreamJob: To kill this job, run:
14/03/06 15:52:15 INFO streaming.StreamJob: /usr/local/master/hadoop/bin/hadoop job -Dmapred.job.tracker=slave3:8021 -kill job_201403061205_0015
14/03/06 15:52:15 INFO streaming.StreamJob: Tracking URL: http://slave3:50030/jobdetails.jsp?jobid=job_201403061205_0015
14/03/06 15:52:15 ERROR streaming.StreamJob: Job not successful. Error: NA
14/03/06 15:52:15 INFO streaming.StreamJob: killJob...
Streaming Command Failed!
jobtracker error log for this job is
HOST=null
USER=hadoop
HADOOP_USER=null
last Hadoop input: |null|
last tool output: |null|
Date: Thu Mar 06 15:51:51 IST 2014
java.io.IOException: Broken pipe
at java.io.FileOutputStream.writeBytes(Native Method)
at java.io.FileOutputStream.write(FileOutputStream.java:297)
at java.io.BufferedOutputStream.write(BufferedOutputStream.java:122)
at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
at java.io.BufferedOutputStream.write(BufferedOutputStream.java:126)
at java.io.DataOutputStream.write(DataOutputStream.java:107)
at org.apache.hadoop.streaming.io.TextInputWriter.writeUTF8(TextInputWriter.java:72)
at org.apache.hadoop.streaming.io.TextInputWriter.writeValue(TextInputWriter.java:51)
at org.apache.hadoop.streaming.PipeMapper.map(PipeMapper.java:110)
at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:50)
at org.apache.hadoop.streaming.Pipe
java.io.IOException: log:null
...
Please suggest how to get input from the Hadoop streaming input into my MATLAB script, and similarly how to send output back.
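For context (an explanatory note, not part of the original question): Hadoop streaming treats the mapper as a filter that reads text lines from stdin and writes key/value lines to stdout. The Broken pipe in the job log above is the typical symptom of a mapper script that exits without ever consuming stdin. Below is a minimal sketch of a conforming line-oriented mapper, written in Scala purely for illustration; binary data such as JPEGs does not fit this line-oriented contract, so image jobs usually pass file paths as the input records instead:
// Echo each non-empty input line as a tab-separated key/value record.
// Used as the -mapper program of a streaming job.
object StreamingMapper {
  def main(args: Array[String]): Unit = {
    for (line <- scala.io.Source.stdin.getLines()) {
      val trimmed = line.trim
      if (trimmed.nonEmpty)
        println(s"$trimmed\t1") // key TAB value, one record per line
    }
  }
}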

Hudson failing build w/o revealing cause

Every build has failed since Tuesday. I'm not exactly sure what happened. The Phing targets (clean/prepare) are executed properly. Additionally, the unit tests pass with flying colors, with only a warning for duplicate code (not a reason for a failure). I tried removing the phpDoc target to see whether that was causing the error, but the build still failed.
Started by user chris
Updating file://localhost/projects/svn/ips-com/trunk
At revision 234
no change for file://localhost/projects/svn/ips-com/trunk since the previous build
[trunk] $ /opt/phing/bin/phing clean prepare -logger phing.listener.NoBannerLogger
Buildfile: /var/lib/hudson/.hudson/jobs/IPS/workspace/trunk/build.xml
IPS > clean:
[echo] Clean...
[delete] Deleting directory /var/lib/hudson/.hudson/jobs/IPS/workspace/build
IPS > prepare:
[echo] Prepare...
[mkdir] Created dir: /var/lib/hudson/.hudson/jobs/IPS/workspace/build
[mkdir] Created dir: /var/lib/hudson/.hudson/jobs/IPS/workspace/build/logs
[mkdir] Created dir: /var/lib/hudson/.hudson/jobs/IPS/workspace/build/logs/coverage
[mkdir] Created dir: /var/lib/hudson/.hudson/jobs/IPS/workspace/build/logs/coverage-html
[mkdir] Created dir: /var/lib/hudson/.hudson/jobs/IPS/workspace/build/docs
[mkdir] Created dir: /var/lib/hudson/.hudson/jobs/IPS/workspace/build/app
BUILD FINISHED
Total time: 1.0244 second
[workspace] $ /bin/bash -xe /tmp/hudson3259012225710915845.sh
+ cd trunk/tests
+ /usr/local/bin/phpunit --verbose -d memory_limit=512M --log-junit ../../build/logs/phpunit.xml --coverage-clover ../../build/logs/coverage/clover.xml --coverage-html ../../build/logs/coverage-html/
PHPUnit 3.5.0 by Sebastian Bergmann.
IPS
Default_IndexControllerTest .
Default_AuthControllerTest ......
Manage_UsersControllerTest .....
testDeleteInvalidUserId ..
testGetPermissionsForInvalidUserId ..
Audit_OverviewControllerTest ............
Time: 14 seconds, Memory: 61.00Mb
OK (28 tests, 198 assertions)
Writing code coverage data to XML file, this may take a moment.
Generating code coverage report, this may take a moment.
Warning: Unknown: Error occured while closing statement in Unknown on line 0
Warning: Unknown: Error occured while closing statement in Unknown on line 0
Warning: Unknown: Error occured while closing statement in Unknown on line 0
Warning: Unknown: Error occured while closing statement in Unknown on line 0
Warning: Unknown: Error occured while closing statement in Unknown on line 0
Warning: Unknown: Error occured while closing statement in Unknown on line 0
Warning: Unknown: Error occured while closing statement in Unknown on line 0
Warning: Unknown: Error occured while closing statement in Unknown on line 0
[workspace] $ /bin/bash -xe /tmp/hudson1439023061736436000.sh
+ /usr/local/bin/phpcpd --log-pmd ./build/logs/cpd.xml ./trunk
phpcpd 1.3.2 by Sebastian Bergmann.
Found 1 exact clones with 6 duplicated lines in 2 files:
library/Ips/Form/Decorator/SplitInput.php:8-14
library/Ips/Form/Decorator/FeetInches.php:10-16
0.04% duplicated lines out of 16585 total lines of code.
Time: 4 seconds, Memory: 19.50Mb
[DRY] Skipping publisher since build result is FAILURE
Publishing Javadoc
[xUnit] [INFO] - Starting to record.
[xUnit] [WARNING] - Can't create the path /var/lib/hudson/.hudson/jobs/IPS/workspace/generatedJUnitFiles. Maybe the directory already exists.
[xUnit] [INFO] - Processing PHPUnit-3.4 (default)
[xUnit] [INFO] - [PHPUnit-3.4 (default)] - 1 test report file(s) were found with the pattern 'build/logs/phpunit.xml' relative to '/var/lib/hudson/.hudson/jobs/IPS/workspace' for the testing framework 'PHPUnit-3.4 (default)'.
[xUnit] [INFO] - Converting '/var/lib/hudson/.hudson/jobs/IPS/workspace/build/logs/phpunit.xml'.
[xUnit] [INFO] - Stopping recording.
Publishing Clover coverage report...
Publishing Clover XML report...
Publishing Clover coverage results...
Finished: FAILURE
What changed since Tuesday? Try to manually run exactly the same commands that Hudson runs, from the same directory that Hudson starts them in (usually the job's workspace directory), and of course under the user account that Hudson runs as.
There are several possibilities, ranging from group ownership of a directory, to permissions, to other things outside of Hudson. Was Hudson upgraded? Was a plugin upgraded? Was the OS or PHP upgraded? Was there a change in the default or user .profile or .env (or the equivalent files)? Does another process access the workspace? ...
Once I had a problem where all of a sudden my deployment scripts did not run anymore. The mystery was that I could still run the script from the command line with the Hudson user account. The reason was simple but took a while to uncover: there had been a Java upgrade from 5 to 6, and both versions were available. After comparing the environment variables, there was a difference in the PATH. The new path was set in the global .profile, but Hudson does not open an interactive shell, so the .profile is not executed. If you have a problem like this, you can put the initialization in the .env file (or whatever the filename is on your system), because that is run regardless of whether the shell is interactive. Alternatively, you can configure Hudson to set the variable at the master or node/slave level.
If you want a command not to mark the build as a failure, start the shell build step with its own shebang line (e.g. #!/bin/bash as the first line). This stops Hudson from invoking the script with its default -xe flags (exit on first error, trace every command), which is what produces this behaviour.