How to prevent logs on the console (STDOUT) for Typesafe Activator - Scala

I am using Typesafe Activator on Windows 7 to run Spark with Scala. The problem is that every time I run a program it prints a lot of verbose logs to the console (STDOUT), and I don't want to see them because my own messages get lost in the clutter.
Any idea how the verbose logs can be turned off, or, even better, how I can send my own messages to a different console?
I also keep getting the following error every time I make a code change and try to run:
registered more than once? Task((taskDefinitionKey:
ScopedKey(Scope(Select(ProjectRef(file:/C:/Users/E498164/Documents/Hadoop/SparkProject/hello-apache-spark/,hello-apache-spark)),Select(ConfigKey(compile)),Select(run),Global),runner)))
Use 'last' for the full log.
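For the logging half of the question, a common approach (independent of Activator itself) is to raise the log4j level of Spark's noisiest loggers before the SparkContext is created, or equivalently to put a log4j.properties with a higher rootCategory on the application classpath. Below is a minimal sketch, assuming a Spark 1.x setup with its bundled log4j 1.x API; the object name and app name are placeholders:
import org.apache.log4j.{Level, Logger}
import org.apache.spark.{SparkConf, SparkContext}

object QuietSparkApp { // hypothetical app name
  def main(args: Array[String]): Unit = {
    // Silence the chattiest loggers before Spark starts writing to the console.
    Logger.getLogger("org").setLevel(Level.WARN)
    Logger.getLogger("akka").setLevel(Level.WARN)

    val sc = new SparkContext(new SparkConf().setAppName("quiet-app").setMaster("local[*]"))
    println("my own message, no longer buried in Spark's output")
    sc.stop()
  }
}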

Related

Get test execution logs during a test run with the NUnit Test Engine

We are using the NUnit Test Engine to run tests programmatically.
It looks like after we add FrameworkPackageSettings.NumberOfTestWorkers to the runner code, the test run for our UI tests hangs during execution. I'm not able to see at what point or on which event the execution hangs, because the test runner returns the test result logs (in XML) only when the entire execution ends.
Is there a way to get test execution logs for each test?
I've added InternalTraceLevel and InternalTraceWriter, but those logs are something different (BTW, it looks like ParallelWorker#9 hangs even when writing to the console)
_package.AddSetting(FrameworkPackageSettings.InternalTraceLevel, "Debug");
var nunitInternalLogsPath = Path.GetDirectoryName(Uri.UnescapeDataString(new Uri(Assembly.GetExecutingAssembly().CodeBase).AbsolutePath)) + "\\NunitInternalLogs.txt";
Console.WriteLine("nunitInternalLogsPath: "+nunitInternalLogsPath);
StreamWriter writer = File.CreateText(nunitInternalLogsPath);
_package.AddSetting(FrameworkPackageSettings.InternalTraceWriter, writer);
The result file, with the default name TestResult.xml, is not a log. That is, it is not a file produced line by line as execution proceeds. Rather, it is a picture of the result of your entire run, and therefore it is only created at the end of the run.
InternalTrace logs are actual logs in that sense. They were created to allow us to debug the internal workings of NUnit, and we often ask users to create them when an NUnit bug is being tracked. Up to four of them may be produced when running the tests of a single assembly under nunit3-console:
1. A log of the console runner itself
2. A log of the engine
3. A log of the agent used to run the tests (if an agent is used)
4. A log received from the test framework running the tests
In your case, #1 is not produced, of course. Based on the content of the trace log, we are seeing #4, triggered by the package setting passed to the framework. I have seen situations in the past where the log was incomplete, but not recently. The logs normally use auto-flush to ensure that all output is actually written.
If you want to see a complete engine log (#2), set the WorkDirectory and InternalTrace properties of the engine when you create it.
However, as stated, these logs are all intended for debugging NUnit, not for debugging your tests. The console runner produces another "log" even though it isn't given that name. It's the output written to the console as the tests run, especially that produced when using the --labels option.
If you want some similar information from your own runner, I suggest producing it yourself. Create either console output or a log file of some kind, by processing the various events received from the tests as they execute. To get an idea of how to do this, I suggest examining the code of the NUnit3 console runner. In particular, take a look at the TestEventHandler class, found at https://github.com/nunit/nunit-console/blob/version3/src/NUnitConsole/nunit3-console/TestEventHandler.cs

Running the scalapbc command from a thread pool

I am trying to run the scalapbc command from a thread pool, with each thread issuing its own scalapbc invocation.
When I do that, I get an error of the form:
# A fatal error has been detected by the Java Runtime Environment:
#
# SIGBUS (0x7) at pc=0x00007f32dd71fd70, pid=8346, tid=0x00007f32e0018700
From my Google searches, this issue occurs when the /tmp folder is full or when it is accessed by multiple processes simultaneously.
My question is: is there a way to issue scalapbc commands from multiple threads without getting the above error? How can I make sure that the temp folders used by the individual threads don't interfere with each other?
The issue occurs most of the time when I run the code, but sometimes the build passes as well.
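One way to attack this, assuming the SIGBUS really does come from several invocations sharing /tmp, is to give each invocation its own scratch directory and shell out to the CLI from the pool. Whether scalapbc honours TMPDIR, and the exact flags, are assumptions here; the proto file names are placeholders. A rough sketch:
import java.nio.file.Files
import java.util.concurrent.{Executors, TimeUnit}
import scala.sys.process._

object ParallelScalapbc {
  def main(args: Array[String]): Unit = {
    val protoFiles = Seq("a.proto", "b.proto", "c.proto") // hypothetical inputs
    val pool = Executors.newFixedThreadPool(4)

    protoFiles.foreach { proto =>
      pool.submit(new Runnable {
        def run(): Unit = {
          // Per-invocation scratch directory, so extracted temp files
          // cannot clash in a shared /tmp (assumes TMPDIR is respected).
          val tmp = Files.createTempDirectory("scalapbc-")
          val cmd = Seq("scalapbc", proto, "--scala_out=target/generated")
          val exit = Process(cmd, None, "TMPDIR" -> tmp.toString).!
          println(s"$proto finished with exit code $exit")
        }
      })
    }

    pool.shutdown()
    pool.awaitTermination(30, TimeUnit.MINUTES)
  }
}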

JBoss is hanging at the "Loading profile..." output line

I am trying to start a JBoss 5.1.0.GA instance, and the console output hangs at the [ProfileServiceBootstrap] Loading profile: ProfileKey#3f5f852e[domain=default, server=default, name=default] line.
The JBoss instance was copied from a remote server on which it works well.
Not much is logged (no more than 50 lines), and no error is displayed in the console during startup.
I understand that there may be some dependencies/connections/etc. that it needs which are not satisfied, but I would expect an error to be thrown. Instead, it just hangs, without any other issue being reported.
I hope this will sound familiar to others who have worked with older versions of JBoss and that they can point me toward potential root causes.
Not proud of my findings... :) The issue was in fact that logging was configured not to write to the console. After looking through the other log files, I found that the server had actually started, with the well-known statement Started in 1m:21s:836ms...
It is not really an issue, nor much of an answer, but I leave it here in case others find themselves in the same "I do not see the logs" situation (which should also have been the title).
Note: to make the logs show up in the console, I modified /server/default/conf/jboss-log4j.xml

Scala Play app always times out on first request

After starting my app for the first time, the first request always times out. If I tail the logs when this request is invoked, Play appears to be doing some kind of required post-compilation work: resolving the same list of dependencies that were resolved on startup and initiating the database connection. Is there any way to force this extra work to happen at startup?
When you run in prod mode this will not happen.
Even if you are not building for production yet, you can run a test instance.
You will need to be sure to set an application secret.

Where does 'normal' println output go in a Scala jar under Spark?

I'm running a simple jar through Spark and everything is working fine, but as a crude way to debug I often find println pretty helpful, unless I really have to attach a debugger.
However, the output from println statements is nowhere to be found when the jar is run under Spark.
The main class in the jar begins like this:
import ...
object SimpleApp {
  def main(args: Array[String]) {
    println("Starting up!")
    ...
Why does something as simple as this not show up in the driver process?
If it matters, I've tested this running Spark locally as well as under Mesos.
Update
As in Proper way to provide spark application a parameter/arg with spaces in spark-submit, I've dumbed down the question scenario; I was actually submitting the command (with spark-submit) over SSH.
The actual parameter value was a query from the BigDataBenchmark, namely:
"SELECT pageURL, pageRank FROM rankings WHERE pageRank > 1000"
Now, that wasn't properly escaped in the remote ssh command:
ssh host spark-submit ... "$query"
Became, on the host:
spark-submit ... SELECT pageURL, pageRank FROM rankings WHERE pageRank > 1000
So there you have it: because the unquoted > 1000 was interpreted by the remote shell as an output redirection, all my stdout was going to a file, whereas the "normal" Spark output still appeared because it goes to stderr, which I only now realise.
This would appear in the stdout of the driver. As an example, see SparkPi. I know that on YARN this appears locally in stdout when in client mode, or in the application master's stdout log when in cluster mode. In local mode it should appear on the normal stdout (though likely mixed in with lots of logging noise).
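To make the driver-versus-executor distinction concrete, here is a minimal sketch (the object name, app name and master are placeholders): a println in main runs on the driver, while a println inside an RDD operation runs on the executors and ends up in their stdout logs rather than on the driver console.
import org.apache.spark.{SparkConf, SparkContext}

object PrintlnDemo {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("println-demo").setMaster("local[*]"))

    println("driver: visible in the driver's stdout (your console in client/local mode)")

    sc.parallelize(1 to 3).foreach { i =>
      // Runs on an executor; in local mode that's the same JVM, so it still hits
      // your console, but on a real cluster it lands in the executor's stdout log.
      println(s"executor: processing $i")
    }

    sc.stop()
  }
}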
I can't say for sure about Spark, but based on what Spark is, I would assume that it starts up child processes, and the standard output of those processes is not sent back to the main process for you to see. You can get around this in a number of ways, such as opening a file to write messages to, or a network connection over your localhost to another process that displays messages it receives. If you're just trying to learn the basics, this may be sufficient. However, if you're going to do a larger project, I'd strongly recommend doing some research into what the Spark community has already developed for that purpose, as it will benefit you in the long run to have a more robust setup for debugging.
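If you take the write-to-a-file route suggested above, even a tiny hypothetical helper like the one below is enough for crude debugging; note that when it is called inside an RDD operation, the file is created on whichever worker runs the task, not on the driver machine. The file path is an assumption.
import java.io.{File, FileWriter, PrintWriter}

// Hypothetical helper: appends timestamped debug lines to a fixed file,
// independent of where the process's stdout/stderr end up.
object DebugLog {
  private val out =
    new PrintWriter(new FileWriter(new File("/tmp/myapp-debug.log"), true), true) // append, auto-flush

  def apply(msg: String): Unit =
    out.println(s"${System.currentTimeMillis()} $msg")
}

// usage: DebugLog("Starting up!")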