Scala JUnit picking up wrong log4j.properties file

I have a test written in Scala, using JUnit. The test lives in one module of a multi-module Maven build with many other modules.
Here is the code of the test:
import org.apache.log4j.Logger
import org.apache.logging.log4j.scala.Logging
import org.junit._

class MyTest extends Logging {
  @Test
  def mainTest() = {
    //val logger = Logger.getLogger("MyTest")
    logger.fatal("fatal")
    logger.error("error")
    logger.warn("warn")
    logger.info("info")
    logger.debug("debug")
    logger.trace("trace")
  }
}
And here is the log4j.properties file, which is in the resources folder:
log4j.rootCategory=ALL, console
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.target=System.out
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{1}: %m%n
The maven dependencies are:
<dependency>
    <groupId>org.apache.logging.log4j</groupId>
    <artifactId>log4j-api-scala_2.10</artifactId>
    <version>2.8.2</version>
</dependency>
<dependency>
    <groupId>org.apache.logging.log4j</groupId>
    <artifactId>log4j-core</artifactId>
    <version>2.8.2</version>
</dependency>
When I run the test, the debug and trace levels are not printed.
It seems to me that the logger might be picking up a properties file from one of the other modules. Why?
If I uncomment the first line of the test (which uses the log4j 1.x Logger directly), all the levels get printed.
I tried adding -Dlog4j.debug to the run command, but log4j seems to ignore it.
Any idea what I'm missing?

You are using Log4j 2, so your file should be named log4j2.properties. That also explains why -Dlog4j.debug does nothing: it is a Log4j 1.x flag. With Log4j 2, use -Dlog4j.configurationFile=<path> to point at a specific file, and set status = trace inside the configuration to see what actually gets loaded.
Also, the syntax of the .properties file has changed. The following example, taken from here, will get you started:
name=PropertiesConfig
property.filename = logs
appenders = console, file
appender.console.type = Console
appender.console.name = STDOUT
appender.console.layout.type = PatternLayout
appender.console.layout.pattern = [%-5level] %d{yyyy-MM-dd HH:mm:ss.SSS} [%t] %c{1} - %msg%n
appender.file.type = File
appender.file.name = LOGFILE
appender.file.fileName=${filename}/propertieslogs.log
appender.file.layout.type=PatternLayout
appender.file.layout.pattern=[%-5level] %d{yyyy-MM-dd HH:mm:ss.SSS} [%t] %c{1} - %msg%n
loggers=file
logger.file.name=guru.springframework.blog.log4j2properties
logger.file.level = debug
logger.file.appenderRefs = file
logger.file.appenderRef.file.ref = LOGFILE
rootLogger.level = debug
rootLogger.appenderRefs = stdout
rootLogger.appenderRef.stdout.ref = STDOUT
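
If you want to verify which configuration Log4j 2 actually picked up (useful in a multi-module build), you can also query the logger context directly. A minimal sketch, assuming log4j-core is on the classpath:

import org.apache.logging.log4j.LogManager
import org.apache.logging.log4j.core.LoggerContext

// The cast works when log4j-core is the active logging implementation.
val ctx = LogManager.getContext(false).asInstanceOf[LoggerContext]
// Prints the URI of the loaded configuration (may be null for the default config).
println(ctx.getConfigLocation)

If this points at a file from another module, that module's resources are shadowing yours on the test classpath.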

Related

How to append the Spark ApplicationID to the filename of the log4j log file

I am trying to append the Spark applicationId to the filename of the log4j log file. Below is the log4j.properties file:
log4j.rootLogger=info,file
# Redirect log messages to console
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.Target=System.out
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss} %-5p %c{1}:%L -%m%n
# Redirect log messages to log file, support file rolling
log4j.appender.file=org.apache.log4j.rolling.RollingFileAppender
log4j.appender.file.rollingPolicy=org.apache.log4j.rolling.TimeBasedRollingPolicy
log4j.appender.file.rollingPolicy.FileNamePattern=log4j//Data_Quality.%d{yyyy-MM-dd}.log
log4j.appender.file.layout=org.apache.log4j.PatternLayout
log4j.appender.file.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss} %-5p %c{1}:%L -%m%n
# set the immediate flush to true
log4j.appender.file.ImmediateFlush=true
# set the threshold to INFO
log4j.appender.file.Threshold=INFO
# set append to true (false would overwrite on restart)
log4j.appender.file.Append=true
Spark-submit command:
spark2-submit --conf "spark.driver.extraJavaOptions=-Dconfig.file=./input.conf -Dlog4j.configuration=log4j.properties" --conf "spark.executor.extraJavaOptions=-Dlog4j.configuration=log4j.properties" --files "input.conf,log4j.properties" --master yarn --class "DataCheckImplementation" Data_quality.jar
Log files are created with the name Data_Quality.2020-07-21.log, which works correctly.
I want to add Spark ApplicationID to filename
Expected filename : Data_Quality.(ApplicationID).2020-07-21.log
Example: Data_Quality.(application_1595144411765_20000).2020-07-21.log
Is it possible? Need help!
I don't know whether this can be done at the configuration level (e.g. log4j.properties), but there are ways to achieve it. Here is one approach:
You will need a logger class/trait where you handle all your logger management, something like:
import java.text.SimpleDateFormat
import java.util.Date

import org.apache.log4j.{PatternLayout, RollingFileAppender}
import org.apache.spark.sql.SparkSession

trait SparkContextProvider {
  def spark: SparkSession
}

trait Logger extends SparkContextProvider {
  // Assumption: a per-class log4j 1.x logger; the original elided the argument.
  lazy val log = org.apache.log4j.Logger.getLogger(getClass)
  lazy val applicationId = spark.sparkContext.applicationId

  // Assumption: a daily date stamp, matching the yyyy-MM-dd pattern used above.
  val dateFormat = new SimpleDateFormat("yyyy-MM-dd")
  val date = new Date()

  val appender = new RollingFileAppender()
  appender.setAppend(true)
  appender.setMaxFileSize("1MB")
  appender.setMaxBackupIndex(1)
  appender.setFile("Data_Quality." + applicationId + "." + dateFormat.format(date) + ".log")
  appender.activateOptions()

  val layOut = new PatternLayout()
  layOut.setConversionPattern("%d{yyyy-MM-dd HH:mm:ss} %-5p %c{1}:%L - %m%n")
  appender.setLayout(layOut)
  log.addAppender(appender)
}
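
A sketch of how the trait might be mixed in; DataCheckImplementation is the class named in the spark-submit command above, but its body here is assumed:

import org.apache.spark.sql.SparkSession

object DataCheckImplementation extends Logger {
  // Provide the SparkSession the trait requires.
  def spark: SparkSession = SparkSession.builder().appName("DataCheck").getOrCreate()

  def main(args: Array[String]): Unit = {
    log.info(s"Logging to a file stamped with $applicationId")
    spark.stop()
  }
}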

Problem with configuring log4j2.properties in Spring Boot (using Gradle)

I added a log4j2.properties file in src/main/resources but it is not taking effect. Shouldn't log4j2.properties get detected on its own? How can I check whether it's being detected?
Log4j2.properties file
status = error
name = PropertiesConfig
filters = threshold
filter.threshold.type = ThresholdFilter
filter.threshold.level = debug
appenders = console
appender.console.type = Console
appender.console.name = STDOUT
appender.console.layout.type = PatternLayout
appender.console.layout.pattern = %d{yyyy-MM-dd HH:mm:ss} %-5p %c{1}:%L - %m%n
rootLogger.level = debug
rootLogger.appenderRefs = stdout
rootLogger.appenderRef.stdout.ref = STDOUT
Spring Boot uses Logback as its default logging framework.
If you want to use Log4j2 you have to do some configuration.
Exclude the default logger and add the log4j2 starter dependency:
dependencies {
    compile 'org.springframework.boot:spring-boot-starter-web'
    compile 'org.springframework.boot:spring-boot-starter-log4j2'
}

configurations {
    all {
        exclude group: 'org.springframework.boot', module: 'spring-boot-starter-logging'
    }
}
And as far as I know, Log4j2 is configured using an XML file, not a properties file.
Please find all the information in the official Spring Boot Reference Documentation:
https://docs.spring.io/spring-boot/docs/current/reference/htmlsingle/#howto-configure-log4j-for-logging
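
If you do switch to XML, a minimal log4j2.xml equivalent of the properties file above might look like this (a sketch placed in src/main/resources; the ThresholdFilter is omitted for brevity):

<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="error">
    <Appenders>
        <Console name="STDOUT" target="SYSTEM_OUT">
            <PatternLayout pattern="%d{yyyy-MM-dd HH:mm:ss} %-5p %c{1}:%L - %m%n"/>
        </Console>
    </Appenders>
    <Loggers>
        <Root level="debug">
            <AppenderRef ref="STDOUT"/>
        </Root>
    </Loggers>
</Configuration>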

Scala on Eclipse: reading a CSV as a DataFrame throws a java.lang.ArrayIndexOutOfBoundsException

Trying to read a simple CSV file and load it into a DataFrame throws a java.lang.ArrayIndexOutOfBoundsException.
As I am new to Scala I may have missed something trivial, but a thorough search on both Google and Stack Overflow turned up nothing.
The code is the following:
import org.apache.spark.sql.SparkSession

object TransformInitial {
  def main(args: Array[String]): Unit = {
    val session = SparkSession.builder.master("local").appName("test").getOrCreate()
    val df = session.read.format("csv").option("header", "true").option("inferSchema", "true").option("delimiter", ",").load("data_sets/small_test.csv")
    df.show()
  }
}
small_test.csv is as simple as possible:
v1,v2,v3
0,1,2
3,4,5
Here is the actual pom of this Maven project:
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>Scala_tests</groupId>
    <artifactId>Scala_tests</artifactId>
    <version>0.0.1-SNAPSHOT</version>
    <build>
        <sourceDirectory>src</sourceDirectory>
        <resources>
            <resource>
                <directory>src</directory>
                <excludes>
                    <exclude>**/*.java</exclude>
                </excludes>
            </resource>
        </resources>
        <plugins>
            <plugin>
                <artifactId>maven-compiler-plugin</artifactId>
                <version>3.8.0</version>
                <configuration>
                    <source>1.8</source>
                    <target>1.8</target>
                </configuration>
            </plugin>
            <!-- https://mvnrepository.com/artifact/org.apache.spark/spark-core -->
        </plugins>
    </build>
    <dependencies>
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-core_2.12</artifactId>
            <version>2.4.0</version>
        </dependency>
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-sql_2.12</artifactId>
            <version>2.4.0</version>
        </dependency>
    </dependencies>
</project>
Execution of the code throws the following java.lang.ArrayIndexOutOfBoundsException:
18/11/09 12:03:31 INFO FileSourceStrategy: Pruning directories with:
18/11/09 12:03:31 INFO FileSourceStrategy: Post-Scan Filters: (length(trim(value#0, None)) > 0)
18/11/09 12:03:31 INFO FileSourceStrategy: Output Data Schema: struct<value: string>
18/11/09 12:03:31 INFO FileSourceScanExec: Pushed Filters:
18/11/09 12:03:31 INFO CodeGenerator: Code generated in 413.859722 ms
Exception in thread "main" java.lang.ArrayIndexOutOfBoundsException: 10582
at com.thoughtworks.paranamer.BytecodeReadingParanamer$ClassReader.accept(BytecodeReadingParanamer.java:563)
at com.thoughtworks.paranamer.BytecodeReadingParanamer$ClassReader.access$200(BytecodeReadingParanamer.java:338)
at com.thoughtworks.paranamer.BytecodeReadingParanamer.lookupParameterNames(BytecodeReadingParanamer.java:103)
at com.thoughtworks.paranamer.CachingParanamer.lookupParameterNames(CachingParanamer.java:90)
at com.fasterxml.jackson.module.scala.introspect.BeanIntrospector$.getCtorParams(BeanIntrospector.scala:44)
at com.fasterxml.jackson.module.scala.introspect.BeanIntrospector$.$anonfun$apply$1(BeanIntrospector.scala:58)
at com.fasterxml.jackson.module.scala.introspect.BeanIntrospector$.$anonfun$apply$1$adapted(BeanIntrospector.scala:58)
at scala.collection.TraversableLike.$anonfun$flatMap$1(TraversableLike.scala:241)
at scala.collection.Iterator.foreach(Iterator.scala:929)
at scala.collection.Iterator.foreach$(Iterator.scala:929)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1417)
at scala.collection.IterableLike.foreach(IterableLike.scala:71)
at scala.collection.IterableLike.foreach$(IterableLike.scala:70)
at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
at scala.collection.TraversableLike.flatMap(TraversableLike.scala:241)
at scala.collection.TraversableLike.flatMap$(TraversableLike.scala:238)
at scala.collection.AbstractTraversable.flatMap(Traversable.scala:104)
at com.fasterxml.jackson.module.scala.introspect.BeanIntrospector$.findConstructorParam$1(BeanIntrospector.scala:58)
at com.fasterxml.jackson.module.scala.introspect.BeanIntrospector$.$anonfun$apply$19(BeanIntrospector.scala:176)
at scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:234)
at scala.collection.IndexedSeqOptimized.foreach(IndexedSeqOptimized.scala:32)
at scala.collection.IndexedSeqOptimized.foreach$(IndexedSeqOptimized.scala:29)
at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:191)
at scala.collection.TraversableLike.map(TraversableLike.scala:234)
at scala.collection.TraversableLike.map$(TraversableLike.scala:227)
at scala.collection.mutable.ArrayOps$ofRef.map(ArrayOps.scala:191)
at com.fasterxml.jackson.module.scala.introspect.BeanIntrospector$.$anonfun$apply$14(BeanIntrospector.scala:170)
at com.fasterxml.jackson.module.scala.introspect.BeanIntrospector$.$anonfun$apply$14$adapted(BeanIntrospector.scala:169)
at scala.collection.TraversableLike.$anonfun$flatMap$1(TraversableLike.scala:241)
at scala.collection.immutable.List.foreach(List.scala:389)
at scala.collection.TraversableLike.flatMap(TraversableLike.scala:241)
at scala.collection.TraversableLike.flatMap$(TraversableLike.scala:238)
at scala.collection.immutable.List.flatMap(List.scala:352)
at com.fasterxml.jackson.module.scala.introspect.BeanIntrospector$.apply(BeanIntrospector.scala:169)
at com.fasterxml.jackson.module.scala.introspect.ScalaAnnotationIntrospector$._descriptorFor(ScalaAnnotationIntrospectorModule.scala:22)
at com.fasterxml.jackson.module.scala.introspect.ScalaAnnotationIntrospector$.fieldName(ScalaAnnotationIntrospectorModule.scala:30)
at com.fasterxml.jackson.module.scala.introspect.ScalaAnnotationIntrospector$.findImplicitPropertyName(ScalaAnnotationIntrospectorModule.scala:78)
at com.fasterxml.jackson.databind.introspect.AnnotationIntrospectorPair.findImplicitPropertyName(AnnotationIntrospectorPair.java:467)
at com.fasterxml.jackson.databind.introspect.POJOPropertiesCollector._addFields(POJOPropertiesCollector.java:351)
at com.fasterxml.jackson.databind.introspect.POJOPropertiesCollector.collectAll(POJOPropertiesCollector.java:283)
at com.fasterxml.jackson.databind.introspect.POJOPropertiesCollector.getJsonValueMethod(POJOPropertiesCollector.java:169)
at com.fasterxml.jackson.databind.introspect.BasicBeanDescription.findJsonValueMethod(BasicBeanDescription.java:223)
at com.fasterxml.jackson.databind.ser.BasicSerializerFactory.findSerializerByAnnotations(BasicSerializerFactory.java:348)
at com.fasterxml.jackson.databind.ser.BeanSerializerFactory._createSerializer2(BeanSerializerFactory.java:210)
at com.fasterxml.jackson.databind.ser.BeanSerializerFactory.createSerializer(BeanSerializerFactory.java:153)
at com.fasterxml.jackson.databind.SerializerProvider._createUntypedSerializer(SerializerProvider.java:1203)
at com.fasterxml.jackson.databind.SerializerProvider._createAndCacheUntypedSerializer(SerializerProvider.java:1157)
at com.fasterxml.jackson.databind.SerializerProvider.findValueSerializer(SerializerProvider.java:481)
at com.fasterxml.jackson.databind.SerializerProvider.findTypedValueSerializer(SerializerProvider.java:679)
at com.fasterxml.jackson.databind.ser.DefaultSerializerProvider.serializeValue(DefaultSerializerProvider.java:107)
at com.fasterxml.jackson.databind.ObjectMapper._configAndWriteValue(ObjectMapper.java:3559)
at com.fasterxml.jackson.databind.ObjectMapper.writeValueAsString(ObjectMapper.java:2927)
at org.apache.spark.rdd.RDDOperationScope.toJson(RDDOperationScope.scala:52)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:142)
at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:152)
at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:127)
at org.apache.spark.sql.execution.SparkPlan.getByteArrayRdd(SparkPlan.scala:247)
at org.apache.spark.sql.execution.SparkPlan.executeTake(SparkPlan.scala:339)
at org.apache.spark.sql.execution.CollectLimitExec.executeCollect(limit.scala:38)
at org.apache.spark.sql.Dataset.collectFromPlan(Dataset.scala:3384)
at org.apache.spark.sql.Dataset.$anonfun$head$1(Dataset.scala:2545)
at org.apache.spark.sql.Dataset.$anonfun$withAction$2(Dataset.scala:3365)
at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:78)
at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:125)
at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:73)
at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3365)
at org.apache.spark.sql.Dataset.head(Dataset.scala:2545)
at org.apache.spark.sql.Dataset.take(Dataset.scala:2759)
at org.apache.spark.sql.execution.datasources.csv.TextInputCSVDataSource$.infer(CSVDataSource.scala:232)
at org.apache.spark.sql.execution.datasources.csv.CSVDataSource.inferSchema(CSVDataSource.scala:68)
at org.apache.spark.sql.execution.datasources.csv.CSVFileFormat.inferSchema(CSVFileFormat.scala:63)
at org.apache.spark.sql.execution.datasources.DataSource.$anonfun$getOrInferFileFormatSchema$12(DataSource.scala:183)
at scala.Option.orElse(Option.scala:289)
at org.apache.spark.sql.execution.datasources.DataSource.getOrInferFileFormatSchema(DataSource.scala:180)
at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:373)
at org.apache.spark.sql.DataFrameReader.loadV1Source(DataFrameReader.scala:223)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:211)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:178)
at TransformInitial$.main(TransformInitial.scala:9)
at TransformInitial.main(TransformInitial.scala)
For the record, the Eclipse version is 2018-09 (4.9.0).
I've hunted for special characters in the CSV with cat -A. It yielded nothing.
I'm out of options; something trivial must be missing, but I can't put a finger on it.
I'm not sure exactly what is causing your error, since the code works for me. It could be related to the version of the Scala compiler you are using, since there's no information about that in your Maven file.
I have posted my complete solution (using SBT) to GitHub. To execute the code, you'll need to install SBT, cd to the checked-out source's root folder, then run the following command:
$ sbt run
BTW, I changed your code to take advantage of more idiomatic Scala conventions, and also used the csv function to load your file. The new Scala code looks like this:
import org.apache.spark.sql.SparkSession

// Extending App is more idiomatic than writing a "main" function.
object TransformInitial extends App {
  val session = SparkSession.builder.master("local").appName("test").getOrCreate()

  // As of Spark 2.0, it's easier to read CSV files.
  val df = session.read.option("header", "true").option("inferSchema", "true").csv("data_sets/small_test.csv")
  df.show()

  // Shutdown gracefully.
  session.stop()
}
Note that I also removed the redundant delimiter option.
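For reference, if you stay with Maven, pinning the Scala library explicitly (matching the _2.12 suffix of the Spark artifacts) removes any ambiguity about the compiler version; a sketch:

<dependency>
    <groupId>org.scala-lang</groupId>
    <artifactId>scala-library</artifactId>
    <version>2.12.8</version>
</dependency>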
Downgrading the Scala version to 2.11 fixed it for me.
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-core_2.11</artifactId>
    <version>2.4.0</version>
</dependency>
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-sql_2.11</artifactId>
    <version>2.4.0</version>
</dependency>
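
Alternatively, if you need to stay on Scala 2.12: the com.thoughtworks.paranamer frames at the top of the stack trace suggest an old paranamer version pulled in transitively, and forcing a newer one is a commonly reported workaround (worth verifying against your dependency tree):

<dependency>
    <groupId>com.thoughtworks.paranamer</groupId>
    <artifactId>paranamer</artifactId>
    <version>2.8</version>
</dependency>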

Cucumber - Content Assistance in Feature file not working

So, I've been having a tough time getting the dependencies down with the Cucumber/Scala integration. I finally have a simple step definition running, but when I press Ctrl+Space, the list of step definitions does not show up in my feature file. However, when I run the feature file, it runs successfully.
Test Runner
package CucumberTest

import cucumber.api.CucumberOptions
import cucumber.api.junit.Cucumber
import org.junit.runner.RunWith

@RunWith(classOf[Cucumber])
@CucumberOptions(
  features = Array("Feature"),
  glue = Array("stepDefinition"),
  plugin = Array("pretty", "html:target/cucumber/html")
)
class TestRunner {
  def main(args: Array[String]): Unit = {
    println("Hi")
  }
}
Step Definition file
package stepDefinition

import cucumber.api.scala.{ScalaDsl, EN}

class Test_Steps extends ScalaDsl with EN {
  Given("""^this pre condition$""") { () =>
    println("YOOOOOOOOO!!!")
  }
  When("""^I do this$""") { () =>
    // Write code here that turns the phrase above into concrete actions
  }
  Then("""^I can verify that$""") { () =>
    // Write code here that turns the phrase above into concrete actions
  }
  Then("""^I can also verify that$""") { () =>
    // Write code here that turns the phrase above into concrete actions
  }
}
This is what my feature looks like; "this pre condition" is highlighted in yellow, indicating that the feature file is not finding the glue code.
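Reconstructed from the step regexes, the feature reads roughly like this sketch:

Feature: Sample feature
  Scenario: Sample scenario
    Given this pre condition
    When I do this
    Then I can verify that
    Then I can also verify that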
When I hover my mouse over the Given statement, I get this message
Step 'this pre condition' does not have a matching glue code
But when I run it, YOOOOOOOOO!!! is printed in the Scala console, so it is seeing my glue code and running successfully; I still don't get a list of step definitions, and the phrase "this pre condition" stays highlighted yellow.
Does anyone know what the issue could be?
Here are some dependencies relating to cucumber/scala
<!-- https://mvnrepository.com/artifact/info.cukes/cucumber-java -->
<dependency>
    <groupId>info.cukes</groupId>
    <artifactId>cucumber-java</artifactId>
    <version>1.2.4</version>
</dependency>
<dependency>
    <groupId>info.cukes</groupId>
    <artifactId>cucumber-scala_2.11</artifactId>
    <version>1.2.4</version>
</dependency>
<dependency>
    <groupId>info.cukes</groupId>
    <artifactId>cucumber-junit</artifactId>
    <version>1.2.4</version>
</dependency>
<dependency>
    <groupId>info.cukes</groupId>
    <artifactId>gherkin</artifactId>
    <version>2.12.2</version>
</dependency>
<dependency>
    <groupId>info.cukes</groupId>
    <artifactId>cucumber-core</artifactId>
    <version>1.2.4</version>
</dependency>
<dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>slf4j-nop</artifactId>
    <version>1.7.12</version>
</dependency>
<dependency>
    <groupId>org.scala-lang</groupId>
    <artifactId>scala-library</artifactId>
    <version>2.11.8</version>
</dependency>
So I think the issue was a combination of dependency mismatches and not adding JUnit to my project path.
This is what my Test_Steps class looks like now. I imported the annotations from the Cucumber Java API.
package stepDefinition

//import org.slf4j.LoggerFactory
import cucumber.api.java.en.Given
import cucumber.api.scala._
import cucumber.api.java.en.Then
import cucumber.api.java.en.When
import cucumber.api.java8._
import cucumber.api.scala.{ScalaDsl, EN}
import cucumber.runtime.java.StepDefAnnotation

@StepDefAnnotation
class Test_Steps extends ScalaDsl with EN {
  // this works
  @Given("""^this pre condition$""")
  def this_pre_condition() = {
    println("Hello")
  }

  @When("""^blah condition$""")
  def when_condition() = {
    println("In the when statement -- ")
  }
}
Content assist in my feature file works now.

LOGBACK: No context given for ch.qos.logback.classic.gaffer.ConfigurationDelegate#

I get the above message on the console at the start of program execution, and I am unclear how to resolve it. For information, the program is written in Scala, uses the grizzled-slf4j adapter over slf4j with the Logback provider, and has a logback.groovy file on the classpath. Here are its contents:
import static ch.qos.logback.classic.Level.*
import ch.qos.logback.classic.encoder.*
import ch.qos.logback.classic.filter.*
import ch.qos.logback.core.*
import ch.qos.logback.core.status.OnConsoleStatusListener

statusListener(OnConsoleStatusListener)

appender("STDOUT", ConsoleAppender) {
  filter(ThresholdFilter) {
    level = INFO
  }
  encoder(PatternLayoutEncoder) {
    pattern = "%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n"
  }
}
root(INFO, ['WARN'])
Ideas welcome (the logging itself works fine, the issue is just the initial message).
Change
root(INFO, ['WARN'])
to
root(INFO, ['STDOUT'])
The second argument to root() is the list of appender names to attach to the root logger: the script defines an appender named STDOUT, whereas WARN is a level rather than an appender, so the original reference cannot be resolved.