Scala project stopped building due to aspectj.dtd error with aopMerge - scala

My project stopped building from one day to the next, without any changes to the project. I suspect one of the dependencies was updated, but it's unclear from the error message.
[warn] Merging 'META-INF/aop.xml' with strategy 'aopMerge'
[error] org.xml.sax.SAXParseExceptionpublicId: -//AspectJ//DTD//EN; systemId: http://www.eclipse.org/aspectj/dtd/aspectj.dtd; lineNumber: 1; columnNumber: 2; The markup declarations contained or pointed to by the document type declaration must be well-formed.
The aopMerge strategy in my build.sbt is
val aopMerge: MergeStrategy = new MergeStrategy {
  val name = "aopMerge"
  import scala.xml._
  import scala.xml.dtd._
  def apply(tempDir: File, path: String, files: Seq[File]): Either[String, Seq[(File, String)]] = {
    val dt = DocType("aspectj", PublicID("-//AspectJ//DTD//EN", "https://www.eclipse.org/aspectj/dtd/aspectj.dtd"), Nil)
    val file = MergeStrategy.createMergeTarget(tempDir, path)
    val xmls: Seq[Elem] = files.map(XML.loadFile)
    val aspectsChildren: Seq[Node] = xmls.flatMap(_ \\ "aspectj" \ "aspects" \ "_")
    val weaverChildren: Seq[Node] = xmls.flatMap(_ \\ "aspectj" \ "weaver" \ "_")
    val options: String = xmls.map(x => (x \\ "aspectj" \ "weaver" \ "@options").text).mkString(" ").trim
    val weaverAttr = if (options.isEmpty) Null else new UnprefixedAttribute("options", options, Null)
    val aspects = new Elem(null, "aspects", Null, TopScope, false, aspectsChildren: _*)
    val weaver = new Elem(null, "weaver", weaverAttr, TopScope, false, weaverChildren: _*)
    val aspectj = new Elem(null, "aspectj", Null, TopScope, false, aspects, weaver)
    XML.save(file.toString, aspectj, "UTF-8", xmlDecl = false, dt)
    IO.append(file, IO.Newline.getBytes(IO.defaultCharset))
    Right(Seq(file -> path))
  }
}
I tried changing http://www.eclipse.org/aspectj/dtd/aspectj.dtd to https://www.eclipse.org/aspectj/dtd/aspectj.dtd based on a similar question on Stack Overflow. Unfortunately, I get the same error with both http:// and https://. Note that I did run sbt reload after the change.
How can I find out why this is happening? And what can I do to solve this?

The workaround I used was to generate a new file with the HTTPS URL for each file passed in the files: Seq[File] parameter, and then change val xmls: Seq[Elem] = files.map(XML.loadFile) to use these new files:
val xmls: Seq[Elem] = files.map(generateFileWithSecureDtdUrl).map(XML.loadFile)
with the following declared in build.sbt:
import java.io.{File, PrintStream}
import scala.io.{BufferedSource, Source}

def generateFileWithSecureDtdUrl(originalFile: File): File = {
  val fixedFileName = originalFile.getPath + ".fixed"
  // The anonymous subclass body runs as an initializer: it copies the original
  // file line by line, rewriting the DOCTYPE line to use the HTTPS DTD URL.
  val ps: PrintStream = new PrintStream(fixedFileName) {
    val originalFileSource: BufferedSource = Source.fromFile(originalFile)
    originalFileSource
      .getLines()
      .map { line =>
        if (line.contains("DOCTYPE") && line.contains("http://www.eclipse.org/aspectj/dtd/aspectj.dtd"))
          line.replace("http://www.eclipse.org/aspectj/dtd/aspectj.dtd", "https://www.eclipse.org/aspectj/dtd/aspectj.dtd")
        else
          line
      }
      .foreach(line => write((line + "\n").getBytes("UTF-8"))) // keep line breaks intact
    originalFileSource.close()
  }
  ps.close()
  new File(fixedFileName)
}
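An alternative that avoids rewriting the files at all (a sketch, assuming the stock JDK/Xerces SAX parser and scala-xml): the parse fails because the parser downloads and parses whatever www.eclipse.org currently returns for that DTD URL, so configuring the loader not to resolve the external DTD sidesteps the problem entirely.
import javax.xml.parsers.SAXParserFactory
import scala.xml.{Elem, XML}

// Build an XML loader that never fetches the external aspectj.dtd, so the
// merge no longer depends on what eclipse.org serves for that URL.
val noDtdFactory = SAXParserFactory.newInstance()
noDtdFactory.setFeature("http://apache.org/xml/features/nonvalidating/load-external-dtd", false)
val noDtdLoader = XML.withSAXParser(noDtdFactory.newSAXParser())

// In the aopMerge strategy, replace files.map(XML.loadFile) with:
val xmls: Seq[Elem] = files.map(noDtdLoader.loadFile)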

Related

Can Record class be used to create RegInit?

I am trying to use the Record class to generate a RegInit. This can help to dynamically create a RegInit from other information in the Module.
Here is my code.
import chisel3._
import scala.collection.immutable.ListMap

class MyReg(reg_info: List[(String, UInt)]) extends Record {
  override def elements: ListMap[String, UInt] = ListMap(
    reg_info.map(x => x._1 -> x._2): _*
  )
  override def cloneType: MyReg.this.type = new MyReg(reg_info).asInstanceOf[this.type]
}

class reg_test(reg_list: List[(String, String, List[(String, Int)])]) extends Module with RegSugar {
  val io = IO(new Bundle() {
    val a = Input(UInt(16.W))
    val b = Input(UInt(16.W))
    val d = Output(UInt(16.W))
    val e = Output(UInt(16.W))
  })

  val myBundle = new Bundle() {
    val a = UInt(16.W)
    val b = UInt(16.W)
  }
  val reg_init_bundle = RegInit(myBundle, 0.U.asTypeOf(myBundle))
  println(reg_init_bundle)
  reg_init_bundle.a := io.a
  reg_init_bundle.b := io.b

  val list_b = List(("a", UInt(16.W)), ("b", UInt(16.W)))
  val myRecord = new MyReg(list_b)
  val reg_init = RegInit(myRecord, 0.U.asTypeOf(myRecord))
  reg_init.elements("a") := io.a
  reg_init.elements("b") := io.b

  io.d := reg_init.elements("a") + reg_init.elements("b")
  io.e := reg_init_bundle.a + reg_init_bundle.b
}
As I see it, Bundle extends Record, so if this code works with a Bundle it should also work with a Record.
However, the error is reported on the line:
val reg_init = RegInit(myRecord, 0.U.asTypeOf(myRecord))
Exception in thread "main" chisel3.package$RebindingException: Attempted reassignment of binding to reg_test.reg_init_?.a: Wire[UInt<16>], from: ChildBinding(reg_test.reg_init_?: Reg[MyReg])
at ... ()
at implicit_test.reg_test.$anonfun$reg_init$2(reg_test.scala:61)
at chisel3.internal.prefix$.apply(prefix.scala:48)
at implicit_test.reg_test.$anonfun$reg_init$1(reg_test.scala:61)
at chisel3.internal.plugin.package$.autoNameRecursively(package.scala:33)
at implicit_test.reg_test.<init>(reg_test.scala:61)
at implicit_test.reg_test$.$anonfun$verilogString$1(reg_test.scala:77)
at ... ()
at ... (Stack trace trimmed to user code only. Rerun with --full-stacktrace to see the full stack trace)
It seems that this code can successfully generate a RegInit from a Bundle but fails to generate one from a Record.
Is there a way to generate a RegInit from a Record, or is this a compiler bug?
The explanation for this question is at https://github.com/chipsalliance/chisel3/issues/2998. Thanks, guys. Case closed.
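For readers who don't follow the link: the usual cause of this kind of RebindingException with a hand-rolled Record (my reading, not a quote from the issue) is that elements reuses the exact UInt objects from reg_info, so the record passed to RegInit and the clones created by asTypeOf and RegInit share hardware nodes that end up bound twice. A hedged sketch of the usual remedy is to give each Record instance fresh element types:
import chisel3._
import scala.collection.immutable.ListMap

// Sketch: each MyReg owns freshly cloned element types, so no UInt node is
// ever shared between the record handed to RegInit and its internal clones.
class MyReg(reg_info: List[(String, UInt)]) extends Record {
  private val elts: ListMap[String, UInt] =
    ListMap(reg_info.map { case (name, tpe) => name -> tpe.cloneType }: _*)
  override def elements: ListMap[String, UInt] = elts
  override def cloneType: MyReg.this.type = new MyReg(reg_info).asInstanceOf[this.type]
}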

Load constraints from csv-file (amazon deequ)

I'm checking out Deequ, which seems like a really nice library. I was wondering if it is possible to load constraints from a CSV file or an ORC table in HDFS?
Let's say I have a table with these types:
case class Item(
  id: Long,
  productName: String,
  description: String,
  priority: String,
  numViews: Long
)
and I want to put constraints like:
val checks = Check(CheckLevel.Error, "unit testing my data")
  .isComplete("id") // should never be NULL
  .isUnique("id")   // should not contain duplicates
But I want to load the .isComplete("id") and .isUnique("id") parts from a CSV file, so the business can add the constraints and we can run the tests based on their input.
val verificationResult = VerificationSuite()
  .onData(data)
  .addChecks(Seq(checks))
  .run()
I've managed to get the constraints from suggestionResult.constraintSuggestions
val allConstraints = suggestionResult.constraintSuggestions
  .flatMap { case (_, suggestions) => suggestions.map { _.constraint } }
  .toSeq
which gives a List like, for example:
allConstraints = List(CompletenessConstraint(Completeness(id,None)), ComplianceConstraint(Compliance('id' has no negative values,id >= 0,None)))
But that list gets generated from suggestionResult.constraintSuggestions, whereas I want to be able to create a list like that based on the input from a CSV file. Can anyone help me?
To sum things up:
Basically I just want to add:
val checks = Check(CheckLevel.Error, "unit testing my data")
  .isComplete("columnName1")
  .isUnique("columnName1")
  .isComplete("columnName2")
dynamically, based on a file that contains, for example:
columnName;isUnique;isComplete (header)
columnName1;true;true
columnName2;false;true
I chose to store the CSV in src/main/resources as it's very easy to read from there, and easy to maintain in parallel with the code being QA'ed.
import org.apache.spark.sql.{DataFrame, Dataset, SparkSession}
import scala.util.Try

def readCSV(spark: SparkSession, filename: String): DataFrame = {
  import spark.implicits._
  val inputFileStream = Try {
    this.getClass.getResourceAsStream("/" + filename)
  }.getOrElse(
    throw new Exception("Cannot find " + filename + " in src/main/resources")
  )
  val readLines =
    scala.io.Source.fromInputStream(inputFileStream).getLines.toList
  val csvData: Dataset[String] =
    spark.sparkContext.parallelize(readLines).toDS
  spark.read.option("header", true).option("inferSchema", true).csv(csvData)
}
This loads it as a DataFrame; this can easily be passed to code like gavincruick's example on GitHub, copied here for convenience:
import scala.reflect.runtime.currentMirror
import scala.tools.reflect.ToolBox
import scala.util.Try
import org.apache.spark.sql.DataFrame
import com.amazon.deequ.VerificationResult

// code to build a verifier from a DF that has a 'Constraint' column
type Verifier = DataFrame => VerificationResult

def generateVerifier(df: DataFrame, columnName: String): Try[Verifier] = {
  val constraintCheckCodes: Seq[String] = df.select(columnName).collect().map(_(0).toString).toSeq

  def checkSrcCode(checkCodeMethod: String, id: Int): String =
    s"""com.amazon.deequ.checks.Check(com.amazon.deequ.checks.CheckLevel.Error, "$id")$checkCodeMethod"""

  val verifierSrcCode = s"""{
    |import com.amazon.deequ.constraints.ConstrainableDataTypes
    |import com.amazon.deequ.{VerificationResult, VerificationSuite}
    |import org.apache.spark.sql.DataFrame
    |
    |val checks = Seq(
    |  ${constraintCheckCodes.zipWithIndex
         .map { (checkSrcCode _).tupled }
         .mkString(",\n ")}
    |)
    |
    |(data: DataFrame) => VerificationSuite().onData(data).addChecks(checks).run()
    |}
  """.stripMargin.trim

  println(s"Verification function source code:\n$verifierSrcCode\n")
  compile[Verifier](verifierSrcCode)
}

/** Compiles the scala source code that, when evaluated, produces a value of type T. */
def compile[T](source: String): Try[T] =
  Try {
    val toolbox = currentMirror.mkToolBox()
    val tree = toolbox.parse(source)
    val compiledCode = toolbox.compile(tree)
    compiledCode().asInstanceOf[T]
  }
// example usage...
// sample test data
val testDataDF = Seq(
  ("2020-02-12", "England", "E10000034", "Worcestershire", 1),
  ("2020-02-12", "Wales", "W11000024", "Powys", 0),
  ("2020-02-12", "Wales", null, "Unknown", 1),
  ("2020-02-12", "Canada", "MADEUP", "Ontario", 1)
).toDF("Date", "Country", "AreaCode", "Area", "TotalCases")

// constraints in a DF
val constraintsDF = Seq(
  (".isComplete(\"Area\")"),
  (".isComplete(\"Country\")"),
  (".isComplete(\"TotalCases\")"),
  (".isComplete(\"Date\")"),
  (".hasCompleteness(\"AreaCode\", _ >= 0.80, Some(\"It should be above 0.80!\"))"),
  (".isContainedIn(\"Country\", Array(\"England\", \"Scotland\", \"Wales\", \"Northern Ireland\"))")
).toDF("Constraint")

// build a Verifier from the constraints DF
val verifier = generateVerifier(constraintsDF, "Constraint").get
// run the verifier against a sample DF
val result = verifier(testDataDF)
// display the results
VerificationResult.checkResultsAsDataFrame(spark, result).show()
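For the simple flag-based layout in the question (columnName;isUnique;isComplete), a lighter-weight alternative to runtime compilation is possible. This is a sketch, assuming those exact header names and that the file has been parsed with the right delimiter so the flag columns come back as booleans: fold the parsed rows into a single Check.
import com.amazon.deequ.checks.{Check, CheckLevel}
import org.apache.spark.sql.DataFrame

// Sketch: build one Check from the rules DataFrame produced by readCSV above,
// adding .isUnique / .isComplete only where the corresponding flag is true.
def checksFromRules(rules: DataFrame): Check =
  rules.collect().foldLeft(Check(CheckLevel.Error, "unit testing my data")) { (check, row) =>
    val column     = row.getAs[String]("columnName")
    val withUnique = if (row.getAs[Boolean]("isUnique")) check.isUnique(column) else check
    if (row.getAs[Boolean]("isComplete")) withUnique.isComplete(column) else withUnique
  }
The resulting Check can then be passed to VerificationSuite().onData(data).addChecks(Seq(checksFromRules(constraintsDF))).run() exactly as in the question.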
It depends on how complicated you want to allow the constraints to be. In general, deequ allows you to use arbitrary Scala code for the validation function of a constraint, so it's difficult (and dangerous from a security perspective) to load that from a file.
I think you would have to come up with your own schema and semantics for the CSV file; at least it is not directly supported in deequ.

Sequential input task in SBT

I am trying to write an SBT task that may be used like this:
> deploy key1=value1 key2=value2 ...
and does the following:
reads a default properties file
adds the parsed keys and values to the properties object
writes the new properties object to resources/config.properties
packages a fat .jar with sbt-assembly
writes the default properties to resources/config.properties
I have been trying to achieve it with Def.sequential, but I can't seem to find a way to use it with inputKey. My build.sbt looks like this:
val config = inputKey[Unit](
  "Set configuration options before deployment.")
val deploy = inputKey[Unit](
  "assemble fat .jar with configuration options")
val defaultProperties = settingKey[Properties](
  "default application properties.")
val propertiesPath = settingKey[File]("path to config.properties")
val writeDefaultProperties = taskKey[Unit]("write default properties file.")

val parser = (((' ' ~> StringBasic) <~ '=') ~ StringBasic).+

lazy val root = (project in file("."))
  .settings(
    propertiesPath := {
      val base = (resourceDirectory in Compile).value
      base / "config.properties"
    },
    defaultProperties := {
      val path = propertiesPath.value
      val defaultConfig = new Properties
      IO.load(defaultConfig, path)
      defaultConfig
    },
    config := {
      val path = propertiesPath.value
      val defaultConfig = defaultProperties.value
      val options = parser.parsed
      val deployConfig = new Properties
      deployConfig.putAll(defaultConfig)
      options.foreach(option =>
        deployConfig.setProperty(option._1, option._2))
      IO.write(deployConfig, "", path)
    },
    writeDefaultProperties := {
      val default = defaultProperties.value
      val path = propertiesPath.value
      IO.write(default, "", path)
    },
    deploy := Def.sequential(
      config.parsed, // does not compile
      assembly,
      writeDefaultProperties),
    ...)
Can I make Def.sequential work with input keys, or do I need to do something more involved?
See Defining a sequential task with Def.sequential. It uses scalastyle as an example:
(scalastyle in Compile).toTask("")
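Building on that, here is a hedged sketch of how the deploy task from the question might be wired up (assuming the config, assembly, and writeDefaultProperties keys defined above): convert the input key to a task with .toTask, forwarding the raw argument string, and sequence it with the other tasks inside Def.inputTaskDyn.
import sbt.complete.DefaultParsers.spaceDelimited

deploy := Def.inputTaskDyn {
  // capture the raw "key1=value1 key2=value2" arguments; each pair is
  // prefixed with a space because config's parser expects one before each pair
  val args = spaceDelimited("<key=value>").parsed.map(" " + _).mkString
  Def.sequential(
    config.toTask(args),    // run config with the forwarded arguments
    assembly,               // then build the fat jar
    writeDefaultProperties  // then restore the default properties
  )
}.evaluated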

playframework 2.4 - Unspecified value parameter headers error

I am upgrading Play Framework from 2.3 to 2.4. After changing the versions, compiling the same code gives the following error. Since I am a novice at Scala, I am trying to learn Scala to solve this issue but still don't know what the problem is. What I want to do is add a request header value to the original request headers. Any help will be appreciated.
[error] /mnt/garner/project/app-service/app/com/company/playframework/filters/LoggingFilter.scala:26: not enough arguments for constructor Headers: (headers: Seq[(String, String)])play.api.mvc.Headers.
[error] Unspecified value parameter headers.
[error] val newHeaders = new Headers { val data = (requestHeader.headers.toMap
The LoggingFilter class
class LoggingFilter extends Filter {
  val logger = AccessLogger.getInstance();

  def apply(next: (RequestHeader) => Future[Result])(requestHeader: RequestHeader): Future[Result] = {
    val startTime = System.currentTimeMillis
    val requestId = logger.createLog();
    val newHeaders = new Headers { val data = (requestHeader.headers.toMap
      + (AccessLogger.X_HEADER__REQUEST_ID -> Seq(requestId))).toList }
    val newRequestHeader = requestHeader.copy(headers = newHeaders)
    next(newRequestHeader).map { result =>
      val endTime = System.currentTimeMillis
      val requestTime = endTime - startTime
      val bytesToString: Enumeratee[Array[Byte], String] = Enumeratee.map[Array[Byte]] { bytes => new String(bytes) }
      val consume: Iteratee[String, String] = Iteratee.consume[String]()
      val resultBody: Future[String] = result.body |>>> bytesToString &>> consume
      resultBody.map { body =>
        logger.finish(requestId, result.header.status, requestTime, body)
      }
      result;
    }
  }
}
Edit
I updated the code as follows and it compiled fine.
The following code changed
val newHeaders = new Headers { val data = (requestHeader.headers.toMap
+ (AccessLogger.X_HEADER__REQUEST_ID -> Seq(requestId))).toList }
to
val newHeaders = new Headers((requestHeader.headers.toSimpleMap
+ (AccessLogger.X_HEADER__REQUEST_ID -> requestId)).toList)
It simply states that if you want to construct a Headers you need to supply a field named headers of type Seq[(String, String)]. If you omit the initial new, you will be using the apply function of the companion object for Headers, which takes a vararg of (String, String), and your code should work. If you look at the documentation https://www.playframework.com/documentation/2.4.x/api/scala/index.html#play.api.mvc.Headers and flip between the docs for the object and the class, it should become clear.
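To make the two options concrete, here is a short sketch based on the edit above (the extra-header pair is taken from the question's own code):
val extraHeader = AccessLogger.X_HEADER__REQUEST_ID -> requestId

// constructor form: a single Seq[(String, String)] argument
val viaConstructor = new Headers((requestHeader.headers.toSimpleMap + extraHeader).toSeq)

// companion-object apply form: a vararg of (String, String) pairs
val viaApply = Headers((requestHeader.headers.toSimpleMap + extraHeader).toSeq: _*)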

Scala run time code compilation

I am trying to port my Java code to pure Scala, so I would appreciate any help in this regard.
The code below works by first translating my business logic into Java code. Here I'm using FreeMarker templates to do template-based code generation. After file creation I then use the Java compiler to compile the code and create a jar file, which is persisted in a temporary directory.
I am currently using the javax.tools.* package, which provides runtime compilation. What is the Scala equivalent of that approach? I'd like to generate pure Scala code using FreeMarker templates and then run Scala compilation to create a jar file.
Below is sample Java code I am using to make this happen.
JavaCompiler compiler = ToolProvider.getSystemJavaCompiler();
Iterable<? extends JavaFileObject> compilationUnits = Arrays.asList(javaFileObjects);
StringBuilder builder = new StringBuilder();
builder.append(service.getConfig().getProp("coreLib"));
builder.append(";" +result.getCodeContext().getOmClasspath());
builder.append(";" +jarBuilder.toString());
builder.append(";" +service.getConfig().getProp("tempCodeGen"));
String[] compileOptions = new String[]{"-d", result.getCodeContext().getOmClasspath(),"-cp",builder.toString()} ;
Iterable<String> compilationOptionss = Arrays.asList(compileOptions);
DiagnosticCollector<JavaFileObject> diagnostics = new DiagnosticCollector<JavaFileObject>();
CompilationTask compilerTask = compiler.getTask(null, stdFileManager, diagnostics, compilationOptionss, null, compilationUnits) ;
boolean status = compilerTask.call();
Here are some methods from my own code to compile a project and package it into a jar. It's very far from polished or properly commented, but hopefully it will indicate where you need to start. I don't think you need to use a StringBuilder, as this is not performance critical:
def buildAll(name: String, projDir: String, mainClass: String = ""): Unit =
{
  import scala.tools.nsc.{Settings, Global}
  val relSrc: List[String] = List()
  val maniVersion = "None"
  def targDir: String = projDir + "/targ"
  def srcDir: String = projDir + "/src"
  def srcDirs: List[String] = srcDir :: relSrc
  import java.io._
  val sings = new scala.tools.nsc.Settings
  new File(targDir).mkdir
  sings.outputDirs.setSingleOutput(targDir.toString)
  val comp = new Global(sings)
  val crun: comp.Run = new comp.Run

  def getList(fName: String): List[String] =
  {
    println("starting getList " + fName)
    val file = new File(fName)
    if (file.isDirectory) file.listFiles.toList.flatMap(i => getList(fName + "/" + i.getName))
    else List(fName)
  }

  crun.compile(srcDirs.flatMap(i => getList(i)))

  import sys.process._
  ("cp -r /sdat/projects/ScalaLibs/scala " + targDir + "/scala").!

  import java.util.jar._
  val manif = new Manifest
  val mf = manif.getMainAttributes
  mf.put(Attributes.Name.MANIFEST_VERSION, maniVersion)
  if (mainClass != "") mf.put(Attributes.Name.MAIN_CLASS, mainClass)
  val jarName = name + ".jar"
  val jarOut: JarOutputStream = new JarOutputStream(new FileOutputStream(projDir + "/" + jarName), manif)
  AddAllToJar(targDir, jarOut)
  jarOut.close
}

def addToJar(jarOut: JarOutputStream, file: File, reldir: String): Unit =
{
  val fName = reldir + file.getName
  val fNameMod = if (file.isDirectory) fName + "/" else fName
  val entry = new JarEntry(fNameMod)
  entry.setTime(file.lastModified)
  jarOut.putNextEntry(entry)
  if (file.isDirectory)
  {
    jarOut.closeEntry
    file.listFiles.foreach(i => addToJar(jarOut, i, fName + "/"))
  }
  else
  {
    var buf = new Array[Byte](1024)
    val in = new FileInputStream(file)
    Stream.continually(in.read(buf)).takeWhile(_ != -1).foreach(jarOut.write(buf, 0, _))
    in.close
    jarOut.closeEntry()
  }
}

def AddAllToJar(targDir: String, jarOut: JarOutputStream): Unit =
  new java.io.File(targDir).listFiles.foreach(i => addToJar(jarOut, i, ""))
You need to add the Scala compiler to the build path. The Scala compiler takes a list of source files and produces the compiled class files in the configured output directory. Getting to grips with the full capabilities of the compiler is a major task, though.
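For reference, a minimal sketch of the dependency that makes scala.tools.nsc.{Settings, Global} available, assuming sbt is the build tool:
// build.sbt: put the Scala compiler (and, transitively, scala-reflect) on the classpath
libraryDependencies += "org.scala-lang" % "scala-compiler" % scalaVersion.value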
And when using Scala you don't need FreeMarker-like tools: since Scala 2.10 there is the string interpolation feature.
val traitName = "MyTrait"
val packageName = "my.pack"
val typeParams = List("A", "B", "C")

s"""
|package ${packageName}
|
|trait ${traitName}[${typeParams.mkString(",")}] {
|  ${typeParams.map(t => s"val ${t.toLowerCase()}: ${t}").mkString("\n|  ")}
|}
|
""".stripMargin
will yield:
package my.pack

trait MyTrait[A,B,C] {
  val a: A
  val b: B
  val c: C
}
No dependencies needed :)
If you would like to compile generated code at runtime, the simplest solution is Twitter's Eval utility.
See the suggestion here - Generating a class from string and instantiating it in Scala 2.10
Or the Eval class from twitter's util library -
https://github.com/twitter/util/blob/master/util-eval/src/main/scala/com/twitter/util/Eval.scala
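Alternatively, the standard reflection ToolBox (the same mechanism used in the Deequ answer above) covers runtime compilation without a third-party dependency, as long as scala-compiler and scala-reflect are on the classpath. A minimal sketch:
import scala.reflect.runtime.currentMirror
import scala.tools.reflect.ToolBox

// Parse and evaluate generated Scala source at runtime.
val toolbox = currentMirror.mkToolBox()
val source  = """(x: Int) => x * 2"""
val double  = toolbox.eval(toolbox.parse(source)).asInstanceOf[Int => Int]
println(double(21)) // prints 42
Note that this evaluates code in memory; producing a jar from the generated sources still needs a compiler run with an output directory, as in the Global-based example above.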