Stars and string combination pattern in Scala

I am very new to Scala programming, and I'm stuck on this particular task.
This is my Scala code:
object task {
  def main(args: Array[String]) = {
    val mystring: Array[String] = Array("This\nis\nrandom\ntext\n\n")
    val s = mystring.mkString.split("\n")
    println("**********")
    for (element <- s) {
      println("* " + element + " *")
    }
    println("**********")
  }
}
and I get something like this:
**********
* This *
* is *
* random *
* text *
**********
but the main goal is to get this:
**********
* This   *
* is     *
* random *
* text   *
**********
Does anyone know how to solve this?

You need to apply a little math.
val mystring = "This\nis\nrandom\ntext\n\n"
val s = mystring.split("\n")
val mxLen = s.map(_.length).max

println("*" * (mxLen + 4))
for (element <- s) {
  val eLen = element.length
  println("* " + element + " " * (mxLen - eLen) + " *")
}
println("*" * (mxLen + 4))


Is there any way I can rewrite this line of code in Scala?

I'm trying to rewrite this line of Scala + Figaro code using my function sum_, but I get some errors.
val sum = Container(vars:_*).reduce(_+_)
It uses the reduce() method to calculate the sum. I want to rewrite this line, but I get errors because of the Chain[Double, Int] return type:
import com.cra.figaro.language._
import com.cra.figaro.library.atomic.continuous.Uniform
import com.cra.figaro.language.{Element, Chain, Apply}
import com.cra.figaro.library.collection.Container
object sum {
  def sum_(arr: Int*): Int = {
    var i = 0
    var sum: Int = 0
    while (i < arr.length) {
      sum += arr(i)
      i += 1
    }
    return sum
  }

  def fillarray(): Int = {
    scala.util.Random.nextInt(10) match {
      case 0 | 1 | 2 => 3
      case 3 | 4 | 5 | 6 => 4
      case _ => 5
    }
  }

  def main(args: Array[String]) {
    val par = Array.fill(18)(fillarray())
    val skill = Uniform(0.0, 8.0 / 13.0)
    val shots = Array.tabulate(18)((hole: Int) => Chain(skill, (s: Double) =>
      Select(s / 8.0 -> (par(hole) - 2),
        s / 2.0 -> (par(hole) - 1),
        s -> par(hole),
        (4.0 / 5.0) * (1.0 - (13.0 * s) / 8.0) -> (par(hole) + 1),
        (1.0 / 5.0) * (1.0 - (13.0 * s) / 8.0) -> (par(hole) + 2))))
    val vars = for { i <- 0 until 18 } yield shots(i)
    // this is the line I want to rewrite
    val sum1 = Container(vars: _*).reduce(_ + _)
    // My idea was to implement the line above in this way
    val sum2 = sum_(vars)
  }
}
If you want to use your function, you can do it like this:
val sum2 = sum_(vars.map(chain => chain.generateValue()):_*)
or
val sum2 = sum_(vars.map(_.generateValue()):_*)
but I'd recommend diving deeper into your library and the functional paradigm.
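As a side note, the hand-rolled while loop in sum_ can itself be replaced by a fold (or the built-in sum), which is the more idiomatic style hinted at above; a minimal sketch with the same signature:
def sumIdiomatic(arr: Int*): Int = arr.foldLeft(0)(_ + _)
// or simply: def sumIdiomatic(arr: Int*): Int = arr.sum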

slick db.run is not called

I want to insert a record into the database, but db.run is not called.
My code looks like this:
val insertQueryStep = processStepTemplates returning processStepTemplates.map(_.id) into ((processStep, id) => processStep.copy(id = Some(id)))

/**
 * Generates a new ProcessStepTemplate
 *
 * @param step
 * @return
 */
def addProcessStepTemplateToProcessTemplate(step: ProcessStepTemplatesModel, processId: Int): Future[Some[ProcessStepTemplatesModel]] = {
  println("In DTO: " + step + ", processtemplate: " + processId)
  //val p = processStepTemplates returning processStepTemplates.map(_.id) += step
  val p = insertQueryStep += step
  db.run(p).map(id => {
    println("Die Query lautet: " + p)
    println("Die erzeugte ID lautet: " + id)
    // Update the foreign key
    val q = for { p <- processStepTemplates if p.id == id } yield p.processtemplate
    val updateAction = q.update(Some(processId))
    db.run(updateAction).map(id => {
      println("Der neue Prozesschritt lautet: " + step)
      Some(step)
    })
    Some(step)
  })
}
What could be the problem in this case?
You should compose your futures monadically (with flatMap); otherwise the outer future completes without waiting for the inner one.
Try to change your code in the following way (see comments #1, #2):
def addProcessStepTemplateToProcessTemplate(step: ProcessStepTemplatesModel, processId: Int): Future[Some[ProcessStepTemplatesModel]] = {
  println("In DTO: " + step + ", processtemplate: " + processId)
  //val p = processStepTemplates returning processStepTemplates.map(_.id) += step
  val p = insertQueryStep += step
  db.run(p).flatMap(id => { // #1 change map to flatMap
    println("Die Query lautet: " + p)
    println("Die erzeugte ID lautet: " + id)
    // Update the foreign key
    val q = for { p <- processStepTemplates if p.id == id } yield p.processtemplate
    val updateAction = q.update(Some(processId))
    val innerFuture = db.run(updateAction).map(id => {
      println("Der neue Prozesschritt lautet: " + step)
      Some(step)
    })
    innerFuture // #2 return the inner future
  })
}
Also, use logging to detect other issues (connected with the DB schema, queries, etc.).
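For completeness, the same chaining can also be written as a for-comprehension, which desugars to flatMap/map. A sketch with the same names, assuming Slick 3.x, an implicit ExecutionContext in scope, and that Slick's === column comparison fits your id column type:
def addProcessStepTemplateToProcessTemplate(step: ProcessStepTemplatesModel, processId: Int): Future[Some[ProcessStepTemplatesModel]] =
  for {
    id <- db.run(insertQueryStep += step)                 // insert, returning the generated id
    q   = for { p <- processStepTemplates if p.id === id } yield p.processtemplate
    _  <- db.run(q.update(Some(processId)))               // chained: the returned Future waits for the update
  } yield Some(step)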

Scala - append RDD to itself

for (fordate <- 2 to 30) {
  val dataRDD = sc.textFile("s3n://mypath" + fordate + "/*")
  val a = 1
  val c = fordate - 1
  for (b <- a to c) {
    val cumilativeRDD1 = sc.textFile("s3n://mypath/" + b + "/*")
    val cumilativeRDD: org.apache.spark.rdd.RDD[String] = sc.union(cumilativeRDD1, cumilativeRDD)
    if (b == c) {
      val incrementalDEviceIDs = dataRDD.subtract(cumilativeRDD)
      val countofIDs = incrementalDEviceIDs.distinct().count()
      println(s"201611 $fordate $countofIDs")
    }
  }
}
I have a data set where I get device IDs on a daily basis. I need to figure out the incremental count per day, but when I union cumilativeRDD with itself it throws the following error:
forward reference extends over definition of value cumilativeRDD
How can I overcome this?
The problem is this line:
val cumilativeRDD : org.apache.spark.rdd.RDD[String] = sc.union(cumilativeRDD1, cumilativeRDD)
You're using cumilativeRDD before its declaration. Assignment works from right to left: the right side of = defines the variable on the left, so you cannot use a variable inside its own definition, because at that point it does not exist yet.
You have to initialize cumilativeRDD in the first run, and then you can use it in the following runs:
var cumilativeRDD: Option[org.apache.spark.rdd.RDD[String]] = None

for (fordate <- 2 to 30) {
  val dataRDD = sc.textFile("s3n://mypath" + fordate + "/*")
  val c = fordate - 1
  for (b <- 1 to c) {
    val cumilativeRDD1 = sc.textFile("s3n://mypath/" + b + "/*")
    if (cumilativeRDD.isEmpty) cumilativeRDD = Some(cumilativeRDD1)
    else cumilativeRDD = Some(sc.union(cumilativeRDD1, cumilativeRDD.get))
    if (b == c) {
      val incrementalDeviceIDs = dataRDD.subtract(cumilativeRDD.get)
      val countofIDs = incrementalDeviceIDs.distinct().count()
      println("201611" + fordate + " " + countofIDs)
    }
  }
}
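If only the per-day incremental count is needed, the inner loop can be avoided altogether by unioning all earlier days in a single call (a sketch assuming the same s3n path layout as above; sc.union also accepts a Seq of RDDs):
for (fordate <- 2 to 30) {
  val dataRDD = sc.textFile("s3n://mypath" + fordate + "/*")
  // build the cumulative RDD for days 1 .. fordate-1 in one union
  val cumulative = sc.union((1 until fordate).map(b => sc.textFile("s3n://mypath/" + b + "/*")))
  val countofIDs = dataRDD.subtract(cumulative).distinct().count()
  println(s"201611 $fordate $countofIDs")
}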

Cannot call methods on a stopped SparkContext

When I run the following test, it throws "Cannot call methods on a stopped SparkContext". The likely problem is that I use TestSuiteBase together with a Streaming Spark Context. At the line val gridEvalsRDD = ssc.sparkContext.parallelize(gridEvals) I need a SparkContext, which I access via ssc.sparkContext, and this is where I have the problem (see the warning and error messages below):
class StreamingTest extends TestSuiteBase with BeforeAndAfter {
  test("Test 1") {
    //...
    val gridEvals = for (initialWeights <- gridParams("initialWeights");
                         stepSize <- gridParams("stepSize");
                         numIterations <- gridParams("numIterations")) yield {
      val lr = new StreamingLinearRegressionWithSGD()
        .setInitialWeights(initialWeights.asInstanceOf[Vector])
        .setStepSize(stepSize.asInstanceOf[Double])
        .setNumIterations(numIterations.asInstanceOf[Int])
      ssc = setupStreams(inputData, (inputDStream: DStream[LabeledPoint]) => {
        lr.trainOn(inputDStream)
        lr.predictOnValues(inputDStream.map(x => (x.label, x.features)))
      })
      val output: Seq[Seq[(Double, Double)]] = runStreams(ssc, numBatches, numBatches)
      val cvRMSE = calculateRMSE(output, nPoints)
      println(s"RMSE = $cvRMSE")
      (initialWeights, stepSize, numIterations, cvRMSE)
    }
    val gridEvalsRDD = ssc.sparkContext.parallelize(gridEvals)
  }
}
16/04/27 10:40:17 WARN StreamingContext: StreamingContext has already been stopped
16/04/27 10:40:17 INFO SparkContext: SparkContext already stopped.
Cannot call methods on a stopped SparkContext
UPDATE:
This is the base class TestSuiteBase:
trait TestSuiteBase extends SparkFunSuite with BeforeAndAfter with Logging {
// Name of the framework for Spark context
def framework: String = this.getClass.getSimpleName
// Master for Spark context
def master: String = "local[2]"
// Batch duration
def batchDuration: Duration = Seconds(1)
// Directory where the checkpoint data will be saved
lazy val checkpointDir: String = {
val dir = Utils.createTempDir()
logDebug(s"checkpointDir: $dir")
dir.toString
}
// Number of partitions of the input parallel collections created for testing
def numInputPartitions: Int = 2
// Maximum time to wait before the test times out
def maxWaitTimeMillis: Int = 10000
// Whether to use manual clock or not
def useManualClock: Boolean = true
// Whether to actually wait in real time before changing manual clock
def actuallyWait: Boolean = false
// A SparkConf to use in tests. Can be modified before calling setupStreams to configure things.
val conf = new SparkConf()
.setMaster(master)
.setAppName(framework)
// Timeout for use in ScalaTest `eventually` blocks
val eventuallyTimeout: PatienceConfiguration.Timeout = timeout(Span(10, ScalaTestSeconds))
// Default before function for any streaming test suite. Override this
// if you want to add your stuff to "before" (i.e., don't call before { } )
def beforeFunction() {
if (useManualClock) {
logInfo("Using manual clock")
conf.set("spark.streaming.clock", "org.apache.spark.util.ManualClock")
} else {
logInfo("Using real clock")
conf.set("spark.streaming.clock", "org.apache.spark.util.SystemClock")
}
}
// Default after function for any streaming test suite. Override this
// if you want to add your stuff to "after" (i.e., don't call after { } )
def afterFunction() {
System.clearProperty("spark.streaming.clock")
}
before(beforeFunction)
after(afterFunction)
/**
* Run a block of code with the given StreamingContext and automatically
* stop the context when the block completes or when an exception is thrown.
*/
def withStreamingContext[R](ssc: StreamingContext)(block: StreamingContext => R): R = {
try {
block(ssc)
} finally {
try {
ssc.stop(stopSparkContext = true)
} catch {
case e: Exception =>
logError("Error stopping StreamingContext", e)
}
}
}
/**
* Run a block of code with the given TestServer and automatically
* stop the server when the block completes or when an exception is thrown.
*/
def withTestServer[R](testServer: TestServer)(block: TestServer => R): R = {
try {
block(testServer)
} finally {
try {
testServer.stop()
} catch {
case e: Exception =>
logError("Error stopping TestServer", e)
}
}
}
/**
* Set up required DStreams to test the DStream operation using the two sequences
* of input collections.
*/
def setupStreams[U: ClassTag, V: ClassTag](
input: Seq[Seq[U]],
operation: DStream[U] => DStream[V],
numPartitions: Int = numInputPartitions
): StreamingContext = {
// Create StreamingContext
val ssc = new StreamingContext(conf, batchDuration)
if (checkpointDir != null) {
ssc.checkpoint(checkpointDir)
}
// Setup the stream computation
val inputStream = new TestInputStream(ssc, input, numPartitions)
val operatedStream = operation(inputStream)
val outputStream = new TestOutputStreamWithPartitions(operatedStream,
new ArrayBuffer[Seq[Seq[V]]] with SynchronizedBuffer[Seq[Seq[V]]])
outputStream.register()
ssc
}
/**
* Set up required DStreams to test the binary operation using the sequence
* of input collections.
*/
def setupStreams[U: ClassTag, V: ClassTag, W: ClassTag](
input1: Seq[Seq[U]],
input2: Seq[Seq[V]],
operation: (DStream[U], DStream[V]) => DStream[W]
): StreamingContext = {
// Create StreamingContext
val ssc = new StreamingContext(conf, batchDuration)
if (checkpointDir != null) {
ssc.checkpoint(checkpointDir)
}
// Setup the stream computation
val inputStream1 = new TestInputStream(ssc, input1, numInputPartitions)
val inputStream2 = new TestInputStream(ssc, input2, numInputPartitions)
val operatedStream = operation(inputStream1, inputStream2)
val outputStream = new TestOutputStreamWithPartitions(operatedStream,
new ArrayBuffer[Seq[Seq[W]]] with SynchronizedBuffer[Seq[Seq[W]]])
outputStream.register()
ssc
}
/**
* Runs the streams set up in `ssc` on manual clock for `numBatches` batches and
* returns the collected output. It will wait until `numExpectedOutput` number of
* output data has been collected or timeout (set by `maxWaitTimeMillis`) is reached.
*
* Returns a sequence of items for each RDD.
*/
def runStreams[V: ClassTag](
ssc: StreamingContext,
numBatches: Int,
numExpectedOutput: Int
): Seq[Seq[V]] = {
// Flatten each RDD into a single Seq
runStreamsWithPartitions(ssc, numBatches, numExpectedOutput).map(_.flatten.toSeq)
}
/**
* Runs the streams set up in `ssc` on manual clock for `numBatches` batches and
* returns the collected output. It will wait until `numExpectedOutput` number of
* output data has been collected or timeout (set by `maxWaitTimeMillis`) is reached.
*
* Returns a sequence of RDD's. Each RDD is represented as several sequences of items, each
* representing one partition.
*/
def runStreamsWithPartitions[V: ClassTag](
ssc: StreamingContext,
numBatches: Int,
numExpectedOutput: Int
): Seq[Seq[Seq[V]]] = {
assert(numBatches > 0, "Number of batches to run stream computation is zero")
assert(numExpectedOutput > 0, "Number of expected outputs after " + numBatches + " is zero")
logInfo("numBatches = " + numBatches + ", numExpectedOutput = " + numExpectedOutput)
// Get the output buffer
val outputStream = ssc.graph.getOutputStreams.
filter(_.isInstanceOf[TestOutputStreamWithPartitions[_]]).
head.asInstanceOf[TestOutputStreamWithPartitions[V]]
val output = outputStream.output
try {
// Start computation
ssc.start()
// Advance manual clock
val clock = ssc.scheduler.clock.asInstanceOf[ManualClock]
logInfo("Manual clock before advancing = " + clock.getTimeMillis())
if (actuallyWait) {
for (i <- 1 to numBatches) {
logInfo("Actually waiting for " + batchDuration)
clock.advance(batchDuration.milliseconds)
Thread.sleep(batchDuration.milliseconds)
}
} else {
clock.advance(numBatches * batchDuration.milliseconds)
}
logInfo("Manual clock after advancing = " + clock.getTimeMillis())
// Wait until expected number of output items have been generated
val startTime = System.currentTimeMillis()
while (output.size < numExpectedOutput &&
System.currentTimeMillis() - startTime < maxWaitTimeMillis) {
logInfo("output.size = " + output.size + ", numExpectedOutput = " + numExpectedOutput)
ssc.awaitTerminationOrTimeout(50)
}
val timeTaken = System.currentTimeMillis() - startTime
logInfo("Output generated in " + timeTaken + " milliseconds")
output.foreach(x => logInfo("[" + x.mkString(",") + "]"))
assert(timeTaken < maxWaitTimeMillis, "Operation timed out after " + timeTaken + " ms")
assert(output.size === numExpectedOutput, "Unexpected number of outputs generated")
Thread.sleep(100) // Give some time for the forgetting old RDDs to complete
} finally {
ssc.stop(stopSparkContext = true)
}
output
}
/**
* Verify whether the output values after running a DStream operation
* is same as the expected output values, by comparing the output
* collections either as lists (order matters) or sets (order does not matter)
*/
def verifyOutput[V: ClassTag](
output: Seq[Seq[V]],
expectedOutput: Seq[Seq[V]],
useSet: Boolean
) {
logInfo("--------------------------------")
logInfo("output.size = " + output.size)
logInfo("output")
output.foreach(x => logInfo("[" + x.mkString(",") + "]"))
logInfo("expected output.size = " + expectedOutput.size)
logInfo("expected output")
expectedOutput.foreach(x => logInfo("[" + x.mkString(",") + "]"))
logInfo("--------------------------------")
// Match the output with the expected output
for (i <- 0 until output.size) {
if (useSet) {
assert(
output(i).toSet === expectedOutput(i).toSet,
s"Set comparison failed\n" +
s"Expected output (${expectedOutput.size} items):\n${expectedOutput.mkString("\n")}\n" +
s"Generated output (${output.size} items): ${output.mkString("\n")}"
)
} else {
assert(
output(i).toList === expectedOutput(i).toList,
s"Ordered list comparison failed\n" +
s"Expected output (${expectedOutput.size} items):\n${expectedOutput.mkString("\n")}\n" +
s"Generated output (${output.size} items): ${output.mkString("\n")}"
)
}
}
logInfo("Output verified successfully")
}
/**
* Test unary DStream operation with a list of inputs, with number of
* batches to run same as the number of expected output values
*/
def testOperation[U: ClassTag, V: ClassTag](
input: Seq[Seq[U]],
operation: DStream[U] => DStream[V],
expectedOutput: Seq[Seq[V]],
useSet: Boolean = false
) {
testOperation[U, V](input, operation, expectedOutput, -1, useSet)
}
/**
* Test unary DStream operation with a list of inputs
* @param input Sequence of input collections
* @param operation Unary DStream operation to be applied to the input
* @param expectedOutput Sequence of expected output collections
* @param numBatches Number of batches to run the operation for
* @param useSet Compare the output values with the expected output values
* as sets (order does not matter) or as lists (order matters)
*/
def testOperation[U: ClassTag, V: ClassTag](
input: Seq[Seq[U]],
operation: DStream[U] => DStream[V],
expectedOutput: Seq[Seq[V]],
numBatches: Int,
useSet: Boolean
) {
val numBatches_ = if (numBatches > 0) numBatches else expectedOutput.size
withStreamingContext(setupStreams[U, V](input, operation)) { ssc =>
val output = runStreams[V](ssc, numBatches_, expectedOutput.size)
verifyOutput[V](output, expectedOutput, useSet)
}
}
/**
* Test binary DStream operation with two lists of inputs, with number of
* batches to run same as the number of expected output values
*/
def testOperation[U: ClassTag, V: ClassTag, W: ClassTag](
input1: Seq[Seq[U]],
input2: Seq[Seq[V]],
operation: (DStream[U], DStream[V]) => DStream[W],
expectedOutput: Seq[Seq[W]],
useSet: Boolean
) {
testOperation[U, V, W](input1, input2, operation, expectedOutput, -1, useSet)
}
/**
* Test binary DStream operation with two lists of inputs
* @param input1 First sequence of input collections
* @param input2 Second sequence of input collections
* @param operation Binary DStream operation to be applied to the 2 inputs
* @param expectedOutput Sequence of expected output collections
* @param numBatches Number of batches to run the operation for
* @param useSet Compare the output values with the expected output values
* as sets (order does not matter) or as lists (order matters)
*/
def testOperation[U: ClassTag, V: ClassTag, W: ClassTag](
input1: Seq[Seq[U]],
input2: Seq[Seq[V]],
operation: (DStream[U], DStream[V]) => DStream[W],
expectedOutput: Seq[Seq[W]],
numBatches: Int,
useSet: Boolean
) {
val numBatches_ = if (numBatches > 0) numBatches else expectedOutput.size
withStreamingContext(setupStreams[U, V, W](input1, input2, operation)) { ssc =>
val output = runStreams[W](ssc, numBatches_, expectedOutput.size)
verifyOutput[W](output, expectedOutput, useSet)
}
}
}
These are a few things that you should check:
Verify that the resources you specify in your Spark config are actually available.
Search your codebase for the stop() keyword and make sure it is not called on the SparkContext.
Spark has the Spark UI, where you can see which jobs ran, whether they failed or succeeded, along with their logs. That will tell you why it is failing.
"Cannot call methods on a stopped SparkContext" is a consequence of some error that happened earlier. Look at the logs in $SPARK_HOME$/logs and $SPARK_HOME$/work.
Restarting the Spark context on the interpreter binding panel worked for me: just click the refresh button and save.
The issue is that only one SparkSession or SparkContext is allowed per JVM. Make sure the instance being used is a singleton, for example by wrapping the singleton SparkSession (or SparkContext) in a SharedSparkSession (or SharedSparkContext) object.
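As an illustration of that last point, a singleton wrapper could look like this (a minimal sketch; the object name and settings are illustrative, not taken from the original post):
import org.apache.spark.sql.SparkSession

object SharedSparkSession {
  // one SparkSession (and hence one SparkContext) per JVM, created lazily
  lazy val spark: SparkSession = SparkSession.builder()
    .master("local[2]")
    .appName("shared-spark-session")
    .getOrCreate()
}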

org.apache.spark.SparkException: Task not serializable (scala)

I am new to Scala as well as to Spark. Please help me resolve this issue.
In the spark shell, when I load the functions below individually, they run without any exception. But when I copy these functions into a Scala object and load the same file in the spark shell, they throw a "task not serializable" exception in the "processbatch" function when trying to parallelize.
Please find the code below:
import org.apache.spark.sql.Row
import org.apache.log4j.Logger
import org.apache.spark.sql.hive.HiveContext

object Process {
  val hc = new HiveContext(sc)

  def processsingle(wait: Int, patient: org.apache.spark.sql.Row, visits: Array[org.apache.spark.sql.Row]): String = {
    var out = new StringBuilder()
    val processStart = getTimeInMillis()
    for (x <- visits) {
      out.append(", " + x.getAs("patientid") + ":" + x.getAs("visitid"))
    }
  }

  def processbatch(batch: Int, wait: Int, patients: Array[org.apache.spark.sql.Row], visits: Array[org.apache.spark.sql.Row]) = {
    val out = sc.parallelize(patients, batch).map(r => processsingle(wait, r, visits.filter(f => f.getAs("patientid") == r.getAs("patientid")))).collect()
    for (x <- out) println(x)
  }

  def processmeasures(fetch: Int, batch: Int, wait: Int) = {
    val patients = hc.sql("SELECT patientid FROM tableName1 order by p_id").collect()
    val visit = hc.sql("SELECT patientid, visitid FROM tableName2")
    val count = patients.length
    val fetches = if (count % fetch > 0) (count / fetch + 1) else (count / fetch)
    for (i <- 0 to fetches.toInt - 1) {
      val startFetch = i * fetch
      val endFetch = math.min((i + 1) * fetch, count.toInt) - 1
      val fetchSize = endFetch - startFetch + 1
      val fetchClause = "patientid >= " + patients(startFetch).get(0) + " and patientid <= " + patients(endFetch).get(0)
      val fetchVisit = visit.filter(fetchClause).collect()
      val batches = if (fetchSize % batch > 0) (fetchSize / batch + 1) else (fetchSize / batch)
      for (j <- 0 to batches.toInt - 1) {
        val startBatch = j * batch
        val endBatch = math.min((j + 1) * batch, fetch.toInt) - 1
        println(s"Batch from $startBatch to $endBatch")
        val batchVisits = fetchVisit.filter(g => g.getAs[Long]("patientid") >= patients(i * fetch + startBatch).getLong(0) && g.getAs[Long]("patientid") <= patients(math.min(i * fetch + endBatch + 1, endFetch)).getLong(0))
        processbatch(batch, wait, patients.slice(i * fetch + startBatch, i * fetch + endBatch + 1), batchVisits)
      }
    }
    println("Processing took " + getExecutionTime(processStart) + " millis")
  }
}
You should make the Process object Serializable:
object Process extends Serializable {
...
}
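The general rule behind the fix: everything referenced inside an RDD closure is serialized and shipped to the executors. A self-contained sketch of that rule with a plain class (all names are illustrative, not from the question):
import org.apache.spark.{SparkConf, SparkContext}

// A class whose instance is captured by an RDD closure must be serializable,
// otherwise Spark fails with "Task not serializable" when the job is submitted.
class Formatter(prefix: String) extends Serializable {
  def label(x: Int): String = prefix + x
}

object Demo {
  def main(args: Array[String]): Unit = {
    val sc  = new SparkContext(new SparkConf().setMaster("local[2]").setAppName("demo"))
    val fmt = new Formatter("value-")   // captured by the closure below
    sc.parallelize(1 to 5).map(x => fmt.label(x)).collect().foreach(println)
    sc.stop()
  }
}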