Scala Liftweb framework

I have two checkboxes, and on submit I want the user to download two different files. How do I go about this? I need help, as I am not able to find any online material on this. I am new to this framework.
This is what I've tried so far:
def render = {
  def onSubmit1(): LiftResponse = {
    val userInput1 = S.param("getit").openOr("")
    val userInput2 = S.param("getit2").openOr("")
    val checkbox1 = S.param("yes").openOr("")
    val checkbox2 = S.param("yes").openOr("")
    val fileToDownload = funFile(as, art, top)
    val fileName = "My File"
    if (fileToDownload.isEmpty) {
      S.redirectTo("/Somewhere")
    } else {
      InMemoryResponse(
        fileToDownload.mkString("\n").getBytes("UTF-8"),
        "Content-Type" -> "text/plain; charset=utf8" ::
          "Content-Disposition" -> s"attachment; filename=$fileName" :: Nil,
        cookies = Nil, code = 200)
    }
  }
  "#submitButton" #> SHtml.onSubmitUnit(onSubmit1)
}

I found a way to do this: zip the two files together, using the approach from "How do I archive multiple files into a .zip file using Scala?":
if (checkbox1 == "checked" && checkbox2 == "checked") {
  val checkboxFile1 = new File("checkboxFile1.csv")
  using(new FileWriter(checkboxFile1)) { writer =>
    fileGetter1.foreach(d => writer.write(d))
  }
  val checkboxFile2 = new File("checkboxFile2.csv")
  using(new FileWriter(checkboxFile2)) { writer =>
    fileGetter2.foreach(d => writer.write(d))
  }
  val zipFileName = "Zipped file"
  zip(zipFileName, List("checkboxFile1.csv", "checkboxFile2.csv"))
  val zipToBArray = new BufferedInputStream(new FileInputStream(zipFileName))
  val getByteArray = Stream.continually(zipToBArray.read).takeWhile(_ != -1).map(_.toByte).toArray
  InMemoryResponse(
    getByteArray,
    "Content-Type" -> "application/zip; charset=utf8" ::
      "Content-Disposition" -> s"attachment; filename=file.zip" :: Nil,
    cookies = Nil, code = 200)
}
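The zip and using helpers referenced above are not shown; here is a rough sketch of what they might look like, following the java.util.zip approach from the linked question (names and signatures are illustrative, not the exact code from that answer):
import java.io.{BufferedInputStream, FileInputStream, FileOutputStream}
import java.util.zip.{ZipEntry, ZipOutputStream}

// Loan-pattern helper: run f on the resource, then always close it.
def using[A <: AutoCloseable, B](resource: A)(f: A => B): B =
  try f(resource) finally resource.close()

// Write each named file into a single zip archive at `out`.
def zip(out: String, files: Iterable[String]): Unit =
  using(new ZipOutputStream(new FileOutputStream(out))) { zipOut =>
    files.foreach { name =>
      zipOut.putNextEntry(new ZipEntry(name))
      using(new BufferedInputStream(new FileInputStream(name))) { in =>
        Iterator.continually(in.read()).takeWhile(_ != -1).foreach(b => zipOut.write(b))
      }
      zipOut.closeEntry()
    }
  }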

Related

Why does mapPartitions not see my val? (Scala/Spark)

I define a val like this:
val config = Config(args)
val product_type = config.product_type
Then I pass product_type as "AA", and my code is this:
val scores = df.mapPartitions(iterator => {
  val inputStream =
    if (product_type == "AA") {
      getClass().getClassLoader().getResourceAsStream("my_aa.hdf5")
    } else {
      getClass().getClassLoader().getResourceAsStream("my_bb.hdf5")
    }
  val multiLayerNetwork: MultiLayerNetwork = KerasModelImport.importKerasSequentialModelAndWeights(inputStream, false)
  val wrapped: ParallelInference = new ParallelInference.Builder(multiLayerNetwork).build()
  val res = iterator.map(row => {
    wrapped.output(row).toDoubleVector
  })
  res
})
But my inputStream ends up being "my_bb.hdf5", which is not correct; that value comes from the else branch. So why can't my product_type variable be read inside mapPartitions?
I print my product_type value before this code and checked it; it is "AA".
It seems to happen because I get this variable from an argument in spark-submit.sh, and it cannot be read inside mapPartitions.
It works like this:
val scores =
  if (product_type == "AA") {
    df.mapPartitions(iterator => {
      val inputStream = getClass().getClassLoader().getResourceAsStream("AA.hdf5")
      val multiLayerNetwork: MultiLayerNetwork = KerasModelImport.importKerasSequentialModelAndWeights(inputStream, false)
      val wrapped: ParallelInference = new ParallelInference.Builder(multiLayerNetwork).build()
      val res = iterator.map(row => {
        wrapped.output(row).toDoubleVector
      })
      res
    })
  } else {
    df.mapPartitions(iterator => {
      val inputStream = getClass().getClassLoader().getResourceAsStream("BB.hdf5")
      val multiLayerNetwork: MultiLayerNetwork = KerasModelImport.importKerasSequentialModelAndWeights(inputStream, false)
      val wrapped: ParallelInference = new ParallelInference.Builder(multiLayerNetwork).build()
      val res = iterator.map(row => {
        wrapped.output(row).toDoubleVector
      })
      res
    })
  }
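If duplicating the whole mapPartitions body bothers you, one option (a sketch only, relying on the same unqualified names as the snippets above) is to decide on the resource name on the driver and let the task closure capture only that plain String:
// Sketch: resolve everything that depends on product_type on the driver,
// so the closure captures a plain String instead of the config object.
val resourceName = if (product_type == "AA") "AA.hdf5" else "BB.hdf5"

val scores = df.mapPartitions { iterator =>
  val inputStream = getClass.getClassLoader.getResourceAsStream(resourceName)
  val multiLayerNetwork: MultiLayerNetwork =
    KerasModelImport.importKerasSequentialModelAndWeights(inputStream, false)
  val wrapped: ParallelInference = new ParallelInference.Builder(multiLayerNetwork).build()
  iterator.map(row => wrapped.output(row).toDoubleVector)
}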

Close InputStream wrapped into IO

I'm using IO (cats or scalaz, it does not matter), and I want to use bracket to close an InputStream after I'm done with it. The problem is that I'm reading gzipped files. Here is what I tried:
I (Incorrect).
val io1 = IO(Files.newInputStream(Paths.get("/tmp/file")))
val io2 = io1.map(is => new GZIPInputStream(is))
val io3 = io2.bracket { _ =>
  IO(println("use"))
  //empty usage
} { is =>
  println("close")
  IO(is.close())
}
This is incorrect because if /tmp/file is a broken zip file with an invalid magic number, we will never reach the "resource release" part of the bracket.
II (Incorrect).
val io1 = IO(Files.newInputStream(Paths.get("/tmp/file")))
val io3 = io1.bracket { is =>
  val gzis = new GZIPInputStream(is)
  IO(println("use"))
  //empty usage
} { is =>
  println("close")
  IO(is.close())
}
This is incorrect because we are closing the underlying stream but not the GZIPInputStream, so we may end up losing some data buffered inside it.
In Java I could simply do this without flushing:
var is: InputStream = null
try {
  is = Files.newInputStream(Paths.get("/tmp/file"))
  is = new GZIPInputStream(is)
  //use
} finally {
  if (is ne null)
    is.close()
}
Can you suggest some approach for dealing with GzipInputStream?
It is not a problem to call close on an input stream several times, so you can close the InputStream and the GZIPInputStream separately.
In Java it is common to let try-with-resources handle both streams:
try (InputStream is = Files.newInputStream(Paths.get("/tmp/file"));
     GZIPInputStream gzis = new GZIPInputStream(is)) {
  //use gzis
}
// both streams are closed in the implicit finally clause
You can translate this approach to IO brackets:
val io1 = IO(Files.newInputStream(Paths.get("/tmp/file")))
val io2 = io1.bracket { is =>
  IO(new GZIPInputStream(is)).bracket { gzis =>
    IO(println("using gzis"))
  }(gzis => IO(gzis.close()))
}(is => IO(is.close()))
To avoid nested brackets you can use Resource:
def openFile(path: Path) = Resource(IO {
  val is = Files.newInputStream(path)
  (is, IO(is.close()))
})

def openGZIP(is: InputStream) = Resource(IO {
  val gzis = new GZIPInputStream(is)
  (gzis, IO(gzis.close()))
})

val gzip: Resource[IO, GZIPInputStream] = for {
  is <- openFile(Paths.get("/tmp/file"))
  gzis <- openGZIP(is)
} yield gzis

gzip.use { gzis =>
  IO(println("using gzis"))
}
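Since both streams are AutoCloseable, the same composition can be written a bit more compactly with Resource.fromAutoCloseable (a sketch, assuming a cats-effect version that provides this constructor):
import java.nio.file.{Files, Path, Paths}
import java.util.zip.GZIPInputStream
import cats.effect.{IO, Resource}

// fromAutoCloseable registers close() as the release action for us.
def gzipResource(path: Path): Resource[IO, GZIPInputStream] =
  for {
    is   <- Resource.fromAutoCloseable(IO(Files.newInputStream(path)))
    gzis <- Resource.fromAutoCloseable(IO(new GZIPInputStream(is)))
  } yield gzis

gzipResource(Paths.get("/tmp/file")).use(gzis => IO(println("using gzis")))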

How can I write test cases for file uploading through extractRequestContext in Akka HTTP services?

Please advise: I have an upload service in an Akka HTTP microservice and it works fine. Now I need to write test cases for the code below.
path( "file-upload") {
extractClientIP { ip =>
optionalHeaderValueByName(Constants.AUTH) { auth =>
(post & extractRequestContext) { request =>
extractRequestContext {
ctx => {
implicit val materializer = ctx.materializer
implicit val ec = ctx.executionContext
val currentTime = TimeObject.getCurrentTime()
fileUpload("fileUpload") {
case (fileInfo, fileStream) =>
val localPath = Configuration.excelFilePath
val uniqueidString = "12345"
val filename = uniqueidString + fileInfo.fileName
val sink = FileIO.toPath(Paths.get(localPath) resolve filename)
val writeResult = fileStream.runWith(sink)
onSuccess(writeResult) { result =>
result.status match {
case Success(_) =>
var excelPath = localPath + File.separator + uniqueidString + fileInfo.fileName
var doc_count = itemsExcelParse(excelPath, companyCode, subCompanyId, userId)
val currentTime2 = TimeObject.getCurrentTime()
var upload_time = currentTime2 - currentTime
val resp: JsValue = Json.toJson(doc_count)
complete {
val json: JsValue = Json.obj("status" -> Constants.SUCCESS,
"status_details" -> "null", "upload_details" -> resp)
HttpResponse(status = StatusCodes.OK, entity = HttpEntity(ContentType(MediaTypes.`application/json`), json.toString))
}
case Failure(e) =>
complete {
val json: JsValue = Json.obj("status" -> Constants.ERROR, "status_details" -> Constants.ERROR_445)
HttpResponse(status = StatusCodes.BandwidthLimitExceeded, entity = HttpEntity(ContentType(MediaTypes.`application/json`), json.toString))
}
}
}
}
}
}
}
}
}
}
I have tried the following test case, but it's not working:
" File upload " should "be able to upload file" in {
val p: Path = Paths.get("E:\\Excel\\Tables.xlsx")
val formData = Multipart.FormData.fromPath("fileUpload", ContentTypes.NoContentType, p, 1000)
Post(s"/file-upload", formData) -> route -> check {
println(" File upload - file uploaded successfully")
status shouldBe StatusCodes.OK
responseAs[String] contains "File successfully uploaded"
}
}
I have also changed the content type to application/octet-stream, but the file is not uploaded to the server. Please suggest how I can write the test case for file uploading.
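For reference, here is a minimal sketch of how such a test might look. It assumes the spec mixes in ScalatestRouteTest (which provides ~> and addHeader), that route points at the upload route above, and that the Constants.AUTH header value is a placeholder:
import akka.http.scaladsl.model._
import akka.http.scaladsl.model.headers.RawHeader
import java.nio.file.{Path, Paths}

"File upload" should "accept a multipart file upload" in {
  val p: Path = Paths.get("E:\\Excel\\Tables.xlsx")
  // the part name must match fileUpload("fileUpload") in the route;
  // send the file as application/octet-stream rather than NoContentType
  val formData = Multipart.FormData.fromPath(
    "fileUpload", ContentTypes.`application/octet-stream`, p, chunkSize = 100000)

  Post("/file-upload", formData) ~>
    addHeader(RawHeader(Constants.AUTH, "test-token")) ~> // hypothetical token
    route ~> check {
      status shouldBe StatusCodes.OK
    }
}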

Split a list into several files in Scala

I have the following list:
10,44,22
10,47,12
15,38,3
15,41,30
16,44,15
16,47,18
22,38,21
22,41,42
34,44,40
34,47,36
40,38,39
40,41,42
45,38,27
45,41,30
46,44,45
46,47,48
Then I create one file with its content using the following code:
val fstream: FileWriter = new FileWriter("patSPO.csv")
var out: BufferedWriter = new BufferedWriter(fstream)
val sl = listSPO.sortBy(l => (l.sub, l.pre))
for (a <- 0 to listSPO.size - 1) {
  out.write(sl(a).sub.toString + "," + sl(a).pre.toString + "," + sl(a).obj.toString + "\n")
}
out.close()
However, I want to divide the content into n files; here is what I tried for 4 files:
val fstream: FileWriter = new FileWriter("patSPO.csv")
val fstream1: FileWriter = new FileWriter("patSPO1.csv")
val fstream2: FileWriter = new FileWriter("patSPO2.csv")
val fstream3: FileWriter = new FileWriter("patSPO3.csv")
val fstream4: FileWriter = new FileWriter("patSPO4.csv")
var out: BufferedWriter = new BufferedWriter(fstream)
var out1: BufferedWriter = new BufferedWriter(fstream1)
var out2: BufferedWriter = new BufferedWriter(fstream2)
var out3: BufferedWriter = new BufferedWriter(fstream3)
var out4: BufferedWriter = new BufferedWriter(fstream4)
val b: Int = listSPO.size / 4
val sl = listSPO.sortBy(l => (l.sub, l.pre))
for (a <- 0 to listSPO.size - 1) {
  out.write(sl(a).sub.toString + "," + sl(a).pre.toString + "," + sl(a).obj.toString + "\n")
}
for (a <- 0 to b - 1) {
  out1.write(sl(a).sub.toString + "," + sl(a).pre.toString + "," + sl(a).obj.toString + "\n")
}
for (a <- b to (b * 2) - 1) {
  out2.write(sl(a).sub.toString + "," + sl(a).pre.toString + "," + sl(a).obj.toString + "\n")
}
for (a <- b * 2 to (b * 3) - 1) {
  out3.write(sl(a).sub.toString + "," + sl(a).pre.toString + "," + sl(a).obj.toString + "\n")
}
for (a <- b * 3 to (b * 4) - 1) {
  out4.write(sl(a).sub.toString + "," + sl(a).pre.toString + "," + sl(a).obj.toString + "\n")
}
out.close()
out1.close()
out2.close()
out3.close()
out4.close()
My question: is there a general way where I just specify the number of files to generate, for example 32, without writing the out, the for loop, and the fstream 32 times?
First, let's make some utilities to eliminate ugly boilerplate:
def open(
  number: Int = 1,
  prefix: String,
  ext: String
) = (0 until number)
  .iterator
  .map { i =>
    "%s%s%s".format(prefix, if (i == 0) "" else i.toString, ext)
  }
  .map(new FileWriter(_))
  .map(new BufferedWriter(_))

def toString(l: WhateverTheType) = Seq(
  l.sub,
  l.pre,
  l.obj
).mkString(",") + "\n"
And now to the implementation:
def writeToFiles(
  listSPO: List[WhateverTheType],
  numFiles: Int = 1,
  prefix: String = "pasSPO",
  ext: String = ".csv"
) = listSPO
  .grouped((listSPO.size + numFiles - 1) / numFiles) // ceiling division, so no group is dropped by the zip
  .zip(open(numFiles, prefix, ext))
  .foreach { case (input, file) =>
    try {
      input.foreach(l => file.write(toString(l)))
    } finally {
      file.close()
    }
  }
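A hypothetical call site (assuming listSPO is in scope) would then be:
// Write the sorted list into 32 files: pasSPO.csv, pasSPO1.csv, ..., pasSPO31.csv
writeToFiles(listSPO.sortBy(l => (l.sub, l.pre)), numFiles = 32)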
Assuming the objects in the list have this type (otherwise, replace SPO in the code below with your type):
case class SPO(sub: Int, pre: Int, obj: Int)
This should do it:
val sl = listSPO.sortBy(l => (l.sub, l.pre))
val files = 5 // whatever number you want
// helper function to write a single list - similar to your implementation but shorter
def writeListToFile(name: String, list: List[SPO]): Unit = {
  val writer = new BufferedWriter(new FileWriter(name))
  list.foreach(spo => writer.write(s"${spo.sub},${spo.pre},${spo.obj}\n"))
  writer.close()
}

sl.grouped(sl.size / files) // split into sublists
  .zipWithIndex             // add index of sublists for file name
  .foreach {
    case (sublist, 0)     => writeListToFile("pasSPO.csv", sublist) // if indeed you want the first file name NOT to include the index
    case (sublist, index) => writeListToFile(s"pasSPO$index.csv", sublist)
  }

How to download an HTTP resource to a file with Akka Streams and HTTP?

Over the past few days I have been trying to figure out the best way to download an HTTP resource to a file using Akka Streams and HTTP.
Initially I started with the Future-Based Variant and that looked something like this:
def downloadViaFutures(uri: Uri, file: File): Future[Long] = {
  val request = Get(uri)
  val responseFuture = Http().singleRequest(request)
  responseFuture.flatMap { response =>
    val source = response.entity.dataBytes
    source.runWith(FileIO.toFile(file))
  }
}
That was kind of okay but once I learnt more about pure Akka Streams I wanted to try and use the Flow-Based Variant to create a stream starting from a Source[HttpRequest]. At first this completely stumped me until I stumbled upon the flatMapConcat flow transformation. This ended up a little more verbose:
def responseOrFail[T](in: (Try[HttpResponse], T)): (HttpResponse, T) = in match {
  case (responseTry, context) => (responseTry.get, context)
}

def responseToByteSource[T](in: (HttpResponse, T)): Source[ByteString, Any] = in match {
  case (response, _) => response.entity.dataBytes
}

def downloadViaFlow(uri: Uri, file: File): Future[Long] = {
  val request = Get(uri)
  val source = Source.single((request, ()))
  val requestResponseFlow = Http().superPool[Unit]()
  source.
    via(requestResponseFlow).
    map(responseOrFail).
    flatMapConcat(responseToByteSource).
    runWith(FileIO.toFile(file))
}
Then I wanted to get a little tricky and use the Content-Disposition header.
Going back to the Future-Based Variant:
def destinationFile(downloadDir: File, response: HttpResponse): File = {
  val fileName = response.header[ContentDisposition].get.value
  val file = new File(downloadDir, fileName)
  file.createNewFile()
  file
}

def downloadViaFutures2(uri: Uri, downloadDir: File): Future[Long] = {
  val request = Get(uri)
  val responseFuture = Http().singleRequest(request)
  responseFuture.flatMap { response =>
    val file = destinationFile(downloadDir, response)
    val source = response.entity.dataBytes
    source.runWith(FileIO.toFile(file))
  }
}
But now I have no idea how to do this with the Flow-Based Variant. This is as far as I got:
def responseToByteSourceWithDest[T](in: (HttpResponse, T), downloadDir: File): Source[(ByteString, File), Any] = in match {
  case (response, _) =>
    val source = responseToByteSource(in)
    val file = destinationFile(downloadDir, response)
    source.map((_, file))
}

def downloadViaFlow2(uri: Uri, downloadDir: File): Future[Long] = {
  val request = Get(uri)
  val source = Source.single((request, ()))
  val requestResponseFlow = Http().superPool[Unit]()
  val sourceWithDest: Source[(ByteString, File), Unit] = source.
    via(requestResponseFlow).
    map(responseOrFail).
    flatMapConcat(responseToByteSourceWithDest(_, downloadDir))
  sourceWithDest.runWith(???)
}
So now I have a Source that will emit one or more (ByteString, File) elements for each File (I say each File since there is no reason the original Source has to be a single HttpRequest).
Is there any way to take these and route them to a dynamic Sink?
I'm thinking something like flatMapConcat, such as:
def runWithMap[T, Mat2](f: T => Graph[SinkShape[Out], Mat2])(implicit materializer: Materializer): Mat2 = ???
So that I could complete downloadViaFlow2 with:
def destToSink(destination: File): Sink[(ByteString, File), Future[Long]] = {
  val sink = FileIO.toFile(destination, true)
  Flow[(ByteString, File)].map(_._1).toMat(sink)(Keep.right)
}

sourceWithDest.runWithMap {
  case (_, file) => destToSink(file)
}
The solution does not require a flatMapConcat. If you don't need any return values from the file writing then you can use Sink.foreach:
def writeFile(downloadDir: File)(httpResponse: HttpResponse): Future[Long] = {
  val file = destinationFile(downloadDir, httpResponse)
  httpResponse.entity.dataBytes.runWith(FileIO.toFile(file))
}

def downloadViaFlow2(uri: Uri, downloadDir: File): Future[Unit] = {
  val request = HttpRequest(uri = uri)
  val source = Source.single((request, ()))
  val requestResponseFlow = Http().superPool[Unit]()
  source.via(requestResponseFlow)
    .map(responseOrFail)
    .map(_._1)
    .runWith(Sink.foreach(writeFile(downloadDir)))
}
Note that Sink.foreach creates Futures from the writeFile function, so there's not much back-pressure involved. The writeFile could be slowed down by the hard drive, but the stream would keep generating Futures. To control this you can use Flow.mapAsyncUnordered (or Flow.mapAsync):
val parallelism = 10

source.via(requestResponseFlow)
  .map(responseOrFail)
  .map(_._1)
  .mapAsyncUnordered(parallelism)(writeFile(downloadDir))
  .runWith(Sink.ignore)
If you want to accumulate the Long values for a total count you need to combine with a Sink.fold:
source.via(requestResponseFlow)
  .map(responseOrFail)
  .map(_._1)
  .mapAsyncUnordered(parallelism)(writeFile(downloadDir))
  .runWith(Sink.fold(0L)(_ + _))
The fold will keep a running sum and emit the final value when the source of requests has dried up.
Using the Play Web Services client injected as ws, remembering to import scala.concurrent.duration._:
def downloadFromUrl(url: String)(ws: WSClient): Future[Try[File]] = {
  val file = File.createTempFile("my-prefix", null, new File("/tmp"))
  file.deleteOnExit()
  val futureResponse: Future[WSResponse] =
    ws.url(url).withMethod("GET").withRequestTimeout(5.minutes).stream()
  futureResponse.flatMap { res =>
    res.status match {
      case 200 =>
        val outputStream = java.nio.file.Files.newOutputStream(file.toPath)
        val sink = Sink.foreach[ByteString] { bytes => outputStream.write(bytes.toArray) }
        res.bodyAsSource.runWith(sink).andThen {
          case result =>
            outputStream.close()
            result.get
        } map (_ => Success(file))
      case other =>
        Future(Failure[File](new Exception("HTTP Failure, response code " + other + " : " + res.statusText)))
    }
  }
}