Why is it that when I don't include a file I get this error?
[IOException: can not replace a non-empty directory: Path(./public/upload)]
request.body.file("resourceFile").map { k =>
  val t = new java.io.File(s"./public/upload/${k.filename}")
  k.ref.moveTo(t, true)
  println("Ok File Upload" + k.filename)
}
How do you stop this from happening?
Ta
I don't understand why it is happening.
When no file is selected, the multipart part can still be present with an empty filename, so the target path resolves to the upload directory itself and moveTo fails trying to replace a non-empty directory. You could add an ugly if statement to prevent the error:
request.body.file("resourceFile").map { k =>
  if (!k.filename.isEmpty) {
    val t = new java.io.File(s"./public/upload/${k.filename}")
    k.ref.moveTo(t, true)
    println("Ok File Upload" + k.filename)
  }
}
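A slightly cleaner alternative, sketched here rather than taken from the original answer, is to filter the Option itself so the empty-filename case never reaches the move:
request.body.file("resourceFile")
  .filter(_.filename.nonEmpty)  // ignore the part when no file was actually selected
  .map { k =>
    val t = new java.io.File(s"./public/upload/${k.filename}")
    k.ref.moveTo(t, true)
    println("Ok File Upload " + k.filename)
  }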
I'm trying to set up Alpakka S3 for file uploads. Here are my configs:
Alpakka S3 dependency:
...
"com.lightbend.akka" %% "akka-stream-alpakka-s3" % "0.20"
...
Here is application.conf:
akka.stream.alpakka.s3 {
  buffer = "memory"
  proxy {
    host = ""
    port = 8000
    secure = true
  }
  aws {
    credentials {
      provider = default
    }
  }
  path-style-access = false
  list-bucket-api-version = 2
}
File upload code example:
private val awsCredentials = new BasicAWSCredentials("my_key", "my_secret_key")
private val awsCredentialsProvider = new AWSStaticCredentialsProvider(awsCredentials)
private val regionProvider = new AwsRegionProvider { def getRegion: String = "us-east-1" }
private val settings = new S3Settings(MemoryBufferType, None, awsCredentialsProvider, regionProvider, false, None, ListBucketVersion2)
private val s3Client = new S3Client(settings)(system, materializer)
val fileSource = Source.single(ByteString("ololo blabla bla"))
val fileName = UUID.randomUUID().toString
val s3Sink: Sink[ByteString, Future[MultipartUploadResult]] = s3Client.multipartUpload("my_basket", fileName)
fileSource.runWith(s3Sink)
  .map { result =>
    println(s"${result.location}")
  } recover {
    case ex: Exception => println(s"$ex")
  }
When I run this code I get:
javax.net.ssl.SSLHandshakeException: General SSLEngine problem
What could be the reason?
The certificate problem arises for bucket names containing dots: with virtual-host-style access the bucket name becomes part of the hostname, and a dotted name no longer matches Amazon's wildcard certificate. You may switch to akka.stream.alpakka.s3.path-style-access = true to get rid of this.
We're considering making it the default: https://github.com/akka/alpakka/issues/1152
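Since the code above builds S3Settings by hand rather than reading application.conf, the same switch can be flipped there; a sketch, assuming the constructor argument order used in the question:
// pathStyleAccess = true: requests address the bucket as s3.amazonaws.com/my_basket/...
// instead of my_basket.s3.amazonaws.com, so a dotted bucket name no longer breaks TLS.
private val settings = new S3Settings(
  MemoryBufferType,
  None,                   // no proxy
  awsCredentialsProvider,
  regionProvider,
  true,                   // pathStyleAccess
  None,                   // no custom endpoint
  ListBucketVersion2
)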
I am deleting output folders before running my Spark job.
Sometimes it deletes the folder, but sometimes it only deletes the files inside it and the top-level folder still remains.
The structure contains subfolders.
This is how I am deleting the folders:
def DeleteDescrFolder(fs: org.apache.hadoop.fs.FileSystem, descrFileURL: String) = {
  val bDescr = fs.exists(new Path(descrFileURL))
  if (true.equals(bDescr)) {
    val outputFile = fs.globStatus(new Path(descrFileURL))
    for (DeleteFilePath <- outputFile) {
      fs.delete(DeleteFilePath.getPath)
    }
    println("Descr File is delete from " + descrFileURL)
  } else {
    println(descrFileURL + "Decsr Does not Exist")
  }
}
How can I remove the folder itself as well?
You are only deleting the files inside the specified folder. Try the following code, which deletes the folder as well:
def DeleteDescrFolder(fs: org.apache.hadoop.fs.FileSystem, descrFileURL: String) = {
  if (fs.exists(new Path(descrFileURL))) {
    try {
      fs.delete(new Path(descrFileURL), true)
      println("Descr folder is deleted " + descrFileURL)
    } catch {
      case e: Exception => print("exception " + e)
    }
  } else {
    println(descrFileURL + " Descr does not exist")
  }
}
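For completeness, a minimal sketch of calling this before writing output from a Spark job; the SparkSession and output path are assumptions for illustration, not from the question:
import org.apache.hadoop.fs.FileSystem
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().getOrCreate()
val fs = FileSystem.get(spark.sparkContext.hadoopConfiguration)
val outputDir = "/user/aduser/output/descr"  // hypothetical output location

DeleteDescrFolder(fs, outputDir)             // remove any folder left over from a previous run
// df.write.parquet(outputDir)               // then write fresh output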
I am new to Scala/Spray/Akka, so please excuse this dumb request.
I have the following directive, and it is being called, since the first logger line ("inside") shows up in the logs.
However, everything inside mapRequest {} is skipped over; the logging line ("headers:") does not show up.
private def directiveToGetHeaders(input: String): Directive0 = {
  logger.info("inside")
  mapRequest { request =>
    val headList: Seq[HttpHeader] = request.headers
    logger.info("headers: " + headList.size)
    request
  }
}
I am not sure what I did wrong. My goal is to pull out all the HTTP headers. Any tip/pointer much appreciated. Thanks
-v
The function you pass to mapRequest only runs when a request actually flows through the directive inside a route, which is why only the "inside" line (executed when the directive is built) shows up. You can use the extractRequest directive to get at the headers:
private def directiveToGetHeaders(input: String): Route = {
  logger.info("inside")
  extractRequest { request =>
    val headList: Seq[HttpHeader] = request.headers
    logger.info("headers: " + headList.size)
    complete(HttpResponse())
  }
}
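If the goal is to keep a reusable directive that hands all the headers to an inner route, a sketch of one way to do it (the names headersDirective and route are illustrative, and logger is assumed to be in scope as in the question):
import akka.http.scaladsl.model.HttpHeader
import akka.http.scaladsl.server.Directive1
import akka.http.scaladsl.server.Directives._

// A Directive1 that extracts every request header and logs how many were seen.
private def headersDirective: Directive1[Seq[HttpHeader]] =
  extractRequest.map { request =>
    logger.info("headers: " + request.headers.size)
    request.headers
  }

// Usage in a route: the extracted headers are handed to the inner lambda per request.
val route =
  headersDirective { headers =>
    complete(s"received ${headers.size} headers")
  }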
I can extract the value for
val ranking_score_path = cf.getString(stg + ".input.path.ranking_score")
.replaceAll("_replace_date_", this_date)
and
val output_path = cf.getString(stg + ".output.path.hdfs") + tomz_date + "/"
but not
val AS_HOST = cf.getString(stg + ".output.path.aerospike.host")
println("AS_HOST = " + AS_HOST)
I have tried replacing . with _ and adding commas, but it didn't work.
Error log:
Exception in thread "main" com.typesafe.config.ConfigException$Missing: No configuration setting found for key 'production.output.path.aerospike'
at com.typesafe.config.impl.SimpleConfig.findKeyOrNull(SimpleConfig.java:152)
at ...
application.conf
production {
  input {
    path {
      local = "/home/aduser/tmp/"
      hdfs = "/user/aduser/tmp_vincent/CPA/_replace_date_/intermediate/l1/"
      ranking_score = "/home/aduser/plt/item_performance/pipeline/cpa/output/_replace_date_/predict_output/ranking_score.csv"
    }
  }
  output {
    path {
      local = "/home/aduser/tmp/"
      hdfs = "/user/aduser/dyson/display/"
      aerospike {
        host = "0.0.0.0"
        port = 3000
        namespace = "test"
        set = "spark-test2"
      }
    }
  }
}
Reply to comment #1:
The cf is very long, but the important part is as follows:
... ore.csv"}},"output":{"path":{"hdfs":"/user/aduser/dyson/display/","local":"/home/aduser/tmp/"}}},"sun":{"arch": ...
Effort #1: replaced part of the application.conf:
path {
  local = "/home/aduser/tmp/"
  hdfs = "/user/aduser/dyson/display/"
  ae_host = "0.0.0.0"
  ae_port = 3000
  ae_namespace = "test"
  ae_set = "spark-test2"
}
and changed the calling method
val AS_HOST = cf.getString(stg + ".output.path.ae_host")
println("AS_HOST = " + AS_HOST)
but I am still getting errors:
Exception in thread "main" com.typesafe.config.ConfigException$Missing: No configuration setting found for key 'production.output.path.ae_host'
at com.typesafe.config.impl.SimpleConfig.findKeyOrNull(SimpleConfig.java:152)
I found the problem after several hours: my application.conf was at the same level as src/main/scala, and it only worked partially, for reasons I don't know. It worked perfectly after creating src/main/resources and putting application.conf inside.
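A quick way to check which file Typesafe Config actually loaded and what the resolved tree contains (a debugging sketch, not part of the original post):
import com.typesafe.config.{ConfigFactory, ConfigRenderOptions}

val cf = ConfigFactory.load()        // resolves application.conf from the classpath
println(cf.origin().description())   // shows where the top-level config came from
println(cf.getConfig("production").root().render(ConfigRenderOptions.concise()))  // dumps the resolved "production" subtree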
I would like to merge a local file, /opt/one.txt, with a file on my HDFS, hdfs://localhost:54310/dummy/two.txt.
one.txt contains: f,g,h
two.txt contains: 2424244r
My code:
val cfg = new Configuration()
cfg.addResource(new Path("/usr/local/hadoop/etc/hadoop/core-site.xml"))
cfg.addResource(new Path("/usr/local/hadoop/etc/hadoop/hdfs-site.xml"))
cfg.addResource(new Path("/usr/local/hadoop/etc/hadoop/mapred-site.xml"))
try {
  val srcPath = "/opt/one.txt"
  val dstPath = "/dumCBF/two.txt"
  val srcFS = FileSystem.get(URI.create(srcPath), cfg)
  val dstFS = FileSystem.get(URI.create(dstPath), cfg)
  FileUtil.copyMerge(srcFS,
    new Path(srcPath),
    dstFS,
    new Path(dstPath),
    true,
    cfg,
    null)
  println("end process")
} catch {
  case m: Exception => m.printStackTrace()
  case k: Throwable => k.printStackTrace()
}
I was following this tutorial: http://deploymentzone.com/2015/01/30/spark-and-merged-csv-files/
and it's not working at all; the error is below:
java.io.FileNotFoundException: File does not exist: /opt/one.txt
I don't know why the error says that; the file one.txt does exist.
I then added some code to check whether the file exists:
if (new File(srcPath).exists()) println("file exists")
Any ideas or references? Thanks!
EDIT 1, 2: fixed typo in extensions.
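One likely explanation, offered here as a guess rather than from the original thread: URI.create("/opt/one.txt") has no scheme, so FileSystem.get resolves it against the default filesystem from core-site.xml (HDFS) and looks for /opt/one.txt there, while the java.io.File check looks at the local disk and therefore succeeds. A sketch of reading the source from the local filesystem explicitly, reusing the paths from the question:
import java.net.URI
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileSystem, FileUtil, Path}

val cfg = new Configuration()
cfg.addResource(new Path("/usr/local/hadoop/etc/hadoop/core-site.xml"))
cfg.addResource(new Path("/usr/local/hadoop/etc/hadoop/hdfs-site.xml"))

val srcFS = FileSystem.getLocal(cfg)                                    // local filesystem for /opt/one.txt
val dstFS = FileSystem.get(URI.create("hdfs://localhost:54310/"), cfg)  // HDFS for the destination

FileUtil.copyMerge(srcFS, new Path("/opt/one.txt"),
                   dstFS, new Path("/dumCBF/two.txt"),
                   true, cfg, null)   // true deletes the local source after the merge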