I am trying to write unit test cases for Amazon S3 file uploads. I have used Mockito.
Following is my code for inserting files into S3:
def insertingFilesInS3(path: String, file: File): Boolean = {
  try {
    s3client.putObject(BUCKET_NAME, path, file)
    true
  } catch {
    case ex: Exception =>
      info(s"File storage failed for $path $file" + ex.printStackTrace())
      false
  }
}
So far I have written:
val s3: AmazonS3Client = mock[AmazonS3Client]("s3")
val messageDigest = MessageDigest.getInstance("MD5")
val bucket = "bucket"
val keyName = "keyName"
val file: File = mock[File]
val expectedResult: PutObjectResult = mock[PutObjectResult]
val objectmetadata: ObjectMetadata = mock[ObjectMetadata]
"return true when inserting files in s3" in {
  when(s3.putObject(bucket, keyName, file).setMetadata(objectmetadata)).thenReturn(expectedResult)
  val result = S3Util.insertingFilesInS3(keyName, file)
  assert(!result)
}
The assert statement is throwing an exception and I'm getting false as the result.
I am getting a null pointer exception:
java.lang.NullPointerException
at com.amazonaws.services.s3.internal.Mimetypes.getMimetype(Mimetypes.java:160)
at com.amazonaws.services.s3.internal.Mimetypes.getMimetype(Mimetypes.java:201)
at com.amazonaws.services.s3.AmazonS3Client.putObject(AmazonS3Client.java:1642)
at com.amazonaws.services.s3.AmazonS3Client.putObject(AmazonS3Client.java:1604)
at s3Utility.S3Util$class.insertingFilesInS3(S3Util.scala:15)
at s3Utility.S3Util$.insertingFilesInS3(S3Util.scala:52)
at com.codesquad.test.S3UtilityTest.S3UtilTest$$anonfun$1.apply(S3UtilTest.scala:28)
at com.codesquad.test.S3UtilityTest.S3UtilTest$$anonfun$1.apply(S3UtilTest.scala:20)
at org.scalatest.OutcomeOf$class.outcomeOf(OutcomeOf.scala:85)
at org.scalatest.OutcomeOf$.outcomeOf(OutcomeOf.scala:104)
at org.scalatest.Transformer.apply(Transformer.scala:22)
at org.scalatest.Transformer.apply(Transformer.scala:20)
at org.scalatest.WordSpecLike$$anon$1.apply(WordSpecLike.scala:1078)
at org.scalatest.TestSuite$class.withFixture(TestSuite.scala:196)
at com.codesquad.test.S3UtilityTest.S3UtilTest.withFixture(S3UtilTest.scala:12)
at org.scalatest.WordSpecLike$class.invokeWithFixture$1(WordSpecLike.scala:1075)
at org.scalatest.WordSpecLike$$anonfun$runTest$1.apply(WordSpecLike.scala:1088)
at org.scalatest.WordSpecLike$$anonfun$runTest$1.apply(WordSpecLike.scala:1088)
at org.scalatest.SuperEngine.runTestImpl(Engine.scala:289)
at org.scalatest.WordSpecLike$class.runTest(WordSpecLike.scala:1088)
at com.codesquad.test.S3UtilityTest.S3UtilTest.org$scalatest$BeforeAndAfter$$super$runTest(S3UtilTest.scala:12)
at org.scalatest.BeforeAndAfter$class.runTest(BeforeAndAfter.scala:203)
at com.codesquad.test.S3UtilityTest.S3UtilTest.runTest(S3UtilTest.scala:12)
at org.scalatest.WordSpecLike$$anonfun$runTests$1.apply(WordSpecLike.scala:1147)
at org.scalatest.WordSpecLike$$anonfun$runTests$1.apply(WordSpecLike.scala:1147)
at org.scalatest.SuperEngine$$anonfun$traverseSubNodes$1$1.apply(Engine.scala:396)
at org.scalatest.SuperEngine$$anonfun$traverseSubNodes$1$1.apply(Engine.scala:384)
at scala.collection.immutable.List.foreach(List.scala:392)
at org.scalatest.SuperEngine.traverseSubNodes$1(Engine.scala:384)
at org.scalatest.SuperEngine.org$scalatest$SuperEngine$$runTestsInBranch(Engine.scala:379)
at org.scalatest.SuperEngine.runTestsImpl(Engine.scala:461)
at org.scalatest.WordSpecLike$class.runTests(WordSpecLike.scala:1147)
at com.codesquad.test.S3UtilityTest.S3UtilTest.runTests(S3UtilTest.scala:12)
at org.scalatest.Suite$class.run(Suite.scala:1147)
at com.codesquad.test.S3UtilityTest.S3UtilTest.org$scalatest$WordSpecLike$$super$run(S3UtilTest.scala:12)
at org.scalatest.WordSpecLike$$anonfun$run$1.apply(WordSpecLike.scala:1192)
at org.scalatest.WordSpecLike$$anonfun$run$1.apply(WordSpecLike.scala:1192)
at org.scalatest.SuperEngine.runImpl(Engine.scala:521)
at org.scalatest.WordSpecLike$class.run(WordSpecLike.scala:1192)
at com.codesquad.test.S3UtilityTest.S3UtilTest.org$scalatest$BeforeAndAfter$$super$run(S3UtilTest.scala:12)
at org.scalatest.BeforeAndAfter$class.run(BeforeAndAfter.scala:258)
at com.codesquad.test.S3UtilityTest.S3UtilTest.run(S3UtilTest.scala:12)
at org.scalatest.tools.SuiteRunner.run(SuiteRunner.scala:45)
at org.scalatest.tools.Runner$$anonfun$doRunRunRunDaDoRunRun$1.apply(Runner.scala:1340)
You are trying to call a method of a mocked object without stating what it should return, so first you need to stub that. Also, you are chaining method calls on the s3 object: setMetadata() returns Unit, and you are asking it to return an expectedResult of type PutObjectResult, which will give you a compilation error. Instead, try the following.
"return true when inserting files in s3" in {
  when(s3.putObject(bucket, keyName, file)).thenReturn(expectedResult)
  val result = S3Util.insertingFilesInS3(keyName, file)
  assert(result)
}
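Note that stubbing only helps if the mocked client is the instance S3Util actually uses; the NullPointerException inside Mimetypes.getMimetype suggests a real AmazonS3Client is still being invoked. A minimal sketch of an alternative, with a hypothetical S3ClientLike trait (not part of the AWS SDK) that makes the client injectable, so the success and failure paths can be tested without a mocking framework at all:

```scala
import java.io.File

// Hypothetical seam over the AWS client so tests can substitute a stub.
trait S3ClientLike {
  def putObject(bucket: String, key: String, file: File): Unit
}

class S3Util(s3client: S3ClientLike, bucketName: String) {
  def insertingFilesInS3(path: String, file: File): Boolean =
    try {
      s3client.putObject(bucketName, path, file)
      true
    } catch {
      case _: Exception => false
    }
}

// Hand-written stubs standing in for Mockito mocks: one succeeds, one throws.
val okClient = new S3ClientLike {
  def putObject(b: String, k: String, f: File): Unit = ()
}
val failingClient = new S3ClientLike {
  def putObject(b: String, k: String, f: File): Unit =
    throw new RuntimeException("boom")
}

assert(new S3Util(okClient, "bucket").insertingFilesInS3("keyName", new File("dummy")))
assert(!new S3Util(failingClient, "bucket").insertingFilesInS3("keyName", new File("dummy")))
```

In production code the trait would be implemented by delegating to a real AmazonS3Client; the point is only that the dependency is passed in rather than hard-wired.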
This question is a follow-up to Groovy/Jenkins: how to refactor sh(script:"curl ...") to URL?.
Though I intend to try invoking the REST API with the HTTP Request plugin, as one answerer has suggested, I'd also like to specifically understand the original problem regarding non-serializability.
I've reduced the code in the linked Q&A to more minimally demonstrate the problem.
This code:
#Library('my-sandbox-libs#dev') sandbox_lib
pipeline {
agent any
stages {
stage( "1" ) { steps { script { echo "hello" } } }
stage( "2" ) {
steps {
script {
try {
my_lib.v4()
}
catch(Exception e) {
echo "Jenkinsfile: ${e.toString()}"
throw e
}
}
}
}
stage( "3" ) { steps { script { echo "world" } } }
}
}
// vars/my_lib.groovy
import groovy.json.JsonOutput
def v4() {
def post = new URL("https://bitbucket.company.com/rest/build-status/1.0/commits/86c36485c0cbf956a62cbc1c370f1f3eecc8665d").openConnection();
def dict = [:]
dict.state = "INPROGRESS"
dict.key = "foo_42"
dict.url = "http://url/to/nowhere"
def message = JsonOutput.toJson(dict).toString()
post.setRequestMethod("POST")
post.setDoOutput(true)
post.setRequestProperty("Content-Type", "application/json")
post.getOutputStream().write(message.getBytes("UTF-8"));
def postRC = post.getResponseCode();
println(postRC);
if (postRC.equals(200)) {
println(post.getInputStream().getText());
}
}
...generates HTTP error 401. This is expected, because it invokes a Bitbucket REST API without the necessary authentication.
What's missing is the "Bearer: xyz" secret text. In the past, I've gotten this secret text using Jenkins'/Groovy's withCredentials function, as in the modified function v4() below:
// vars/my_lib.groovy
import groovy.json.JsonOutput
def v4() {
def post = new URL("https://bitbucket.company.com/rest/build-status/1.0/commits/86c36485c0cbf956a62cbc1c370f1f3eecc8665d").openConnection();
def dict = [:]
dict.state = "INPROGRESS"
dict.key = "foo_42"
dict.url = "http://url/to/nowhere"
def message = JsonOutput.toJson(dict).toString()
post.setRequestMethod("POST")
post.setDoOutput(true)
post.setRequestProperty("Content-Type", "application/json")
withCredentials([string(credentialsId: 'bitbucket_cred_id',
variable: 'auth_token')]) {
post.setRequestProperty("Authorization", "Bearer " + auth_token)
}
post.getOutputStream().write(message.getBytes("UTF-8"));
def postRC = post.getResponseCode();
println(postRC);
if (postRC.equals(200)) {
println(post.getInputStream().getText());
}
}
...but I've discovered that the addition of that withCredentials-block, specifically, introduces a java.io.NotSerializableException: sun.net.www.protocol.https.HttpsURLConnectionImpl runtime exception.
At the linked Stack Overflow Q&A, one commenter has informed me that the problem has to do with serializable vs. non-serializable objects in this function. But I'm having a hard time understanding what non-serializable object is introduced by use of the withCredentials-block.
As a point of reference, use of the same withCredentials-block works just fine when I invoke the same REST API using curl instead of "native" Jenkins/Groovy functions. I.e. the following code works fine:
def v1() {
def dict = [:]
dict.state = "INPROGRESS"
dict.key = "foo_42"
dict.url = "http://url/to/nowhere"
withCredentials([string(credentialsId: 'bitbucket_cred_id',
variable: 'auth_token')]) {
def cmd = "curl -f -L " +
"-H \"Authorization: Bearer ${auth_token}\" " +
"-H \"Content-Type:application/json\" " +
"-X POST https://bitbucket.company.com/rest/build-status/1.0/commits/86c36485c0cbf956a62cbc1c370f1f3eecc8665d " +
"-d \'${JsonOutput.toJson(dict)}\'"
sh(script: cmd, returnStatus: true)
}
}
So, in summary, this question is: why does withCredentials() introduce non-serializable objects (and what is the non-serializable object) that cause NotSerializableException exceptions when used with URL and/or HttpURLConnection, and how does one work around this?
I'm not unwilling to use a different solution, such as httpRequest objects, but this question is about learning the nature of this specific problem, and how to work around it with the existing objects, i.e. URL and HttpURLConnection objects.
Update: As it turns out, I'm unable to use the suggested alternate solution using httpRequest objects, because the HTTP Request plugin is only available for Jenkins versions 2.222.4 or newer, which our Jenkins does not meet. It's outside my authority to update our Jenkins version, so I'll basically need to assume an inability to upgrade Jenkins.
Our Jenkins version is 2.190.3, which has Groovy version 2.4.12.
When I have such issues, I usually put the "low level" code that calls down to the Groovy/Java API into #NonCPS functions. Objects in such functions don't need to be serializable, so we can freely use any Groovy/Java API.
Background reading: Pipeline CPS Method Mismatches
Make sure you don't call any Jenkins pipeline steps from #NonCPS functions (nor any other function that is not marked #NonCPS) - such code could silently fail or do unexpected stuff! (There are some "safe" functions like echo though.)
As withCredentials is a pipeline step, it has to be called from a "regular" function (not marked as #NonCPS), one level up the call chain.
Note that I'm passing auth_token as an argument to v4_internal. If you need other Jenkins variables in the code, these should also be passed as arguments.
// vars/my_lib.groovy
import groovy.json.JsonOutput
def v4() {
withCredentials([string(credentialsId: 'bitbucket_cred_id',
variable: 'auth_token')]) {
v4_internal(auth_token)
}
}
#NonCPS
def v4_internal( def auth_token ) {
def post = new URL("https://bitbucket.company.com/rest/build-status/1.0/commits/86c36485c0cbf956a62cbc1c370f1f3eecc8665d").openConnection();
def dict = [:]
dict.state = "INPROGRESS"
dict.key = "foo_42"
dict.url = "http://url/to/nowhere"
def message = JsonOutput.toJson(dict).toString()
post.setRequestMethod("POST")
post.setDoOutput(true)
post.setRequestProperty("Content-Type", "application/json")
post.setRequestProperty("Authorization", "Bearer " + auth_token)
post.getOutputStream().write(message.getBytes("UTF-8"));
def postRC = post.getResponseCode();
println(postRC);
if (postRC.equals(200)) {
println(post.getInputStream().getText());
}
}
This sucks - and I hope someone comes along with a better answer - but it looks like the only way I can pull this off is as follows:
#Library('my-sandbox-libs#dev') sandbox_lib
pipeline {
agent any
stages {
stage( "1" ) { steps { script { echo "hello" } } }
stage( "2" ) {
steps {
script {
try {
my_lib.v5(my_lib.getBitbucketCred())
}
catch(Exception e) {
echo "Jenkinsfile: ${e.toString()}"
throw e
}
}
}
}
stage( "3" ) { steps { script { echo "world" } } }
}
}
// vars/my_lib.groovy
import groovy.json.JsonOutput
def getBitbucketCred() {
withCredentials([string(credentialsId: 'bitbucket_cred_id',
variable: 'auth_token')]) {
return auth_token
}
}
def v5(auth_token) {
def post = new URL("https://bitbucket.company.com/rest/build-status/1.0/commits/86c36485c0cbf956a62cbc1c370f1f3eecc8665d").openConnection();
def dict = [:]
dict.state = "INPROGRESS"
dict.key = "foo_42"
dict.url = "http://url/to/nowhere"
def message = JsonOutput.toJson(dict).toString()
post.setRequestMethod("POST")
post.setDoOutput(true)
post.setRequestProperty("Content-Type", "application/json")
post.setRequestProperty("Authorization", "Bearer " + auth_token)
post.getOutputStream().write(message.getBytes("UTF-8"));
def postRC = post.getResponseCode();
println(postRC);
if (postRC.equals(200)) {
println(post.getInputStream().getText());
}
}
Specifically, I must invoke withCredentials() completely separately from the scope of the function that uses URL and/or HttpURLConnection.
I don't know if this is considered acceptable in Jenkins/Groovy, but I'm dissatisfied with the inability to call withCredentials() from the v5() function itself. I'm also unable to call a withCredentials() wrapper function from v5().
When I was trying to call withCredentials() either directly in v5() or from a wrapper function called by v5(), I tried every combination of #NonCPS between v5() and the wrapper function, and that didn't work. I also tried explicitly setting the URL and HttpURLConnection objects to null before the end of the function (as suggested at Jenkins/Groovy: Why does withCredentials() introduce a NotSerializableException exception?), and that didn't work either.
I'd be disappointed in Jenkins/Groovy if this is the only solution. This feels like an artificial limit on how one can organize one's code.
Updating with more detail in response to #daggett:
RE: calling withCredentials() directly from my_lib.v5() or calling it from a wrapper function, let's start with my_lib.groovy set up as follows (let me also take the opportunity to give the functions better names):
def withCredWrapper() {
withCredentials([string(credentialsId: 'bitbucket_cred_id',
variable: 'auth_token')]) {
return auth_token
}
}
def callRestFunc() {
def post = new URL("https://bitbucket.company.com/rest/build-status/1.0/commits/86c36485c0cbf956a62cbc1c370f1f3eecc8665d").openConnection();
def dict = [:]
dict.state = "INPROGRESS"
dict.key = "foo_42"
dict.url = "http://url/to/nowhere"
def message = JsonOutput.toJson(dict).toString()
post.setRequestMethod("POST")
post.setDoOutput(true)
post.setRequestProperty("Content-Type", "application/json")
// version 1:
withCredentials([string(credentialsId: 'bb_auth_bearer_token_cred_id',
variable: 'auth_token')]) {
post.setRequestProperty("Authorization", "Bearer " + auth_token)
}
// version 2:
//post.setRequestProperty("Authorization", "Bearer " + withCredWrapper())
post.getOutputStream().write(message.getBytes("UTF-8"));
def postRC = post.getResponseCode();
println(postRC);
if (postRC.equals(200)) {
println(post.getInputStream().getText());
}
}
With the above code, function callRestFunc() can either call withCredentials() directly, as above, or indirectly by the wrapper function withCredWrapper(), i.e.:
...
// version 1:
//withCredentials([string(credentialsId: 'bb_auth_bearer_token_cred_id',
// variable: 'auth_token')]) {
// post.setRequestProperty("Authorization", "Bearer " + auth_token)
//}
// version 2:
post.setRequestProperty("Authorization", "Bearer " + withCredWrapper())
...
Further, #NonCPS can be applied to one of withCredWrapper() or callRestFunc(), both, or neither.
Below are the specific failures with all 8 combinations thereof:
1.
def withCredWrapper() {
...
}
def callRestFunc() {
...
// version 1:
withCredentials(...)
...
}
Failure: Jenkinsfile: java.io.NotSerializableException: sun.net.www.protocol.https.HttpsURLConnectionImpl
2.
def withCredWrapper() {
...
}
#NonCPS
def callRestFunc() {
...
// version 1:
withCredentials(...)
...
}
Failure: expected to call my_lib.callRestFunc but wound up catching withCredentials; see: https://jenkins.io/redirect/pipeline-cps-method-mismatches/ Masking supported pattern matches of $auth_token, Jenkinsfile: java.io.NotSerializableException: sun.net.www.protocol.https.HttpsURLConnectionImpl
3.
#NonCPS
def withCredWrapper() {
...
}
def callRestFunc() {
...
// version 1:
withCredentials(...)
...
}
Failure: Jenkinsfile: java.io.NotSerializableException: sun.net.www.protocol.https.HttpsURLConnectionImpl
4.
#NonCPS
def withCredWrapper() {
...
}
#NonCPS
def callRestFunc() {
...
// version 1:
withCredentials(...)
...
}
Failure: expected to call my_lib.callRestFunc but wound up catching withCredentials; see: https://jenkins.io/redirect/pipeline-cps-method-mismatches/ Masking supported pattern matches of $auth_token, Jenkinsfile: java.io.NotSerializableException: sun.net.www.protocol.https.HttpsURLConnectionImpl
5.
def withCredWrapper() {
...
}
def callRestFunc() {
...
// version 2:
post.setRequestProperty("Authorization", "Bearer " + withCredWrapper())
...
}
Failure: Jenkinsfile: java.io.NotSerializableException: sun.net.www.protocol.https.HttpsURLConnectionImpl
6.
def withCredWrapper() {
...
}
#NonCPS
def callRestFunc() {
...
// version 2:
post.setRequestProperty("Authorization", "Bearer " + withCredWrapper())
...
}
Failure: expected to call my_lib.callRestFunc but wound up catching my_lib.withCredWrapper; see: https://jenkins.io/redirect/pipeline-cps-method-mismatches/
7.
#NonCPS
def withCredWrapper() {
...
}
def callRestFunc() {
...
// version 2:
post.setRequestProperty("Authorization", "Bearer " + withCredWrapper())
...
}
Failure: expected to call my_lib.withCredWrapper but wound up catching withCredentials; see: https://jenkins.io/redirect/pipeline-cps-method-mismatches/ Masking supported pattern matches of $auth_token, Jenkinsfile: java.io.NotSerializableException: sun.net.www.protocol.https.HttpsURLConnectionImpl
8.
#NonCPS
def withCredWrapper() {
...
}
#NonCPS
def callRestFunc() {
...
// version 2:
post.setRequestProperty("Authorization", "Bearer " + withCredWrapper())
...
}
Failure: expected to call my_lib.callRestFunc but wound up catching withCredentials; see: https://jenkins.io/redirect/pipeline-cps-method-mismatches/
I am trying to upload a large file (90 MB for now) to S3 using Akka HTTP with the Alpakka S3 connector. It works fine for small files (25 MB), but when I try to upload a large file (90 MB) I get the following error:
akka.http.scaladsl.model.ParsingException: Unexpected end of multipart entity
at akka.http.scaladsl.unmarshalling.MultipartUnmarshallers$$anonfun$1.applyOrElse(MultipartUnmarshallers.scala:108)
at akka.http.scaladsl.unmarshalling.MultipartUnmarshallers$$anonfun$1.applyOrElse(MultipartUnmarshallers.scala:103)
at akka.stream.impl.fusing.Collect$$anon$6.$anonfun$wrappedPf$1(Ops.scala:227)
at akka.stream.impl.fusing.SupervisedGraphStageLogic.withSupervision(Ops.scala:186)
at akka.stream.impl.fusing.Collect$$anon$6.onPush(Ops.scala:229)
at akka.stream.impl.fusing.GraphInterpreter.processPush(GraphInterpreter.scala:523)
at akka.stream.impl.fusing.GraphInterpreter.processEvent(GraphInterpreter.scala:510)
at akka.stream.impl.fusing.GraphInterpreter.execute(GraphInterpreter.scala:376)
at akka.stream.impl.fusing.GraphInterpreterShell.runBatch(ActorGraphInterpreter.scala:606)
at akka.stream.impl.fusing.GraphInterpreterShell$AsyncInput.execute(ActorGraphInterpreter.scala:485)
at akka.stream.impl.fusing.GraphInterpreterShell.processEvent(ActorGraphInterpreter.scala:581)
at akka.stream.impl.fusing.ActorGraphInterpreter.akka$stream$impl$fusing$ActorGraphInterpreter$$processEvent(ActorGraphInterpreter.scala:749)
at akka.stream.impl.fusing.ActorGraphInterpreter.akka$stream$impl$fusing$ActorGraphInterpreter$$shortCircuitBatch(ActorGraphInterpreter.scala:739)
at akka.stream.impl.fusing.ActorGraphInterpreter$$anonfun$receive$1.applyOrElse(ActorGraphInterpreter.scala:765)
at akka.actor.Actor.aroundReceive(Actor.scala:539)
at akka.actor.Actor.aroundReceive$(Actor.scala:537)
at akka.stream.impl.fusing.ActorGraphInterpreter.aroundReceive(ActorGraphInterpreter.scala:671)
at akka.actor.ActorCell.receiveMessage(ActorCell.scala:614)
at akka.actor.ActorCell.invoke(ActorCell.scala:583)
at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:268)
at akka.dispatch.Mailbox.run(Mailbox.scala:229)
at akka.dispatch.Mailbox.exec(Mailbox.scala:241)
at akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
Although I get the success message at the end, the file is not uploaded completely; only 45-50 MB of it gets uploaded.
I am using the code below:
S3Utility.scala
class S3Utility(implicit as: ActorSystem, m: Materializer) {
private val bucketName = "test"
def sink(fileInfo: FileInfo): Sink[ByteString, Future[MultipartUploadResult]] = {
val fileName = fileInfo.fileName
S3.multipartUpload(bucketName, fileName)
}
}
Routes:
def uploadLargeFile: Route =
post {
path("import" / "file") {
extractMaterializer { implicit materializer =>
withoutSizeLimit {
fileUpload("file") {
case (metadata, byteSource) =>
logger.info(s"Request received to import large file: ${metadata.fileName}")
val uploadFuture = byteSource.runWith(s3Utility.sink(metadata))
onComplete(uploadFuture) {
case Success(result) =>
logger.info(s"Successfully uploaded file")
complete(StatusCodes.OK)
case Failure(ex) =>
println(ex, "Error in uploading file")
complete(StatusCodes.FailedDependency, ex.getMessage)
}
}
}
}
}
}
Any help would be appreciated. Thanks.
Strategy 1
You can break the file into smaller chunks and upload them as a multipart upload; here is sample code (using the AWS SDK for Java):
AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
.withEndpointConfiguration(new AwsClientBuilder.EndpointConfiguration("some-kind-of-endpoint"))
.withCredentials(new AWSStaticCredentialsProvider(new BasicAWSCredentials("user", "pass")))
.disableChunkedEncoding()
.withPathStyleAccessEnabled(true)
.build();
// Create a list of UploadPartResponse objects. You get one of these
// for each part upload.
List<PartETag> partETags = new ArrayList<PartETag>();
// Step 1: Initialize.
InitiateMultipartUploadRequest initRequest = new
InitiateMultipartUploadRequest("bucket", "key");
InitiateMultipartUploadResult initResponse =
s3Client.initiateMultipartUpload(initRequest);
File file = new File("filepath");
long contentLength = file.length();
long partSize = 5242880; // Set part size to 5 MB.
try {
// Step 2: Upload parts.
long filePosition = 0;
for (int i = 1; filePosition < contentLength; i++) {
// Last part can be less than 5 MB. Adjust part size.
partSize = Math.min(partSize, (contentLength - filePosition));
// Create a request to upload a part.
UploadPartRequest uploadRequest = new UploadPartRequest()
.withBucketName("bucket").withKey("key")
.withUploadId(initResponse.getUploadId()).withPartNumber(i)
.withFileOffset(filePosition)
.withFile(file)
.withPartSize(partSize);
// Upload part and add response to our list.
partETags.add(
s3Client.uploadPart(uploadRequest).getPartETag());
filePosition += partSize;
}
// Step 3: Complete.
CompleteMultipartUploadRequest compRequest = new
CompleteMultipartUploadRequest(
"bucket",
"key",
initResponse.getUploadId(),
partETags);
s3Client.completeMultipartUpload(compRequest);
} catch (Exception e) {
s3Client.abortMultipartUpload(new AbortMultipartUploadRequest(
"bucket", "key", initResponse.getUploadId()));
}
Strategy 2
Increase the idle-timeout of the Akka HTTP server (or simply set it to infinite), like the following:
akka.http.server.idle-timeout=infinite
This increases the period for which the server tolerates an idle connection; by default it is 60 seconds. If the server cannot receive the whole file within that period, it closes the connection, producing the "Unexpected end of multipart entity" error.
I'm trying to set up Alpakka S3 for file uploads. Here are my configs:
alpakka s3 dependency:
...
"com.lightbend.akka" %% "akka-stream-alpakka-s3" % "0.20"
...
Here is application.conf:
akka.stream.alpakka.s3 {
buffer = "memory"
proxy {
host = ""
port = 8000
secure = true
}
aws {
credentials {
provider = default
}
}
path-style-access = false
list-bucket-api-version = 2
}
File upload code example:
private val awsCredentials = new BasicAWSCredentials("my_key", "my_secret_key")
private val awsCredentialsProvider = new AWSStaticCredentialsProvider(awsCredentials)
private val regionProvider = new AwsRegionProvider { def getRegion: String = "us-east-1" }
private val settings = new S3Settings(MemoryBufferType, None, awsCredentialsProvider, regionProvider, false, None, ListBucketVersion2)
private val s3Client = new S3Client(settings)(system, materializer)
val fileSource = Source.single(ByteString("ololo blabla bla"))
val fileName = UUID.randomUUID().toString
val s3Sink: Sink[ByteString, Future[MultipartUploadResult]] = s3Client.multipartUpload("my_basket", fileName)
fileSource.runWith(s3Sink)
.map {
result => println(s"${result.location}")
} recover {
case ex: Exception => println(s"$ex")
}
When I run this code I get:
javax.net.ssl.SSLHandshakeException: General SSLEngine problem
What could be the reason?
The certificate problem arises for bucket names containing dots: with virtual-host-style access the bucket name becomes part of the TLS hostname, and the wildcard certificate *.s3.amazonaws.com matches only a single label, so hostname verification fails for names like my.bucket.
You may switch to
akka.stream.alpakka.s3.path-style-access = true to get rid of this.
We're considering making it the default: https://github.com/akka/alpakka/issues/1152
I would like to merge a local file, /opt/one.txt, with a file on my HDFS, hdfs://localhost:54310/dummy/two.txt.
one.txt contains: f,g,h
two.txt contains: 2424244r
My code:
val cfg = new Configuration()
cfg.addResource(new Path("/usr/local/hadoop/etc/hadoop/core-site.xml"))
cfg.addResource(new Path("/usr/local/hadoop/etc/hadoop/hdfs-site.xml"))
cfg.addResource(new Path("/usr/local/hadoop/etc/hadoop/mapred-site.xml"))
try
{
val srcPath = "/opt/one.txt"
val dstPath = "/dumCBF/two.txt"
val srcFS = FileSystem.get(URI.create(srcPath), cfg)
val dstFS = FileSystem.get(URI.create(dstPath), cfg)
FileUtil.copyMerge(srcFS,
new Path(srcPath),
dstFS,
new Path(dstPath),
true,
cfg,
null)
println("end proses")
}
catch
{
case m:Exception => m.printStackTrace()
case k:Throwable => k.printStackTrace()
}
I was following this tutorial: http://deploymentzone.com/2015/01/30/spark-and-merged-csv-files/
It's not working at all; the error is below:
java.io.FileNotFoundException: File does not exist: /opt/one.txt
I don't know why it reports that error; the file one.txt does exist.
I then added some code to check whether the file exists:
if(new File(srcPath).exists()) println("file is exist")
Any ideas or references? Thanks!
EDIT 1,2 : typo extensions
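One likely cause, offered as a guess: FileSystem.get(URI.create("/opt/one.txt"), cfg) resolves a scheme-less path against fs.defaultFS (HDFS in this setup), so Hadoop looks for /opt/one.txt on HDFS rather than on the local disk, while the java.io.File existence check looks at the local disk. The scheme behaviour can be seen with plain java.net.URI; the commented lines sketch the usual fix (FileSystem.getLocal or an explicit file:// URI), assuming the Hadoop client is on the classpath:

```scala
import java.net.URI

// A scheme-less path carries no filesystem information, so Hadoop falls back
// to fs.defaultFS (HDFS here) when resolving it.
assert(URI.create("/opt/one.txt").getScheme == null)
assert(URI.create("file:///opt/one.txt").getScheme == "file")
assert(URI.create("hdfs://localhost:54310/dummy/two.txt").getScheme == "hdfs")

// Hypothetical fix sketch (requires org.apache.hadoop on the classpath):
//   val srcFS = FileSystem.getLocal(cfg)                            // local FS
//   val dstFS = FileSystem.get(URI.create("hdfs://localhost:54310/"), cfg)
```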
I am using Play Framework 2.3.x with the reactivemongo-extensions JSON DAO. Following is my code to fetch data from the DB:
def getStoredAccessToken(authInfo: AuthInfo[User]) = {
println(">>>>>>>>>>>>>>>>>>>>>>: BEFORE"); //$doc("clientId" $eq authInfo.user.email, "userId" $eq authInfo.user._id.get)
var future = accessTokenService.findRandom(Json.obj("clientId" -> authInfo.user.email, "userId" -> authInfo.user._id.get));
println(">>>>>>>>>>>>>>>>>>>>>>: AFTER: "+future);
future.map { option => {
println("*************************** ")
println("***************************: "+option.isEmpty)
if (!option.isEmpty){
var accessToken = option.get;println(">>>>>>>>>>>>>>>>>>>>>>: BEFORE VALUE");
var value = Crypto.validateToken(accessToken.createdAt.value)
println(">>>>>>>>>>>>>>>>>>>>>>: "+value);
Some(scalaoauth2.provider.AccessToken(accessToken.accessToken, accessToken.refreshToken, authInfo.scope,
Some(value), new Date(accessToken.createdAt.value)))
}else{
Option.empty
}
}}
}
When I was using the BSON DAO and BSONDocument for fetching the data, this code ran successfully, but after converting to the JSON DAO I get the following error.
Note: sometimes this code runs, but sometimes it throws an exception after converting to JSON:
play - Cannot invoke the action, eventually got an error: java.lang.IllegalArgumentException: bound must be positive
application -
Following are the application logs with the full exception stack trace:
>>>>>>>>>>>>>>>>>>>>>>: BEFORE
>>>>>>>>>>>>>>>>>>>>>>: AFTER: scala.concurrent.impl.Promise$DefaultPromise#7f4703e3
play - Cannot invoke the action, eventually got an error: java.lang.IllegalArgumentException: bound must be positive
application -
! #6m1520jff - Internal server error, for (POST) [/oauth2/token] ->
play.api.Application$$anon$1: Execution exception[[IllegalArgumentException: bound must be positive]]
at play.api.Application$class.handleError(Application.scala:296) ~[play_2.11-2.3.8.jar:2.3.8]
at play.api.DefaultApplication.handleError(Application.scala:402) [play_2.11-2.3.8.jar:2.3.8]
at play.core.server.netty.PlayDefaultUpstreamHandler$$anonfun$3$$anonfun$applyOrElse$4.apply(PlayDefaultUpstreamHandler.scala:320) [play_2.11-2.3.8.jar:2.3.8]
at play.core.server.netty.PlayDefaultUpstreamHandler$$anonfun$3$$anonfun$applyOrElse$4.apply(PlayDefaultUpstreamHandler.scala:320) [play_2.11-2.3.8.jar:2.3.8]
at scala.Option.map(Option.scala:146) [scala-library-2.11.6.jar:na]
Caused by: java.lang.IllegalArgumentException: bound must be positive
at java.util.Random.nextInt(Random.java:388) ~[na:1.8.0_40]
at scala.util.Random.nextInt(Random.scala:66) ~[scala-library-2.11.6.jar:na]
The problem is solved, but I am not sure why it occurs. I think there is a problem with the reactivemongo-extensions JSON DAO library, because when I use findOne instead of findRandom the code runs successfully, while findRandom works fine with the BSON DAO. I still haven't found the exact cause, but the following is the resolved code.
def getStoredAccessToken(authInfo: AuthInfo[User]) = {
println(authInfo.user.email+" ---- "+authInfo.user._id.get)
var future = accessTokenService.findOne($doc("clientId" $eq authInfo.user.email, "userId" $eq authInfo.user._id.get)); // use findOne instead of findRandom in JsonDao
future.map { option => {
if (!option.isEmpty){
var accessToken = option.get;
var value = Crypto.validateToken(accessToken.createdAt.value)
Some(scalaoauth2.provider.AccessToken(accessToken.accessToken, accessToken.refreshToken, authInfo.scope,
Some(value), new Date(accessToken.createdAt.value)))
}else{
Option.empty
}
}}
}
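A plausible explanation, consistent with the stack trace (java.util.Random.nextInt(Random.java:388)): findRandom presumably picks a random offset with Random.nextInt(count), and when the query matches zero documents the bound is 0, which nextInt rejects. This is a guess about the library's internals, but the failure mode itself is easy to reproduce with the standard library:

```scala
// java.util.Random.nextInt(bound) requires bound > 0; passing a zero match
// count as the bound reproduces the "bound must be positive" failure.
val rng = new java.util.Random()

// Guarded version: return None instead of crashing when nothing matched.
def pickRandomIndex(matchCount: Int): Option[Int] =
  if (matchCount > 0) Some(rng.nextInt(matchCount)) else None

val threw =
  try { rng.nextInt(0); false }
  catch { case _: IllegalArgumentException => true } // message: "bound must be positive"

assert(threw)
assert(pickRandomIndex(0).isEmpty)
assert(pickRandomIndex(5).exists(i => i >= 0 && i < 5))
```

That would also explain why findOne works: it never draws a random index, so an empty result set is simply an empty Option rather than an exception.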