jclouds Swift API: upload an object directly from an InputStream

The following snippet uploads files to the object store without any problem:
public void uploadObjectFromStream(String container, String name, InputStream stream) {
    SwiftApi swiftApi = getApi();
    createContainerIfAbsent(container, swiftApi);
    ObjectApi objectApi = swiftApi.getObjectApiForRegionAndContainer(REGION, container);
    Payload payload = new InputStreamPayload(stream);
    objectApi.put(name, payload, PutOptions.Builder.metadata(
            ImmutableMap.of("X-Object-Meta-key1", "value3", "X-Object-Meta-key2", "test"))); // test
}
If I try to upload a ~10 MB file I get this error:
o.j.h.i.HttpWire [SLF4JLogger.java:56] over limit 10485760/262144: wrote temp file
java.lang.OutOfMemoryError: Java heap space
The question is whether I can upload an object from an InputStream to the object store without saving the stream in application memory or on the file system.

jclouds does not buffer the InputStream unless you enable wire logging. Wire logging should generally be disabled unless you are debugging an issue.
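If the stream length is known in advance, you can also set it on the payload so the request carries an explicit Content-Length. A minimal sketch, assuming the caller can supply the length (getApi, createContainerIfAbsent and REGION are the question's own helpers; the extra length parameter is an assumption):
// Sketch only: same flow as the snippet above, with the stream length declared up front.
public void uploadObjectFromStream(String container, String name, InputStream stream, long length) {
    SwiftApi swiftApi = getApi();
    createContainerIfAbsent(container, swiftApi);
    ObjectApi objectApi = swiftApi.getObjectApiForRegionAndContainer(REGION, container);

    Payload payload = Payloads.newInputStreamPayload(stream);
    // Declaring the length means the HTTP layer does not have to buffer the body to size it.
    payload.getContentMetadata().setContentLength(length);
    objectApi.put(name, payload);
}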

Related

The API server cannot create the file when it receives a PUT request

I am new to APIs. I have a Java API server. In the PUT method on the server side, I receive a string and create an ARFF file from that string. Then I do some processing on that file and return the result, which is another string.
The problem is that when I make a PUT request the file is not created in the local path, but when I run the code in a local application for testing, the file is created, so the code works.
I have to generate a file from that string because I am using a machine learning algorithm that only works with files. Does anyone know why that is?
The method ClassifyText is called from the PUT method on the server side:
public static int ClassifyText(String trained_model, String text) throws FileNotFoundException, IOException, Exception {
    String evaluation_file = "..\\toBeClassified_text.arff";
    // create an ARFF file for the text
    FileWriter fileWriter = new FileWriter(new File(evaluation_file));
    PrintWriter printWriter = new PrintWriter(fileWriter);
The problem was solved by modifying this line:
String evaluation_file = "D:\\toBeClassified_text.arff";

Is it possible to stream data (upload) to a Google Cloud Storage bucket and allow downloading it at the same time?

I have tried using the Cloud API to upload a 100 MB file to the bucket with the code below, but during the upload, when I refresh the bucket in the Google Cloud console, I cannot see the new file until the upload is finished. I would like to upload real-time video encoded in H.264 to Cloud Storage, so the size is unknown, and at the same time other users should be able to start downloading the file even while it is still uploading. Is that possible?
Test code:
File tempFile = new File("StorageSample");
RandomAccessFile raf = new RandomAccessFile(tempFile, "rw");
try
{
    raf.setLength(1000 * 1000 * 100);
}
finally
{
    raf.close();
}
uploadFile(TEST_FILENAME, "text/plain", tempFile, bucketName);
public static void uploadFile(
        String name, String contentType, File file, String bucketName)
        throws IOException, GeneralSecurityException
{
    InputStreamContent contentStream = new InputStreamContent(
            contentType, new FileInputStream(file));
    // Setting the length improves upload performance
    contentStream.setLength(file.length());
    StorageObject objectMetadata = new StorageObject()
            // Set the destination object name
            .setName(name)
            // Set the access control list to publicly read-only
            .setAcl(Arrays.asList(
                    new ObjectAccessControl().setEntity("allAuthenticatedUsers").setRole("READER"))); //allUsers//
    // Do the insert
    Storage client = StorageFactory.getService();
    Storage.Objects.Insert insertRequest = client.objects().insert(
            bucketName, objectMetadata, contentStream);
    insertRequest.getMediaHttpUploader().setDirectUploadEnabled(false);
    insertRequest.execute();
}
Unfortunately it's not possible, as stated in the documentation:
Objects are immutable, which means that an uploaded object cannot
change throughout its storage lifetime. An object's storage lifetime
is the time between successful object creation (upload) and successful
object deletion.
This means that an object in Cloud Storage starts to exist only once the upload has finished, so you cannot access the object until the upload has completed.

How to close an InputStream which is fed into a Response (JAX-RS)

@GET
@Path("/{id}/content")
@Produces({ "application/octet-stream" })
public Response getDocumentContentById(@PathParam("id") String docId) {
    InputStream is = getDocumentStream(); // some method which gives stream
    ResponseBuilder responseBuilder = Response.ok(is);
    responseBuilder.header("Content-Disposition", "attachment; filename=" + fileName);
    return responseBuilder.build();
}
How can I close the InputStream is here? Or does something (JAX-RS) close it automatically? Please give me some information. Thank you.
When you want to stream a custom response, the most reliable way I've found is to return an object that contains the InputStream (or which can obtain the stream in some other way at some point), and to define a MessageBodyWriter provider that will do the actual streaming at the right time.
For example, this code is part of Apache Taverna, and it streams back the zipped contents of a directory. All that the main code needs to do to use it is to return a ZipStream as the response (which can be packaged in a Response or not) and to ensure that it is dealing with returning the application/zip content type. The final point to note is that since this is dealing with CXF, you need to manually register the provider; unlike with Glassfish, they are not automatically picked up. This is a good thing in sophisticated scenarios, but it does mean that you need to do the registration.
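As a rough illustration of that pattern (not the Taverna code; every class name below is made up): the resource method returns a small wrapper around the InputStream, and a MessageBodyWriter provider copies it to the response and closes it once the container has finished writing.
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.lang.annotation.Annotation;
import java.lang.reflect.Type;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.MultivaluedMap;
import javax.ws.rs.ext.MessageBodyWriter;
import javax.ws.rs.ext.Provider;

// The wrapper the resource method returns instead of the raw InputStream.
class StreamedContent {
    private final InputStream stream;
    StreamedContent(InputStream stream) { this.stream = stream; }
    InputStream getStream() { return stream; }
}

@Provider
@Produces("application/octet-stream")
public class StreamedContentWriter implements MessageBodyWriter<StreamedContent> {

    @Override
    public boolean isWriteable(Class<?> type, Type genericType,
                               Annotation[] annotations, MediaType mediaType) {
        return StreamedContent.class.isAssignableFrom(type);
    }

    @Override
    public long getSize(StreamedContent content, Class<?> type, Type genericType,
                        Annotation[] annotations, MediaType mediaType) {
        return -1; // length unknown; let the container decide how to send it
    }

    @Override
    public void writeTo(StreamedContent content, Class<?> type, Type genericType,
                        Annotation[] annotations, MediaType mediaType,
                        MultivaluedMap<String, Object> httpHeaders,
                        OutputStream entityStream) throws IOException {
        // Copy the wrapped stream into the response, then close it; this runs
        // only once JAX-RS is actually writing the entity.
        try (InputStream in = content.getStream()) {
            byte[] buffer = new byte[8192];
            int n;
            while ((n = in.read(buffer)) != -1) {
                entityStream.write(buffer, 0, n);
            }
        }
    }
}
The resource method would then end with return Response.ok(new StreamedContent(is)).header("Content-Disposition", "attachment; filename=" + fileName).build(); and the stream is closed for you after the response body has been written.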

Multiple s3 buckets in Filepicker.io

I need to upload to multiple s3 buckets with filepicker.io. I found a tweet that indicated that there was a hacky, but possible, way to do this. Support hasn't gotten back to me yet, so I'm hoping that someone here already knows the answer!
Have you tried generating a second application/API key? It looks like they lock your S3/AWS credentials to an application/API key rather than directly to the account.
Support just got back to me. There's no way to do this besides creating multiple applications, which is okay if you are just switching between prod/staging/dev, but not a good solution if you have to upload to arbitrary buckets.
My solution is to execute a PUT request with the x-amz-copy-source header after the file has been uploaded, which copies it to the correct bucket.
This is pretty hacky as it requires two extra requests per file -- one filepicker.stat call and one more call to S3 (or your server).
@Ben
I am developing code with the same issue of files needing to go into many buckets. I'm working in ASP.NET.
What I have done is have one Filepicker 'application' with its own S3 bucket.
I already had a callback to the server in the JavaScript onSuccess() function (which is passed as a parameter to filepicker.store()). This callback needed to be there to do some book-keeping anyway.
So I have just added an extra bit to the server-side callback code which uses the AWS SDK to copy the object from the bucket Filepicker uploads it to, to its final destination bucket.
This is my C# code for moving, or rather copying, an object between buckets:
public bool MoveObject(string bucket1, string key1, string bucket2, string key2 = null)
{
    bool success = false;
    if (key2 == null) key2 = key1;
    Logger logger = new Logger(); // my logging system
    try
    {
        RegionEndpoint region = RegionEndpoint.EUWest1; // use your region here
        using (AmazonS3Client s3Client = new AmazonS3Client(region))
        {
            // TODO: CheckForBucketFunction
            CopyObjectRequest request = new CopyObjectRequest();
            request.SourceBucket = bucket1;
            request.SourceKey = key1;
            request.DestinationBucket = bucket2;
            request.DestinationKey = key2;
            S3Response response = s3Client.CopyObject(request);
            logger.Info2Log("response xml = \n{0}\n", response.ResponseXml);
            response.Dispose();
            success = true;
        }
    }
    catch (AmazonS3Exception ex)
    {
        logger.Info2Log("Error copying file between buckets: {0} - {1}",
            ex.ErrorCode, ex.Message);
        success = false;
    }
    return success;
}
There are AWS SDKs for other server languages, and the good news is that Amazon doesn't charge for copying objects between buckets in the same region.
Now I just have to decide how to delete the object from the filepicker application bucket. I could do it on the server using more AWS SDK code but that will be messy as it leaves links to the object in the filepicker console. Or I could do it from the browser using filepicker code.

How to see the request variables or files sent from PhoneGap to an ASP.NET MVC controller

I have written the following code:
public JsonResult media(HttpPostedFileBase file)
{
    // ===== done with some code =====
}
Here I always get the file as null.
Note:
file is the file submitted from PhoneGap's JSON method.
My question is:
Is there any mechanism for decoding the encoded multipart file before reading it?
Got it :)
Use the request parameters to get the file input and change the method signature to
public JsonResult media()
Then read Request.Files, store it in an HttpFileCollectionBase, and you get the same file information that was posted from the PhoneGap API.