Vaadin Flow: How to consume dragged file - drag-and-drop

I have a section (DropTarget) where the user can drop several items from within my application.
This works fine.
Now I would also like to allow the user to drag files to that DropTarget.
The drop listener that I registered gets notified when I drag a file to the DropTarget, but - as far as I see - does not offer any possibility to consume the dragged file.
Anybody knows how to get this running?
Using Vaadin Flow 22.0.7.

When you create an Upload component, you can specify a Receiver. You can pass one as a constructor parameter or via upload.setReceiver(Receiver). There are different types of Receivers depending on your use case: you can use a MemoryBuffer if you are OK with keeping all of the data in server memory, but there are other options, like FileBuffer, as shown here: https://vaadin.com/docs/latest/ds/components/upload/#handling-uploaded-files-java-only. You can also implement your own Receiver.
The Receiver gives you access to the actual streaming content of the file. Typically, you want to access the data in some stage of the upload process, which you can do through different upload listeners. If you just want to deal with it once the upload is fully complete, you can use a SucceededListener:
MemoryBuffer memoryBuffer = new MemoryBuffer();
Upload upload = new Upload(memoryBuffer);
upload.addSucceededListener(event -> {
    InputStream fileData = memoryBuffer.getInputStream();
    String fileName = event.getFileName();
    File targetFile = new File("C:/tmp/" + fileName);
    // try-with-resources closes the stream and avoids the NPE that the two
    // separate try blocks could cause when the target file cannot be created
    try (OutputStream outStream = new FileOutputStream(targetFile)) {
        outStream.write(fileData.readAllBytes());
    } catch (IOException e) {
        e.printStackTrace();
    }
});
Implementing your own Receiver gives you more flexibility on how you want to handle the OutputStream from the upload, and of course you might not want to save the upload as a physical file, but put it directly in a database for example.
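As a rough illustration (not from the Vaadin docs; the temp-file location and class name are assumptions), a minimal custom Receiver could write each upload to a temporary file instead of holding it in memory:
import com.vaadin.flow.component.upload.Receiver;
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.io.UncheckedIOException;
import java.nio.file.Files;

public class TempFileReceiver implements Receiver {

    private File lastFile;

    @Override
    public OutputStream receiveUpload(String fileName, String mimeType) {
        try {
            // Write the incoming data to a temp file instead of keeping it in memory
            lastFile = Files.createTempFile("upload-", "-" + fileName).toFile();
            return new FileOutputStream(lastFile);
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public File getLastFile() {
        return lastFile;
    }
}
You would then pass an instance of it to the Upload constructor (or setReceiver) and read getLastFile() in your SucceededListener.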

Related

How to close InputStream which fed into Response(jax.rs)

@GET
@Path("/{id}/content")
@Produces({ "application/octet-stream" })
public Response getDocumentContentById(@PathParam("id") String docId) {
    InputStream is = getDocumentStream(); // some method which gives stream
    ResponseBuilder responseBuilder = Response.ok(is);
    responseBuilder.header("Content-Disposition", "attachment; filename=" + fileName);
    return responseBuilder.build();
}
Here, how can I close the InputStream is? Or does something (JAX-RS) close it automatically? Please give me some information. Thank you.
When you want to stream a custom response, the most reliable way I've found is to return an object that contains the InputStream (or that can obtain the stream in some other way later), and to define a MessageBodyWriter provider that does the actual streaming at the right time.
For example, this code is part of Apache Taverna, and it streams back the zipped contents of a directory. All that the main code needs to do to use it is to return a ZipStream as the response (which can be packaged in a Response or not) and to ensure that it is dealing with returning the application/zip content type. The final point to note is that since this is dealing with CXF, you need to manually register the provider; unlike with Glassfish, they are not automatically picked up. This is a good thing in sophisticated scenarios, but it does mean that you need to do the registration.
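As a rough sketch of that pattern (not the Taverna code; the DocumentStream wrapper type is a made-up example), a provider can copy the wrapped stream to the response and close it once it has been written:
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.lang.annotation.Annotation;
import java.lang.reflect.Type;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.MultivaluedMap;
import javax.ws.rs.ext.MessageBodyWriter;
import javax.ws.rs.ext.Provider;

// Hypothetical wrapper type returned by the resource method instead of a bare InputStream
class DocumentStream {
    final InputStream in;
    DocumentStream(InputStream in) { this.in = in; }
}

@Provider
public class DocumentStreamWriter implements MessageBodyWriter<DocumentStream> {

    @Override
    public boolean isWriteable(Class<?> type, Type genericType,
                               Annotation[] annotations, MediaType mediaType) {
        return DocumentStream.class.isAssignableFrom(type);
    }

    @Override
    public long getSize(DocumentStream doc, Class<?> type, Type genericType,
                        Annotation[] annotations, MediaType mediaType) {
        return -1; // length unknown; let the container handle it
    }

    @Override
    public void writeTo(DocumentStream doc, Class<?> type, Type genericType,
                        Annotation[] annotations, MediaType mediaType,
                        MultivaluedMap<String, Object> httpHeaders,
                        OutputStream entityStream) throws IOException {
        // Stream the content, then close the source so it is not leaked
        try (InputStream in = doc.in) {
            byte[] buffer = new byte[8192];
            int read;
            while ((read = in.read(buffer)) != -1) {
                entityStream.write(buffer, 0, read);
            }
        }
    }
}
The resource method would then return a DocumentStream (wrapped in a Response or not), and the provider takes care of closing the underlying stream after the response body has been written.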

Uploading BLOB/ArrayBuffer with Dropzone.js

Using SharePoint 2013 REST API, I'm successfully uploading files, such as .docx or .png's to a folder inside a document library using Dropzone.js. I have a function where I initialize my dropzone as follows:
myDropzone = new Dropzone("#dropzone");
myDropzone.on("complete", function (file) {
    myDropzone.removeFile(file);
});
myDropzone.options.autoProcessQueue = false;
myDropzone.on("addedfile", function (file) {
    $('.dz-message').hide();
    myDropzone.options.url = String.format(
        "{0}/{1}/_api/web/getfolderbyserverrelativeurl('{2}')/files" +
        "/add(overwrite=true, url='{3}')",
        _spPageContextInfo.siteAbsoluteUrl, _spPageContextInfo.webServerRelativeUrl,
        folder.d.ServerRelativeUrl, file.name);
});
myDropzone.options.previewTemplate = $('#preview-template').html();
myDropzone.on('sending', function (file, xhr, fd) {
    xhr.setRequestHeader('X-RequestDigest', $('#__REQUESTDIGEST').val());
});
The problem I've encountered is that almost all the files (PDF being the only one not) are shown as corrupt files when the upload is done. This is most likely due to the fact SharePoint requires that the file being uploaded is sent as an ArrayBuffer. MSDN Source
Using a regular Ajax POST and the method above to convert the file to an ArrayBuffer, I've successfully uploaded content to the SharePoint document library without the files getting corrupted. Now I would like to do the same while still using Dropzone.js, which adds a very nice touch to the interface.
I've looked into modifying the uploadFiles()-method in dropzone.js, but that seems drastic. I've also tried to figure out whether or not I can use the accept option in options but that seems like a dead end.
The two most similar problems with solutions are the ones linked below, where the first seems to be applicable in my case, but at the same time looks less "clean" than I would want to use.
The second one is for uploading images with a Base64 encoding.
1 - Simulating XHR to get ArrayBuffer
2 - Upload image as Base64 with Dropzone.js
So my question in a few less words is, when a file is added, how do I intercept this, convert the data to an arraybuffer, and then POST it using Dropzone.js?
This is a late answer to my own question, but it is the solution we went for in the end.
We kept dropzone.js just for the nice interface and some help functions, but we decided to do the actual file upload using $.ajax().
We read the file as an array buffer using the HTML5 FileReader
// This snippet runs inside a Promise executor, so resolve/reject are in scope
// (see the full getArrayBuffer helper in the answer below)
var reader = new FileReader();
reader.onloadend = function (e) {
    resolve(e.target.result);
};
reader.onerror = function (e) {
    reject(e.target.error);
};
reader.readAsArrayBuffer(file);
and then pass it as the data argument in the ajax options.
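For reference, a sketch of what that $.ajax() call might look like, assuming a Promise-returning getArrayBuffer helper like the one in the next answer and an illustrative 'Shared Documents' target folder; processData: false keeps jQuery from serializing the buffer:
// Hypothetical helper; folder path and function name are illustrative assumptions
function uploadToSharePoint(file, buffer, digest) {
    return $.ajax({
        url: _spPageContextInfo.webAbsoluteUrl +
            "/_api/web/getfolderbyserverrelativeurl('Shared Documents')" +
            "/files/add(overwrite=true, url='" + file.name + "')",
        type: "POST",
        data: buffer,            // the raw ArrayBuffer, not a FormData object
        processData: false,      // stop jQuery from trying to serialize the buffer
        headers: {
            "Accept": "application/json; odata=verbose",
            "X-RequestDigest": digest
        }
    });
}

getArrayBuffer(file).then(function (buffer) {
    return uploadToSharePoint(file, buffer, $('#__REQUESTDIGEST').val());
});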
I recently came across this exact issue and so did some investigation into the Dropzone library.
I managed to correct the upload process for SharePoint/Office 365 by monkey patching the Dropzone.prototype.submitRequest() method so that it uses my own getArrayBuffer method, which returns an ArrayBuffer using the FileReader API, before sending it via the Dropzone-generated XMLHttpRequest.
This lets me continue to use the features of the Dropzone API.
I have only tested this on a single auto upload, so multi-file upload will need further investigation.
Monkey patch
Dropzone.prototype.submitRequest = function (xhr, formData, files) {
    getArrayBuffer(files[0]).then(function (buffer) {
        return xhr.send(buffer);
    });
};
getArrayBuffer
function getArrayBuffer(file) {
    return new Promise(function (resolve, reject) {
        var reader = new FileReader();
        reader.onloadend = function (e) {
            resolve(e.target.result);
        };
        reader.onerror = function (e) {
            reject(e.target.error);
        };
        reader.readAsArrayBuffer(file);
    });
}
After the file is uploaded into SharePoint, I use the Dropzone 'success' event to update the file with metadata.

Multiple s3 buckets in Filepicker.io

I need to upload to multiple s3 buckets with filepicker.io. I found a tweet that indicated that there was a hacky, but possible, way to do this. Support hasn't gotten back to me yet, so I'm hoping that someone here already knows the answer!
Have you tried generating a second application/API key? It looks like they lock your S3/AWS credentials to an application/API key rather than directly to the account.
Support just got back to me. There's no way to do this besides creating multiple applications, which is okay if you are just switching between prod/staging/dev, but not a good solution if you have to upload to arbitrary buckets.
My solution is to execute a PUT request with the x-amz-copy-source header after the file has been uploaded, which copies it to the correct bucket.
This is pretty hacky as it requires two extra requests per file -- one filepicker.stat call and one more call to S3 (or your server).
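If the copy is done from your own server with the AWS SDK for JavaScript, a sketch (with made-up bucket names) looks like this; the SDK's copyObject call issues exactly that PUT with the x-amz-copy-source header for you:
// Node.js sketch using the AWS SDK v2; bucket names and key are illustrative
var AWS = require('aws-sdk');
var s3 = new AWS.S3({ region: 'us-east-1' });

s3.copyObject({
    CopySource: 'filepicker-upload-bucket/path/to/uploaded-file.png', // source bucket/key
    Bucket: 'final-destination-bucket',                               // target bucket
    Key: 'path/to/uploaded-file.png'
}, function (err, data) {
    if (err) console.error('Copy failed:', err);
    else console.log('Copy succeeded:', data.CopyObjectResult);
});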
@Ben
I am developing code with the same issue of files needing to go into many buckets. I'm working in ASP.NET.
What I have done is have one Filepicker 'application' with its own S3 bucket.
I already had a callback to the server in the JavaScript onSuccess() function (which is passed as a parameter to filepicker.store()). This callback needed to be there to do some book-keeping anyway.
So I have just added an extra bit to the server-side callback code which uses the AWS SDK to copy the object from the bucket Filepicker uploads it to, to its final destination bucket.
This is my C# code for moving, or rather copying, an object between buckets:
public bool MoveObject(string bucket1, string key1, string bucket2, string key2 = null)
{
    bool success = false;
    if (key2 == null) key2 = key1;
    Logger logger = new Logger(); // my logging system
    try
    {
        RegionEndpoint region = RegionEndpoint.EUWest1; // use your region here
        using (AmazonS3Client s3Client = new AmazonS3Client(region))
        {
            // TODO: CheckForBucketFunction
            CopyObjectRequest request = new CopyObjectRequest();
            request.SourceBucket = bucket1;
            request.SourceKey = key1;
            request.DestinationBucket = bucket2;
            request.DestinationKey = key2;
            S3Response response = s3Client.CopyObject(request);
            logger.Info2Log("response xml = \n{0}\n", response.ResponseXml);
            response.Dispose();
            success = true;
        }
    }
    catch (AmazonS3Exception ex)
    {
        logger.Info2Log("Error copying file between buckets: {0} - {1}",
            ex.ErrorCode, ex.Message);
        success = false;
    }
    return success;
}
There are AWS SDKs for other server languages and the good news is Amazon doesn't charge for copying objects between buckets in the same region.
Now I just have to decide how to delete the object from the filepicker application bucket. I could do it on the server using more AWS SDK code but that will be messy as it leaves links to the object in the filepicker console. Or I could do it from the browser using filepicker code.
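For completeness, the server-side delete would be another short SDK call, sketched here in the same style as MoveObject above:
public bool DeleteObject(string bucket, string key)
{
    try
    {
        RegionEndpoint region = RegionEndpoint.EUWest1; // use your region here
        using (AmazonS3Client s3Client = new AmazonS3Client(region))
        {
            DeleteObjectRequest request = new DeleteObjectRequest();
            request.BucketName = bucket;
            request.Key = key;
            s3Client.DeleteObject(request);
            return true;
        }
    }
    catch (AmazonS3Exception)
    {
        return false;
    }
}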

RPCManager RPCRequest how to capture the response in gwt

Hi, I have a requirement to capture data for validations. I am able to fetch the data using RPCRequest and RPCManager by pointing setActionUrl at the controller class; from there I create the service and DAO classes and can fetch the data into the controller class. But I am unable to get the data back into my grid; I want the data to be fetched into a variable. I am not using an asynchronous service. When I used an async method with the grid, I was able to fetch the data in the onSuccess() method, but how can I fetch the data into the grid without using it?
with regards
subodh
Here is an example from our ServiceImpl class to retrieve data from the DB:
public final String getDatas(final HashMap<String, String> param) {
    List<ShippingBean> result = null;
    JSONObject obj = new JSONObject();
    try {
        // retrieve data from DB
        result = dao.selectAll();
    }
    catch (BusinessException e) {
        throw new InvocationException("BusinessException occurs ...", e);
    }
    obj = JSONObject.fromObject(result);
    return obj.toString();
}
We use net.sf.json to serialize the result as JSON and return it to the presenter through an AsyncCallback. The data is then retrieved like this:
AsyncCallback<String> callback = new AsyncCallback<String>() {
    public void onFailure(final Throwable caught) {
        Window.alert("Error!");
    }
    public void onSuccess(final String result) {
        HTML html = new HTML(result.replace(" ", "-"));
        JSONValue value = JSONParser.parseLenient(html.getText());
        JSONWrapper json = new JSONWrapper(value);
        System.out.println(json.get(0).get("variableName").stringValue());
    }
};
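For context, a minimal sketch of the GWT service/async interface pair such a callback is handed to (the interface names and the "data" path are assumptions, not part of the code above):
import com.google.gwt.user.client.rpc.AsyncCallback;
import com.google.gwt.user.client.rpc.RemoteService;
import com.google.gwt.user.client.rpc.RemoteServiceRelativePath;
import java.util.HashMap;

// Synchronous interface, implemented by the ServiceImpl shown above
@RemoteServiceRelativePath("data")
public interface DataService extends RemoteService {
    String getDatas(HashMap<String, String> param);
}

// Async counterpart used on the client
public interface DataServiceAsync {
    void getDatas(HashMap<String, String> param, AsyncCallback<String> callback);
}
The presenter then obtains the async stub with DataServiceAsync service = GWT.create(DataService.class); and calls service.getDatas(params, callback);.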
I have done something similar in a project, retrieving data from a database and posting it to GWT. My database setup was Microsoft SQL 2012 with the Hibernate framework for retrieval. However, I created custom try/catch and if/else blocks for validation on the client side.
I used this tutorial on Hibernate to GWT to set up the transactions between the web application and the database, for both saving and retrieving. Their source code provides the web page modules for setting up the display, which I mimicked to fit my needs since I was not storing "records" or "users".
GWT does have a validation setup, but I spent 10 hours trying to figure out the client-side validation and gave up in favor of something much simpler, such as try/catch on the data being submitted, since I had no concern for the format of the numbers.
Google "GWT Validation" and you should find some documentation about it, but there isn't much to choose from since everything seems to be a copy of Google's documentation.
Link - Hibernate to GWT
Hope this helps or points you in the right direction towards your answer.

how can I check for an existing web folder

I work as an entry-level software tester and I was given a task to save my log files to a specific folder on my company website, which can only be accessed internally by company employees. So far I know how to save a file onto the site, but how would I check that the specific folder is already there before I save the file to it?
private void SaveLogsTogWeb(string file)
{
    try
    {
        // create WebClient object
        WebClient client = new WebClient();
        client.Credentials = CredentialCache.DefaultCredentials;
        client.UploadFile(@"http://myCompnay/MyProjects/TestLogs/" + file, "PUT", file);
        client.Dispose();
    }
    catch (Exception err)
    {
        MessageBox.Show(err.Message);
    }
}
Thanks in advance for the help.
Use this code:
if (!Directory.Exists({path}))
{
    // create the directory
}
It checks to see if the directory doesn't exist, and if it doesn't, then you can create it!
One way would be to put a dummy file in that folder (dummy.txt) and do an HTTP GET of the file. If you can successfully do that, you can then assume the folder exists (barring any virtual folders, etc.)
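A rough sketch of that check (the dummy file name and method name are illustrative, and it reuses the default credentials from the question; requires using System.Net):
private bool RemoteFolderExists(string folderUrl)
{
    try
    {
        // Probe a known dummy file kept in the target folder
        HttpWebRequest request = (HttpWebRequest)WebRequest.Create(folderUrl + "dummy.txt");
        request.Method = "HEAD"; // HEAD avoids downloading the body; use "GET" if HEAD is blocked
        request.Credentials = CredentialCache.DefaultCredentials;
        using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
        {
            return response.StatusCode == HttpStatusCode.OK;
        }
    }
    catch (WebException)
    {
        // a 404 (or any other failure) surfaces as a WebException
        return false;
    }
}
Calling RemoteFolderExists(@"http://myCompnay/MyProjects/TestLogs/") before SaveLogsTogWeb would then tell you whether the folder is reachable.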