AWS S3 storage does not give a valid URL after uploading file - flutter

I am new to AWS. I use Amplify to upload a video file to S3 storage using Flutter. I want to get the URL after uploading the video so that I can use it elsewhere. I use (await Amplify.Storage.getUrl(key)).url to get the URL, but when I open the link in a browser, it shows the following error:
This XML file does not appear to have any style information associated with it. The document tree is shown below.
<Error>
<Code>AccessDenied</Code>
<Message>No AWSAccessKey was presented.</Message>
<RequestId>AK49EW70AR8N1NVC</RequestId>
<HostId>Z6FBLU/GABRvJKFX827m3HoIKfVIpU0iXmH3gwSpcu04nNiqqEFjHZGGLn3VyyVNMY7ndK541ro=
</HostId>
</Error>
The link also doesn't work when I try to play the video using VideoPlayerController.network(videoUrl).
So how can I get a usable URL for the uploaded video file that can be used elsewhere? Thanks in advance.

Related

Unable to render OneNote image resource after appending preAuthenticated=true option in OneNote api call

https://github.com/microsoftgraph/microsoft-graph-docs/issues/2624
I am experiencing the same issue as the one above.
I am trying to save the content of a page with a reference to an image by calling https://graph.microsoft.com/v1.0/users/{userId}/onenote/pages/{pageId}/content?preAuthenticated=true
Per this - Downloading one note page with image content as HTML - appending "?preAuthenticated=true" when you do the fetch should make the image public.
But when I try to render the HTML, I get "Failed to load resource: the server responded with a status of 401 (Unauthorized)".
It appears that something is wrong with the official document: Get OneNote content and structure with Microsoft Graph.
We can see that the service root URL is https://graph.microsoft.com/{version}/{location}/onenote/.
But in all of the samples on that page, the URL is still https://www.onenote.com/api/v1.0/me/notes.
Currently, when you add ?preAuthenticated=true, you get a URL like this for an image on the page:
https://graph.microsoft.com/v1.0/users('{userID}')/onenote/resources/{resourceID}/content?publicAuth=true&mimeType=image/png
But when you try to access it in a browser, you will get a 401 error: "Access token is empty".
A workaround is to modify the URL to:
https://www.onenote.com/api/v1.0/resources/{resourceID}/content?publicAuth=true&mimeType=image/png
Then you will get the image.
https://github.com/microsoftgraph/microsoft-graph-docs/pull/4339/files
I think they removed support for it.
A bit off topic, but I figured out how to get the image to render.
https://learn.microsoft.com/en-us/graph/api/resource-get?view=graph-rest-1.0&tabs=http
When you call /onenote/pages/{id}/content, the image has a reference to a source like this:
src="https://graph.microsoft.com/v1.0/users({userId})/onenote/resources/{resourceId}/$value" along with data-src-type="image/jpeg"
Do a GET request to that endpoint and you'll get the image binary, convert the binary to base64, and then render the HTML with the src replaced by the base64 data URI, as sketched below.
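A minimal sketch of that flow in TypeScript (Node 18+), assuming you already have a Graph access token and the resource URL taken from the page's img src; the helper name toDataUri is just illustrative:

// Hypothetical helper: fetch a OneNote image resource and turn it into a data URI.
async function toDataUri(resourceUrl: string, accessToken: string, mimeType: string): Promise<string> {
  const res = await fetch(resourceUrl, {
    headers: { Authorization: `Bearer ${accessToken}` },
  });
  if (!res.ok) throw new Error(`Graph returned ${res.status}`);
  // Convert the binary body to base64 (Buffer is Node-only; in a browser, base64-encode the bytes instead).
  const base64 = Buffer.from(await res.arrayBuffer()).toString("base64");
  return `data:${mimeType};base64,${base64}`;
}

// Usage: swap the original src in the page HTML for the data URI before rendering, e.g.
// html = html.replace(resourceUrl, await toDataUri(resourceUrl, accessToken, "image/jpeg"));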

How to send an image hosted on Amazon S3 through the Facebook Messenger Send API?

The Messenger Send API gives me back the response:
(#546) The type of file you're trying to attach isn't allowed. Please try again with a different format. error code: 546, error_subcode: 154502
However, if I host the same exact image on Google Cloud instead of Amazon S3, then the image sends fine.
My link to the AWS image:
https://s3.amazonaws.com/paloma-staging-public/files/conversation-step-56-80925.gif
My link to the google cloud image:
https://storage.googleapis.com/callparty/thumbsup.gif
Are there any special reasons that a link to an image stored on S3 would not work as an image attachment, while a link to an image stored on Google Cloud does?
The answer was that the ContentType of the file was not set for the AWS link.
While uploading to S3 I had to manually set the ContentType of the file appropriately ("image/gif", "image/png", etc.); for Google Cloud Storage it must have been set automatically.
This is why the S3 link causes an auto-download, while the Google Cloud link displays the image in the browser. Setting the ContentType on upload is sketched below.
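For illustration, a minimal sketch of setting ContentType with the AWS SDK for JavaScript v3 (the region and local file name are placeholders; the bucket and key are taken from the question's URL):

import { readFile } from "node:fs/promises";
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";

async function uploadGif(): Promise<void> {
  const s3 = new S3Client({ region: "us-east-1" }); // placeholder region
  await s3.send(new PutObjectCommand({
    Bucket: "paloma-staging-public",
    Key: "files/conversation-step-56-80925.gif",
    Body: await readFile("conversation-step-56-80925.gif"),
    // Without an explicit ContentType, S3 serves the object as binary/octet-stream,
    // which browsers download instead of displaying and Messenger rejects.
    ContentType: "image/gif",
  }));
}

uploadGif().catch(console.error);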

File Upload: File Upload URL not provided

I've been trying to get file uploads to work, following the instructions for both Dropbox and S3, but each time I just get this message:
File Upload URL not provided
It doesn't seem to be making any calls to the server. I've found this mention of a bug around file uploads:
https://github.com/formio/ngFormio/issues/322
But I suspect that applies if you're hosting it yourself. I'm using the cloud version.
I've configured it with the S3 bucket's URL, authentication, and so on.
What does this error actually mean?
Update: here's the syntax I'm using:
<formio form="https://formview.io/#/xxxxxxxxxxxxxxxxxxx/applicationform" url="'https://formview.io/#/xxxxxxxxxxxxxxxxxxx/applicationform'"></formio>
Thanks
In order to make the uploads work, you need to provide the URL of your form, which is used to generate the upload token to upload the files to the 3rd party providers. This can be done in one of two ways.
<formio src="'https://examples.form.io/example'"></formio>
You would use the above if you wish to render the form from the form's JSON REST API. In many cases, you may wish to provide the actual form object instead (which I suspect is what you are doing), like so:
<formio form="{...}"></formio>
This works fine for rendering the form, but it does not provide the URL context for file uploads. For this reason, we have the url parameter which you can include along with your form object for file uploads to work.
<formio form="{...}" url="'https://examples.form.io/example'"></formio>
Providing the url this way is passive. The form will not try to submit to that url, but rather just use it as the url configuration for file uploads.

Get s3 filename from filepicker.io after successful upload

I've been using Filepicker.io to upload files directly from the browser to Amazon S3, and most things are working fine. The only problem I'm facing now is that after the upload is done, I'm not getting the name of the file in S3.
The Filepicker JS API is returning this object:
Object {url: "https://www.filepicker.io/api/file/xxxxxxxxxxxxx", filename: "xyzhi.mp4", mimetype: "video/mp4", size: 36735, isWriteable: true}
Usually this object comes with a property named 'key' which holds the name of the file in S3.
This happens when the upload is not done from the local computer: if I pick a local file everything works fine, but if I pick a file from any of the providers (e.g. Dropbox, Google Drive), I can't get the filename on the S3 server.
Thanks.
You should make sure that you are using a function that explicitly stores to S3, for instance filepicker.pickAndStore or filepicker.store. As noted in the filepicker.io pick API documentation, the "key" parameter on fpfiles returned specifically from the .pick() call is deprecated and not meant to be used. A rough example is sketched below.
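A minimal sketch of the legacy filepicker.js calls, assuming the filepicker script is already loaded on the page; the API key, mimetype, and S3 path are placeholders:

declare const filepicker: any; // provided by the filepicker.js script tag

filepicker.setKey("YOUR_FILEPICKER_API_KEY");

filepicker.pickAndStore(
  { mimetype: "video/*" },              // picker options
  { location: "S3", path: "/videos/" }, // store options: explicitly store to S3
  function (blobs: any[]) {
    // Unlike .pick(), the stored blobs should carry the S3 object name in blob.key.
    console.log(blobs[0].key);
  },
  function (error: any) {
    console.error(error);
  }
);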

GWT GAE Upload through Blob

If I'm using the GWT FileUpload widget and a FormPanel, can someone explain how to handle uploads to the Blobstore on Google App Engine?
Take a look at gwtupload. There are examples of how to use it with the GAE Blobstore.
Google Blobstore is specifically designed to upload and serve blobs via HTTP. The Blobstore service (obtained using BlobstoreServiceFactory.getBlobstoreService()) generates an HTTP POST action URL for you to use in your HTML form. By posting the file to it you upload your blob to the Blobstore. When you generate this action URL you provide a path to the handler (servlet), where you have access to the uploaded blob's key:
Map<String, BlobKey> blobs = blobstoreService.getUploadedBlobs(req);
BlobKey blobKey = blobs.get("data");
Note, that "data" is the file field in your form. All you have is a key to the blob (your file). From here you take control - you can save this key for later and/or immediately serve the blob on a page (using key):
BlobKey blobKey = new BlobKey(req.getParameter("blob-key"));
blobstoreService.serve(blobKey, res);
Of course, see the Google documentation for details.
One nice feature of the Blobstore is that it's integrated with the Google Mapper (rudimentary map-reduce) service (a work in progress), which lets you process files uploaded as blobs line by line: http://ikaisays.com/2010/08/