File Upload: File Upload URL not provided - formio

I've been trying to get file uploads to work, following the instructions for both Dropbox and S3 but each time I just get this message:
File Upload URL not provided
It doesn't seem to be making any calls to the server. I've found this mention of a bug around file uploads:
https://github.com/formio/ngFormio/issues/322
But I suspect that applies if you're hosting it yourself. I'm using the cloud version.
I've configured it with e.g. the S3 bucket's URL, authentication etc.
What does this error actually mean?
Update: here's the syntax I'm using:
<formio form="https://formview.io/#/xxxxxxxxxxxxxxxxxxx/applicationform" url="'https://formview.io/#/xxxxxxxxxxxxxxxxxxx/applicationform'"></formio>
Thanks

In order to make uploads work, you need to provide the URL of your form, which is used to generate the upload token for the 3rd-party storage providers. This can be done in one of two ways.
<formio src="'https://examples.form.io/example'"></formio>
You would use the above if you wish to render the form from its JSON REST API. In many cases, you may wish to provide the actual form object (which I suspect is what you are doing), like so.
<formio form="{...}"></formio>
This works fine for rendering the form, but it does not provide the URL context for file uploads. For this reason, we have the url parameter which you can include along with your form object for file uploads to work.
<formio form="{...}" url="'https://examples.form.io/example'"></formio>
Providing the url this way is passive. The form will not try to submit to that url, but rather just use it as the url configuration for file uploads.

Related

How to remove query string from Firebase Storage download URL

Problem:
I need to be able to remove all link decoration from the download URL that is generated for images in Firebase Storage.
However, when all link decoration is stripped away, the resulting link currently would return a JSON document of the image's metadata.
Context:
The flow goes as follows:
An image is uploaded to Firebase from an iOS app. Once that is done the download URL is then sent in a POST request to an external server.
The server that the URL is being sent to doesn't accept link decoration when submitting image URLs.
Goal:
Alter the Firebase Storage download URL such that it is stripped of all link decoration, like so:
https://firebasestorage.googleapis.com/v0/b/example.appspot.com/o/[FOLDER_NAME]%[IMAGE_NAME].jpg
Notes:
The problem is really twofold: first, the link needs to be manipulated to remove all the link decoration. Then the behavior of the link needs to be changed, since in order to return an image you need ?alt=media following the file extension, in this case .jpg. Currently, without link decoration, using the link with my desired structure would return a JSON document of the metadata.
The current link structure is as follows:
https://firebasestorage.googleapis.com/v0/b/example.appspot.com/o/[FOLDER_NAME]%[IMAGE_NAME].jpg?alt=media&token=[TOKEN]
Desired link structure:
https://firebasestorage.googleapis.com/v0/b/example.appspot.com/o/[FOLDER_NAME]%[IMAGE_NAME].jpg
The token is necessary for accessing the image depending on the security rules in place, but it can be ignored with the proper read permissions. I can adjust the rules as needed, but I still need to be able to remove ?alt=media and still return an image.
Building on Frank's answer: if you access your associated Google Cloud Platform project, find the bucket in the Storage tab, and make that bucket public, you will be able to get the image in the format you wish. That is, you will not be accessing it through Firebase
https://firebasestorage.googleapis.com/v0/b/example.appspot.com/o/[FOLDER_NAME]%[IMAGE_NAME].jpg
but through Google Cloud Storage, with a link like
https://storage.googleapis.com/[bucket_name]/[path_to_image]
Once in your GCP project console, access the Storage bucket with the same name as the one in your Firebase project; they are the same bucket. Then make the bucket public by following these steps. After that, you will be able to construct your links as mentioned above, and they will be accessible with no token and no alt=media param. If you do not want to make the bucket public to everyone, you can play around with the permissions there as you wish.
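For example, once the bucket is public, the Google Cloud Storage link can be built directly from the bucket name and the object path. A minimal Swift sketch (the bucket name and object path below are placeholders):
import Foundation
// Assumes the bucket has been made public as described above.
let bucket = "example.appspot.com"          // placeholder bucket name
let path = "FOLDER_NAME/IMAGE_NAME.jpg"     // placeholder object path
let encodedPath = path.addingPercentEncoding(withAllowedCharacters: .urlPathAllowed) ?? path
let publicURL = URL(string: "https://storage.googleapis.com/\(bucket)/\(encodedPath)")
// e.g. https://storage.googleapis.com/example.appspot.com/FOLDER_NAME/IMAGE_NAME.jpg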
You could split the URL string into two halves using components(separatedBy:):
Storage.storage().reference().child(filePath).downloadURL(completion: { (url, error) in
    guard let urlString = url?.absoluteString else { return }   // url is optional
    let urlStringWithoutQueryString = urlString.components(separatedBy: "?").first!
})
Calling .downloadURL on a StorageReference gives you that URL, but this approach can be used to remove the query string from any URL. components(separatedBy:) breaks a String into an array of Strings, splitting it at every occurrence of the given separator, in this case ?.
NOTE: taking .first returns everything before the first ?, which for a Firebase Storage URL is the full object URL with the query string (alt=media and token) removed.
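Alternatively, here is a sketch that uses Foundation's URLComponents instead of splitting on ?; the URL below is a placeholder standing in for the download URL returned by Firebase:
import Foundation
// Strip the whole query string (alt=media and token) via URLComponents.
let downloadURL = URL(string: "https://firebasestorage.googleapis.com/v0/b/example.appspot.com/o/FOLDER%2FIMAGE.jpg?alt=media&token=TOKEN")!
if var components = URLComponents(url: downloadURL, resolvingAgainstBaseURL: false) {
    components.query = nil               // drop alt=media and token
    let strippedURL = components.url     // URL without any query string
    print(strippedURL?.absoluteString ?? "")
}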
You should treat the download URL that you get back from Firebase as an opaque string. There's no way to strip the parameters from a download URL without breaking that download URL.
If you want to allow public access to the files in your bucket with simpler URLs, consider making the objects in your bucket (or even your entire bucket) public.

How to get a link to a file from a VSO repo

When I browse a GitHub repo, I can copy the URL from the browser, and I can share it like this -
https://github.com/zlatko-michailov/onesql/blob/master/lang/src/onesql.syntax.ts. The file content is returned in the http response stream without any decorations.
How can I do the same thing for a VSO repo? If I have to tweak the URL a little bit, that's OK.
I see the browser uses a REST API that is documented here - https://learn.microsoft.com/en-us/rest/api/azure/devops/git/items/get?view=azure-devops-rest-5.0. I played with different combinations of includeContent, $format, download, etc., but I could only get the content as a separate download, not in the http response body.
The subject file is some CSV data, and the client is Excel, which doesn't seem to be able to handle downloads.
I solved my own problem. There is no need to create a feed.
The API that fetches raw files is sourceProviders. The link is here: https://learn.microsoft.com/en-us/rest/api/azure/devops/build/source%20providers/get%20file%20contents?view=azure-devops-rest-5.0
It is not very well documented - examples for the required parameters are missing. The tricky one is sourceProvider. It has to be tfsgit. Skipping serviceEndpointId worked for me.
Here is the pattern:
GET https://dev.azure.com/{organization}/{project}/_apis/sourceProviders/tfsgit/filecontents?repository={repository}&commitOrBranch={commitOrBranch}&path={path}&api-version=5.0-preview.1
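For what it's worth, here is a minimal Swift sketch of calling that endpoint with a personal access token (PAT) sent via Basic authentication; the organization, project, repository, branch, path, and PAT below are all placeholders:
import Foundation
// All values are placeholders; the PAT needs Code (read) scope.
let pat = "YOUR_PAT"
let urlString = "https://dev.azure.com/myorg/myproject/_apis/sourceProviders/tfsgit/filecontents" +
    "?repository=myrepo&commitOrBranch=master&path=/folder/data.csv&api-version=5.0-preview.1"
var request = URLRequest(url: URL(string: urlString)!)
// Azure DevOps accepts a PAT as the password of a Basic auth header with an empty user name.
request.setValue("Basic " + Data(":\(pat)".utf8).base64EncodedString(), forHTTPHeaderField: "Authorization")
URLSession.shared.dataTask(with: request) { data, _, _ in
    if let data = data, let body = String(data: data, encoding: .utf8) {
        print(body)   // the raw file contents come back in the response body
    }
}.resume()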

Notification Templating with WebHooks default payload (Appveyor)

So currently I am in the process of setting up notifications, and what I want to send in the message portion is the URL of the artifact zip file that was created.
I took a look at the default payload (https://www.appveyor.com/docs/notifications/#webhook-payload-default) and was able to send {{jobs}}, which gave me this in the email:
System.Collections.Generic.List`1[Appveyor.Models.BuildJobNotificationTemplateData]
I figured I could traverse this in my message template. However, every variation I tried kept erroring out.
Some of them include:
{{jobs[0].artifacts[0].url}}
{{jobs.artifacts.url}}
{{eventData.jobs.artifacts.url}}
{{eventData.jobs[0].artifacts[0].url}}
Etc…
What would the proper syntax be to grab the first artifact's URL using the templating engine?
This syntax will work (see the Mustache template documentation to understand the syntax):
<p>Artifacts:</p>
<ul>
{{#jobs}}
{{#artifacts}}
<li>{{url}}</li>
{{/artifacts}}
{{/jobs}}
</ul>
But unfortunately it will return a temporary Azure blob storage URL, which will expire in 60 minutes. Please watch https://github.com/appveyor/ci/issues/1646. For now, to get a permanent URL, please use this workaround.

Downloading and Moving OneDrive files from shared link directory

I am looking for assistance with downloading and moving a OneDrive file that is accessed through a shared directory, via the shared-link method of sharing.
I have two users:
user 'A', who is a Microsoft consumer with a regular OneDrive account and will host a CSV file 'test.csv' in a folder 'toshare'
and user 'B', who is also a regular Microsoft consumer and should use the Graph API to download test.csv and then move the file to a subdirectory /toshare/archive
Aside: I am currently using the chrome app "advanced REST client" to manually make the REST calls, and am getting Authenticated OAuth BEARER tokens by inspecting network traffic from Microsoft's online "Graph Explorer" tool. After we understand the calls, we'll integrate it into our Java app.
I have successfully followed the instructions here:
https://developer.microsoft.com/en-us/graph/docs/api-reference/v1.0/api/shares_get
to view the folder contents.
To be more explicit, user 'A' has gone into OneDrive, right-clicked the folder 'toshare', and selected the share link option. I have converted the shareLink to a share token and then used the following API call with the Graph API as user 'B':
GET https://graph.microsoft.com/v1.0/shares/<share-token>/root?$expand=children
this shows me all the files in the directory, which includes 'test.csv'
Now, using this information, how can I download test.csv? Assume user 'B' doesn't know the name of the file but can identify it by its .csv extension (we can do this in code). There does not appear to be much documentation on how to download files through a share.
The closest I've gotten was to take the "webUrl" attribute of the children object for my file, and then turn that into a share token and call
GET https://graph.microsoft.com/v1.0/shares/<child-share-token>/root
This will show me the file metadata, and then I try to download it by roughly following the API documentation for downloading content: https://developer.microsoft.com/en-us/graph/docs/api-reference/v1.0/api/item_downloadcontent
GET https://graph.microsoft.com/v1.0/shares/<child-share-token>/root/content
This is interesting because it works if I make the call as user 'A' but does not work for user 'B', who instead gets a 403 in Advanced REST Client. (If I run it in Graph Explorer, I get "The site in the encoded share URI is invalid." instead, which I've discovered through other experimentation really means there's an authorization issue.)
GET https://graph.microsoft.com/v1.0/shares/<share-token>/root:/test.csv:/content
This also does not work; it returns "400 Bad Request" with the message "Resource not found for the segment 'root:'." It seems like path-style file navigation does not work for shared directories?
At this point I'm rather stuck. After downloading the file, I also would like to move it into a subdirectory, denoting that it has already been read in. I'd also like to get this working for OneDrive for Business, but that seems to be another set of challenges that I'll leave for another day.
Any insight would be great thanks,
Jeremy
It's best to consider the shares/{id} segments to be similar to drives/{id}, at which point all of the previous documentation around children access is applicable. Given your scenario I'd use the path syntax:
https://graph.microsoft.com/v1.0/shares/<share-token>/root/children/test.csv
This obviously necessitates knowing the file name, but it sounds like you already have an algorithm to do that.
Theoretically your approach for creating a child-share-token would work, but it would now require that user B both provide authentication and have explicit permissions. Since your share-token was a sharing link, user B is most likely getting permission by virtue of the fact that they have the URL, in which case generating a new token probably removes the special token that allows this to work. That's why it's best to always use the original share-token where possible.
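For reference, a sharing URL is converted into a share token by base64-encoding it, making the result URL-safe, trimming the = padding, and prefixing u!. A small Swift sketch of that encoding:
import Foundation
// Encode a sharing link into the share token used in /shares/{token} calls.
func shareToken(from sharingLink: String) -> String {
    let base64 = Data(sharingLink.utf8).base64EncodedString()
    let base64url = base64
        .replacingOccurrences(of: "=", with: "")   // trim padding
        .replacingOccurrences(of: "/", with: "_")
        .replacingOccurrences(of: "+", with: "-")
    return "u!" + base64url
}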
Similar rules will apply to move the file. First off, we'll assume that the sharing link provides the ability to "Edit"; otherwise none of this will work :). Second, we'll assume that the archive folder already exists (if it doesn't, you'd need to create it using a POST to https://graph.microsoft.com/v1.0/shares/<share-token>/root/children that looks like what we've documented here).
To move the file you'd want to PATCH to https://graph.microsoft.com/v1.0/shares/<share-token>/root/children/test.csv and provide a new parentReference as documented here. It's always best to use id values if you have them, but you should also be able to provide the path to the parent in the form of /shares/<share-token>/root/children/archive.
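A rough Swift sketch of that PATCH call; the share token, access token, and file name are placeholders, and the parentReference path simply mirrors the form described above:
import Foundation
let shareToken = "SHARE_TOKEN"      // placeholder, e.g. produced by the encoding sketch above
let accessToken = "ACCESS_TOKEN"    // placeholder OAuth bearer token for user 'B'
var request = URLRequest(url: URL(string: "https://graph.microsoft.com/v1.0/shares/\(shareToken)/root/children/test.csv")!)
request.httpMethod = "PATCH"
request.setValue("Bearer \(accessToken)", forHTTPHeaderField: "Authorization")
request.setValue("application/json", forHTTPHeaderField: "Content-Type")
request.httpBody = try? JSONSerialization.data(withJSONObject: [
    "parentReference": ["path": "/shares/\(shareToken)/root/children/archive"]
])
URLSession.shared.dataTask(with: request) { _, response, _ in
    // a 200 response with the updated driveItem indicates the move succeeded
}.resume()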

How to handle file uploads to a dedicated image server?

I've got a webserver with a running application. There's a webpage with a form: some text data and a file upload field. Now, what I would like is for it to work like this:
The file is sent to a dedicated server, different from the one the application is running on. That server should return some kind of path (or anything that identifies the uploaded and saved file and allows a URL to be created). Then both this path and the user-filled data should be submitted to the webserver running the application, for any kind of database storage.
Problem is, there are 2 different servers, so I can't upload the file with JavaScript, can I? Another way would be just to use an iframe and put the upload form in there - but then I think I can't access the result of the upload (still inside the iframe) with JavaScript to pass the file path to my main server.
I could also just upload the file to the same server my application is running on and then rsync it to the other one - but I'd like to avoid that if I can, to minimize the traffic :)
How do you handle such thing in your applications?
If you used an iframe, you could submit the upload form to the dedicated image server and, in the case of a successful result, have it in turn load a page from the original server with the info (e.g. the image path) "passed along" as a GET parameter.
POST to the dedicated server; the server stores the image and calls back to the web server through a web service or similar to give it any info required.