Creating an attachment in SharePoint from a Microsoft Forms response - Get file content using path not working

I am trying to add contents and an attachment from a Form to a SharePoint list. However, the Get file content using path action in my flow is failing. The error I'm receiving says "Unauthorized" and in the file content box, I receive the following message:
"status": 401,
"message": "A potentially dangerous Request.Path value was detected from the client (?).",
"source": "apidod.connectorp.svc.ms"
The file path is as follows (minus the front of the path):
sites/HSMWINGATLANTIC_Supply_Requests/Shared%20Documents/Forms/AllItems.aspx?newTargetListUrl=%2Fsites%2FHSMWINGATLANTIC%5FSupply%5FRequests%2FShared%20Documents&viewpath=%2Fsites%2FHSMWINGATLANTIC%5FSupply%5FRequests%2FShared%20Documents%2FForms%2FAllItems%2Easpx&id=%2Fsites%2FHSMWINGATLANTIC%5FSupply%5FRequests%2FShared%20Documents%2FApps%2FMicrosoft%20Forms%20Fairfax%2FVehicle%20Rental%20Request%2FSupporting%20Documents&viewid=55590b8b%2D4994%2D4e8b%2D804b%2D24f4774c21e920220815 - HSM-40 Truck Request for 15 AUG 20_Charles Power 1.pdf

c.d.power
For the Get file content using path action you need a relative path without the site URL part. You can extract the correct path with an expression.
In the example below I retrieve the link property from the Attachment question's answer value. I use the json() function to turn it into an array, since Microsoft returns it as a string value for some reason ;)
After that I use nthIndexOf() to determine the position of the forward slash at which the slice() function should start slicing; in this case the 7th occurrence, which is index 6.
This retrieves the part we need for the Get file content using path action. The decodeUriComponent() function makes sure each %20 is turned back into a space character.
Make sure you replace the question id below with your own.
decodeUriComponent(slice(json(outputs('Get_response_details')?['body/re67e0cfcd95d488593347d93f2728204'])[0]['link'], nthindexof(json(outputs('Get_response_details')?['body/re67e0cfcd95d488593347d93f2728204'])[0]['link'], '/', 6)))
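For readers more comfortable with conventional code, here is a rough PHP sketch of the same string logic (illustration only; the link value is a made-up example, not a real Forms attachment link):

<?php
// Hypothetical attachment link; the real value comes from the 'link'
// property of the Attachment answer in Get_response_details.
$link = 'https://tenant.sharepoint.com/sites/ExampleSite/Shared%20Documents/Apps/Microsoft%20Forms/Question/File.pdf';

// Mirror of nthIndexOf(): position of the (n + 1)-th occurrence of
// $needle in $haystack, where $n is 0-based.
function nthIndexOf(string $haystack, string $needle, int $n)
{
    $pos = -1;
    for ($i = 0; $i <= $n; $i++) {
        $pos = strpos($haystack, $needle, $pos + 1);
        if ($pos === false) {
            return false;
        }
    }
    return $pos;
}

// slice() from the 7th slash (index 6) to the end of the string, then
// decodeUriComponent() to turn %20 back into spaces.
$path = rawurldecode(substr($link, nthIndexOf($link, '/', 6)));
echo $path;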

I found the solution to the issue. This wasn't working because it is a group form, and responses to group forms are saved to the group's SharePoint site, not the user's OneDrive. Therefore the Get file content action should use the SharePoint connector instead of the OneDrive one.

Related

To encode or not to encode path parts in GCS?

Should path parts be encoded or not encoded when it comes to Google Cloud Storage?
Encoding URI path parts says they should be encoded, but Object names talks about the possibility of naming GCS objects in a seemingly-hierarchical manner...
So if I name an object abc/xyz, is the path to my object https://www.googleapis.com/storage/v1/b/example-bucket/o/abc%2fxyz or https://www.googleapis.com/storage/v1/b/example-bucket/o/abc/xyz?
Which is it!? Somebody please help me with this confusion.
TL;DR
You can use nested folder names when working through a GCS client library, but when sending raw GET requests to the URL yourself you need to encode the folder-like names appropriately.
Let's all pretend that folders are real
Yes, you need to encode the object names. There's a useful description here which I partially quote below (with my emphasis) for reference:
Object names reside in a flat namespace within a bucket, [...] which means that objects do not reside within subdirectories in a bucket. For example, you can name an object /europe/france/paris.jpg to make it appear that paris.jpg resides in the subdirectory /europe/france, but to Cloud Storage, the object simply exists in the bucket and has the name /europe/france/paris.jpg.
So there are no subdirectories; appropriate naming and the use of a hierarchy-aware UI or API merely make it appear as if there were some hierarchy.
All GCS client libraries know to encode the names correctly, but if you are running raw GETs yourself (with appropriate authentication), you will have to do this on your own. The relevant section is here; I quote the most relevant part:
For example, if you send a GET request for the object named foo/?bar
in the bucket example-bucket, then your request URI should be:
GET https://www.googleapis.com/storage/v1/b/example-bucket/o/foo%2f%3fbar
So you can see that the object name part has been encoded, with %2f for the slash (/) character. There's a more complete description of the naming convention here.
Metadata vs content using the GCS JSON API
I was slightly surprised that the default behaviour for the API was to return metadata about the object in the bucket. To get the actual content I had to append '?alt=media' as described at the end of this section:
By default, this responds with an object resource in the response body. If you provide the URL parameter alt=media, then it will respond with the object data in the response body.
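Putting both points together, here is a minimal PHP sketch of fetching the content of a slash-named object via the JSON API (the bucket name, object name, and token environment variable are placeholders):

<?php
// rawurlencode() turns '/' into %2F, keeping the whole flat object
// name inside a single URL path part.
$bucket = 'example-bucket';
$object = 'europe/france/paris.jpg';   // flat name that only looks nested

$url = sprintf(
    'https://www.googleapis.com/storage/v1/b/%s/o/%s?alt=media',
    rawurlencode($bucket),
    rawurlencode($object)
);

$ch = curl_init($url);
curl_setopt_array($ch, [
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_HTTPHEADER     => ['Authorization: Bearer ' . getenv('GCS_TOKEN')],
]);
$data = curl_exec($ch);   // raw object bytes, thanks to alt=media
curl_close($ch);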

TYPO3 7.6: 404 error page: HTML wrapped in numbers

I created my own “404 Page not found” error page on a TYPO3 website and implemented it via the /typo3conf/LocalConfiguration.php as follows, using the page’s Speaking URL path:
return [
    ...
    'FE' => [
        ...
        'pageNotFound_handling' => '/page-not-found/',
    ],
];
Now when I call a non-existing page, the error page gets displayed, but there is a 4-digit hexadecimal number BEFORE the HTML source code and a "0" AFTER it. Example (the number at the beginning differs after most reloads):
37b3
<!DOCTYPE html>
...
</html>
0
When calling the error page URL itself the page is returned correctly without those numbers.
Having the RealURL extension activated or deactivated does not make a difference.
Thanks a lot in advance!
I added the full description from the install tool and I guess we might find the solution there.
How TYPO3 should handle requests for non-existing/accessible pages.
empty (default)
The next visible page upwards in the page tree is shown.
'true' or '1'
An error message is shown.
String
Static HTML file to show (reads content and outputs with correct headers), e.g. notfound.html or http://www.example.org/errors/notfound.html.
Prefix "REDIRECT:"
If prefixed with "REDIRECT:" it will redirect to the URL/script after the prefix.
Prefix "READFILE:"
If prefixed with "READFILE" then it will expect the remaining string to be a HTML file which will be read and outputted directly after having the marker "###CURRENT_URL###" substituted with REQUEST_URI and ###REASON### with reason text, for example: READFILE:fileadmin/notfound.html.
Prefix "USER_FUNCTION:"
If prefixed with "USER_FUNCTION:" a user function is called, e.g. USER_FUNCTION:fileadmin/class.user_notfound.php:user_notFound->pageNotFound where the file must contain a class user_notFound with a method pageNotFound() inside with two parameters $param and $ref.
What you configured:
You're passing a string, so TYPO3 expects to find a file, which you don't have, because what you configured is more like a URL.
From what you try to achieve I'd go with REDIRECT:/page-not-found/.
Thanks for pointing this one out, by the way. I will remove the string configuration from the core, since it does not make sense to have more people trip over this pitfall.
In short: change the following line in the FE section of your LocalConfiguration.php:
'pageNotFound_handling' => '/your404page.html',
to
'pageNotFound_handling' => 'REDIRECT:/your404page.html',
Cause
The actual cause is a combination of chunked transfer encoding and TYPO3 not being able to decode it in some cases. In your case the page-not-found handler eventually uses GeneralUtility::getUrl() to retrieve the error page.
If you have [SYS][curlUse] enabled it will use cUrl to retrieve the page and there is no problem.
If you don't have [SYS][curlUse] enabled, it will open a socket, read the headers and then read the rest of the body. If the webserver uses "chunked" transfer encoding, the body will contain blocks of data, where each block starts with a line giving its length in hexadecimal format. The content ends with an empty block (with, of course, a line with the length "0").
cUrl apparently knows how to decode chunked data.
getUrl() itself does not know how to handle chunked data and uses the content as-is as the page content.
In TYPO3 8 LTS the Guzzle library is used to handle HTTP requests. In the Guzzle code I can't find anything about handling chunked data. Guzzle will check if the cUrl PHP extension is present and use that as the preferred transport. In most installations cUrl is present, and since it decodes chunked data automagically, no problem is visible. I have to test Guzzle with PHP that has cUrl disabled to see if the issue is also present in v8/master.
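To make the failure mode concrete, here is a minimal PHP sketch of what decoding a chunked body involves, i.e. the step a socket-based client like getUrl() would have to perform (illustration only, not TYPO3 core code):

<?php
// Decode a "chunked" transfer-encoded body: each chunk is preceded by
// a line giving its length in hex (e.g. "37b3"), and a zero-length
// chunk terminates the body.
function decodeChunked(string $raw): string
{
    $decoded = '';
    while (($pos = strpos($raw, "\r\n")) !== false) {
        $length = hexdec(substr($raw, 0, $pos)); // "37b3" -> 14259 bytes
        if ($length === 0) {
            break;                               // final "0" chunk
        }
        $decoded .= substr($raw, $pos + 2, $length);
        $raw = substr($raw, $pos + 2 + $length + 2); // skip data + CRLF
    }
    return $decoded;
}

A client that skips this step emits the hex lengths and the trailing "0" verbatim, which is exactly what you are seeing around your 404 page.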
Workaround/solution
If the PHP extension cUrl is enabled in your installation, you can simply enable [SYS][curlUse] in the Install Tool. The numbers around the 404 page content will disappear.
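If you prefer setting it directly in LocalConfiguration.php rather than through the Install Tool, the entry would look roughly like this (the exact value format is an assumption; the Install Tool writes it for you):

return [
    ...
    'SYS' => [
        ...
        'curlUse' => '1',
    ],
];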

Can I fake uploaded image filesize?

I'm building a simple image file upload form with the Laravel 5 framework. Through the Input facade (from Illuminate), I can resolve the file object, which is an UploadedFile (from Symfony).
The UploadedFile's API ref page (Symfony docs) says that
public integer|null getClientSize()

Returns the file size. It is extracted from the request from which the file has been uploaded. It should not be considered as a safe value.

Return value: integer|null The file size
What will be these cases where the uploaded filesize is wrongly reported?
Are there known exploits using this?
How can the admin ensure this is detected (and hence logged as a trespass attempt)?
That method uses the "Content-Length" header, which can easily be forged. You'll want to use the plain $_FILES['myfile']['size'] value instead, as an answer to another question (Can $_FILES[...]['size'] be forged?) has already stated.
This value checks the actual size of the file, and is not modified by the provided headers.
If you'd like to check for people misbehaving, you can simply compare the content-length header to your $_FILES['myfile']['size'] value.
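A minimal PHP sketch of that comparison follows (the field name 'myfile' and the tolerance are assumptions; a multipart body always carries some boundary overhead, so the two numbers never match exactly):

<?php
// Compare the client-supplied Content-Length header with the size PHP
// actually measured for the uploaded file on disk.
$reportedLength = (int) ($_SERVER['CONTENT_LENGTH'] ?? 0);
$actualSize     = (int) ($_FILES['myfile']['size'] ?? 0);

// Content-Length covers the whole multipart body (boundaries, other
// fields), so it should be somewhat larger than the file itself.
$overhead = $reportedLength - $actualSize;
if ($overhead < 0 || $overhead > 64 * 1024) {
    error_log(sprintf(
        'Suspicious upload: Content-Length=%d, measured size=%d',
        $reportedLength,
        $actualSize
    ));
}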

Copy file to another folder with the REST API in OneDrive Pro (SharePoint)

I'm trying to copy a file located in a user's personal folder (OneDrive Pro) using the REST API. The resulting URL seems to become too long, and the server returns 400 Bad Request: The length of the URL for this request exceeds the configured maxUrlLength value.
The URL looks like this (the HTTP verb is POST):
https://<company>-my.sharepoint.com/personal/<user>_<company>_onmicrosoft_com/_api/Web/GetFileByServerRelativeUrl('/personal/<user>_<company>_onmicrosoft_com/Documents/<Folder>/<Folder with guid-like name>/<filename>.pdf')/copyto(strnewurl='/personal/<user>_<company>_onmicrosoft_com/Documents/<Same folder>/<another guid-like name>/<same filename>.pdf',boverwrite=true)
Any help or advice on how to overcome this is highly appreciated.
Just in case anyone else faces this problem and has the file's unique id in SharePoint:
If the first part, which is
GetFileByServerRelativeUrl('/personal/<user>_<company>_onmicrosoft_com/Documents/<Folder>/<Folder with guid-like name>/<filename>.pdf')
is replaced with GetFileById(uniqueId), the URL stays within the limit and the copy succeeds.
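Here is a rough PHP sketch of the shortened call (the GUID, site URL, and target path are placeholders; authentication, e.g. a Bearer token plus request digest, is omitted but required in practice):

<?php
// POST .../GetFileById(guid'...')/CopyTo(...) - much shorter than the
// GetFileByServerRelativeUrl variant because the source path is gone.
$site     = 'https://company-my.sharepoint.com/personal/user_company_onmicrosoft_com';
$uniqueId = '11111111-2222-3333-4444-555555555555'; // placeholder GUID
$newUrl   = '/personal/user_company_onmicrosoft_com/Documents/Target/copy.pdf';

$endpoint = $site
    . "/_api/Web/GetFileById(guid'{$uniqueId}')"
    . "/CopyTo(strnewurl='{$newUrl}',boverwrite=true)";
// Note: spaces or special characters in $newUrl would still need
// percent-encoding before being embedded in the URL.

$ch = curl_init($endpoint);
curl_setopt_array($ch, [
    CURLOPT_POST           => true,
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_HTTPHEADER     => ['Accept: application/json;odata=verbose'],
]);
curl_exec($ch);
curl_close($ch);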

BigCommerce API Update Order with PUT

I need to update an order, which is done via the PUT method, passing the order id as part of the HTTPS URL and a single parameter, the status_id.
https://mystore.mybigcommerce.com/orders/12345.json
I have tried several methods of passing the status_id value, but no matter what I try, "status_id=12" or JSON-formatted "{"status_id": 12,}", I always get the same response:
[{"status":415,"message":"The specified input content type is not valid."}]
I have also tried it as a POST request, passing the JSON or XML as raw data, but that method is not supported.
How am I supposed to pass that field=value pair? Can I embed it in the URL string?
I tried that too, but it wouldn't work for me.
Any ideas?
In case you are wondering, I am doing this from FileMaker with the TROIUrl plug-in. Not a very popular technology, but the GET method for retrieving orders works like a charm:
TURL_Put( ""; $url ;"status_id=12") (I have also tried other FM plugIns to no avail)
Don't get too caught up in the FileMaker part; I don't expect many people to be familiar with both BigCommerce and FileMaker. I just need a generic answer.
Thanks
The command-line tool curl is worth a try. It supports PUT and HTTPS.
Mac OS X: curl is already installed; call it from FileMaker via AppleScript's do shell script.
Windows: curl must be installed first; call it via PowerShell.
It works for me using { "status_id": "3" }, which suggests you may need to put quotes around the actual number.
Also, it is a PUT operation with Content-Type: application/json set on the request.
The error message received by the OP:
[{"status":415,"message":"The specified input content type is not valid."}]
says that he did not supply a Content-Type header in his request, or that the header supplied specifies a content type that is not allowed. Since the OP is sending JSON, he would need to include the header:
Content-Type: application/json
in his HTTPS request. This description, along with those of the other status codes you may encounter, can be found here:
https://developer.bigcommerce.com/api/status-codes
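For a generic illustration outside FileMaker, here is a PHP sketch of the full request; the /api/v2/ path and the basic-auth credentials are assumptions based on the legacy BigCommerce API conventions:

<?php
// PUT a single-field update to an order, with the Content-Type header
// that resolves the 415 error.
$url  = 'https://mystore.mybigcommerce.com/api/v2/orders/12345.json';
$body = json_encode(['status_id' => '3']);

$ch = curl_init($url);
curl_setopt_array($ch, [
    CURLOPT_CUSTOMREQUEST  => 'PUT',
    CURLOPT_POSTFIELDS     => $body,
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_USERPWD        => 'apiuser:apitoken',  // placeholder credentials
    CURLOPT_HTTPHEADER     => [
        'Content-Type: application/json',  // the missing header behind the 415
        'Accept: application/json',
    ],
]);
$response = curl_exec($ch);
curl_close($ch);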