URL length limitation in SharePoint Online REST

When I'm trying to download files from SharePoint Online using REST, by calling _api/web/GetFileByServerRelativeUrl, I sometimes get the error:
The length of the URL for this request exceeds the configured maxUrlLength value.
Since this is SharePoint Online, there is no way to change this configuration (the limit itself is also unclear: why 256 and not 1024, and why limit it at all?).
So I tried to find a solution and found another REST method, /_api/web/GetFileById, which takes the file's unique ID. This works in about 95% of cases, except for some cases where the document library is SitePages or where the URL of the file is really long (even when I try to download using GetFileById); the error there is 404 Not Found.
The web service does allow accessing and downloading those files, BUT the web service doesn't work with OAuth tokens (my customer requires OAuth).
Is there any other solution for this? Or any chance to influence the 256-character URL limit?
Thanks

I don't know why the limit is 256 characters, but as for your other questions:
You are right: to prevent this "URL length exceeded" error you should use /GetFileById:
https://site_url/_api/web/GetFileById('file_unique_id')
This API will give a URL-length error only if your site_url itself is very long, whether from long site names or a deep site hierarchy, in which case the request will fail.
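As a concrete illustration, here is a minimal Python sketch of downloading a file this way with the requests library; the site URL, file GUID, and OAuth token are placeholders you would substitute yourself:

import requests

SITE_URL = "https://contoso.sharepoint.com/sites/mysite"  # placeholder site
FILE_ID = "6f2e1c3a-1111-2222-3333-444444444444"          # the file's unique ID (placeholder)
TOKEN = "..."                                             # your OAuth access token

# GetFileById keeps the (possibly very long) server-relative path out of
# the URL; appending /$value returns the raw file content.
resp = requests.get(
    f"{SITE_URL}/_api/web/GetFileById('{FILE_ID}')/$value",
    headers={"Authorization": f"Bearer {TOKEN}"},
)
resp.raise_for_status()
with open("downloaded_file", "wb") as f:
    f.write(resp.content)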
As for this line in your question:
"except some case where document library is SitePages"
This is not correct. The request works for all libraries.
For further help you can also refer to https://sharepoint.stackexchange.com/questions/194576/url-request-length-limit-when-using-rest-apis

There are certain software boundaries for SharePoint, which include the length of the URL. You can find the complete details of the URL path length restrictions in the TechNet link below (this is still applicable to SP2013):
https://technet.microsoft.com/en-in/library/ff919564(v=office.14).aspx
Coming back to your problem, have you tried using the JavaScript object model with sp.js, or SPServices? You can find quick-start helpers here:
https://msdn.microsoft.com/library/jj163201.aspx#BasicOps_FileTasks

Related

What is the character limit of full links (not shortened) created by Twitter's t.co?

I understand there's no theoretical character limit to a URL, but it's 255 for the domain.
The consensus seems to indicate a safe bet is about 2000 characters, but it all depends on the browser and web server.
The Twitter API FAQ for t.co doesn't mention a limit for expanded links, only for the shortened ones.
Does anyone know what the maximum character count for a fully-expanded t.co link is, regardless of browser or target web server?
Is there a quick and easy way to check?
What happens if a link is too long?

How exactly does backend work from a developer perspective?

There are a ton of videos and websites trying to explain backend vs. frontend, but unfortunately none of them explains it in a way that shows you how to develop a backend-driven website (at least I haven't found anything good).
So, I wanted to ensure that I understood it and kindly ask you to confirm or correct me on this topic.
Example:
I want to build a Mini-Google. I have a database containing 1000 stored websites.
Assumption #1:
Every time I type something into the search bar, the autofill suggestions change. This means that every time I type, another website/API gets called, returning the current autofill suggestions. From a developer's perspective, this means the endpoint is, e.g., a Python script which gets called with the currently typed word as a parameter and returns all suggestions as, e.g., JSON:
# Client-side script
import requests

def ontype(text):
    suggestions = requests.get("https://api.googlemini.com/suggestions",
                               params={"q": text}).json()
    show(suggestions)  # render the suggestions in the UI (placeholder)
Assumption #2:
This also means I could manually call the website containing the Python script, provide a random word, and it would always return JSON containing the autofill suggestions for that word.
Question #1:
If A#1 turns out true but A#2 turns out false, how could I prevent a user from randomly accessing the "API" while still returning results when called by a script?
Assumption #3:
After pressing Enter, my website googlemini.com/search?... would be called. Since google.com/search reloads every time you search for a new query (or go to page 2, etc.), I assume that instead of calling an API, when the server gets the client request it first searches through its database, sorts the results, and then returns a whole HTML page:
# Server-side script
from flask import Flask, request

app = Flask(__name__)

@app.route("/search")
def oncall():
    query = request.args.get("q")
    results = search_database(query)  # look up matches in the database
    html = build_html(results)        # render the results into an HTML page
    return html
Question #2:
Often I hear (or at least understand it this way) that the database and the web server are two separate servers. How would that work? Wouldn't that mean the database server needs to be accessible from the web too (of course it would have security layers etc., but technically it would)? How could I access the database server from the web server?
Question #3:
Are there, on a technical basis, any other ways to build backend services?
That's it. I would also appreciate any recommendations (videos, websites, or anything else) for learning how to technically set up and/or secure backend servers.
Thanks in advance.
For your first question: yes, there is a way to prevent misuse.
What you can do is add an identifier to the API, such as an auth token, to identify each user. Every time a user accesses the API you keep a count on the server, and whenever the count exceeds a limit within a given time span you reject the call. The limit can be set so that it doesn't trouble honest users but punishes abusive ones. There are more complex and effective methods, but this is the basic idea; see the sketch below.
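To illustrate, here is a minimal sliding-window rate limiter in Python; the per-token limit and the in-memory store are assumptions for the sketch, not a production design:

import time

WINDOW_SECONDS = 60
MAX_CALLS = 30          # assumed per-user limit within the window
calls = {}              # auth token -> timestamps of recent calls

def allow_request(auth_token):
    """Return True if this token may call the API right now."""
    now = time.time()
    recent = [t for t in calls.get(auth_token, []) if now - t < WINDOW_SECONDS]
    if len(recent) >= MAX_CALLS:
        calls[auth_token] = recent
        return False    # limit exceeded within the time span: reject
    recent.append(now)
    calls[auth_token] = recent
    return True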
For question number two, let me explain a simple concept: a database is a very efficient, resourceful, and expensive data-storage solution, so we never want to use it as a general-purpose variable store or the like. We always want to access the database in a call: get the data, process the data, update the data. It's not strictly necessary to run the database on a separate server, but we usually want the database to be accessible to various platforms (Android, iOS, Windows), so it's better to add some abstraction and keep the database as a separate entity.
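For example, the web server typically reaches the database over the network through a driver, so the database machine only needs to accept connections from the application servers, not from the public internet. A hypothetical sketch with psycopg2 (PostgreSQL); hostname, credentials, and schema are made up:

import psycopg2

# The database runs on its own host, reachable from the web server's
# network but not exposed to the public web.
conn = psycopg2.connect(
    host="db.internal.example.com",
    dbname="minigoogle",
    user="webapp",
    password="secret",
)

def search_database(query):
    with conn.cursor() as cur:
        cur.execute(
            "SELECT url, title FROM websites WHERE title ILIKE %s",
            ("%" + query + "%",),
        )
        return cur.fetchall()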
For the last question, I'm not sure exactly what you mean by "other ways", but here are some backend technologies; some are used in isolation, some together with other tools:
Django
Flask
Django REST
GraphQL
SQL
PHP
Node
Deno

A method for linking a server side file to a Squarespace page?

I'm trying to build a website on Squarespace in which the site links to a database file. It's stored in a standard file system on a server tower with a cluster; no SQL architecture or anything that I explicitly know of. Unfortunately, Google Drive isn't an option due to the size of the file (> 200 GB). I'm rather lost due to the size constraint. Does anyone have an idea about how to do this? Can I set up some sort of server query using a link on the site? Can I upload the file from my computer and store it somewhere in the backend? Thanks.
"...the size of the file ( > 200 GB)..."
Unfortunately, Squarespace's own upload limitations are far below this for the two places where such files can be stored: file storage (20 MB) and the developer-mode '/assets' folder (10 MB). The CSS-/style-related storage only supports images (and likely has a limit of less than 20 MB). Digital-download products can be 300 MB (still too small for your file) and likely can't be linked to and accessed the way your application would need.
"...Can I set up some sort of server query using a link on the site?..."
If you mean a query from some other service besides Squarespace to the file hosted on your Squarespace site, the answer is no, simply because there's no way to upload the file to Squarespace due to its size. If, however, you mean a query from your Squarespace site to the file hosted elsewhere, then this must be done using JavaScript, entirely client-side, due to Squarespace's lack of support for server-side languages.
"...Can I upload the file from my computer and store it somewhere in the backend?..."
See the options mentioned above, though all have file size limits below that of your file.
If you are able to use the file on your site via client-side/front-end JavaScript only, then perhaps you could host the file on Amazon S3 or another such provider and access it that way.
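If the file does end up on S3 or a similar host, one way to avoid pulling all 200 GB at once is an HTTP Range request. On Squarespace this would have to be issued from client-side JavaScript, but the idea, sketched here in Python against an assumed object URL, is the same:

import requests

# Hypothetical public object URL; adjust to your bucket and key.
url = "https://my-bucket.s3.amazonaws.com/data/huge-database-file.bin"

# Fetch only the first 1 MB instead of the whole file.
resp = requests.get(url, headers={"Range": "bytes=0-1048575"})
resp.raise_for_status()  # expect 206 Partial Content
print(len(resp.content), "bytes fetched")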

REST - GET-Respone with temporary uploaded File

I know that the title is not quite right, but I don't know what to call this problem...
Currently I'm trying to design my first REST API, for a conversion service. The user submits an input file to the server and gets back the converted file.
The current problem is that the converted file should be accessible with a simple GET /conversionservice/my/url. However, it is not possible to upload the input file within a GET request; a POST would be necessary (am I right?), but POST isn't cacheable.
Now my question is: what's the right way to design this? I know the input file could be uploaded to the server beforehand and then accessed with my GET request, but those input files could be anything!
Thanks for your help :)
A POST request is indeed needed for a file upload. The fact that it is not cacheable should not bother the service, because how could any intermediary (the browser, the server, a proxy, etc.) know about the content of the file? If you need cacheability, you would have to implement it yourself, probably with a hash (MD5, SHA-1, etc.) of the uploaded file. This would keep you from performing the actual conversion twice, but you would have to hash each uploaded file, which would slow you down on a "cache miss".
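A minimal sketch of that hashing idea in Python, assuming a convert() routine of your own:

import hashlib

cache = {}  # SHA-256 of the input -> converted bytes

def convert_with_cache(data):
    key = hashlib.sha256(data).hexdigest()
    if key not in cache:
        # Cache miss: pay for both the hash and the actual conversion.
        cache[key] = convert(data)  # convert() is your own routine (assumed)
    return cache[key]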
The only other way I can think of to solve the problem would be to require the user to pass an accessible URL to the file in the query string; then you could handle GET requests, but your users would have to make the file accessible over the internet. This would allow caching but limit usability.
Perhaps a hybrid approach would be possible, where you accept a POST for a file upload and a GET for a URL; this would increase the complexity of the service but maximize usability.
Also, you should look into which caches you are interested in leveraging, as many of them have limits on the size of a cache entry, meaning a sufficiently large file would not be cached anyway.
In the end, I would advise you to stick to the established standards: accept the POST request for the file upload, and if you are interested in speeding up the user experience, maybe make the upload persist; this would allow the user to upload a file once and download it in many different formats.
Your sequence of events can be as follows:
Upload your file or files using POST. For an immediate response, you can return the required information using your own headers. (The response should include a document key for accessing the file later.)
Then you can use GET for further operations, passing the above-mentioned document key as a query string, as in the sketch below.
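A minimal Flask sketch of that flow, with made-up endpoint names; the POST returns a document key that later GETs use to fetch the result:

import uuid
from flask import Flask, request, jsonify

app = Flask(__name__)
store = {}  # document key -> uploaded (or converted) bytes

@app.route("/conversionservice/files", methods=["POST"])
def upload():
    key = uuid.uuid4().hex
    store[key] = request.get_data()    # the raw uploaded bytes
    return jsonify({"key": key}), 201  # document key for later GETs

@app.route("/conversionservice/files/<key>", methods=["GET"])
def download(key):
    if key not in store:
        return "not found", 404
    return store[key]                  # the (converted) file content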

Still no 'Access-Control-Allow-Origin' header with resumable upload

In this question, I am partially referring to this one.
I am generating an upload URI with an authenticated request on my server, using the gcloud package for Node. This is done with a file's createResumableUpload method. The actual upload will be done in a browser, which will not have the same origin.
Currently, my PUT requests are cancelled because this header is missing, while OPTIONS requests work fine.
I found three solutions, none of which work:
Number 8 of the troubleshooting list in the documentation on CORS for Google Cloud Storage recommends setting the origin to * (wildcard) using the XML CORS API. While this is outdated, as the API has switched to JSON, it still won't set the header afterwards. I also dislike having to set this to a wildcard; it's pretty insecure.
Setting the CORS option when generating the upload URI, to both * and the actual origin; both are ignored.
Setting CORS as a query-string parameter; this is also ignored.
Am I missing something here, or is this still not fixed after two years?
This is still not supported, unfortunately. Resumable uploads are logically considered to be a single operation, which is assumed to involve one remote entity. As I understand it, only the first Origin header will be respected.
You could work around this in a couple of ways. The best would probably be to have your server sign a URL and let the client start the upload themselves. Alternatively, when starting the upload, you could have your server provide the Origin header that the clients will use, keeping it consistent through the entire operation.
Thanks to Brandon Yarbrough's answer, I was able to fix my situation.
It turns out there's a pretty easy solution: set the origin header using a request interceptor, and supply it as an option to the createResumableUpload method of a file in a bucket.
You can now finish your uploads from a browser.
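The question uses the gcloud package for Node; as an illustration of the same fix, the Python client exposes this through Blob.create_resumable_upload_session, whose origin parameter makes Cloud Storage answer the browser's subsequent PUTs with the matching Access-Control-Allow-Origin header (bucket, object, and origin below are placeholders):

from google.cloud import storage

client = storage.Client()
bucket = client.bucket("my-upload-bucket")   # placeholder bucket
blob = bucket.blob("uploads/video.mp4")      # placeholder object name

# Passing origin= ties the resumable session to the browser's origin.
session_uri = blob.create_resumable_upload_session(
    content_type="application/octet-stream",
    origin="https://app.example.com",        # the browser's origin
)
print(session_uri)  # hand this URI to the browser, which PUTs the data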