What is the character limit of full links (not shortened) created by Twitter's t.co?

I understand there's no theoretical character limit to a URL, though the domain portion is limited to 255 characters.
The consensus seems to be that about 2000 characters is a safe bet, but it all depends on the browser and web server.
The Twitter API FAQ for t.co doesn't mention a limit for expanded links, only for the shortened links.
Does anyone know what the maximum character count for a fully-expanded t.co link is, regardless of browser or target web server?
Is there a quick and easy way to check?
What happens if a link is too long?
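For the "quick and easy way to check" part, one option is to take an existing t.co link and inspect what it expands to. This is only a sketch of my own; the requests usage and the example link are illustrative, not something from Twitter's documentation.

    # Follow a t.co redirect without fetching the target, then measure the
    # length of the expanded URL. The link below is a placeholder, not a real tweet.
    import requests

    def expanded_url_length(tco_link):
        # t.co answers programmatic clients with an HTTP redirect whose
        # Location header is the original, fully expanded URL.
        resp = requests.head(tco_link, allow_redirects=False)
        expanded = resp.headers.get("Location", "")
        return expanded, len(expanded)

    url, length = expanded_url_length("https://t.co/example")
    print(length, url)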

Related

Real Google static map URL limit

One of the features on my site is a landing page that uses static maps to show a route created by the user. I noted some time ago that some routes would break, and presumed that this was due to very long routes exceeding the 2048 character limit. It's a pretty simple fix; I make the routes shorter by pulling out every other point (as many times as it takes to get the URL short enough.) Since these are very long routes and relatively small images, the loss of precision isn't really noticeable.
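A minimal sketch of that thinning loop, assuming the route is a list of (lat, lng) pairs and build_static_map_url is a hypothetical helper that turns those points into a Static Maps URL:

    MAX_URL_LENGTH = 2048  # the limit documented at the time

    def thin_route(points, build_static_map_url, max_len=MAX_URL_LENGTH):
        # Halve the number of points until the generated URL fits the limit.
        url = build_static_map_url(points)
        while len(url) > max_len and len(points) > 3:
            thinned = points[::2]
            if thinned[-1] != points[-1]:
                thinned.append(points[-1])  # always keep the route's end point
            points = thinned
            url = build_static_map_url(points)
        return points, url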
Here's the thing though. I went back to find some older very long routes to test with before deploying, and I couldn't find any that were broken. I'm finding routes with URLs over 6000 characters that are working just fine: http://gmap-pedometer.com/gp/bookmark/view/id/6829704.
In fact, I can't find any routes that are breaking. There must have been a change on the API side. The documentation still says the URL limit is 2048. Does anyone know what the new limit is?
The documentation has since been updated:
Google Static Maps API URLs are restricted to 8192 characters in size.
It would be helpful if they provided a POST API for requesting a static map with a really long encoded path.
In my case, when I drew a driving convex-hull polygon, the static GET API hit this limit.
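As a small guard, you can check against that documented limit before issuing the request; the constant and helper below are illustrative, not part of the Google API:

    STATIC_MAPS_URL_LIMIT = 8192  # per the documentation quoted above

    def check_static_map_url(url):
        # Fail fast instead of letting the API reject an oversized GET request.
        if len(url) > STATIC_MAPS_URL_LIMIT:
            raise ValueError(f"Static Maps URL is {len(url)} characters; the limit is {STATIC_MAPS_URL_LIMIT}.")
        return url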

URL length limitation in SharePoint Online REST

When I'm trying to download files from SharePoint Online using REST, by calling _api/web/GetFileByServerRelativeUrl, I sometimes get the error:
The length of the URL for this request exceeds the configured maxUrlLength value.
Since this is SharePoint Online, there is no way to change this configuration (the limit itself is also not clear: why 256 and not 1024, and why limit it at all?).
So I tried to find a solution and found that I can call another REST method, /_api/web/GetFileById, and provide the unique id. This works in 95% of cases, except when the document library is SitePages or when the URL of the file is really long (even when downloading via GetFileById); the error there is 404 - not found.
The web services do allow accessing and downloading those files, BUT they don't work with OAuth tokens (my customer requires OAuth).
Is there any other solution for this? Or any way to influence the 256-character URL limit?
Thanks
I don't know why the limit is 256 characters, but as for your other questions:
You are right; to prevent this "URL length exceeded" error you should use /GetFileById:
https://site_url/_api/web/GetFileById('file_unique_id')
This API will give a URL-length error only if your "site_url" itself is very long, either from long site names or a deep site hierarchy, in which case the request will fail.
Regarding this line in your question:
"except some case where document library is SitePages"
This is not correct. The request works for all libraries.
For further help you can also refer to https://sharepoint.stackexchange.com/questions/194576/url-request-length-limit-when-using-rest-apis
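For reference, a minimal sketch of what such a call could look like with an OAuth bearer token; the site URL, file GUID, and token are placeholders, and the /$value suffix for fetching the file content is my own assumption rather than something stated in the question:

    import requests

    site_url = "https://contoso.sharepoint.com/sites/somesite"   # placeholder
    file_id = "00000000-0000-0000-0000-000000000000"             # placeholder GUID
    access_token = "<oauth-access-token>"                        # placeholder

    # GetFileById addresses the file by its unique id, so the (possibly very long)
    # server-relative URL never appears in the request URL; /$value returns the
    # file's binary content.
    endpoint = f"{site_url}/_api/web/GetFileById('{file_id}')/$value"
    resp = requests.get(endpoint, headers={"Authorization": f"Bearer {access_token}"})
    resp.raise_for_status()

    with open("downloaded_file.bin", "wb") as f:
        f.write(resp.content)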
There are certain software boundaries for SharePoint, which include the length of the URL. You can find the complete details on URL path length restrictions at the MSDN link below (this still applies to SP2013):
https://technet.microsoft.com/en-in/library/ff919564(v=office.14).aspx
Coming back to your problem, have you tried using the JavaScript object model with sp.js or SPServices? You can find quick-start helpers here:
https://msdn.microsoft.com/library/jj163201.aspx#BasicOps_FileTasks

Tracking opens in email (alternatives to images/pixels)

I was just curious if there are any techniques for recording email opens other than using a hosted pixel/image.
I've read in a few places that Facebook uses bgsound src tags to do this, but it doesn't seem to work in the web-based Gmail client for me.
Any suggestions?
Facebook definitely uses standard tracking pixels (as do nearly all others across the email-sending spectrum). You can also track via click redirection (if someone clicks a link, they can go to your site first, where you record that as evidence of an open, then perform a 301 redirect to another site), but that requires a click, which isn't guaranteed.
At the moment, tracking pixels are the de facto standard. As long as you adhere to good operating principles (pixels should be 1x1, be zero bytes in size, and the HTTP headers should indicate a standard image format and a 200 response code), your tracking pixels should operate cleanly.
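To make the mechanics concrete, here is a rough sketch of both techniques (pixel and click redirect) as a tiny Flask app; the framework choice, routes, and the non-empty 1x1 GIF body are my own illustrations rather than anything prescribed above:

    import base64
    from flask import Flask, Response, redirect, request

    app = Flask(__name__)

    # A 1x1 transparent GIF, served with image headers and a 200 response.
    PIXEL = base64.b64decode("R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7")

    @app.route("/open/<message_id>.gif")
    def track_open(message_id):
        # Record the open however you like; message_id identifies the email.
        app.logger.info("open recorded for %s from %s", message_id, request.remote_addr)
        return Response(PIXEL, mimetype="image/gif", headers={"Cache-Control": "no-store"})

    @app.route("/click/<message_id>")
    def track_click(message_id):
        # Click-redirect variant: record the click, then send the reader on.
        target = request.args.get("url", "https://example.com/")
        app.logger.info("click recorded for %s", message_id)
        return redirect(target, code=301)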

Facebook image URLs - how are they kept from un-authorised users?

I'm interested in social networks and have stumbled upon something which makes me curious.
How does Facebook keep people from playing with URLs and gaining access to photos they should not see?
Let me expand; here's an altered example of a Facebook image URL that came up in my feed:
https://fbcdn-sphotos-g-a.akamaihd.net/hphotos-ak-prn1/s480x480/{five_digit_number}_{twelve_digit_number}_{ten_digit_number}_n.jpg
So those with more web application experience will presumably know the answer to this (I suspect it's well understood), but what is to stop me from changing the numbers and seeing other people's photos that I'm possibly not supposed to see?
[I understand that this doesn't work, I'm just trying to understand how they maintain security and avoid this problem]
Many thanks in advance,
Nick
There are a couple of ways you can achieve this.
The first is to link to a script or action that authenticates the request and then returns an image. You can find an example with ASP.NET MVC here. The downside is that it's pretty inefficient, and you run the risk of using twice the bandwidth for each request (once for your server to grab the image from wherever it's stored, and once to serve it to your users).
The second option is to do what Facebook does and just generate obscure URLs for each photo. As Thomas said in his comment, you're not going to guess a 27-digit number.
The third option is, I think, the best, especially if you're using something like Microsoft Azure or Amazon S3. Azure Blob Storage supports Shared Access Signatures, which let you generate temporary URLs for private files. These can be set to expire in a few minutes or to last a lifetime. The files are served directly to the user, and there's no risk if the URL leaks after the expiration period.
Amazon S3 has something similar with Query String Authentication.
Ultimately, you need to figure out your threat model and make a decision by weighing the pros and cons of each approach. On Facebook, these are images that have presumably been shared with hundreds of friends. There's a significantly lower expectation of privacy, so maybe authenticating every request is overkill. A random, hard-to-guess URL is probably sufficient, lets them serve data through their CDN, and minimizes the amount of processing per request. With option 3, you're still going to have the overhead of generating those signed URLs.
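To illustrate the shape of option 3, here is a rough, generic sketch of an expiring signed URL. Azure SAS and S3 query-string authentication each have their own exact formats; the secret, host, and parameter names below are made up for illustration:

    import hashlib
    import hmac
    import time
    from urllib.parse import urlencode

    SECRET = b"server-side-secret"  # kept on the server, never sent to clients

    def signed_photo_url(photo_path, ttl_seconds=300):
        # Sign the path plus an expiry timestamp so the link stops working later.
        expires = int(time.time()) + ttl_seconds
        sig = hmac.new(SECRET, f"{photo_path}:{expires}".encode(), hashlib.sha256).hexdigest()
        return f"https://cdn.example.com{photo_path}?{urlencode({'expires': expires, 'sig': sig})}"

    def is_valid(photo_path, expires, sig):
        if int(expires) < time.time():
            return False  # the link has expired
        expected = hmac.new(SECRET, f"{photo_path}:{expires}".encode(), hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, sig)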

REST api pagination: make page-size a parameter (configurable from outside)

We have a search/list resource:
http://xxxx/users/?page=1
Internally the page size is fixed and returns 20 items. The user can move forward by increasing the page number. To be more flexible, we are now thinking about also exposing the page size:
http://xxxx/users/?page=1&size=20
This is flexible, as the client can now trade off the number of network calls against the size of each response when searching. Of course, it has the drawback that the server could be hit hard, either by accident or maliciously on purpose:
http://xxxx/users/?page=1&size=1000000
For robustness, the solution could be to configure an upper limit on page size (e.g. 100) and, when it is exceeded, either return an error response or an HTTP redirect to the URL with the highest possible page-size parameter.
What do you think?
Personally, I would simply document a maximum page size, and anything larger than that is simply treated as the maximum.
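A minimal sketch of that "treat anything larger as the maximum" behaviour; the default and maximum values are made up, not taken from the question:

    DEFAULT_PAGE_SIZE = 20
    MAX_PAGE_SIZE = 100

    def resolve_page_size(requested):
        # Fall back to the default for missing or nonsensical values,
        # and silently clamp oversized requests to the documented maximum.
        if requested is None:
            return DEFAULT_PAGE_SIZE
        size = int(requested)
        if size < 1:
            return DEFAULT_PAGE_SIZE
        return min(size, MAX_PAGE_SIZE)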
Managing access to resources, i.e. protecting outside interfaces, is always a good idea: in other words, put a sensible limit in place.
The redirect might be a good idea at development time, i.e. when the user of the API is getting acquainted with the service, but outside of that situation I doubt there is value.
Make sure the parameters are well documented either way.
Have you tested this to see if it's even a concern? If a user asks for a page size of a million, does that really cause all your other requests to stop or slow down? If so, I might look at your underlying architecture first. But if, in the end, this is an issue, I don't think setting a hard limit on page size is bad.
Question: when a user GETs the URI http://xxx/user?page=1, does the response have a link in it to the next page? The previous page? If not, then it's not really RESTful.
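On that last point, a sketch of what a page response with navigation links could look like, so clients follow "next"/"prev" rather than constructing URLs themselves; the field names are illustrative, not prescribed by any standard cited here:

    def page_response(items, page, size, total):
        # Ceiling division to work out the last page number.
        last_page = max(1, -(-total // size))
        base = "http://xxxx/users/"
        links = {"self": f"{base}?page={page}&size={size}"}
        if page < last_page:
            links["next"] = f"{base}?page={page + 1}&size={size}"
        if page > 1:
            links["prev"] = f"{base}?page={page - 1}&size={size}"
        return {"items": items, "page": page, "size": size, "total": total, "links": links}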