One of the features on my site is a landing page that uses static maps to show a route created by the user. I noticed some time ago that some routes would break, and presumed this was due to very long routes exceeding the 2048-character URL limit. It's a pretty simple fix: I make the routes shorter by pulling out every other point, as many times as it takes to get the URL short enough. Since these are very long routes and relatively small images, the loss of precision isn't really noticeable.
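(A rough sketch of the thinning I had in mind is below; `build_url` is a stand-in for whatever function assembles the static-map URL, and the 2048 limit is the value from the old docs.)

```python
def thin_route(points, build_url, max_len=2048):
    """Drop every other point until the static-map URL fits under max_len.

    points is a list of (lat, lng) tuples; build_url is a placeholder for
    whatever assembles the Static Maps URL from them.
    """
    url = build_url(points)
    while len(url) > max_len and len(points) > 2:
        thinned = points[::2]
        # Always keep the final point so the route still ends where
        # the user ended it.
        if thinned[-1] != points[-1]:
            thinned.append(points[-1])
        points = thinned
        url = build_url(points)
    return points, url
```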
Here's the thing though. I went back to find some older very long routes to test with before deploying, and I couldn't find any that were broken. I'm finding routes with URLs over 6000 characters that are working just fine: http://gmap-pedometer.com/gp/bookmark/view/id/6829704.
In fact, I can't find any routes that are breaking. There must have been a change on the API side. The documentation still says the URL limit is 2048. Does anyone know what the new limit is?
The documentation has already been updated:
Google Static Maps API URLs are restricted to 8192 characters in size.
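So a client-side guard only needs the new number. A minimal sketch (the 8192 value is the documented limit quoted above; the function itself is just illustrative):

```python
MAX_STATIC_MAPS_URL = 8192  # documented limit for Static Maps API URLs

def check_static_map_url(url):
    """Fail early instead of letting the API reject an oversized URL."""
    if len(url) > MAX_STATIC_MAPS_URL:
        raise ValueError(
            f"Static map URL is {len(url)} characters; "
            f"the documented limit is {MAX_STATIC_MAPS_URL}."
        )
    return url
```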
They should provide a POST API to get a static map with a really long encoded path. In my case, when I drew a driving convex-hull polygon, the static GET API hit its limit.
I understand there's no theoretical character limit on a URL, but the domain portion is limited to 255 characters.
The consensus seems to indicate a safe bet is about 2000 characters, but it all depends on the browser and web server.
The Twitter API FAQ for t.co doesn't mention a limit for expanded links, only for the shortened links.
Does anyone know what the maximum character count for a fully-expanded t.co link is, regardless of browser or target web server?
Is there a quick and easy way to check?
What happens if a link is too long?
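One quick, unofficial way to check is just to follow a t.co redirect yourself and measure the final URL; a sketch using the `requests` library (the short link in the comment is a placeholder):

```python
import requests  # third-party; pip install requests

def expanded_length(short_url):
    """Follow a t.co redirect and report the length of the final URL.

    This is an empirical check, not an official limit from Twitter.
    """
    resp = requests.head(short_url, allow_redirects=True, timeout=10)
    final_url = resp.url
    return final_url, len(final_url)

# url, n = expanded_length("https://t.co/xxxxxxxxxx")  # placeholder link
# print(n, url)
```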
I built a website for someone and I used https://gtmetrix.com to get some analytics, mainly because the wait time is huge (~20 sec) even though there are no heavy images. Here is a screenshot:
http://img42.com/05yvZ
One of my problems is that it takes quite a long time to perform the 301 redirect. I'm not sure why, but if someone has a key to the solution I would really appreciate it. At least some hints on what to search for would be nice.
The second problem is that after the redirection, the waiting time is still huge. As expected, I have a few plugins; their JavaScript files are called approximately 6 seconds after the redirection. Could someone point me in the right direction on where to look?
P.S. I have disabled all plugins and started from a plain Twenty Eleven theme, but I still have waiting time during the redirection and a smaller delay after it.
Thanks in advance
A few suggestions:
1 and 2.) If the redirect is adding noticeable delays, test different redirect methods. There are several approaches, including HTML meta and server-side (i.e. PHP) methods; I typically stick to server side. If you're seeing noticeable delays with a server-side method, that's a strong indicator that you're experiencing server issues, and it may very well be your server causing your speed problems all along; contact your hosting provider. (See the timing sketch after this list for a way to see where the time is going.)
3.) Take a look at the size of your media: images and video, and also Flash if you're using any. Often it's giant images that were sliced or saved poorly and not optimized for the web in image-editing software like Photoshop. Optimize your images for the web and re-save them at a lower weight to save significantly on load time. In many cases nowadays you can also avoid clunky images entirely by building the area out with pure CSS3 (e.g. small repeatable .gifs used for gradients or borders).
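Regarding points 1 and 2, here's a rough sketch of how you could time each hop of the redirect chain with Python's `requests` library, to see whether the delay is in the 301 itself or in the page served after it (the URL is a placeholder for your site):

```python
import requests  # third-party; pip install requests

def time_redirect_chain(url):
    """Print how long each hop in a redirect chain takes.

    Helps separate a slow 301 from a slow page after the 301.
    """
    resp = requests.get(url, allow_redirects=True, timeout=60)
    for hop in resp.history:
        print(f"{hop.status_code} {hop.url} took {hop.elapsed.total_seconds():.2f}s")
    print(f"{resp.status_code} {resp.url} took {resp.elapsed.total_seconds():.2f}s")

# time_redirect_chain("http://example.com/")  # replace with your own site
```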
I'm interested in social networks and have stumbled upon something which makes me curious.
How does facebook keep people from playing with URLs and gaining access to photos they should not?
Let me expand; here's an altered example of a Facebook image URL that came up on my feed:
https://fbcdn-sphotos-g-a.akamaihd.net/hphotos-ak-prn1/s480x480/{five_digit_number}_{twelve_digit_number}_{ten_digit_number}_n.jpg
Those with more web application experience will presumably know the answer to this (I suspect it's well understood), but what is to stop me from changing the numbers and seeing other people's photos that I'm possibly not supposed to see?
[I understand that this doesn't work, I'm just trying to understand how they maintain security and avoid this problem]
Many thanks in advance,
Nick
There are a couple of ways you can achieve this.
The first is to link to a script or action that authenticates the request and then returns the image. You can find an example with ASP.NET MVC here. The downside is that it's pretty inefficient, and you risk using twice the bandwidth for each request (once for your server to grab the image from wherever it's stored, and once to serve it to your users).
The second option is to do what Facebook does and just generate obscure URLs for each photo. As Thomas said in his comment, you're not going to guess a 27-digit number.
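A tiny sketch of that "obscure URL" idea, using Python's `secrets` module; this is purely illustrative and not how Facebook actually builds its paths:

```python
import secrets

def obscure_photo_path(photo_id):
    """Attach an unguessable token to a photo's URL path (illustrative only)."""
    token = secrets.token_urlsafe(24)  # ~32 URL-safe chars, ~192 bits of entropy
    return f"/photos/{photo_id}/{token}.jpg"
```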
The third option is, I think, the best, especially if you're using something like Microsoft Azure or Amazon S3. Azure Blob Storage supports Shared Access Signatures, which let you generate temporary URLs for private files. These can be set to expire in a few minutes or to last a lifetime. The files are served directly to the user, and there's no risk if the URL leaks after the expiration period.
Amazon S3 has something similar with Query String Authentication.
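For example, with S3 a pre-signed URL can be generated via boto3; the bucket and key names below are placeholders:

```python
import boto3  # third-party; pip install boto3

def temporary_image_url(bucket, key, expires_seconds=300):
    """Generate a pre-signed S3 URL that stops working after a few minutes.

    Credentials come from the usual boto3 configuration (environment
    variables, ~/.aws/credentials, etc.).
    """
    s3 = boto3.client("s3")
    return s3.generate_presigned_url(
        "get_object",
        Params={"Bucket": bucket, "Key": key},
        ExpiresIn=expires_seconds,
    )

# url = temporary_image_url("my-private-photos", "users/42/photo.jpg")  # placeholder names
```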
Ultimately, you need to figure out your threat model and make a decision weighing the pros and cons of each approach. On Facebook, these are images that have presumably been shared with hundreds of friends. There's a significantly lower expectation of privacy, so authenticating every request may be overkill. A random, hard-to-guess URL is probably sufficient; it lets them serve data through their CDN and minimizes the amount of processing per request. With option 3, you still have the overhead of generating those signed URLs.
I'm developing an application which gets the top 20 pages for each letter. At this time there's basically no problem with the limitation, but I need to know: what is the exact number of requests allowed from one IP address per second?
Best regards,
There is no exact number per second. Like any other site, if you make too many requests you will likely get blocked as a denial-of-service attack. If you make too many over an extended period of time, Facebook will likely block you, at least temporarily.
If you are trying to crawl Facebook, then you should obey the rules defined in their robots.txt file like any other crawler/spider should.
https://www.facebook.com/robots.txt
http://www.facebook.com/apps/site_scraping_tos_terms.php
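If it helps, here's a small sketch that checks Facebook's robots.txt with Python's standard `urllib.robotparser` before fetching a page (the user-agent string is a placeholder for whatever your crawler identifies as):

```python
from urllib import robotparser

def allowed_to_fetch(url, user_agent="my-crawler"):
    """Check Facebook's robots.txt before fetching a page."""
    rp = robotparser.RobotFileParser()
    rp.set_url("https://www.facebook.com/robots.txt")
    rp.read()
    return rp.can_fetch(user_agent, url)

# print(allowed_to_fetch("https://www.facebook.com/somepage"))  # placeholder page
```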
That said, I did around 15 million update requests per day back when they had profile boxes, and never had a problem.
We have a search/list resource:
http://xxxx/users/?page=1
Internally the page size is fixed and returns 20 items. The user can move forward by increasing the page number. But to be more flexible, we are now thinking of also exposing the page size:
http://xxxx/users/?page=1&size=20
This is flexible because the client can now trade off the number of network calls against the size of each response when searching. Of course, it has the drawback that the server could be hit hard, either by accident or maliciously on purpose:
http://xxxx/users/?page=1&size=1000000
For robustness, the solution could be to configure an upper limit on page size (e.g. 100) and, when it is exceeded, either return an error response or an HTTP redirect to the URL with the highest allowed page-size parameter.
What do you think?
Personally, I would simply document a maximum page size, and anything larger than that is simply treated as the maximum.
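A minimal sketch of that clamping behaviour (the default of 20 comes from the question; the 100 maximum is just the example value mentioned above):

```python
DEFAULT_PAGE_SIZE = 20
MAX_PAGE_SIZE = 100  # the documented upper bound (example value)

def clamp_page_size(requested):
    """Treat anything above the documented maximum as the maximum."""
    try:
        size = int(requested)
    except (TypeError, ValueError):
        return DEFAULT_PAGE_SIZE
    return max(1, min(size, MAX_PAGE_SIZE))
```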
Managing access to resources, i.e. protecting outside interfaces, is always a good idea: in other words, put a sensible limit on page size.
The redirect might be a good idea during development, i.e. when the user of the API is getting acquainted with the service, but outside of that situation I doubt there is value.
Make sure the parameters are well documented either way.
Have you tested this to see if it's even a concern? If a user asks for a page size of a million, does that really cause all your other requests to stop or slow down? If so, I might look at your underlying architecture first. But if, in the end, this is an issue, I don't think setting a hard limit on page size is a bad idea.
Question: when a user GETs the URI http://xxx/user?page=1, does the response have links to the next and previous pages? If not, then it's not really RESTful.
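For what it's worth, a rough sketch of a paginated response carrying self/prev/next links (the base URL and field names are illustrative, not the asker's actual API):

```python
def paginated_response(items, page, size, total_count, base_url="http://xxxx/users/"):
    """Wrap a page of results with self/prev/next links, HATEOAS-style."""
    last_page = max(1, -(-total_count // size))  # ceiling division
    links = {"self": f"{base_url}?page={page}&size={size}"}
    if page > 1:
        links["prev"] = f"{base_url}?page={page - 1}&size={size}"
    if page < last_page:
        links["next"] = f"{base_url}?page={page + 1}&size={size}"
    return {
        "items": items,
        "page": page,
        "size": size,
        "total": total_count,
        "links": links,
    }
```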