Resizing images client-side to thumbnails results in jaggy, ugly pictures

I'm trying to generate thumbnails on the fly so I won't have to store both thumbs and full-size images. I had them done with PHP (with the excellent imagecopyresampled function), which worked great.
Now I'm looking to do something similar without PHP, and I'm curious about the alternatives. Having the browser do the rendering doesn't seem to be a good idea: I get good results with Internet Explorer, Safari and Chrome, whereas both Firefox and Opera produce jaggy thumbs. As I understand it, this has to do with whether the browser scales using bicubic interpolation or not.
I'm now wondering whether there's a way to let JavaScript do what PHP did earlier - bicubic interpolation, which results in better-looking thumbs - or whether there is a fix for the browser issues (I know about the CSS property -ms-interpolation-mode). In general, what's the opinion on client-side generated thumbs? Maybe it's better to keep going with PHP if there's no reasonable alternative?
PS: Does it matter whether I rescale the images using JavaScript or CSS?

There are two different reasons to prefer server-side thumbnail generation. The first, as you've discovered, is the inconsistency of results from browser to browser. The second is that client-side resizing requires the entire full-size image to be downloaded to the client, which results in a significant page slowdown.
Your issue with browser scaling may not have anything to do with whether they use bicubic interpolation. There are different implementations of bicubic, some of which do a great job of shrinking an image and some of which don't.
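That said, if you do want client-side thumbs, a common workaround is to downscale on a <canvas> in several 50% steps, which approximates bicubic quality even in browsers that only do cheap scaling. A minimal sketch, assuming the image is same-origin (so the canvas isn't tainted) and fully loaded; the function name is illustrative:

function makeThumb(img, targetW, targetH) {
    var w = img.naturalWidth || img.width;
    var h = img.naturalHeight || img.height;
    var src = img;
    // Halve the image repeatedly; each 50% step keeps the filter's
    // sampling ratio small, which avoids the jaggies of one big jump.
    while (w / 2 >= targetW && h / 2 >= targetH) {
        w = Math.floor(w / 2);
        h = Math.floor(h / 2);
        var step = document.createElement('canvas');
        step.width = w;
        step.height = h;
        step.getContext('2d').drawImage(src, 0, 0, w, h);
        src = step;
    }
    // Final pass to the exact thumbnail size.
    var out = document.createElement('canvas');
    out.width = targetW;
    out.height = targetH;
    out.getContext('2d').drawImage(src, 0, 0, targetW, targetH);
    return out; // insert this canvas in place of the <img>, or call out.toDataURL()
}

Note that this doesn't address the second objection: the full-size image still has to be downloaded before you can shrink it.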

Related

EWW: very slow opening of certain pages

I'm using the EWW Emacs browser to open various remote pages (mostly documentation), which is very handy most of the time.
I'm still trying to understand why certain pages take 4-6+ seconds to render in eww (pages that take <1s in Chrome, for comparison).
For my tasks, I only care about the loaded content - images and fancy styles are not needed.
Is there any simple way to speed it up?
For example, setting readable mode or disabling images before calling eww, if that's possible at all.
Update from a few weeks later
I made a few experiments, and from what I found, the biggest contributing factor in my case is pages with lots of third-party fonts.
I wasn't able to find a way to disable font fetching in the eww source code, so a true text-based browser like w3m was probably the better solution in the first place.
Any clarifying comments and answers are still very welcome.

How to get browser window resolution while loading Unity3D

Really need your input. I have a Unity furniture configurator, and the client decided he wants to make it fully adjustable depending on the browser window. It would be perfect if Unity could get the resolution while loading.
I found this http://helloracer.com/unity/ - it is exactly the thing I need.
I cannot figure out how to achieve this result.
Thanks!!!
Bearing in mind you tagged your question with php: you cannot do this with PHP alone. You need to do it with JavaScript:
window.screen.availHeight
window.screen.availWidth
and then transfer them to your PHP code, which decides what to do based on those values. You can achieve this using two HTTP calls in the same page.
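A sketch of the two-call approach: on the first request a small script reloads the page with the dimensions in the query string, so PHP can read them on the second request (the parameter names are illustrative):

// First load: no dimensions in the URL yet, so reload once with them appended.
if (window.location.search.indexOf('w=') === -1) {
    window.location.search = '?w=' + window.screen.availWidth +
                             '&h=' + window.screen.availHeight;
}
// Second load: PHP reads $_GET['w'] and $_GET['h'] and sizes the Unity embed.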

Mysterious severe performance issue on mobile Safari for just one web page

I have a very large (as in feature-rich) responsive website. It consists of over 150 different UI pages, and so far both rendering and performance on mobile are fine (I'm using an iPhone 5 to test, and occasionally other devices).
Except for one page, which I am coding now. Here's the temporary dev URL:
http://www.jungledragon.org/apps/jd3/daylight
On mobile Safari, this page performs extremely poorly:
- It takes several seconds to load, much slower than all other pages
- Once loaded, a touch scroll can take 5-10 secs to do anything
- Mobile Safari as a whole becomes unresponsive, or close to it
I'm trying to troubleshoot the root cause of the issue, but no luck so far. I cannot reproduce it in any desktop browser using a small viewport, not even desktop Safari. On the desktop, I've used several web debuggers to check for any long-running processes, but found none.
Some explanation of what the page does:
- It will try to detect your current location (using alerts, I discovered this takes little time)
- Based on your current location and the current date, it will calculate the sun times for the day. This too is nearly instant
- Based on the sun times, it will dynamically generate a table and finally show it on screen
Here's what I am seeing in detail on mobile Safari:
- The server response is fine; the page loads quickly and shows the site header soon
- Next, the content body is blank and stays blank for several seconds (which I cannot explain)
- Finally, the sun times table renders
This completes the page, yet from this point on both the page and the browser are extremely sluggish: scrolling takes forever, and Safari's controls are nearly unresponsive. It looks and feels as if the browser could crash at any moment.
Based on my research so far, and given the fine performance of all other pages on the site, I'm totally in the dark about what causes this.
Edit: Using BrowserStack I did some more tests:
- iPhone 4S: no issues
- iPhone 5S: no issues
- Galaxy S II: no issues
- HTC One X: no issues
- iPhone 5: same issue as above
So I'm not seeing the issue in any desktop browser, or on any mobile device except the iPhone 5 (iOS 7).
Edit 2: adding more findings and explanation based on comments received:
The issue does not seem animation-related. I have a number of proof points for this. A simple one is that the page does not do any visual rendering much different from the other 100+ pages on the site, which have no performance issue.
The second proof point can be explained by understanding what is going on in this specific page. What happens is this:
1. The system detects the current user's time and location. For now, assume the user actually allows location sharing. Using a simple alert, I've been able to prove that location detection is not the bottleneck.
2. Based on the user's time and location, the daylight periods are calculated. This is done using the SunCalc JS library (https://github.com/mourner/suncalc).
3. The SunCalc library returns an array of daylight periods for the given date and location. I render that array as a table with colored background rows. That is all.
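For reference, this is roughly what steps 2 and 3 look like - SunCalc.getTimes() is the library's documented call, while the table-building is a simplified stand-in for my real rendering code (latitude, longitude and the 'suntimes' element id are illustrative):

var times = SunCalc.getTimes(new Date(), latitude, longitude);
// Pick out the period boundaries and render one colored row per period.
var rows = [
    ['dawn', times.dawn],
    ['sunrise', times.sunrise],
    ['solar-noon', times.solarNoon],
    ['sunset', times.sunset],
    ['dusk', times.dusk]
];
var html = rows.map(function (r) {
    return '<tr class="' + r[0] + '"><td>' + r[0] + '</td><td>' +
           r[1].toLocaleTimeString() + '</td></tr>';
}).join('');
document.getElementById('suntimes').innerHTML = '<table>' + html + '</table>';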
Rendering a table with 12 rows and different background colors is not likely to cause such enormous issues. My theory therefore is that step 2 is the root cause. The SunCalc library has a lot of advanced math in it. I am thinking (without evidence yet) that either the mobile processor is horrible at those kinds of operations, and/or the specific calculations for some reason cause a peak in memory usage (or even a leak).
As an additional proof point: once the page is loaded on mobile, use the right arrow next to the date to navigate to "tomorrow". Again you will see the extremely bad performance. During that step there is no network activity, no location detection, nothing - just calculations and some very simple rendering. This supports my theory that the issue may lie in the calculation.
Sadly, it looks like native JavaScript profilers on that platform are non-existent. You may also want to try the JavaScript microtime function referenced in this answer. You will need to seed your script with calls at the points where you think the bottleneck might be.
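For example, a small timing helper you could sprinkle around the suspected hotspots; the SunCalc call is just one example of where to seed it, and latitude/longitude are assumed to be in scope:

function timed(label, fn) {
    var now = (window.performance && performance.now)
        ? function () { return performance.now(); }
        : function () { return Date.now(); };
    var t0 = now();
    var result = fn();
    // On mobile Safari you may need alert() instead of console.log().
    console.log(label + ' took ' + (now() - t0).toFixed(1) + ' ms');
    return result;
}

var times = timed('suncalc', function () {
    return SunCalc.getTimes(new Date(), latitude, longitude);
});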
Just ran this through the Chrome remote debugger (https://developers.google.com/chrome-developer-tools/docs/remote-debugging) on my S3, and it looks like Modernizr's cancelZoom function (showing up in jd3_0006.js) is getting called recursively too many times, or by too broad a selector. I've uploaded the profiles to Dropbox: https://www.dropbox.com/s/kubxk44smm6qqkx/jungledragon_debug..zip
You can import them into Chrome's debugger on the "Profiles" tab.
I believe your performance problem centers on the use of navigator.geolocation.getCurrentPosition() in your runMap() function:
if (urlDate != null) {
    urlPos(latitude, longitude);
} else {
    if (navigator.geolocation) {
        $(".img-loading").show(100);
        // options: accept a cached fix up to 10 min old, give up after 10 s
        navigator.geolocation.getCurrentPosition(successPos, errorPos, {maximumAge: 600000, timeout: 10000});
    } else {
        errorPos('');
    }
}
Consider using watchPosition() instead, with a callback that will not halt processing of the script thread. You can cancel the watchPosition() updates using clearWatch().
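A sketch of that substitution, reusing the successPos and errorPos callbacks and the options from your snippet; here the watch cancels itself after the first successful fix:

if (navigator.geolocation) {
    var watchId = navigator.geolocation.watchPosition(
        function (pos) {
            navigator.geolocation.clearWatch(watchId); // stop after the first fix
            successPos(pos);
        },
        errorPos,
        { maximumAge: 600000, timeout: 10000 }
    );
} else {
    errorPos('');
}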
So I've played with this some more and ran the "Timeline" feature in Chrome (load this file into your Chrome timeline tool: https://www.dropbox.com/s/2vpl6z1ntuk3aqj/TimelineRawData-20140328T105820.json), and it looks like this might be your main problem.
Your scripts and libs (including Google Maps and jQuery) are getting evaluated AFTER the HTML is parsed and Google Analytics has run, because they are at the bottom of the body rather than in the head. Unless you have a very good reason to do that, I would recommend moving them to the head.
There seems to be a separate problem with scrolling, but perhaps it will be resolved by this change.

Accurate browser detection/redirect possible using JavaScript?

Please forgive me if the answer to this is somewhere else on this site or online. If it is, I sure haven't found it in the past several days of searching.
What I am hoping to find is an "accurate" method of detecting the browser and redirecting to a simple, static page if it is not a recent browser.
The samples I have found so far often have not provided an accurate representation of the actual browser being used. For instance:
- When testing with Navigator 9, I get a message that I'm using Firefox 2
- When testing with Maxthon 3, it reports I'm using IE 9
My site displays correctly in all the current browsers I've been testing with. But I wish I could have a basic static page for the 0.01% who still use an old browser for whatever reason. They could still get some basic information from my site, as well as be encouraged to update to a more current browser.
If anyone has any useful suggestions, I'd greatly appreciate them.
Thanks so much.
Cheers,
David
Browser detection is never perfect, for a variety of reasons. If you are using jQuery, you should look into jQuery.browser.
I'd try to detect the browser on the server side and do an HTTP redirect if the browser is something non-standard. Most decent frameworks have functionality to detect the browser from the user-agent string. Again, this is not perfect, mainly because of the data browsers report. Also, if Maxthon reports that it's IE, that's because it is based on IE, and therefore the layout engine should be the same.
So you either:
- support a small number of browsers and cater for their quirks, sending all other browsers to a basic page (this sucks for future versions of browsers, because they might be standards-compliant but will still get your very basic page), or
- have a standards-compliant page for all browsers and then define alternatives for the ones that give you problems.
I'd go for the second option. It usually boils down to one version for all browsers and a number of hacks for various versions of IE. Also, remember to avoid padding in your CSS and use margins instead.
In the end, you probably shouldn't be testing for browser names and version numbers, but for supported features. Try using Modernizr.
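As a sketch of what feature testing can look like even without a library - the tested features and the fallback URL are illustrative, so substitute whatever your site actually requires:

// Test the capabilities the site needs instead of sniffing names/versions.
function supportsRequiredFeatures() {
    var hasQuerySelector = !!document.querySelector;
    var hasCanvas = !!document.createElement('canvas').getContext;
    var hasJSON = typeof JSON !== 'undefined' && !!JSON.parse;
    return hasQuerySelector && hasCanvas && hasJSON;
}
if (!supportsRequiredFeatures()) {
    window.location.href = '/basic.html'; // hypothetical static fallback page
}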
The $.browser property is deprecated as of jQuery 1.3. On the jQuery support page, they strongly recommend using feature detection (jQuery.support) instead of the jQuery.browser property.
Actually, this has already been answered in another question; please see: How can you detect the version of a browser?

WebGL framework - what's the best choice? X3DOM?

I'm about to start a web application that will use interactively generated 3D content. The aim is to let it run natively in the browser, i.e. no Flash allowed, only JavaScript + HTML5.
Rather than using pure WebGL, it seems better to use a lib that offers a more high-level interface.
The approach of X3DOM looks great to me - it looks like it's supposed to become native in browsers eventually, and the lib will pave the road.
But after my first impressions I'm not sure it's lightweight enough. Apart from the 400 kB JS file, it slows down Firefox.
The features I need are not many. The whole scene setup could easily be done by hand. But I need user interaction, including figuring out where the user clicks. And later on I want to be able to load and insert 3D objects in a common file format.
PS: The browsers of choice are Firefox and WebKit-based ones, desktop and mobile. I don't care about IE.
PPS: Yes, I know the question: WebGL Framework
X3DOM is great when you come from an X3D background (and it's developed by great people), but if you have no preference whatsoever, Three.js would be my pick.
I looked at most WebGL frameworks just last week, and indeed it seems almost every one of them is in the 300 kB range. That's too heavy for me, too. Luckily I found lightgl.js, which has everything you need to get started in 28 kB, MIT license.
The main thing for me is just abstracting canvas, shader, and texture initialization. But lightgl.js also has some mouse handling, model loading, etc.
I think the decision boils down to whether you want a more designer-oriented or a more programmer-oriented approach.
X3DOM: its use of X3D to describe the scene lends itself to a designer approach; with just the X3DOM CSS and JS added, one can do this:
<X3D><Scene><Shape><Box/></Shape></Scene></X3D>
Three.js: only allows scene generation through JavaScript, and a lot of additional code is necessary just to set up the canvas. View the source of this simple box example: http://stemkoski.github.com/Three.js/Template.html
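For comparison, here is a minimal Three.js box, roughly the JavaScript-only equivalent of the X3D snippet above (a sketch against a recent Three.js; class names such as BoxGeometry have changed between versions):

var scene = new THREE.Scene();
var camera = new THREE.PerspectiveCamera(75, window.innerWidth / window.innerHeight, 0.1, 1000);
camera.position.z = 3;
var renderer = new THREE.WebGLRenderer();
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement); // the canvas is created in code, too
var box = new THREE.Mesh(new THREE.BoxGeometry(1, 1, 1), new THREE.MeshNormalMaterial());
scene.add(box);
renderer.render(scene, camera);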
Neither way is wrong; I prefer designing the scene and then using JS when needed for any computations.