Adding &js_fast_load=0&css_fast_load=0&strip=0 to URL triggers mojibake - liferay-6

I have a Liferay 6.2 page that shows fine.
For debugging purposes, I added &js_fast_load=0&css_fast_load=0&strip=0 to the URL (it's a production server that I cannot modify).
Problem: by doing so, the page's encoding gets mixed up (mojibake).
What could be triggering the problem, and how can I solve it?

I don't know the root cause of the problem.
But since these URL parameters are only used for debugging, an easy way to "solve" it is to manually switch the page encoding to UTF-8 in the browser.
Any better solution is very welcome!

You mention in your own answer that switching to UTF-8 fixes the issue. I'd add that this points to a general encoding problem on some level. I always recommend strictly standardizing on the same encoding everywhere: from the database, filesystem, and app server through to the HTTP/HTML layer. Mixing encodings is a recipe for disaster, mainly because it will only surface in edge cases unless you routinely work with non-Latin character sets.
My favorite way to generate non-Latin test data that still makes sense when you only speak languages written in the Latin alphabet is http://fliptitle.com. If that data goes through unscathed, odds are your configuration is correct throughout.
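For reference, these are the two places where the encoding is typically declared to the browser (a minimal sketch; where exactly to configure them in a given Liferay installation depends on your setup):

    HTTP response header:  Content-Type: text/html; charset=UTF-8
    HTML document head:    <meta charset="utf-8">

If the debug-mode responses (unminified, unstripped resources) are served without one of these declarations, the browser falls back to guessing the encoding, which would be one plausible explanation for mojibake appearing only with those URL parameters.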

Related

Is there a way to end styleformat on Enter in TinyMCE

I'm working with an indirect implementation of style_formats through Optimizely. What I want to achieve is for the chosen style to end on a line break (Enter). This works by default for headers, but not for formats I've added myself.
I've read through the documentation (https://www.tiny.cloud/docs/configure/content-formatting/) without finding an answer, as far as I can tell, but I assume it's possible given the natural behaviour of headers.
Anyone with experience on this issue?
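For anyone landing here later: the behaviour usually hinges on whether a format is registered as a block format or an inline format. A minimal sketch of the two shapes in plain TinyMCE (titles and class names are made up, and whether this carries over to Optimizely's indirect style_formats setup is an assumption to verify):

    tinymce.init({
      selector: 'textarea',
      style_formats: [
        // Block format: applied to the whole paragraph element, so pressing
        // Enter starts a fresh block, similar to the built-in header behaviour.
        { title: 'Callout', block: 'p', classes: 'callout' },
        // Inline format: wraps the selection in a span and tends to persist
        // across line breaks, which may be what you are seeing.
        { title: 'Highlight', inline: 'span', classes: 'highlight' }
      ]
    });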

How to display special symbols in Gtk.Label?

I am writing an Instagram client for Ubuntu in Vala, and I'm using Gtk.Labels to display post titles, comments, etc. The problem is that the received data sometimes contains special symbols such as emoji, and currently they are displayed incorrectly, as in the picture (squares containing six hex digits):
I guess it's not a problem in my application, because I've seen the same behavior in other apps (for example, Pantheon Files). Still, this is not how I want my program to behave; I want these symbols to be displayed correctly.
So, my question is: is it possible to achieve the behavior I want? And if it is possible, then how?
There was indeed an issue with the font I'm using. I installed the ttf-ancient-fonts package (following https://www.kirsle.net/blog/entry/make-emoji-work-in-linux) and now it is working.
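For other Ubuntu users hitting this, installing the package mentioned above should be a one-liner (assuming a Debian-style package manager):

    sudo apt-get install ttf-ancient-fonts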

AngularJS Forms and I18N support - not reading Japanese characters

A quick question regarding Angular forms and Japanese characters. I'm using Angular 1.2.17 and a modern Chrome browser on the latest Mac OS X.
I'm writing an AngularJS application for the Japanese market. Everything works great when displaying kana etc. on the HTML pages. There are no issues with the web server or database; UTF-8 support runs throughout the application.
However, the AngularJS forms do not read kanji/hiragana/katakana unless the word or sentence starts with a Latin character. Angular's $scope appears not to register that Japanese characters have been typed at all unless they are prefixed with a Latin character.
Example:
こにちわ does NOT register when typed into the input field, and hence form validation fails because it thinks a required field is empty.
Whereas:
adsfこにちわ does register, and the form can be submitted successfully. End to end, the Japanese characters are handled correctly and are stored in the DB correctly, so Angular/JS is parsing the UTF-8 text fine. The issue seems to be in how Angular binds the data to $scope when only Japanese characters are provided; it doesn't handle this properly by default.
Does anyone know of any HTML or Angular configuration (a required Angular module, parameters, meta tags, etc.) that would coerce the form to behave properly? I haven't tested it, but I'm fairly certain this issue is not specific to Japanese characters; anyone working with a non-Latin alphabet has likely experienced the same behaviour.
Must be missing something obvious here.
Thank you for any help at all!
OK, updating this question very late: I actually solved it very shortly after asking.
This turned out to be a time-waster of a question; apologies.
But if anyone comes across a similar problem, please check for any regex declarations on the form fields, for instance an ng-pattern="/^[a-zA-Z]/".
Yes, this does what it says and excludes kanji. Surprisingly, it does NOT then put a helpful validation error on the form field, so from a UI perspective it looks as if the foreign-language characters were simply never registered.
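To make the failure mode concrete, here is a minimal sketch (form and field names are made up). The first pattern silently rejects any value that doesn't start with a Latin letter; the second widens it with the common Japanese Unicode ranges:

    <!-- Rejects こにちわ: the value must start with a Latin letter -->
    <input type="text" name="title" ng-model="title" required
           ng-pattern="/^[a-zA-Z]/">

    <!-- Also accepts Japanese: hiragana U+3040-309F, katakana U+30A0-30FF,
         CJK ideographs U+4E00-9FFF -->
    <input type="text" name="title" ng-model="title" required
           ng-pattern="/^[a-zA-Z\u3040-\u30FF\u4E00-\u9FFF]/">

Angular does set $error.pattern on the failing field; it just renders no message by default, so something like ng-show="myForm.title.$error.pattern" makes the rejection visible instead of looking like the characters were never typed.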

QR code with URL, does it *REALLY* need the http://?

It seems like most (if not all) QR readers on my iPhone handle URLs without the http:// just fine, but I was wondering whether that is universal. Android? BlackBerry? Is there an RFC somewhere that I should be reading?
I'm building a QR management/URL-shortener system and was wondering whether the prefix is absolutely necessary. If not, I can drop 7 characters from my QR URLs and keep them at the lowest level of complexity (16 characters or fewer), which, from everything I've read, is a Good Thing™.
I haven't found any definitive documentation that says it must have it. But after testing a number of QR reader apps, it's clear that many of them will 'guess' at a URL if there is no http:// in it, while many others do not and display it as just a string. So in practice it really does need it: if any apps won't read it, I have to bow to them and add it for all of them.
Hey Dan, I am the developer of Barcode Scanner and just saw your question. I have a few more tidbits of info which may help.
There is no real 'standard' for this; I suppose the HTTP specification is the closest thing and technically it does say you need "http://". This wiki has everything we think we know about standards and de facto standards in this area.
I can tell you that QR codes have special modes to encode digits only, and alphanumeric-only text. The alpha mode includes only capital letters, but does include key punctuation like colon and slash. So, HTTP://EXAMPLE.ORG/BAR ought to be encodable in QR codes in fewer bytes than http://example.org/bar.
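To put rough numbers on that, here is a back-of-the-envelope sketch (segment overheads shown are for version 1-9 codes):

    // Approximate bit cost of a single QR segment.
    // Alphanumeric mode: 4-bit mode indicator + 9-bit length field,
    // 11 bits per character pair, 6 bits for a lone trailing character.
    // Byte mode: 4-bit mode indicator + 8-bit length field, 8 bits per char.
    function alnumBits(s) { return 4 + 9 + Math.floor(s.length / 2) * 11 + (s.length % 2) * 6; }
    function byteBits(s)  { return 4 + 8 + s.length * 8; }

    alnumBits("HTTP://EXAMPLE.ORG/BAR"); // 134 bits
    byteBits("http://example.org/bar");  // 188 bits
    // A version 1-L code holds 19 data codewords = 152 bits,
    // so only the all-caps alphanumeric form fits.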
URLs themselves are case-sensitive however. It's not necessarily OK to uppercase a URL. But the server application may be case-insensitive. If you control the endpoints and know you can use all uppercase, this is a way to perhaps squeeze into version 1.
Finally, I'll say that version 1 QR codes are a little weird since they have no alignment pattern. Without that fourth point to find, the scanner (at least the dumb-but-effective process employed by Barcode Scanner, and by extension a lot of scanners) can't account for perspective distortion; it happens to work only at small tilts. Version 2 actually has a small decodability advantage thanks to that alignment pattern.
QR readers usually identify as a URL any text that conforms to ANY of these conditions:
Text starts with http:// (or HTTP://)
Text starts with www.
Text starts with MEBKM: (NTT DoCoMo format for web bookmark)
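As a sketch, the heuristic amounts to something like this (illustrative only; every reader app has its own rules):

    // Rough classifier mirroring the three conditions above.
    function looksLikeUrl(text) {
      return /^http:\/\//i.test(text) // http:// or HTTP://
          || /^www\./i.test(text)     // www. prefix
          || /^MEBKM:/i.test(text);   // NTT DoCoMo bookmark format
    }

    looksLikeUrl("WWW.EXAMPLE.ORG"); // true
    looksLikeUrl("1QR.ES/AAAA");     // false: shown as plain text by many readers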
You should be fine without http:// if your URL starts with www., but that's not your case.
As Sean points out, you should use all-caps URLs instead.
You can fit up to 25 alphanumeric characters in a version 1, error correction level L QR code, which is just enough for a URL shortener.
Example:
HTTP://1QR.ES/AAAAAAAAAA
Fun fact: Samsung Galaxy phones (e.g. the S8 and S9) will open a QR code whose URL contains uppercase "HTTP" or "HTTPS" in their text editor. Encode the same URL with lowercase "http" or "https" and it opens in a browser as expected.

MySource Matrix - Opinions

Has anyone had experience with MySource Matrix as a content management system? If so, thoughts/opinions/comments?
Thanks in advance.
Absolutely excellent. It takes a little while to get used to how it does things with its asset structure, but it is really flexible and powerful. The simple edit interfaces are great too.
Make sure you give it enough hardware. If you want dynamic content without caching, you need heaps of grunt to make it hum.
Hands down the best CMS I have ever used. We use it on the Pacific Union College website, as well as many side projects. I am still amazed at all it has to offer compared to other products that are not free.
Give it a good look, and take some time to get past the learning curve, but once you do, it will be more than worth it. :)
I've recently been trying to use it in an organization where many non-power users are generating content. It has many interface bugs and odd behaviours, so many simple tasks (e.g. uploading images) often have to be done by a power user (i.e. me).
When you are editing the HTML of page content, whitespace is not preserved. If you format the HTML in the WYSIWYG editor, save your changes, and then come back, the whitespace you added will have been removed. In fact, when you switch the WYSIWYG editor into HTML mode it doesn't show you the exact HTML, and it does some silly things: pressing Enter inserts non-breaking spaces, but it doesn't show them until you save and re-enter HTML mode.
It is a number of little details like this that make it generally frustrating to use and disliked by everyone here.