How does "Format editor" for JFormatedTextField in NetBeans work? - netbeans

I need to edit the format for a JFormattedTextField in a Java program. NetBeans is "helping" me with something called the Format editor, but I have no clue how the pattern works.
For #,##0.### it returns 1,234.567. However, I want to change the thousands separator to a space and the decimal separator to a comma.
I would guess # ##0,### would be the right pattern, but no, that returns "Malformed pattern # ##0,###".
How can I change the thousands separator to a space and the decimal separator to a comma? Is that even possible using the Format editor?

It sounds like you're looking for the reference for the java.text.NumberFormat class.
The DecimalFormatSymbols.getGroupingSeparator method looks like it is probably relevant to what you're doing. You will have to choose an appropriate Locale to get the formatting characters you want.
You may need to do something like:
NumberFormat nf = NumberFormat.getInstance(Locale.FRENCH);
with an appropriate parameter to getInstance() for your country and language.
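If you end up configuring the field in code rather than through the Format editor, another option (a minimal sketch, not tied to the NetBeans GUI builder) is to keep the #,##0.### pattern and swap the separator characters with DecimalFormatSymbols. In DecimalFormat patterns, , and . are placeholders for "grouping separator" and "decimal separator"; the actual characters come from the symbols, which is why # ##0,### is rejected as malformed:

import java.text.DecimalFormat;
import java.text.DecimalFormatSymbols;
import javax.swing.JFormattedTextField;
import javax.swing.text.NumberFormatter;

public class SpaceGroupingDemo {
    public static void main(String[] args) {
        // Space as the grouping separator, comma as the decimal separator.
        DecimalFormatSymbols symbols = new DecimalFormatSymbols();
        symbols.setGroupingSeparator(' ');
        symbols.setDecimalSeparator(',');

        // The pattern stays in the neutral #,##0.### form; the symbols decide
        // which characters are actually printed.
        DecimalFormat format = new DecimalFormat("#,##0.###", symbols);
        System.out.println(format.format(1234.567)); // 1 234,567

        // Installing it on the text field in code instead of via the editor:
        JFormattedTextField field = new JFormattedTextField(new NumberFormatter(format));
        field.setValue(1234.567);
    }
}

A locale such as Locale.FRENCH gives similar output, though be aware that some locales use a non-breaking space rather than a plain ASCII space as the grouping separator.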

Related

MS Access Convert from Unicode when Reading from Text File

So, I have an Access database where I import data from a text file. The file is semicolon-delimited. Occasionally (and it will become more frequent) I receive a file from one of our affiliates in Russia. The file has Unicode (I think) characters like "Ìèðîøíèêîâ" instead of "Мирошников". Ultimately, I'd like to translate those into English upon import, but for now, I'll accept the Russian characters.
How should I go about doing this? Currently, I'm reading each line of the file, using the SPLIT function to separate each field on the ";" separator into an array, and sticking each array element into a table. Would changing the system keyboard layout to Russian prior to this work, or is it more complicated than that?
Does any of this make sense, or should I just bag it and go grab a beer (or some Vodka)?
Thanks!
You should be able to create an "Import Specification" that will tell Access how to convert the character data. Follow the procedure here...
Importing a text with separators using VBA
...and choose the appropriate character set from the "Code Page" combo box.
If you need to perform the imports from VBA code then you can save the specification (using the "Save As..." button) and then re-use that specification in a DoCmd.TransferText statement.
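For what it's worth, "Ìèðîøíèêîâ" is exactly what "Мирошников" looks like when Windows-1251 (Cyrillic) bytes are decoded as Windows-1252/Latin-1, which is why choosing the Cyrillic code page in the Import Specification sorts it out. A quick illustration of the round trip, in Java rather than VBA (purely a sketch of the encoding mechanics, assuming the source file really is Windows-1251):

import java.nio.charset.Charset;

public class MojibakeDemo {
    public static void main(String[] args) {
        String garbled = "Ìèðîøíèêîâ"; // what shows up after the wrong decode

        // Turn the text back into the single-byte values it came from,
        // then decode those same bytes as Windows-1251 (Cyrillic).
        byte[] raw = garbled.getBytes(Charset.forName("windows-1252"));
        String fixed = new String(raw, Charset.forName("windows-1251"));

        System.out.println(fixed); // Мирошников
    }
}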

Ogg metadata - Vorbis Comment end

I want to implement a class to read vorbis comments. I know that a field will start with a field name, followed by an equal sign and the value. But how does it end? Documentation makes me think that a semicolon will end the field but I checked an ogg file with a hex editor and I cannot see any.
This is how I think it should look in a file:
TITLE=MY SUPER TITLE;
The field name is TITLE, followed by the equals sign, then the value MY SUPER TITLE, and finally a semicolon to end the field.
But inside my file, the fields instead look like this:
TITLE=MY SUPER TITLE....
It's almost like the above, but there is no semicolon. The dots are characters that cannot be displayed. I thought the dots might represent a value that says "this is the end of the field!!", but they are almost always different. I noticed that there are always exactly four dots. The first dot always has a different value, and the other three usually have a value of 0, but not always...
My question now, how does a field end? How do I read this comment?
Also, yeah I know that there are libraries and that I should use them instead of reinventing the wheel over and over again. I will use libraries later but first I want to know how to do it myself. Educational purpose only.
Each field is preceded by a little-endian 32-bit integer that indicates the number of bytes to read; you then convert those bytes to a string via UTF-8. The four "dots" you're seeing after each value are exactly that length prefix for the next field; there is no terminator character.
See NVorbis' implementation (LoadComments(...)) for details.
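For reference, a rough sketch of that layout in Java (it assumes the stream is already positioned at the start of the comment header, i.e. the Ogg page framing and the "\x03vorbis" packet preamble have already been skipped, and error handling is minimal):

import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;

public class VorbisCommentReader {

    // Reads a little-endian unsigned 32-bit integer (the "four dots").
    static long readUint32LE(InputStream in) throws IOException {
        int b0 = in.read(), b1 = in.read(), b2 = in.read(), b3 = in.read();
        if ((b0 | b1 | b2 | b3) < 0) throw new IOException("Unexpected end of stream");
        return (b0 & 0xFFL) | ((b1 & 0xFFL) << 8) | ((b2 & 0xFFL) << 16) | ((b3 & 0xFFL) << 24);
    }

    // Reads 'length' bytes and decodes them as UTF-8.
    static String readUtf8(InputStream in, long length) throws IOException {
        byte[] bytes = in.readNBytes((int) length);
        if (bytes.length != length) throw new IOException("Unexpected end of stream");
        return new String(bytes, StandardCharsets.UTF_8);
    }

    static List<String> readComments(InputStream in) throws IOException {
        String vendor = readUtf8(in, readUint32LE(in)); // e.g. the encoder name

        long count = readUint32LE(in);                  // number of NAME=value fields
        List<String> comments = new ArrayList<>();
        for (long i = 0; i < count; i++) {
            long fieldLength = readUint32LE(in);        // length prefix, no terminator
            comments.add(readUtf8(in, fieldLength));    // e.g. "TITLE=MY SUPER TITLE"
        }
        return comments;
    }
}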

How to use unicode characters in Eclipse File Search?

We have some XML files, and one of them contains an invalid character; the program says neither which file it is, nor at which line number or character offset. It would be a few seconds' work to fix the problem if I could just search for exactly that character, but I cannot find how to express a Unicode character in the file search (or at least I assume so, since the search returns nothing).
Neither 0x1e nor \u001e seems to match anything.
[EDIT] I mean, I can still change the code and eventually find which file it is by catching the Exception, then use some kind of script/tool to find where exactly the character is, but I do believe it should be possible to search for a Unicode character in Eclipse, and that is what I am asking in this question.
It may be a problem with the character encoding.
As you're going to need to perform a global, workspace-wide search to find the offending character, you'll probably need to set the global text file encoding:
Preferences -> Workspace -> Text file encoding
This option may be under the 'General' section in Eclipse, depending on your setup and installed plugins etc.
Ensure that the encoding is set to UTF-8.
You will also need to escape the Unicode character sequence, like so:
\u2665
(which I see you have tried). Note that these \uNNNN escapes are only interpreted when the "Regular expression" option is checked in the File Search dialog; otherwise they are treated as literal text.
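If the search still turns up nothing, the throwaway script route mentioned in the [EDIT] above is quick enough. A rough sketch in Java (the root directory and target character below are placeholders to adapt):

import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.stream.Stream;

public class FindControlChar {
    public static void main(String[] args) throws IOException {
        char target = '\u001E';                   // the character to hunt for
        Path root = Path.of("path/to/workspace"); // placeholder: your project root

        try (Stream<Path> paths = Files.walk(root)) {
            paths.filter(p -> p.toString().endsWith(".xml")).forEach(p -> {
                try {
                    String text = Files.readString(p, StandardCharsets.UTF_8);
                    for (int i = 0; i < text.length(); i++) {
                        if (text.charAt(i) == target) {
                            System.out.println(p + " : offset " + i);
                        }
                    }
                } catch (IOException e) {
                    System.err.println("Could not read " + p + ": " + e.getMessage());
                }
            });
        }
    }
}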

Is Localizable.strings required for the root language of an app?

As we're enabling our (English) application to be localized, we've replaced all in-line strings with NSLocalizedString() calls. Since all of our English strings are there in-line with the code, e.g. NSLocalizedString(@"OK", @"OK button in a message box"), is there any reason we need the English version of Localizable.strings? When we try removing the strings from the English Localizable.strings, the program seems to work fine. Just wanted to double check if there was some side effect to not having that around. Thanks, alex
One of the main points of using the NSLocalizedString() macro is so that your programming code can be parsed with the genstrings command-line tool to generate the corresponding Localizable.strings file(s) (see Resource Programming Guide: About the String-Loading Macros and Using the Genstrings Tool to Create Strings Files).
That Localizable.strings file then serves as a starting point for your translators, to use to translate to another language. Without that file to work with, your translators would basically need access to your source code in order to see all the strings you want to use (which kind of defeats the purpose).
Yes, your English version works fine right now, because if a localized version of the string you ask for in code (for example, NSLocalizedString(@"OK", @"")) cannot be found in a .strings file, it simply uses the @"OK" string that you passed in.
Another reason why you should likely keep the English Localizable.strings is that you should generally try to avoid using non-ASCII characters in your source code, but should use the full range of available characters in your actual user interface. For example, you may not want to put the following characters in your code, but would want to use them in your user interface:
… (horizontal ellipsis) (U+2026)
“ (left double quotation mark) (U+201C)
” (right double quotation mark) (U+201D)
‘ (left single quotation mark) (U+2018)
’ (right single quotation mark) (U+2019)
So in code, you'd do something like this:
NSLocalizedString(#"Add Bookmark...", #"")
and then in your .strings file (which is UTF16, so this is fine):
"Add Bookmark..." = "Add Bookmark…";
It's not recommended to use the words themselves as keys: if you want to change that "OK" to something else and you have any other Localizable.strings files, you will have to edit every single one of them to update the "OK" key.
Surely your translators will want a Localizable.strings file to work from, and will want it to contain ALL the strings.
(It is true that the system will fall back to the key, but relying on that seems like bad practice. And you'll find it more reliable to use proper punctuation in a Unicode file, which source code seldom is.)

Apostrophe issue in RTF

I have a function within a custom CRM web application (old VB.Net, circa 2003) that takes a set of fields from a database and merges them with placeholders in a set of RTF-based template documents. These generate merged letters and documentation. The code essentially loops through each line of the RTF template file and replaces any instances of the placeholder values with text from a database record. The issue I'm having is that users have pasted a certain type of apostrophe into the web app (and therefore into the database) that is not rendering correctly in the resulting RTF file. It is rendering like this: ’.
I need a way to spot this invalid apostrophe in the code and replace it with a valid one. Unfortunately when I paste the invalid apostrophe into the Visual Studio editor it gets converted into the correct one. So I need another way to express this invalid apostrophe's value. Unfortunately I do not know a great deal about unicode and other encodings so I'm calling out for help with this.
Any ideas?
If you really just want to figure out what the character is, you might want to try pasting it into a text editor like UltraEdit. It has a hex mode you can flip to in order to see the actual underlying bytes.
In order to do the replace once you've figured out the character, you'd do something like this in VB (the curly apostrophe is U+2019, i.e. ChrW(&H2019)):
text = text.Replace(ChrW(&H2019), "'")
Note that you might not be able to figure it out easily using the text editor, because it might also get mangled by the paste from the clipboard. You might instead want to print some debug output of the character values from code; you can use the AscW function to do that.
I can't help but think that it may actually simply be a case of specifying the correct encoding to use when you write out the stream though. Assuming you're using a StreamWriter you can specify it on the constructor. I'm guessing you actually want ASCII given your requirement.
oWriter = New System.IO.StreamWriter(path, False, System.Text.Encoding.ASCII)
It looks like you probably want to encode characters outside the 8-bit range (> 255). You can do that using the \uNNNN escape, according to the Wikipedia article on RTF.
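To illustrate that escaping rule (a sketch in Java rather than the application's VB.Net, and only of the \uN convention itself: N is the signed 16-bit decimal value of the character, followed by a fallback character for readers that don't understand \u):

public class RtfEscape {

    // Escapes non-ASCII characters as RTF \uN sequences. N is a signed 16-bit
    // decimal value and is followed by a '?' fallback for old readers.
    // (A real writer would also escape \, { and }, which this sketch skips.)
    static String escapeForRtf(String text) {
        StringBuilder out = new StringBuilder();
        for (int i = 0; i < text.length(); i++) {
            char c = text.charAt(i);
            if (c < 128) {
                out.append(c);
            } else {
                out.append("\\u").append((short) c).append('?');
            }
        }
        return out.toString();
    }

    public static void main(String[] args) {
        // The curly apostrophe (U+2019) that was causing the trouble:
        System.out.println(escapeForRtf("John\u2019s letter")); // John\u8217?s letter
    }
}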