I'm using Dev-C++ and developing some software for myself in my native language. Every string I type can't contain characters like 'é' or 'ã', and 'ç' doesn't display as 'ç'. But in my native language these are all necessary characters to build words. What do I need to do in this IDE, or in the code, to get these characters onto the screen? Any help, thanks.
When I do Execute -> Compile & Run on:
#include <stdio.h>

int main(int argc, char *argv[]) {
    char *text = "é á ç â\n";
    fprintf(stdout, text);
    return 0;
}
I get:
Note: I intend to continue writing code in this IDE. I use others, but this question is for users of this IDE.
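For reference, a minimal sketch of one common workaround on Windows (which Dev-C++ targets): switch the console output code page to match the encoding the source file is saved in. This assumes the source file is saved as UTF-8 and that the console font has the needed glyphs; SetConsoleOutputCP is a plain Win32 call, not something specific to this IDE.

#include <stdio.h>
#include <windows.h>

int main(void) {
    /* Tell the console to interpret output bytes as UTF-8
       (assumes this source file is saved as UTF-8). */
    SetConsoleOutputCP(CP_UTF8);

    const char *text = "é á ç â\n";
    fprintf(stdout, "%s", text);
    return 0;
}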
I've got an existing DOORS module which happens to have some rich text entries; these entries have some symbols in them such as 'curly' quotes. I'm trying to upgrade a DXL macro which exports a LaTeX source file, and the problem is that these high-numbered symbols are not considered "standard UTF-8" by TexMaker's import function (and in any case probably won't be processed by XeLaTeX or other converters). I can't simply use the UnicodeString functions in DXL because those break the rest of the rich text, and apparently the character identifier charOf(decimal_number_code) only works over the basic set of characters, i.e. below some numeric code value. For example, charOf(8217) should create a right curly single quote, but when I tried code along the lines of
if (charOf(8217) == one_char)
I never get a match. I did copy the curly quote from the DOORS module and verified via an online Unicode analyzer that it is definitely Unicode decimal value 8217.
So, what am I missing here? I just want to be able to detect any symbol character, identify it correctly, and then replace it with, e.g., \textquoteright in the output stream.
My overall setup works for lower-numbered characters, since this works (c is a single character pulled from a string):
thedeg = charOf(176)
if (thedeg == c)
{
    temp += "$\\degree$"
}
I got some help from DXL coding experts over at the IBM forums.
Quoting the important part (there are some useful code snippets there as well):
Hey, you are right: it seems intOf(char) and charOf(int) both do some
modulo 256 and therefore cut off anything above that. Try:
int i=8217;
char c = addr_(i);
print c;
That then allows comparison of c with any input char.
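As an aside (plain C rather than DXL), the modulo-256 truncation described above is easy to see by narrowing a code point such as 8217 to an 8-bit value, which keeps only the low byte:

#include <stdio.h>

int main(void) {
    int codepoint = 8217;                      /* U+2019, right single quotation mark */
    unsigned char narrowed = (unsigned char)codepoint;
    printf("%d %% 256 = %d\n", codepoint, codepoint % 256);   /* prints 25 */
    printf("narrowed to 8 bits: %d\n", narrowed);             /* also 25  */
    return 0;
}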
Unicode code points range from U+000000 to U+10FFFF. While writing myself a lexer generator in F#, I ran into the following problem:
For the character set definitions, I intend to use a simple tuple of type char * char, expressing a range of characters. Omitting some peripheral details, I also need a range I call All, which is supposed to be the full Unicode range.
Now, it is possible to define a char literal like this: let c = '\u3000'. And for strings, it is also possible to refer to a real 32-bit code point like this: let s = "\U0010FFFF". But the latter does not work for chars, the reason being that a char in .NET is a 16-bit Unicode character and that code point would yield two 16-bit words, not one.
So the question is: is there a way I can stick to my char * char tuple and get my All defined somehow, or do I need to change it to uint32 * uint32 and define all my character ranges as 32-bit values? And if I have to change, is there a type I should prefer over uint32 that I have not discovered yet?
Thanks in advance.
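For context, here is a small C sketch (C rather than F#, to match the other examples in this collection) of why a single 16-bit char cannot hold U+10FFFF: code points above U+FFFF are represented in UTF-16 as a surrogate pair, i.e. two 16-bit units.

#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint32_t cp = 0x10FFFF;              /* highest valid Unicode code point */
    uint32_t v  = cp - 0x10000;          /* 20 bits to distribute            */
    uint16_t hi = 0xD800 + (v >> 10);    /* high (lead) surrogate            */
    uint16_t lo = 0xDC00 + (v & 0x3FF);  /* low (trail) surrogate            */
    printf("U+%06X -> 0x%04X 0x%04X (two 16-bit units)\n",
           (unsigned)cp, (unsigned)hi, (unsigned)lo);
    return 0;
}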
I have a problem using Eclipse and CDT. The problem started with some beginner's code using printf() to ask for input and scanf() to store the input, but the console will not display the printf() output until after it has been given the scanf() input.
I found many threads linked to this problem and understand it is a bug in Eclipse: the buffers are not being flushed properly, even when using \n.
The solution seems to be either to use fflush(stdout) after each printf() or to add setvbuf(stdout, NULL, _IONBF, 0) at the beginning of the main() function.
I added the setvbuf(stdout, NULL, _IONBF, 0) and also tried fflush(stdout), but Eclipse says stdout cannot be resolved.
Can anyone please tell me why, and how to fix this?
Thank you.
Mick Caulton
This is my code:
#include <stdio.h>

int main() {
    //setvbuf(stdout, NULL, _IONBF, 0);
    char letter;
    int num1, num2;
    printf("Enter any one keyboard character:\n");
    //fflush(stdout);
    scanf("%c", &letter);
    printf("Enter 2 integers separated by a space: \n");
    //fflush(stdout);
    scanf("%d %d", &num1, &num2);
    printf("Numbers input: %d and %d\n", num1, num2);
    printf("Letter input: %c", letter);
    printf(" stored at %p \n", &letter);
    return 0;
}
If you're developing on Windows, keep in mind that the End-Of-Line indicator is \r\n (you're using just \n), and the Eclipse console window famously only does whole lines.
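Putting the above together, a minimal sketch of the workaround mentioned in the question: disable stdout buffering once at the top of main() (or, alternatively, fflush(stdout) after each prompt). Whether the prompt appears immediately still depends on the Eclipse console behaviour described above.

#include <stdio.h>

int main(void) {
    /* Make stdout unbuffered so prompts appear before scanf() blocks. */
    setvbuf(stdout, NULL, _IONBF, 0);

    char letter;
    printf("Enter any one keyboard character:\n");
    /* Alternative: call fflush(stdout); here instead of setvbuf above. */
    scanf("%c", &letter);
    printf("Letter input: %c\n", letter);
    return 0;
}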
When I run [NSString UTF8String] on certain Unicode characters, the resulting const char* representation is mangled both in NSLog and on the device/simulator display. The NSString itself displays fine, but I need to convert the NSString to a C string to use it in CGContextShowTextAtPoint.
It's very easy to reproduce (see the code below), but I've searched for similar questions without any luck. It must be something basic I'm missing.
const char *cStr = [@"章" UTF8String];
NSLog(@"%s", cStr);
Thanks!
CGContextShowTextAtPoint is only for ASCII chars.
Check this SO question for answers.
When using the string format specifier (i.e. %s) you cannot be guaranteed that the characters of a C string will print correctly if they are not ASCII. A complex character like the one you've defined can be expressed in UTF-8 as a multi-byte sequence, but %s interprets the characters of the string you provide to the formatting using the system encoding (in this case, in NSLog). See Apple's documentation:
https://developer.apple.com/library/mac/documentation/cocoa/Conceptual/Strings/Articles/formatSpecifiers.html
%s
Null-terminated array of 8-bit unsigned characters. %s interprets its input in the system encoding rather than, for example, UTF-8.
Going on to your CGContextShowTextAtPoint not working: that API supports only the MacRoman character set, which is not the entire Unicode character set.
You'll need to look into another API for showing Unicode characters. Core Text is probably where you'll want to start.
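For what it's worth, a rough sketch of drawing the same character with Core Text's C API instead of CGContextShowTextAtPoint; drawGlyph is a made-up helper name, and ctx is assumed to be a valid CGContextRef you already have:

#include <CoreText/CoreText.h>

/* Sketch: draw "章" into an existing CGContextRef using Core Text. */
static void drawGlyph(CGContextRef ctx, CGFloat x, CGFloat y) {
    CTFontRef font = CTFontCreateWithName(CFSTR("Helvetica"), 24.0, NULL);
    CFStringRef keys[] = { kCTFontAttributeName };
    CFTypeRef values[] = { font };
    CFDictionaryRef attrs = CFDictionaryCreate(kCFAllocatorDefault,
        (const void **)keys, (const void **)values, 1,
        &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks);
    CFAttributedStringRef attrStr = CFAttributedStringCreate(kCFAllocatorDefault,
        CFSTR("章"), attrs);
    CTLineRef line = CTLineCreateWithAttributedString(attrStr);

    CGContextSetTextPosition(ctx, x, y);
    CTLineDraw(line, ctx);

    CFRelease(line);
    CFRelease(attrStr);
    CFRelease(attrs);
    CFRelease(font);
}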
I've never noticed this issue before, but some quick experimentation shows that using printf instead of NSLog will cause the correct Unicode character to show up.
Try:
printf("%s", cStr);
This gives me the desired output ("章") both in the Xcode console and in Terminal. As nob1984 stated in his answer, the interpretation of the character data is up to the callee.
My end goal here is to write some non-Latin text output to the console in Windows via a C++ program.
cmd.exe gets me nowhere, so I got the latest, shiny version of PowerShell (which supports Unicode). I've verified that I can
type in Unicode characters and
see Unicode console output from Windows commands (like "dir")
For example, I have this file, "가.txt" (가 is the first letter in the Korean alphabet), and I can get output like this:
PS P:\reference\unicode> dir .\가.txt

    Directory: P:\reference\unicode

Mode                LastWriteTime     Length Name
----                -------------     ------ ----
-a---         1/12/2010   8:54 AM          0 가.txt
So far so good. But writing to console using a C++ program doesn't work.
#include <stdio.h>
#include <wchar.h>

int main()
{
    wchar_t text[] = {0xAC00, 0}; // 가 has code point U+AC00 in Unicode
    wprintf(L"%s", text);         // this prints a single question mark: "?"
}
I don't know what I'm missing. The fact that I can type in and see 가 on the console seems to indicate that I have the three needed pieces (Unicode support, font, and glyph), but I must be mistaken.
I've also tried "chcp" without any luck. Am I doing something wrong in my C++ program?
Thanks!
From the printf docs:
wprintf and printf behave identically if the stream is opened in ANSI mode.
Check out this blog post. It has this nice short little listing:
#include <fcntl.h>
#include <io.h>
#include <stdio.h>
int main(void) {
    _setmode(_fileno(stdout), _O_U16TEXT);
    wprintf(L"\x043a\x043e\x0448\x043a\x0430 \x65e5\x672c\x56fd\n");
    return 0;
}