Does anyone have any experience with using Unicode in Fortran? How does one pass Cyrillic characters, and open files with Cyrillic characters in their names?
Details:
I have a Fortran executable that needs to read parameters from a control file. Some of these parameters are in Cyrillic (e.g., file paths).
The executable calls a C++ DLL. Some of the parameters to these calls need to be in Cyrillic.
I am using the latest Intel Fortran.
I'm looking for any sources of information, or small examples showing how to do this.
As already indicated, Fortran 2003 has a Unicode character type. Exactly what features will work with that character type ... I don't know ... filenames? I don't see mention of Unicode in the release notes for the Intel Fortran compiler. In 2006 Intel indicated that this feature was a low priority (http://software.intel.com/en-us/forums/showthread.php?t=51751). You might ask on the Intel forums ... probably an Intel representative will answer about the capabilities of the Intel compiler. If the Intel Fortran compiler can't handle Unicode yet, you might need to do this I/O in another language.
While I've not done anything similar, and so have no personal experience in the matter, simply googling "fortran unicode" turns up a few interesting results.
Apparently, gfortran has some moderate support for it (a rough sketch follows below). Also, Tobias Burnus's answer in this thread sheds some more light on the matter: it seems that there is progress in that field, in F2003 and the upcoming standard, but for now it isn't really one of the priorities.
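Since the example from that link isn't reproduced here, this is a minimal sketch of the F2003 ISO 10646 character kind along the lines of gfortran's documentation (assumptions: the source file is saved as UTF-8 and the compiler actually provides the kind; the file name is deliberately ASCII, since Cyrillic file names are exactly the part in question):

program ucs4_demo
  implicit none
  ! F2003: request the ISO 10646 (UCS-4) character kind; returns -1 if unsupported
  integer, parameter :: ucs4 = selected_char_kind('ISO_10646')
  character(kind=ucs4, len=6) :: greeting
  greeting = ucs4_'Привет'   ! kind-prefixed Cyrillic literal in UTF-8 source
  ! ENCODING= is also F2003; formatted I/O on this unit is then done as UTF-8
  open (10, file='privet.txt', encoding='UTF-8')
  write (10, '(a)') greeting
  close (10)
end program ucs4_demo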
If you want to open Unicode files, this will not help. However, by default, the Intel Fortran compiler cannot even open files inside a folder whose name contains Unicode characters. The documentation doesn't make it clear, but the /fpscomp:general compiler flag will allow you to work within such folders.
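If you go that route, the compile line would be something like ifort /fpscomp:general main.f90 (main.f90 is a made-up name, and the flag is the one described above; I haven't verified which compiler versions honor it, so check the current Intel documentation).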
Related
How do you open a file whose name contains UTF-8 characters?
For example:
(open "~/a/你好.txt")
gives this error:
The filesystem does not accept filenames with extended characters: "~/a/你好.txt"
I'm using ECL 16.1.3, installed via emerge on Gentoo.
Meanwhile, SBCL can open the file.
I'm pretty sure ECL simply does not support general Unicode filenames on Unix or Linux, however they get encoded in the underlying filesystem (I also don't know how that is handled on *nix nowadays, although I guess there must be a standard by now).
The specific error you're seeing originates here, in pathname.d. If you then look in unixfsys.d you'll see that ECL_NAMESTRING_FORCE_BASE_STRING is one of the flags passed to ecl_namestring all over the place, and this isn't conditionalized by anything.
So at the very least you would need to compile ECL from scratch, and more probably it simply does not support general Unicode filenames at all.
I'm interested in how characters are encoded in the computer.
When I open my xxx.c with Visual Studio Code, how does VS Code detect the encoding of my file and interpret this sequence of 0s and 1s? Furthermore, how does VS Code (or the operating system) display the characters on the screen according to my file's bit sequence and its character encoding?
Thank you!
I also use Chinese in my projects. Sometimes the file encodings really drive me crazy. For example, a correct UTF-8 file created in editor A was mangled by text editor B, which interpreted it as a GBK file, and editor A could never get it back correct.
I've searched a lot, but most answers seem too abstract or irrelevant. I want to figure out how the software and the computer system (or operating system) cooperate to get this simple but important job done!
First things first, about "can never get it back": always use source code control.
"How do the software and the computer system (or operating system) cooperate to get this simple but important job done?": they don't, and that's the problem!
Short history: many decades ago, people used small character sets. The idea was that a system would always use the same one. Simple. Every time a text file was transferred between systems, it would be immediately transcoded to the local character encoding. Then came the globalization of file exchanges, and systems needed to hold text files in different encodings. There was no general way of recording which encoding a file used. In 1991 came the huge Unicode character set. Languages (VB4, Java), operating-system APIs (Win32), file systems (NTFS), … began adopting it. However, its encodings (UTF-8, UTF-16) are just yet more possibilities for which encoding a text file uses. Many programs that read text files either rely on the old system default encoding or guess ("detect").
In the programming world, some languages require source files to use a specific encoding (say, UTF-8); with others, the tools default to one. In most cases, the toolset provided with a C or C++ implementation will follow a consistent set of rules. If you also use an IDE or another form of project system, you can set the encoding for the entire project and, in some cases, for specific files.
So, the only solution is to only use tools that work for you and to properly configure them. If it hurts, stop doing it.
Aside: on the topic of programming and default character encodings, be careful not to get tricked by various language libraries' use of the system default character encoding, unless that is exactly what's needed. Otherwise, you are giving your users the same problem that you are encountering. (In Java, just avoid it by passing explicit charset arguments. In the C and C++ standard libraries, encoding is bundled into locales. But note that many systems initialize a program to use the default character encoding.)
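To make "explicit arguments" concrete, here is a hedged Java sketch (Files.readString needs Java 11 or later; the file name is just the one from the question):

import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

public class ExplicitCharset {
    public static void main(String[] args) throws Exception {
        // Decode with an explicit charset instead of the platform default,
        // so the same bytes come out the same on every machine.
        // E.g. the UTF-8 bytes E4 BD A0 E5 A5 BD decode to "你好";
        // fed through a GBK decoder instead, they come out as "浣犲ソ".
        String text = Files.readString(Path.of("xxx.c"), StandardCharsets.UTF_8);
        System.out.println(text);
    }
}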
I'm looking into installing a disassembler (or decompiler) on my Linux Mint 17.3 OS and I wanted to know what the difference is between a disassembler and a decompiler. I have a rough idea of what they are (the names are fairly self-explanatory), but they are still a bit confusing.
I've read that a disassembler turns a program into assembly language, which I don't know, so it seems kind of useless to me. I've also read that a decompiler turns a 'binary file' into its source code. What exactly is a binary file?
Apparently, decompilers cannot decompile to C, only Python and other similar languages. So how can I turn a program into its original C source code?
A disassembler is a pretty straightforward application that translates machine code into assembly-language statements. This is the reverse of what an assembler does, and it is straightforward because there is a strict one-to-one relationship between machine code and assembly. A disassembler targets a specific CPU; the original assembler that was used to create the executable is of only minor relevance.
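To illustrate that one-to-one mapping, a couple of hand-checked x86 examples (32-bit mode; encodings per the Intel manual):

31 C0    xor eax, eax    ; these two bytes always decode to exactly this instruction
89 D8    mov eax, ebx    ; and these two to this one, on any x86 disassembler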
A decompiler aims to recreate, from machine code, the compiled high-level-language program in its original form, thus attempting the reverse of what a C or Forth compiler does (two popular languages for which decompilers exist). Because there are so many high-level languages, and thus so many ways original high-level constructs can be expressed in machine code (a lot of different strategies even for the same language and construct, even within the same compiler, and different strategies depending on compiler mode and situation), this operation is much more complex and depends heavily on the original compiler (and maybe even on the command line that was used, the optimization level chosen, and the compiler version).
Even if all of that matches, most of a decompiler's work is educated guessing, and it will most probably never reach the point where it can reconstruct the original program in its source-code form 100%. Rather, it will end up with a version of source code that could have been the original program.
I'm looking for recommended virtual machines that can run on an 8-bit microprocessor AND support dynamic languages. I'd like a VM solution because I perceive benefits in terms of code density, portability, and the ability to have a smaller interpreter, leaving more room for larger programs.
My goal is to run a complete LOGO interpreter, following "LOGO for the Apple II" syntax, on something like a 6502 microprocessor.
I've seen references to PyMite, Java "micro edition", and of course now the UCSD p-System sources from the 1970s are available.
Suggestions are welcome.
(Note: I've already +1'ed the FORTH answer.)
Since you mention the 6502: Steve Wozniak (!) wrote an article for Byte magazine in the late 1970s describing SWEET16, a partial VM for the 6502 that provided 16-bit integer arithmetic and could be EASILY interspersed with 6502 assembly language. It was the basis for the original Integer BASIC, which (as I recall) was later replaced by the floating-point Applesoft BASIC.
FORTH implementation for 6502.
You might want to check out the PICOBIT system, a Scheme implementation that works on very, very small systems such as the PIC18. It has since been ported to ARM, and could almost certainly be ported to the 6502 or other processors.
Forgive me if this is a dumb question, but I'm in an assembly class that was mostly taught using an emulated CPU intended to teach the concepts of assembly code. We haven't even written an Intel program, so I'm trying to adjust. With our emulated CPU, we were able to generate a symbol-table file that gave the byte equivalents of instructions:
http://imgur.com/tw5S8.png
Would I be able to do such a thing with Intel x86 instructions?
Try IDA. It has an option to show the binary values of opcodes.
EDIT: Well... it's a disassembler. Try opening a binary file, and set the number of opcode bytes to show (in Options/General) to something that is not zero.
If you are looking for an IDE that shows you the opcodes for the instructions you've used in real time, then I don't think you'll find one, for lack of a "market". Can you explain why you need it? Do you want to know just their lengths, or do you want to learn them? There is a simple pattern to the lengths, so by disassembling many binaries you'll catch it. If it's the opcodes you want... well, there are lots of them, almost no rules, and practically no use in memorizing them.
I see... then you have to generate a listing file. Your assembler should have an option for that (for NASM it's -l listfile). Just put any instruction(s) in your .asm file and generate a listing for it. It should contain the binary encoding of each instruction, as in the sketch below.
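For example, with a hypothetical test.asm assembled via nasm -l test.lst test.asm (file names made up; the columns below are roughly what NASM emits, details vary by version):

bits 32
mov eax, ebx
add ah, al

test.lst would then contain something like:

     1                          bits 32
     2 00000000 89D8            mov eax, ebx
     3 00000002 00C4            add ah, al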
First, get the Intel Instruction Set Reference or, better, this link: http://siyobik.info/index.php?module=x86 . There you'll find that most opcodes have several encodings. In your particular case, bit 1 of the opcode specifies the direction, and since both operands are registers, you can toggle the direction bit and swap the register codes, and the result will be the same. You usually have this freedom on most register-to-register arithmetic operations. To check this, try disassembling this source file with IDA (leading zeros added to the hex literals so the assembler doesn't read them as symbols):
db 02h, 0E0h ; opcode 02 (ADD r8, r/m8), ModRM E0: decodes as add ah, al
db 00h, 0C4h ; opcode 00 (ADD r/m8, r8), ModRM C4: also decodes as add ah, al
There is a demo program shipped with fasm.dll which has an editor and hex-viewer: