When (dynamically) compiling CUDA code to PTX, you can pass the --generate-line-info command-line parameter, and get a bunch of .loc entries in your PTX, which relate PTX locations to source file locations.
Is something like this also available when compiling OpenCL code into PTX (clBuildProgram) on NVIDIA platforms?
Try -nv-line-info. I can't find documentation for it, but the compiler accepts it and generates exactly what you are looking for.
The option is notably absent from NVIDIA's official OpenCL compiler options extension.
NOTE: Your mileage may vary. A few years ago, when I fiddled around with this, the mapping accuracy was not great compared to CUDA compiled with nvcc. Maybe they've improved things since then, though.
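For reference, a minimal sketch of how the flag would be passed from host code; error handling is elided, a single device is assumed, and the function name is made up:

    /* Sketch: build with NVIDIA's undocumented -nv-line-info flag, then
       dump the "binary", which on NVIDIA OpenCL platforms is PTX text. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <CL/cl.h>

    static void build_and_dump_ptx(cl_program program, cl_device_id device)
    {
        clBuildProgram(program, 1, &device, "-nv-line-info", NULL, NULL);

        size_t size = 0;
        clGetProgramInfo(program, CL_PROGRAM_BINARY_SIZES,
                         sizeof size, &size, NULL);

        unsigned char *ptx = malloc(size);
        clGetProgramInfo(program, CL_PROGRAM_BINARIES,
                         sizeof ptx, &ptx, NULL);

        fwrite(ptx, 1, size, stdout);  /* should now contain .loc entries */
        free(ptx);
    }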
I work on computational music. I have found the ps13 pitch spelling algorithm, implemented in Lisp in 2003, specifically in "Digitool MCL 4.3". I would like to run this code, preferably on a Linux x86 machine, to compare its results with those of other similar programs.
I am new to Lisp, but so far my research led me to think that Digitool MCL is no longer available. I thought of two ways which may help me:
a virtual environment (Docker or similar) which would emulate a machine from 2003…
a code translation tool which would transform the 2003 source code into something executable today
I have not succeeded in finding either of these two options, nor in running the code directly with SBCL (though, as a newbie, I may have missed a small modification that would make it run easily).
Can someone help me?
Summary
This code is very close to being portable CL: you won't need something emulating an antique Mac to run it. I ran it on three implementations (SBCL, LispWorks, CCL) within a few minutes. However if you're not a Lisp person (and don't want to become one) it will be somewhat more fiddly to do that.
However, I can't just send you a fixed version, both because this isn't the right forum for that, and because we'd need the author's permission to do so. I have asked him whether he would be interested in a portabilised version, and if he is, I will send him one in due course. You could also get in touch and ask to be notified.
(Meta-summary: while I think the question is fine, any reasonable answer probably doesn't fit on SO.)
Details
One initial problem with this code is that the file uses old Mac line-end conventions (I think; it's certainly not Unix): unless the Lisp you're using is smart enough to spot this (some are; SBCL seems not to be, although I am sure there are options to tell it), you'll need to convert the file first.
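Classic Mac line ends are bare carriage returns, so on Linux something like this should do (file names here are just placeholders):

    tr '\r' '\n' < ps13.lisp > ps13-unix.lisp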
Given that, the code that implements this algorithm is very, very close to being portable Common Lisp. It has four dependencies on non-standard things:
two global variables, *save-local-symbols* and *verbose-eval-selection*;
two functions: choose-file-dialog and choose-directory-dialog.
The global variables can probably be safely commented out, as I think they are just compiler controls. The functions have fairly obvious specifications: they're clearly meant to pop up file / directory choosers.
However, you can simply not use the bits of the code that call these functions: you can compile it, get a few compiler warnings about undefined functions, and then it's fine.
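Alternatively, if you want a warning-free compile, you could stub the four things out yourself; a minimal sketch (the stub behaviour here is my own choice, not anything from the original code):

    ;;; The globals become ordinary specials; the dialog functions just
    ;;; signal an error if anything ever reaches them.
    (defvar *save-local-symbols* t)
    (defvar *verbose-eval-selection* nil)

    (defun choose-file-dialog (&rest args)
      (declare (ignore args))
      (error "MCL-specific: pass pathnames directly instead"))

    (defun choose-directory-dialog (&rest args)
      (declare (ignore args))
      (error "MCL-specific: pass pathnames directly instead"))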
In fact it gets better than that: the latter-day descendant of MCL is Clozure CL (CCL), which is free and open source. CCL already has both choose-file-dialog and choose-directory-dialog, and both of the globals exist, although one is no longer exported.
Unfortunately there are then some hidden portability problems to do with assumptions about what pathnames look like as strings: the code is making some assumptions about what things looked like on pre-OSX Macs, I think. This kind of problem is easy, though often a bit fiddly, to fix (in this case I think it would be easy). So, again, the answer is simply not to call the things that are doing a lot of pathname munging:
> (ps13-test-from-file-list (directory "~/Downloads/d/*.opnd"))
[... much output ...]
Total number of errors = 81.
Total number of notes = 41544.
Percentage correct = 99.81%
nil
Note that the above output came from LispWorks, not CCL: CCL works just as well though, as will any CL probably.
SBCL has one additional problem: the CL-USER package in SBCL already uses a package which exports a symbol int, and this code defines a function with that name. So you need to compile the code in some other package; given that, it's fine in SBCL as well.
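A minimal sketch of doing that (the package name is mine):

    ;;; Compile in a fresh package whose only used package is CL, so the
    ;;; code's own INT can't collide with a symbol CL-USER inherits.
    (defpackage :ps13
      (:use :common-lisp))
    (in-package :ps13)
    ;; ... now compile/load the ps13 source here ...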
I've been looking to evaluate the Red programming language - red-lang.org.
While it is nice that you can obtain a working executable easily, I prefer to compile things from source. It is less obvious how to do that for Red.
The instructions ask you to download a Rebol compiler/interpreter which is itself just an executable.
If you do that, it works, but it screams "don't do that" very loudly.
rebol> do/args %red.r "-v 2 %tests/hello.red"
will compile hello world, but how do you bootstrap the Red compiler itself?
1. Assuming you have Rebol, how do you build the 'red' executable?
Aside: are the authors aware that there is a program called 'red' installed on many Linux boxes already (a version of the ancient ed program)?
I thought this might be done by:
rebol> do/args %red.r "-r %environment/console/console.red"
but "console" is not the executable also known as 'red': it doesn't support the same command-line options, such as -c to compile.
2. Assuming the proper way to do this involves bootstrapping from Rebol (rather than from C or something else), how do you build (a suitable) Rebol from source?
I would like to build both Red and Red/System, or any other interesting variants.
This question mentions a YouTube video, but is there something written down somewhere?
This seems like the sort of thing that ought to be near the front of the documentation, to me.
I asked a similar question in their Google Group about a year back.
https://groups.google.com/forum/#!topic/red-lang/zZ3jEeNJ5aI
The short answer is this ...
The current “bootstrap” Red compiler is written in Rebol. Rebol is not compilable. The downloadable Red binaries are not compiled but encapsulated (containing both the compiler and a Rebol executable) using the Rebol Software Development Kit (SDK). The Rebol SDK is a commercially licensed product that is probably not available any longer. (The Rebol SDK used by the Red team is properly licensed.)
There are scripts and instructions on how to build a Red binary at https://github.com/red/red/tree/master/build
That said, anything you can compile with the Red binary, you can compile with the source compiler. The source compiler is just as fast as the “binary” one. (As you know, the source compiler happily runs under free Rebol versions that are still easily available.) In fact, the Red team uses the source compiler, not the “binary” one.
So until the self-hosted Red compiler is available, there are two basic options:
if you want to use the Red binary, get it from the automated builds
if you want the absolute, up-to-the-minute compiler, use it in source form (example below).
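For example, using the same flags already shown in your question (if I remember them correctly, -c compiles rather than interprets; paths assumed):

    rebol> do/args %red.r "-c %tests/hello.red"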
Hope this helps.
Peter
I wanted to ask if anyone has run some C code containing CUDA code in MATLAB?
I have read the documentation on the MathWorks website, but I still can't quite wrap my head around it. I understand that there are two main ways to do this: either executing a CUDA kernel by constructing an object with the function parallel.gpu.CUDAKernel, or by constructing a MEX file out of a .cu file. There are some things, though, that I do not understand about these two methods.
Using the MEX approach, should I use another IDE like Visual Studio to compile the .cu file first, before compiling the MEX file in MATLAB? If so, how can I compile a .cu file without a main() function in it (I always get errors when I try to compile it that way in VS)? Or is it okay to have a main function in the .cu file and pass the pointers to the GPU arrays to it?
For the CUDA kernel approach, should I compile the kernel in VS, and in that case, how?
Both things can work.
If you want flexibility, my suggestion is to write your .cu files alongside .c (or .cpp) files*. Once you have some basic thing working, you should be able to write a MEX wrapper around it in order to grab MATLAB variables and convert them to C/C++, so you can pass them to and from CUDA. This requires a compiler that is compatible with your versions of MATLAB, CUDA and your OS; an example is Visual Studio 2013 on Windows with most versions of MATLAB and CUDA, but please check. Generally this is done by linking nvcc to the mex compiler after setup with some XML files (see an example here, from my toolbox). This approach gives you full flexibility, not only for working with CUDA, but also for working with anything you may want to use together with your kernels, e.g. TensorFlow, Eigen, SQL, ... A minimal sketch of such a wrapper follows.
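To make the wrapper idea concrete (file and kernel names are made up, error checking is elided):

    // times_two.cu -- note: no main(); mexFunction is the entry point
    // MATLAB calls. Takes a double vector in, returns the vector * 2.
    #include "mex.h"
    #include <cuda_runtime.h>

    __global__ void timesTwo(double *x, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) x[i] *= 2.0;
    }

    void mexFunction(int nlhs, mxArray *plhs[], int nrhs, const mxArray *prhs[])
    {
        int n = (int)mxGetNumberOfElements(prhs[0]);
        plhs[0] = mxDuplicateArray(prhs[0]);     // output = copy of input

        double *d_x;
        cudaMalloc(&d_x, n * sizeof(double));
        cudaMemcpy(d_x, mxGetPr(plhs[0]), n * sizeof(double),
                   cudaMemcpyHostToDevice);

        timesTwo<<<(n + 255) / 256, 256>>>(d_x, n);

        cudaMemcpy(mxGetPr(plhs[0]), d_x, n * sizeof(double),
                   cudaMemcpyDeviceToHost);
        cudaFree(d_x);
    }

With a recent MATLAB this should compile directly from the MATLAB prompt with mexcuda times_two.cu and then be callable as y = times_two(x).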
If instead you just want a few operations accelerated by simple means, use the Parallel Computing Toolbox: with gpuArray for standard MATLAB operations, or with parallel.gpu.CUDAKernel for your own kernels. To use the second one you need to compile a PTX file, which seems pretty straightforward. A priori this gives you less flexibility, as it runs just a kernel, and complex GPU programs often need several kernels, data-handling techniques, communication between kernels, etc. However, I personally haven't tried it, so perhaps you can achieve full flexibility that way too. Let me know and I will edit the answer.
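A minimal sketch of that workflow, reusing just the __global__ kernel sketched above in its own .cu file (names and sizes are assumptions):

    % First, outside MATLAB (or via system()):  nvcc -ptx timesTwo.cu
    x = rand(1, 1000);
    k = parallel.gpu.CUDAKernel('timesTwo.ptx', 'timesTwo.cu');
    k.ThreadBlockSize = 256;
    k.GridSize = ceil(numel(x) / 256);
    gx = feval(k, gpuArray(x), numel(x));  % run the kernel on the GPU
    y = gather(gx);                        % copy the result back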
In short, your choice depends on your application/needs.
*You might not need .c or .cpp files with modern versions of MATLAB and mexcuda.
I'm looking into installing a disassembler (or decompiler) on my Linux Mint 17.3 OS and I wanted to know what the difference is between a disassembler and a decompiler. I have a rough idea of what they are (the names are fairly self-explanatory), but they are still a bit confusing.
I've read that a disassembler turns a program into assembly language, which I don't know, so it seems kind of useless to me. I've also read that a decompiler turns a 'binary file' into its source code. What exactly is a binary file?
Apparently, decompilers cannot decompile to C, only Python and other similar languages. So how can I turn a program into its original C source code?
A disassembler is a pretty straightforward application that translates machine code into assembly language statements - this is the reverse of what an assembler does, and it is straightforward because there is a strict one-to-one relationship between machine code and assembly. A disassembler targets a specific CPU; the original assembler that was used to create the executable is only of minor relevance.
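To give a feel for that one-to-one mapping, here is a tiny C function together with one possible x86-64 disassembly of its machine code (Intel syntax, as a tool such as objdump can print it; the exact bytes depend on the compiler and options):

    int add(int a, int b) { return a + b; }

    /* A compiler might emit these two instructions for add():
         8d 04 37   lea eax, [rdi + rsi]   ; eax = a + b
         c3         ret
       Each byte sequence corresponds to exactly one instruction, which
       is why disassembly is mechanical. */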
A decompiler aims at recreating a compiled high-level-language program from machine code in its original form - thus attempting the reverse operation of, say, a C or Forth compiler (popular languages for which decompilers exist). Because there are so many high-level languages, and thus so many ways in which original high-level constructs can be expressed in machine code (many different strategies even for the same language and construct, even within the same compiler, and different strategies depending on compiler mode and situation), this operation is much more complex and very dependent on the original compiler (and maybe even on the command line that was used, the chosen optimization level, and the compiler version).
Even if all of that fits, most of the work of a decompiler is educated guessing, and it will most probably never reach the point where it can reconstruct the original program in its source-code form 100% - it will rather end up with a version of source code that could have been the original program.
I want to use Fuzzy Logic Toolbox in C#. To do this, I created a .NET library using deploytool, but it does not include the file fuzzy.m, which I need for my program to work. The log mccExcludedFiles.log contains the following information:
This file contains the list of various toolbox functions that are not
included in the CTF file. An error will be thrown if any of these functions
are called at run-time. Some of these functions may be from toolboxes
that you are not using in your application. The reason for this is that
these toolboxes have overloaded some methods that are called by your code.
If you know which toolboxes are being used by your code, you can use the -p
flag with the -N flag to list these toolboxes explicitly. This will
cause MATLAB Compiler to only look for functions in the specified toolbox
directories in addition to the MATLAB directories. Refer to the MCC
documentation for more information on this.
C:\Program Files\MATLAB\R2010b\toolbox\fuzzy\fuzzy\fuzzy.m
called by D:\MyFolder\VNTU\bakal\matlabAndCs\ShowFuzzyDesigner.m
(because of toolbox compilability rules)
How do I include this excluded fuzzy.m file in the compilation?
The command fuzzy launches the Fuzzy Inference Systems editor, a GUI supplied with Fuzzy Logic Toolbox. Compilation of Toolbox GUIs with MATLAB Compiler is typically not supported, and as detailed in the documentation for MATLAB Compiler, this is true of the GUIs within Fuzzy Logic Toolbox.
I must say, I think the message you're getting in the mccExcludedFiles.log file is mostly misleading - all of those things could cause a file to have been excluded, but in this case the only relevant bit is at the end, "(because of toolbox compilability rules)".
You might want to look into how to build a fuzzy system using the command-line functions supplied with Fuzzy Logic Toolbox, rather than the GUI. This walkthrough gives you a pretty good handle on building a Mamdani inference system using the command-line tools; a small sketch follows below. I am not positive how these translate into C# code, but I think there should be equivalent libraries therein.
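To give the flavour, a minimal sketch using the classic command-line functions (variable names, ranges and rules are made up):

    % Build a tiny Mamdani FIS entirely in code -- no GUI involved.
    fis = newfis('tipper');
    fis = addvar(fis, 'input', 'service', [0 10]);
    fis = addmf(fis, 'input', 1, 'poor', 'gaussmf', [1.5 0]);
    fis = addmf(fis, 'input', 1, 'good', 'gaussmf', [1.5 10]);
    fis = addvar(fis, 'output', 'tip', [0 30]);
    fis = addmf(fis, 'output', 1, 'cheap', 'trimf', [0 5 10]);
    fis = addmf(fis, 'output', 1, 'generous', 'trimf', [20 25 30]);
    % Each rule row: [input-MF output-MF weight and/or-flag]
    fis = addrule(fis, [1 1 1 1; 2 2 1 1]);
    tip = evalfis(7, fis)    % evaluate for service = 7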
If you cannot find a natural way to implement the MATLAB routines in C#, then you could look at this discussion which links some free fuzzy libraries for C#. I think one of the links is broken, but the other three load just fine.