How can I get icpc to tell me which files it uses for linking?

I'm linking some compiled code with icpc, e.g.:
icpc -o my_executable f1.o /path/to/f2.a -lfoo -lbar
I want icpc to tell me exactly which files it uses for the linking - which .a, .o and .so* and where. If possible, I want to be able to filter out files it looks at but eventually does not use; but even a superset of the files actually used is good enough.
How can I do that? I tried finding an appropriate command-line option for this and failed.
Note: I'm looking for a solution which doesn't depend on the link succeeding...

You can use ldd ./my_executable to see which files are used; it lists all the dynamic libraries that the executable depends on.
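For example, assuming the link succeeded and produced the executable:

ldd ./my_executable

Note that ldd reports only the shared-library (.so*) dependencies of an already-linked binary; it cannot list the .o/.a inputs, and it does not help when the link fails. If icpc drives the GNU linker on your system (an assumption worth verifying for your setup), you can instead ask the linker itself to trace every input file as it processes it, which prints even when the link ultimately fails:

icpc -o my_executable f1.o /path/to/f2.a -lfoo -lbar -Wl,--trace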

Related

How to only include some directories in ctags?

There is an --exclude option, but that is for excluding directories/files. I work on a big project and want to include only the directories that have source code, not the build stuff.
How to do that? What should I include in my .ctags file?
I use:
find FILES | ctags -L -
where FILES is the appropriate arguments to make find return only the files I want to index.
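For instance, a minimal sketch assuming the sources live under src/ and include/ (the directory names and extensions are illustrative):

find src include -name '*.c' -o -name '*.h' | ctags -L -

ctags -L - reads the list of files to index from standard input, so any selection that find can express, ctags can index.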
Exuberant Ctags (5.8) is now old and unmaintained, though. It still works for me, so I've not switched; but the last time I checked "Universal Ctags" appeared to be the way forwards, so I would suggest starting there:
https://ctags.io
https://github.com/universal-ctags/ctags
n.b. I experienced a curious bug with Exuberant Ctags 5.8 whereby find . resulted in some corrupted tag entries, but find * did not; so you might want to use the latter if using this approach. I didn't need to index any dot files at the root level, so I'm not sure offhand what happens for .* -- I don't think I tried it. Absolute paths were also fine, but then the TAGS file isn't portable. Potentially not an issue in the newer fork.

How to produce a .js file from a haskell source file with haste?

So I noticed, while answering this question, that the asker appears to be a JavaScript developer. And as the code I wrote in Haskell is easy enough, I thought I'd give Haste a try and compile it to JavaScript.
So, I downloaded the Windows binary package of Haste (why does the .msi require a reboot?!!?), added it to my path, issued haste-cabal update and haste-cabal install split, and after a bit of reading the output of hastec --help, I issued:
PS E:\h\stackoverflow> hastec -o hexagon.js --pretty-print hexagon.hs
as my best guess on how to get the output I am looking for.
Contrary to my expectations, Haste's output was this:
hastec.exe: user error (shell expression failed in readModule: Data.Binary.Get.runGet at position 8: not enough bytes)
So, my question: what do I have to do to get a JavaScript source file?
Is it possible that you have an old version of Haste lying around, or have intermediate files (.jsmod, for instance) from a different version of the compiler in your source directory? This sounds like the (quite unhelpful) error message Haste produces when it runs into a corrupted intermediate file.
Check that the version of the binary you're calling is what you expect (hastec --version). Then, try getting rid of all intermediate files in the directory as well as any files in %USERPROFILE%\AppData\Roaming\haste, reinstalling split, and recompiling with the -fforce-recomp flag (a command sketch follows at the end of this answer). You should also add a main function, so that Haste has an entry point to your program from which to start linking. If all you want to do is to make some Haskell function available to external JavaScript, you can use the export foreign function interface:
{-# LANGUAGE OverloadedStrings #-}
module Main where
import Haste.Foreign
import Hexagon
main = export "picture" Hexagon.picture
You will probably also want to compile your program with the --onexec flag, to make sure that main runs and exports picture immediately when loaded, and not on page load which is the default:
> hastec -o hexagon.js --pretty-print --onexec hexagon.hs
After doing this, any code included after hexagon.js will be able to call e.g. Haste.picture(5); in order to produce a picture of size 5.
(Re: MSI installer requiring a reboot, this is required since it adds the Haste binaries to your %PATH%, which does not take effect immediately. I assume that a re-login would be enough to make it take effect, however.)
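Tying the troubleshooting steps above together, a rough PowerShell sketch (the paths and the split reinstall are as described above; file names are from this question, so adjust to your project):

hastec --version
Remove-Item *.jsmod -ErrorAction SilentlyContinue   # clear stale intermediates
Remove-Item -Recurse -Force "$env:APPDATA\haste"    # i.e. %USERPROFILE%\AppData\Roaming\haste
haste-cabal install split
hastec -fforce-recomp -o hexagon.js --pretty-print hexagon.hs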

How to write a makefile that tags a library or an executable with a repository id?

I have code that generates an executable via a makefile. The executable itself generates an output file with data. In the future, when I go back and look at old data that I've kept, I would like to be able to reproduce the data in a reliable and systematic way. In other words, I would need to know an ID number from a repository (Git) so that I can recover the code, and I would also need to know how I compiled the code and what compiler and flags I used. What is the best way to go about this?
How do I accomplish the same as I've described above but with an old library instead of data, so that I can pick an old library, find out the repository ID number for the code that was used to generate it, and find out the Makefile info used to generate it?
There are many ways to do this; the choice of the best will depend on things like the flexibility of your source control system, and how much user mischief you want to guard against.
One possibility: I am not familiar with Git, but I bet with some effort you could set up a system such that when you check out a version of the code and makefile, you also produce a small file containing the version number (or ID or whatever). With a little more effort you could write the version number into the makefile to guard against a lost/swapped version file (although this would be conceptually unhygienic, since the makefile would then not be identical to the one under source control). The executable would read the file and append the version number to the data. (Again, the number could be incorporated into the executable if you like, which would make it a self-contained entity and guard against a swapped makefile/versionfile, but raise the hackles of your QA people.)
Another way: use checksums. The makefile calculates its own checksum and records it in a small file, which the executable uses/incorporates and appends to the data. The executable also calculates its own checksum (with caveats for compiler indeterminacies) and appends that too. A small database of checksums, easily constructed at need, acts as a lookup table for the index back into the repository.
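A minimal sketch of that checksum bookkeeping, assuming md5sum, a flat-file database, and Git (all names illustrative):

# at build time: record the makefile's checksum next to the repository revision
sum=$(md5sum Makefile | awk '{print $1}')
echo "$sum $(git rev-parse HEAD)" >> checksums.db
# later: given the checksum appended to an old data file, find the revision
grep "$sum" checksums.db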
This is straightforward. The trick is to not relink your library if the version has not changed.
.PHONY: version.proto
version.proto:
	# Run some command here that regenerates version.proto on every build,
	# producing a line like:
	#   char const Version[] = "MyProj svn rev 19228 tag (null)";
	# (hypothetical example; substitute your own version-control query)
	echo "char const Version[] = \"MyProj svn rev $$(svnversion .) tag (null)\";" > $@

version.c: version.proto
	cmp -s $< $@ || cp $< $@
Include version.c in the source-list of your project, and you are done.
What's all this cmp -s $< $@ || cp $< $@? The trick is to only update version.c if it differs from the last version compiled into your project. OTOH if it does not differ, then no error must be returned to the shell.
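As a sanity check, running make twice in a row illustrates the effect (transcript is illustrative):

make    # version changed: version.c is refreshed and the objects relink
make    # version unchanged: version.proto is rebuilt (it is .PHONY), but
        # version.c keeps its old timestamp, so nothing relinks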

Packaging Perl modules with Config files?

I am currently creating a Perl module for in house use. I used ExtUtils::ModuleMaker to generate a build script and skeleton for my Perl module. I would like to include .ini config files that my core modules need to run properly. Where do I put these files so they are installed with my module? What path do I need to use to access these config files across the main and sub modules?
P.S. this is the directory frame:
|-lib
|---Main.pm
|---Main
|-----subModule1.pm
|-----subModule2.pm
|-----subModule3.pm
|-scripts
|-t
If you are using Module::Install, you can use Module::Install::Share and File::ShareDir.
If you are using Module::Build, you may want to use its config_data tool and a *::ConfigData module.
Taking a look at the generated Makefile, I would bet the better place to put it is under lib/Main and then you can direct your module to look at ~/.modulerc first, then PERLLIB/Main/modulerc.ini or something like that.
You could also embed the defaults in your module in a way that, in absence of ~/.modulerc, the module works using the default data.
To find the home directory, see File::HomeDir. You'll not want to use ~ (since that's a shell thing anyway).
I would suggest having your module work without the rc file as much as possible. If it doesn't exist, the code should fall back to defaults. The same should hold when the file exists but a particular flag is missing: fall back to the default for that flag, too.
You may want to look at Config::Any while you're at it. No point reinventing that wheel.

Can someone break down how localization file ( .mo, .po ) generation works?

I'm trying to grok gettext.
Here's how I think it works -
First you use some sort of .po editor and tell it to scan a directory of your application; it creates the ".po" files (one per file scanned, containing the translatable strings found in the source). These are then compiled to binary .mo files, which gettext parses. You call a method through a high-level API such as Zend_Translate, specify that you want to use gettext, and it just returns the translations (it can be set up to cache them).
The part I'm really unclear about is how the editing of the .po files is actually done; it's manual, right? Then, once the compilation is done, the application of course relies on the binary .mo files.
And if someone could provide useful linux applications for editing .po files I'd be grateful.
The tutorial on NLS using GNU gettext should help you understand the process.
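In outline, the command-line workflow looks like this (a sketch using the standard GNU gettext tools; file names and the target locale are illustrative):

xgettext -o messages.pot src/*.c        # scan sources, extract marked strings
msginit -i messages.pot -o de.po -l de  # create a German .po for translators
msgfmt -o de.mo de.po                   # compile the edited .po to binary .mo

The .po editing step in the middle is indeed manual (or done with one of the editors below); the application only ever loads the compiled .mo.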
As for editing .po files, there are at least two applications (apart from vi :-): gtranslator and poedit.