ldconfig seems to be missing one of the symlinks - ld

I have a custom location (/opt/lib) added to my library paths by adding a file to /etc/ld.so.conf. I compiled the libgit2 library and installed it there.
I am trying to install pygit, which needs that library, but I keep getting an error saying it can't find -lgit2.
I double-checked, and ldconfig seems to have missed that symlink.
Here is the output on the terminal.
/opt/lib$ ls
libgit2.so libgit2.so.0.26.3 libgit2.so.26 pkgconfig
/opt/lib$ sudo ldconfig
/opt/lib$ sudo ldconfig -v | grep libgit
ldconfig: Can't stat /usr/local/lib/i386-linux-gnu: No such file or directory
ldconfig: Can't stat /usr/local/lib/i686-linux-gnu: No such file or directory
ldconfig: Can't stat /lib/i686-linux-gnu: No such file or directory
ldconfig: Can't stat /usr/lib/i686-linux-gnu: No such file or directory
ldconfig: Can't stat /usr/local/lib/x86_64-linux-gnu: No such file or directory
ldconfig: Path `/lib/x86_64-linux-gnu' given more than once
ldconfig: Path `/usr/lib/x86_64-linux-gnu' given more than once
ldconfig: /lib/i386-linux-gnu/ld-2.27.so is the dynamic linker, ignoring
ldconfig: /lib/x86_64-linux-gnu/ld-2.27.so is the dynamic linker, ignoring
libgit2.so.26 -> libgit2.so.0.26.3
ldconfig: /lib32/ld-2.27.so is the dynamic linker, ignoring
How do I get it to pick up libgit2.so?

libgit2.so is a link editor (ld) input file. /etc/ld.so.conf configures paths for the run-time dynamic linker. The relationship between the two is that the dynamic linker consumes the output of the link editor. Both happen to consult the LD_LIBRARY_PATH environment variable, but beyond that they are completely separate programs with different command-line options.
So you either need to set LD_LIBRARY_PATH (which may cause confusion later on, so it's not a good idea), or better, use -L to specify the path to the directory which contains the libgit2.so link editor input file.
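For example, assuming you are installing the pygit2 package via pip and its build honors the usual CFLAGS/LDFLAGS environment variables (and assuming the libgit2 headers landed in /opt/include, a hypothetical location), something like this should work:
export CFLAGS="-I/opt/include"
export LDFLAGS="-L/opt/lib"
pip install pygit2
Here -L/opt/lib tells the link editor where to find libgit2.so at build time; at run time the dynamic linker will then find libgit2.so.26 through your existing /etc/ld.so.conf entry.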

Related

Can we wget with file list and renaming destination files?

I have this wget command:
sudo wget --user-agent='some-agent' --referer=http://some-referrer.html -N -r -nH --cut-dirs=x --timeout=xxx --directory-prefix=/directory/for/downloaded/files -i list-of-files-to-download.txt
-N will check if there is actually a newer file to download.
-r will turn the recursive retrieving on.
-nH will disable the generation of host-prefixed directories.
--cut-dirs=X will avoid the generation of the host's subdirectories.
--timeout=xxx will, well, timeout :)
--directory-prefix will store files in the desired directory.
This works nicely, no problem.
Now, to the issue:
Let's say my files-to-download.txt has these kind of files:
http://website/directory1/picture-same-name.jpg
http://website/directory2/picture-same-name.jpg
http://website/directory3/picture-same-name.jpg
etc...
You can see the problem: on the second download, wget will see we already have a picture-same-name.jpg, so it won't download the second or any of the following ones with the same name. I cannot mirror the directory structure because I need all the downloaded files to be in the same directory. I can't use the -O option because it clashes with -N, and I need that. I've tried to use -nd, but it doesn't seem to work for me.
So, ideally, I need to be able to:
a.- wget from a list of URLs the way I do now, keeping my parameters.
b.- get all files in the same directory and be able to rename each file.
Does anybody have any solution to this?
Thanks in advance.
I would suggest two approaches.
Use the "-nc" or the "--no-clobber" option. From the man page:
-nc
--no-clobber
If a file is downloaded more than once in the same directory, Wget's behavior depends on a few options, including -nc. In certain cases, the local file will be clobbered, or overwritten, upon repeated download. In other cases it will be preserved.
When running Wget without -N, -nc, -r, or -p, downloading the same file in the same directory will result in the original copy of file being preserved and the second copy being named file.1. If that file is downloaded yet again, the third copy will be named file.2, and so on. (This is also the behavior with -nd, even if -r or -p are in effect.) When -nc is specified, this behavior is suppressed, and Wget will refuse to download newer copies of file. Therefore, "no-clobber" is actually a misnomer in this mode---it's not clobbering that's prevented (as the numeric suffixes were already preventing clobbering), but rather the multiple version saving that's prevented.
When running Wget with -r or -p, but without -N, -nd, or -nc, re-downloading a file will result in the new copy simply overwriting the old. Adding -nc will prevent this behavior, instead causing the original version to be preserved and any newer copies on the server to be ignored.
When running Wget with -N, with or without -r or -p, the decision as to whether or not to download a newer copy of a file depends on the local and remote timestamp and size of the file. -nc may not be specified at the same time as -N.
A combination with -O/--output-document is only accepted if the given output file does not exist.
Note that when -nc is specified, files with the suffixes .html or .htm will be loaded from the local disk and parsed as if they had been retrieved from the Web.
As you can see from this man page entry, the behavior might be unpredictable/unexpected. You will need to see if it works for you.
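If you do try it, note from the quote above that -nc may not be specified together with -N, so you would have to drop -N from your command, e.g. (flags copied from your original invocation):
sudo wget --user-agent='some-agent' --referer=http://some-referrer.html -nc -r -nH --cut-dirs=x --timeout=xxx --directory-prefix=/directory/for/downloaded/files -i list-of-files-to-download.txt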
Another approach would be to use a bash script. I am most comfortable using bash on *nix, so forgive the platform dependency. However, the logic is sound, and with a bit of modification you can get it to work on other platforms/shells as well.
Sample pseudocode bash script (reading the list line by line avoids the word-splitting problems of looping over `cat`):
while read -r url; do
    wget <all your flags except the -i flag> "$url" -O /path/to/custom/directory/filename
done < list-of-files-to-download.txt
You can modify the script to download each file to a temporary file, parse $url to extract a filename from the URL, check whether that file already exists on disk, and then decide whether to rename the temporary file to the name you want.
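A minimal sketch of that idea (the destination directory comes from your original command; the name-flattening rule and the timeout value are just illustrative choices, and -N's timestamp check is lost once you use -O):
while read -r url; do
    # flatten the URL path into a unique name, e.g.
    # http://website/directory1/picture-same-name.jpg -> directory1-picture-same-name.jpg
    name=$(printf '%s' "${url#*://*/}" | tr '/' '-')
    tmp=$(mktemp)
    wget --timeout=30 "$url" -O "$tmp" && mv "$tmp" "/directory/for/downloaded/files/$name"
done < list-of-files-to-download.txt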
This offers much more control over your downloads.

cmake seems to not pick up my -DCMAKE_CXX_FLAGS

I have a totally simple setup. Two files in two separate directories.
mkdir a
touch a/a.h
mkdir b
echo '#include <a/a.h>' > b/b.c
Compiling works when I specify a header path:
cd b
gcc -c -I.. b.c
cd ..
OK, now let's add cmake to the picture. For my purposes I need to specify the header search path via the command line. Consider the CMakeLists.txt read-only.
cat<<EOF > b/CMakeLists.txt
cmake_minimum_required (VERSION 3.0)
project (b)
add_library(b
b.c
)
EOF
mkdir b/build
cd b/build
cmake -DCMAKE_CXX_FLAGS=-I.. ..
make VERBOSE=1
But make fails, and I don't see the -I.. flag in the cc command line:
[ 50%] Building C object CMakeFiles/b.dir/b.c.o
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/cc -o CMakeFiles/b.dir/b.c.o -c /tmp/b/b.c
/tmp/b/b.c:1:10: fatal error: 'a/a.h' file not found
I tried giving an absolute path too, but it just doesn't work for me.
Your file has a .c extension, so you should use CMAKE_C_FLAGS for it.
And in most cases, you should specify the needed include search paths in CMakeLists.txt itself:
include_directories(..)
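If you do need the command-line route, a corrected invocation for this layout might look like the following sketch. Note that a relative -I is resolved from the directory where the compiler runs (the build directory), so an absolute path to the directory containing a/ is safer; /absolute/path/containing/a is a placeholder:
cd b/build
cmake -DCMAKE_C_FLAGS=-I/absolute/path/containing/a ..
make VERBOSE=1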

How to make a file executable using Makefile

I want to copy a particular file using a Makefile and then make that file executable. How can this be done?
The file I want to copy is a .pl file.
For copying, I am using the general cp -rp command. This works successfully, but now I want to make the file executable using the Makefile.
It's bad practice to use cp and chmod; instead, use the install command.
all:
	install -m 0777 hello ../hello
You can use the -m option with install to set the permission mode, and note that install can also set the owner and group of the file (via -o and -g) rather than just copying it as-is.
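For example, to install the question's .pl file with explicit permissions and ownership (the file name, destination path, owner, and group here are hypothetical, and changing ownership requires root):
all:
	install -m 0755 -o root -g root script.pl /usr/local/bin/script.pl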
You can still use cp and chmod accordingly, but it would be bad practice:
all:
	cp hello ../hello
	chmod +x ../hello
Update: install vs cp
cp simply copies files with their current permissions; install not only copies, but can also change permissions/ownership via its flags. (This is what your requirement was.)
One significant difference is that cp truncates the destination file and starts copying data from the source into the destination file. install, on the other hand, removes the destination file first.
This is significant because if the destination file is already in use, bad things could happen to whoever is using that file when you cp a new file on top of it, e.g. overwriting an executable that is running might fail. Truncating a data file that an existing process is busy reading/writing could cause pretty weird behavior. If you just remove the destination file first, as install does, things continue much like normal: the removed file isn't actually removed until all processes close it. [source]
For more details, check these:
install vs. cp; and mmap
How is install -c different from cp

How to find if debug information only contains relative paths or absolute paths?

How to find if debug information contains relative paths or absolute paths?
I am trying to output annotated source (opannotate) using the following link:
http://oprofile.sourceforge.net/doc/opannotate.html
I would like to know this in order to pass the following options to opannotate.
--base-dirs / -b [paths]
Comma-separated list of path prefixes. This can be used to point OProfile to a different location for source files when the debug information specifies an absolute path on your system for the source that does not exist. The prefix is stripped from the debug source file paths, then searched in the search dirs specified by --search-dirs.
--search-dirs / -d [paths]
Comma-separated list of paths to search for source files. This is useful to find source files when the debug information only contains relative paths.
Thanks.
If the CFLAGS used during compilation contain the -g flag, then the paths of all the individual source files are included in the .debug_info section of the resulting binary executable.
The following command will dump to the console a complete list of all the paths to the various .c source files present in a binary built with debug info:
$ readelf --debug-dump=info <binary-executable> | grep "\.c" | awk '{print $8}'
To search for the path of a particular source file within the debug info of the binary, change grep "\.c" to grep "<filename>" as appropriate.
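Since an absolute path is simply one that begins with /, you can extend the same pipeline to classify each source path directly (a sketch; the field handling is approximate, because readelf output formatting varies between versions):
$ readelf --debug-dump=info <binary-executable> | grep "\.c" | awk '{ p = $NF; if (p ~ /^\//) print "absolute:", p; else print "relative:", p }'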
For more details, check out this excellent article on debug info in binaries.

Image::Imlib2->load("filename") says file doesn't exists even though it does

use Image::Imlib2;
my $a = Image::Imlib2->load("/foo/file");
gives me the following error:
Runtime error: Image::Imlib2 load error: File does not exist at (eval 469) line 6.
Note that /foo/file is a CIFS mounted directory and this only happens for files on CIFS mounted directories. An additional complication is that this happens on Debian Squeeze but not on Debian Lenny.
The solution is to mount the CIFS directory using the 'noserverino' option.
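For example (the server name, share, mount point, and username are hypothetical):
sudo mount -t cifs //server/share /foo -o noserverino,username=me
With noserverino, the client generates its own inode numbers instead of using the ones the server returns.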
Image::Imlib2 is a Perl wrapper around the Imlib2 C library. The problem is that CIFS servers can return inode numbers larger than 2^31. This makes programs not compiled with LFS (Large File Support) fail with a glibc EOVERFLOW error. Either compile the program with LFS support (i.e. with -D_FILE_OFFSET_BITS=64) or use the "noserverino" mount option; note that with noserverino you may not be able to detect hard links properly.
http://linux.die.net/man/8/mount.cifs