CoqIDE - can't load modules from the same folder - Coq

I can't load modules that are in the same folder in CoqIDE.
I'm trying to load the sources from Software Foundations. I run CoqIDE in the folder that contains the SF sources, with coqide or coqide ./, but after opening and running a file I get this error:
Error: Cannot find library Poly in loadpath
on this line:
Require Export Poly.
and it's the same for every other Require command.
So how are you people loading the SF programs into CoqIDE?

You need to compile the .v files into .vo files and add their directory to your load path if you're going to Require them. To compile them, run coqc <file-path> at the command prompt. To add the files' directory to your load path in CoqIDE, you can insert the line Add LoadPath "<directory-path>". at the beginning of your .v files.
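A minimal sketch of that workflow, assuming the SF sources were unpacked into ~/sf (the path is only an illustration):
cd ~/sf                  # the folder that contains Poly.v and the other SF sources
coqc Poly.v              # produces Poly.vo next to Poly.v
coqide &                 # started from ~/sf, so Require Export Poly. can find the compiled Poly.vo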

I realize this is an old thread; however, there are not many resources on this problem. I just spent some time solving it, so I figured it would be good to post the solution on the first topic I got from googling. I'm using Coq 8.4pl6 compiled with no additional configuration on Arch Linux.
So, the manual says variables like $COQPATH, ${XDG_DATA_DIRS}/coq/ and ${XDG_DATA_HOME}/coq/ are checked; however, I've had no luck with those.
I also tried putting coqc -I /folder/path in the Edit -> Preferences -> Externals setting of CoqIDE, but still no luck there.
I write these down as they may work for someone.
The only global way that works for me is writing a coqrc file with Add LoadPath "<directory-path>". in it. On Linux the file needs to be in the home folder.
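For instance, a one-line sketch of that approach, assuming the sources live in /home/you/sf (an illustrative path) and the file is ~/.coqrc:
echo 'Add LoadPath "/home/you/sf".' >> ~/.coqrc    # CoqIDE and coqtop read this at startup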
Hope this saves someone some time.

There are several ways to make a source file loadable in coqtop, coqc, CoqIDE, or Proof General; one method is as follows:
Suppose you have downloaded the code files for the book Certified Programming with Dependent Types by Adam Chlipala (available here) and extracted them to a path we'll call CODEHOME. As you can see, every source file starts with the line Require Import Bool Arith List Cpdt.CpdtTactics.
First type coqc -v in a CLI (command line interface); the output will be something like The Coq Proof Assistant, version 8.4pl4 .... Then create a file named coqrc.8.4pl4 (the file's suffix should match the version of Coq you are using) in the $HOME/.config/coq directory, creating that directory if it does not exist, and write this line in it: Add LoadPath "CODEHOME/cpdt/src" as Cpdt. And that's it.
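A sketch of those steps as shell commands, assuming Coq 8.4pl4 and the CPDT sources extracted under $CODEHOME:
coqc -v                                        # check the exact version, e.g. 8.4pl4
mkdir -p "$HOME/.config/coq"
echo "Add LoadPath \"$CODEHOME/cpdt/src\" as Cpdt." >> "$HOME/.config/coq/coqrc.8.4pl4"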

If you started your file.v with Module file., just get rid of it (and also get rid of the matching End file.) and your problem is solved.

Related

Development of a Coq library (the Add LoadPath solution is not good enough)

I am adding some theorems to the library
https://github.com/coq-contribs/zfc
But there is one thing that is not very good.
While developing the code in CoqIDE I have to add
Add LoadPath "/home/user/0my/GITHUB/".
and rename all occurrences of
Require Import Axioms.
to
Require Import ZFC.Axioms.
All the files of the library are in
/home/user/0my/GITHUB/ZFC
and the name of the last folder matters.
But when I want to run the make command, I have to rename everything back.
The file Make contains the names of the files and the prefix. Deleting its first line didn't solve the problem.
I don't think this is the best way to develop with CoqIDE, so what should I do instead?
Edit 1:
I have a run_coqide.sh that consists of
#!/usr/bin/env bash
COQPATH=/home/user/0my/GITHUB/
/home/user/opam-coq.8.8.1/4.02.3/bin/coqide
"From ZFC Require Import Sets. " raises error "Cannot find a physical path".
Edited2:
I have find out that this is a working script:
#!/usr/bin/env bash
export COQPATH=/home/user/0my/GITHUB/
/home/user/opam-coq.8.8.1/4.02.3/bin/coqide
Is this a normal way to run it, or a hack?
Rename Make to _CoqProject; this is the name currently recognized by CoqIDE, where it looks for the project configuration (in particular the -R . ZFC option, which makes the Coq files in the directory visible to CoqIDE).
It is also possible to change the name CoqIDE looks for back to Make, but _CoqProject really does seem to be the new standard.
Note that the -R . ZFC option, which allows you to import libraries unqualified, corresponds to the command Add Rec LoadPath "/.../ZFC" as ZFC.
I would also suggest switching the whole codebase to explicit qualification (ZFC.Axioms), which you have been doing locally by hand; it makes working with different projects at the same time less error prone. I'm not sure why it was necessary to rename things back to run make; my understanding is that it shouldn't be necessary.
See also the coq_makefile utility in the Coq reference manual.
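A sketch of that setup, using the paths from the question (listing the .v files this way is just one way to populate _CoqProject):
cd /home/user/0my/GITHUB/ZFC
echo "-R . ZFC" > _CoqProject            # the option CoqIDE and coq_makefile both read
ls *.v >> _CoqProject                    # list the source files after the options
coq_makefile -f _CoqProject -o Makefile
make                                     # builds the .vo files under the ZFC prefix
coqide Sets.v &                          # run from this directory so CoqIDE picks up _CoqProject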

How to Build an RPM from Source in One Line without Spec Files etc?

I've been reading about building RPMs, and the process is quite complex. Is there any program/software that works like this:
Download the tar.gz file and extract it to a directory
cd into the directory
Run a single command
An RPM file is output into the directory
Does any such program exist? It seems as if it should. After all, when I run make, make install, etc., I don't need to specify spec files or provide locations for where the software has to be installed. So why should I have to do all that to create an RPM?
I've tried using checkinstall, but I keep getting errors like "Directory not found: /root/rpmbuild/BUILDROOT/hello-2.10-1.x86_64/usr"
So is there an easier way?
No. There is no easier way.
Sometimes upstream provides a 'make rpm' target. Sometimes checkinstall works. But often you have to create the spec file manually.
By the way, that error from checkinstall reveals two things:
you are running that command as root, which is very, very unwise;
you are missing a few build directories; run the command rpmdev-setuptree and it will create them for you (sketched below).
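A rough sketch of that flow with the hello-2.10 tarball from the error message; checkinstall may still need fakeroot or sudo for its install-tracking step, and the flags shown are just one reasonable combination:
rpmdev-setuptree                    # creates ~/rpmbuild/{BUILD,RPMS,SOURCES,SPECS,SRPMS} for the current user
tar xzf hello-2.10.tar.gz && cd hello-2.10
./configure && make
checkinstall -R --install=no        # -R produces an RPM; --install=no leaves the system untouched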

ldconfig: *.so* is not a symbolic link

I recently took over some RPM work while one of our colleagues is away. During the post-installation step, the RPM installs some libraries in a particular location, writes a file to /etc/ld.so.conf.d/ containing the path to these files, and then runs ldconfig. But when the call is made, there are quite a few messages stating: "libXYZ.so* is not a symbolic link".
I looked at the files and the symlinks are not set up correctly. E.g., libA.so.1 and libA.so.1.1 are identical files, instead of libA.so.1 -> libA.so.1.1. Whenever ldconfig is run on a system with the RPM installed, these messages are shown.
Now, for no particular reason, I tried replicating this by creating a shared library called libmylib.so.1.1. Then I created another file called libmylib.so.1 that was identical to the previous file. I added a test.conf file to /etc/ld.so.conf.d containing the path to these shared libraries, and then ran ldconfig. But I didn't see any of these "not a symlink" messages; instead, ldconfig set up symlinks for both of these files. Is the message displayed only under particular circumstances?
Also, when I'm installing shared libraries, do I need to set up the links, i.e.
linker name -> soname -> real name
manually, and then run ldconfig?
This is my first time working with RPM and installing shared libraries, so any input would be appreciated.
Thanks
Common practice is to make symbolic links relating the library version and soname to the name used when building/linking a program. Here are a few references on that:
Program Library HOWTO: 3. Shared Libraries
“soname” option for building shared library
ldconfig expects that there is only one actual file. Here are a few places where this question has been asked:
Always getting “not a symbolic link” message, while installing any package
ldconfig : /usr/lib/libstdc++.so.6 is not a symbolic link + mysql
Usually, problems in this area are due to improper updating of the symbolic links, but an incorrect build script can be the problem as well.
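A sketch of the conventional setup for the libA example above; /usr/local/lib is only an illustrative install location:
install -m 0755 libA.so.1.1 /usr/local/lib/
ln -sf libA.so.1.1 /usr/local/lib/libA.so.1      # soname -> real name; this is the link ldconfig checks
ln -sf libA.so.1 /usr/local/lib/libA.so          # linker name -> soname, needed only when building against the library
echo /usr/local/lib > /etc/ld.so.conf.d/libA.conf
ldconfig                                         # rebuilds the cache without the "is not a symbolic link" warning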

cmake which package name to pass to find_package

I am trying to link against the libconfig++ library using CMake. I installed the library using apt-get, so I am assuming it will have a .cmake file and I can use find_package. The problem is that I don't know what package name to use: I tried libconfig, config, and config++ as the package name, to no avail.
As a general question, how does one find out which package name is associated with a library?
I know that find_package looks in CMAKE_MODULE_PATH to see if there is a .cmake script. How do I find out the value of CMAKE_MODULE_PATH on my system? It's not an environment variable. I am running Ubuntu 12.04.
Any help is appreciated.
To use find_package you need a corresponding Find or Config CMake file, but a library may not provide one, and that seems to be the case with your library. You can use find_library to find libraries and find_path to find include directories; with these commands you can even write a FindXXX.cmake yourself.
CMAKE_MODULE_PATH is not an environment variable, it is a CMake variable. It is intended for you to set if you have additional directories with modules; by default it is empty. It is used in "Module" mode: CMake searches for FindXXX.cmake in CMAKE_MODULE_PATH (your modules) and in the modules shipped with CMake, and if one is found, it is then used to find the library and its headers.
If no such module is found, CMake switches to "Config" mode. On Unix it searches for XXXConfig.cmake in the following directories:
<prefix>/(lib/<arch>|lib|share)/cmake/<name>*/
<prefix>/(lib/<arch>|lib|share)/<name>*/
<prefix>/(lib/<arch>|lib|share)/<name>*/(cmake|CMake)/
These files are shipped with the library, so there is no need to find anything; they contain all the information about where the library and its includes are located, etc.
About the naming scheme: there is no standard one. You can look at the standard CMake modules. A module found on the internet for your library is named FindLibConfig.cmake.
In your case, the library ships without a corresponding CMake file, so you should write one yourself (or use one that is already written) and add the directory containing that file to CMAKE_MODULE_PATH.
I suggest you read up on how the find_package command works and how to write FindXXX.cmake files.
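As a quick way to answer the "which package name" question on Debian/Ubuntu, you can inspect what the dev package actually installs; libconfig++-dev below is an assumption about how the library was installed:
dpkg -l | grep -i libconfig                        # exact names of the installed packages
dpkg -L libconfig++-dev | grep -i cmake            # empty output means no Find/Config module is shipped
dpkg -L libconfig++-dev | grep -E '\.so$|include'  # where the library and headers actually live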

Can I move a Perl installation from one computer to another computer?

I am trying to set up an application dependent on a few Perl modules, but the server I am installing it on does not have an Internet connection. I read about offline module installs via ppd files; however, I would have to resolve all the dependencies one by one, which is all the more tedious considering I don't have a direct internet connection.
I am hoping to find a solution where I install ActivePerl on my PC, install all the libraries that I want, and then copy the directories over to my server. If it is just a matter of fixing some environment variables, that would be fine; I just want to know the definitive list of variables to modify. Is it mandatory to install the Perl libraries on the computer on which they are intended to run? (One is a 32-bit platform and the other is 64-bit, but the server is already running various 32-bit applications, so I hope that is not a major problem.) For best compatibility, I plan to install ActivePerl on both systems and merge the library directories so they are identical.
The answer was in the Perl FAQ; my bad, I didn't go through it properly.
I copied the perl binary from one machine to another, but scripts don't work.
That's probably because you forgot libraries, or library paths differ. You really should build the whole distribution on the machine it will eventually live on, and then type "make install". Most other approaches are doomed to failure.
One simple way to check that things are in the right place is to print out the hard-coded @INC that perl looks through for libraries:
% perl -le 'print for @INC'
If this command lists any paths that don't exist on your system, then you may need to move the appropriate libraries to these locations, or create symbolic links, aliases, or shortcuts appropriately. @INC is also printed as part of the output of
% perl -V
You might also want to check out "How do I keep my own module/library directory?" in perlfaq8.
From this link
Occasionally, you will not be able to use any of the methods to install modules. This may be the case if you are a particularly under-privileged user - perhaps you are renting web space on a server, where you are not given rights to do anything.
It is possible, for some modules, to install the module without compiling anything, so you can just drop the file in place and have it work. Without going into a lot of detail, some Perl modules contain a portion written in some other language (such as C or C++) and some are written in just Perl. It is the latter type that this method will work for. How will you know? Well, if there are no files called something.c and something.h in the package, chances are that it is a module that contains only Perl code.
In these cases, you can just unpack the file, and then copy just the *.pm files to a directory from which you will run the modules. Two examples should suffice to illustrate how this is done.
IniConf.pm is a wonderful little module that allows you to read configuration information out of a .ini-style config file. IniConf.pm is written only in Perl, and has no C portion. When you unpack the .tar.gz file that you got from CPAN, you will find several files in there, and one of them is called IniConf.pm. This is the only file that you are actually interested in. Copy that file to the directory where you have the Perl programs that will be using this module. You can then use the module as you would if it were installed "correctly", with just the line:
use IniConf;
Time::CTime is another very handy module that lets you print times in any format that strikes your fancy. It is written just in Perl, without a C component. You will install it just the same way as you did with IniConf, except that the file, called CTime.pm, must be placed in a subdirectory called Time. The colons, as well as indicating an organization of modules, also indicate a directory structure on your file system.
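A sketch of that manual install, using the Time::CTime example from the quote; the /path/to/app locations are placeholders:
mkdir -p /path/to/app/lib/Time
cp CTime.pm /path/to/app/lib/Time/                      # Time::CTime maps to Time/CTime.pm on disk
export PERL5LIB=/path/to/app/lib                        # or add: use lib '/path/to/app/lib'; to the script
perl -MTime::CTime -e 'print "Time::CTime loaded\n"'    # quick check that Perl can find the module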