I wrote a new lexer for Pygments and I am trying to use it. So I looked at this page
http://pygments.org/docs/lexerdevelopment/
where the installation procedure is described. It says to run make mapfiles, but I don't know where to do that.
I looked into these two directories, which contain the other.py module it talks about:
/usr/share/pyshared/pygments/lexers/
and
/usr/lib/python2.7/dist-packages/pygments/lexers/
But there is no Makefile in either of them. So how can I do it?
The blog post Custom syntax in pygments explains another way to add a custom lexer to pygments:
Pygments enables custom plugins via something called entry points in setuptools.
Directory structure:
|- FantomLexer
|- fantomlexer
| |- __init__.py
| |- lexer.py
|- setup.py
The __init__.py file can be empty, but it needs to be there, so it's enough to simply touch it. The lexer.py file will contain the regex lexer for Pygments.
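For reference, a minimal lexer.py could look something like the sketch below. The FantomLexer class name matches the entry point in setup.py further down; the aliases, filename pattern, and token rules are illustrative assumptions rather than a real Fantom grammar, so adapt them to your language:

from pygments.lexer import RegexLexer
from pygments.token import Comment, Keyword, Name, Text

class FantomLexer(RegexLexer):
    # A bare-bones regex lexer; Pygments tries the rules in order.
    name = 'Fantom'
    aliases = ['fantom']
    filenames = ['*.fan']

    tokens = {
        'root': [
            (r'//.*?$', Comment.Single),            # line comments
            (r'\b(class|void|return)\b', Keyword),  # a few illustrative keywords
            (r'[a-zA-Z_]\w*', Name),                # identifiers
            (r'.|\n', Text),                        # fallback for everything else
        ],
    }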
The contents of setup.py are as follows:
from setuptools import setup, find_packages

setup(
    name='fantomlexer',
    packages=find_packages(),
    entry_points="""
    [pygments.lexers]
    fantomlexer = fantomlexer.lexer:FantomLexer
    """,
)
You can then install your lexer via sudo python setup.py develop.
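To check that the plugin was picked up, you can ask Pygments for the lexer by one of its aliases (here 'fantom', assuming the aliases from the sketch above); get_lexer_by_name raises pygments.util.ClassNotFound if the lexer was not registered:

from pygments.lexers import get_lexer_by_name

lexer = get_lexer_by_name('fantom')  # finds entry-point lexers too
print(lexer)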
I found a solution which works. I assume your lexer is in the file mylex.py. I did the following under Ubuntu 13.10; you need root permissions to do it.
Copy your new lexer into /usr/share/pyshared/pygments/lexers/
In the same directory, run python _mapping.py to regenerate the lexer mapping.
Create a symbolic link to your lexer in /usr/lib/python2.7/dist-packages/pygments/lexers/. For example:
cd /usr/lib/python2.7/dist-packages/pygments/lexers/
ln -s /usr/share/pyshared/pygments/lexers/algobox.py .
I am currently so confused about the installation of my own Python packages... Between setup.py, sdists, and wheels, no matter what I do, I can't seem to achieve what I want: to have a bunch of non-.py data files installed and kept in the virtual environment, after installation, with the same structure they have in my project folder.
I've read all kinds of documentation and created a setup.py file with a data_files field that contains all the data files I need in my installation.
I have the following structure:
.
|__ requirements.txt
|__ setup.py
|__ hr_datapool
    |__ __init__.py
    |__ data_needed
    |   |__ needed_folder
    |       |__ needed_file.notpy
    |__ one_module
    |   |__ __init__.py
    |   |__ misc_tools.py
    |   |__ tests
    |       |__ test_tools.py
    |__ other_module
    ...
And data_needed contains non-.py data files that are needed for misc_tools.py (and thus test_tools.py) to run.
Because of this, I added a data_files argument to my setup.py that lists all the folders and files I need, and I have confirmed that everything that should be in it is there.
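Roughly, the relevant part of my setup.py looks like the following (the paths are illustrative, mirroring the tree above):

from setuptools import setup, find_packages

setup(
    name='hr_datapool',
    packages=find_packages(),
    data_files=[
        # (target directory, [source files]) pairs, all as relative paths
        ('hr_datapool/data_needed/needed_folder',
         ['hr_datapool/data_needed/needed_folder/needed_file.notpy']),
    ],
)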
And yet, if I do any variation of pip install ., python setup.py install, or the like, the data_files are completely ignored and/or placed in the wrong directory, and they don't appear anywhere in the created build or dist folders. Because of this, all my tests fail, since they can't load files that aren't there. And on the occasions when I do succeed in copying the files, they end up not in the package's installation folder inside the venv, but in the root of the venv.
The funny thing is, the files are handled during installation: when I install, I keep getting console output like
copying data_needed/needed_folder/needed_file.notpy -> /Users/.../venv/hr_datapool/data_needed/needed_folder/
but only if I use python setup.py install (not when using pip install .).
According to the documentation:
The directory should be a relative path. It is interpreted relative to the installation prefix (Python's sys.prefix for system installations; site.USER_BASE for user installations). Distutils allows directory to be an absolute installation path, but this is discouraged since it is incompatible with the wheel packaging format. No directory information from files is used to determine the final location of the installed file; only the name of the file is used.
Note in particular that the path is interpreted relative to the installation prefix, and that only the name of the file is used. However, in my example it doesn't install relative to the directory containing the package: it installs into its own folder in the root of the virtual environment, making it practically unreachable from within my code. I made sure I use relative paths in my setup.py, but this still happens.
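Indeed, the installation prefix inside a virtual environment is the venv root itself, which would explain where the files end up; a quick, project-independent check:

import sys, site

print(sys.prefix)      # inside a venv: the venv's root directory
print(site.USER_BASE)  # base directory for user installations (e.g. ~/.local)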
How can I make sure the required data_files are installed within the target directory of the module, and not separately into the root of the virtual environment?
I have a Coq project with its libraries organised into subdirectories, something like:
…/MyProj/Auxiliary/Aux.v
…/MyProj/Main/Main.v (imports Auxiliary/Aux.v)
When I compile the files, I expect to do so from working directory MyProj (via a makefile). But I also want to work on the files using Proof General/Coqtop, in which case the working directory is by default the directory in which the file lives.
But this means that the LoadPath is different between the two contexts, and so the logical path needed for the library import is different. How do I set up the coqc invocation, the LoadPath, and the import declarations so that they work in both contexts?
With each approach I have tried, something goes wrong. For instance, if I compile Aux.v by invoking
coqc -R "." "MyProj" Auxiliary/Aux.v
and import it in Main.v as
Require Import MyProj.Auxiliary.Aux.
then this works when I compile Main.v with
coqc -R "." "MyProj" Main/Main.v
but fails in Coqtop, with Error: Cannot find library MyProj.Auxiliary.Aux in loadpath. On the other hand, if before the Require Import I add
Add LoadPath ".." as MyProj.
then this works in Coqtop, but fails under coqc -R "." "MyProj" Main/Main.v, with
Error: The file […]/MyProj/Auxiliary/Aux.vo contains library
MyProj.Auxiliary.Aux and not library MyProj.MyProj.Auxiliary.Aux
I’m looking for a solution that’s robust for a library that’s shared with collaborators (and hopefully eventually with users), so in particular it can’t use absolute file paths. The best I have found for now is to add emacs local variables to set the LoadPath up when Proof General invokes Coqtop:
((coq-mode . ((coq-prog-args . ("-R" ".." "MyProj" "-emacs")))))
but this (a) seems a little hacky, and (b) only works for Proof General, not in CoqIDE or plain Coqtop. Is there a better solution?
Allow me to side-step your question by suggesting an alternative process, hinted at by Tiago.
Assuming that your project's tree looks like this:
MyProj/Auxiliary/Aux.v
MyProj/Main/Main.v
In MyProj, write a _CoqProject file listing all your Coq files
-R . ProjectName
Auxiliary/Aux.v
Main/Main.v
When you open one of these Coq files, emacs will look for the _CoqProject and do-the-right-thing (tm).
As shown by Tiago, coq_makefile will also give you a Makefile for free.
I know you explicitly asked for something that works across different platforms, but there's already a Proof-General-specific solution that is less hacky than the one you have. Proof General has a special variable called coq-load-path that you can set with local variables, much like you did for coq-prog-args. The advantage is that you don't have to worry about any other arguments that need to be passed to coqtop (such as -emacs in your example). Thus, your .dir-locals.el file could have a line like this:
((coq-mode . ((coq-load-path . ((".." "MyProj"))))))
Unfortunately, I am not aware of anything that works across platforms, although I'm pretty sure that something specific for CoqIDE must exist. If this is the case, maybe you could set up a script to keep these configuration files updated across different platforms?
If you use coq_makefile you can install the library on your system.
Without OPAM
To initialize your project:
coq_makefile -f _CoqProject -o Makefile
Share your library with other projects:
make install
With OPAM
Assuming you have OPAM installed, you can use coq-shell to help you take care of dependencies.
To set up your project:
coq_shell_url="https://raw.githubusercontent.com/gares/opam-coq-shell/master/src/opam-coq"
curl -s "${coq_shell_url}" | bash /dev/stdin init 8.4 # Install Coq and its dependencies
eval `opam config env --switch=coq-shell-8.4` # Setup the environment
coq_makefile -f _CoqProject -o Makefile # Generates the makefile
opam pin add coq:YOURLIBRARY . # Add your library to OPAM
When you update your library you should do:
opam upgrade coq:YOURLIBRARY
Here is an example of a project that uses the OPAM method:
https://bitbucket.org/cogumbreiro/aniceto-coq/src
I am familiar with using package.json with Node.js, Gemfile for Ruby, Podfile for Objective-C, et al.
What is the equivalent file for Perl, and what is the syntax used?
I've installed a couple of packages using cpanm and would like to save the package names and versions in a single file that can be executed by team members.
For simple use cases, writing a cpanfile is a good choice. A sample file might look like
requires 'Marpa::R2', '2.078';
requires 'String::Escape', '2010.002';
requires 'Moo', '1.003001';
requires 'Eval::Closure', '0.11';
on test => sub {
    requires 'Test::More', '0.98';
};
That is, it's actually a Perl script, not a data format. The dependencies can then be installed like
$ cd /path/to/your/module
$ cpanm --installdeps .
This does not install your module! But it makes sure that all dependencies are satisfied, so we can do:
use lib '/path/to/your-module/lib'; # add the location as a module search root
use Your::Module; # works! yay
This is usually sufficient e.g. for a git repository which you want others to tinker with.
If you want to create a tarball that can be distributed and installed easily, I'd recommend Dist::Zilla (although it's geared towards CPAN releases). Instead of a cpanfile we use a dist.ini:
name = Your-Module
version = 1.2.3
author = Your Self <you@example.com>
license = GPL_3
copyright_holder = Your Self
[@Basic]
[Prereqs]
Marpa::R2 = 2.078
String::Escape = 2010.002
Moo = 1.003001
Eval::Closure = 0.11
[Prereqs / TestRequires]
Test::More = 0.98
Then:
$ dzil test # sanity checks, and runs your tests
$ dzil build # creates a tarball
Dist::Zilla takes care of creating a Makefile.PL and other infrastructure that is needed to install the module.
You can then distribute that tarball, and install it like cpanm Your-Module-1.2.3.tar.gz. Dependencies are resolved, your packages are copied to a permanent location, and you can now use Your::Module in any script without having to specify the location.
Note that you should adhere to the standard directory layout for Perl modules:
./
    lib/
        Your/
            Module.pm    # package Your::Module
            Module/
                Helper.pm    # package Your::Module::Helper
    t/    # tests to verify the module works on the target system
        foo.t
        bar.t
    xt/    # optional: author tests that are not run on installation
        baz.t
    bin/    # optional: scripts that will later end up in the target system's $PATH
        command-line-tool
Makefile.PL, usually (along with a few other files; Perl has had packages for longer than any of the other languages you mention and suffers from a bit of inelegance here).
Module::Starter is a sensible way to start writing a package. It has a getting-started guide.
I have a mixed Python/C++ library with test files mixed in amongst source files in the same directories. The layout looks like
/home/irving/geode
    geode
        __init__.py
        vector
            __init__.py
            test_vector.py
            ...
    ...
Unfortunately, the library is unusable in place, since it lacks its .so extension modules. Question: can I make py.test always use an installed version, even when run from /home/irving/geode or a subdirectory?
The test files have from __future__ import absolute_import, and run fine if executed directly as scripts. For example, if I do
cd geode/vector
./test_vector.py
which does import geode, it finds the installed version. However, if I run py.test in geode/vector, it finds the local copy of geode, and then dies.
I think you have two options:
run py.test --pyargs geode.vector.test_vector to make pytest interpret the argument as an import path, deriving the file-system path from it. This should run the tests against the installed version.
move the tests out into a tests directory without an __init__.py file. This way you need to pip install -e . to work in place, or you can do python setup.py install and then run py.test to test against the installed version.
Need your help:
I want to use Eclipse CDT with Qt without creating a "Qt GUI project". Is that possible? How do I include the Qt libraries in my C++ project, and how do I call qmake/make to compile the program? This similar question didn't help me.
I want to use a 'C++ project' instead of a 'Qt GUI project' because there is an issue with external-library indexing in the Qt project (this problem).
Thanks a lot!
Nikolai.
We've done something similar using Qt with a vendor-customized version of Eclipse (Momentics) and CDT. To get it to work, we ended up creating a generic makefile project in Eclipse with our own hand-generated Makefile.
The hand-generated Makefile basically contains enough information to invoke qmake on the appropriate .pro file (qt.pro) and then invoke the resulting Makefile (qtmake.mk):
all: qtmake.mk
	$(MAKE) -f qtmake.mk

qtmake.mk: qt.pro
	qmake -r -o qtmake.mk qt.pro

clean: qtmake.mk
	$(MAKE) -f qtmake.mk clean

install: qtmake.mk
	$(MAKE) -f qtmake.mk install
Doing this is quite bothersome; I suggest you don't do it. I've tried it only on small projects.
As far as I know, you'll have to write a correct Makefile yourself (or set up CDT to create it), including all the include paths you need for the Qt headers. Then you'll have to link against all the Qt libraries your project uses.
If you make use of the Qt meta-object system, you'll have to run the moc before compiling and linking. The moc generates C++ sources that have to be compiled and linked with the other sources. If you're using GNU make, and I guess you are, it is possible to automate the moc by writing the correct rules in the Makefile CDT will create. For detailed information read this: http://doc.qt.io/qt-5/moc.html#writing-make-rules-for-invoking
By the way, isn't it possible for you to use Qt Creator?
This is very easy with NetBeans, since Qt is integrated into its C++ projects.
But if you use Eclipse, as is my case, you can follow these steps (for Linux users):
Include the directories with the Qt headers, for example /usr/include/qt4/Qt.
Generate the moc files from the headers that contain Qt macros, such as Q_OBJECT. This can be done using the following command in the project directory before the build process:
find . -name "*.h" | sed 's/\(.*\)\(\/\)\(.*\)\(\.h\)/moc-qt4 -D<DEFINE> & -o \1\2moc_\3.cpp/' | sh
where you have to fill in the <DEFINE> you want. Run it just once, or first remove the old moc files by running the following command from the project directory:
find . -name "moc_*.cpp" -exec rm -f {} \;
Build your project.
By the way, have you tried the Qt plugin?
J.
Here is an improved variant of jwernerny's Makefile:
first: all

all clean distclean install uninstall: qtmake.mk
	$(MAKE) -f qtmake.mk $@

qtmake.mk: *.pro
	qmake -r -o qtmake.mk $<

.PHONY: first all clean distclean install uninstall
It should not need to be edited when copied to another project, and what were separate rules have been merged into one.