Write a CoffeeScript Cakefile in literate coffee

I would like to write a Cakefile in litcoffee (as in Cakefile.litcoffee). Is there a way to do that?

It appears that cake does not do any literate checking when it finds and runs your Cakefile. You can of course add this behaviour if you are so inclined.
I would still try running cake -l and adding a #!/usr/bin/env coffee -l shebang to your file; you never know, right?
Barring the above, for what it's worth, I've had success using Grunt with litcoffee, along the lines of the sketch below.
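Here is a minimal sketch of the kind of Gruntfile I mean (the original incantation was a link, so the plugin config, paths, and task names here are illustrative). It assumes grunt-contrib-coffee, which hands .litcoffee sources to the CoffeeScript compiler, where the extension triggers literate mode:
# Gruntfile.coffee - a hypothetical sketch, not the original incantation
module.exports = (grunt) ->
  grunt.initConfig
    coffee:
      compile:
        expand: true
        cwd: 'src'
        src: ['*.litcoffee']
        dest: 'lib'
        ext: '.js'
  grunt.loadNpmTasks 'grunt-contrib-coffee'
  grunt.registerTask 'default', ['coffee']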

Related

How to open and edit encrypted perl script?

I have a perl script that is encrypted. The script can be compiled only if Filter::decrypt is installed. I have that filter installed, and the script compiles with no problem, but now I want to open the script with a text editor and edit it.
Can someone tell me how I can do this?
Pretty fundamentally - it's extremely difficult to make a script unreadable, simply because perl is an interpreted language. Exactly how to disentangle something is more a question of how it got tangled in the first place.
So I would suggest as a first port of call - have a look through Mastering Perl which has a whole chapter on the subject of disassembling perl code.
However, if you just look at the Filter::decrypt module page, it indicates several weaknesses that the module simply cannot cover - you can only truly 'protect' code if you control the perl interpreter in the first place. The things it suggests are:
Strip the Perl binary to remove all symbols.
Build the decrypt extension using static linking. If the extension is provided as a dynamic module, there is nothing to stop someone from linking it at run time with a modified Perl binary.
Do not build Perl with -DDEBUGGING. If you do then your source can be retrieved with the -Dp command line option.
The sample filter contains logic to detect the DEBUGGING option.
Do not build Perl with C debugging support enabled.
Do not implement the decryption filter as a sub-process (like the cpp source filter). It is possible to peek into the pipe that connects to the sub-process.
Check that the Perl Compiler isn't being used.
There is code in the BOOT: section of decrypt.xs that shows how to detect the presence of the Compiler. Make sure you include it in your module.
Assuming you haven't taken any steps to spot when the compiler is in use and you have an encrypted Perl script called "myscript.pl", you can get at the source code inside it using the perl Compiler backend, like this:
perl -MO=Deparse myscript.pl
Note that even if you have included the BOOT: test, it is still possible to use the Deparse module to get the source code for individual subroutines.
So:
perl -MO=Deparse yourscript
perl -Dp yourscript
If these don't work - look at your local copy of Filter::decrypt and alter it so it prints the decrypted result.
Best option: Just edit your unencrypted copy and reinstall it.
Alternative: Use decr (comes with Filter::decrypt) to decrypt an encrypted file.

What is the proper way to test perl modules during development?

I'm working on a personal Perl module to build a basic script framework and to help me learn more about the language. I've created a new module called "AWSTools::Framework" with ExtUtils::ModuleMaker via the command line tool modulemaker. I'm trying to figure out the appropriate way to test it during development.
The directory structure that was created includes the following:
./AWSTOOLS/Framework/lib/AWSTools/Framework.pm
./AWSTOOLS/Framework/t/001_load.t
The autogenerated 001_load.t file looks like this:
# -*- perl -*-
# t/001_load.t - check module loading and create testing directory
use Test::More tests => 2;
BEGIN { use_ok( 'AWSTools::Framework' ); }
my $object = AWSTools::Framework->new ();
isa_ok ($object, 'AWSTools::Framework');
If I try to run the script directly (either from the command line or inside my TextMate editor), it fails with:
Can't locate AWSTools/Framework.pm in @INC....
If I try to run prove in the ./AWSTOOLS/Framework directory, it fails as well.
The question is: What is the proper way to run the tests on Perl modules while developing them?
If you want to run a single test file, you need to tell perl where to find your modules, just like you would for any other program. I use the blib module to automatically add the right paths:
$ perl Makefile.PL; make; perl -Mblib t/some_test.t
You can also use prove to do the same thing. I don't use prove, but you can read its documentation to figure it out. The -b switch should do that, but I've had problems with it not doing the right thing (could just be my own idiocy).
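If you do want to try prove with -b, the invocation would look something like this (run perl Makefile.PL and make first, so that blib/ exists):
$ prove -b t/001_load.t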
If you're using the typical toolchain (ExtUtils::MakeMaker) it will be perl Makefile.PL to generate a makefile, then make test every time afterward. Those commands should be run from the root directory of the module. See http://search.cpan.org/perldoc?ExtUtils::MakeMaker#make_test
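Putting it together, the whole cycle from the module's root directory is:
$ perl Makefile.PL
$ make
$ make test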
Edit: and don't do it all manually, or you will come to hate testing. (Well, more than usual.) You will also want to look at least briefly at Test::Tutorial and https://www.socialtext.net/perl5/testing
You may also want to ask the friendly* people in #perl or related channels on your preferred IRC networks.
*Not actually friendly
I actually think that Dist::Zilla is flexible enough to allow you to use it for all development. If you aren't uploading to CPAN, just make sure you don't have [UploadToCPAN] in your dist.ini. Also make sure to [@Filter] it out of any plugin bundles which provide it.
Dist::Zilla may be too much to install for only one quick module that you aren't going to touch very often. If you have more than one dist in development then it is definitely worth a look.
You can easily interface it with your VCS using plugins (including Git).
You can create a plugin to deploy onto your server. Which would allow you to make sure that all your test files pass before allowing you to deploy ([TestRelease]).
If you don't like tabs in your source files, you can test for that without writing the test yourself ([NoTabsTests]).
Minimal dist.ini for non-CPAN dist
name = Your-Library
author = E. Xavier Ample <example@example.org>
license = Perl_5
copyright_holder = E. Xavier Ample <example@example.org>
copyright_year = 2012
version = 0.001
[GatherDir]
[PruneCruft]
[PruneFiles]
filename = dist.ini
filename = TODO.txt
match = ^.*[.]te?mp$
[NoTabsTests]
[TestRelease]
[CheckExtraTests]
[ModuleBuild]
[FakeRelease]
Test the dist:
dzil test
dzil xtest
If at a later date, you decide to upload it to CPAN:
Replace [FakeRelease] with [UploadToCPAN].
Get a PAUSE id, and set ~/.pause.
user YOUR-PAUSE-ID
password YOUR-PAUSE-PASSWORD
Run dzil release
DONE
In a quick attempt to help you, I would recommend looking at Testing Files and Test Modules.
Continuing to dig around and experiment, I've found the following two things which work for me:
Use prove -l in the './AWSTOOLS/Framework' directory. According to the prove perldoc page, it adds the "lib" directory to the path when Perl runs all the tests in the "t" directory.
To run the script individually/directly, I'm adding the following to the start of the script above the use Test::More line:
use FindBin qw($Bin);
use lib "$Bin/../lib";
This lets me run the script directly via the command line and in my editor (TextMate). This is based on this page from the Programming Perl book.
Using the -l flag for prove seems very much like the correct thing to do.
As for the "use lib" solution, I doubt that's actually a best practice. If it was, I would expect that modulemaker would have created the 001_load.t test file with that to begin with.

SBT-like features in the Haskell build ecosystem

I've been using Scala with SBT quite a bit lately. SBT's REPL loop has a handy feature: ~ COMMAND, meaning perform COMMAND on every source file change in the project. For instance:
~ test
and
~ compile
are terrifically useful for rapid development. I wonder, does anyone know of something similar for Haskell, a cabal shell, maybe?
You can get something like this very easily using inotifywait.
Just fire up a terminal in your project directory and run something like this:
$ while inotifywait -qq -r -e modify .; do cabal build && ./dist/build/tests/tests; done
This also works for any other language; just insert the build commands of your choice.
You can script GHCi to define your own commands and augment existing ones. To do this:
define a ~/.ghci file
write a macro using :def, e.g. to wrap :reload (see the sketch after this list)
More info on GHCi :def commands is here.
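For instance, a macro along these lines (a minimal sketch; the command name and the test entry point are illustrative) gives you a :rt command that reloads and then runs main:
-- in ~/.ghci: :def binds a name to a String -> IO String function,
-- and the String it returns is fed back to GHCi as commands
:def rt (\_ -> return ":reload\nmain")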
The ghcid project provides something limited to ~ :reload. It provides a few extra features (format to a fixed number of lines, persist warnings from previously loaded files), but not the generality of running any command.
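Typical usage is something like this (the file name is illustrative; --command picks the underlying ghci invocation):
$ ghcid --command "ghci Main.hs"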

How to make sphinx look for modules in virtualenv while building html?

I want to build html docs using a virtualenv instead of the native environment on my machine.
I've entered the virtualenv but when I run make html I get errors saying the module can't be imported - I know the errors are due to the module being unavailable in my native environment.
How can I specify which environment Sphinx should use when looking for modules (e.g. the virtualenv)?
The problem is correctly spotted by Mathijs.
$ which sphinx-build
/usr/local/bin/sphinx-build
I solved this issue installing sphinx itself in the virtual environment.
With the environment activated:
$ source /home/migonzalvar/envs/myenvironment/bin/activate
$ pip install sphinx
$ which sphinx-build
/home/migonzalvar/envs/myenvironment/bin/sphinx-build
It seems neat enough.
The problem here is that make html uses the sphinx-build command as a normal shell command, which explicitly specifies which Python interpreter to use in the first line of the file (i.e. #!/usr/bin/python). If Python gets invoked in this way, it will not use your virtual environment.
A quick and dirty way around this is by explicitly calling the sphinx-build Python script from an interpreter. In the Makefile, this can be achieved by changing SPHINXBUILD to the following:
SPHINXBUILD = python <absolute_path_to_sphinx-build-file>/sphinx-build
If you do not want to modify your Makefile you can also pass this parameter from the command line, as follows:
make html SPHINXBUILD='python <path_to_sphinx>/sphinx-build'
Now if you execute make html from within your virtualenv environment, it should use the Python interpreter from within your environment and you should see Sphinx finding all the goodies it requires.
I am well aware that this is not a neat solution, as a Makefile like this should not assume any specific location for the sphinx-build file, so any suggestions for a more suitable solution are warmly welcomed.
I had the same problem, but I couldn't use the accepted solution because I didn't use the Makefile. I was calling sphinx-build from within a custom python build file. What I really wanted to do was to call sphinx-build with the exact same environment that I was calling my python build script with. Fiddling with paths was too complicated and error prone, so I ended up with what seems to me like an elegant solution, which is to "manually" load the console script entry point and call it:
from pkg_resources import load_entry_point
cmd = load_entry_point('Sphinx', 'console_scripts', 'sphinx-build')
cmd(['sphinx-build', basepath, destpath])

How to split a Scala script into multiple files

Used as a scripting language, does Scala have some sort of include directive, or is there a way to launch a script from another script?
The scala command has the :load filename command to load a Scala file interactively. Alternatively the scala command's -i filename argument can be used to preload the file.
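For example (file names are illustrative):
$ scala -i helpers.scala
scala> :load more-helpers.scala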
As of the beginning of 2013, there seems to be no built-in support for multiple-file scripts.
There's one developer who implemented #include support for non-interactive scala scripts by assembling and compiling the files in a prior stage (I have not tried it yet).
Here is the blog post about it:
http://www.crosson.org/2012/01/simplifying-scala-scripts-adding.html
And the git repository:
https://github.com/dacr/bootstrap
I hope this, or something along these lines, becomes official some day, since the -i filename switch of scala seems to apply only to the interactive console.
Until then, a proper scripting language like Ruby might still remain the best option.
Ammonite is an option. It implements Scala (plus extensions), including imports in scripts.
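A minimal sketch of what that looks like (file names are illustrative; Ammonite scripts use the .sc extension, and import $file resolves another script relative to the current one):
// helper.sc
def greet(name: String) = s"Hello, $name"
// main.sc - run with: amm main.sc
import $file.helper
println(helper.greet("world"))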