Gruntfiles must be named either "Gruntfile.js" or "Gruntfile.coffee". So how can I write my Gruntfile in Literate CoffeeScript instead of vanilla CoffeeScript, given that (I believe) Literate CoffeeScript files need to end in .litcoffee instead of just .coffee?
Make this your Gruntfile.coffee. You could just as easily do it as a .js file, of course, as long as Node knows how litcoffee is supposed to be parsed (that is why you require coffee-script):
coffee = require 'coffee-script'    # registers .coffee/.litcoffee with Node's require (newer releases may need 'coffee-script/register')
module.exports = require './Gruntfile.litcoffee'
This is under the assumption that the litcoffee file's export is the (grunt) -> function
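For reference, a minimal Gruntfile.litcoffee of that shape might look like the sketch below; the empty config and task list are placeholders. In Literate CoffeeScript only the indented lines compile as code, the rest is prose:

This Gruntfile registers an empty default task.

    module.exports = (grunt) ->
      grunt.initConfig {}
      grunt.registerTask 'default', []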
You could have a Gruntfile.coffee act as a bootstrapper for the Gruntfile.litcoffee file, something like this (a sketch; it compiles the literate source by hand):
coffee = require "coffee-script"
module.exports = eval coffee.compile "Gruntfile.litcoffee"
While using ReasonML and BuckleScript, is it possible to configure BuckleScript so it won't generate export statements? I'd prefer it if the generated code could be used as-is in a browser, that is, being ES5 (or ES6) compatible.
Edit: OK, while trying out the tool chain a bit more, I realize just turning off the export is not enough. See example below:
function foo(x, y) {
  return x + y | 0;
}

var Test = /* module */[
  /* foo */foo
];

exports.Test = Test;
This code will pollute the global namespace if exports is removed, and is simply broken from an ES5 compatibility viewpoint.
Edit 2: Reading BuckleScript's blog, this seems not to be possible:
one OCaml module compiled into one JavaScript module (AMDJS, CommonJS, or Google module) without name mangling.
Source.
BuckleScript can output modules in a number of different module formats, which can then be bundled up along with their dependencies using a bundler such as webpack or rollup. The output is not really intended to be used as a stand-alone unit, since you'd be rather limited in what you could do in any case, as the standard and runtime libraries are separate modules. And even something as trivial as multiplication will involve the runtime library.
You can configure BuckleScript to output es6 modules, which can be run directly in the browser as long as your browser supports it. But that would still require manually extracting the standard and runtime libraries from your bs-platform installation.
The module format is configured through the package-specs property in bsconfig.json:
{
  ...
  "package-specs": ["es6-global"] /* Or "es6" */
}
Having said all that, you actually can turn off exports by putting [@@@bs.config { no_export }] at the top of the file. But this is undocumented, since it is of very limited use in practice, for the above-mentioned reasons.
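For illustration, the attribute goes at the very top of the .ml file, before any other code (a minimal sketch in OCaml syntax):

[@@@bs.config { no_export }]

let foo x y = x + y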
I know specific instances of this question have been answered before:
How can I dynamically include Perl modules without using eval?
How do I use a Perl package known only in runtime?
There are also good answers at Perl Monks:
Writing a Perl module that dynamically loads other modules.
Creating subroutines on the fly
But I would like a robust way to add functionality to a Perl application that will be:
Efficient: if the code is not needed it should not be compiled.
Easy to debug: if something goes wrong in the dynamic code, error reporting should point at the right place in that code.
Easy to extend: adding new code should be as easy as adding a new file or directory+file.
Easy to invoke: the main application should be able to use an "add on" without much trouble. An efficient mechanism to check if the "add on" has already been loaded and if not load it, would be a plus.
To illustrate the point, here are some examples that would benefit from a good solution:
A set of scripts that move data between different applications. For instance, moving data from OpenCart to Prestashop, where each entity in the data model has a specific "add on" that deals with the input or output; then an intermediate data model takes care of the transformation of the data. This could be used to move data in either direction, or even between different versions of the same e-commerce platform.
A web application that needs to render different types of HTML in different places. Each "module" knows how to handle certain information and accepts parameters to do so. One module outputs HTML, another a list of documents, another a document, another a banner, and so on.
Here are some examples that I have used and that work.
Load a function at run time and output the possible compile errors:
eval `cat $file_with_function`;
if( $@ ) {
    print STDERR $@, "\n";
    die "Errors at file $file_with_function\n";
}
Or, more robustly, using File::Slurp:
eval read_file("$file_with_function", binmode => ':utf8');
Check that a certain function has been defined:
if( !defined &myfunction ) {
    die "myfunction is not defined\n";
}
The function may be called from there on. This is fine with one function, but not for many.
If the function is put in a module:
require $file_with_function; # needs the ".pm" extension, i.e. addon/func.pm
$name_of_module->import(); # need to know the module name, i.e. Addon::Func
$name_of_module->myfunction(...);
Here the require may be protected inside an eval, with $@ then used as before.
With Module::Load:
load $name_of_module;
Followed by the import, and used in the same way; a fuller sketch follows below. Security should not be a concern, as it may be assumed that the dynamic code comes from a trusted place. Are there better ways? Which way would be considered good practice?
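Put together, the Module::Load variant would read something like this (a sketch; the module and function names are placeholders):

use Module::Load;

my $name_of_module = 'Addon::Func';
eval { load $name_of_module };                  # load() dies on failure, so trap it
die "Cannot load $name_of_module: $@" if $@;
$name_of_module->import();
$name_of_module->myfunction(@args);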
In case it helps, I will be using the solution (among other places, but not exclusively) within the Dancer framework.
EDIT: Given the comments, I add some more info. All cases that I have in mind have in common:
There is more than one dynamic piece of code. Probably many to start with.
Each bit of code has the same interface.
Given the comments and the lack of responses, I have done some research to answer my own question. Comments or other answers are welcome!
Dynamic code
By dynamic code I mean code that is evaluated at run time. In general, I consider it better to compile an application so that you have all the error checking the Perl compiler can offer before starting to execute. Added to use strict and use warnings, you can catch many common mistakes that way. So why use dynamic code at all? These are the reasons I consider:
An application performs many different actions that are chosen depending on the context of execution. For instance, an application extracts certain properties from a file. The way to extract them depends on the file type and we want to deal with many file types, but we do not want to change the application for each new file type we add. We also want the application to start quickly.
An application needs to be expanded on the fly in a way that does not require the application to restart.
We have a large application that contains a number of features. When we deploy the application, we do not want to provide all the possible features all the time, maybe because we license them separately, maybe because not all of them are able to run under all platforms. By shipping only the files with the features we want, we have a distribution that does not require changing any code or config files.
How do we do it?
Given the possibilities that Perl offers, solutions to adding dynamic code come in two flavors: using eval and using require. Then there are modules that may help do things in an easier or more maintainable way.
The quick and dirty way
The eval way uses the form eval EXPR to compile a piece of Perl code at run time. The expression could be a string, but I suggest putting the code in a file and grouping other similar files in a convenient place. Then, if possible using File::Slurp:
eval read_file("$file_with_code", binmode => ':utf8');
if( $# ) {
die "$file_with_code: error $#\n";
}
if( !defined &myfunction ) {
die "myfunction is not defined at $file_with_code\n";
}
Specifying the character set to read_file makes sure that the file will be interpreted correctly. It is also good to check that the compilation was correct and that the function we expect was defined. So in $file_with_code, we will have:
sub myfunction {
    # Do whatever with the arguments in @_; maybe return something
}
Then you may invoke the function normally. The function will be a different one depending on which file was loaded. Simple and dynamic.
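To make the dispatch concrete, here is a sketch of choosing the file from the execution context; the handler paths and the $type variable are hypothetical:

use File::Slurp;

my %handler_for = (
    csv  => 'handlers/csv_handler.pl',
    json => 'handlers/json_handler.pl',
);
my $file_with_code = $handler_for{$type}
    or die "No handler for type $type\n";

eval read_file( $file_with_code, binmode => ':utf8' );
die "$file_with_code: error $@\n" if $@;

my $result = myfunction($data);    # each handler file defines its own myfunction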
The modular way (recommended)
The way I would do it with maintainability in mind would be using require. Unlike use, which is evaluated at compile time, require may be used to load a module at run time. Out of the various ways to invoke require, I would go for:
my $mymodule = 'MyCompany::MyModule'; # The module name ends up in $mymodule
# Caveat: require EXPR treats $mymodule as a file name, so for a name held in a
# variable the string-eval form shown below is what actually works
require $mymodule;
Also unlike use, require will load the module but will not execute import. So we may use any functions inside the module, and those function names will not pollute the calling namespace. To access a function we will need to use:
$mymodule->myfunction($a, $b);
See below as to how the arguments get passed. This way of invoking a function will add an argument before $a and $b that is usually named $self. You may ignore it if you don't know anything about object orientation.
As require will try to load a module, and the module may not exist or may not compile, it is better to catch the error by using:
eval "require $mymodule";
Then $@ may be used to check for an error in the loading and compiling process. We may also check that the function has been defined with:
if( !$mymodule->can('myfunction') ) {
    die "myfunction is not defined at module $mymodule\n";
}
In this case we will need to create a directory for the modules and a file with the .pm extension for each one:
MyCompany/
    MyModule.pm
Inside MyModule.pm we will have:
package MyCompany::MyModule;

sub myfunction {
    my ($self, $a, $b) = @_;
    # Do whatever; maybe return something
    # $self will be 'MyCompany::MyModule'
}

1;
The package bit is essential and will make sure that whatever definitions we put inside end up in the MyCompany::MyModule namespace. The 1; at the end tells require that the module initialization was correct.
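Putting the caller's side together, the whole load-check-invoke sequence reads like this (a sketch reusing the names from above):

my $mymodule = 'MyCompany::MyModule';

eval "require $mymodule";
die "Cannot load $mymodule: $@\n" if $@;
die "myfunction is not defined at module $mymodule\n"
    unless $mymodule->can('myfunction');

my $result = $mymodule->myfunction($a, $b);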
In case we wanted to implement the module using other libraries that could pollute the caller's namespace, we could use the namespace::clean module. It makes sure the caller does not get any namespace additions coming from the module we are defining. It is used in this way:
package MyCompany::MyModule;

# Definitions by these modules will not be available to the code doing the require
use Library1 qw(def1 def2);
use Library2 qw(def3 def4);
...

# Private functions go here and will not be visible from the code doing the require
sub private_function1 {
    ...
}
...

use namespace::clean;

# myfunction will be available
sub myfunction {
    # Do whatever; maybe return something
}
...

1;
What happens if we include a module more than once?
The short answer is nothing. Perl keeps track of which modules have been loaded, and from where, using the %INC variable. Neither use nor require will load a module twice. use will still add any exported names to the caller's namespace; require will not even do that. In case you want to check whether a module has already been loaded, you could use %INC or, better yet, Module::Loaded, which is part of the core in modern Perl versions:
use Module::Loaded;

if( !is_loaded($mymodule) ) {
    eval "require $mymodule";
    ...
}
How do I make sure Perl finds my module files?
For use and require, Perl uses the @INC variable to define the list of directories that will be used to look for libraries. Adding a new directory to it may be achieved (among other ways) by adding it to the PERL5LIB environment variable or by using:
use lib '/the/path/to/my/libs';
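For example, a common pattern, assuming (hypothetically) that the modules live in a lib directory next to the script, is:

use FindBin qw($Bin);
use lib "$Bin/lib";    # require will now find $Bin/lib/MyCompany/MyModule.pm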
Helper libraries
I have found some libraries that may be used to make the code that uses the dynamic mechanism more maintainable. They are:
The if module: will load a module or not depending on a condition: use if CONDITION, MODULE => ARGUMENTS;. It may also be used to unload a module. (A one-line example appears after the next code block.)
Module::Load::Conditional: will not die on you while trying to load a module, and may also be used to check the module version or its dependencies. It is also able to load a list of modules all at once, even checking their versions before doing so.
Taken from the Module::Load::Conditional documentation:
use Module::Load::Conditional qw(can_load);

my $use_list = {
    CPANPLUS     => 0.05,
    LWP          => 5.60,
    'Test::More' => undef,
};

print can_load( modules => $use_list )
    ? 'all modules loaded successfully'
    : 'failed to load required modules';
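And the if pragma mentioned in the list above takes this shape; the condition and module here are chosen purely for illustration:

# Load Win32::Console only when actually running on Windows
use if $^O eq 'MSWin32', 'Win32::Console';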
Where do I put common server-side JavaScript files used by most of my jobs? I do not want to get fancy and create a new Node module, I just need a place to put a couple of utility functions.
A Node module is the only way I could find that works. I created a .js file (for instance utils.js) in the package folder where jobs/ and widgets/ are, and put all my common code in its exports:
module.exports = {
    commonFunction: function () { ... },
    ...
};
In my jobs, I import the common code I need with:
var utils = require('../../utils.js');
and use the exported properties offered by utils.
If I install the SassAndCoffee.Core package from NuGet, and then ask SassAndCoffee to compile some CoffeeScript, it seems to pass the "bare" option to the CoffeeScript compiler -- i.e., it does not wrap my script in CoffeeScript's usual (function() { and }).call(this); bookends.
Is there a way I can make SassAndCoffee not use the "bare" option?
Note: this is in a desktop app, and I'm explicitly calling into SassAndCoffee's code -- this is not the magic rewriting that happens in an ASP.NET site.
More details: Here's my code to compile CoffeeScript using SassAndCoffee.
var sassCompiler = new CoffeeScriptCompiler();
var js = sassCompiler.Compile("alert 'Hello World'");
which results in the following (bare) output in the js variable:
alert('Hello World');
But if I write some straightforward JavaScript that calls the official CoffeeScript compiler with the default options, e.g. this HTML file (drop coffee-script.js into the same directory):
<script src="coffee-script.js"></script>
<script>
document.write("<pre>")
document.write(CoffeeScript.compile("alert 'Hello World'"))
document.write("</pre>")
</script>
I get the expected, wrapped JavaScript output:
(function() {
  alert('Hello World');
}).call(this);
It looks like SassAndCoffee is calling CoffeeScript.compile(input, {bare: true}) instead of just CoffeeScript.compile(input).
I'd like to use SassAndCoffee.Core for its V8 support, but I want to be able to choose between default output and bare output. Short of rewriting SassAndCoffee's CoffeeScript compiler (which would kinda defeat the point of using SassAndCoffee), or manually prepending and appending the wrapper code (I'd feel dirty duplicating work that the compiler is supposed to do), is there any way I can get SassAndCoffee to output non-bare JavaScript?
If I'm reading this right, it seems to be an explicit decision by the developers:
https://github.com/xpaulbettsx/SassAndCoffee/blob/43300b7805db8b3dccf20cc71d1282ecfd8c76e1/SassAndCoffee.Core/CoffeeScript/coffee-script.js
That link also points you at the file you would need to change to adapt it to your needs.
As an alternative, in case you get stuck: I use Mindscape Web Workbench, a Visual Studio plugin that seems to do most of what SassAndCoffee accomplishes for you.
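And if all else fails, wrapping the bare output yourself is only a couple of lines, even if the question understandably calls that option dirty (a sketch using the same class as the question's snippet):

var compiler = new CoffeeScriptCompiler();
var bare = compiler.Compile("alert 'Hello World'");
// Reproduce the compiler's usual safety wrapper by hand
var js = "(function() {\n" + bare + "\n}).call(this);";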
I am totally new to Perl/Fastcgi.
I have some pm-modules to which I will have to add a lot of scripts, and over time it will grow and grow. Hence, I need a structure that makes the administration easier.
So, I want to create files in some kind of directory structure which I can include. I want the files that I include to behave exactly as if their text were written in the file where I do the include.
I have tried 'do', 'use' and 'require'. The actual file I want to include is in one of the directories Perl is looking in. (verified using perl -V)
I have tried within and outside BEGIN {}.
How do I do this? Is it possible at all to include .pm files in other .pm files? Do the included files have to be .pm files, or can they have any extension?
I have tried several ways, included below is my last try.
Config.pm
package Kernel::Config;

sub Load {
    # other activities
    require 'severalnines.pm';
    # other activities
}

1;
severalnines.pm
# Filter Queues
$Self->{TicketAcl}->{'ACL-hide-queues'} = {
    Properties => {},
    PossibleNot => {
        Ticket => { Queue => ['[RegExp]^*'] },
    },
};

1;
I'm not getting any errors in Apache's error_log related to this. Still, the code is not recognized as it would be if I put it directly in Config.pm.
I am not about to start programming a lot, just to do some admin in a 3rd-party application. Still, I have searched around trying to learn how including files works. Is severalnines.pm considered to be a Perl module, and do I need to use a program like h2xs, or similar, in order to "create" the module (told you, total newbie...)?
Thanks in advance!
I usually create my own module prefix -- named after the project or the place I worked. For example, you might put everything under Mu with modules named like Mu::Foo and Mu::Bar. Use multiple modules (don't try to keep everything in one single file) and name your modules with the *.pm suffix.
Then, if the Mu directory is in the same directory as your programs, you only need to do this:
use Mu::Foo;
use Mu::Bar;
If they're in another directory, you can do this:
use lib qw(/path/to/other/directory);
use Mu::Foo;
use Mu::Bar;
Is it possible at all including pm files in pm files?
Why certainly yes.
So, I want to create files in some kind of directory structure which I can include. I want the files that I include will be exaclty like if the text were written in the file where I do the include.
That's a bad, bad idea. You are better off using the package mechanism. That is, declare each of your modules with a separate package name. Otherwise, one of your modules will have a variable or function in it that your script overrides, and you'll never, ever know it.
In Perl, you can reference variables in your modules by prefixing them with the module name (as File::Find does: for example, $File::Find::name is the found file's name). This doesn't pollute your namespace.
If you really want your module's functions and variables in your namespace, look at the @EXPORT_OK list variable in Exporter. This is a list of all the variables and functions that callers may import into their namespace. However, it's not automatic: you have to list them next to your use statement. That way, you're more likely to know about them. Using Exporter isn't too difficult. In your module, you'd put:
package Mu::Foo;

use Exporter qw(import);
our @EXPORT_OK = qw(convert $fundge @ribitz);
Then, in your program, you'd put:
use Mu::Foo qw(convert $fundge @ribitz);
Now you can access convert, $fundge and @ribitz as if they were part of your main program. However, you have now documented that you're pulling in these subroutines and variables from Mu::Foo.
(If you think this is complex, be glad I didn't tell you that you really should use Object Oriented methods in your Modules. That's really the best way to do it.)
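(For the curious, here is a bare-bones sketch of how the object-oriented flavor of such a module starts out; the constructor and method are illustrative only:)

package Mu::Foo;

sub new {
    my ($class, %args) = @_;
    return bless { %args }, $class;    # object state comes from the constructor arguments
}

sub convert {
    my ($self, $value) = @_;
    # ... convert $value, possibly using options stored in $self ...
}

1;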
if (    'I want the files that I include will be exactly like if the text were written in the file where I do the include.'
     && 'have to add a lot of scripts and over time it will grow and grow' ) {
    warn 'This is probably a bad idea because you are not creating any kind of abstraction!';
}
Take a look at Exporter, it will probably give you a good solution!