Getting rid of CoffeeScript's closure wrapper

How can I omit the automatic closure wrapper that hides my variables from the global scope?
(function() {
  // my compiled code
}).call(this);
I'm just playing around with CoffeeScript + SproutCore, and I'd prefer to leave the scope as it is: in this case there is nothing that needs protecting from being overwritten.
I know I can use @ or this. at the declaration, but that's not too elegant.

Quick and dirty solution: use the command-line flag -b (bare); see the example after the usage listing below. Warning: kittens will die if you do that!
Clean solution: Don't do that.
Usage: coffee [options] path/to/script.coffee
  -c, --compile      compile to JavaScript and save as .js files
  -i, --interactive  run an interactive CoffeeScript REPL
  -o, --output       set the directory for compiled JavaScript
  -j, --join         concatenate the scripts before compiling
  -w, --watch        watch scripts for changes, and recompile
  -p, --print        print the compiled JavaScript to stdout
  -l, --lint         pipe the compiled JavaScript through JSLint
  -s, --stdio        listen for and compile scripts over stdio
  -e, --eval         compile a string from the command line
  -r, --require      require a library before executing your script
  -b, --bare         compile without the top-level function wrapper
  -t, --tokens       print the tokens that the lexer produces
  -n, --nodes        print the parse tree that Jison produces
      --nodejs       pass options through to the "node" binary
  -v, --version      display CoffeeScript version
  -h, --help         display this help message
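For example, a minimal sketch of the difference (assuming a hypothetical file app.coffee with a single assignment; -p prints the compiled JavaScript instead of writing a file, and the output is shown roughly, as whitespace and comments vary by CoffeeScript version):
$ cat app.coffee
answer = 42
$ coffee -p app.coffee
(function() {
  var answer;
  answer = 42;
}).call(this);
$ coffee -bp app.coffee
var answer;
answer = 42;
With --bare the var lands at the top level of the script, so in a browser script answer ends up on the global object.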

Another option I used was to attach my global variables to the global object from inside the wrapper function; I attached mine to window. This keeps your JavaScript encapsulated and only exposes the variables you need in the global scope.
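A minimal sketch of that idea in CoffeeScript (the myApp name is purely illustrative):
# Everything declared here stays inside the compiled wrapper...
myApp = {}
myApp.greet = (name) -> "Hello, #{name}"
# ...and only the one object you choose is exposed globally.
window.myApp = myApp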

Related

Fish shell creating functions using eval'd output from another exe

On Ubuntu Server 16.10 x64 with fish 2.3.1, my uru_rt executable generates this function on stdout
function uru
    set -x URU_INVOKER fish
    # uru_rt must already be on PATH
    uru_rt $argv
    if test -d "$URU_HOME" -a -f "$URU_HOME/uru_lackee.fish"
        source "$URU_HOME/uru_lackee.fish"
    else if test -f "$HOME/.uru/uru_lackee.fish"
        source "$HOME/.uru/uru_lackee.fish"
    end
end
when run via uru_rt admin install. The uru function provides the Go-based, cross-platform Ruby version manager tool https://bitbucket.org/jonforums/uru
On bash systems, I inject the uru function by placing eval "$(uru_rt admin install)" in a startup file so that uru is present in the shell.
On fish, running eval (uru_rt admin install) rewards me with this failure
$ eval (uru_rt admin install)
Missing end to balance this begin
- (line 1): begin; function uru set -x URU_INVOKER fish # uru_rt must already be on PATH uru_rt $argv if test -d "$URU_HOME" -a -f "$URU_HOME/uru_lackee.fish" source "$URU_HOME/uru_lackee.fish" else if test -f "$HOME/.uru/uru_lackee.fish" source "$HOME/.uru/uru_lackee.fish" end end
^
from sourcing file -
called on line 60 of file /usr/share/fish/functions/eval.fish
in function “eval”
called on standard input
source: Error while reading file “-”
I've also tried set u1 (uru_rt admin install); eval "$u1" with the same result.
As expected, when I do uru_rt admin install > ~/.config/fish/functions/uru.fish the uru function becomes persistently available. While this is an option, my preference is to use eval in ~/.config/fish/config.fish
As a noob to fish, how do I dynamically inject this uru function into the environment using eval similar to bash's eval "$(uru_rt admin install)"?
Fish's eval is a wrapper function around its source builtin, and it seems there's some weirdness (maybe even a bug) going on with its argument splitting when you pass multiple lines.
However, in this case it's simpler, faster, and actually works if you just use source, as in uru_rt admin install | source.
That assumes, though, that uru_rt admin install really needs to be called; if all it does is print that code to stdout, without changing it, you can also simply save the function, e.g. in ~/.config/fish/functions/uru.fish.
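A minimal sketch of how that could look in your startup file (assuming uru_rt is already on PATH):
# ~/.config/fish/config.fish
# Pipe the generated function definition straight into the current shell.
uru_rt admin install | source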

Is there a way to issue boilerplate debugger commands on the command line?

I'd love to be able to do something like:
perl -d my-program.pl -c 'b postpone Foo::Bar::some_func'
i.e. specify as part of the invocation of perl -d a command that I would ordinarily enter at the debugger prompt, namely b postpone Foo::Bar::some_func.
From man perldebug, you could use the source file command:
Put some common commands in a file and source that manually after starting the debugger.
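A minimal sketch of that approach (the file name debugger.cmds is just an example):
$ cat debugger.cmds
b postpone Foo::Bar::some_func
$ perl -d my-program.pl
  DB<1> source debugger.cmds
The debugger then reads and executes each command in the file as if you had typed it at the prompt.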

can I attach functions to the coffee console before it starts?

Let's say I want to have a pp function:
coffee> pp = (obj) -> JSON.stringify(obj)
[Function]
coffee> pp({cat: "fancy"})
'{"cat":"fancy"}'
coffee>
Is there a way that I can have that function be available immediately when the console loads? I'm looking at coffee -r "utils.coffee", but don't see any way to put that required library into an object that's available at the command line. It looks like I might be able to alter repl.js, but that seems like a bad idea.
You can put those functions on the global object (global, which is kind of Node's version of window) and then use the -r option.
# utils.coffee
global.pp = (obj) -> JSON.stringify(obj)
And then, from the same directory utils.coffee is in, run:
coffee -r ./utils
It should start a CoffeeScript REPL and have the pp function available as a global variable:
coffee> pp ohmy: 'neat'
'{"ohmy":"neat"}'
Update: it seems the -r command-line option was removed in CoffeeScript's master. It probably wasn't meant to be used this way :(
Update 2: There's another way to do this! And it doesn't rely on any command-specific parameter:
{ echo "require './utils'"; cat; } | coffee
It will, however, not work 100% like the Coffee REPL. The arrow keys for example don't seem to work.
Edit (jc): Using this method allows you to make an unload function for Node, which is handy:
# utils.coffee
global.unload = (moduleName) ->
  cacheName = require.resolve(moduleName)
  delete require.cache[cacheName]
$ coffee -r ~/Dev/utils.coffee
coffee> unload
[Function]
Update 3: Another possibility is to "create your own REPL". Not really reimplementing anything. Based on this hacky solution, you could do something like:
#! /usr/bin/env coffee
# REPL functions.
global.pp = (obj) -> JSON.stringify(obj)
# Start the REPL.
require 'coffee-script/lib/coffee-script/repl'
And then use that script as your new REPL. It will work exactly like the normal Coffee REPL, plus the new functions (no problems with arrow keys or TAB completion :)
BTW, I think you'll need to have CoffeeScript installed without the -g option on npm for that to work.
It is a very hacky solution though. It relies on the internal CoffeeScript implementation file structure and its functionality, and that could change at any moment (in fact, I'm aware that there has been some work done on a new, revamped Coffee REPL based on Node's one... I hope that functionality gets exposed for programmatic use, so these kinds of hacks aren't hacks any more).
Have a look at Daniel Taylor's excellent nesh, an enhanced Node shell that also speaks CoffeeScript: http://danielgtaylor.github.io/nesh/
Here's how you would load your pp function into a CoffeeScript REPL:
nesh -c -e 'pp = (obj) -> JSON.stringify(obj)'
Similarly, if you wanted to load a file, say ~/.nesh_profile.coffee (analogous to loading a shell profile), on startup, use:
nesh -c -e ~/.nesh_profile.coffee
Small caveat: the REPL language (JavaScript vs. CoffeeScript) and the language of the code you're loading must be the same. Edit: the only exception is that you can always load JavaScript via a file with extension .js, even when starting a CoffeeScript REPL. (By contrast, you can't specify a JS string when starting a CoffeeScript REPL.)
To automate loading of your 'nesh profile' into a CoffeeScript REPL you could define an alias in your shell profile; e.g.:
alias neshc='nesh -c -e ~/.nesh_profile.coffee'
Update: Here's how you can use the very same 'nesh profile' with a JavaScript REPL; compilation to JavaScript is performed on the fly using process substitution:
alias neshj='nesh -e <(coffee -bp ~/.nesh_profile.coffee)'
nesh is available as an npm package, so you can easily install it as follows:
sudo npm install nesh -g

Run Coffeescript Interactive (REPL) with a script

In Python, I can run a script and then enter interactive mode in the context of that script. This lets me mess with global variables and whatnot to examine program state.
$ python -i hello.py
Can I do this with Coffeescript? I've tried the following:
$ coffee -i hello.coffee
doesn't load hello.coffee. It's equivalent to coffee -i
$ cat hello.coffee | coffee -i
runs the script line by line in the REPL but exits the REPL after EOF.
I've recently started a project to create an advanced interactive shell for Node and associated languages like CoffeeScript. One of its features is loading a file or string into the context of the interpreter at startup, taking the loaded language into account.
http://danielgtaylor.github.com/nesh/
Example:
# Load a string
nesh -c -e 'hello = (name) -> "Hello, #{name}"'
# Load a file
nesh -c -e hello.coffee
Then in the interpreter you can access the hello function. It's also a good idea to create an alias in bash:
alias cs='nesh -c'
cat foo.coffee - | coffee -i
tells cat to first output your code and then output stdin, which gives you what you're looking for, I think.
I ran into this problem as well. The approach provided by int3 doesn't solve it, because CoffeeScript is an indentation-based language: stdin passes the code line by line, but the REPL isn't smart enough to recognize this. Since you've posted this question, I suggest you create an issue (feature request) on CoffeeScript.

Which shell does a Perl system() call use?

I am using a system call to do some tasks
system('myframework mycode');
but it complains of missing environment variables.
Those environment variables are set at my bash shell (from where I run the Perl code).
What am I doing wrong?
Does the system call create a brand new shell (without environment variable settings)? How can I avoid that?
It's complicated. Perl does not necessarily invoke a shell. Perldoc says:
If there is only one scalar argument, the argument is checked for shell metacharacters, and if there are any, the entire argument is passed to the system's command shell for parsing (this is /bin/sh -c on Unix platforms, but varies on other platforms). If there are no shell metacharacters in the argument, it is split into words and passed directly to execvp, which is more efficient.
So it actually looks like your arguments would be passed right to execvp. Furthermore, whether the shell loads your .bashrc, .profile, or .bash_profile depends on whether the shell is interactive. Most likely it isn't, but you can check.
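One minimal way to check from Perl (a sketch, assuming a POSIX sh: $- holds the shell's option flags and contains an "i" only when the shell is interactive):
# Prints the option flags of the shell that system() invokes; no "i" means non-interactive.
system(q{echo $-});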
If you don't want to invoke a shell, call system with a list:
system 'mycommand', 'arg1', '...';
system qw{mycommand arg1 ...};
If you want a specific shell, call it explicitly:
system "/path/to/mysh -c 'mycommand arg1 ...'";
I don't think it's a question of shell choice, since environment variables are always inherited by subprocesses unless they are explicitly cleaned up.
Are you sure you have exported your variables?
This will work:
$ A=5 perl -e 'system(q{echo $A});'
5
$
This will work too:
$ export A=5
$ perl -e 'system(q{echo $A});'
5
$
This wouldn't:
$ A=5
$ perl -e 'system(q{echo $A});'
$
system() calls /bin/sh as the shell. If you are on a somewhat different box, like ARM, it would be good to read the man page for the exec family of calls for the default behavior. You can source your .profile if you need to, since system() takes a command:
system(" . myhome/me/.profile && /path/to/mycommand")
I struggled with this for two days. In my case, environment variables were correctly set under Linux but not Cygwin.
From mkb's answer I thought to check man perlrun, and it mentions a variable called PERL5SHELL (specific to the Win32 port). The following then solved the problem:
$ENV{PERL5SHELL} = "sh";
As is often the case, all I can really say is "it works for me", although the documentation does imply that this might be a sensible solution:
May be set to an alternative shell that perl must use internally for executing "backtick" commands or system().
If the shell used by perl does not implicitly inherit the environment variables then they will not be set for you.
I ran into environment variables not being set for my script in another post, where I needed the env variable $DBUS_SESSION_BUS_ADDRESS to be set, but it wasn't when I called the script as root. You can read through that, but in the end you can check whether %ENV contains the variables you need and, if not, add them.
From perlvar
%ENV
$ENV{expr}
The hash %ENV contains your current environment. Setting a value in %ENV changes the environment for any child processes you subsequently fork() off.
My problem was that I was running the script under sudo, and that didn't preserve all of my user's env variables. Are you running the script under sudo or as some other user, say www-data (Apache)?
Simple test:
user@host:~$ perl -e 'print $ENV{q/MY_ENV_VARIABLE/} . "\n"'
and if that doesn't work then you will need to add it to %ENV at the top of your script.
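A minimal sketch of that check at the top of a script (the fallback value is purely illustrative; use whatever is correct for your system):
# Make sure the child process will see the variable it needs.
unless (defined $ENV{MY_ENV_VARIABLE}) {
    $ENV{MY_ENV_VARIABLE} = 'some-sensible-default';  # hypothetical value
}
system('myframework mycode');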
Try system("echo \$SHELL"); on your system.