Interactions between browser REPL, browser, and HTTP server in ClojureScript - Emacs

I have been playing around with ClojureScriptOne - neat project - to try to understand how ClojureScript works. It is not clear to me how the three components (the browser, the browser REPL, and the HTTP server) interact.
I use Emacs as my development environment.
To understand ClojureScript (CS) better, I decided to try to port ClojureScriptOne (CS1) to use lein2 and to use nREPL as my REPL. The port did work, and I was able to get the CS1 environment going and interact with the browser. I prefer - for now - not to start an inferior Lisp process to work with the CS REPL, but instead to run the CS REPL within the Clojure REPL. The only drawback is that the CS REPL takes input from stdin, and Emacs prompts me to use stdin. To get around this, I am trying to replace some code in CS1 so that it starts the REPL from the piggieback library written by Chas Emerick.
In doing so I have reached the limits of my understanding of how these components interact. From what I can gather, the browser REPL is apparently a 'server' that listens on some port; all along I thought it was some sort of client that sends requests to the HTTP server and redirects the output to the browser (how??) after evaluating the result. Now I am not certain that is the case.
How do these components interact?
Sorry about the long explanation!!!
Sid

The browser REPL has a server side and a client side. The server side runs in your main Clojure process; the ClojureScript REPL itself is actually running in the bREPL server.
The bREPL client runs in ClojureScript in the browser and maintains a long-poll AJAX connection to the server. Whenever you type something into the REPL on the server, it is compiled to JavaScript and sent to the client via the long-poll mechanism, where it is evaluated in the client and the response is sent back.
The server's ClojureScript REPL runs "inside" your normal Clojure REPL - the exact mechanism depends on which REPL you're using. nREPL itself runs on a client-server architecture, so it's easy to see how things can get confusing.
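For concreteness, here is a minimal sketch of starting the server side by hand with the stock ClojureScript API (the port number is arbitrary, and CS1 wraps this in its own setup code, so treat this as illustrative):
(require '[cljs.repl :as repl]
         '[cljs.repl.browser :as browser])
;; Starts the bREPL *server*; the compiled client in the page
;; connects back to this port over long-polling AJAX.
(repl/repl (browser/repl-env :port 9000))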
Does that help at all?

Related

Bootstrapping a Staging Clojure App via REPL incl. fetching dependencies

Deploying Clojure/Java apps is hard, so yesterday I had an idea that I want to understand better. If I spin up a machine that has Clojure and boot-clj installed and run boot wait repl -s -H 0.0.0.0 on the machine (let's ignore auth for now), I should be able to connect to it from my dev box and trigger the retrieval of dependencies over the wire (which will then be cached on the machine), then wire over all the source code and eval until I hit a snag, right?
Let's pretend this is a good idea. Is it possible to do this, and what are the hurdles involved? Right now I'm waiting 5 minutes for CircleCI to package up an uberjar, only to have it fail because some Heroku token expired, when all I want to do is see my code running on a staging environment so I can wire over some more code and re-eval it.
The first thing that comes to mind is nREPL auth, which I see is not mentioned in any of the nREPL libraries. So let's say that's a higher-level networking concern and I'll do ACL via VPC.
Has anyone done this? Why is it a bad idea? Can you show your recipe for bootstrapping a Clojure app on a remote machine without the use of git or SSH (aside from initial REPL start)?
I should be able to connect to it from my dev box and trigger the retrieval of dependencies over the wire (which will then be cached on the machine), then wire over all the source code and eval until I hit a snag, right?
Is it possible to do this, and what are the hurdles involved?
Yes, but you need to specify -b (the address the server listens on) instead of -H (the host for the client to connect to):
$ boot wait repl -s -b 0.0.0.0 -p 3000
nREPL server started on port 3000 on host 0.0.0.0 - nrepl://0.0.0.0:3000
Then connect to it however you like, for example with lein repl:
$ lein repl :connect 127.0.0.1:3000
Now you can add a dependency in the REPL and it'll be downloaded on the server/host. In the client REPL:
boot.user=> (set-env! :dependencies #(into % '[[clj-time "0.14.0"]]))
And if you're watching the server console you'll see it downloading dependencies:
Retrieving clj-time-0.14.0.pom from https://repo.clojars.org/ (3k)
Retrieving joda-time-2.9.7.pom from https://repo1.maven.org/maven2/ (32k)
Retrieving clj-time-0.14.0.jar from https://repo.clojars.org/ (22k)
Retrieving joda-time-2.9.7.jar from https://repo1.maven.org/maven2/ (618k)
And then back on the client side:
boot.user=> (require '[clj-time.core :refer [now]])
nil
boot.user=> (now)
#object[org.joda.time.DateTime 0x1f68b743 "2018-03-15T12:16:29.342Z"]
Has anyone done this?
Yes, I've seen people host nREPLs from remote servers and connect to them to tinker with a running system.
Why is it a bad idea?
Generally speaking, we want reproducible builds and stable artifacts to give some degree of certainty about what code is being released. Doing this type of development on-the-fly on-the-server works against those goals, making it harder to determine what code is running where. I'd try to structure the system (and its testing) such that this degree of remote dynamism isn't required for normal development.
It sounds like your primary problem is a cumbersome link (CI/CD) in your dev/test/run feedback loop. I'd explore other options for optimizing that feedback loop before going to a dynamic, dependency-hot-loading nREPL, if you can avoid it. Of course, it's there if you need it!
Can you show your recipe for bootstrapping a Clojure app on a remote machine
Personally, I only ever deploy JARs to remote machines, and usually in a container. By that time I've already exercised/tested the system locally and have some confidence it'll behave as expected. If most of your system is untestable without deploying, that may be a sign you should break it into smaller, more testable pieces.

Emacs connects to system bus, but not to the session one

The system bus works fine
(dbus-init-bus :system)
returns nil, as it should.
However, connection to the session bus
(dbus-init-bus :session)
raises
(dbus-error "No connection to bus" :session)
qdbus on the command line works just fine with both buses. It even works from within eshell, if that is of any concern.
Neither emacs nor emacs --daemon connects.
Which version of Emacs are you using? Versions before 2012-05-25 only look for the DBUS_SESSION_BUS_ADDRESS environment variable, while more recent ones use a library function that, I think, also looks in ~/.dbus/session-bus.
Did you try this before running emacs:
eval $(dbus-launch)
export DBUS_SESSION_BUS_ADDRESS
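You can also check, from inside the running Emacs, whether it ever saw the variable (nil here means the session bus address never reached Emacs):
(getenv "DBUS_SESSION_BUS_ADDRESS")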

Emacs-client - what's the minimal installation?

Let's say I have an Emacs server running on some remote machine, with all the libraries and software necessary for running my application.
Then I want several clients to connect to that remote machine using emacsclient. Does each client need a full Emacs installation, or is there a minimal installation that is just enough to communicate with the remote server, where all the action is?
Could this (Emacs) client installation be so minimal that almost all software updates can be done on the server, without affecting the clients?
Is there a reason not to run the clients remotely as well, and simply use a local display? That way, pretty much all you need on the local machines is the ssh client and the X Window server.
ssh -X (user)@(server) "emacsclient -c"
Edits for the comments:
This command starts a new client to connect to an existing Emacs server (which it assumes is already running). You can use "emacsclient -a '' -c" to automatically start emacs --daemon if there is no existing server, but I don't know whether you want the connecting user to be starting the server.
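Putting the two together, the full invocation from a client machine would look like this (same placeholder style as above):
ssh -X (user)@(server) "emacsclient -a '' -c"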
In fact, I'm pretty unsure about the whole multi-user side of this to be honest, as I've never done that before. Authentication for the above is handled by ssh, but there may well be subsequent permission issues to deal with, or similar, when the server and the clients are started by different users.
This approach should be possible with Windows/Cygwin as client and/or server, as Cygwin provides Emacs, OpenSSH, and X.org packages. (I regularly use Windows/Cygwin as a local display for Emacs running on Linux.) It may be harder to set up, though, and any permissions issues are probably different when you're using Cygwin.
I'm less sure how this would work without Cygwin. NTEmacs certainly won't talk to X.org, so I imagine you'd be terminal based in that instance. (There are probably other options, but Cygwin sounds to me like the best-integrated approach to using all of Emacs, SSH, and X on Windows).
Lastly, I imagine you're probably getting your "Connection refused" error because localhost is not running an sshd daemon? I would say that configuring ssh is outside the scope of this question, but there are lots of resources online for that.
Depending on what you're trying to achieve, you may be able to use a combination of Emacs and Screen. By starting up Emacs from Screen on the remote machine and detaching from it, you can subsequently re-attach from a different machine that doesn't have Emacs. Again, whether this will work for you or not depends on what you're trying to do; however, for many Emacs use-cases, this can be very effective. If you're not familiar with using Screen in this manner, here is some reading material:
screen - The Terminal Multiplexer
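In outline, that workflow looks like this (the session name myemacs is arbitrary):
# on the remote machine: run terminal Emacs inside a named screen session
screen -S myemacs emacs -nw
# detach with C-a d; later, re-attach from any machine with an ssh client
ssh -t user@server screen -r myemacs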
I am not sure that would be possible. emacsclient uses tramp to connect to a remote server, and just by looking at the number of requires in the tramp elisp files (41), it seems very unlikely. You can try it yourself with the following:
zgrep -oE "\(require '[a-z-]+\)" *el.gz | sed -e 's%[a-z0-9-]\+\.el\.gz:%%g' | sort | uniq -cu | wc -l
I'm not an expert in emacsclient, but I don't think it was designed to do what you're looking for. I think the general use case is that emacsclient lets you redirect new requests to open a file with Emacs to a persistent Emacs process, to avoid what may be a bit of overhead in startup time. You seem to be looking for more of a true client/server relationship.
I think to meet the goal you're aiming at, you'll probably need to look a little outside Emacs - probably a project unto itself, 'emacsRemoteClient'. It boils down to one of two models: one, the file you want to edit would need to have its path sent over to the server machine so that Emacs could do some sort of remote tramp access and then spawn the X window locally (using the local X env, or requiring an X server on Windows); or two, transferring the file to some temp location on the server box and again spawning the remote X window locally (followed by syncing the changes between the temp and local file).
It would be cool to have something like that... but I suspect it'll involve a bit of work. Maybe we just need a version of Emacs written in JavaScript so it can live in the cloud or in your browser... oh, to have Emacs keybindings in the browser ;-)
-Steve

Where can I find application runtime errors using Nginx, Starman, Plack and Catalyst?

I have managed to successfully serve my Catalyst app on my development machine using Plack + Starman, using a daemon script I based on one I found in Dave Rolsky's Silki distribution.
I then set up nginx to reverse proxy to my Starman server, and aliased the static directory for nginx to serve. So far, so good. However, I am at a loss as to where my application's STDERR is supposed to be logging. It isn't reaching nginx (I suppose that makes sense), but I can't find much documentation on where Starman may be logging it - if anywhere. I did have a look at Plack's middleware modules but only saw options for access logs.
Can someone help me?
It's going nowhere. Catalyst::Log is sending data to STDERR, and the init script is sending STDERR to /dev/null.
You have a few basic choices:
Replace Catalyst::Log with something like Catalyst::Log::Log4perl, or simply a subclass of Catalyst::Log with an overridden _send_to_log - either one will let you send the logging output somewhere other than STDERR (a sketch of the subclass route follows this list).
Write some code that runs at the PSGI level to manage a logfile and reopen STDERR to it. I tried this; it wasn't very pleasant. Logfiles are harder than they look.
Use FastCGI instead, and you'll have an error stream that sends the log output back to the webserver. You can still use Plack via Plack::Handler::FCGI / Plack::Handler::FCGI::Engine (I'd recommend the latter, because the FCGI::Engine code is much newer and nicer than FCGI.pm).
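For the first option, a rough, untested sketch of the subclass route might look like this (the package name and log path are placeholders, and _send_to_log is a private hook, so check that it still exists in your Catalyst::Log version):
package MyApp::Log;
use strict;
use warnings;
use parent 'Catalyst::Log';

# Append messages to a file instead of the default STDERR.
sub _send_to_log {
    my ( $self, @message ) = @_;
    open my $fh, '>>', '/var/log/myapp/error.log'
        or return $self->SUPER::_send_to_log(@message);
    print {$fh} @message;
    close $fh;
}

1;
Then install it in your application class before setup() with something like __PACKAGE__->log( MyApp::Log->new );.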
I realise it is a long time since the question was asked, but I've just hit the same problem...
You actually have one more option than Hobbs mentioned.
It isn't quite the "init script" that is sending STDERR to /dev/null; it is Starman.
If you look at the source code for Starman, you'll discover that, if you give it the --background flag, it uses MooseX::Daemonize::Core.
And once you know that, its documentation will tell you that it deliberately closes STDERR, STDOUT, and STDIN, redirecting them to /dev/null, and that it takes the environment variables MX_DAEMON_STDERR and MX_DAEMON_STDOUT as the names of files to use instead.
So if you start your Catalyst server with MX_DAEMON_STDERR set to a file name, STDERR will go to that file.
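For example (the log paths and the .psgi name are placeholders, and this assumes your daemon script ultimately runs starman --background):
MX_DAEMON_STDERR=/var/log/myapp/starman.err \
MX_DAEMON_STDOUT=/var/log/myapp/starman.out \
starman --background myapp.psgi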
Nowadays Starman has an --error-log command-line option which allows you to redirect error messages to a file.
See the starman documentation:
--error-log
Specify the pathname of a file where the error log should be written. This enables you to still have access to the errors when using --daemonize.
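For example (the path and the .psgi name are placeholders):
starman --daemonize --error-log /var/log/myapp/error.log myapp.psgi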

Creating a simple command line interface (CLI) using a python server (TCP sock) and few scripts

I have a Linux box, and I want to be able to telnet into it (port 77557) and run a few required commands without giving access to the whole Linux box. So I have a server listening on that port that echoes the entered command on the screen (for now):
telnet 192.168.1.100 77557
Trying 192.168.1.100...
Connected to 192.168.1.100.
Escape character is '^]'.
hello
You typed: "hello"
NOW:
I want to create a lot of commands that each take some args and have error codes.
Anyone has done this before?
It would be great if I could have the server, upon initialization, go through each directory and execute its __init__.py file; in turn, the __init__.py file of each command would call into a main template lib API (e.g. RegisterMe()) and register itself with the server as a function callback. At least this is how I would do it in C/C++.
But I want the best Pythonic way of doing this.
/cmd/
/cmd/myreboot/
/cmd/myreboot/__init__.py
/cmd/mylist/
/cmd/mylist/__init__.py
... etc
In /cmd/myreboot/__init__.py:
from myMainCommand import RegisterMe
RegisterMe(name="reboot", args=Arglist, usage="Use this to reboot the box", desc="blabla")
So, repeating this creates a list of commands, and when you enter a command in the telnet session, the server goes through the list, matches the command, and passes the args to it; the command does the job and prints success or failure to stdout.
Thx
I would build this app using a combination of the cmd2 and RPyC modules.
Twisted's web server does something kinda-sorta like what you're looking to do. The general approach is to have a loadable Python file define an object of a specific name in the loaded module's global namespace. Upon loading the module, the server checks for this object, makes sure that it derives from the proper type (and hence has the needed interface), then uses it to handle the requested URL. In your case, the same approach would probably work pretty well.
Upon seeing a command name, import the module on the fly (the importlib module or the built-in __import__ function's documentation covers how to do this), look for an instance of "command", and then use it to parse your argument list, do the processing, and return the result code.
There likely wouldn't be much need to pre-process the directory on startup, though you certainly could do this if you prefer it to on-the-fly loading.
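To make that concrete, here is a minimal sketch of the on-the-fly approach. It assumes the commands live in a package named commands (renamed from the question's /cmd/ so it doesn't shadow Python's standard-library cmd module), and that each command module defines a module-level object named command with a run(args) method; all of these names are illustrative rather than from any particular library.
import importlib

def dispatch(line):
    """Parse one line from the telnet session and run the matching command."""
    parts = line.split()
    if not parts:
        return 1, "no command given"
    name, args = parts[0], parts[1:]
    try:
        # Import commands/<name>/__init__.py (or commands/<name>.py) on demand.
        module = importlib.import_module("commands." + name)
    except ImportError:
        return 1, "unknown command: " + name
    handler = getattr(module, "command", None)
    if handler is None:
        return 1, "no 'command' object in module: " + name
    # The command object parses its own args and returns (exit_code, output).
    return handler.run(args)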