boot-refresh inside cider-connect - emacs

After following the suggested steps at
https://github.com/samestep/boot-refresh
the intended hot-reloading behavior works when using cider-jack-in from inside a boot project.
However, in the following scenario it does not work. Consider this boot task:
(deftask dev2 []
  (comp
    (serve
      :handler 'app.core/handler
      :reload true
      :port 3000
      :httpkit true
      :nrepl {:port 4000})
    (watch) (refresh))) ;; doesn't work with or without this line
The relevant part is the :nrepl keyword.
After this task is fired, one can connect to an nREPL server at port 4000, which has the advantage of accessing the actual state of the application during development (see this post for more details).
This can be done via cider-connect. However, there the hot-reloading is gone. The :reload true option might be confusing here: it only triggers a source reload when an HTTP request is made. But I'm looking for the more general approach of boot-refresh.
Note: The intention here is to have live-reloading behavior on the server side, similar to concepts known on the client side (figwheel or boot-reload).
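For reference, boot-refresh appears to wrap clojure.tools.namespace, so the reload I'm after can be triggered by hand (a sketch, assuming clojure.tools.namespace is on the classpath) from the REPL connected to port 4000:
(require '[clojure.tools.namespace.repl :as tn])
;; reloads every namespace whose source file changed since the last refresh
(tn/refresh)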

Related

No output from erlang tracer

I've got a module my_api with a function handle/2 which is a callback for cowboy's request handling.
So when I make an HTTP request like this:
curl http://localhost/test
to my application, this function is called and it works correctly, because I get a response in the terminal.
But in another terminal I attach to my application with remsh and try to trace calls to that function with the dbg module like this:
dbg:tracer().
dbg:tp(my_api, handle, 2, []).
dbg:p(all, c).
I expected that after making an HTTP request to my API from another terminal, the function my_api:handle/2 would be called and I would get some info about this call (at least the function arguments) in the terminal attached to the node, but I get nothing there. What am I missing?
When you call dbg:tracer/0, a tracer of type process is started with a message handler that sends all trace messages to the user I/O device. Your remote shell's group leader is independent of the user I/O device, so your shell doesn't receive the output sent to user.
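You can see this directly (a quick illustration, not part of the fix): from the remsh, output written explicitly to user appears on the server node's terminal rather than in your shell:
1> io:format(user, "where did this go?~n", []).
ok
%% "where did this go?" is printed on the server node's console, not here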
One approach to allow you to see trace output is to set up a trace port on the server and a trace client in a separate node. If you want traces from node foo, first remsh to it:
$ erl -sname bar -remsh foo
Then set up a trace port. Here, we set up a TCP/IP trace port on host port 50000 (use any port you like as long as it's available to you):
1> dbg:tracer(port, dbg:trace_port(ip, 50000)).
Next, set up the trace parameters as you did before:
2> dbg:tp(my_api, handle, 2, []).
{ok, ...}
3> dbg:p(all, c).
{ok, ...}
Then exit the remsh, and start a node without remsh:
$ erl -sname bar
On this node, start a TCP/IP trace client attached to host port 50000:
1> dbg:trace_client(ip, {"localhost", 50000}).
This shell will now receive dbg trace messages from foo. Here, we used "localhost" as the hostname since this node is running on the same host as the server node, but you'll need to use a different hostname if your client is running on a separate host.
Another approach, which is easier but relies on an undocumented function and so might break in the future, is to remsh to the node to be traced as you originally did but then use dbg:tracer/2 to send dbg output to your remote shell's group leader:
1> dbg:tracer(process, {fun dbg:dhandler/2, group_leader()}).
{ok, ...}
2> dbg:tp(my_api, handle, 2, []).
{ok, ...}
3> dbg:p(all, c).
{ok, ...}
Since this relies on the dbg:dhandler/2 function, which is exported but undocumented, there's no guarantee it will always work.
Lastly, since you're tracing all processes, pay attention to the potential problems described in the dbg man page, and always be sure to call dbg:stop_clear() when you're finished tracing.
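For example, run on the traced node when you're done:
4> dbg:stop_clear().
ok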

JPAM Configuration for Apache Drill

I'm trying to configure PLAIN authentication based on JPAM 1.1 and am going crazy, since it doesn't work after checking my syntax and settings countless times. When I start Drill with cluster-id and zk-connect only, it works, but with both options for PLAIN authentication it fails. Since I started with pam4j and tried JPAM later on, I kept JPAM for this post. In general I don't have any preference; I just want to get it done. I'm running Drill on CentOS in embedded mode.
I've done everything required by the official documentation:
I downloaded JPAM 1.1, uncompressed it, and put libjpam.so into a specific folder (/opt/pamfile/)
I've edited drill-env.sh with:
export DRILLBIT_JAVA_OPTS="-Djava.library.path=/opt/pamfile/"
I edited drill-override.conf with:
drill.exec: {
  cluster-id: "drillbits1",
  zk.connect: "local",
  impersonation: {
    enabled: true,
    max_chained_user_hops: 3
  },
  security: {
    auth.mechanisms: ["PLAIN"],
  },
  security.user.auth: {
    enabled: true,
    packages += "org.apache.drill.exec.rpc.user.security",
    impl: "pam",
    pam_profiles: [ "sudo", "login" ]
  }
}
It throws the following error:
Error: Failure in starting embedded Drillbit: org.apache.drill.exec.exception.DrillbitStartupException: Problem in finding the native library of JPAM (Pluggable Authenticator Module API). Make sure to set Drillbit JVM option 'java.library.path' to point to the directory where the native JPAM exists.:no jpam in java.library.path (state=,code=0)
I've run that *.sh file by hand to make sure that the necessary path is exported, since I don't know if Drill expects that. The path to libjpam should now be known. I've started SQLLine with sudo et cetera. No chance. The documentation doesn't help; I don't get why it's so bad and, IMO, incomplete. Sadly there is zero explanation of how to troubleshoot or configure basic user authentication in detail.
Or do I have to do something that isn't documented but expected? Are there any prerequisites concerning PLAIN authentication which aren't mentioned by Apache Drill itself?
Try changing:
export DRILLBIT_JAVA_OPTS="-Djava.library.path=/opt/pamfile/"
to:
export DRILL_JAVA_OPTS="$DRILL_JAVA_OPTS -Djava.library.path=/opt/pamfile/"
It works for me. (Presumably because in embedded mode Drill is launched through SQLLine rather than as a standalone Drillbit service, so DRILLBIT_JAVA_OPTS is never applied, while DRILL_JAVA_OPTS is picked up by all Drill processes.)
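As a quick sanity check (a sketch; adjust the path to your Drill installation), you can confirm the option actually lands in the environment Drill starts from:
$ source /opt/drill/conf/drill-env.sh
$ echo $DRILL_JAVA_OPTS
... -Djava.library.path=/opt/pamfile/ ...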

Debugging a crashing language server

I apologize if I'm a bit low on details here, but the main issue is actually trying to find the problem with my code. I'm updating an older extension of my own that was based on the Language Server example (https://code.visualstudio.com/docs/extensions/example-language-server). I've run into an issue where, when I run the client part of my code using F5 and the debug window fires, I get:
The CSSLint Language Client server crashed 5 times in the last 3 minutes. The server will not be restarted.
Ok... so... here's the thing. The problems view in my extension client code shows nothing. DevTools for that Code window shows nothing.
The problems view for my server code shows nothing. DevTools, ditto.
For the Extension Developer Host instance, DevTools does show this:
messageService.ts:126 The CSSLint Language Client server crashed 5 times in the last 3 minutes. The server will not be restarted.e.doShow # messageService.ts:126
But I can't dig into the details to find a bug. So the question is - assuming that my server code is failing, where exactly would the errors be available?
Here is what I usually do to track down server crashes (I assume your server is written in JavaScript / TypeScript).
Use the following server options:
let serverModule = "path to your server";
let debugOptions = { execArgv: ["--nolazy", "--debug=6009"] };
let serverOptions = {
    run: { module: serverModule, transport: TransportKind.ipc },
    debug: { module: serverModule, transport: TransportKind.ipc, options: debugOptions }
};
The key here is to use TransportKind.ipc. Errors that happen in the server and are printed to stdio will then show up in the output channel associated with your server (the name of the output channel is the name passed to the LanguageClient).
If you want to debug the server startup / initialize sequence you can change the debugOptions to:
let debugOptions = { execArgv: ["--nolazy", "--debug-brk=6009"] };
If the extension is started in debug mode (for example, launched from VS Code using F5), the LanguageClient automatically starts the server in debug mode. If the extension is started normally (for example, as a real extension in VS Code), the server is started normally as well.
To make this all work you need a recent version of the LSP node npm modules for both server and client (e.g. 2.6.x).
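For context, here is a minimal sketch of wiring those options into a client; the client name and the documentSelector are assumptions, not taken from the question:
import { LanguageClient, TransportKind } from 'vscode-languageclient';

let serverModule = "path to your server";
let debugOptions = { execArgv: ["--nolazy", "--debug=6009"] };
let serverOptions = {
    run: { module: serverModule, transport: TransportKind.ipc },
    debug: { module: serverModule, transport: TransportKind.ipc, options: debugOptions }
};

// The client name doubles as the name of the output channel where
// server stdio output (including crash stack traces) will appear.
let client = new LanguageClient('CSSLint Language Client', serverOptions, {
    documentSelector: ['css']
});
client.start();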

Inspect state atom which gets updated by a ring handler

Consider the following scenario:
A minimal boot task starts up a http server:
(boot (serve :handler 'myapp.server/handler
             :port 3000))
(This might be launched in several ways; here it's OK to just run it from an nREPL session, e.g. one started by boot repl from the terminal.)
The handler is represented by the function handler inside the namespace myapp.server. The corresponding file looks like this:
(ns myapp.server
  (:require ...))

(defonce server-state (atom {:nr 0}))

(defn handler [req]
  (prn (swap! server-state update :nr inc))
  {:body "Answer.\n"})
This works: every time the address localhost:3000 is visited, the atom is updated and the new value is printed to stdout inside the REPL.
How can the atom be inspected at any time?
boot.user=> @myapp.server/server-state
yields an error (...no such var...).
When trying the same thing from within an Emacs CIDER nREPL connection, this attempt always shows the initial value of the atom: {:nr 0}
UPDATE
Here are the exact steps I take when using Emacs/CIDER:
cd projectdir
start emacs
cider-jack-in
(boot (dev))
C-c C-c (in order to get a prompt again)
Then testing with curl: getting responses, and inside Emacs the updated atom is logged: {:nr 1} .. {:nr 2} ..
Then, in the REPL: (require 'myapp.server), takes a while: nil.
Finally: @myapp.server/server-state --> however: {:nr 0}
Your (...no such var...) error probably happens because you haven't required the myapp.server namespace. Attempts to see updates happening to your atom in the CIDER REPL probably fail because your Ring app runs in a different JVM process than your REPL, so the REPL sees only the initial value while updates from the Ring handler happen in the other JVM; alternatively, the handler may be enclosed in a separate classloader, as boot can isolate it in a pod.
You have two options:
start your Ring app with a REPL server enabled and connect to it from another process (for example, by using the socket server REPL and connecting to it using telnet)
start your REPL first and then start your Ring app from it, so you have access to all loaded namespaces.
With the first approach you probably need to use boot-http's :nrepl option. When you configure it to start an nREPL server, you can connect to it using boot repl -c (optionally providing the same coordinates as in the :nrepl options) or directly from CIDER using cider-connect.
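A minimal sketch of that flow with the names from the question, assuming the serve task was given an :nrepl option (e.g. :nrepl {:port 4000}); the printed value is illustrative:
;; after cider-connect (or boot repl -c) to the app's own nREPL server
boot.user=> (require 'myapp.server)
nil
boot.user=> @myapp.server/server-state
{:nr 2}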

Sails.js HOWTO: implement logging for HTTP requests

The poor default logging of Sails.js doesn't show HTTP request logs (even on verbose). What is the best way to implement HTTP request logging to the console so I can see if I am getting malformed requests? Express.js's default logging would be enough.
I would prefer a Sails.js configuration way of doing it rather than a change-the-source-code approach, if possible.
Has anyone had experience with this? My Google searches seem oddly lacking in information.
Running Sails v0.9.8 on Mac OS X.
There's no Sails config option to log every request, but you can add a quick logging route at the top of config/routes.js that should do the trick:
// config/routes.js
'/*': function (req, res, next) {
  sails.log.verbose(req.method, req.url);
  next();
}
Maybe too late, but for future reference: I'm using Sails 0.11 and you can configure this in the middleware, in the config/http.js file.
Add this function (in fact it ships there as an example):
// Logs each request to the console
requestLogger: function (req, res, next) {
  console.log("Requested :: ", req.method, req.url);
  return next();
},
And add it to the order array:
order: [
  'startRequestTimer',
  'cookieParser',
  'session',
  'requestLogger', // Just here
  'bodyParser',
  'handleBodyParserError',
  'compress',
  'methodOverride',
  'poweredBy',
  '$custom',
  'router',
  'www',
  'favicon',
  '404',
  '500'
]
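With this in place, each incoming request is printed to the console before the body parser runs, e.g.:
Requested ::  GET /test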
I forked the sails-hook-requestlogger module to write all the request logs (access logs) to file.
sails-hook-requestlogger-file
All you have to do is npm install sails-hook-requestlogger-file and you are good to go!
Usage
Just lift your app as normal and all your server requests will be logged, with useful information such as response time, straight to your console. By default it is activated in your dev environment but deactivated in production.
Configuration
By default, configuration lives in sails.config.requestloggerfile
You can create config/requestlogger.js and override these defaults:
Parameter      Type         Details
format         ((string))   Defines which logging format to use. Defaults to dev.
logLocation    ((string))   Defines where to log: console or file. Defaults to console.
fileLocation   ((string))   Location of the file relative to the project root (if file is specified in logLocation). Has no effect if console is specified in logLocation.
inDevelopment  ((boolean))  Whether or not to log requests in the development environment. Defaults to true.
inProduction   ((boolean))  Whether or not to log requests in the production environment. Defaults to false.
Example config/requestlogger.js file:
module.exports.requestloggerfile = {
  // see https://github.com/expressjs/morgan#predefined-formats for more formats
  format: ':remote-addr - [:date[clf]] ":method :url" :status :response-time ms ":user-agent"',
  logLocation: 'file',
  fileLocation: '/var/log/myapp/access.log',
  inDevelopment: true,
  inProduction: true
};
Hope it helps someone :)
I found this matched my needs - it uses the Morgan module for Express and hooks it all up for you: https://www.npmjs.com/package/sails-hook-requestlogger
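If you'd rather wire morgan up yourself instead of using a hook, here is a minimal sketch in config/http.js (Sails 0.10+; the middleware name morganLogger is an arbitrary choice):
// config/http.js
var morgan = require('morgan');

module.exports.http = {
  middleware: {
    // any name works, as long as it also appears in the order array
    morganLogger: morgan('dev'),
    order: [
      'morganLogger',
      'cookieParser',
      'session',
      'bodyParser',
      'handleBodyParserError',
      'compress',
      'methodOverride',
      'router',
      'www',
      'favicon',
      '404',
      '500'
    ]
  }
};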