No output from Erlang tracer

I have a module my_api with a handle/2 function that serves as the callback for cowboy's request handling.
When I make an HTTP request like this:
curl http://localhost/test
to my application, this function is called, and it works correctly because I get a response in the terminal.
In another terminal I attach to my application with remsh and try to trace calls to that function with the dbg module like this:
dbg:tracer().
dbg:tp(my_api, handle, 2, []).
dbg:p(all, c).
I expected that when I then make an HTTP request to my API from another terminal, my_api:handle/2 would be called and I'd see some information about the call (at least the function arguments) in the terminal attached to the node, but I get nothing there. What am I missing?

When you call dbg:tracer/0, a tracer of type process is started with a message handler that sends all trace messages to the user I/O device. Your remote shell's group leader is independent of the user I/O device, so your shell doesn't receive the output sent to user.
One approach that allows you to see trace output is to set up a trace port on the server and a trace client on a separate node. If you want traces from node foo, first remsh to it:
$ erl -sname bar -remsh foo
Then set up a trace port. Here, we set up a TCP/IP trace port on host port 50000 (use any port you like as long as it's available to you):
1> dbg:tracer(port, dbg:trace_port(ip, 50000)).
Next, set up the trace parameters as you did before:
2> dbg:tp(my_api, handle, 2, []).
{ok, ...}
3> dbg:p(all, c).
{ok, ...}
Then exit the remsh, and start a node without remsh:
$ erl -sname bar
On this node, start a TCP/IP trace client attached to host port 50000:
1> dbg:trace_client(ip, {"localhost", 50000}).
This shell will now receive dbg trace messages from foo. Here, we used "localhost" as the hostname since this node is running on the same host as the server node, but you'll need to use a different hostname if your client is running on a separate host.
Another approach, which is easier but relies on an undocumented function and so might break in the future, is to remsh to the node to be traced as you originally did but then use dbg:tracer/2 to send dbg output to your remote shell's group leader:
1> dbg:tracer(process, {fun dbg:dhandler/2, group_leader()}).
{ok, ...}
2> dbg:tp(my_api, handle, 2, []).
{ok, ...}
3> dbg:p(all, c).
{ok, ...}
Since this relies on the dbg:dhandler/2 function, which is exported but undocumented, there's no guarantee it will always work.
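If you prefer to avoid the undocumented function, the same idea works with a handler fun of your own passed to dbg:tracer/2. A minimal sketch (the formatting is cruder than dhandler's, and the 0 is just an unused accumulator; note the group leader is captured outside the fun so the closure refers to your remote shell, not the tracer process):
1> GL = group_leader().
2> dbg:tracer(process, {fun(Msg, _) -> io:format(GL, "~p~n", [Msg]), 0 end, 0}).
{ok, ...}
Then set the trace patterns and process flags as before.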
Lastly, since you're tracing all processes, please pay attention to the potential problems described in the dbg man page, and always be sure to call dbg:stop_clear(). when you're finished tracing.

boot-refresh inside cider-connect

After following the suggested steps at
https://github.com/samestep/boot-refresh
the intended hot-reloading behavior works when using cider-jack-in from inside a boot project.
However, in the following scenario it does not work. Consider this boot task:
(deftask dev2 []
  (comp
   (serve
    :handler 'app.core/handler
    :reload true
    :port 3000
    :httpkit true
    :nrepl {:port 4000})
   (watch) (refresh) ;; doesn't work with or without this line
   ))
The relevant part is the :nrepl keyword.
After this task is fired, one can connect to an nREPL server at port 4000, which has the advantage of giving access to the actual state of the application during development. (see this post for more details)
This can be done via cider-connect; however, there the hot-reloading is gone. The :reload true option might be confusing here: it only triggers a source reload when an HTTP request is made, whereas I'm looking for the more general approach of boot-refresh.
Note: the intention here is to have live-reloading behavior on the server side, similar to concepts known on the client side (figwheel or boot-reload).

wsadmin script timing out when executing against DMGR via SOAP

I'm attempting to start and stop an application on a single JVM via the wsadmin console since the Web UI for IBM BPM PS Adv. doesn't allow for that kind of operation. So, I have the following script:
https://gist.github.com/predatorian3/b8661c949617727630152cbe04f78d7e
and when I run it against the DMGR from the Cell Host, I receive the following errors.
[wasadmin@server01 ~]$ cat /usr/local/bin/Run_wsadmin.sh
#!/bin/bash
#
#
#
/opt/IBM/WebSphere/AppServer/bin/wsadmin.sh -lang jython -user serviceAccount -password password $*
[wasadmin@cessoapscrt00 ~]$ time Run_wsadmin.sh -f /opt/IBM/wsadmin/wsadmin_Restart_Application.py WPS00 CRT00WPS01 redirectResource_war
WASX7209I: Connected to process "dmgr" on node CRTDMGR using SOAP connector; The type of process is: DeploymentManager
WASX7303I: The following options are passed to the scripting environment and are available as arguments that are stored in the argv variable: "[WPS00, CRT00WPS01, redirectResource_war]"
WASX7017E: Exception received while running file "/opt/IBM/wsadmin/wsadmin_Restart_Application.py"; exception information: com.ibm.websphere.management.exception.ConnectorException
org.apache.soap.SOAPException: [SOAPException: faultCode=SOAP-ENV:Client; msg=Read timed out; targetException=java.net.SocketTimeoutException: Read timed out]
real 3m21.275s
user 0m17.411s
sys 0m0.796s
So I'm not specifying the connection type and am using the default, which is SOAP. However, having read about the other connection types, none of them seems any better, though I attribute that to vagueness in the IBM documentation. Is there an option to increase the timeout periods, or turn them off, or is there a better connection type?
Also, when running this directly in the wsadmin console, it seems to hang while gathering the application manager string:
[wasadmin@server01 ~]$ Run_wsadmin.sh
WASX7209I: Connected to process "dmgr" on node CRTDMGR using SOAP connector; The type of process is: DeploymentManager WASX7031I: For help, enter: "print Help.help()"
wsadmin>appManager = AdminControl.queryNames('cell=CRTCELL,node=WPS00,type=ApplicatoinManager,process=CRT00WPS01,*')
WASX7015E: Exception running command: "appManager = AdminControl.queryNames('cell=CRTCELL,node=WPS00,type=ApplicationManager,process=CRT00WPS01,*')"; exception information:
com.ibm.websphere.management.exception.ConnectorException
org.apache.soap.SOAPException: [SOAPException: faultCode=SOAP-ENV:Client; msg=Read timed out; targetException=java.net.SocketTimeoutException: Read timed out]
wsadmin>
You can increase the timeout value in {profile}/properties/soap.client.props:
com.ibm.SOAP.requestTimeout=180
If you want to turn the timeout off, set com.ibm.SOAP.requestTimeout=0; if you want a longer timeout, change the value 180 to something larger.
Also, regarding your query command: you have a typo in the MBean type. You wrote type=ApplicatoinManager; it should be type=ApplicationManager.
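With the type corrected, the query from the wsadmin prompt would be (cell, node, and process names taken from your own output):
wsadmin>appManager = AdminControl.queryNames('cell=CRTCELL,node=WPS00,type=ApplicationManager,process=CRT00WPS01,*')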
I had the same issue and wanted to override the timeout property temporarily. This worked like a champ; just make sure you follow the steps below exactly. I made some mistakes at first and the property wasn't picked up, but once I corrected them it worked.
Copy the soap.client.props file from the profile's properties directory and give it a new name, such as mysoap.client.props.
Edit mysoap.client.props and update the value of com.ibm.SOAP.requestTimeout as required.
Create a new Java properties file, soap_override.props, and enter the following line:
com.ibm.SOAP.ConfigURL=file:/mysoap.client.props
Pass soap_override.props into wsadmin using the -p option: wsadmin -p soap_override.props...
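Put together, the sequence might look like this (a sketch only; the profile path, home directory, and timeout value are assumptions you should adjust for your installation):
cp /opt/IBM/WebSphere/AppServer/profiles/Dmgr01/properties/soap.client.props /home/wasadmin/mysoap.client.props
# edit /home/wasadmin/mysoap.client.props so that it contains: com.ibm.SOAP.requestTimeout=600
echo "com.ibm.SOAP.ConfigURL=file:/home/wasadmin/mysoap.client.props" > /home/wasadmin/soap_override.props
/opt/IBM/WebSphere/AppServer/bin/wsadmin.sh -lang jython -p /home/wasadmin/soap_override.props -f /opt/IBM/wsadmin/wsadmin_Restart_Application.py WPS00 CRT00WPS01 redirectResource_war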
REFERENCE:
https://www.ibm.com/developerworks/community/blogs/timdp/entry/avoiding_wsadmin_request_timeouts_the_neat_way32?lang=en

Unable to accept connections on socket, when creating sockets on remote node via RPC in Erlang

I am struggling to identify the reason for gen_tcp:accept always returning an {error, closed} response.
Essentially, I have a supervisor that creates a listening socket:
gen_tcp:listen(8081, [binary, {packet, 0}, {active, false}, {reuseaddr, true}]),
This socket is then passed to a child, which is an implementation of the gen_server behaviour. The child then accepts connections on the socket.
accept(ListeningSocket, {ok, Socket}) ->
spawn(fun() -> loop(Socket) end),
accept(ListeningSocket);
accept(_ListeningSocket, {error, Error}) ->
io:format("Unable to listen on socket: ~p.~n", [Error]),
gen_server:call(self(), stop).
accept(ListeningSocket) ->
accept(ListeningSocket, gen_tcp:accept(ListeningSocket)).
loop(Socket) ->
case gen_tcp:recv(Socket, 0) of
{ok, Data} ->
io:format("~p~n", [Data]),
process_request(Data),
gen_tcp:send(Socket, Data),
loop(Socket);
{error, closed} -> ok
end.
I load the supervisor and gen_server BEAM binaries locally, and load them on another node (which runs on the same machine) via an RPC call to code:load_binary.
Next, I execute the supervisor via an RPC call, which in turn starts the server. In this scenario, gen_tcp:accept always returns {error, closed}.
If I run the supervisor and server while logged in to a node shell, the server can accept connections without issue. This includes using remsh to the same remote node that failed to accept connections when I had previously started the server there via RPC.
I seem to be able to replicate the issue by using the shell alone:
[Terminal 1]: erl -sname node -setcookie abc -distributed -noshell
[Terminal 2]: erl -sname rpc -setcookie abc:
net_adm:ping('node@verne').
{ok, ListeningSocket} = rpc:call('node@verne', gen_tcp, listen, [8081, [binary, {packet, 0}, {active, true}, {reuseaddr, true}]]).
rpc:call('node@verne', gen_tcp, accept, [ListeningSocket]).
The response to the final RPC is {error, closed}.
Could this be something to do with socket/port ownership?
In case it is of help, there are no clients waiting to connect, and I don't set timeouts anywhere.
Each rpc:call starts a new process on the target node to handle the request. In your final example, your first call creates a listen socket within such a process, and when that process dies at the end of the rpc call, the socket is closed. Your second rpc call to attempt an accept therefore fails due to the already-closed listen socket.
Your design seems unusual in several ways. For example, it's not normal to have supervisors opening sockets. You also say the child is a gen_server yet you show a manual recv loop, which if run within a gen_server would block it. You might instead explain what you're trying to accomplish and request help on coming up with a design to meet your goals.
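If you really do need to start a listener via rpc, one way around the ownership problem is to have the rpc call merely spawn a long-lived process on the target node that itself opens and owns the listen socket, so the socket survives when the transient rpc worker exits. A minimal sketch (the module and function names here are made up, and the module must be loaded on the remote node, e.g. via code:load_binary as you already do):
%% listener_owner.erl -- hypothetical module name
-module(listener_owner).
-export([start/1, init/1]).

%% Called via rpc:call(Node, listener_owner, start, [8081]).
%% Only the spawn happens inside the rpc worker; the spawned process
%% owns the listen socket and keeps running after the rpc call returns.
start(Port) ->
    {ok, proc_lib:spawn(?MODULE, init, [Port])}.

init(Port) ->
    {ok, LSock} = gen_tcp:listen(Port, [binary, {packet, 0},
                                        {active, false}, {reuseaddr, true}]),
    accept_loop(LSock).

accept_loop(LSock) ->
    case gen_tcp:accept(LSock) of
        {ok, Sock} ->
            Pid = spawn(fun() -> echo_loop(Sock) end),
            ok = gen_tcp:controlling_process(Sock, Pid),
            accept_loop(LSock);
        {error, Reason} ->
            io:format("accept failed: ~p~n", [Reason])
    end.

echo_loop(Sock) ->
    case gen_tcp:recv(Sock, 0) of
        {ok, Data} ->
            gen_tcp:send(Sock, Data),
            echo_loop(Sock);
        {error, closed} ->
            ok
    end.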

Akka IO(Tcp) get reason of CommandFailed

I have the following example of an Actor using IO(Tcp):
https://gist.github.com/flirtomania/a36c50bd5989efb69a5f
For the sake of experiment I ran it twice, so the second instance was also trying to bind to port 803. Obviously I got an error.
Question: how can I get the reason for the CommandFailed? In application.conf I have enabled slf4j and debug-level logging, and then I got this error in my logs:
DEBUG akka.io.TcpListener - Bind failed for TCP channel on endpoint [localhost/127.0.0.1:803]: java.net.BindException: Address already in use: bind
But why is that only at debug level? I don't want to enable debug logging for the whole ActorSystem; I want to get the reason from the CommandFailed event itself (e.g. a java.lang.Exception instance on which I could call e.printStackTrace()).
Something like:
case c @ CommandFailed => val e: Exception = c.getReason()
Maybe it's not the Akka way? How do I get diagnostic info then?
Here's what you can do: find the PID of the process that is still holding the port, then kill it.
On a Mac:
lsof -i :portNumber
then
kill -9 PidNumber
As I understand it, you have two questions.
If you run the same code twice simultaneously, both actors try to bind to the same port (in your case, 803), which is not possible unless the one that is already bound unbinds and closes the connection so that the other one can bind.
You can import akka.event.Logging and put val log = Logging(context.system, this) at the beginning of your actors, which will log all the activities of your actors.
It also shows the name of the actor, the corresponding actor system, and the host and port (if you are using akka-cluster).
Hope that helps.
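For illustration, a minimal sketch of a binding actor with ActorLogging wired in (the class name, port, and log messages are illustrative, not taken from your gist). Note that CommandFailed hands you back the original command (here the Bind), not the underlying exception, which is why the BindException only shows up in the akka.io debug logs:
import java.net.InetSocketAddress
import akka.actor.{Actor, ActorLogging}
import akka.io.{IO, Tcp}

class Listener extends Actor with ActorLogging {
  import Tcp._
  import context.system

  // ask the TCP manager to bind on behalf of this actor
  IO(Tcp) ! Bind(self, new InetSocketAddress("localhost", 803))

  def receive = {
    case Bound(localAddress) =>
      log.info("Bound to {}", localAddress)
    case CommandFailed(b: Bind) =>
      // only the failed command is available here, so log it and give up
      log.error("Could not bind to {}", b.localAddress)
      context.stop(self)
  }
}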

Lua Socket cannot be properly stopped by Ctrl+C

I have a standalone Lua script that uses LuaSocket to connect to a server via TCP/IP. It uses the receive call to read data from that server. It works; however, when I try to stop it with Ctrl+C, one of two scenarios happens:
- If there is currently no traffic and receive is waiting, Ctrl+C has no effect. The program keeps running and has to be terminated with kill.
- If there is traffic, the program exits with the printout below, with the socket still open and the server not accepting another connection:
lua: luaSocketTest.lua:15: interrupted!
stack traceback:
[C]: in function 'receive'
luaSocketTest.lua:15: in function 'doWork'
luaSocketTest.lua:22: in main chunk
[C]: ?
I tried using pcall to solve the second scenario, without success; pcall doesn't return, and the process still throws the error.
A sample of my program is below:
local socket = require ("socket")

local ip = "localhost"
local port = 5003

function doWork ()
    print ("Starting socket: "..ip..":"..port)
    client = assert(socket.connect(ip, port))
    print ("Socket Accepted")
    client:send("TEST TEST")
    while 1 do
        local byte, err = client:receive (1)
        if not err then
            print (byte)
        end
    end
end

while 1 do
    local status = pcall(doWork())
    print ("EXITED PCALL WITH STATUS: "..tostring(status))
    if not status then client:close() end
end
This would be quite a change, but you could employ lua-ev. It allows you to add signal handlers, which is exactly what is required to react to Ctrl+C.
local socket = require'socket'
local ev = require'ev'      -- require lua-ev before using it below

local ip, port = "localhost", 5003   -- as in the question

-- connect and send in blocking mode
local client = socket.connect(ip, port)
client:send('TEST TEST')

-- make the client non-blocking
client:settimeout(0)

-- read whatever is available whenever the socket becomes readable
ev.IO.new(function()
    repeat
        local data, err, part = client:receive(10000)
        print('received', data or part)
    until err
end, client:getfd(), ev.READ):start(ev.Loop.default)

-- react to Ctrl+C via a signal watcher
local SIGINT = 2
ev.Signal.new(function()
    print('SIGINT received')
end, SIGINT):start(ev.Loop.default)

ev.Loop.default:loop()