Akka IO(Tcp) get reason of CommandFailed - scala

I have the following example of an Actor using IO(Tcp):
https://gist.github.com/flirtomania/a36c50bd5989efb69a5f
For the sake of experiment I ran it twice, so the second instance was trying to bind to port 803 while the first still held it. Obviously I got an error.
Question: how can I get the reason for the CommandFailed message? In application.conf I have enabled slf4j and debug-level logging, and I found this error in my logs:
DEBUG akka.io.TcpListener - Bind failed for TCP channel on endpoint [localhost/127.0.0.1:803]: java.net.BindException: Address already in use: bind
But why is that logged only at debug level? I do not want to make the whole ActorSystem log its events; I want to get the reason for the CommandFailed event (say, a java.lang.Exception instance on which I could call e.printStackTrace()).
Something like:
case c: CommandFailed => val e: Exception = c.getReason()
Maybe that's not the Akka way? How can I get diagnostic info then?

Here's what you can do: find the PID of the process that is still holding the port, then kill it.
On a Mac:
lsof -i :portNumber
then
kill -9 PidNumber

As I understand it, you have two questions.
If you run the same code twice simultaneously, both actors try to bind to the same port (in your case, 803), which is not possible unless the one already bound unbinds and closes the connection so that the other can bind.
You can import akka.event.Logging and put val log = Logging(context.system, this) at the beginning of your actors, which will let you log all of your actors' activity.
It also shows the name of the actor, the corresponding actor system, and the host and port (if you are using akka-cluster).
Hope that helps.
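Regarding getting at the failure reason itself: in this Akka version, CommandFailed carries only the command that failed, not the underlying exception, so the usual pattern is to match on the failed command and report its endpoint yourself. A minimal sketch (port 803 taken from the question, everything else illustrative):

import java.net.InetSocketAddress
import akka.actor.{Actor, ActorLogging}
import akka.io.{IO, Tcp}

class Server extends Actor with ActorLogging {
  import Tcp._
  import context.system

  // Ask the TCP manager to bind; it replies with either Bound or CommandFailed.
  IO(Tcp) ! Bind(self, new InetSocketAddress("localhost", 803))

  def receive = {
    case Bound(localAddress) =>
      log.info("Bound to {}", localAddress)

    case CommandFailed(b: Bind) =>
      // The failed command tells you *what* failed (the Bind and its endpoint)
      // but not *why*; the BindException itself is logged only at debug level
      // by akka.io.TcpListener.
      log.error("Could not bind to {}", b.localAddress)
      context.stop(self)
  }
}

Since you already route Akka logging through slf4j, you can surface the underlying BindException without enabling debug output everywhere: set akka.loglevel = "DEBUG" in application.conf (so debug events reach the event stream at all), keep the root logger at INFO in your logback configuration, and enable debug only for the relevant loggers with <logger name="akka.io" level="DEBUG"/>. Newer Akka releases have also added a failure cause to CommandFailed, so if upgrading is an option, check the API docs of your target version.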

Related

No output from erlang tracer

I've got a module my_api with a handle/2 function that is the callback for cowboy's request handling.
So when I make an HTTP request like this:
curl http://localhost/test
this function is called, and it works correctly because I get a response in the terminal.
But in another terminal I attach to my application with remsh and try to trace calls to that function with the dbg module like this:
dbg:tracer().
dbg:tp(my_api, handle, 2, []).
dbg:p(all, c).
I expected that after making an HTTP request to my API from another terminal, the function my_api:handle/2 would be called and I would get some info about the call (at least the function arguments) in the terminal attached to the node, but I get nothing there. What am I missing?
When you call dbg:tracer/0, a tracer of type process is started with a message handler that sends all trace messages to the user I/O device. Your remote shell's group leader is independent of the user I/O device, so your shell doesn't receive the output sent to user.
One approach to allow you to see trace output is to set up a trace port on the server and a trace client in a separate node. If you want traces from node foo, first remsh to it:
$ erl -sname bar -remsh foo
Then set up a trace port. Here, we set up a TCP/IP trace port on host port 50000 (use any port you like as long as it's available to you):
1> dbg:tracer(port, dbg:trace_port(ip, 50000)).
Next, set up the trace parameters as you did before:
2> dbg:tp(my_api, handle, 2, []).
{ok, ...}
3> dbg:p(all, c).
{ok, ...}
Then exit the remsh, and start a node without remsh:
$ erl -sname bar
On this node, start a TCP/IP trace client attached to host port 50000:
1> dbg:trace_client(ip, {"localhost", 50000}).
This shell will now receive dbg trace messages from foo. Here, we used "localhost" as the hostname since this node is running on the same host as the server node, but you'll need to use a different hostname if your client is running on a separate host.
Another approach, which is easier but relies on an undocumented function and so might break in the future, is to remsh to the node to be traced as you originally did, but then use dbg:tracer/2 to send dbg output to your remote shell's group leader:
1> dbg:tracer(process, {fun dbg:dhandler/2, group_leader()}).
{ok, ...}
2> dbg:tp(my_api, handle, 2, []).
{ok, ...}
3> dbg:p(all, c).
{ok, ...}
Since this relies on the dbg:dhandler/2 function, which is exported but undocumented, there's no guarantee it will always work.
Lastly, since you're tracing all processes, please pay attention to the potential problems described in the dbg man page, and always be sure to call dbg:stop_clear(). when you're finished tracing.

Remote Logging using Log4j2

So I have this task to log activities to a file, but it has to be done remotely on the server side: remote logging.
NOTE: remote logging has to use the latest version of Log4j2 (2.10).
My task was simple:
Send logging info to a port.
Log the info from the port to a file.
My Discoveries
A Socket Appender exists that helps send info to a port. That's it; you don't need to write any client-side code.
Socket appender configuration in log4j2.properties:
appender.socket.type = Socket
appender.socket.name= Socket_Appender
appender.socket.host = "IP address"
appender.socket.port = 8101
appender.socket.layout.type = SerializedLayout
appender.socket.connectTimeoutMillis = 2000
appender.socket.reconnectionDelayMillis = 1000
appender.socket.protocol = TCP
Adapted from here, though that is a Log4j 1.x example.
I found out that before Log4j 2.6, to listen on a port we used TcpSocketServer, which started a server using a LogEventBridge (this helped me reach that conclusion). That class lived in core.net.server, which is no longer available, so I assume it is not used anymore; the closest remaining class is TcpSocketManager. Other links that helped: How to use SocketAppender?
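For completeness: the appender only fires if it is referenced from a logger in the same properties file. If that part is missing, the wiring looks something like this (appender name matches the snippet above, level illustrative):

rootLogger.level = debug
rootLogger.appenderRef.socket.ref = Socket_Appender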
Then I tried this:
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

public class MyApp {
    public static final Logger LOG = LogManager.getLogger(MyApp.class.getName());

    public static void main(String[] args) {
        LOG.debug("DEBUG LEVEL");
    }
}
and got the following error:
main ERROR TcpSocketManager (TCP:IPAddress:8111) caught exception
and will continue: java.net.SocketTimeoutException: connect timed out
I know this setup can work: earlier I had the appender writing to a socket (with nobody listening, hence the timeout), but somehow I messed up big time and the code changed.
I need help with how to go ahead. Thank you in advance.
The socket server to remotely receive log events has been moved to a separate repository: https://github.com/apache/logging-log4j-tools
This still needs to be released.
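Until then, you can at least verify that the SocketAppender connects by pointing it at a throwaway listener such as netcat (port taken from your appender configuration; with SerializedLayout the bytes printed are unreadable, but a successful connection means the SocketTimeoutException goes away):

$ nc -lk 8101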

wsadmin script timing out when executing against DMGR via SOAP

I'm attempting to start and stop an application on a single JVM via the wsadmin console since the Web UI for IBM BPM PS Adv. doesn't allow for that kind of operation. So, I have the following script:
https://gist.github.com/predatorian3/b8661c949617727630152cbe04f78d7e
and when I run it against the DMGR from the cell host, I receive the following errors:
[wasadmin#server01 ~]$ cat /usr/local/bin/Run_wsadmin.sh
#!/bin/bash
#
#
#
/opt/IBM/WebSphere/AppServer/bin/wsadmin.sh -lang jython -user serviceAccount -password password $*
[wasadmin#cessoapscrt00 ~]$ time Run_wsadmin.sh -f /opt/IBM/wsadmin/wsadmin_Restart_Application.py WPS00 CRT00WPS01 redirectResource_war
WASX7209I: Connected to process "dmgr" on node CRTDMGR using SOAP connector; The type of process is: DeploymentManager
WASX7303I: The following options are passed to the scripting environment and are available as arguments that are stored in the argv variable: "[WPS00, CRT00WPS01, redirectResource_war]"
WASX7017E: Exception received while running file "/opt/IBM/wsadmin/wsadmin_Restart_Application.py"; exception information: com.ibm.websphere.management.exception.ConnectorException
org.apache.soap.SOAPException: [SOAPException: faultCode=SOAP-ENV:Client; msg=Read timed out; targetException=java.net.SocketTimeoutException: Read timed out]
real 3m21.275s
user 0m17.411s
sys 0m0.796s
So I'm not specifying the connection type and am using the default, which is SOAP. Upon reading about the other connection types, none of them seems any better, though I attribute that to the vagueness of the IBM documentation. Is there an option to increase the timeout periods, or to turn them off, or is there a better connection type?
Also, running this directly in the wsadmin console, it seems to hang while gathering the application manager string:
[wasadmin#server01 ~]$ Run_wsadmin.sh
WASX7209I: Connected to process "dmgr" on node CRTDMGR using SOAP connector; The type of process is: DeploymentManager WASX7031I: For help, enter: "print Help.help()"
wsadmin>appManager = AdminControl.queryNames('cell=CRTCELL,node=WPS00,type=ApplicatoinManager,process=CRT00WPS01,*')
WASX7015E: Exception running command: "appManager = AdminControl.queryNames('cell=CRTCELL,node=WPS00,type=ApplicationManager,process=CRT00WPS01,*')"; exception information:
com.ibm.websphere.management.exception.ConnectorException
org.apache.soap.SOAPException: [SOAPException: faultCode=SOAP-ENV:Client; msg=Read timed out; targetException=java.net.SocketTimeoutException: Read timed out]
wsadmin>
You can increase the timeout value in {profile}/properties/soap.client.props:
com.ibm.SOAP.requestTimeout=180
If you want to turn the timeout off, set com.ibm.SOAP.requestTimeout=0; if you want a longer timeout, change 180 to something larger.
Also, about your query command: I noticed a typo in the MBean type. You have type=ApplicatoinManager; it should be type=ApplicationManager.
Here you go: I had the same issue and wanted to override the timeout property temporarily. This worked like a champ. Make sure you follow the steps below exactly; I made some mistakes at first and the property was not picked up, but once I fixed them it worked.
Copy the soap.client.props file from /properties and give it a new name such as mysoap.client.props.
Edit mysoap.client.props and update the value of com.ibm.SOAP.requestTimeout as required
Create a new Java properties file soap_override.props and enter the following line:
com.ibm.SOAP.ConfigURL=file:/mysoap.client.props
Pass soap_override.props into wsadmin using the -p option: wsadmin -p soap_override.props...
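Put together, the sequence looks something like this (paths, profile name, and timeout value are illustrative; adjust them to your installation):

$ cp /opt/IBM/WebSphere/AppServer/profiles/Dmgr01/properties/soap.client.props /tmp/mysoap.client.props
$ vi /tmp/mysoap.client.props    # set com.ibm.SOAP.requestTimeout=600
$ echo "com.ibm.SOAP.ConfigURL=file:/tmp/mysoap.client.props" > /tmp/soap_override.props
$ wsadmin.sh -lang jython -p /tmp/soap_override.props -f /opt/IBM/wsadmin/wsadmin_Restart_Application.py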
REFERENCE:
https://www.ibm.com/developerworks/community/blogs/timdp/entry/avoiding_wsadmin_request_timeouts_the_neat_way32?lang=en

Log after actor system has been shutdown

I am using the log method from ActorLogging to make logs. I would like to write a few log entries after the system has shut down, but that does not work, presumably because ActorLogging logs through the system itself. What I would like to do looks like this:
logger.info("Shutting down actor system.")
context.system.shutdown()
context.system.registerOnTermination {
  logger.info("Actor System terminated, stopping loggers and exiting.")
  loggerContext.stop()
}
Are there any workarounds to this problem?
Thanks!
You can just use slf4j directly (backed, for instance, by Logback), as described here.
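A minimal sketch of that approach, assuming Logback (or another slf4j backend) is on the classpath; the object and system names are illustrative:

import akka.actor.ActorSystem
import org.slf4j.LoggerFactory

object ShutdownLogging extends App {
  // A plain slf4j logger does not go through the ActorSystem's event stream,
  // so it keeps working after the system has shut down.
  private val logger = LoggerFactory.getLogger(getClass)

  val system = ActorSystem("example")

  system.registerOnTermination {
    logger.info("Actor System terminated, stopping loggers and exiting.")
  }

  logger.info("Shutting down actor system.")
  system.shutdown()
}

Because the logger comes from LoggerFactory rather than from ActorLogging, nothing in the termination callback depends on the dying actor system.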

Handling connection failures in apache-camel

I am writing an apache-camel RabbitMQ consumer. I would like to react somehow to connection problems (e.g. try to reconnect). Is it possible to configure apache-camel to reconnect automatically?
If not, how can I find out that a connection to the queue was interrupted? I've done the following test:
start the queue (and some producer)
start my consumer (it was getting messages as expected)
stop the queue (the messages stopped arriving, as expected, but no exception was thrown)
start the queue (no new messages were received)
I am using Camel from Scala (via akka-camel), but a Java solution would probably also be OK.
You can pass the flag automaticRecoveryEnabled=true in the URI, and Camel will reconnect if the connection is lost.
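For example (host, port, and exchange name are illustrative):

rabbitmq://localhost:5672/myExchange?automaticRecoveryEnabled=true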
For automatic recovery of RabbitMQ resources (connections/channels/consumers/queues/exchanges/bindings) when failures occur, check out Lyra (which I authored). Example usage:
import com.rabbitmq.client.Connection;
import net.jodah.lyra.ConnectionOptions;
import net.jodah.lyra.Connections;
import net.jodah.lyra.config.Config;
import net.jodah.lyra.config.RecoveryPolicy;
import net.jodah.lyra.util.Duration;

Config config = new Config()
    .withRecoveryPolicy(new RecoveryPolicy()
        .withMaxAttempts(20)
        .withInterval(Duration.seconds(1))
        .withMaxDuration(Duration.minutes(5)));
ConnectionOptions options = new ConnectionOptions().withHost("localhost");
Connection connection = Connections.create(options, config);
The rest of the API is just the amqp-client API, except your resources are automatically recovered when failures occur.
I'm not sure about camel-rabbitmq specifically, but hopefully there's a way you can swap in your own resource creation via Lyra.
The current camel-rabbitmq just creates a connection and a channel when the consumer or producer is started, so it doesn't get a chance to catch the connection exception :(.