How do I send a message from the command line to an Erlang process? - matlab

I am trying to notify an Erlang process that an external program (a Matlab script) has completed. I am using a batch file to do this and would like to enter a command that will notify the Erlang process of completion. Here is the main code:
In myerlangprogram.erl:
runmatlab() ->
    receive
        updatemodel ->
            os:cmd("matlabscript.bat"),
            ...
    end.
In matlabscript.bat:
matlab -nosplash -nodesktop -r "addpath('C:/mypath/'); mymatlabscript; %quit;"
REM I would like to notify Erlang of completion here....
exit
As you can see, I am using the Erlang function os:cmd to call my Matlab script.
I am not sure that this is the best approach. I have been looking into using ports (http://www.erlang.org/doc/reference_manual/ports.html) but am struggling to understand how/where the ports interact with the operating system.
In summary, my 2 questions are:
1. What is the easiest way to send a message to an Erlang process from the command line?
2. Where/how do Erlang ports receive/send data from/to the operating system?
Any advice on this would be gratefully received.
N.B. the operating system is Windows 7.

I assume that you want to call os:cmd without blocking your main process loop. In order to accomplish that, you will need to call os:cmd from a spawned process and then send a message back to the Parent process indicating completion.
Here is an example:
runmatlab() ->
    receive
        updatemodel ->
            Parent = self(),
            spawn_link(fun() ->
                Response = os:cmd("matlabscript.bat"),
                Parent ! {updatedmodel, Response}
            end),
            runmatlab();
        {updatedmodel, Response} ->
            % do something with response
            runmatlab()
    end.

First: an Erlang process is something entirely different from an OS process, and there is no notification or messaging mechanism between them. What you can do is a) start a new Erlang node, b) connect it to the target node, and c) send a message to the remote node.
But, regarding your question:
runmatlab() ->
    receive
        updatemodel ->
            BatOutput = os:cmd("matlabscript.bat"),
            %% "here" the BAT script has already finished
            %% and its output can be found in the BatOutput variable
            ...
    end.
For the second: in short, ports are about encoding/decoding Erlang data types; the external program talks to the Erlang VM over its standard input and output.
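To make the port side concrete: if the external program is started with open_port({spawn, "..."}, [{packet, 2}]), it exchanges length-prefixed binary messages with the VM over stdin/stdout. Here is a minimal sketch of that framing, written in Python for illustration (the helper names are mine):

```python
import io
import struct

def read_packet(stream):
    # {packet, 2}: each message is a 2-byte big-endian length, then the payload
    header = stream.read(2)
    if len(header) < 2:
        return None  # EOF: the Erlang side closed the port
    (length,) = struct.unpack(">H", header)
    return stream.read(length)

def write_packet(stream, payload):
    # Frame the reply the same way so the Erlang side can decode it
    stream.write(struct.pack(">H", len(payload)) + payload)
    stream.flush()
```

In a real external program you would loop over sys.stdin.buffer and sys.stdout.buffer instead of an in-memory stream; a Matlab-wrapping script could use exactly this to report completion back through a port instead of via os:cmd.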


Tcl/Tk - How to keep other buttons useable while separate function still running?

I'm very new to Tcl/Tk and have been dealing with an issue for the last couple of days. Basically I have a server written in C and a client GUI written in Tcl/Tk. So far it doesn't do a ton. To test it, I start up the server so that it's listening for connections, then run my GUI. When I click one of the buttons, the GUI should open up a separate toplevel window with a text widget embedded in it. (This part works.) Then, my client connects to the server and gives it a couple of settings, and through this the server decides what info to send back. The server's response is what gets printed to that second window's text widget.
What I'm trying to add in now is a Stop button. Right now, my server is set up to wait a couple of seconds, then write the same message to the client. This is set up inside a loop that is waiting to hear a "Stop" command from my client. I have a Stop button in the GUI with a command set up to write that command to the server when clicked. However, all of my buttons get frozen as soon as I hit the begin button and messages are written to the client.
Basically, how can I keep allowing my server to write to my client while still keeping the rest of my GUI usable? I want my client to write a new line to the text widget on my separate window whenever it receives a new message from the server, but I still want the main GUI window that has all my command buttons to behave independently.
In general, it depends on whether what you are doing is CPU-intensive (where reading from a plain file counts as CPU-intensive) or I/O-intensive (where running things in another process counts as I/O-intensive; database calls often count as CPU-intensive here despite not really needing to). I'm only going to mention summaries of what's going on as you aren't quite providing enough information.
For I/O-intensive code, you want to structure your code to be event-driven. Tcl has good tools for this, in that fileevent works nicely on sockets, terminals and pipelines on all supported platforms. The coroutine system of Tcl 8.6 can help a lot with preventing the callbacks required from turning your code into a tangled mess!
For CPU-intensive code, the main option is to run in another thread. That thread won't be able to touch the GUI directly (which in turn will be free to be responsive), but will be able to do all the work and send messages back to the main thread with whatever UI updates it wants done. (Technically you can do this with I/O-intensive code too, but it's more irritating than using a coroutine.) Farming things out to a subprocess is just another variation on this where the communications are more expensive (but much isolation is enforced by the OS).
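The worker-thread pattern described above is not Tcl-specific. A rough sketch in Python (the names are mine): the worker does the heavy computation off the GUI thread and only ever posts messages to a queue, which the GUI thread drains from its own event loop:

```python
import queue
import threading

def cpu_bound_work(n):
    # Stand-in for the expensive computation; runs off the GUI thread
    return sum(i * i for i in range(n))

def start_worker(n, updates):
    # The worker never touches the GUI; it only posts messages to the queue
    def run():
        updates.put(("done", cpu_bound_work(n)))
    t = threading.Thread(target=run, daemon=True)
    t.start()
    return t

updates = queue.Queue()
start_worker(1000, updates).join()
tag, result = updates.get_nowait()  # the GUI thread would poll this periodically
```

In Tcl the draining side would be a repeating `after` script in the main interpreter; the worker thread (or subprocess) stays free to grind away without freezing the buttons.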
If you're dealing with sockets, you're probably I/O-intensive. Assume that until you show otherwise. Here's a simple example:
proc gets_async {sock} {
    fileevent $sock readable [info coroutine]
    while {[gets $sock data] < 0 && [fblocked $sock]} {
        yield
    }
    fileevent $sock readable {}
    return $data
}
proc handler {socket} {
    set n 0
    while {![eof $socket]} {
        # Write to the server
        puts $socket "this is message [incr n] to the server"
        # Read from the server
        puts [gets_async $socket]
    }
    close $socket
}
proc launchCommunications {host port} {
    set sock [socket $host $port]
    fconfigure $sock -blocking 0 -buffering line -encoding utf-8
    coroutine comms($host:$port) handler $sock
}
Note that gets_async is much like coroutine::util gets in Tcllib.

a bug from rpc call to other node? [duplicate]

This question already has an answer here:
a bug in erlang node after an error at another node
(1 answer)
Closed 2 years ago.
I created 2 Erlang nodes on the same Windows machine, in two cmd windows: 'unclient@MYPC' and 'unserveur@MYPC'. The server code is very simple:
-module(serveur).
-export([start/0,recever/0,inverse/1]).
%%%%
start() ->
    Pid = spawn(serveur, recever, []),
    register(ownServer, Pid).
%%%%
recever() ->
    receive
        {From, X} -> From ! {ownServer, 1/X}
    end.
%%%%
inverse(X) ->
    ownServer ! {self(), X},
    receive
        {ownServer, Reply} -> Reply
    end.
At the server node's cmd window I start this module:
c(serveur).
serveur:start().
At the client node I used the rpc:call function to try the connection, and everything was fine; for example:
rpc:call('unserveur@MYPC',serveur,inverse,[2]).
and I received 0.5.
Now I send an atom to the server to cause an error:
rpc:call('unserveur@MYPC',serveur,inverse,[a]).
At the client node's cmd window I waited for the response from the server, but I didn't receive anything, and the client prompt is gone:
(unclient@MYPC)1>
I can type, but the shell does not execute my instructions anymore and no prompt comes back.
I searched around and found that rpc:call triggers the rex server at the destination node, which spawns and monitors a process that executes apply(M, F, A). Is that true? If yes, why did I get this bug on the client node?
On the unclient side, rpc:call(Node,serveur,inverse,[a]) builds a message for the Node's rpc server and waits for a response.
On the unserveur side, the rpc server receives the message and starts a process to call the function serveur:inverse(a).
The inverse function sends a message to serveur:recever(), which executes the instruction 1/a and crashes.
Therefore the reply message can never be sent back to inverse. The inverse function will wait for the answer forever, as will the rpc:call on the unclient node, since you did not define any timeout.
You could define a timeout in the inverse function:
inverse(X) ->
    ownServer ! {self(), X},
    receive
        {ownServer, Reply} -> {ok, Reply}
    after 100 -> % define a timeout of 100 ms
        {error, timeout}
    end.
In addition, it is a good idea to set a timeout on the remote procedure call itself, using rpc:call(Node, Module, Function, Args, Timeout).
In a previous post you were trying to get a response using the trap_exit flag. There were several mistakes there. First, as explained by @legoscia, in case of error the exit signal is sent to every linked process. The second is that you were expecting your process to continue executing its code; on error, the process stops immediately and the system issues the exit signal, which either kills each linked process or is delivered to it as a message, depending on that process's trap_exit flag.
I wrote a version that works as you expected:
-module(serveur).
-export([start/0,recever/0,inverse/1]).
-export([do_op/3]).
%%%%
start() ->
    Pid = spawn(serveur, recever, []),
    register(ownServer, Pid).
%%%%
recever() ->
    process_flag(trap_exit, true),
    receive
        stop -> stopped;
        {'EXIT', _, _} -> recever(); % necessary to discard the {'EXIT',_,normal} messages
        {From, Op, X} ->
            spawn_link(serveur, do_op, [self(), Op, X]),
            receive
                Reply -> From ! {ownServer, Reply}
            end,
            recever()
    end.
do_op(From, inverse, X) ->
    From ! {result, 1/X}.
%%%%
inverse(X) ->
    ownServer ! {self(), inverse, X},
    receive
        {ownServer, Reply} -> Reply
    end.
In fact this code works more or less like a catch statement, which is exactly what you wanted to do, and it is what you should use here. In Erlang it is a very good idea to let processes crash when something unexpected happens, especially when using the Erlang/OTP mechanisms, but when the error is probable (user input, for example) I think it is more appropriate to use catch or try/catch at the right level.
[edit]
It is cool that you want to fully understand the behavior of the system. To answer your question: I am sorry, but I never use rpc, and I don't know in which cases it is well suited.
For this kind of problem I use the global library, which allows communication between the nodes of a cluster (see erlang distribution from learnyousomeerlang, a very good site for learning and understanding Erlang).
As you say, the way you solved the issue uses a lot of code (and I am not sure that it still works locally now). In my opinion, that is because the trap_exit flag is not meant for this usage but for OTP supervision trees and the OTP behaviours (see What is OTP from the same site). In your case, you should use a catch statement and add timeouts to handle the possible errors. Here is code which handles bad arguments and an overloaded server. I have added a few interfaces to simulate the different use cases.
-module(serveur).
-export([start/0,recever/0,inverse/1,lock/0,unlock/0,stop/0,wait10s/0]).
%%%%
start() ->
    Pid = spawn(serveur, recever, []),
    register(ownServer, Pid).
%%%%
recever() ->
    receive
        {From, X} ->
            From ! {ownServer, (catch 1/X)},
            recever();
        waitForUnlock ->
            ok = wait_for_unlock(),
            recever();
        stop -> server_stopped
    end.
%%%%
inverse(X) ->
    ownServer ! {self(), X},
    receive
        {ownServer, {'EXIT', {Reply, _}}} ->
            {error, Reply};
        {ownServer, Reply} ->
            {ok, Reply}
    after 100 ->
        {error, timeout}
    end.
%%%% use this interface to simulate an overloaded server
lock() ->
    ownServer ! waitForUnlock.
%%%% use this interface to unlock the server
unlock() ->
    ownServer ! unlock.
%%%% use this interface to simulate a very long answer from the server
wait10s() ->
    timer:sleep(10000),
    iAmAwake.
%%%% use this interface to stop the server
stop() ->
    ownServer ! stop.
%%%% private function used to hang the server
wait_for_unlock() ->
    receive
        unlock -> ok
    end.
The test on the local node:
(unserveur@MyPc)1> c(serveur).
{ok,serveur}
(unserveur@MyPc)2> serveur:start().
true
(unserveur@MyPc)3> serveur:inverse(2).
{ok,0.5}
(unserveur@MyPc)4> serveur:inverse(a).
{error,badarith}
(unserveur@MyPc)5> serveur:lock().
waitForUnlock
(unserveur@MyPc)6> serveur:inverse(2).
{error,timeout}
(unserveur@MyPc)7> serveur:inverse(a).
{error,timeout}
(unserveur@MyPc)8> serveur:unlock().
unlock
(unserveur@MyPc)9> serveur:inverse(2).
{ok,0.5}
(unserveur@MyPc)10> serveur:wait10s().
iAmAwake
(unserveur@MyPc)11> serveur:stop().
stop
(unserveur@MyPc)12> serveur:inverse(2).
** exception error: bad argument
     in function serveur:inverse/1 (serveur.erl, line 60)
(unserveur@MyPc)13>
and (almost) the same test from the client node:
(unclient@MyPc)1> net_adm:ping('unserveur@MyPc').
pong
(unclient@MyPc)2> rpc:call('unserveur@MyPc',serveur,start,[]).
true
(unclient@MyPc)3> rpc:call('unserveur@MyPc',serveur,inverse,[2]).
{ok,0.5}
(unclient@MyPc)4> rpc:call('unserveur@MyPc',serveur,inverse,[a]).
{error,badarith}
(unclient@MyPc)5> rpc:call('unserveur@MyPc',serveur,lock,[]).
waitForUnlock
(unclient@MyPc)6> rpc:call('unserveur@MyPc',serveur,inverse,[2]).
{error,timeout}
(unclient@MyPc)7> rpc:call('unserveur@MyPc',serveur,inverse,[a]).
{error,timeout}
(unclient@MyPc)8> rpc:call('unserveur@MyPc',serveur,unlock,[]).
unlock
(unclient@MyPc)9> rpc:call('unserveur@MyPc',serveur,inverse,[2]).
{ok,0.5}
(unclient@MyPc)10> rpc:call('unserveur@MyPc',serveur,wait10s,[]).
iAmAwake
(unclient@MyPc)11> rpc:call('unserveur@MyPc',serveur,wait10s,[],1000).
{badrpc,timeout}
(unclient@MyPc)12> rpc:call('unserveur@MyPc',serveur,stop,[]).
stop
(unclient@MyPc)13> rpc:call('unserveur@MyPc',serveur,inverse,[2]).
{badrpc,{'EXIT',{badarg,[{serveur,inverse,1,
                          [{file,"serveur.erl"},{line,60}]},
                         {rpc,'-handle_call_call/6-fun-0-',5,
                          [{file,"rpc.erl"},{line,197}]}]}}}
(unclient@MyPc)14>
Yeah, finally I resolved this bug and, most importantly, I understood what happens: when I called rpc:call('unserveur@MYPC',serveur,inverse,[a]), the client node's shell process sent this message to the server node, where it was passed to the rex server. The rex server spawned and monitored a new process that ran apply(serveur, inverse, [a]). That new process ran the function, and the server process running recever() crashed without replying, so the new process waited forever, along with every process behind it, including the client node's main shell process. That explains the disappearing prompt even though I could still type. This is exactly what Pascal said, so you have answered my question.
I resolved this problem by adding
process_flag(trap_exit, true),
link(whereis(ownServer)),
at the head of the inverse function, and I added
{'EXIT', _, _} -> start(),
                  sorry;
at the head of the receive section of the inverse function. When I make the rpc call with an atom I now see sorry in the client node's shell, and the server automatically goes back to work, so when I call inverse through rpc a second time with an integer I get the right answer.
I see that I coded a lot for this call, so maybe the rpc call is not a good choice, and replacing it with manually spawned processes would be better. What do you think?

How to submit a JCL through SQR Call System Command on MVS z/os?

I’m trying to submit a JCL through an SQR program using the Call System Command on MVS z/OS. The JCL resides in a specific dataset.
What I’m trying to do is something like this:
let $jclcmd= 'SUBMIT PSLIBDSN.O92.CUST7.JCLSRC(UTILI)'
call system using $jclcmd #rtnstat
Up to this point, I have not been able to submit the JCL. What I get from the mainframe is this error:
**** WARNING **** ERRNO = ESYS
Generated in SYSTEM called from line 389 of SYS(UCALL) , offset 000118
Program SUBMIT was abnormally terminated with a system code of 66D.
I also tried let $jclcmd= 'TSO SUBMIT PSLIBDSN.O92.CUST7.JCLSRC(UTILI)' but got this:
Program TSO was abnormally terminated with a system code of 806.
SYSTEM COMPLETION CODE=806 REASON CODE=00000004
Up to this point I have assumed that the Call System function does not allow operating system commands to be executed, for reasons of incompatibility with MVS. In reality, the SQR documentation never says that it does not; it just always uses Windows and UNIX in its examples. I have made a thousand attempts to execute a REXX program, submit a JCL, and so on, but it looks like the function is not assembling the command correctly.
Any idea will be welcome.

How to get application process to wait until the socket has data to read using libevent bufferevents?

I'm working with libevent for the first time and have been having an issue trying to get my application to not run until the read callback is called. I am using bufferevents as well. Essentially, what I am trying to do is avoid the sleep in my main application loop and instead have the OS wake up the process (via libevent) when there is data to be read off the socket. Does anyone know how to do this? I found that an alpha build of libevent lets you set a base event loop to EVLOOP_NO_EXIT_ON_EMPTY, but from looking at the libevent code I believe that would just use up my whole processor. I also read in this question that it is a bad idea to set a socket to blocking on Windows, which is why I haven't done that either. I will tag this with libuv and libev too since they are similar and might contribute to my solution.
You have to use the following API; some of these calls may be outdated, so check the current libevent documentation for newer equivalents.
struct event_base *base;
struct event g_eve;

base = event_init();
/* after binding the socket, register it for read events using the API below */
event_set(&g_eve, SockFd, EV_READ | EV_PERSIST, CallbackFunction, &g_eve);
event_add(&g_eve, NULL);
event_base_dispatch(base);
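That is the classic reactor pattern: register interest in readability, then block in a dispatch loop that only wakes when the OS reports data. The same shape can be sketched with Python's standard selectors module (the socketpair and names here are mine, standing in for your connected socket):

```python
import selectors
import socket

# A socketpair stands in for a connected client socket
a, b = socket.socketpair()
sel = selectors.DefaultSelector()
received = []

def on_readable(sock):
    # Runs only when the dispatch loop reports the socket readable
    received.append(sock.recv(4096))

# Like event_set(..., EV_READ | EV_PERSIST, ...) + event_add: register interest
sel.register(b, selectors.EVENT_READ, on_readable)

a.sendall(b"ping")
# Like event_base_dispatch: block until the OS reports readiness, no busy loop
for key, _mask in sel.select(timeout=5):
    key.data(key.fileobj)

sel.unregister(b)
a.close()
b.close()
```

The process consumes no CPU while blocked in select; it is woken by the kernel exactly when data arrives, which is the behavior the question asks for.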

Perl IPC - FIFO and daemons & CPU Usage

I have a mail parser perl script which is called every time a mail arrives for a user (using .qmail). It extracts a calendar attachment out of the mail and places the "path" of the file in a FIFO queue implemented using the Directory::Queue module.
Another perl script which reads the path of the calendar attachment and performs certain file operations on the local system as well as on the remote CalDAV server, is being run as a daemon, as explained here. So basically this script looks like:
my $declarations
sub foo {
.
.
}
sub bar {
.
.
}
while ($keep_running) {
for(keep-checking-the-queue-for-new-entries) {
sub caldav_logic1 {
.
.
}
sub caldav_logic2 {
.
.
}
}
}
I am using Proc::Daemon for running the script as a daemon. Now the problem is that this process has almost 100% CPU usage. What are the suggested ways to implement the daemon in a more standard, safer way? I am using pretty much the same code as shown in the linked article for the usage of Proc::Daemon.
I bet it is your for loop and checking for new queue entries.
There are ways to watch a directory for file changes. These ways are OS dependent but there might be a Perl module that wraps them up for you. Use that instead of busy looping. Even with a sleep delay, the looping is inefficient when you can have your program told exactly when to wake up by an OS event.
File::ChangeNotify looks promising.
Maybe you don't want truly continuous polling. Is keep-checking-the-queue-for-new-entries a CPU-intensive part of the code, even when the queue is empty? That would explain why your processor is always busy.
Try putting a sleep 1 statement at the very top (or very bottom) of the while loop to let the processor rest between queue checks. If that doesn't degrade the program performance too much (i.e., if everyone can tolerate waiting an extra second before the company calendars get updated) and if the CPU usage still seems high, try sleep 2, sleep 5, etc.
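If you stay with polling, the key point is the sleep between snapshots. A sketch of the idea in Python (the directory-diff helper is mine; Directory::Queue has its own iteration API, so treat this only as an illustration of poll-then-sleep):

```python
import os
import tempfile
import time

def new_entries(path, seen):
    # One cheap poll: diff the directory listing against what we've already seen
    current = set(os.listdir(path))
    fresh = sorted(current - seen)
    seen |= current
    return fresh

with tempfile.TemporaryDirectory() as d:
    seen = set()
    first = new_entries(d, seen)               # nothing queued yet
    open(os.path.join(d, "msg1.ics"), "w").close()
    time.sleep(0.1)                            # the daemon would sleep 1s or more here
    found = new_entries(d, seen)               # picks up the new attachment
```

Even a one-second sleep per iteration drops the CPU usage from ~100% to near zero; an inotify-based watcher removes the polling entirely.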
cpan Linux::Inotify2
The kernel knows when files change and sends this information to your program, which then runs the sub. This may be better because the program will run the sub only when a file has actually changed.