Stopping cURL transfer in Matlab

I wrote a Matlab GUI that's used to automate the transfer of data to and from an ftp server, which is done using cURL, e.g.
% Build the cURL command used for the FTP transfer (user:password@host URL form)
str = sprintf(' "%s" -O "ftp://%s:%s@%s" -Q "CWD %s%s/" ', ...
    handles.curl, username, password, ...
    strcat(ftpname, d{1}), '/users/', username);
% Try to transfer the file until successful (s = 0)
s = 1;
while s ~= 0
    s = dos(str);
end
Typically this GUI will be run on a slow network, so transferring a 50 MB file could take 30 minutes or longer.
What I'd like to know is: if a "Stop" button is pressed on the GUI while it's in the middle of a data transfer, is there a way in cURL to cancel that transfer, or do I need to let it complete?

You could set up a timer object with a callback that checks whether the user hit the Stop button and then tries to kill the process via a separate dos command. The only thing I don't like is the reliance on an external cURL process; it may be difficult to be sure you've got the right one. Is there a reason you didn't try any of Matlab's built-in transfer commands?
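A rough sketch of that idea (it assumes this GUI is the only thing on the machine launching curl.exe, since killing by image name will hit every instance):
% Sketch only: launch the transfer without blocking MATLAB; the trailing '&'
% makes dos() return immediately. Note that the retry-until-s==0 loop no longer
% applies, because the return status no longer reflects the transfer itself.
dos([str ' &']);
% ... later, in the Stop button (or timer) callback:
dos('taskkill /F /IM curl.exe');   % kills *every* curl.exe, hence the caveat above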

Related

Tcl/Tk - How to keep other buttons useable while separate function still running?

I'm very new to Tcl/Tk and have been dealing with an issue for the last couple of days. Basically I have a server written in C and a client GUI written in Tcl/Tk. So far it doesn't do a ton. To test it, I start up the server so that it's listening for connections, then run my GUI. When I click one of the buttons, the GUI should open up a separate toplevel window with a text widget embedded in it. (This part works.) Then, my client connects to the server and gives it a couple of settings, and through this the server decides what info to send back. The server's response is what gets printed to that second window's text widget.
What I'm trying to add in now is a Stop button. Right now, my server is set up to wait a couple of seconds, then write the same message to the client. This is set up inside a loop that is waiting to hear a "Stop" command from my client. I have a Stop button in the GUI with a command set up to write that command to the server when clicked. However, all of my buttons get frozen as soon as I hit the begin button and messages are written to the client.
Basically, how can I keep allowing my server to write to my client while still keeping the rest of my GUI usable? I want my client to write a new line to the text widget on my separate window whenever it receives a new message from the server, but I still want the main GUI window that has all my command buttons to behave independently.
In general, it depends on whether what you are doing is CPU-intensive (where reading from a plain file counts as CPU-intensive) or I/O-intensive (where running things in another process counts as I/O-intensive; database calls often get treated as CPU-intensive here despite not really needing to be). I'm only going to give summaries of what's going on, as you aren't quite providing enough information.
For I/O-intensive code, you want to structure your code to be event-driven. Tcl has good tools for this, in that fileevent works nicely on sockets, terminals and pipelines on all supported platforms. The coroutine system of Tcl 8.6 can help a lot with preventing the callbacks required from turning your code into a tangled mess!
For CPU-intensive code, the main option is to run in another thread. That thread won't be able to touch the GUI directly (which in turn will be free to be responsive), but will be able to do all the work and send messages back to the main thread with whatever UI updates it wants done. (Technically you can do this with I/O-intensive code too, but it's more irritating than using a coroutine.) Farming things out to a subprocess is just another variation on this where the communications are more expensive (but much isolation is enforced by the OS).
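For the CPU-intensive case, here is a rough sketch of that worker-thread pattern using the Thread package (the proc names, the .status label, and the data passed in are all made up for illustration):
package require Thread

set mainThread [thread::id]
# Worker thread: it does the heavy lifting and posts the result back to the
# GUI thread, which stays free to service its event loop.
set worker [thread::create {
    proc crunch {main data} {
        set result [string length $data]   ;# stand-in for the real CPU-heavy work
        thread::send -async $main [list showResult $result]
    }
    thread::wait                           ;# keep the worker alive for more requests
}]
proc showResult {result} {
    .status configure -text "done: $result"
}
thread::send -async $worker [list crunch $mainThread "some big piece of data"]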
If you're dealing with sockets, you're probably I/O-intensive. Assume that until you show otherwise. Here's a simple example:
proc gets_async {sock} {
    # Wake this coroutine whenever the socket becomes readable.
    fileevent $sock readable [info coroutine]
    while {[gets $sock data] < 0 && [fblocked $sock]} {
        yield
    }
    fileevent $sock readable {}
    return $data
}
proc handler {socket} {
    set n 0
    while {![eof $socket]} {
        # Write to the server
        puts $socket "this is message [incr n] to the server"
        # Read from the server
        puts [gets_async $socket]
    }
    close $socket
}
proc launchCommunications {host port} {
    set sock [socket $host $port]
    # line buffering so each puts goes out immediately on the non-blocking socket
    fconfigure $sock -blocking 0 -buffering line -encoding utf-8
    coroutine comms($host:$port) handler $sock
}
Note that gets_async is much like coroutine::util gets in Tcllib.

While loop stuck

I have written a callback function in Matlab. My laptop is communicating with another laptop that sends it bytes every few seconds, which are recorded in a text file. For example, the other laptop sends "66" and my laptop writes "66" to the file Event_Markers.txt continuously until the other laptop sends something else. The code is below.
The problem I am currently facing is that in my callback function (below) I use a while loop to keep writing the same "information" (e.g. "66") to the text file until the other laptop sends something else, but this while loop gets stuck. The callback is part of a larger script that is acquiring data from a spectrometer, and the stuck loop causes everything to freeze, so the rest of the script is not executed. I tried using an if statement instead of the while loop, and then it only writes "66" twice instead of writing it continuously; it is, however, writing to the text file as I want it to.
Does anybody know if I need to add some other line of code to stop it becoming stuck?
Thanks!
appenderFile = fopen('Event_Markers.txt','a+t');
s = serial('COM3');
% Register the callback (the property is BytesAvailableFcn and the handle needs an @)
set(s,'BytesAvailableFcn',{@myCallback,appenderFile});
set(s,'BytesAvailableFcnCount',1);
set(s,'BytesAvailableFcnMode','byte');
fopen(s);

function myCallback(s,~,appenderFile)
bytes = get(s,'BytesAvailable');
if bytes == 1
    [data, count, msg] = fread(s,bytes);
end
fprintf(appenderFile,'%d \n',data);
bytes = get(s,'BytesAvailable');
% This loop keeps re-writing the last value until new bytes arrive,
% which is what blocks the rest of the script.
while bytes == 0
    fprintf(appenderFile,'%d \n',data);
    bytes = get(s,'BytesAvailable');
end
end
You need to break out of the loop when "something else is sent", something like:
while bytes == 0
    fprintf(appenderFile,'%d \n',data);
    bytes = get(s,'BytesAvailable');
    if data ~= 66        % the last value read is no longer "66", so stop repeating it
        break
    end
end

How do I send message from the command line to an Erlang process?

I am trying to notify an Erlang process that an external program (a Matlab script) has completed. I am using a batch file to do this and would like to enter a command that will notify the Erlang process of completion. Here is the main code:
In myerlangprogram.erl:
runmatlab() ->
    receive
        updatemodel ->
            os:cmd("matlabscript.bat"),
            ...
    end.
In matlabscript.bat:
matlab -nosplash -nodesktop -r "addpath('C:/mypath/'); mymatlabscript; %quit;"
%% I would like to notify erlang of completion here....
exit
As you can see, I am using the os:cmd Erlang function to call my Matlab script.
I am not sure that this is the best approach. I have been looking into using ports (http://www.erlang.org/doc/reference_manual/ports.html) but am struggling to understand how/where the ports interact with the operating system.
In summary, my 2 questions are:
1. What is the easiest way to send a message to an Erlang process from the command line?
2. Where/how do erlang ports receive/send data from/to the operating system?
Any advice on this would be gratefully received.
N.B. the operating system is Windows 7.
I assume that you want to call os:cmd without blocking your main process loop. To accomplish that, you will need to call os:cmd from a spawned process and then send a message back to the parent process indicating completion.
Here is an example:
runmatlab() ->
    receive
        updatemodel ->
            Parent = self(),
            spawn_link(fun() ->
                Response = os:cmd("matlabscript.bat"),
                Parent ! {updatedmodel, Response}
            end),
            runmatlab();
        {updatedmodel, Response} ->
            % do something with response
            runmatlab()
    end.
First of all, an Erlang process is something entirely different from an OS process, and there is no built-in notification or messaging mechanism between them. What you can do is: a) start a new Erlang node, b) connect it to the target node, and c) send a message to the remote node.
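A minimal sketch of that route from the Windows command line (the node names, cookie, and registered name mylistener are all invented; it assumes the target node was started with -sname mynode -setcookie mycookie and has called register(mylistener, self())):
rem Fire-and-forget: start a throwaway node, send one message, then stop.
erl -sname notifier -setcookie mycookie -noshell -eval "{mylistener, 'mynode@MYHOST'} ! matlab_done, init:stop()."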
But regarding your question:
runmatlab() ->
    receive
        updatemodel ->
            BatOutput = os:cmd("matlabscript.bat"),
            %% "here" the BAT script has already finished
            %% and its output can be found in the BatOutput variable
            ...
    end.
As for the second question: in short, ports are about encoding and decoding Erlang data terms as they are sent to and received from an external OS process.

Run URL in Background as Service

I'm looking for a way to create a file that runs a specific URL in the background at a specific time. I'll run the service on a timer, and the user won't be logged in. Basically, the server restarts every day, and I want to run a service at 6 AM that just goes to a URL (which will automatically complete some tasks). I was thinking a batch file or even AHK... but is there a simple way to do this?
My comment:
You just want to open a URL that itself starts some tasks? Have you tried iexplore.exe "your url" as a Windows task?
Add:
If this doesn't work, you can just write a small VBScript that you fire off as a Task Scheduler task.
This script just opens your URL:
Dim l_lTimeoutResolve, l_lTimeoutConnect, l_lTimeoutSend, l_lTimeoutReceive, l_sUrl
l_lTimeoutResolve = 5000
l_lTimeoutConnect = 60000
l_lTimeoutSend = 10000
l_lTimeoutReceive = 10000
l_sUrl = "http://yoururl"
'Dim l_sRequestText : l_sRequestText = "stuff you want to send"
Dim l_oXML : Set l_oXML = CreateObject("MSXML2.ServerXMLHTTP")
l_oXML.setTimeouts l_lTimeoutResolve, l_lTimeoutConnect, l_lTimeoutSend, l_lTimeoutReceive
l_oXML.open "POST", l_sUrl, False
l_oXML.setRequestHeader "Content-Type", "text/xml"
l_oXML.send   ' actually fire the request (no body)
' you can also post data and get a result back to write to a logfile, e.g.
'l_oXML.send l_sRequestText
'l_sResponseText = l_oXML.responseText
Pros will probably say you should do this as a PowerShell/.NET script instead, but I've never gotten used to that. If you want to code in .NET instead (VBScript is a bit tricky to set up securely on Windows 2008 servers), it may be better to write a simple .exe in .NET that you just start.
Wrap it in a batch file and create a Windows scheduled task to execute it?
Edit: massive assumption that you are running on Windows... but you get the gist.
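Putting the two answers together, a rough sketch of the scheduled-task side (the task name and script path are invented):
rem Run the saved VBScript every day at 06:00 under the SYSTEM account,
rem so nobody needs to be logged in.
schtasks /Create /TN "HitMorningUrl" /TR "wscript.exe C:\scripts\hit_url.vbs" /SC DAILY /ST 06:00 /RU SYSTEM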

perl: Launch process with system(), then check if it's running under a specific display number

I have a locked down "kiosk" terminal server.
This terminal server has a perl script as its .Xsession, and launches a Tk interface. When that Tk interface is done, the perl script launches "process2" and lets the user interact with "process2" (which is a graphical application).
If a user tampers with "process2" and makes it crash, the user might be able to access the underlying desktop, so I want to check whether "process2" is running; if it is not running on $display, I want to just execute logout (which logs out the display the perl script is currently running on).
Since the system is running 10 instances of "process2" for 10 different users simultaneously, I can't just check whether "process2" is running on the system with "ps" or something like that. I need to check whether "process2" is running under that specific display $display.
Note that all 10 users log on as the same username in all sessions, so I cannot check all processes run by a specific user, that would return all 10 instances too.
Like:
system("process2 &");
while(1) {
sleep(1);
if (is_it_running("process2", $display) == false) {
system("logout &");
}
}
It's the is_it_running function that I need help with; how should it look?
$display can either contain the raw display number, like this: ":1.0", or it can contain the display number parsed out, like this: "1".
If you use fork and exec instead of system("...&"), you can store the Process IDs of your child processes and more directly check their status. See also perlipc.
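A rough sketch of that approach (untested; process2 and logout are taken from the question, the rest is illustrative):
use strict;
use warnings;
use POSIX ":sys_wait_h";   # for WNOHANG

my $pid = fork();
die "fork failed: $!" unless defined $pid;
if ($pid == 0) {
    exec("process2") or die "exec failed: $!";   # child becomes process2
}

# Parent: poll the child once a second; once it has exited, log the session out.
while (1) {
    sleep 1;
    last if waitpid($pid, WNOHANG) != 0;   # 0 means the child is still running
}
system("logout");
Because the parent tracks the PID of the child it started, there is no need to match on $display at all.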
Why not just run process2 in the foreground? Then your perl script won't get control back until it's done executing, at which point it can exit:
system("process2");
system("logout");
Of course, if that's the entire script, maybe a bash script would make more sense.
I solved it after many attempts.
I did a piped open:
$pidofcall = open(HANDLE, "process2|");
Then I did whatever I needed to do, and I made the server send me a signal if it loses connection with process2. If I needed to bail out, I simply did a "goto killprocess;". Then I simply had:
killprocess:
kill(9,$pidofcall);
close(HANDLE);
$mw->destroy;
system("logout");