I am trying to access an OLE object "Broker.Application" via Win32::OLE.
The application "Broker.exe" is already launched manually.
When I run the following snippet from a bash shell using Cygwin Perl, it attaches to the running "Broker.exe" instance correctly:
use Win32::OLE;
my $broker = Win32::OLE->new('Broker.Application') or die "Can't load Broker.Application";
But when I run this snippet from inside GNU screen, it creates a new instance, as shown below:
$ ps -W | grep -i broker
1912 0 0 1912 ? 0 01:07:37 C:\Program Files\AmiBroker\Broker.exe #Manually started
3896 0 0 3896 ? 0 14:39:41 C:\PROGRA~1\AMIBRO~1\Broker.exe #created by Win32::OLE from inside screen
I tried Win32::OLE->GetActiveObject(), but it was no help: it returns undef even when an instance is running.
When I log into this machine remotely over SSH, the script likewise creates a new instance instead of attaching to the running one.
I am not sure what makes the difference between running this Perl script from a stand-alone bash shell and from inside GNU screen.
Is there any workaround to make it attach when run from inside screen?
Here is what I am trying to achieve:
Run a MATLAB command/script that starts a Unix terminal and, from within that terminal, starts external software. MATLAB itself should be decoupled from that shell immediately.
On a Unix system, I am currently trying to start an external program from within MATLAB. I know that I can basically use the MATLAB command prompt as a terminal by adding a ! in front of every command. However, the program's output is then also displayed on the MATLAB command prompt, and the program is killed as soon as MATLAB is closed.
To start an external terminal, call that terminal emulator using the MATLAB system command. If gnome-terminal is your terminal:
system('gnome-terminal');
To pass parameters to the terminal, use -e:
system('gnome-terminal -e echo hello World');
This terminal will close immediately after it's finished running, so to keep it open:
system('gnome-terminal -e "bash -c \"echo Hello World; exec bash\""');
Hope this helps. A similar command should work for other terminal emulators besides gnome-terminal.
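To address the decoupling part of the question: the command passed to system is run by a shell, so appending & backgrounds it and returns control to MATLAB immediately. A minimal shell-level sketch of the idea, using sleep as a hypothetical stand-in for the terminal emulator:

```shell
# Background a long-running stand-in process (sleep substitutes for
# gnome-terminal here) so the caller gets control back immediately,
# which is what appending " &" inside MATLAB's system() call achieves.
sleep 30 > /dev/null 2>&1 &
child=$!
echo "caller continues immediately (child pid $child)"
kill "$child"    # clean up the stand-in
```

In MATLAB this would look like system('gnome-terminal -e "..." &'), though the exact quoting depends on your terminal emulator.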
I'm running a script on Solaris 11 with different results depending on the shell used.
The script has an echo redirecting to a file given by an environment variable:
echo "STARTING EXEC" >> $FILE
P.S. EXEC is just part of the message the script shows; the script is not using the exec command.
If I execute the script without the variable FILE defined (using /usr/bin/ksh), I get:
./start.sh[10]: : cannot open
and the script continues execution.
The flags for ksh are:
echo $-
imsuBGEl
But if I change to /usr/xpg4/bin/sh, the script shows me the echo on stdout and no error is shown.
The flags for xpg4 sh are:
echo $-
imsu
I tried to change the flags with set +/- (I can't remove the E and l flags, but B and G are removed fine), but I can't get the same behavior.
Is there anything I can do to get the same result using ksh, without the "cannot open" error?
/usr/bin/ksh --version
version sh (AT&T Research) 93u 2011-02-08
I want the script to keep going, showing the message on stdout, instead of showing the error as it does now.
As shellter said in the comments, the right thing to do is to check whether the FILE variable is defined before doing anything. This is a script migration from an HP-UX to a Solaris environment, and the client expects the same results as before (we unset the FILE variable before execution to test it).
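A minimal sketch of that guard (assuming only that FILE is the environment variable from the question): when FILE is unset or empty, the message falls back to stdout and the script carries on, which is the behavior wanted under both shells.

```shell
#!/bin/sh
unset FILE    # simulate the failing environment from the question
# Redirect to $FILE only when it is set and non-empty; otherwise
# print to stdout instead of triggering a "cannot open" error.
if [ -n "${FILE:-}" ]; then
    echo "STARTING EXEC" >> "$FILE"
else
    echo "STARTING EXEC"
fi
echo "script continues"
```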
You are likely running Solaris 11, not Solaris 64.
Should you want your scripts to work under Solaris 11 without having to hunt down every bogus redirection, you can simply replace all shebangs (first lines) with #!/usr/xpg4/bin/sh.
The final solution we are going to take is to install the ksh88 package and use it as the default shell (/usr/sunos/bin/ksh). This shell has the same behavior the client had before, and we can leave the scripts unmodified.
The ksh used in Solaris 11 is ksh93 (http://docs.oracle.com/cd/E23824_01/html/E24456/userenv-1.html#shell-1).
Thanks @jlliagre and @shellter for your help.
Does anybody have an example of how to create a Windows service on Windows 7 64-bit from a Perl script?
On Windows XP Professional 32-bit, I created a Windows service successfully with Win32::Daemon, which has callback functions. This doesn't work on 64-bit.
I have seen http://nssm.cc/usage and created a service using that, but it doesn't keep state and it gives errors. If anybody has a proper example, perhaps...
Any ideas much appreciated.
I managed to solve this now.
I now have my Perl script running as a Windows service on Windows 7 64-bit.
Basically Win32::Daemon works on Windows 7 64-bit, but the service creation needs to be done manually; i.e., the callback functions and startup are fine.
Here is an example of creating a Perl Windows service from scratch.
Create folder c:/myservice
Copy the code example from this link and save it to the directory above as myservice.pl: http://www.roth.net/forums/topic.php?id=106
Add these two lines after the print hello statement in the script:
$Context->{last_state} = SERVICE_RUNNING;
Win32::Daemon::State( SERVICE_RUNNING );
This is needed to keep the service running. Otherwise it stops.
Open a DOS cmd terminal in admin mode. Create the service using the following command:
% sc create myservice binpath= "c:\strawberry\perl\bin\perl.exe"
It will display the following message on success:
[SC] CreateService SUCCESS
Now we need to edit the registry. Open the registry editor (Start -> then type regedit).
Find the service under HKEY_LOCAL_MACHINE->SYSTEM->CurrentControlSet->Services->myservice
Click on 'myservice' and edit the ImagePath value to be:
c:\strawberry\perl\bin\perl.exe -I "C:\myservice" "C:\myservice\myservice.pl" --run
Now open the Services window and start the service (Start -> Control Panel -> Administrative Tools -> Services).
In the script's current directory a log file is created and updated every couple of seconds. If you are using Cygwin on Windows, you can tail it:
% tail -f *.log
The process will print Hello! periodically...
Thanks.
I am trying to write a script for starting the Tomcat server that gets disassociated from the shell once the script finishes executing. For example, please see the screen snapshot below.
bash-3.00# ./startup.sh
Using CATALINA_BASE: /opt/tomcat/6.0.32
Using CATALINA_HOME: /opt/tomcat/6.0.32
Using CATALINA_TMPDIR: /opt/tomcat/6.0.32/temp
Using JRE_HOME: /opt/jdk1.6.0_26/
Using CLASSPATH: /opt/tomcat/6.0.32/bin/bootstrap.jar
bash-3.00# ps -eaf | grep tomcat
root 4737 2945 0 02:45:53 pts/24 0:00 grep tomcat
root 4734 29777 1 02:45:42 pts/24 0:19 /opt/jdk1.6.0_26//bin/java -Djava.util.logging.config.file=/opt/tomcat/6.0.32/c
Now, as you can see, once the script finishes executing, the Tomcat process stays associated with pts/24 until I close the shell.
But what I want is that, even if the shell is kept open, the process should show behavior like below:
bash-3.00# ps -eaf | grep tomcat
root 13985 2945 0 22:40:13 pts/24 0:00 grep tomcat
root 13977 29777 1 22:40:01 ? 0:22 /opt/jdk1.6.0_26//bin/java -Djava.util.logging.config.file=/opt/tomcat/6.0.32//
The operating system is Solaris. The options I used to try to accomplish this are nohup and disown, but the process is still associated with the shell.
Other mechanisms are to put it in crontab, or use svc so the process starts as the system comes up (i.e., a daemon), or to write a small C program that forks a child process and goes away.
Please note that the process is running in the background.
But I want to achieve the same thing using a shell or Perl script, so any thoughts on this will help me a lot.
Thanks in advance.
Well, you could go and do all the hard work yourself, but why, when there's a module for that: Proc::Daemon (not sure if it works on Solaris).
The documentation also describes the process used, which is useful for you to understand anyhow, should you decide to go ahead and craft your own daemonizing code.
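The detach steps that kind of module performs can be sketched roughly at the shell level too: a new session, a neutral working directory, and all three standard descriptors pointed at /dev/null. This sketch assumes setsid(1) is available (it is on Linux; on Solaris you may need a double-fork instead), and sleep 30 stands in for the real startup command:

```shell
#!/bin/sh
# Detach a command from the current terminal and session:
#   - cd / so the daemon does not pin a mount point
#   - setsid gives it a new session (no controlling tty)
#   - all three standard descriptors point away from the terminal
cd /
setsid sleep 30 < /dev/null > /dev/null 2>&1 &
echo "daemonized (ps should show '?' in the TTY column)"
```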
( nohup ./script.bash & )
The parenthesized sub-shell exits immediately and ps -ef |grep script.bash returns:
501 59614 1 0 0:00.00 ttys005 0:00.00 /bin/bash ./script.bash
My problem is that connecting through Net::SSH::Perl differs from connecting through regular SSH:
my shell variables (command: export) aren't the same.
This prevents me from executing users' scripts that depend on the shell environment variables.
For example:
If I execute the command export through Perl (Net::SSH::Perl), I get:
MAIL=/var/mail/username
PATH=/usr/local/bin:/bin:/usr/bin
PWD=/username
SHELL=/bin/ksh
SSH_CLIENT='myIPAddress 1022 22'
SSH_CONNECTION='myIPAddress 1022 remoteIPAddress 22'
USER=myusername
while executing the same command through a regular SSH connection, I get 42 rows of env vars.
Another example:
If I execute the command tty through Perl (Net::SSH::Perl), I get:
not a tty
while executing the same command through a regular SSH connection, I get:
/dev/pts/3
What am I missing here?
For the environment variables, it sounds like ~/.bashrc isn't getting sourced. You should be able to source it yourself:
$ssh->cmd(". ~/.bashrc");
For the terminal, allocating a pty is not necessary for many tasks, so it is not normally done for non-interactive shells; however, you can pass the use_pty option when creating your ssh object to tell it to create a pty.
from the Net::SSH::Perl documentation:
use_pty
Set this to 1 if you want to request a pseudo tty on the remote machine. This is really only useful if you're setting up a shell connection (see the shell method, below); and in that case, unless you've explicitly declined a pty (by setting use_pty to 0), this will be set automatically to 1. In other words, you probably won't need to use this, often.
The default is 1 if you're starting up a shell, and 0 otherwise.
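The tty result can be reproduced locally without SSH: tty simply reports whether stdin is attached to a terminal, and with no pty allocated (simulated here by feeding stdin from a pipe) it prints the same message:

```shell
# stdin is a pipe rather than a terminal, so tty prints "not a tty"
# (and exits non-zero) -- the same thing a pty-less SSH command sees.
echo | tty
```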
Chas, that would not work either, as Net::SSH::Perl (and, for that matter, most other Perl SSH clients) runs every command in a new shell session, so side effects do not persist between commands.
The solution is to combine both commands as follows:
$ssh->cmd(". ~/.bashrc && $cmd");
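The same "fresh session per command" effect can be demonstrated locally; GREETING is just an illustrative variable name, and each sh -c plays the role of one cmd() call:

```shell
# Each sh -c starts a new shell, like each Net::SSH::Perl cmd() call.
unset GREETING
sh -c 'export GREETING=hello'                                # dies with this shell
sh -c 'echo "separate: [$GREETING]"'                         # prints: separate: []
sh -c 'export GREETING=hello && echo "combined: [$GREETING]"'  # prints: combined: [hello]
```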