I have a quick question about creating files with Perl and executing them. Is it possible to generate a file (I actually need a .bat script) entirely within a Perl program and then execute it, without ever writing it to disk? I know how to create files with Perl, but here I want everything to stay in memory: the program would build the batch script on the fly, execute it, and then discard it. The goal is to end up with just the output text files produced by the script, rather than creating the batch script on disk, executing it, and then deleting the batch file when it's done.
Can this be done and how would I go about doing this?
Regards,
Drew
Do you really need a batch script? Perhaps everything you want to do can be done directly from Perl or invoked directly by Perl via its system command.
If a batch script is essential, what's wrong with creating a temporary file for the script and then executing it with system? See File::Temp, which will even delete the temporary file automatically after you are done.
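For example, a minimal sketch using File::Temp (the batch commands here are placeholders):

use strict;
use warnings;
use File::Temp qw(tempfile);

# Create a temporary .bat file; UNLINK removes it automatically at exit.
my ($fh, $bat_path) = tempfile(SUFFIX => '.bat', UNLINK => 1);
print $fh "dir\n";        # placeholder commands
print $fh "ipconfig\n";
close $fh;

system $bat_path;         # run the script; cleanup happens for us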
If the virtual-batch-file strategy is unavoidable, you might be able to leverage the /C and maybe /S options of cmd. Something like this:
use strict;
use warnings;
my @batch_commands = (
'dir',
q{echo "Make sure quoting isn't busted"},
'ipconfig',
);
# Use & or &&, depending on your needs. Run `cmd /?` for details.
my $virtual_bat_file = join " & ", @batch_commands;
system "cmd /C $virtual_bat_file";
But this feels very wrong. There has to be a better way to accomplish whatever the larger goal of your application is. By the way, when you run cmd /? to learn about /C, /S, and & vs. &&, you'll quickly appreciate how terrible it is in the Land of Batch. Stay away if at all possible.
Open the file; write the contents; close the file; execute the file (with system(), for example); remove the file.
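In Perl that sequence might look like this (the file name and commands are placeholders):

use strict;
use warnings;

my $bat = 'temp_script.bat';   # hypothetical file name
open my $fh, '>', $bat or die "Cannot create $bat: $!";
print $fh "dir\n";             # placeholder commands
print $fh "ipconfig\n";
close $fh;

system $bat;                   # execute the script
unlink $bat or warn "Cannot remove $bat: $!";   # discard it afterwards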
I'm working with an existing framework of WinDbg scripts that go through a series of test scripts Test1.txt, Test2.txt, etc., which are generated by C++ code and which output results.
For example a chunk of one of the test scripts would be,
.if (($spat(@"${var}","18300.000000")==1))
{
.logappend C:\Tests\TestResults.txt
.printf "TestNumber=\t1\tExpected=\t18300.000000\tActual=\t%.6f\t******PASSED******\n",poi(poi(#$t2+#$t6)+0x10)
.logclose
}
I'm trying to add functionality that will create a file whose name displays the current # of the test being run, so that users can see their progress without needing to open a file.
My thought process was that I would set up the script generator so that, at the start of Test #N, it would add a line to the script to create a file 'currentlyRunningTestN.txt', and at the end of Test #N, it would add a line to delete that file. However, I don't see any delete function in the WinDbg meta-command glossary (https://learn.microsoft.com/en-us/windows-hardware/drivers/debugger/meta-commands) or in the list of supported C functions like printf. Am I just missing something, or is deleting files not supported by WinDbg (or, equivalently, renaming files, which would also serve my purpose)? If deleting/renaming don't work, is there another way to achieve the functionality I'm looking for?
With the .shell command, you can execute any DOS-like command. Although I have never tried deleting a file, it should be possible.
As you may have noticed, WinDbg scripting does not always work on the first attempt, so please make sure your script cannot cause major data loss on a customer's PC while deleting files.
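For example, the generated script could contain a line like this (the path is hypothetical; -i- keeps .shell from waiting on an input file):

$$ Delete the progress-marker file for the test that just finished
.shell -i- del C:\Tests\currentlyRunningTestN.txt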
From my Perl script, I am calling a child shell script.
There are a few DB environment variables which are exported by the child shell script,
but when I try to use them in the Perl script, they are not visible. Here is my code:
my $commandLine = ". SetConnection.sh -n $TaskName";
system $commandLine;
my $dbConnectString = "$ENV{'DB_USER'}/$ENV{'DB_PASSWORD'}";
print "$dbConnectString";
Please suggest.
TL;DR
Exported variables are inherited by child processes from the parent. You can't modify the environment of the parent process from the child directly, but you can certainly exchange data using files, pipes, or other forms of interprocess communication.
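For instance, one workaround along those lines is to run the script in a child shell, dump its environment to stdout, and read that back into %ENV (a sketch; it assumes SetConnection.sh can be sourced this way and that the values contain no newlines):

my $output = `. SetConnection.sh -n $TaskName >/dev/null 2>&1; env`;
for my $line (split /\n/, $output) {
    my ($key, $value) = split /=/, $line, 2;
    $ENV{$key} = $value if defined $value;
}
print "$ENV{DB_USER}/$ENV{DB_PASSWORD}\n";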
Source a Perl File Holding Variables
The easiest solution is to have the child process write a file that can then be sourced by the parent. For example, security issues aside, SetConnection.sh could write to a file like /tmp/variables.pl, which you could then source as a Perl script inside the parent script.
For example, consider the following file, presumably written by the child process:
# /tmp/foo.pl
$foo='bar';
Now you require the file in your parent script:
$ perl -e 'require "/tmp/foo.pl"; print "$foo\n"'
bar
This isn't really very secure, but it does work. Think of it as similar to eval, with race conditions and file-access issues thrown in. Nevertheless, it's a very pragmatic solution.
Use a Real Configuration File
Alternatively, you could use a format like JSON, YAML, or CSV (created any way you like, including by your child process) to create a configuration file which you could then parse for values. This is generally the best approach, but your use case may vary.
The benefit of this approach is that you can validate and sanitize values, and don't need to worry about the security or uniqueness of temp files. It's really the right way to do these things, but will require much more work on your part.
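For example, with JSON and the core JSON::PP module (the file path and keys are hypothetical):

use strict;
use warnings;
use JSON::PP;

# Presumably written by SetConnection.sh, e.g.:
#   { "DB_USER": "someuser", "DB_PASSWORD": "somepass" }
open my $fh, '<', '/tmp/connection.json' or die "Cannot open config: $!";
my $config = decode_json(do { local $/; <$fh> });
close $fh;

my $dbConnectString = "$config->{DB_USER}/$config->{DB_PASSWORD}";
print "$dbConnectString\n";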
I'm trying to write a script that will grab logs across a network, parse them for relevant information, and perform some action (email if there's a critical issue, simply write to a log file if it's a warning). I am using an AIX machine with syslogd to process the logs. Right now it is performing as usual, writing all logs to files ... a lot of files.
I was advised to use Perl and Named Pipes to implement the script. I've just spent some time reading up on named pipes and I find them quite fascinating. However, I'm stumped as to how the "flow" of information should work in this situation and how to make perl handle it.
For example, should I create a fifo outside of the script and tell syslogd to write to it by default and have my script on the other end parsing it? Can Perl do that and (for you sysadmins) is this a smart/possible option?
This is my first encounter with Perl and with named pipes.
You can surely create a named pipe in Perl, although it seems to me that, for what you are trying to do, it is better to create the named pipe outside of Perl, as you are suggesting, then have syslogd write to it and read the pipe from Perl.
I don't know AIX very well, but this should do for creating the pipe:
mkfifo /var/adm/syslog.pipe
To have syslogd write to it, add this line to /etc/syslog.conf:
*.info |/var/adm/syslog.pipe
Then:
kill -HUP `cat /var/run/syslogd.pid`
You could also put all this stuff into your perl script: in case the pipe did not exist or syslogd were not using it, the script would arrange all required things for you.
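A sketch of that all-in-Perl variant (it assumes the syslog.conf entry shown above is already in place):

use strict;
use warnings;
use POSIX qw(mkfifo);

my $pipe = '/var/adm/syslog.pipe';

# Create the FIFO if it does not already exist.
unless (-p $pipe) {
    mkfifo($pipe, 0600) or die "mkfifo $pipe failed: $!";
}

# Opening for reading blocks until syslogd (the writer) connects.
open my $fh, '<', $pipe or die "Cannot open $pipe: $!";
while (my $line = <$fh>) {
    # Parse each log line here: email on critical issues,
    # append warnings to a log file, etc.
    print "got: $line";
}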
Possibly you could provide some more details as to what you are trying to do, if you need more help.
I have both Sybase and MSFT SQL Servers installed. There are times when Sybase interferes with MS SQL because they have some overlapping commands.
So, I need two scripts:
A) When run, script A backs up the current path, grabs all paths that contain sybase or SYBASE or SyBASE (you get the point), and moves them all to the very end of the path, preserving their order.
B) When run, script B restores the path from the backup.
Both script A and script B should affect the path immediately. So an a.bat that calls patha.ps1 and pathb.ps1 would look like this:
REM Old path here
call patha.ps1
REM At this point the effective path should be different.
call pathb.ps1
REM Effective old path again
Please let me know if this does not make sense. I am not sure if the call command is the best one to use.
I have never used PowerShell before. I can try to formulate the same thing in Python (I know Stack Overflow users tend to ask "What have you tried so far?"). At this point I am VERY slow at writing anything in the PowerShell language.
Please help.
First of all: call will be of no use here as you are apparently writing a batch file and PowerShell scripts have no association to run them by default. call is for batch files or subroutines.
Secondly, any PowerShell script you call from a batch file cannot change environment variables of the caller's environment. That's a fundamental property of how processes behave and since you are calling another process, this is never going to work.
I'm not so sure why you are even using a batch file here in the first place if you have PowerShell. You might just as well solve this in PowerShell completely.
However, what I get from your problem is that the best way to resolve this is probably the following: Create two batch files that each set the PATH appropriately. You can probably leave out both the MSSQL and Sybase paths from your usual PATH and add them solely in the batch files. Then create shortcuts to
cmd /k set_mssql_path.cmd
and
cmd /k set_sybase_path.cmd
each of which now is a shortcut to a shell to work with the appropriate database's tools. This is how the Visual Studio Command Prompt works and it's probably the cleanest solution you have. You can use the color and prompt commands in those batches to make the two different shells distinct so you always know what environment you have. For example the following two lines will color the console white on blue and set a prompt indicating MSSQL:
color 1f
prompt MSSQL$S$P$G
This can be quite handy, actually.
Generally, trying to rearrange the PATH environment variable isn't exactly easy. While you could trivially split at each ;, this will fail for paths that themselves contain a semicolon (and which therefore need to be quoted). Even in PowerShell this will take a while to get right, so I think creating shortcuts specific to the tools is probably the nicest way to deal with this.
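Putting it together, set_mssql_path.cmd could look something like this (all paths are hypothetical placeholders):

@echo off
REM Add only the MSSQL tools to this shell's PATH.
set PATH=%PATH%;C:\Program Files\Microsoft SQL Server\Tools\Binn
REM White-on-blue console and a prompt that indicates MSSQL.
color 1f
prompt MSSQL$S$P$G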
I'm writing a script that performs the same function several times, but when I run the script, only one of the commands executes, leaving the rest unexecuted after the .bat file has run.
Does this have to do with the long time it takes for my commands to run (15-20 sec)? I've written plenty of bat files and I've never run into this. Do I need to have a sleep function between each command?
I've been trying to figure this one out on Google, but my available search terms make my search results vague and difficult.
Any help is definitely appreciated.
The .bat file looks something like the following:
IF "%input1%"=="search term" GOTO location
do something
do something
do something
etc
GOTO :EOF
:location
do something else
do something else
do something else
...
Does one of your "do something else" lines involve calling another batch file? If so, do you use the CALL command?
If you want to call another batch file recursively, you need to use CALL. Otherwise, when the called batch file exits, it does not return to the calling batch file and simply exits. This is a relic from the MS-DOS days; since memory was at a premium, the MS developers decided that the batch interpreter shouldn't keep a call stack by default -- so if you wanted one, you had to use CALL.
See call /? for more information.
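A minimal illustration (file names are hypothetical):

REM main.bat -- without CALL, control transfers to other.bat
REM and never returns, so "after" is never printed.
echo before
other.bat
echo after

REM main.bat -- with CALL, other.bat runs and execution resumes here.
echo before
CALL other.bat
echo after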