Below is the script I am using to execute the "read sub" command, but unfortunately I am not able to get the desired result.
#!/bin/sh
{
echo '%macro read;'
echo '%sysexec ( echo -n "Sub setting condition:");'
echo '%sysexec ( read sub) ;'
echo '%sysexec ( echo "Checking the macro [$sub]");'
echo '%mend;'
echo '%read;'
} > "/home/read.sas"
cd /home
sas /home/read.sas
Below is the result I was expecting from the above script:
Checking the macro [<text after entering the sub setting condition:>]
Thank you in advance for your help.
Your output is probably saved in the SAS log file.
See the sample command below, which specifies both the file to execute and where to save the log file:
/sas/940/SASFoundation/9.4/sas /projects/program1.sas -log "/projects/program1.log"
Make sure that the .sas file you are executing is the SAS program itself, not a file containing only the macro definition.
You need to explain what you are trying to do. Your program might succeed in getting user input from the console, but the value will be stored in a shell variable that the SAS code cannot access.
You might be able to retrieve the value entered by writing it to a file and then reading from the file. But you need to make sure the read command and the command that references the variable run in the same sub-shell.
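A quick plain-shell illustration of the problem (a sketch; presumably each %sysexec call spawns its own shell):
# each sh -c below runs in a separate shell, so $sub is lost between them
sh -c 'read sub'                              # sets sub in this shell only
sh -c 'echo "Checking the macro [$sub]"'      # a new shell; sub is empty here
# keeping both commands in one shell works:
sh -c 'read sub; echo "Checking the macro [$sub]"'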
So if I create this program as read.sas:
data _null_;
call system ( 'echo -n "Sub setting condition:";read sub ; echo "Sub is $sub" >read.txt ' );
run;
data _null_;
infile 'read.txt';
input;
put _infile_;
run;
And run it using "sas read", then the SAS log looks like this:
1 data _null_;
2 call system ( 'echo -n "Sub setting condition:";read sub ; echo "Su
b is $sub" >read.txt ' );
3 run;
NOTE: DATA statement used (Total process time):
real time 2.80 seconds
cpu time 0.02 seconds
4 data _null_;
5 infile 'read.txt';
6 input;
7 put _infile_;
8 run;
NOTE: The infile 'read.txt' is:
Filename=.../test/read.txt,
Owner Name=xxxx,Group Name=xxx,
Access Permission=-rw-rw-r--,
Last Modified=05Apr2018:08:45:02,
File Size (bytes)=16
Sub is Hi there
NOTE: 1 record was read from the infile 'read.txt'.
The minimum record length was 15.
The maximum record length was 15.
NOTE: DATA statement used (Total process time):
real time 0.00 seconds
cpu time 0.00 seconds
Currently, the only POSIX-compliant way of creating a unique directory (that I know of) is to create a unique file using the mkstemp() function exposed by m4, and then replace this file with a directory:
tmpdir="$(printf "mkstemp(tmp.)" | m4)"
unlink "$tmpdir"
mkdir "$tmpdir"
This seems rather hacky though, and I also don't know how safe/secure it is.
Is there a better/more direct POSIX-compliant way to create a unique temporary directory in a shell script, or is this as good as it gets?
The mktemp command is out of the question because it is not defined in POSIX.
I'd expect using unlink/mkdir to be statistically safe as the window of opportunity for another process to create the directory is likely to be small. But a simple fix is just to retry on failure:
while
tmpdir="$(printf "mkstemp(tmp.)" | m4)"
unlink "$tmpdir"
! mkdir "$tmpdir"
do : ; done
Similarly, we could simply attempt to create a directory directly without creating a file first. Directory creation is atomic so there is no race condition. We do have to pick a name that doesn't exist but, as above, if we fail we can just try again.
For example, using a simple random number generator:
mkdtemp.sh
#!/bin/sh
# initial entropy, the more we can get the better
random=$(( $(date +%s) + $$ ))
while
# C standard rand(), without truncation
# cf. https://en.wikipedia.org/wiki/Linear_congruential_generator
random=$(( (1103515245*random + 12345) % 2147483648 ))
# optionally, shorten name a bit
tmpdir=$( printf "tmp.%x" $random )
# loop until new directory is created
! mkdir "$tmpdir" 2>&-
do : ; done
printf %s "$tmpdir"
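A possible way to call it from another script (a hypothetical caller; the script prints the directory name on stdout):
# create a unique temporary directory and remove it on exit
tmpdir=$(sh mkdtemp.sh) || exit 1
trap 'rm -rf "$tmpdir"' EXIT
# ... work inside "$tmpdir" ...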
Notes:
%s (seconds since the epoch) is not a POSIX-standard format option to date; you could use something like %S%M%H%j instead, as in the sketch after these notes
POSIX says "Only signed long integer arithmetic is required", which I believe means at least 2^31
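For the first note, a portable seed might look like this (a sketch; the leading "1" keeps a result with leading zeros from being parsed as an octal constant in shell arithmetic):
# portable initial entropy: %S%M%H%j yields 9 digits, and "1" + 9 digits
# still fits in a signed 32-bit integer
random=$(( 1$(date +%S%M%H%j) + $$ ))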
I have a .bat (Windows command) file that includes invocations of SQLCMD and other commands. (Of course SQLCMD is sending my T-SQL code to SQL Server.) I want to detect certain conditions in the SQL code and conditionally exit the entire batch file. I've tried various combinations of RAISERROR, THROW, and deliberate division by 0 (I'm not proud), along with various command-line switches on SQLCMD and handling of errorlevel in the .bat file.
I tried the answer to 5789568 but could not get it to work in my case. Here are two files that show one failed attempt. It tries to abort if there are more than 3 tables, but it doesn't abort the .bat file, as you can see when the final command (echo) executes. It doesn't even abort the SQLCMD run, as you can see when it reports how many tables there are.
example.bat
set ERRORLEVEL=0
sqlcmd -b -S dbread.vistaprint.net -E -d columbus -e -i example.sql
if %ERRORLEVEL% neq 0 exit /b %ERRORLEVEL%
echo we got to the end of the BAT file
example.sql
SET XACT_ABORT ON
if ((SELECT COUNT(*) FROM sys.tables) > 3)
begin
RAISERROR ('There are more than 3 tables. We will try to stop', 18, -1)
end
SELECT COUNT(*) FROM sys.tables
%ERRORLEVEL% is not a normal environment variable. It is a dynamic pseudo-variable that returns the current ERRORLEVEL. If you explicitly define a true environment variable with set ERRORLEVEL=0, then the dynamic behavior is destroyed, and %ERRORLEVEL% will forever return your user-defined value until you undefine it. You should never define your own values for ERRORLEVEL, RANDOM, CD, TIME, DATE, etc.
If you want to clear the ERRORLEVEL, then you must execute a command that sets the value to 0. I like to use (call ).
Assuming there are no other problems with your code, it should be fixed with the following:
(call )
sqlcmd -b -S dbread.vistaprint.net -E -d columbus -e -i example.sql
if %ERRORLEVEL% neq 0 exit /b %ERRORLEVEL%
echo we got to the end of the BAT file
sqlcmd is an external command (exe program), so it always sets ERRORLEVEL. Therefore you should not have to clear the ERRORLEVEL, and (call ) could be removed. You really only have to worry about clearing the ERRORLEVEL before you run internal commands that do not set ERRORLEVEL to 0 upon success. (See Which cmd.exe internal commands clear the ERRORLEVEL to 0 upon success?)
#dbenson's answer solves the problem with the .bat file. But there is also a problem with the .sql file, whose symptom is that the last line executes despite the error. Here is a complete solution. That is, these two files work correctly.
The "error" for which the code is checking is that a database named 'master' exists. If you replace 'master' with something that is not the name of a database, it runs without error.
example.bat
REM Replace the server name with any SQL Server server to which you
REM have at least read access. The contents of the server don't matter.
sqlcmd -b -S dbread.vistaprint.net -E -i example.sql
if %errorlevel% neq 0 exit /b %errorlevel%
echo 'In the .bat file, we got past the error check, so run was successful.'
example.sql
if exists (select * from master.sys.databases where name = 'master')
begin
RAISERROR ('Found an error condition', 18, 10)
return
end
print 'In the SQL file, we got past the error check, so run was successful.'
I have been searching for a utility/tool that can provide the md5sum (or any unique checksum) of a data block inside the ext3 inode structure.
The requirement is to verify whether certain data blocks get zeroed after a particular operation.
I am new to file systems and do not know whether any existing tool can do the job, or whether I need to write this test utility myself.
Thanks...
A colleague provided a very elegant solution. Here is the script.
It takes the name of a file as a parameter, and assumes the file system block size to be 4K.
A further extension of this idea:
If you know the data blocks associated with the file (via stat), you can use the 'skip' option of the 'dd' command to build small files, each one block in length, and then take the md5sum of these blocks. This way you can get md5sums directly from the block device. Not something you would want to do every day, but a nice analytical trick.
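If you would rather not hard-code 4K, the block size can be queried first; a sketch, assuming GNU stat (the -f option reports file-system status) or an ext device readable via tune2fs:
# ask the file system for its block size instead of assuming 4096
blksize=$(stat -f -c %s /root/test/)
# or, ext-specific, read it from the superblock:
# tune2fs -l /dev/sdXN | grep 'Block size'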
==================================================================================
#!/bin/bash
absname=$1
testdir="/root/test/"
mdfile="md5"
statfile="stat"
blksize=4096
fname=$(basename $absname)
fsize=$( ls -al $absname | cut -d " " -f 5 )
numblk=$(( fsize/blksize ))
x=1
#Create the test directory, if it does not exist already
if [[ ! -d $testdir ]];
then
`mkdir -p $testdir`
fi
#Create multiple files from the test file, each 1 block sized
while [[ $x -le $numblk ]]
do
(( s=x-1 ))
`dd if=$absname of=$testdir$fname$x bs=4096 count=1 skip=$s`
`md5sum $testdir$fname$x >> $testdir$mdfile`
(( x=x+1 ))
done
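To check the original requirement (that certain blocks were zeroed), the per-block sums can be compared against the md5sum of an all-zero block; a minimal sketch, assuming 4096-byte blocks:
# md5 of one 4K block of zeros; any fully zeroed block in the file will match it
dd if=/dev/zero bs=4096 count=1 2>/dev/null | md5sum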
I have a program that performs some operations on a specified log file, flushing to the disk multiple times for each execution. I'm calling this program from a perl script, which lets you specify a directory, and I run this program on all files within the directory. This can take a long time because of all the flushes.
I'd like to execute the program and run it in the background, but I don't want the pipeline to be thousands of executions long. This is a snippet:
my $command = "program $data >> $log";
myExecute("$command");
myExecute basically runs the command using system(), along with some other logging/printing functions. What I want to do is:
my $command = "program $data & >> $log";
This will obviously create a large pipeline. Is there any way to limit how many background executions are present at a time (preferably using &)? (I'd like to try 2-4).
#!/bin/bash
#
# let's call this script "multi_script.sh"
#
# wait until there are fewer than 4 instances running,
# polling at an interval of 5 seconds
while [ "$(pgrep -c program)" -ge 4 ]; do sleep 5; done
/path/to/program "$1" &
Now call it like this:
my $command = "multi_script.sh $data" >> $log;
Your perl script will wait whenever the bash script waits.
Positives:
If a process crashes it will be replaced (the crashed process's data goes, of course, unprocessed)
Drawbacks:
It is important for your perl script to wait a moment between starting instances (maybe a sleep of a second), because of the latency between invoking the script and passing the while-loop test. If you spawn them too quickly (system spamming) you will end up with many more processes than you bargained for.
If you are able to change
my $command = "program $data & >> $log";
into
my $command = "cat $data >>/path/to/datafile";
(or even better: append $data to /path/to/datafile directly from perl), and make the last line of your script:
system("/path/to/quadslotscript.sh");
then this script, quadslotscript.sh, will do the work:
4 execution slots are started and stay alive until the end
all slots get their input from the same datafile
when a slot finishes processing an entry, it reads a new one to process,
until the datafile/queue is empty
no process-table lookup during execution, only when all work is done
the code:
#!/bin/bash
log="/path/to/logfile"   # the log file that the perl code was appending to
# use the datafile as a queue where all processes get their input
exec 3< "/path/to/datafile"
# 4 separate processes; each reads the next entry from fd 3 when it is free
while read -u 3 DATA; do /path/to/program "$DATA" >> "$log"; done &
while read -u 3 DATA; do /path/to/program "$DATA" >> "$log"; done &
while read -u 3 DATA; do /path/to/program "$DATA" >> "$log"; done &
while read -u 3 DATA; do /path/to/program "$DATA" >> "$log"; done &
# only exit when 100% sure that all processes ended
while pgrep "program" &>/dev/null; do wait; done
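If your xargs supports -P (common in GNU and BSD implementations, though not guaranteed by POSIX), the same slot idea fits in one line; a sketch assuming one whitespace-free entry per line in the datafile:
# run at most 4 copies of program at once, one datafile entry per invocation
xargs -P 4 -n 1 /path/to/program < /path/to/datafile >> "$log"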
I used to use a server with LSF but now I just transitioned to one with SLURM.
What is the equivalent command of bpeek (for LSF) in SLURM?
bpeek: Displays the stdout and stderr output of an unfinished job
I couldn't find the documentation anywhere. If you have some good references for SLURM, please let me know as well. Thanks!
You might also want to have a look at the sattach command.
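For example, to attach to the I/O of step 0 of job 12345 (hypothetical job id):
# attach to the standard output/error of a running job step
sattach 12345.0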
I just learned that in SLURM there is no need for bpeek to check the current standard output and standard error, since they are written at run time to the files specified for stdout and stderr.
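For example, with sbatch's default output file name (slurm-<jobid>.out):
# follow the output of job 12345 as it is written
tail -f slurm-12345.out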
Here's a workaround that I use. It mimics the bpeek functionality from LSF
Create a file bpeek.sh:
#!/bin/bash
# first argument: the slurm job id
jobid=$1
# ask scontrol for the job record and pull out the StdOut= path
stdout=$(scontrol show job "$jobid" | grep StdOut= | sed 's/^ *StdOut=//')
# show the last 10 rows of the file if no second argument is given
nrows=${2:-10}
tail -f -n "$nrows" "$stdout"
Then you can use it:
sh bpeek.sh JOBID NROWS(optional)
Or add an alias to your ~/.bashrc file (arguments typed after the alias are appended to it; an alias cannot reference $1 and $2):
alias bpeek="sh ~/bpeek.sh"
and then use it:
bpeek JOBID NROWS(optional)
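If you prefer explicit argument handling, a shell function (also in ~/.bashrc) forwards its arguments, which an alias cannot do:
# a function passes its arguments through via "$@", unlike an alias
bpeek() { sh ~/bpeek.sh "$@"; }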