Validate if a command was successful in a batch file or revert all settings - powershell

Please check my batch file commands, as I need to remove a computer from its old domain and join it to a new one.
Sometimes the computer is removed from the old domain successfully but fails to join the new one, which leaves it in no domain at all. I need to add a command that validates whether the computer successfully joined the new domain and, if not, reverts it back to the old domain.
@echo off
netdom.exe remove %computername% /domain:MyOlddomain.local /UserD:Myusername /PasswordD:Mypassword
Ping 127.0.0.1 -n 5 >nul
netdom.exe join %computername% /domain:MyNewDomain /UserD:Myusername /PasswordD:Mypassword
Ping 127.0.0.1 -n 5 >nul
shutdown -t 5 -r -f

The best way to go about it is to use an IF statement.
The basic structure of an IF statement is as follows.
IF EXIST "%1" (
echo "it's here!"
) ELSE (
echo "it isn't here!"
)
Going about it like this will allow you to revert when needed.
More in-depth reading!
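Applied to your script, here is a minimal sketch. It assumes netdom returns a non-zero exit code when the join fails; verify that on your version before relying on it:
@echo off
netdom.exe remove %computername% /domain:MyOlddomain.local /UserD:Myusername /PasswordD:Mypassword
Ping 127.0.0.1 -n 5 >nul
netdom.exe join %computername% /domain:MyNewDomain /UserD:Myusername /PasswordD:Mypassword
IF %ERRORLEVEL% EQU 0 (
    echo Joined the new domain, rebooting.
) ELSE (
    echo Join failed, reverting to the old domain.
    netdom.exe join %computername% /domain:MyOlddomain.local /UserD:Myusername /PasswordD:Mypassword
)
Ping 127.0.0.1 -n 5 >nul
shutdown -t 5 -r -f
Both branches fall through to the reboot; drop it from the failure path if you would rather keep the machine up for troubleshooting.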

Related

sh script gets stuck on read command in while loop

I'm trying to write a script that I'll put in my Pi's cron to check for network connectivity every 10 seconds. If a ping to Google fails, it writes "false" to a text file; the next time the ping succeeds, it restarts a program, because that particular program has trouble reconnecting to the network automatically.
The script seemed to be working when I was executing it from the terminal out of the same directory. Then I cd'd back to / and added a bunch of comments, and now it just exits without any output, and for the life of me I can't figure out where I messed it up. I'm still relatively new to scripting, so I could be missing something absolutely obvious here, but I couldn't find anything useful on Google.
File hierarchy:
/home/pi/WEB_UI/
Inside the WEB_UI folder are both of the scripts I'm running here:
nonet.sh - the script in question
pianobar.sh - a simple script to pkill a program and reload it after 5 seconds.
var.txt - a text file that will only ever contain "true" or "false"
I've tried removing all of the comments, changing the file locations to ./, and putting the while ... do commands on a single line, but I can't figure out where the issue is. If I run sh -x on the script, it returns:
pi@raspberrypi:~/WEB_UI $ sh -x nonet.sh
+ ping -q -c 1 -W 1 google.com
+ read line
Interestingly, I get the same result from a test script I was using that was basically
"if var.txt says 'true', echo 'up', else echo 'down'"
I wonder if something is wrong with my sh interpreter?
#!/bin/sh
# ping google; if successful the network is up
if ping -q -c 1 -W 1 google.com >/dev/null; then
    # read each line of var.txt and act on it
    while read line
    do
        # $line expands to the value just read from var.txt; if line is false:
        if [ "$line" = "false" ]
        then
            # print network-back-up text, run the pianobar script, set var.txt to true
            echo "the network is back up"
            sh /home/pi/WEB_UI/pianobar.sh
            echo true > /home/pi/WEB_UI/var.txt
        else
            # otherwise the network is simply up; set var.txt to true
            echo "the network is up"
            echo true > /home/pi/WEB_UI/var.txt
        # fi ends an if statement, done ends a while loop
        # the redirection after done tells the while loop where to get the line variable
        fi
    done < /home/pi/WEB_UI/var.txt
else
    while read line
    do
        if [ "$line" = "false" ]
        then
            # if var.txt is already false, ping google again
            if ping -q -c 1 -W 1 google.com >/dev/null; then
                # if the ping works, the network is back: restart pianobar, set var to true
                echo "the network is back up"
                sh /home/pi/WEB_UI/pianobar.sh
                echo true > /home/pi/WEB_UI/var.txt
            else
                # if var.txt is false and the ping failed, the network is still down; wait
                echo "the network is still down"
            fi
        else
            # the network has just gone down; record it
            echo "the network is down"
            echo false > /home/pi/WEB_UI/var.txt
        fi
    done < /home/pi/WEB_UI/var.txt
fi
The script SHOULD just echo a simple line saying whether the network is up, down, back up, or still down, depending on which checks it passes or fails. Any assistance would be greatly appreciated!
As Shellter said in the comments above, the issue was that I needed to add a \n to the end of the line in my var.txt.
I think I saw another post recently where while read... was frustrated by a missing \n char, so maybe you want to do printf "false\n" > file instead. Good luck.
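A quick illustration of why that newline matters (the temp file names here are made up): read returns a non-zero status on an unterminated last line, so the loop body never runs.
printf 'false' > /tmp/no_newline.txt
while read line; do echo "got: $line"; done < /tmp/no_newline.txt
# prints nothing: read fails on the unterminated line, even though it fills $line
printf 'false\n' > /tmp/with_newline.txt
while read line; do echo "got: $line"; done < /tmp/with_newline.txt
# prints "got: false"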

Perforce - getting stream name in a trigger script

I want to create a trigger to prevent check-out of files if they belong to a specific stream.
I am using the pre-user-edit trigger.
The trigger executes a Perl script.
When I execute in the Perl script a p4 command, then I get this error:
Perforce password (P4PASSWD) invalid or unset.
What I tried is getting the stream name from the client name (which is passed to the Perl script):
$stream = `p4 client -o $client | grep ^Stream: | awk '{print \$2}'`;
chomp $stream;
This does not work.
Also, trying to assign $p4 with new P4 fails.
Does anyone have a clue how to solve this?
At the very least, can you give me a way to extract the stream name from the client name?
Thanks,
You must be logged in to Perforce to run p4 client. In an interactive shell you do this with p4 login, which prompts you for a password. Once that's validated, Perforce keeps you logged in for a week or so (depending on your P4 server setting). During that week your command should succeed, but once your session expires it will start failing again.
If session expiry is a problem for you, you will need to get hold of a non-expiring ticket. That must be enabled by your server admin (read "To create tickets that do not expire..." at p4 login). See also P4TICKETS.
As an alternative,
$stream = `p4 -F "%Stream%" -ztag client -o`;
This will give you just the stream name (no grep/awk trimming is required).
As another alternative, use p4 switch to show the name of the current stream.
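Putting the pieces together, a rough sketch of the trigger script. The service user, ticket file, and protected stream name are made-up placeholders, and the trigger line is assumed to pass %client% as the first argument:
#!/usr/bin/perl
use strict;
use warnings;

my ($client) = @ARGV;                      # client name passed from the trigger line (%client%)

$ENV{P4USER}    = 'trigger_svc';           # placeholder service user that owns a long-lived ticket
$ENV{P4TICKETS} = '/p4/.p4tickets';        # placeholder ticket file created beforehand with p4 login

# -ztag plus -F "%Stream%" prints only the Stream field of the client spec
my $stream = `p4 -F "%Stream%" -ztag client -o $client`;
chomp $stream;

if ($stream eq '//Depot/frozen') {         # placeholder stream on which check-out should be blocked
    print "Check-out is not allowed on stream $stream\n";
    exit 1;                                # non-zero exit rejects the user's command
}
exit 0;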

Script response if md5sum returns FAILED

Say I had a script that checked honeypot locations using md5sum.
#!/bin/bash
#cryptocheck.sh
#Designed to check md5 CRC's of honeypot files located throughout the filesystem.
#Must develop file with specific hashes and create crypto.chk using following command:
#/opt/bin/md5sum * > crypto.chk
#After creating file, copy honeypot folder out to specific folders
locations=("/share/ConfData" "/share/ConfData/Archive" "/share/ConfData/Application"
"/share/ConfData/Graphics")
for i in "${locations[#]}"
do
cd "$i/aaaCryptoAudit"
/opt/bin/md5sum -c /share/homes/admin/crypto.chk
done
And the output looked like this:
http://pastebin.com/b4AU4s6k
Where would you start to try to recognize a 'FAILED' in that output and perhaps trigger some sort of response from the system?
I've worked a bit with Perl trying to parse log files before, but my attempts typically failed miserably for one reason or another.
This may not be the proper way to go about it, but I'd want to put this script into a cron job that runs every minute. Some people have told me that an inotify job or script (which I'm not familiar with) would be better than doing it this way.
Any suggestions?
--- edit
I made another script to call the script above and send the output to a file. The new script then runs grep -q for 'FAILED', and if it picks anything up it sounds the alarm (what the alarm will be is TBD).
#!/bin/bash
#cryptocheckinit.sh
#
#rm /share/homes/admin/cryptoalert.warn
/share/homes/admin/cryptocheck.sh > /share/homes/admin/cryptoalert.warn
grep -q "FAILED" /share/homes/admin/cryptoalert.warn && echo "LIGHT THE SIGNAL FIRES"
Use:
if ! /opt/bin/md5sum -c /share/homes/admin/crypto.chk
then
# Do something
fi
Or pipe the output of the loop:
for i in "${locations[#]}"
do
cd "$i/aaaCryptoAudit"
/opt/bin/md5sum -c /share/homes/admin/crypto.chk
done | grep -q FAILED && echo "LIGHT THE SIGNAL FIRES"
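If you want the alarm raised per location rather than once for the whole run, here is a sketch combining the first form with your original loop (paths copied from the question; the cd guard is an addition):
for i in "${locations[@]}"
do
    cd "$i/aaaCryptoAudit" || continue     # skip a location whose honeypot folder is missing
    if ! /opt/bin/md5sum -c /share/homes/admin/crypto.chk
    then
        echo "LIGHT THE SIGNAL FIRES in $i"   # replace with the real alarm action
    fi
done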

Gathering Files from multiple computers into one

I am trying to gather files/folders from multiple computers on my network into one centralized folder on the command console (this is the name of the pseudo-server for this set of computers).
Basically, what I need is to collect a certain file from all the computers connected to my network and back it up on the console.
Example:
* data.txt // this is the file that I need to back up, and it's located in the same place on all the computers
* \\console\users\administrator\desktop\backup\%computername% // I need each computer to create a folder named after its computer name on the command console's desktop so I can keep track of which files belong to which computer
I was trying to use psexec to do this using the following code:
psexec @cart.txt -u administrator -p <password> cmd /c (^net use \\console /USER:administrator <password> ^& mkdir \\console\users\Administrator\Desktop\backup\%computername% ^& copy c:\data.txt \\console\USERS\Administrator\DESKTOP\backup\%computername%\)
Any other suggestions? I'm having trouble with this command.
Just use the copy command, it's much easier.
Take a look:
for /F %%a in (computerslist.txt) do (
copy \\%%a\c$\users\administrator\desktop\%%a\*.txt c:\mycollecteddata\%%a
)
That will copy all *.txt files from every computer listed in computerslist.txt; the copy runs with the current credentials. Save the code in a *.cmd file and execute it as the right user; you can create a scheduled task that runs as a user common to all the computers.
Good work.
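A variation closer to the layout described in the question, assuming data.txt sits at c:\data.txt on every machine; c:\mycollecteddata is a made-up destination root, and the per-machine folder is created before the copy:
for /F %%a in (computerslist.txt) do (
    if not exist "c:\mycollecteddata\%%a" mkdir "c:\mycollecteddata\%%a"
    copy "\\%%a\c$\data.txt" "c:\mycollecteddata\%%a\"
)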

How to get rid of the password expiry error message when connecting to an Oracle/DB2 database in a Solaris shell script/Perl program?

I am connecting to Oracle/DB2 databases through a shell script / Perl program. The databases I am connecting to require a password change every 60 days. This is our security policy and cannot be changed, but it creates a problem when connecting to the databases from a shell script or Perl program. To connect to an Oracle DB we use the following in a shell script:
sqlplus -s ${USER_NAME}/${PASSWD}@${DATABASE_NAME} <<EOF > $SQL_LOG/SITE_SQL.log
set echo off
set trimspool on
set pages 0
set linesize 1500
set feedback off
set head off
spool ${ETL_DIR}/SITE.txt
select LTRIM(RTRIM(COLUMN1))||'|'||LTRIM(RTRIM(COLUMN2)) from TABLE where COLUMN2 IN (${SITES});
exit
EOF
grep -i 'error' $SQL_LOG/SITE_SQL.log
if [ $? -ne 0 ]
then
echo "\n\n---------------------------->>`date`extraction successful\n\n---------------------------->>" >> $log
else
echo "\n\n---------------------------->>`date` Error with extraction from Table\n\n---------------------------->>" >> $log
exit -5
fi
But SITE_SQL.log, which holds the log for the database connection part, is getting the error message below in it:
ERROR:
ORA-28002: the password will expire within 13 days
which is making the script fail, even though the database connection succeeds and we get the required data in the spool file. When the script checks for errors in the log file SITE_SQL.log, it fails. I don't want to change the error-handling part; I just want to suppress this message from being displayed/logged in the log file, so that the script will not see it there.
We also have a Perl script that is facing the same problem. Below is the code used.
my $l_Var_SQL_Statement="Select to_date('$Var_Data_Date_1','YYYY-MM-DD')-max(load_date) from TABLE where LOAD_STATUS='Success'";
$RetVal=SubExecuteSQL($Var_REP_TMP,$Var_USER_DB,$Var_USER_DBUSER,$Var_USER_DBPASSWORD,$l_Var_SQL_Statement);
if($RetVal eq "ERROR") {
$system_date=`date`;
chomp($system_date);
$Message="$system_date:Error Executing Query :$l_Var_SQL_Statement\n$system_date:Database Details:DB=$Var_USER_DB,Use
r ID=$Var_USER_DBUSER, Password= $Var_USER_DBPASSWORD for $my_filename Repository";
SubWriteLogMsg("$Var_REP_LOG","$Var_REP_LOGFILE","$Message");
$Message="Error Executing Query :$l_Var_SQL_Statement. Check log file for connection details.";
SubWriteMailMsg("$Var_INFA_MAILFOLDER","$Var_INFA_MAILFILE","$Message");
SubLogLoadAbort("$Var_REP_LOG","$Var_REP_LOGFILE","$Var_INFA_MAILFOLDER","$Var_INFA_MAILFILE");
exit -1;
}
Here, since we are getting the password expiry alert message, the SubExecuteSQL function returns "ERROR", which makes the Perl script fail.
The DBAs will not agree to set the "password does not expire" option, as it is against security policy. The password is set to change every 60 days, so this error message will keep popping up and causing failures.
Please let me know how I can suppress this error message from being displayed/logged in the log file.
Thanks in advance
Before your redirection to the log file, put a grep command in a pipe such as:
| grep -v '^\s*\(ERROR:$\|ORA-\)'
ie:
sqlplus -s ${USER_NAME}/${PASSWD}@${DATABASE_NAME} <<EOF | grep -v '^\s*\(ERROR:$\|ORA-\)' > $SQL_LOG/SITE_SQL.log
Verify first that it works with a sample file: not all versions of grep support \s. If yours does not, use [ \t] instead (yes, the space character must be there, it's not a typo).
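One caveat: grep -v 'ORA-' drops every ORA- line, so a genuine error would also disappear from the log check. An untested alternative using sed (GNU sed assumed) that removes only the two-line expiry warning and leaves other errors intact:
sqlplus -s ${USER_NAME}/${PASSWD}@${DATABASE_NAME} <<EOF | sed '/^ERROR:$/{N;/ORA-28002/d;}' > $SQL_LOG/SITE_SQL.log
When sed meets an ERROR: line it appends the next line; if that turns out to be the ORA-28002 expiry warning, both lines are dropped, otherwise both are printed unchanged and your existing grep -i 'error' check still fires.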