How do I convert a Binary .SQL file into readable text? - powershell

Let's say that I have backed up a database in SQL Server via a process that sends every table to an individual .SQL file. They're converted to binary (bcp's native format, not actual encryption) in such a way that makes it very painful to read anything.
The export commands live in a batch file, with this command producing the list of tables:
sqlcmd -S "%DB%,%PORT%" -d "%DBNAME%" -U %DBID% -P %DBPASSWORD% -Q "set nocount on;SELECT name FROM sys.tables order by name;" -h -1 -o table.txt
Followed by this, run once per table:
bcp.exe %DBNAME%.dbo.%%a out %%a.sql -S "%DB%,%PORT%" -n -U %DBID% -P %DBPASSWORD%
Conversely, the command to Import these files is:
bcp.exe %DBNAME%.dbo.%TableName% in %TableName%.sql -S "%HOST_IP%,%PORT%" -T -E -n 1
The problem is that these scripts are executed on the PC hosting the database - one that neither I nor my colleagues will always have access to. The point of pulling some of the values out of the raw .SQL files into plain text is to provide a 'checksum' of sorts, so we can validate some of the information found within the files without having to host SQL Server on our local machines.
With this, some of the words are almost plaintext... almost. Using a VERY cumbersome method I can get most of the values out, but I'm curious to know if there's a more native way to do this very thing.
Unfortunately, SSMS and the sqlcmd or bcp utilities won't work (to my knowledge?) without SQL Server / SSMS being installed on our local machines.
Example of some of the binary output: [sample omitted - it's control characters interleaved with fragments of text]
Here is some of the code from before. Basically I'm taking the hex equivalents of the binary characters, replacing them with plain-text markers, and then parsing them out with different -replace commands (still a little uglier than I'd like, but I'm getting output I can actually use):
$PSFindStr_0='\x12' #DC2#
$PSFindStr_1='\x14' #DC4#
$PSFindStr_2='\x13' #DC3#
$PSFindStr_3='\x0c' #FF#
$PSFindStr0='\x00' #NUL#
$File = 'C:\Foo.sql'
$OutFile = 'C:\Foo.txt'
$Finding = $PSFindStr_0
$Repl = "`r`n#DCTWO#"
$fil2Parse = [IO.File]::ReadAllText($File)
$filparsed = $fil2Parse -replace $Finding, $Repl
$Finding = $PSFindStr_1
$Repl = "`r`n#DCFOUR#"
$fil2Parse = $filparsed
$filparsed = $fil2Parse -replace $Finding, $Repl
$Finding = $PSFindStr_2
$Repl = "`r`n#DCTHREE#"
$fil2Parse = $filparsed
$filparsed = $fil2Parse -replace $Finding, $Repl
$Finding = $PSFindStr_3
$Repl = "`r`n#FF#"
$fil2Parse = $filparsed
$filparsed = $fil2Parse -replace $Finding, $Repl
$Finding = $PSFindStr0
$Repl = ''
$fil2Parse = $filparsed
$filparsed = $fil2Parse -replace $Finding, $Repl
[IO.File]::WriteAllText($OutFile, $filparsed, [System.Text.Encoding]::ASCII)
Get-Content $OutFile | Sort-Object -Unique |Set-Content $OutFile
The extended version of the code snippet above gives me output similar to what's below, but the question remains: is there a better way, without having to install MSSQL?
ADAM
ADAMO
BENJAMINA
BLAKET
BRIARS
BRIARSKATERINEJ
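For reference, the whole replace-and-dedupe pass above can be collapsed into a single pipeline. A minimal sketch, assuming the same hypothetical Foo.sql path and a heuristic (my own choice) of keeping only runs of three or more printable characters:
$File = 'C:\Foo.sql'
$OutFile = 'C:\Foo.txt'
# Split records on the control characters bcp emits, drop NULs,
# and keep only the printable runs, deduplicated.
[IO.File]::ReadAllText($File) -split '[\x0c\x12\x13\x14]' -replace '\x00' |
    ForEach-Object { [regex]::Matches($_, '[\x20-\x7e]{3,}').Value } |
    Sort-Object -Unique |
    Set-Content $OutFile -Encoding Ascii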

Related

Powershell - Select a specific word from text

I'm still very much learning PowerShell, and I'm a bit lost regarding which command to use.
I have this
docker login -u AWS -p PASSWORD -e none number.dkr.ecr.REGION.amazonaws.com
and I want to select PASSWORD.
Ideally, I would like this PASSWORD to get into a file, and use it after (but that I can do).
I am lost on what command to use. I know awk '{print $6}' would work, but I need PowerShell since I'm using a Windows machine.
I know it's a really simple question and I have been reading answers, but I'm just confused by the different parameters and the different approaches people use. PowerShell is wonderful, but I'm still learning.
Thanks a lot!!
The default field separator(s) in awk is whitespace, so we can do the same in PowerShell and then grab the 6th resulting substring.
For this, we can use the -split regex operator - it supports the following syntaxes:
[strings] -split [pattern]
or
-split [strings]
When used as in the second example above, it defaults to splitting on whitespace just like awk:
-split "docker login -u AWS -p PASSWORD -e none number.dkr.ecr.REGION.amazonaws.com"
If we wrap the expression in @(), we can index into it:
$Password = @(
    -split "docker login -u AWS -p PASSWORD -e none number.dkr.ecr.REGION.amazonaws.com"
)[5]
or we can use the Select-Object cmdlet to grab it:
$Password = -split "docker login -u AWS -p PASSWORD -e none number.dkr.ecr.REGION.amazonaws.com" |Select-Object -Index 5
If, however, we always want to grab the substring immediately after -p instead of whatever the 6th string is, we could use the -replace regex operator instead:
$string = "docker login -u AWS -p PASSWORD -e none number.dkr.ecr.REGION.amazonaws.com"
$string -replace '^.*-p\s+(\S+).*$','$1'
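Either way, a quick check against the sample string (token index 5, zero-based, is the sixth field, and the capture group lands on the same value):
$string = "docker login -u AWS -p PASSWORD -e none number.dkr.ecr.REGION.amazonaws.com"
(-split $string)[5]                        # PASSWORD
$string -replace '^.*-p\s+(\S+).*$', '$1'  # PASSWORD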
You can make this more complex :) I just used a simple regex, but you can make it better :)
$x = "docker login -u AWS -p PASSWORD -e none number.dkr.ecr.REGION.amazonaws.com"
$x -match "(\-p[a-z0-9 ]*\-e)"
$matches[1] -replace '(\-p) ([a-z0-9 ]*) (\-e)' , ' $2'
Another solution using a regular expression, here with lookarounds:
$string = 'docker login -u AWS -p PASSWORD -e none number.dkr.ecr.REGION.amazonaws.com'
if ($string -match '(?<=-p\s+).*(?=\s+-e)'){
    $Password = $Matches[0]
} else {
    "No password found"
}
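Since the goal was to get the password into a file, any of these approaches can be finished off with something like this (the path is just an example):
$Password | Set-Content 'C:\temp\password.txt'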

How to copy a CSV file from a URL to PostgreSQL

Is there any way to use the COPY command for batch data import, reading the data from a URL? For example, COPY has syntax like:
COPY sample_table
FROM 'C:\tmp\sample_data.csv' DELIMITER ',' CSV HEADER;
What I want is to give not a local path but a URL. Is there any way?
It's pretty straightforward, provided you have an appropriate command-line tool available:
COPY sample_table FROM PROGRAM 'curl "http://www.example.com/file.csv"'
Since you appear to be on Windows, I think you'll need to install curl or wget yourself. There is an example using wget on Windows here which may be useful.
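If you'd rather not depend on curl or wget, another option is to download the file first and stream it to psql yourself. A rough PowerShell sketch, where the URL reuses the example above, the database name is a placeholder, and psql is assumed to be on PATH:
$url = 'http://www.example.com/file.csv'
Invoke-WebRequest -Uri $url -OutFile "$env:TEMP\sample_data.csv"
# COPY ... FROM STDIN makes psql feed its standard input to the server.
Get-Content "$env:TEMP\sample_data.csv" |
    psql -d mydb -c "COPY sample_table FROM STDIN DELIMITER ',' CSV HEADER;"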
My solution is:
cat $file |
tail -$numberLine |
sed 's/ / ,/g' |
psql -q -d $dataBaseName -c "COPY tableName FROM STDIN DELIMITER ','"
You can insert an awk between sed and psql to add a missing column. This is handy if you already know what to put in the missing column:
awk '{print $0" , "'info_about_missing_column'"\n"}'
I have done that, and it works and is faster than INSERT.

Running Perl within another Perl

I have an existing Perl script which I run like this on UNIX:
echo $query | runQuery.pl -a -f,
It runs a query and returns rows with the -f delimiter, which in the above case would be ",".
I want to run the same command in another Perl script where I have already defined $query, and store the results in an array (@results).
I was not able to get system() to work properly. Please assist.
Thanks and Regards,
-Tushar
Something like this ought to work for you (system() only returns the exit status, while backticks capture the command's standard output, one line per array element):
my $arguments = '-a -f,';
my @results = `echo $query | runQuery.pl $arguments`;

Dynamically building an exclude list for both rsync & egrep

I wonder if anyone out there can assist me in trying to solve an issue.
I have written a set of shell scripts whose purpose is auditing remote file systems against a GOLD build on an audit server.
As part of this, I do the following:
1) Use rsync to work out any new files or directories, and any modified or removed files
2) Use find ${source_filesystem} -ls on both local & remote to work out permissions differences
Now as part of this there are certain files and directories that I am excluding, e.g. logs, trace files, etc.
So in order to achieve this I use 2 methods:
1) RSYNC - I have an exclude-list that is added using --exclude-from flag
2) find -ls - I use an egrep -v statement to exclude the same entries as the rsync exclude-list:
e.g. find -L ${source_filesystem} -ls | egrep -v "$SEXCLUDE_supt"
So my issue is that I have to maintain 2 separate lists, and this is a bit of an admin nightmare.
I am looking for some assistance, or some advice on whether it is possible to dynamically build a list of exclusions that can be used for both the rsync and the find -ls.
Here is the format of what the exclude lists look like:
RSYNC:
*.log
*.out
*.csv
logs
shared
tracing
jdk*
8.6_Code
rpsupport
dbarchive
inarchive
comms
PR116PICL
**/lost+found*/
dlxwhsr*
regression
tmp
working
investigation
Investigation
dcsserver_weblogic_*.ear
dcswebrdtEAR_weblogic_*.ear
FIND:
SEXCLUDE_supt="\.log|\.out|\.csv|logs|shared|PR116PICL|tracing|lost\+found|jdk|8\.6\_Code|rpsupport|dbarchive|inarchive|comms|dlxwhsr|regression|tmp|working|investigation|Investigation|dcsserver_weblogic_|dcswebrdtEAR_weblogic_"
You don't need to create a second list for your find command. grep can handle a list of patterns using the -f flag. From the manual:
-f FILE, --file=FILE
Obtain patterns from FILE, one per line. The empty file contains zero
patterns, and therefore matches nothing. (-f is specified by POSIX.)
Here's what I'd do:
find -L ${source_filesystem} -ls | grep -Evf your_rsync_exclude_file_here
This should also work for filenames containing newlines and spaces. Please let me know how it goes.
In the end the grep -Evf approach was a bit of a nightmare, because rsync doesn't take the same patterns: its exclude list looks regex-like but actually uses glob-style matching, so one list couldn't feed both tools directly.
So I then pursued my other idea of dynamically building the exclude list for egrep, by parsing the rsync exclude-list and building a variable on the fly to pass into egrep.
This is the method I used:
#!/bin/ksh
# Create Signature of current build
AFS=$1
#Create Signature File
crSig()
{
find -L ${SRC} -ls | egrep -v "$SEXCLUDE" | awk '{fws = ""; for (i = 11; i <= NF; i++) fws = fws $i " "; print $3, $6, fws}' | sort >${BASE}/${SIFI}.${AFS}
}
#Setup SRC, TRG & SCROOT
LoadAuditReqs()
{
export SRC=`grep ${AFS} ${CONF}/fileSystem.properties | awk {'print $2'}`
export TRG=`grep ${AFS} ${CONF}/fileSystem.properties | awk {'print $3'}`
export SCROOT=`grep ${AFS} ${CONF}/fileSystem.properties | awk {'print $4'}`
export BEXCLUDE=$(sed -e 's/[*/]//g' -e 's/\([._+-]\)/\\\1/g' ${CONF}/exclude-list.${AFS} | tr "\n" "|")
export SEXCLUDE=$(echo ${BEXCLUDE} | sed 's/\(.*\)|/\1/')
}
#Load Properties File
LoadProperties()
{
. /users/rpapp/rpmonit/audit_tool/conf/environment.properties
}
#Functions
LoadProperties
LoadAuditReqs
crSig
So with these new variables:
export BEXCLUDE=$(sed -e 's/[*/]//g' -e 's/\([._+-]\)/\\\1/g' ${CONF}/exclude-list.${AFS} | tr "\n" "|")
export SEXCLUDE=$(echo ${BEXCLUDE} | sed 's/\(.*\)|/\1/')
I use them to remove "*" and "/", then match my special characters and prepend "\" to escape them.
Then tr replaces each newline with "|", and a final sed strips the trailing "|", giving the $SEXCLUDE variable that egrep uses in the crSig function.
What do you think?

Pulling hostname from TNS entry

I am working on a script that will need to determine which node a DB used by a local app is running on. I've been using this as a chance to force myself to learn awk/sed, and I have a test script to try the statements out. It works off a copy of the tnsnames.ora file that I've moved into the home folder the script is located in.
Here is a valid tnsnames.ora stanza:
(
DESCRIPTION = (
ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP) (Host=iplab)(Port=1521))
)
(CONNECT_DATA=(SID=spurs1))
)
After doing some research and getting the awk expression to pull the TNS entry into $host, I came up with the script below, but it doesn't seem to work.
#!/bin/ksh
db=spurs
host=$(awk -v db=$db "/${db}/ {for(i=1; i<=5; i++) {getline; print}}" tnsnames.ora)
echo $host
host= $host | sed 's/Host\s=\s\([a-z]+[0-9]?\)/\1/'
echo $host
When I run it, the awk statement gives me the following:
(DESCRIPTION = (ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP) (Host=hostname)(Port=1521))) (CONNECT_DATA=(SID=spurs1)) )
./tns.ksh: line 6: (DESCRIPTION: not found
(DESCRIPTION = (ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP) (Host=hostname)(Port=1521))) (CONNECT_DATA=(SID=spurs1)) )
From what I have seen reading tutorials and forums, I think sed is set up correctly and should be setting $host to one or more lowercase letters followed by 0 or 1 digits after Host =. Since (DESCRIPTION is the start of $host both before and after the sed statement, I'm not sure why it isn't finding it.
This worked for me:
tnsping $db | grep HOST | cut -d\ -f 14 | sed 's/).*//g'
On my system I can use this to get the host as long as the host name doesn't have an equals sign (or the actual literal word HOST in the name of the host):
echo $TNS_STRING | sed 's/.*HOST//g' | sed 's/).*//g' | sed 's/=//g' | sed 's/\s*//g'
Your value for $host is likely a multiline value, so you need to quote it anyplace you use it, i.e.
host=$(awk -v db=$db "/${db}/ {for(i=1; i<=5; i++) {getline; print}}" tnsnames.ora)
echo "$host"
You also need to capture the output (using command-substitution) via $(...)
host=$(echo "$host" | sed 's/Host\s=\s\([a-z]+[0-9]?\)/\1/')
echo "$host"
(and echo it), so it can be processed by sed
Revised:
host=$(echo $host | sed 's/.*Host=//; s/).*$//')
echo "$host"
I've switched back to just $host, without the double-quotes, as you don't want the linebreaks in the data. Now it is all one big string, and the regex strips everything up to Host=, and then strips everything after the first remaining ) character.
If you still get error messages: I don't have access to a tnsnames.ora record, so please edit your question to include a valid record.
I hope this helps.
You may be better off relying on the output of tnsping instead of parsing the file; tnsping appears to emit the description on one line:
host=$(
tnsping $db | while read line; do
if [[ "$line" == *HOST* ]]; then
s=${line#*HOST=}; s=${s%%)*}; echo "$s"; break
fi
done
)
This might work for you:
db=spurs
host=$(sed '/^(/,/^)/!d;/^(/{h;d};H;/^)/!d;g;/'"$db"'/!d;s/.*Host=\([^)]*\).*/\1/' tnsnames.ora)
Tested Code:
OIFS=$IFS;
IFS="(";
tns=`tnsping TNS_ALIAS`
tns_arr=($tns);
tns_info=(`(for ((i=0; i<${#tns_arr[@]}; ++i)); do echo "${tns_arr[$i]/)/}"; done)| grep 'HOST\|PORT'|sed 's/)//g'|sed 's/ //g'`)
for ((i=0; i<${#tns_info[@]}; ++i)); do eval "export ${tns_info[$i]}"; done
echo "host:" $HOST
echo "port:" $PORT
IFS=$OIFS;