Chef InSpec question: bash/command resource

I am trying to create a Chef InSpec resource that executes a bash script, but it seems it is not able to check for the condition.
describe bash(%q{cat /etc/shadow | awk -F: '($2 == "")'}) do
  its('stdout') { should be_empty }
end
The script simply checks whether the $2 field is empty or not. I can see that some entries are empty, but InSpec keeps failing. I changed from bash to command and still no luck. Can anyone help, please?
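Even with the quoting untangled (the %q literal above keeps Ruby from eating the inner double quotes), note what the check asserts: the awk pattern prints every /etc/shadow line whose second field is empty, so if empty-password entries exist, stdout is non-empty and be_empty fails by design. If the intent is instead to assert that such entries do exist, a sketch of the inverted check (the command resource behaves the same as bash here, and InSpec will generally need root to read /etc/shadow):

describe command(%q{awk -F: '($2 == "")' /etc/shadow}) do
  its('stdout') { should_not be_empty }
end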


Get ansible to read value from mysql and/or perl script

There may be a much better way to do what I need altogether. I'll give the background first, then my current (non-working) approach.
The goal is to migrate a bunch of servers from SLES 11 to SLES 12 using Ansible playbooks. The problem is that the newserver and the oldserver are supposed to have the same NFS-mounted dir. This has to be done at the beginning of the playbook so that all of the other tasks can be completed. The name of the dir being created can be determined in two ways: on the oldserver directly, or from a MySQL db query for the volume name on that oldserver. The newservers are named migrate-(oldservername). I tried to prompt for the volume name in Ansible, but that would then apply the same name to every server. Goal recap: the dir name must be determined from the oldserver and created on the newserver.
Approach 1: I've created a Perl script that Ansible will copy to the newserver; it will execute the MySQL query and create the dir itself. There are two problems with this: 1) mysql-client needs to be installed on each server. This is completely unnecessary for these servers and would have to be uninstalled after the query is run. 2) Copying files and remotely executing them seems like a bad approach in general.
Approach 2: Create a version of the above, except run it on the Ansible control machine (where mysql-client is installed already) and store the values as key:value pairs in a file. Problems: 1) I cannot figure out how to determine, in the Perl script, which hosts Ansible is running against; I would have to enter them manually. 2) I cannot figure out how to get Ansible to import those values correctly from the file created.
Here's the relevant Perl code I have for this:
my $newserver = "migrate-old.server.com";
my ($mig, $oldhost) = split(/-/, $newserver);
# Note: -p makes mysql prompt interactively for a password
my $volname = `mysql -u idm-ansible -p -s -N -e "select vol_name from assets.servers where hostname like '$oldhost'"`;
chomp $volname;
open(FH, ">", "vols.yml") or die "cannot open vols.yml: $!";
print FH "$newserver: $volname\n";
close(FH);
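On the Ansible side, if vols.yml is written as valid YAML (key: value, with a space after the colon, as above), the stock include_vars module can import it. A sketch, with /nfs as a hypothetical base path:

- hosts: new-servers
  tasks:
    # Load all key:value pairs from the file into one dict named volmap
    - include_vars:
        file: vols.yml
        name: volmap
    # Each newserver looks up its own entry by inventory name
    - file:
        path: "/nfs/{{ volmap[inventory_hostname] }}"
        state: directory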
My Ansible code is all over the place, as I've tried and commented out a ton of things. I can share it here if that would be helpful.
Approach 3: Do it completely in Ansible: basically a MySQL query looped over each host. Problem: I have absolutely no idea how to do this; I'm way too unfamiliar with Ansible. I think this is what I would prefer to try, though.
What is the best approach here? How do I go about getting the right value into Ansible to create the correct directory?
Please let me know if I can clarify anything.
Goal recap: the dir name must be determined from the oldserver and created on the newserver.
Will magic variables do?
Something like this:
---
- hosts: old-server
  tasks:
    - shell: "/get-my-mount-name.sh"
      register: my_mount_name

- hosts: new-server
  tasks:
    - shell: "/mount-me.sh --mount_name={{ hostvars['old-server'].my_mount_name.stdout }}"
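For Approach 3, a rough sketch of the per-host lookup run from the control machine. This is hedged: it assumes mysql-client and working non-interactive credentials (e.g. a ~/.my.cnf) on the control node, the /nfs base path is a made-up placeholder, and the migrate- prefix is stripped from each inventory name to recover the old hostname:

- hosts: new-servers
  tasks:
    - name: Query the volume name for this host's oldserver
      command: >
        mysql -u idm-ansible -s -N -e
        "select vol_name from assets.servers
        where hostname like '{{ inventory_hostname | regex_replace('^migrate-', '') }}'"
      delegate_to: localhost
      register: volname

    - name: Create the directory named after the volume
      file:
        path: "/nfs/{{ volname.stdout | trim }}"
        state: directory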

Perforce p4 sync command fails for a folder which has a space in its name

There is a folder with the name Test Logs. As can be seen, there is a space between Test and Logs.
When I try to get it locally using the sync command in a Perl script, it fails.
The script has the code:
system("p4 sync -f //depot/Test Logs/OnTargetLogs/...");
I get the following error:
> //depot/Test - no such file(s).
> Logs/OnTargetLogs/... - no such file(s).
Quote the argument maybe?
system("p4 sync -f \"//depot/Test Logs/OnTargetLogs/...\"");
– Sobrique
What you said worked. I also found another way of doing this:
my @a1 = ("p4", "sync", "-f", "//depot/Test Logs/OnTargetLogs/...");
system @a1;
– Vishal Khemani
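For reference, a minimal standalone sketch of that list form with basic error checking (same depot path as in the question); the list form of system() bypasses the shell entirely, so the embedded space needs no quoting or escaping:

#!/usr/bin/perl
use strict;
use warnings;

my @cmd = ("p4", "sync", "-f", "//depot/Test Logs/OnTargetLogs/...");
# system() returns 0 on success; $? carries the child's exit status
system(@cmd) == 0
    or die "p4 sync failed: exit status $?\n";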

Is it possible to list all tags across all behat tests?

I have several hundred Behat tests created by many people who used different tags. I want to clean this up, and to start with I want to list all the tags that have been used so far.
I wanted to answer my own question as it was something I could not find an answer to elsewhere.
I tried initially to use a custom formatter, but that did not work:
https://gist.github.com/paulmozo/fb23d8fb436700381a06
Eventually I crafted a Bash command to suit my purposes:
bin/behat --dry-run 2>&1 | tr ' ' '\n' | grep -w '@.*' | sort -u
This runs behat with --dry-run, which does not execute the tests but merely outputs the steps, so I can pipe them to another tool. The 2>&1 merges standard error into standard output so everything flows through the pipe. The tr tool replaces every space with a newline, putting each word on its own line. The grep keeps the words starting with the @ symbol, i.e. the tags. Finally, sort -u sorts the list and removes duplicates.
This command takes about 15 seconds to run and did the job perfectly for me.
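An alternative sketch that skips the dry run and scans the feature files directly; it assumes the features live under features/ and that tags use the usual @word form:

grep -rhoE '@[A-Za-z0-9_-]+' --include='*.feature' features/ | sort -u

Here -r recurses into the directory, -h suppresses file names, -o prints only the matched tag, and sort -u deduplicates as before.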

sed command is not working properly

I'm trying to replace a word with the sed -e command in a shell script, but it's not replacing. Please help with that; I have tried the below.
We have a separate file, /data/docs/config.log, and in that file there is the word ?account. For example:
username acc, password acc, ?account.name
This ?account word needs to be replaced with the word 'GLOBAL' using the sed -e command:
reacc = GLOBAL
sed -e "s/?account/$reacc/g" /data/docs/config.log > /data/docs/newconfig.log
But the file newconfig.log is created with size 0: no output is written to it, nothing is replaced, it's just an empty file.
The expected output in newconfig.log is: username acc, password acc, GLOBAL.name
Being the only person who can reproduce the problem, you are pretty much on your own. There are plenty of things you can do to analyze the problem, though.
Double-check the shell. Don't have blind faith in #!/bin/sh. In Cygwin, for example, /bin/sh is an alias for bash. Verify with: echo $SHELL
Check permissions and file system. Do you have rights to write to the output file? Is the disk full? Does cat /data/docs/config.log > /data/docs/newconfig.log work? Test again in a different folder.
Double-check the output file. Is it really empty, or is the file system just slow with updating the file size? Is sed really finished? Test without output redirection; see if the output is dumped to stdout.
Test with a small file; one or two lines is enough.
If even that does not work, then test sed itself. Who knows, maybe you have a weird alias that hides the real sed. The most trivial filter is sed -e '', which should simply echo every line you type (just like cat without parameters). Does that work? Then try some simple patterns.
Systematically iterate between test cases that succeed and test cases that fail until you have found the breaking point. Doing so, you should be able to find the cause. Sorry, that's all I can do for you right now.
Remove the spaces around =. Try again after making it:
reacc=GLOBAL
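A minimal demo of why the spaces matter: with them, the shell tries to run a command named reacc (and fails) instead of assigning the variable, so $reacc stays empty.

# Wrong: parsed as a command "reacc" with arguments "=" and "GLOBAL"
#reacc = GLOBAL
# Right: an assignment has no spaces around =
reacc=GLOBAL
printf 'username acc, password acc, ?account.name\n' \
  | sed -e "s/?account/$reacc/g"
# prints: username acc, password acc, GLOBAL.name

(? is an ordinary character in sed's basic regular expressions, so the pattern needs no escaping.)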

Script response if md5sum returns FAILED

Say I had a script that checked honeypot locations using md5sum.
#!/bin/bash
#cryptocheck.sh
#Designed to check md5 CRC's of honeypot files located throughout the filesystem.
#Must develop file with specific hashes and create crypto.chk using following command:
#/opt/bin/md5sum * > crypto.chk
#After creating file, copy honeypot folder out to specific folders
locations=("/share/ConfData" "/share/ConfData/Archive" "/share/ConfData/Application"
"/share/ConfData/Graphics")
for i in "${locations[@]}"
do
cd "$i/aaaCryptoAudit"
/opt/bin/md5sum -c /share/homes/admin/crypto.chk
done
And the output looked like this:
http://pastebin.com/b4AU4s6k
Where would you start in trying to recognize the output and perhaps trigger some sort of response by the system if there is a 'FAILED'?
I've worked a bit with Perl trying to parse log files before, but my attempts typically failed miserably for one reason or another.
This may not be the proper way to go about this, but I'd want to put this script into a cron job that runs every minute. Some guys told me that an inotify job or script (I'm not familiar with this) would be better than doing it this way.
Any suggestions?
--- edit
I made another script to call the script above and send the output to a file. The new script then runs grep -q on 'FAILED', and if it picks anything up, it sounds the alarm (TBD what the alarm will be).
#!/bin/bash
#cryptocheckinit.sh
#
#rm /share/homes/admin/cryptoalert.warn
/share/homes/admin/cryptocheck.sh > /share/homes/admin/cryptoalert.warn
grep -q "FAILED" /share/homes/admin/cryptoalert.warn && echo "LIGHT THE SIGNAL FIRES"
Use:
if ! /opt/bin/md5sum -c /share/homes/admin/crypto.chk
then
# Do something
fi
Or pipe the output of the loop:
for i in "${locations[@]}"
do
cd "$i/aaaCryptoAudit"
/opt/bin/md5sum -c /share/homes/admin/crypto.chk
done | grep -q FAILED && echo "LIGHT THE SIGNAL FIRES"
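Folding the first form into the loop from the question might look like this sketch (same paths as the question; the echo is a placeholder for whatever the real alarm ends up being):

#!/bin/bash
locations=("/share/ConfData" "/share/ConfData/Archive"
           "/share/ConfData/Application" "/share/ConfData/Graphics")

for i in "${locations[@]}"
do
    # Skip any location whose honeypot folder is missing
    cd "$i/aaaCryptoAudit" || continue
    if ! /opt/bin/md5sum -c /share/homes/admin/crypto.chk
    then
        echo "LIGHT THE SIGNAL FIRES"   # placeholder alarm action
    fi
done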