Use a Chef recipe to modify a single line in a config file - mongodb

I'm trying to automate disabling the Transparent Huge Pages (THP) Settings for MongoDB using a Chef Recipe.
The THP setting is explained here: MongoDocs THP Settings
I'm trying to follow the first option "In Boot-Time Configuration (Preferred)" by editing the grub configuration file at "/etc/grub.conf"
All I need to do is append "transparent_hugepage=never" to the end of the existing line that starts with "kernel "
I know I can replace a line with Chef::Util::FileEdit, using something like this:
ruby_block "replace_line" do
  block do
    file = Chef::Util::FileEdit.new("/etc/grub.conf")
    file.search_file_replace_line("/kernel/", "kernel <kernel path> <kernel options> transparent_hugepage=never")
    file.write_file
  end
end
but I need to keep the existing kernel path and kernel options.
I've tried playing around with Chef::Util::Editor, but haven't been successful in calling its constructor. Chef::Util::FileEdit is initialized with a file path (per above), but the Ruby docs say that Chef::Util::Editor is initialized with "lines". I've tried
lines = Chef::Util::Editor.new(<lines>)
where <lines> was a file path, a Chef::Util::FileEdit.new(...) object, and a plain 'test string', but none of those worked.
Does anyone have any experience with the Chef::Util::Editor? Or a better solution?
Thanks

I never figured out how to modify a single line in a config file using Chef, but here's the recipe I ended up using to disable THP settings for MongoDB.
Recipe: Install MongoDB
# Install MongoDB on Amazon Linux
# http://docs.mongodb.org/manual/tutorial/install-mongodb-on-amazon/
# 1: configure the package management system (yum)
# 2: install mongodb
# 3: configure mongodb settings
# 3.A: give mongod permission to files
# data & log directories (everything in /srv/mongodb)
# http://stackoverflow.com/questions/7948789/mongodb-mongod-complains-that-there-is-no-data-db-folder
execute "mongod_permission" do
  command "sudo chown -R mongod:mongod /srv/mongodb"
  #command "sudo chown mongod:mongod /var/run/mongodb/mongod.pid"
  #command "sudo chown -R $USER /srv/mongodb"
end
# 3.B: edit Transparent Huge Pages (THP) Settings
# get rid of mongod startup warning
# http://docs.mongodb.org/manual/reference/transparent-huge-pages/#transparent-huge-pages-thp-settings
# 3.B.1: disable
execute "disable_thp_khugepaged_defrag" do
  command "echo 0 | sudo tee /sys/kernel/mm/transparent_hugepage/khugepaged/defrag" # different b/c file doesn't have options list
end
execute "disable_thp_hugepage_defrag" do
  command "echo never | sudo tee /sys/kernel/mm/transparent_hugepage/defrag"
end
execute "disable_thp_hugepage_enables" do
  command "echo never | sudo tee /sys/kernel/mm/transparent_hugepage/enabled"
end
# 3.B.2: verify disabled on reboot
template "/etc/rc.local" do
  source "init-rc.local.erb"
  owner 'root'
  group 'root'
  mode '0775'
end
# 4: use upstart & monit to keep mongod alive
Template: init-rc.local.erb
touch /var/lock/subsys/local
if test -f /sys/kernel/mm/transparent_hugepage/khugepaged/defrag; then
  echo 0 > /sys/kernel/mm/transparent_hugepage/khugepaged/defrag
fi
if test -f /sys/kernel/mm/transparent_hugepage/defrag; then
  echo never > /sys/kernel/mm/transparent_hugepage/defrag
fi
if test -f /sys/kernel/mm/transparent_hugepage/enabled; then
  echo never > /sys/kernel/mm/transparent_hugepage/enabled
fi
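To check by hand that the settings took effect (not part of the recipe; the exact option list varies by kernel, but the bracketed value is the active one):
cat /sys/kernel/mm/transparent_hugepage/enabled
# always madvise [never]
cat /sys/kernel/mm/transparent_hugepage/defrag
# always madvise [never]
cat /sys/kernel/mm/transparent_hugepage/khugepaged/defrag
# 0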

The problem with your own solution is that the template can be overwritten by another recipe with its own rc.local template.
To avoid that, I append the lines to the existing rc.local instead:
execute "disable_thp_hugepage_defrag" do
  command "sudo sed -i -e '$i \\echo never > /sys/kernel/mm/transparent_hugepage/defrag\\n' /etc/rc.local"
  not_if 'grep -c "transparent_hugepage/defrag" /etc/rc.local'
end
execute "disable_thp_hugepage_enables" do
  command "sudo sed -i -e '$i \\echo never > /sys/kernel/mm/transparent_hugepage/enabled\\n' /etc/rc.local"
  not_if 'grep -c "transparent_hugepage/enabled" /etc/rc.local'
end
The grep in the not_if guard makes sure the line is not added again if it is already there.
Maybe Chef has something better to manage that?
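For reference, here is roughly what that sed command does when run by hand (the doubled backslashes in the Ruby string become single ones by the time the shell sees them; typical rc.local contents assumed):
sudo sed -i -e '$i \echo never > /sys/kernel/mm/transparent_hugepage/defrag\n' /etc/rc.local
# '$' addresses the last line of /etc/rc.local (often 'exit 0'), and 'i' inserts the
# echo line just before it, so the file keeps ending with its original last line.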

You can efficiently replace the contents of the file while keeping the matched text by using a backreference in the replacement,
e.g.
appending "transparent_hugepage=never" to the end of the existing line that starts with "kernel ":
ruby_block "replace_line" do
  block do
    file = Chef::Util::FileEdit.new("/etc/grub.conf")
    file.search_file_replace_line(/kernel.*/, '\0 transparent_hugepage=never')
    file.write_file
  end
end
\0 inserts the whole matched string into the replacement.
Note the single quotes: they keep \0 from being interpreted by Ruby, so it reaches the replacement as a backreference.
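For comparison, sed uses & for the same whole-match idea, so a rough shell equivalent of that replacement (keeping a .bak backup, and assuming "kernel " only occurs on the boot lines) would be:
sed -i.bak 's/kernel .*/& transparent_hugepage=never/' /etc/grub.conf   # & is the whole matched text, like \0 above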

I disabled hugepages by replicating the following in Chef (it looks the same as above, but with the addition of a not_if guard):
echo never | sudo tee /sys/kernel/mm/transparent_hugepage/defrag
e.g.:
execute "disable_hugepage_defrag" do
  not_if "grep -F '[never]' /sys/kernel/mm/transparent_hugepage/defrag"
  command "echo 'never' > /sys/kernel/mm/transparent_hugepage/defrag"
end
I have also had success inserting lines with file.insert_line_if_no_match; the line-replace feature will probably work for you:
search_file_replace_line(regex, newline) ⇒ Object
ruby_block 'replace_line' do
  block do
    file = Chef::Util::FileEdit.new('/path/to/file')
    file.search_file_replace_line(/Line to find/, 'Line to replace with')
    file.write_file
  end
end

setup new database in ubuntu using a script [duplicate]

I have a script where I need to start a command, then pass some additional commands as commands to that command. I tried
su
echo I should be root now:
who am I
exit
echo done.
... but it doesn't work: The su succeeds, but then the command prompt is just staring at me. If I type exit at the prompt, the echo and who am i etc start executing! And the echo done. doesn't get executed at all.
Similarly, I need for this to work over ssh:
ssh remotehost
# this should run under my account on remotehost
su
## this should run as root on remotehost
whoami
exit
## back
exit
# back
How do I solve this?
I am looking for answers which solve this in a general fashion, and which are not specific to su or ssh in particular. The intent is for this question to become a canonical for this particular pattern.
Adding to tripleee's answer:
It is important to remember that the section of the script formatted as a here-document for another shell is executed in a different shell with its own environment (and maybe even on a different machine).
If that block of your script contains parameter expansion, command substitution, and/or arithmetic expansion, then you must use the here-document facility of the shell slightly differently, depending on where you want those expansions to be performed.
1. All expansions must be performed within the scope of the parent shell.
Then the delimiter of the here document must be unquoted.
command <<DELIMITER
...
DELIMITER
Example:
#!/bin/bash
a=0
mylogin=$(whoami)
sudo sh <<END
a=1
mylogin=$(whoami)
echo a=$a
echo mylogin=$mylogin
END
echo a=$a
echo mylogin=$mylogin
Output:
a=0
mylogin=leon
a=0
mylogin=leon
2. All expansions must be performed within the scope of the child shell.
Then the delimiter of the here document must be quoted.
command <<'DELIMITER'
...
DELIMITER
Example:
#!/bin/bash
a=0
mylogin=$(whoami)
sudo sh <<'END'
a=1
mylogin=$(whoami)
echo a=$a
echo mylogin=$mylogin
END
echo a=$a
echo mylogin=$mylogin
Output:
a=1
mylogin=root
a=0
mylogin=leon
3. Some expansions must be performed in the child shell, some - in the parent.
Then the delimiter of the here document must be unquoted and you must escape those expansion expressions that must be performed in the child shell.
Example:
#!/bin/bash
a=0
mylogin=$(whoami)
sudo sh <<END
a=1
mylogin=\$(whoami)
echo a=$a
echo mylogin=\$mylogin
END
echo a=$a
echo mylogin=$mylogin
Output:
a=0
mylogin=root
a=0
mylogin=leon
A shell script is a sequence of commands. The shell will read the script file, and execute those commands one after the other.
In the usual case, there are no surprises here; but a frequent beginner error is assuming that some commands will take over from the shell, and start executing the following commands in the script file instead of the shell which is currently running this script. But that's not how it works.
Basically, scripts work exactly like interactive commands, but how exactly they work needs to be properly understood. Interactively, the shell reads a command (from standard input), runs that command (with input from standard input), and when it's done, it reads another command (from standard input).
Now, when executing a script, standard input is still the terminal (unless you used a redirection) but the commands are read from the script file, not from standard input. (The opposite would be very cumbersome indeed - any read would consume the next line of the script, cat would slurp all the rest of the script, and there would be no way to interact with it!) The script file only contains commands for the shell instance which executes it (though you can of course still use a here document etc to embed inputs as command arguments).
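A quick way to see this for yourself (a hypothetical two-line script):
#!/bin/sh
# 'read' waits for input from standard input (normally your terminal);
# it does not consume the next line of this script file.
read answer
echo "you typed: $answer"   # this line is still executed by the shell afterwards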
In other words, these "misunderstood" commands (su, ssh, sh, sudo, bash etc) when run alone (without arguments) will start an interactive shell, and in an interactive session, that's obviously fine; but when run from a script, that's very often not what you want.
All of these commands have ways to accept commands by ways other than in an interactive terminal session. Typically, each command supports a way to pass it commands as options or arguments:
su root -c 'who am i'
ssh user@remote uname -a
sh -c 'who am i; echo success'
Many of these commands will also accept commands on standard input:
printf 'uname -a; who am i; uptime' | su
printf 'uname -a; who am i; uptime' | ssh user@remote
printf 'uname -a; who am i; uptime' | sh
which also conveniently allows you to use here documents:
ssh user@remote <<'____HERE'
uname -a
who am i
uptime
____HERE
sh <<'____HERE'
uname -a
who am i
uptime
____HERE
For commands which accept a single command argument, that command can be sh or bash with multiple commands:
sudo sh -c 'uname -a; who am i; uptime'
As an aside, you generally don't need an explicit exit because the command will terminate anyway when it has executed the script (sequence of commands) you passed in for execution.
If you want a generic solution which will work for any kind of program, you can use the expect command.
Extract from the manual page:
Expect is a program that "talks" to other interactive programs according to a script. Following the script, Expect knows what can be expected from a program and what the correct response should be. An interpreted language provides branching and high-level control structures to direct the dialogue. In addition, the user can take control and interact directly when desired, afterward returning control to the script.
Here is a working example using expect:
set timeout 60
spawn sudo su -
expect "*?assword" { send "*secretpassword*\r" }
send_user "I should be root now:"
expect "#" { send "whoami\r" }
expect "#" { send "exit\r" }
send_user "Done.\n"
exit
The script can then be launched with a simple command:
$ expect -f custom.script
You can view a full example in the following page: http://www.journaldev.com/1405/expect-script-example-for-ssh-and-su-login-and-running-commands
Note: The answer proposed by @tripleee would only work if standard input could be read once at the start of the command, or if a tty had been allocated, and won't work for any interactive program.
Example of errors if you use a pipe
echo "su whoami" |ssh remotehost
--> su: must be run from a terminal
echo "sudo whoami" |ssh remotehost
--> sudo: no tty present and no askpass program specified
In SSH, you might force a TTY allocation with multiple -t parameters, but when sudo will ask for the password, it will fail.
Without the use of a program like expect any call to a function/program which might get information from stdin will make the next command fail:
ssh user@host <<'____HERE'
echo "Enter your name:"
read name
echo "ok."
____HERE
--> The `echo "ok."` string will be passed to the "read" command

How can I edit crontabs in VS Code?

If I try to use Visual Studio Code (on macOS 10.15) to edit my crontab, it opens an empty file without the contents of my crontab.
$ VISUAL='code' crontab -e
crontab: no changes made to crontab
I didn't actually expect this to work (without -w) but include it for completeness. But when I add the -w it still fails.
$ VISUAL="code -w" crontab -e
crontab: code -w: No such file or directory
crontab: "code -w" exited with status 1
It occurred to me that there may be some weirdness with quoting, but neither single quotes nor the following fixed anything:
$ function codew() {
function> code -w "$1"
function> }
$ export VISUAL='codew'
$ crontab -e
The problem seems to be that the crontab's tempfile is not actually present. But how do I solve this? How can I use VS Code to edit crontabs?
Create a file ~/code-wait.sh with the following contents:
#!/bin/bash
OPTS=""
if [[ "$1" == /tmp/* ]]; then
  OPTS="-w"
fi
/usr/local/bin/code ${OPTS:-} -a "$@"
Make this file executable:
chmod 755 ~/code-wait.sh
Add to your .bashrc or .bash_profile or .zshrc:
export VISUAL=~/code-wait.sh
export EDITOR=~/code-wait.sh
Then run:
crontab -e
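To try it for a single invocation without editing your shell profile, you can also point VISUAL at the wrapper just for that one command (same path as created above):
VISUAL=~/code-wait.sh crontab -e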
Here is the setting that works for me.
.bashrc
## vscode
export VISUAL=/path/to/code-wait.sh
export EDITOR=/path/to/code-wait.sh
code-wait.sh
#!/bin/sh
code -w "$@"
That is quite a complex issue because there is no way to detect which tool calls the preferred editor. The TTY is the same and no environment variables can help.
Still, I was able to come up with a solution that enables the foreground mode (wait) for temporary files. IMHO, most if not all tools that use external editors and are waiting for them to save the file do use temporary files.
Full script is at https://github.com/ssbarnea/harem/blob/master/bin/edit but I will include here the main snippet:
#!/bin/bash
OPTS=""
if [[ "$1" == /tmp/* ]]; then
  OPTS="-w"
fi
/usr/local/bin/code ${OPTS:-} -a "$@"

Perl using the -i option on a vboxsf share: Can't remove input_file Text file busy, skipping file

System: Arch Linux in VirtualBox 5.1.26 on Windows 10 Host
I'm trying to use perl like sed in the terminal for in-place substitution of the input file:
perl -i -p -e 's/orig/replace/g' input_file
But I always get:
Can't remove input_file Text file busy, skipping file
This happens only if the file is inside a VirtualBox vboxsf share. With all other tools (sed, mv, vim or whatever) it is no problem to change the file.
This problem seems to be related to:
https://www.virtualbox.org/ticket/2553
https://forums.virtualbox.org/viewtopic.php?t=4437
I can't find any solution googling around :(
Update:
Using perl -i.bak -p -e 's/orig/replace/g' input_file I get a similar message:
Can't rename input_file to input_file.bak: Text file busy, skipping file.
This is exactly the same message that gedit shows, so it is the same behavior; but searching around I can only find the gedit topic. It seems no one has noticed this with perl -i.
While you are running a unix OS, you are still using a Windows file system. NTFS doesn't support anonymous files like unix file systems, and Perl -i requires support for anonymous files.
The workaround is to use a temporary backup file by using -i<ext> (e.g. -i~) instead of plain -i.
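If the backup-file variant also fails on your share (as in your update), a minimal sketch that avoids the rename entirely is to write the result to a temporary file outside the share and then copy its contents back over the original, which keeps the same inode (file names here are illustrative):
perl -p -e 's/orig/replace/g' input_file > /tmp/input_file.new
cat /tmp/input_file.new > input_file   # overwrite contents in place; no unlink/rename on the vboxsf share
rm /tmp/input_file.new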
I had the same problem. My solution is a bash script: copy the files to a tmp directory, do the search and replace there, copy the tmp files back over the originals, then delete the tmp directory. If you need to, you can add parameters to the script for dynamic search and replace, and create an alias so you can call it directly from anywhere.
#!/bin/bash
echo "Removing text from .log files..."
echo "Creating tmp-dir..."
mkdir /tmp/myTmpFiles/
echo "Copy .log files to tmp..."
cp -v /home/user/sharedfolder/*.log /tmp/myTmpFiles/
echo "Search and Replace in tmp-files..."
perl -i -p0e 's/orig/replace/g' /tmp/myTmpFiles/*.log
echo "Copy .log to sharedfolder"
cp -v /tmp/myTmpFiles/*.log /home/user/sharedfolder/
echo "Remove tmp-dir..."
rm -vr /tmp/myTmpFiles/
echo "Done..."

Supervisord- Execute a command before starting the application / program

Using supervisord, how do I execute a command before running the program?
For example in the code below, I want a file to be created before starting the program. In the code below I am using tail -f /dev/null to simulate the background process but this could be any running program like '/path/to/application'.
I tried '&&' and this doesn't seem to work. The requirement is that the file has to be created first in order for the application to work.
[supervisord]
nodaemon=true
logfile=~/supervisord.log
[program:app]
command:touch ~a.c && tail -f /dev/null
The problem is that supervisor isn't running a shell to interpret command sections, so "&&" is just one of 5 space separated arguments it is passing to the touch command; if this ran successfully, then there should be some unusual filenames in its working directory now.
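You can reproduce what supervisord actually ran by passing the same argument vector to touch yourself, with quoting so your own shell does not interpret it either (rough illustration; GNU touch treats -f as an ignored option):
touch '~a.c' '&&' 'tail' '-f' '/dev/null'
ls
# ~a.c  &&  tail      <- '&&' and 'tail' were created as regular files, and no tilde expansion happened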
You can use a shell as your command and pass it the shell logic you would like:
command=/bin/sh -c "touch ~a.c && tail -f /dev/null"
Usually, this type of shell wrapper is the interface provided and managed by the app itself, and it is what supervisord and other process managers just know how to call with paths and options, i.e.:
command=myappswrapper.sh ~a.c
(where myappswrapper.sh is:)
#!/bin/sh
touch "$1" && tail -f /dev/null
Here is a trick: use a shell script to do the setup and then start the program.
[program:app]
command:sh /path/to/your/script.sh
And your script.sh can be:
touch ~a.c
exec tail -f /dev/null
Notice the exec: it replaces the shell with the tail process, so supervisord ends up supervising the actual program rather than the wrapper shell.

How can I make a shell script indicate that it was successful?

If I have a basic .sh file containing the following script code:
#!/bin/sh
rm -rf "MyFolder"
How do I make this running script file display results to the terminal that will indicate if the directory removal was successful?
You don't really need to make it say it was successful. You could have it say something only on error ✖, and then silence means success ✔.
That's how the Unix philosophy works:
The rule of silence, also referred to as the silence is golden rule, is an important part of the Unix philosophy that states that when a program has nothing surprising, interesting or useful to say, it should say nothing. It means that well-behaved programs should treat their users' attention and concentration as being valuable and thus perform their tasks as unobtrusively as possible. That is, silence in itself is a virtue. http://www.linfo.org/rule_of_silence.html
That's the way rm itself behaves.
If you are asking about the general case, as suggested by your question's title, you can run your script with sh -x scriptname to see what it's doing. It's also quite common to write diagnostic output into the script itself, and control it with an option.
#!/bin/sh
verbose=false
case $1 in
  -v | --verbose )
    verbose=true
    shift ;;
esac
dir=$1   # directory to remove, taken from the first remaining argument

say () {
    $verbose || return
    echo "$0: $@" >&2
}

say "Removing $dir ..."
rm -rf "$dir" || say "Failed."
If you run this script without any options, it will run silently, like a well-behaved Unix utility should. If you run it with the -v option, it will print some diagnostics to standard error.
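Hypothetical invocation, assuming the script above is saved as cleanup.sh and the directory to remove is passed as its argument:
./cleanup.sh /tmp/olddir        # silent on success
./cleanup.sh -v /tmp/olddir     # prints "./cleanup.sh: Removing /tmp/olddir ..." to standard error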
rm -rf "My Folder" && echo "Done" || echo "Error!"
You can read more about creating sequences of pipelines in the Bash manual.
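If you prefer an explicit branch over the && / || chain (which can mis-report if the first echo itself fails), the same check as an if statement:
if rm -rf "My Folder"; then
  echo "Done"
else
  echo "Error!" >&2
fi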
In bash (and other similar shells) the $? variable gives you the exit status of the last executed command, so you can do:
#!/bin/sh
rm -rf "My Folder"
echo $?
UPDATE
Once the rm command has been executed, if the directory doesn't exist (either because it was successfully removed or because it never existed), the script prints 0. If the directory still exists (meaning the command was unable to remove it), the script prints a non-zero exit code. If I understand the question correctly, this is exactly the requested behavior; if not, please correct me.
The previous answers are wrong: with -f, rm does not exit with an error code > 0 when the directory isn't present.
Instead, I recommend using:
dir='/path/to/dir'
if [[ -d $dir ]]; then
  rm -rf "$dir"
fi
If you want rm to return a status, remove the -f flag.
Example on Linux Mint (the directory doesn't exist):
$ rm -rf /tmp/sdfghjklm
$ echo $?
0
$ rm -r /tmp/sdfghjklm
$ echo $?
1