Use Sed to Update Text in Bourne Shell - sed

I'm trying to update a file in Bourne shell. The user inputs a name and is then prompted to change that person's name, age and courses. Here's part of the code I've written:
echo "please enter the name: \c"
read updateInput
updateNumber=$(grep -cwi "$updateInput" database.txt)
updateRecord=$(grep -i "$updateInput" database.txt)
test=$(! grep -i "$updateInput" database.txt)
if [ "$updateNumber" -gt "1" ]
then
echo "ambiguous input"
elif [ "$updateRecord" = "" ]
then
echo "record not found"
else
lineNumber=$(grep -ni $updateInput database.txt | cut -f1 -d:)
grep -i $updateInput database.txt > tmp
read first last age course1 course2 < tmp
echo "record found, enter new value now:"
echo "name ($first $last): \c"
read uFirst uLast
if [ "$uFirst" != "" ]
then
sed "$lineNumber s/$first/$uFirst/" database.txt
fi
if [ "$uLast" != "" ]
then
sed "$lineNumber s/$last/$uLast/g" database.txt
fi
When run, sed prints the output with the right substitutions made, but it doesn't actually update the database file at all. I've tried googling all sorts of things, but nothing is working. If someone could point me in the right direction, that would be awesome. Thanks so much :)

If this is GNU sed, you can use the -i option to edit the file in place:
sed -i "$lineNumber s/$first/$uFirst/" database.txt
Otherwise, you will need to capture the sed output into a temporary file, then copy it over the original file.
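If your sed doesn't support -i, a minimal sketch of that temporary-file approach (the temp file name here is just an example):
sed "$lineNumber s/$first/$uFirst/" database.txt > database.txt.tmp &&
  mv database.txt.tmp database.txt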

Related

Replacing characters in a sh script

I am writing an sh script and need to replace the . and - characters with _.
Current:
V123_45_678_910.11_1213-1415.sh
Wanted:
V123_45_678_910_11_1213_1415.sh
I have used a few mv commands, but I am having trouble.
for file in /virtualun/rest/scripts/IOL_Extra/*.sh ; do mv $file ${file//V15_IOL_NVMe_01./V15_IOL_NVMe_01_} ; done
You don't need to match any of the other parts of the file name, just the characters you want to replace. To avoid turning foo.sh into foo_sh, remove the extension first, then add it back after the replacement.
for file in /virtualun/rest/scripts/IOL_Extra/*.sh ; do
    base=${file%.sh}
    mv -i -- "$file" "${base//[-.]/_}".sh
done
Use the -i option to make sure you don't inadvertently replace one file with another when the modified names coincide.
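As a quick illustration of the two parameter expansions used above (the sample path is the one from the question; note that ${var//pattern/string} is a bash/ksh feature, not plain POSIX sh):
file=/virtualun/rest/scripts/IOL_Extra/V123_45_678_910.11_1213-1415.sh
base=${file%.sh}                 # strip the trailing .sh
echo "${base//[-.]/_}".sh        # replace every - and . in the rest, then re-append .sh
# prints /virtualun/rest/scripts/IOL_Extra/V123_45_678_910_11_1213_1415.sh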
This should work:
#!/usr/bin/env sh
# Fail on error
set -o errexit
# Error on undefined variable references
set -o nounset
# Enable wildcard character expansion
set +o noglob

# ================
# CONFIGURATION
# ================
# Pattern
PATTERN="/virtualun/rest/scripts/IOL_Extra/*.sh"

# ================
# LOGGER
# ================
# Fatal log message
fatal() {
    printf '[FATAL] %s\n' "$*" >&2
    exit 1
}
# Info log message
info() {
    printf '[INFO ] %s\n' "$*"
}

# ================
# MAIN
# ================
{
    # Check directory exists
    [ -d "$(dirname "$PATTERN")" ] || fatal "Directory '$(dirname "$PATTERN")' does not exist"

    for _file in $PATTERN; do
        # Skip if not a file
        [ -f "$_file" ] || continue
        info "Analyzing file '$_file'"

        # File data
        _file_dirname=$(dirname -- "$_file")
        _file_basename=$(basename -- "$_file")
        _file_name="${_file_basename%.*}"
        _file_extension=
        case $_file_basename in
            *.*) _file_extension=".${_file_basename##*.}" ;;
        esac

        # New file name
        _new_file_name=$(printf '%s\n' "$_file_name" | sed 's/[.-][.-]*/_/g')

        # Skip if unchanged
        [ "$_file_name" != "$_new_file_name" ] || continue

        # New file
        _new_file="$_file_dirname/${_new_file_name}${_file_extension}"

        # Rename
        info "Renaming file '$_file' to '$_new_file'"
        mv -i -- "$_file" "$_new_file"
    done
}
You can try this:
for f in /virtualun/rest/scripts/IOL_Extra/*.sh; do
    mv "$f" "$(sed 's/[.-]/_/g' <<< "$f")"
done
The sed command replaces every . and - character with _.
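Note that this rewrites every . and - in the whole path, so the .sh extension itself ends up as _sh. A variant that only touches the file name proper could look like this (a sketch along the same lines):
for f in /virtualun/rest/scripts/IOL_Extra/*.sh; do
    d=$(dirname "$f")
    b=$(basename "$f" .sh)
    mv -i "$f" "$d/$(printf '%s\n' "$b" | sed 's/[.-]/_/g').sh"
done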
I prefer using the sed substitution as posted by oliv.
However, if you are not familiar with regular expressions, using rename is faster/easier to understand:
Example:
$ touch V123_45_678_910.11_1213-1415.sh
$ rename -va '.' '_' *sh
`V123_45_678_910.11_1213-1415.sh' -> `V123_45_678_910_11_1213-1415_sh'
$ rename -va '-' '_' *sh
`V123_45_678_910_11_1213-1415_sh' -> `V123_45_678_910_11_1213_1415_sh'
$ rename -vl '_sh' '.sh' *sh
`V123_45_678_910_11_1213_1415_sh' -> `V123_45_678_910_11_1213_1415.sh'
$ ls *sh
V123_45_678_910_11_1213_1415.sh
Options explained:
-v prints the name of the file before -> after the operation
-a replaces all occurrences of the first argument with the second argument
-l replaces the last occurrence of the first argument with the second argument
Note that this might not be suitable depending on the other files you have in the given directory that would match *sh and that you do NOT want to rename.

Linux command Line find and replace

I have a file.txt with these contents:
2021-12-03;12.20.31;13;00000.00;00000.00;NO LINK
2021-12-03;12.33.31;15;00199.94;00000.00;Status OK
2021-12-03;12.35.33; 2;01962.33;00015.48;;Status OK
2021-12-03;13.05.31;13;00000.00;00000.00;NO LINK
What command would produce the output below?
2021-12-03;12:20:31;13;00000.00;00000.00;NO LINK
2021-12-03;12:33:31;15;00199.94;00000.00;Status OK
2021-12-03;12:35:33; 2;01962.33;00015.48;Status OK
2021-12-03;13:05:31;13;00000.00;00000.00;NO LINK
Note: cut -b 12-19 file.txt extracts the time.
Thanks for your help.
Rido
I assumed that the lines you want to modify are contained in a file (which I called filea.txt). The script should solve your problem.
Contents of the file 'filea.txt':
$> cat filea.txt
2021-12-03;12.20.31;13;00000.00;00000.00;NO LINK
2021-12-03;12.33.31;15;00199.94;00000.00;Status OK
2021-12-03;12.35.33; 2;01962.33;00015.48;;Status OK
2021-12-03;13.05.31;13;00000.00;00000.00;NO LINK
Script File:
$> cat refrm
#!/usr/bin/bash
in_file="filea.txt"

while read -r line || [ -n "$line" ]; do
    # collapse runs of ';' into a single ';'
    line=$(echo "${line}" | sed -E 's/;{2,}/;/g')
    IFS=$'\n'
    # split the fields onto separate lines and load them into an array
    line=$(echo ${line} | sed 's/;/\n/g')
    arr=($(IFS='\n' ; echo "${line}"))
    for ((n=0; n < ${#arr[*]}; n++)); do
        # if the field looks like hh.mm.ss, turn the dots into colons
        if [[ ${arr[n]} =~ ^[0-9]{2}\.[0-9]{2}\.[0-9]{2} ]]; then
            arr[n]=$(echo ${arr[n]} | sed 's/\./:/g')
        fi
    done
    # re-join the fields with ';'
    nline=$(IFS=";" ; echo "${arr[*]}")
    echo "${nline}"
done < ${in_file}
Output:
$> refrm
2021-12-03;12:20:31;13;00000.00;00000.00;NO LINK
2021-12-03;12:33:31;15;00199.94;00000.00;Status OK
2021-12-03;12:35:33; 2;01962.33;00015.48;Status OK
2021-12-03;13:05:31;13;00000.00;00000.00;NO LINK
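For this particular format you could also do it with a single sed invocation (a sketch, assuming the time is always the second ;-separated field and that doubled semicolons should be collapsed):
sed -E 's/^([^;]*);([0-9]{2})\.([0-9]{2})\.([0-9]{2})/\1;\2:\3:\4/; s/;;/;/g' filea.txt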

How to remove YAML frontmatter from markdown files?

I have markdown files that contain YAML frontmatter metadata, like this:
---
title: Something Somethingelse
author: Somebody Sometheson
---
But the YAML is of varying widths. Can I use a POSIX command like sed to remove that frontmatter when it's at the beginning of a file? Something that just removes everything between --- and ---, inclusive, but also ignores the rest of the file, in case there are ---s elsewhere.
I understand your question to mean that you want to remove the first block enclosed in --- lines if it starts at the first line. In that case,
sed '1 { /^---/ { :a N; /\n---/! ba; d} }' filename
This is:
1 {               # in the first line
  /^---/ {        # if it starts with ---
    :a            # jump label for looping
    N             # fetch the next line, append to pattern space
    /\n---/! ba;  # if the result does not contain \n--- (that is, if the last
                  # fetched line does not begin with ---), go back to :a
    d             # then delete the whole thing.
  }
}
                  # otherwise drop off the end here and do the default (print
                  # the line)
Depending on how you want to handle lines that begin with ---abc or so, you may have to change the patterns a little (perhaps add $ at the end to only match when the whole line is ---). I'm a bit unclear on your precise requirements there.
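For example, a variant anchored to whole-line --- delimiters would be (a sketch along those lines):
sed '1 { /^---$/ { :a N; /\n---$/! ba; d} }' filename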
If you want to remove only the front matter, you could simply run:
sed '1{/^---$/!q;};1,/^---$/d' infile
If the first line doesn't match ---, sed will quit; else it will delete everything from the 1st line up to (and including) the next line matching --- (i.e. the entire front matter).
If you don't mind the "or something" being perl.
Simply print after two instances of "---" have been found:
perl -ne 'if ($i > 1) { print } else { /^---/ && $i++ }' yaml
or a bit shorter if you don't mind abusing ?: for flow control:
perl -ne '$i > 1 ? print : /^---/ && $i++' yaml
Be sure to include -i if you want to replace inline.
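For example (a sketch; yaml stands for your markdown file, and no backup copy is kept):
perl -ni -e '$i > 1 ? print : /^---/ && $i++' yaml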
You can use a bash script: create script.sh, make it executable using chmod +x script.sh, and run it with ./script.sh.
#!/bin/bash
# folder articles contains a lot of markdown files
files=./articles/*.md
for f in $files; do
    # filename
    echo "${f##*/}"
    # wrap the frontmatter title attribute value in double quotes
    sed -i -r 's/^title: (.*)$/title: "\1"/' "$f"
    #...
done
This AWK-based solution works for files with and without front matter, doing nothing in the latter case.
#!/bin/sh
# Strips YAML FrontMatter from a file (usually Markdown).

# Exit immediately on each error;
# see: https://vaneyckt.io/posts/safer_bash_scripts_with_set_euxo_pipefail/
set -Ee

print_help() {
    echo "Strips YAML FrontMatter from a file (usually Markdown)."
    echo
    echo "Usage:"
    echo " `basename $0` -h"
    echo " `basename $0` --help"
    echo " `basename $0` -i <file-with-front-matter>"
    echo " `basename $0` --in-place <file-with-front-matter>"
    echo " `basename $0` <file-with-front-matter> <file-to-be-without-front-matter>"
}

replace=false
in_file="-"
out_file="/dev/stdout"

if [ -n "$1" ]
then
    if [ "$1" = "-h" ] || [ "$1" = "--help" ]
    then
        print_help
        exit 0
    elif [ "$1" = "-i" ] || [ "$1" = "--in-place" ]
    then
        replace=true
        in_file="$2"
        out_file="$in_file"
    else
        in_file="$1"
        if [ -n "$2" ]
        then
            out_file="$2"
        fi
    fi
fi

tmp_out_file="$out_file"
if $replace
then
    tmp_out_file="${in_file}_tmp"
fi

awk -e '
    BEGIN {
        is_first_line=1;
        in_fm=0;
    }
    /^---$/ {
        if (is_first_line) {
            in_fm=1;
        }
    }
    {
        if (! in_fm) {
            print $0;
        }
    }
    /^(---|\.\.\.)$/ {
        if (! is_first_line) {
            in_fm=0;
        }
        is_first_line=0;
    }
' "$in_file" >> "$tmp_out_file"

if $replace
then
    mv "$tmp_out_file" "$out_file"
fi
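Usage would look something like this, assuming the script is saved as strip_front_matter.sh (the name is just an example):
chmod +x strip_front_matter.sh
./strip_front_matter.sh post.md post-body.md   # write a copy without front matter
./strip_front_matter.sh -i post.md             # strip the front matter in place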

Insert Header Row using sed

I need to run a bash script via cron to update a file.
The file is a .DAT (similar to csv) and contains pipe separated values.
I need to insert a header row at the top.
Here's what I have so far:
#!/bin/bash
# Grab the file, make a backup and insert the new line
sed -i.bak 1i"red|blue|green|orange|yellow" thefilename.dat
exit
But how can I save the file as a different file name so that it always takes fileA, edits it and then saves it as fileB
Do you really need to rename the old one to xxx.bak, or can you just save a new copy?
Either way, just use redirection.
sed 1i"red|blue|green|orange|yellow" thefilename.dat > newfile.dat
or if you want the .bak as well
sed 1i"red|blue|green|orange|yellow" thefilename.dat > newfile.dat \
&& mv thefilename.dat thefilename.dat.bak`
which would create your new file and then, only if the sed completed sucessfully, rename the orig file.
In case anyone finds it useful, here is what I ended up doing....
Grab the original file and convert it to the desired file type, inserting a new header row and logging each step.
#!/bin/bash -l
####################################
#
# This script inserts a header row in the file $DAT and resaves the file in a different format
#
####################################
#CONFIG
HOME="/home/rootname"
LOGFILE="$HOME/bash-convert/log-$( date '+%b-%d-%y' ).log"
# grab original file
WKDIR="$HOME/public_html/folder1"
# new location to save
NEWDIR="$HOME/public_html/folder2"
# original file to target
DAT="$WKDIR/original.dat"
# file name and type to convert to
NEW="$NEWDIR/original-converted.csv"
####################################
# insert a new header row
HDR="header-row-1|header-row-2|header-row-2 \r"
# and update the log file
{
    echo "---------------------------------------------------------" >> $LOGFILE 2>&1
    echo "Timestamp: $(date "+%d-%m-%Y: %T") : Starting work" >> $LOGFILE 2>&1
    touch "$LOGFILE" || { echo "Can't create logfile -- Exiting." && exit 1 ;} >>"$LOGFILE"
    # check if file is writable
    sudo chmod 755 -R "$NEW"
    echo "Creating file \"$NEW\", and setting permissions."
    touch "$NEW" || {
        echo "Can't create file \"$NEW\" -- Operation failed - exiting" && exit 1 ;}
} >>"$LOGFILE" 2>&1
{
    echo "Prepending line \"$HDR\" to file $NEW."
    { echo "$HDR" ; cat "$DAT" ;} > "$NEW"
    if [ "$?" -ne "0" ]; then
        echo "Something went wrong with the file conversion."
        exit 1
    else
        echo "File conversion successful. Operation complete."
    fi
} >>"$LOGFILE" 2>&1
exit 0
I find the syntax with the i between the two single quotes, followed by the text to insert, clearer and more consistent.
You can simply add a header, and save it in a different file with:
sed '1i header' file > file2
In your case:
sed '1i red|blue|green|orange|yellow' file > file2
If you wanted to save it on the same file, you'd use -i option:
sed -i '1i red|blue|green|orange|yellow' file
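Note that putting the text on the same line as i is a GNU sed extension; a portable sketch uses a backslash followed by a newline:
sed '1i\
red|blue|green|orange|yellow' file > file2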

How to delete multiple empty lines with SED?

I'm trying to compress a text document by deleting duplicated empty lines, with sed. This is what I'm doing (to no avail):
sed -i -E 's/\n{3,}/\n/g' file.txt
I understand that it's not correct, according to this manual, but I can't figure out how to do it correctly. Thanks.
I think you want to replace spans of multiple blank lines with a single blank line, even though your example replaces runs of \n with a single \n rather than \n\n. With that in mind, here are two solutions:
sed '/^$/{ :l
N; s/^\n$//; t l
p; d; }' input
In many implementations of sed, that can be all on one line, with the embedded newlines replaced by ;.
awk 't || !/^$/; { t = !/^$/ }'
As tripleee suggested above, I'm using Perl instead of sed:
perl -0777pi -e 's/\n{3,}/\n\n/g'
Use the tr (translate) utility:
tr -s '\n'
The -s (--squeeze-repeats) option reduces a sequence of a repeated character to a single instance.
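For example (keep in mind that this squeezes every run of newlines down to one, so blank lines are removed entirely rather than reduced to a single blank line):
tr -s '\n' < file.txt > squeezed.txt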
This is much better handled by tr -s '\n' or cat -s, but if you insist on sed, here's an example from section 4.17 of the GNU sed manual:
#!/usr/bin/sed -f
# on empty lines, join with next
# Note there is a star in the regexp
:x
/^\n*$/ {
N
bx
}
# now, squeeze all '\n', this can be also done by:
# s/^\(\n\)*/\1/
s/\n*/\
/
I am not sure this is what the OP wanted, but building on the awk solution by William Pursell, here is the approach if you want to delete ALL empty lines in the file:
awk '!/^$/' file.txt
Explanation:
The awk pattern
'!/^$/'
tests whether the current line consists only of the beginning of a line (symbolised by '^') immediately followed by the end of the line (symbolised by '$'); in other words, whether the line is empty.
If this pattern is true, awk applies its default action and prints the current line.
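To update the file itself, go through a temporary file (GNU awk also offers -i inplace); a sketch:
awk '!/^$/' file.txt > file.tmp && mv file.tmp file.txt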
HTH
I think the OP wants to compress empty lines, e.g. where there are 9 consecutive empty lines, he wants to have just three.
I have written a little bash script that does just that:
#! /bin/bash
TOTALLINES="$(cat file.txt|wc -l)"
CURRENTLINE=1
while [ $CURRENTLINE -le $TOTALLINES ]
do
    L1=$CURRENTLINE
    L2=$(($L1 + 1))
    L3=$(($L1 + 2))
    if [[ $(cat file.txt|head -$L1|tail +$L1) == "" ]]||[[ $(cat file.txt|head -$L1|tail +$L1) == " " ]]
    then
        L1EMPTY=true
    else
        L1EMPTY=false
    fi
    if [[ $(cat file.txt|head -$L2|tail +$L2) == "" ]]||[[ $(cat file.txt|head -$L2|tail +$L2) == " " ]]
    then
        L2EMPTY=true
    else
        L2EMPTY=false
    fi
    if [[ $(cat file.txt|head -$L3|tail +$L3) == "" ]]||[[ $(cat file.txt|head -$L3|tail +$L3) == " " ]]
    then
        L3EMPTY=true
    else
        L3EMPTY=false
    fi
    if [ $L1EMPTY = true ]&&[ $L2EMPTY = true ]&&[ $L3EMPTY = true ]
    then
        #do not cat line to temp file
        echo "Skipping line "$CURRENTLINE
    else
        echo "$(cat file.txt|head -$CURRENTLINE|tail +$CURRENTLINE)">>temp.txt
        echo "Writing line " $CURRENTLINE
    fi
    ((CURRENTLINE++))
done
cat temp.txt>file.txt
rm -r temp.txt
FINALTOTALLINES="$(cat file.txt|wc -l)"
EMPTYLINELINT=$(( $CURRENTLINE - $FINALTOTALLINES ))
echo "Deleted " $EMPTYLINELINT " empty lines."