Can someone explain this recipe to me? - sed

$(Q)makedepend $(CFLAGS) -o.o -f- $< 2> nul: | sed -e s!$(<:.cpp=.o)!$@! -e s!\\!/!g > $(@:.o=.d)
It is part of following rule in contiki makefile.
CUSTOM_RULE_CPP_TO_OBJECTDIR_O = 1
$(OBJECTDIR)/%.o: %.cpp | $(OBJECTDIR)
$(TRACE_CC)
$(Q)cl -nologo $(VCFLAGS) -c $< -Fo$@
$(Q)makedepend $(CFLAGS) -o.o -f- $< 2> nul: | sed -e s!$(<:.cpp=.o)!$@! -e s!\\!/!g > $(@:.o=.d)
Please let me know if you want more info before downvoting. Thanks

I'm not sure what exactly you want explained. This is a command line which invokes the makedepend program, redirects its stderr to nul: (looks like this expects to be run on a DOS system? But using UNIX tools like sed?) and sends its output to sed. The sed program is converting the .o filename in the makedepend output to the actual target name (including the path), and converting backslashes to forward slashes.
Then the results are written to a .d file. Presumably somewhere in the makefile you'll find a line that uses include to include all the .d files.
Basically, this is a way of auto-generating header file dependencies so that make will rebuild object files when they change, without you having to write them all by hand in the makefile.
ETA
$(Q)
A make variable reference which probably expands to either @ or nothing, to control the verbosity of the makefile.
makedepend $(CFLAGS) -o.o -f- $<
Invoke makedepend with the CFLAGS options, reading the source file ($<) and writing the results to stdout (-f-).
2> nul:
Redirect stderr to nul: which on Windows is a special character device which means "throw it away".
|
Send the output of the previous command to the input of the next command.
sed
Run the sed (stream editor) program, which is a UNIX/POSIX program that manipulates files on a line-by-line basis (usually).
-e s!$(<:.cpp=.o)!$@!
The first manipulation substitutes the plain name of the object file (e.g., if you're compiling foo.cpp then $< is foo.cpp, and $(<:.cpp=.o) replaces the .cpp with a .o so the result is foo.o) with the name of the target: $@ will be the full target name like obj/foo.o, if OBJECTDIR is obj.
-e s!\\!/!g
The second substitution replaces all backslashes (escaped from the shell, so \\) with forward slashes.
> $(@:.o=.d)
Write the output of the sed command to the file $@ with the .o replaced by .d, so if OBJECTDIR is obj it will write to obj/foo.d.
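Putting it all together, here is a minimal sketch of what the pipeline does, run in a Unix shell with made-up file names (note that the sed expressions are quoted here, which the DOS-style recipe gets away without):
# Pretend makedepend emitted this dependency line for foo.cpp, with OBJECTDIR=obj:
$ printf '%s\n' 'foo.o: src\include\foo.h' | sed -e 's!foo.o!obj/foo.o!' -e 's!\\!/!g'
obj/foo.o: src/include/foo.h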

Related

Why does "sed -n -i" delete existing file contents?

Running Fedora 25 server edition. sed --version gives me sed (GNU sed) 4.2.2 along with the usual copyright and contact info. I've created a text file with sudo vi ./potential_sed_bug. Vi shows the contents of this file (with :set list enabled) as:
don't$
delete$
me$
please$
I then run the following command:
sudo sed -n -i.bak /please/a\testing ./potential_sed_bug
Before we discuss the results; here is what the sed man page says:
-n, --quiet, --silent
suppress automatic printing of pattern space
and
-i[SUFFIX], --in-place[=SUFFIX]
edit files in place (makes backup if extension supplied). The default operation mode is to break symbolic and hard links. This can be changed with --follow-symlinks and --copy.
I've also looked at other sed command references to learn how to append with sed. Based on my understanding from the research I've done, the resulting file content should be:
don't
delete
me
please
testing
However, running sudo cat ./potential_sed_bug gives me the following output:
testing
In light of this discrepancy, is my understanding of the command I ran incorrect or is there a bug with sed/the environment?
tl;dr
Don't use -n with -i: unless you use explicit output commands in your sed script, nothing will be written to your file.
Using -i produces no stdout (terminal) output, so there's nothing extra you need to do to make your command quiet.
By default, sed automatically prints the (possibly modified) input lines to whatever its output target is, whether implied or explicitly specified: by default, to stdout (the terminal, unless redirected); with -i, to a temporary file that ultimately replaces the input file.
In both cases, -n suppresses this automatic printing, so that - unless you use explicit output functions such as p or, in your case, a - nothing gets printed to stdout / written to the temporary file.
Note that the automatic printing applies to the so-called pattern space, which is where the (possibly modified) input is held; explicit output functions such as p, a, i and c do not print to the pattern space (for potential subsequent modification), they print directly to the target stream / file, which is why a\testing was able to produce output, despite the use of -n.
Note that with -i, sed's implicit printing / explicit output commands only print to the temporary file, and not also to stdout, so a command using -i is invariably quiet with respect to stdout (terminal) output - there's nothing extra you need to do.
To give a concrete example (GNU sed syntax).
Since the use of -i is incidental to the question, I've omitted it for simplicity. Note that -i prints to a temporary file first, which, on completion, replaces the original. This comes with pitfalls, notably the potential destruction of symlinks; see the lower half of this answer of mine.
# Print input (by default), and append literal 'testing' after
# lines that contain 'please'.
$ sed '/please/ a testing' <<<$'yes\nplease\nmore'
yes
please
testing
more
# Adding `-n` suppresses the default printing, so only `testing` is printed.
# Note that the sequence of processing is exactly the same as without `-n`:
# If and when a line with 'please' is found, 'testing' is appended *at that time*.
$ sed -n '/please/ a testing' <<<$'yes\nplease\nmore'
testing
# Adding an unconditional `p` (print) call undoes the effect of `-n`.
$ sed -n 'p; /please/ a testing' <<<$'yes\nplease\nmore'
yes
please
testing
more
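Finally, a sketch that reproduces the original problem with -i, using a throwaway file (GNU sed assumed; demo.txt is a scratch name):
# Recreate the input:
$ printf '%s\n' "don't" delete me please > demo.txt
# With -n, only the appended line survives:
$ sed -n -i.bak '/please/ a testing' demo.txt
$ cat demo.txt
testing
# Without -n, the input lines are kept and 'testing' is appended:
$ mv demo.txt.bak demo.txt
$ sed -i '/please/ a testing' demo.txt
$ cat demo.txt
don't
delete
me
please
testing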

Awk inside of qsub

I have a bash script in which I have a few qsubs. Each of them waits for a previous qsub to be done before starting.
My first qsub consists of sending files in a certain directory to a perl program and having the outfiles printed in a new directory. At the end, I echo the array with all my job names. This script works as intended.
mkdir -p /perl_files_dir
for ID_FILES in `ls Infiles_dir/*.txt`;
do
JOB_ID=`echo "perl perl_scirpt.pl $ID_FILES" | qsub -j oe `
JOB_ID_ARRAY="${JOB_ID_ARRAY}:$JOB_ID"
done
echo $JOB_ID_ARRAY
My second qsub is meant to sort all the files made by my perl script into a new outfile, and to start only after all those jobs are done (about 100 jobs), using depend=afterany. Again, this part is working fine.
SORT_JOB=`echo "sort -m -n perl_files_dir/*.txt >>sorted_file.txt" | qsub -j oe -W depend=afterany$JOB_ID_ARRAY`
SORT_ARRAY="${SORT_ARRAY}:$SORT_JOB"
My issue is that in my sorted file, I have a few columns I wish to remove (2 to 6), so I came up with this last line using awk piped to sed with another depend=afterany
SED=`echo "awk '{\$2="";\$3="";\$4="";\$5="";\$6=""; print \$0}' sorted_file.txt \
| sed 's/ //g' >final_file.txt" | qsub -j oe -W depend=afterany$SORT_ARRAY`
This last step creates final_file.txt, but leaves it empty. I added SED= before my echo because it would otherwise give me Command not found.
I tried without the pipe so it would just print everything. Unfortunately it prints nothing.
I assume it is not opening my sorted file and this is why my final file is empty after my sed. If it's the case, then why won't awk read it?
In my script, I am using variables to define my directories and files (with the correct path). I know my issue is not about finding my files or directories since they are perfectly defined at the beginning and used throughout the script. I tried writing the whole path instead of a variable and I get the same results.
for ID_FILES in `ls Infiles_dir/*.txt`
Simplify this to
for ID_FILES in Infiles_dir/*.txt
ls lists the files you pass it (except when you pass it directories, then it lists their content). Rather than telling it to display a list of files and parse the output, use the list of files you already have! This is more reliable (parsing the output of ls will fail if the file names contain whitespace or wildcard characters), clearer and faster. Don't parse the output of ls.
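A quick illustration of the breakage, with a hypothetical file name containing a space:
$ mkdir demo && : > 'demo/a b.txt'
$ for f in `ls demo/*.txt`; do echo "got: $f"; done
got: demo/a
got: b.txt
$ for f in demo/*.txt; do echo "got: $f"; done
got: demo/a b.txt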
SORT_JOB=`echo "sort -m -n perl_files_dir/*.txt >>sorted_file.txt" | qsub -j oe -W depend=afterany$JOB_ID_ARRAY`
You'd make your life simpler if you used the right form of quoting in the right place. Don't use backquotes, because it's difficult to know how to quote things inside. Use $(…) instead; it's exactly equivalent, except that it is parsed in a sane way.
I recommend using a here document for the shell snippet that you're feeding to qsub. You have fewer quoting issues to worry about, and it's more readable.
While we're at it, always put double quotes around variable substitutions and command substitutions: "$some_variable", "$(some_command)". Annoyingly, $var in shell syntax doesn't mean “take the value of the variable var”, it means “take the value of the variable var, parse it as a list of wildcard patterns, and replace each pattern by the list of matching files if there are matching files”. This extra stuff is turned off if the substitution happens inside double quotes (or in a here document, by the way): "$var" means “take the value of the variable var”.
SORT_JOB=$(qsub -j oe -W depend="afterany$JOB_ID_ARRAY" <<'EOF'
sort -m -n perl_files_dir/*.txt >>sorted_file.txt
EOF
)
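As an aside, here is a tiny illustration of what an unquoted expansion does (the value is hypothetical):
$ var='a  b*'
$ printf '<%s>\n' $var      # word-split; b* would also be glob-expanded if it matched files
<a>
<b*>
$ printf '<%s>\n' "$var"    # taken verbatim
<a  b*>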
We now get to the snippet where the quoting was actually causing a problem.
SED=`echo "awk '{\$2="";\$3="";\$4="";\$5="";\$6=""; print \$0}' sorted_file.txt \
| sed 's/ //g' >final_file.txt" | qsub -j oe -W depend=afterany$SORT_ARRAY`
The string that becomes the argument to the echo command is:
awk '{$2=;$3=;$4=;$5=;$6=; print $0}' sorted_file.txt | sed 's/ //g' >final_file.txt
This is syntactically incorrect, and that's why you're not getting any output.
You didn't escape the double quotes inside what was meant to be the awk snippet. It's a lot clearer if you use a here document. Also, you don't need the SED= part. You added it because you had a command substitution (a command between backquotes), which substitutes the output of a command. But since you aren't interested in the output of the qsub command, don't take its output, just execute it.
qsub -j oe -W depend="afterany$SORT_ARRAY" <<'EOF'
awk '{$2="";$3="";$4="";$5="";$6=""; print $0}' sorted_file.txt |
sed 's/ //g' >final_file.txt
EOF
I'm not familiar with qsub, but presumably there's a way to get the error output and the return status of the commands it runs. Inspect that error output, you should have seen the errors from awk.
The version of awk that I am using does not like the character escapes.
awk --version
GNU Awk 3.1.7
spuder@cent64$ awk '{\$2="";\$3="";\$4=""; print \$0}' foo.txt
awk: {\$2="";\$3="";\$4=""; print \$0}
awk: ^ backslash not last character on line
Try the following syntax
awk '{for(i=2;i<=7;i++) $i="";print}' foo.txt
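For example, on a made-up line of eight columns (the blanked fields leave runs of spaces, which the sed 's/ //g' stage would then strip):
$ printf '1 2 3 4 5 6 7 8\n' | awk '{for(i=2;i<=7;i++) $i="";print}'
1       8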
As a side note, if you are using Torque 4.x you may not be able to use a comma-separated list of jobs with -W depend=; instead you may need to create a new PBS declarative (-W) for each job.
eg...
#Invalid syntax in newer versions of torque
qsub -W depend=foo,bar
Resources
backslash in gawk fields
Print all but the first three columns
http://docs.adaptivecomputing.com/torque/help.htm#topics/commands/qsub.htm#-W

Clarification of 'sed' usage

I just blindly followed a command from a tutorial to rename several folders at a time. Can anyone explain the meaning of "p;s" given as the argument to sed's -e option.
[root@LinuxD delsure]# ls
ar1 ar2 ar3 ar4 ar5 ar6 ar7
[root@LinuxD delsure]# find . -type d -name "ar*"|sed -e "p;s/ar/AR/g"|xargs -n2 mv
[root@LinuxD delsure]# ls
AR1 AR2 AR3 AR4 AR5 AR6 AR7
A sed script (the bit following the -e option) can contain multiple commands, separated by ;
The script in your example uses the p command to print the pattern space (i.e. the line just read from the input) followed by the s command to perform a substitution on the pattern space.
By default (unless the pattern space is cleared or the -n option is given to sed), after processing each line the current pattern space is printed again, so the result of the substitution will be printed.
Another way to write the same thing would be:
sed -e "p" -e "s/ar/AR/g"
This separates the commands into two scripts. Another way would be:
sed "p;s/ar/AR/g"
because if the only argument to sed is a script then the -e option is not needed
The argument to the -e option is a script consisting of two commands. The first is p, which prints the unadulterated input; the second is a standard, global substitution. So for input ar1, this should output
ar1
AR1
The other part of this trick is the -n2 option on xargs, which forces it to only use two arguments at a time (instead of as many as it can handle, which would produce very different results).
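You can see the pairing with a dry run on hypothetical names (the echo shows what mv would receive):
$ printf '%s\n' ar1 ar2 | sed 'p;s/ar/AR/g' | xargs -n2 echo mv
mv ar1 AR1
mv ar2 AR2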
One way in bash:
$ ls
ar6 ar7
$ find . -name 'ar*' | while IFS= read -r file; do echo mv "$file" "${file^^}"; done
mv ./ar6 ./AR6
mv ./ar7 ./AR7
get rid of the "echo" when you're happy with the output.

Sed on AIX does not recognize -i flag

Does sed -i work on AIX?
If not, how can I edit a file "in place" on AIX?
The -i option is a GNU (non-standard) extension to the sed command. It was not part of the classic interface to sed.
You can't edit in situ directly on AIX. You have to do the equivalent of:
sed 's/this/that/' infile > tmp.$$
mv tmp.$$ infile
You can only process one file at a time like this, whereas the -i option permits you to achieve the result for each of many files in its argument list. The -i option simply packages this sequence of events. It is undoubtedly useful, but it is not standard.
If you script this, you need to consider what happens if the command is interrupted; in particular, you do not want to leave temporary files around. This leads to something like:
tmp=tmp.$$ # Or an alternative mechanism for generating a temporary file name
for file in "$@"
do
trap "rm -f $tmp; exit 1" 0 1 2 3 13 15
sed 's/this/that/' $file > $tmp
trap "" 0 1 2 3 13 15
mv $tmp $file
done
This removes the temporary file if a signal (HUP, INT, QUIT, PIPE or TERM) occurs while sed is running. Once the sed is complete, it ignores the signals while the mv occurs.
You can still enhance this by doing things such as creating the temporary file in the same directory as the source file, instead of potentially making the file in a wholly different file system.
The other enhancement is to allow the command (sed 's/this/that/' in the example) to be specified on the command line. That gets trickier!
You could look up the overwrite (shell) command that Kernighan and Pike describe in their classic book 'The UNIX Programming Environment'.
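Here is a minimal sketch of that idea (not the original overwrite source; the function name and details are assumptions):
# overwrite FILE CMD [ARG...]: run CMD with FILE appended as its last
# argument, and replace FILE with the output only if CMD succeeds.
overwrite() {
    file=$1; shift
    tmp=$(mktemp "$(dirname "$file")/tmp.XXXXXX") || return 1
    trap 'rm -f "$tmp"; exit 1' HUP INT QUIT TERM
    if "$@" "$file" > "$tmp"; then
        mv "$tmp" "$file"
    else
        rm -f "$tmp"
        return 1
    fi
}
# Usage: overwrite infile sed 's/this/that/'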
#!/bin/ksh
host_name=$1
perl -pi -e "s/#workerid#/$host_name/g" test.conf
The above will replace #workerid# with $host_name inside test.conf.
You can simply install the GNU versions of Unix commands on AIX:
http://www-03.ibm.com/systems/power/software/aix/linux/toolbox/alpha.html
You can use a here construction with vi:
vi file >/dev/null 2>&1 <<@
:1,$ s/old/new/g
:wq
@
When you want to do things in the vi-edit mode, you will need an ESC.
For an ESC press CTRL-V ESC.
When you use this in a non-interactive mode, vi can complain about the TERM not set. The solution is adding export TERM=vt100 before calling vi.
Another option is to use good old ed, like this:
ed fileToModify <<EOF
,s/^ff/gg/
w
q
EOF
You can use perl to do it:
perl -p -i.bak -e 's/old/new/g' test.txt
This will create a .bak backup file.

How to search and replace in text files only?

I have a directory containing a bunch of files, some text some binary, with no consistent naming. I want to search and replace a string in text files only. So I went with:
perl -i -pne 's#/some/text/to/replace#/replacement/text#' *
Remove the -i option and you will see that binary files get caught. How do I modify this one-liner to skip binary files?
ack -n --text --sort -f . | xargs perl -i -pne 's…'
Abusing ack goes much quicker than writing your own solution with -T.
Well, this is all based on what your definition of a text file is. Perl 5 has the -T filetest operator that will tell you if a filename or filehandle is a text file (using Perl 5's definition):
perl -i -pne 'BEGIN{@ARGV=grep-T,@ARGV}s#regex#replacement#' *
The BEGIN block will filter out any files that don't pass the -T test, so they won't even be read (except for their first block because that is what -T uses to determine if they are text).
From perldoc -f -X
The -T and -B switches work as follows. The first block or so of the file is examined for odd characters such as strange control codes or characters with the high bit set. If too many strange characters (>30%) are found, it's a -B file; otherwise it's a -T file. Also, any file containing a zero byte in the first block is considered a binary file. If -T or -B is used on a filehandle, the current IO buffer is examined rather than the first block. Both -T and -B return true on an empty file, or a file at EOF when testing a filehandle. Because you have to read a file to do the -T test, on most occasions you want to use a -f against the file first, as in next unless -f $file && -T $file .