sed -n function calling in same line repeatedly

I'm a complete novice wrt unix and writing shell scripts, so apologies if the solution to my problem is quite banal.
Essentially though, I'm working on a shell script that reads from a TextEdit file called "sursecout.txt" and runs it through another script called "sursec.x" (where sursec.x is simply a series of FORTRAN integrations). It then creates a folder named after a certain Jacobi integral ("CJ =") and stores the ten SurSec[n] files there (where n = integer). My problem is that the different folders are created correctly with appropriate names, but are each filled with identical output files. My suspicion is that something is wrong with my sed command, in that it's reading the same two lines over and over again (whereas it should be reading the first two lines of sursecout.txt, then the next two, etc.).
Here are the commands for the first two folders I want to make; there are 30 in all, so any help would be appreciated.
./sursec.x < ./sursecout.txt
sed -n '1,2p;3q' sursecout.txt
cd ..
mv ./data ./CJ=3.029990
mkdir data
cd SurSec
./sursec.x < ./sursecout.txt
sed -n '3,4p;5q' sursecout.txt
cd ..
mv ./data ./CJ=3.030659
mkdir data
cd SurSec
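One likely culprit, for what it's worth: in the script above, the sed output is never fed to sursec.x; each ./sursec.x < ./sursecout.txt run reads the whole file from the top, and the sed commands just print to the terminal. A minimal sketch of piping each two-line window into the program instead (assuming sursec.x reads its two input lines on stdin):

for start in $(seq 1 2 59); do                      # 30 runs: lines 1-2, 3-4, ..., 59-60
    sed -n "${start},$((start+1))p" sursecout.txt | ./sursec.x
    # ...then create and rename this run's data folder as above
done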

Related

Git Bash find exec recursively on folders and files containing spaces

Question: In Git Bash on Windows, how would you run the following in a way that it will also search folders with spaces in the name, and execute on files with spaces in the name?
$ find ./ -type f -name '*.png' -exec sh -c 'cwebp -q 75 $1 -o "${1%.png}.webp"' _ {} \;
Context: I'm running Git Bash on Windows, trying to execute a command on all found .png files to convert them to .webp format. It works for all files without spaces in the path, but it fails on files with spaces in the filename or files within folders that have spaces in the folder name. A few considerations:
I have many, many levels of folders to iterate through, and I can't run this command separately for each; I really need the recursion to work.
I cannot change the folder names; it will break other dependencies (nor did I create the folder or file names originally, so cut me some slack!).
I arrived here by following the suggestions from this article: https://www.smashingmagazine.com/2018/07/converting-images-to-webp/
The program, to my knowledge, doesn't ship with any built-in recursive command... golly that'd be handy.
Any help you can provide will be appreciated. Thanks!
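For what it's worth, the usual culprit with this pattern is the unquoted $1 inside the sh -c body: without quotes, a path containing spaces is word-split before cwebp ever sees it. A sketch with the quoting fixed:

find ./ -type f -name '*.png' -exec sh -c 'cwebp -q 75 "$1" -o "${1%.png}.webp"' _ {} \;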

Using tac on most recent log file out of several log files in a directory

I have several log files in a directory that we’ll call path/to/directory that are in the following format after long listing in Red Hat Enterprise 6:
-rw-r-----. 1 root root 17096 Sep 30 11:00 logfile_YYYYDDMM_HHMMSS.log
There are several of these log files generated every day. I need to automatically tac the most recently modified file without typing the exact name of the log file. For example, I'd like to do:
tac /path/to/directory/logfile*.log | grep -m 1 keyword
And have it automatically tac the most recently modified file and grep the keyword in the reverse direction from the end of the log file so it runs quicker. Is this possible?
The problem I’m running into is that there is always more than one log file in the /path/to/directory and I can’t get Linux to automatically tac the most recently modified file as of yet. Any help would be greatly appreciated.
I’ve tried:
tac /path/to/directory/logfile_$(date +%Y%m%d)*.log
which will tac a file created on the present date, but the part that I'm having trouble with is using tac on the newest file (by YYYYMMDD AND HHMMSS), because multiple files can be generated on the same date but only one of them can be the most current, and the most current log file is the only one I care about. I can't use a symbolic link either... Limitations, sigh.
The problem you seem to be expressing in your question isn't so much about tac, but rather how to select the most recent of a set of predictably named files in a directory.
If your filenames really are in the format logfile_YYYYMMDD_HHMMSS.log, with the date fields ordered most- to least-significant (as your last paragraph suggests, despite the YYYYDDMM in the listing above), then they will sort lexically without the need for an innate understanding of dates. Thus, if your shell is bash, you might:
shopt -s nullglob
for x in /path/to/logfile_*.log; do
    [[ "$x" > "$file" ]] && file="$x"   # keep the lexically greatest (newest) name
done
The nullglob option tells bash to expand a glob matching no files to nothing rather than to the literal pattern string. Following the code above, you might want to test that $file is non-empty before feeding it to tac.
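For instance (a sketch, reusing $file from the loop above and the keyword search from the question):

[[ -n "$file" ]] && tac "$file" | grep -m 1 keyword   # skip tac if no log matched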

Copy lines from multiple files in subfolders into one file

I'm very very very new to programming and trying to learn how to make tedious analysis tasks a little faster. I have a master folder (Master) with 50 experiment folders, and within each experiment folder is another set of folders holding text files. I want to extract 2 lines from one of the text files (experiment title on line 7, slope on line 104) and copy them to a new single file.
So far, all I have learned is how to extract the lines and add them to a new file.
sed -n '7p; 104 p' reco.txt >> results.txt
How can I extract these two lines from all files 'reco.txt' in the subfolders of the folder 'Master' and export them into a single text file?
As much explanation as you can bear would be great to help me learn.
You can use find in combination with xargs for this. On its own, you can get a list of all relevant files:
find . -name reco.txt -print
This finds all files named reco.txt in the current directory (.) or any subdirectories and writes them to standard output.
Now, normally you could use the -exec argument to find, which runs a program on the files found; with the + terminator, multiple results are combined into a single execution (appended to one command line). Your particular invocation of sed only works on one file at a time, though, since sed numbers lines across all its input files as one continuous stream.
So, instead of -exec, you can use xargs which is essentially the same thing but with more control.
find Master -name reco.txt -print0 | xargs -0 -n1 sed -n '7p; 104 p' > results.txt
This does the following:
Searches in the directory Master or subdirectories for any file named reco.txt.
Outputs each filename with null-terminator instead of newline (-print0) -- this allows the full path to contain characters that usually need escaping (such as spaces)
Pipes the result into xargs, which does the following:
Accepts null-terminated strings (-0)
Only puts at most one file into each command (-n1)
Runs sed -n '7p; 104 p' on that file
Redirects the entire output to results.txt, which will overwrite any existing contents of that file.
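Incidentally, the same one-file-at-a-time behavior is available from find alone by terminating -exec with \; instead of +, which runs the command once per file. A sketch:

find Master -name reco.txt -exec sed -n '7p; 104 p' {} \; > results.txt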

How do I run the sed command with input and output as the same file?

I'm trying to use the sed command in a shell script where I want to remove lines that read STARTremoveThisComment and lines that read removeThisCommentEND.
I'm able to do it when I copy it to a new file using
sed 's/STARTremoveThisComment//' file > test
But how do I do this by using the same file as input and output?
sed -i (or the long form, --in-place) automates the process you would otherwise do by hand with less advanced implementations: sending output to a temporary file, then renaming that back over the original.
The -i is for in-place editing, and you can also provide a backup suffix for keeping a copy of the original:
sed -i.bak 's/STARTremoveThisComment//' fileToChange
sed --in-place=.bak 's/STARTremoveThisComment//' fileToChange
Both of those will keep the original file in fileToChange.bak.
Keep in mind that in-place editing may not be available in all sed implementations but it is in GNU sed which should be available on all variants of Linux, as per your tags.
If you're using a more primitive implementation, you can use something like:
cp oldfile oldfile.bak && sed 'whatever' oldfile >newfile && mv newfile oldfile
You can use the -i flag for in-place editing and -e for specifying the script expression:
sed -i -e 's/pattern_to_search/text_to_replace/' file.txt
To delete lines that match a certain pattern, you can use a simpler syntax. Notice the d command:
sed -i '/pattern_to_search/d' file.txt
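Applied to the markers from this question, both kinds of line can be deleted in one pass (a sketch, assuming each marker stands on its own line):

sed -i '/STARTremoveThisComment/d; /removeThisCommentEND/d' file.txt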
You really should not use sed for that. This question seems to come up ridiculously often, and it seems very strange that it does since the general solution is so trivial. It seems bizarre that people want to know how to do it in sed, and in python, and in ruby, etc. If you want to have a filter operate on an input and overwrite it, use the following simple script:
#!/bin/sh -e
in=${1?No input file specified}
mv "$in" "${bak=.$in.bak}"    # move the original aside as the backup
shift
"$@" < "$bak" > "$in"         # run the remaining command: backup as input, original name as output
Put that in your path in an executable file named inline, and then the problem is solved in general. For example:
inline input-file sed -e s/foo/bar/g
Now, if you want to add logic to keep multiple backups, or if you have some options to change the backup naming scheme, or whatever, you fix it in one place. What's the command line option to get 1-up counters on the backup file when processing a file in-place with perl? What about with ruby? Is the option different for gnu-sed? How does awk handle it? The whole friggin' point of unix is that tools do one thing only. Handling logic for backup files is a second thing, and needs to be factored out. If you are implementing a tool, do not add logic to create backup files. Tell your users to use a 2nd tool for that. Integration is bad. Modularity is good. That is the unix way.
Notice that this script has several problems. The permissions/mode of the input file may be changed, for example. I'm sure there are innumerable other issues. However, by putting the backup logic in a wrapper script, you localize all of these issues and don't have to worry that sed overwrites the files and changes mode, while python keeps the file in place and does not change the inode (I made up those two cases, the point being that not all tools will use the same logic, while the wrapper script will.)
As far as I know, it is not possible to use the same file directly as both input and output. One solution is a short shell command that saves the result to another file, then renames the output to the input file's name:
sed -e 's/try/this/g' input.file > output.file; mv output.file input.file
I suggest using sponge, from moreutils.
sponge reads standard input and writes it out to the specified file.
Unlike a shell redirect, sponge soaks up all its input before writing
the output file. This allows constructing pipelines that read from and
write to the same file.
sed 's/STARTremoveThisComment//' test | sponge test

Appending and overwriting the beginning of a text file (windows)

I have two text files. I'd like to take the content of file1.txt, which has four lines, and write it over the first four lines of file2.txt, overwriting those lines entirely but keeping the rest of file2.txt's original content (the other lines).
How can I do that using a batch file or the Windows prompt?
copy file1.txt temp.txt
echo. >> temp.txt
more +4 file2.txt >> temp.txt
move /y temp.txt file2.txt
EDIT: added the "echo. >> temp.txt" instruction, which should add a newline to temp.txt, thereby allowing for a "clean" merge of file2.txt (if file1.txt doesn't end with a newline).
Unless the four lines at the start of the two files occupy exactly the same amount of space, you can't, without rewriting the whole file.
You can't insert or delete data into files at arbitrary points - you can overwrite existing data (byte for byte), truncate the file or append to the end, but not remove or insert into the middle.
So basically you'd need to:
Start a new file consisting of the first four lines of file1.txt
Skip past the first four lines of file2.txt
Append the rest of file2.txt to the new file, then replace file2.txt with it
You can do this fairly easily with the head/tail commands from Unix, which you could get from Cygwin if that's an acceptable solution. It's likely that the head/tail from the Windows Services for Unix would work too.
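For instance, a sketch of those three steps with head and tail (newfile2.txt is just an illustrative temp name):

head -n 4 file1.txt > newfile2.txt     # the four replacement lines from file1
tail -n +5 file2.txt >> newfile2.txt   # file2 from line 5 onward
mv newfile2.txt file2.txt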
If you grab the coreutils from Gnutils you'll be able to do a lot of the stuff you can do with Cygwin without having to install cygwin.
Then you can use things like head, tail and cat, which will allow you to do what you're looking to do.
e.g.
head -n 4 file2.txt
to get the first four lines of file2.
Extract the zip from the page linked above, and grab whichever of the utils you need out of the bin directory and put them in a directory in your path - e.g. for the below you'd want mv, head and tail. You could use the built-in DOS move command, but you'd need to change the options slightly.
The question is a little unclear, but if you're looking to remove the first four lines of file2.txt and append them to file1.txt you can do the following:
head -n 4 file2.txt >> file1.txt
tail -n +5 file2.txt > temp.txt
mv temp.txt file2.txt
With batch alone I'm not sure you can do it.
With Unix commands you can -- and you can easily use Unix commands under Windows using Cygwin.
In that case you want:
#!/bin/bash
head -n 4 file1.txt > result.txt # first 4 lines of file1
tail -n +5 file2.txt >> result.txt # append lines 5, 6, 7... of file2
mv result.txt file2.txt # replace file2.txt with the result
You could do it if you wrote a script in something other than Windows batch. VBScript or JScript with Windows Script Host should be able to do it. Each of those would have a method to grab lines from one file and overwrite the lines of another.
You can do this by creating a temporary third file: pull the lines from the first file and add them to the temp file, then read the second file and, after reading past four carriage return/linefeed pairs, write the rest to the temp file. Then delete the second file and rename the temp file to the second file's name.
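For illustration, a line-by-line sketch of that approach in shell (the head/tail answers above do the same thing more tersely):

cat file1.txt > temp.txt               # the replacement lines from file1
n=0
while IFS= read -r line; do            # copy file2, skipping its first four lines
    n=$((n + 1))
    [ "$n" -gt 4 ] && printf '%s\n' "$line" >> temp.txt
done < file2.txt
mv temp.txt file2.txt                  # replace file2 with the merged result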