Appending and overwriting the beginning of a text file (Windows)

I have two text files. I'd like to take the content of file1.txt, which has four lines, and write it over the first four lines of file2.txt. That is, the first four lines of file2.txt should be overwritten, but the rest of its original content (the remaining lines) must be kept.
How can I do that using a batch file or the Windows prompt?

rem start the new file with the four lines from file1.txt
copy file1.txt temp.txt
rem make sure there is a line break before file2.txt's remaining lines
echo. >> temp.txt
rem append the rest of file2.txt (everything after the replaced block)
more +5 file2.txt >> temp.txt
rem replace file2.txt with the merged result
move /y temp.txt file2.txt
EDIT: added the "echo. >> temp.txt" instruction, which should add a newline to temp.txt, thereby allowing for a "clean" merge of file2.txt (if file1.txt doesn't end with a newline).

Unless the four lines at the start of the two files occupy exactly the same amount of space, you can't, without rewriting the whole file.
You can't insert data into or delete data from a file at arbitrary points: you can overwrite existing data (byte for byte), truncate the file, or append to the end, but you can't remove from or insert into the middle.
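As an illustration of the overwrite-in-place point, a tool such as dd (available through Cygwin, mentioned below) can patch bytes at the start of a file, but only with exactly as many bytes as it replaces, for example:
printf "ABCD" | dd of=file2.txt conv=notrunc    # overwrite the first 4 bytes in place without truncating the rest
Nothing can be shifted to make room for longer or shorter text, which is why the general case needs the rewrite described next.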
So basically you'd need to:
Start a new file consisting of the first four lines of file1.txt
Skip past the first four lines of file2.txt
Append the rest of file2.txt to the new file
You can do this fairly easily with the head/tail commands from Unix, which you could get from Cygwin if that's an acceptable solution. It's likely that the head/tail from the Windows Services for Unix would work too.

If you grab the coreutils from Gnutils you'll be able to do a lot of the stuff you can do with Cygwin without having to install Cygwin.
Then you can use things like head, tail and cat, which will allow you to do what you're looking to do.
e.g.
head -n 4 file2.txt
to get the first four lines of file2.
Extract the zip from the page linked above, grab whichever of the utils you need out of the bin directory, and put them in a directory on your path; for the commands below you'd want mv, head and tail. You could use the built-in DOS move command instead, but you'd need to change the options slightly.
The question is a little unclear, but if you're looking to remove the first four lines of file2.txt and append them to file1.txt you can do the following:
head -n 4 file2.txt >> file1.txt
tail -n +5 file2.txt >> temp.txt
mv temp.txt file2.txt

With batch alone I'm not sure you can do it.
With Unix commands you can -- and you can easily use Unix commands under Windows using Cygwin.
In that case you want:
#!/bin/bash
head -n 4 file1.txt > result.txt # first 4 lines of file1
tail -n +5 file2.txt >> result.txt # append lines 5, 6, 7... of file2
mv result.txt file2.txt # replace file2.txt with the result

You could do it if you wrote a script in something other than Windows batch. VBScript or JScript with Windows Script Host should be able to do it. Each of those has methods to grab lines from one file and overwrite the lines of another.

You can do this by creating a temporary third file: pull the lines from the first file and add them to the temp file, then read the second file and, after reading in four carriage return/line feed pairs, write the rest to the temp file. Then delete the second file and rename the temp file to the second file's name.

Related

I want to rename multiple files in a directory in Linux based on the delimiter

I need help renaming multiple files in a directory based on the delimiter.
Sample:
From
R01235-XYZ-TRAIL.PDF
TO
R01235-TRAIL.PDF
and
From
XYZ-C12345-TRAIL.PDF
TO
C12345-TRAIL.PDF
Is it possible to delete based on the - delimiter?
I am not specifically removing XYZ, but rather removing anything before the first - and the middle occurrence between two -. XYZ is just a representation of the characters in that field.
Thanks!
I tried sed, ls and mv, and I also tried rename, but it doesn't seem to work for me.
This might work for you:
rename -n 's/XYZ-//' file
This removes XYZ- from the file name.
If this meets your requirements, remove the -n option for the renaming to take place.
In retrospect, perhaps:
rename -n 's/([A-Z][0-9]{5}-).*-/$1/;s/^.*-([A-Z][0-9]{5}-)/$1/' file
With sed:
sed -E 's/^([A-Z][0-9]{5}-).*-|^.*([A-Z][0-9]{5}-.*)/mv & \1\2/' file
Check the results and then:
sed -E 's/^([A-Z][0-9]{5}-).*-|^.*([A-Z][0-9]{5}-.*)/mv & \1\2/' file | sh

sed -n function calling in same line repeatedly

I'm a complete novice with regard to Unix and writing shell scripts, so apologies if the solution to my problem is quite banal.
Essentially though, I'm working on a shell script that reads from a TextEdit file called "sursecout.txt" and runs it through another script called "sursec.x" (where sursec.x is simply a series of FORTRAN integrations). It then creates a folder named after a certain Jacobi integral ("CJ =") and stores the ten SurSec[n] files there (where n = integer). My problem is that the different folders are created correctly with appropriate names, but are each filled with identical output files. My suspicion is that something is wrong with my sed command, in that it's reading the same two lines over and over again (whereas it should be reading the first two lines of sursecout.txt, then the next two, etc.).
Here are the first two folders I want to make, but I have 30 so any help would be appreciated.
./sursec.x < ./sursecout.txt
sed -n '1,2p;3q' sursecout.txt
cd ..
mv ./data ./CJ=3.029990
mkdir data
cd SurSec
./sursec.x < ./sursecout.txt
sed -n '3,4p;5q' sursecout.txt
cd ..
mv ./data ./CJ=3.030659
mkdir data
cd SurSec
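The repeated sed ranges above follow a simple arithmetic pattern; as a rough sketch, they could be generated in a loop like this (assuming sursecout.txt holds 30 two-line blocks, i.e. 60 lines; this only prints each successive pair of lines and doesn't touch the sursec.x/mkdir parts of the script):
for start in $(seq 1 2 59)
do
    sed -n "${start},$((start + 1))p;$((start + 2))q" sursecout.txt    # '1,2p;3q', then '3,4p;5q', and so on
done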

Copy lines from multiple files in subfolders into one file

I'm very very very new to programming and trying to learn how to make tedious analysis tasks a little faster. I have a master folder (Master) with 50 experiment folders, and within each experiment folder is another set of folders holding text files. I want to extract 2 lines from one of the text files (experiment title on line 7, slope on line 104) and copy them to a new single file.
So far, all I have learned is how to extract the lines and add to a new file.
sed -n '7p; 104 p' reco.txt >> results.txt
How can I extract these two lines from all files 'reco.txt' in the subfolder of the folder 'Master' and export into a single text file?
As much explanation as you can bear would be great to help me learn.
You can use find in combination with xargs for this. On its own, you can get a list of all relevant files:
find . -name reco.txt -print
This finds all files named reco.txt in the current directory (.) or any subdirectories and writes them to standard output.
Now, normally you could use the -exec argument to find, which runs a program on the files found: with the \; terminator it runs once per file, while with + multiple results are combined into a single execution (appended to the command line). Your particular invocation of sed only works correctly on one file at a time, since sed numbers the lines of all its input files as one continuous stream.
So, instead of -exec, you can use xargs, which is essentially the same thing but with more control.
find Master -name reco.txt -print0 | xargs -0 -n1 sed -n '7p; 104 p' > results.txt
This does the following:
Searches in the directory Master or subdirectories for any file named reco.txt.
Outputs each filename with null-terminator instead of newline (-print0) -- this allows the full path to contain characters that usually need escaping (such as spaces)
Pipes the result into xargs, which does the following:
Accepts null-terminated strings (-0)
Only puts at most one file into each command (-n1)
Runs sed -n '7p; 104 p' on that file
Entire output is redirected to results.txt, which will overwrite any existing contents in the file.
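For completeness, the -exec form mentioned above would look roughly like this; with the \; terminator, find runs sed once per file, much as xargs -n1 does:
find Master -name reco.txt -exec sed -n '7p; 104 p' {} \; > results.txt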

Concatenate txt file contents and/or add break to all

I have a bunch of .txt files that need to be made into one big file that can be read by programs such as Microsoft Excel.
The problem is that the files currently do not have a line break at the end, so when they are combined they end up on one long line.
Here's an example of what I have (the numbers represent the line number):
1. | first line of txt file
2. | second line
Here's what I want to turn that into:
1. | first line of txt file
2. | second line
3. |
I have around 3000 of these files in a folder, all in the same format. Is there any way to take these files and add a blank line to the end of them all? I'd like to do this without the need for complicated code, e.g. PHP. I know there are similar things you can do using the terminal (I'm on CentOS), but if something does specifically what I require, I'm missing it.
The simplest way to achieve this is with a bash for-loop:
for file in *.txt; do
echo >> "$file"
done
This iterates over all .txt files in the current directory and appends a newline to each file. It can be written on one line; you only need to add a ; before the done.
Note that $file is quoted to handle files with spaces and other funny characters in their names.
If the files are spread across many directories and not all in the same one, you can replace *.txt with **/*.txt (with bash's globstar option enabled) to iterate over all .txt files in all subdirectories of the current folder.
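For example, a recursive version of the same loop (this relies on bash's globstar option, which is off by default and needs bash 4 or newer):
shopt -s globstar        # let ** match files in subdirectories as well
for file in **/*.txt; do
    echo >> "$file"
done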
An alternative way is to use sed:
sed -i "$ s:$:\n:" *.txt
The -i flag tells sed to edit the files in-place. $ matches the last line, and then the s command substitutes the end of the line (again $) with a new line (\n), thus appending a line to the end of the file.
Try this snippet:
for f in *; do (cat "$f" && echo "") > "$f.tmp"; done && rename -f 's/\.tmp$//' *.tmp
This basically takes any file in the folder (for f in *; do).
Outputs the file on STDOUT (cat $f) followed by a newline (echo "")
and redirects the output into filename.tmp (> $f.tmp)
and then moves the *.tmp files to the original files (rename -f 's/\.tmp$//' *.tmp).
Edit:
Or even simpler:
for f in *; do echo "" >> "$f"; done
This basically takes any file in the folder (for f in *; do).
Outputs a newline (echo "")
and appends it to the file (>> $f)

matlab, textfile

I have a bunch of text files which have both strings and numbers in them, but the strings are only in the first few rows.
I'm trying to write a script which goes into my folder, searches all the files in the folder, deletes the text from each file, and writes the rest as it is to a new text file.
Does anybody know how?
I don't think this is a good use of MATLAB.
I think you'd be better off scripting this in Python or shell. Here is one way you could do it with tr in shell, if you're on *nix or a Mac and your files are all in the same directory and all have the file extension .txt:
#!/bin/sh
for i in *.txt
do
tr -d "[:alpha:]" < "$i" > "$i.tr.txt"   # strip all alphabetic characters, write the result to a new file
done
To run, save the code above as a file, make it executable (chmod a+x filename), and run it from the directory containing your text files.
If the number of string lines is always the same, you can use textread() with the 'headerlines' option to skip over those string lines, then write the remaining data out.