split a large text (xyz) database into x equal parts - sed

I want to split a large text database (~10 million lines). I can use a command like
$ sed -i -e '4 s/(dB)//' -e '4 s/Best\ unit/Best_Unit/' -e '1,3 d' '/cygdrive/c/ Radio Mobile/Output/TRC_TestProcess/trc_longlands.txt'
$ split -l 1000000 /cygdrive/P/2012/Job_044_DM_Radio_Propogation/Working/FinalPropogation/TRC_Longlands/trc_longlands.txt 1
The first line is to clean the database and the next is to split it, but then the output files do not have the field names. How can I incorporate the field names into each dataset, and also produce a list that records the original file, the new file name and the line numbers (from the original file)? This is so that it can be used in the ArcGIS model to re-join the final simplified polygon datasets.
Alternatively, and more usefully, as this needs to go into an ArcGIS model, a Python-based solution is best. More details are in https://gis.stackexchange.com/questions/21420/large-point-to-polygon-by-buffer-join-buffer-dissolve-issues#comment29062_21420 and Remove specific lines from a large text file in python.
So, going with a Cygwin-based solution as per the answer by icyrock.com, we have process_text.sh:
cd /cygdrive/P/2012/Job_044_DM_Radio_Propogation/Working/FinalPropogation/TRC_Longlands
mkdir processing
cp trc_longlands.txt processing/trc_longlands.txt
cd txt_processing
sed -i -e '4 s/(dB)//' -e '4 s/Best\ unit/Best_Unit/' -e '1,3 d' 'trc_longlands.txt'
split -l 1000000 trc_longlands.txt trc_longlands_
cat > a
h
1
2
3
4
5
6
7
8
9
^D
split -l 3
split -l 3 a 1
mv 1aa 21aa
for i in 1*; do head -n1 21aa|cat - $i > 2$i; done
for i in 21*; do echo ---- $i; cat $i; done
how can "TRC_Longlands" and the path be replaced with the input filename -in python we have %path%/%name for this.
in the last line is "do echo" necessary?
and this is called from Python using
import os
os.system("process_text.bat")
where process_text.bat is basically
bash process_text.sh
I get the following error when run from DOS:
Microsoft Windows [Version 6.1.7601]
Copyright (c) 2009 Microsoft Corporation. All rights reserved.

C:\Users\georgec>bash P:\2012\Job_044_DM_Radio_Propogation\Working\FinalPropogation\TRC_Longlands\process_text.sh
'bash' is not recognized as an internal or external command,
operable program or batch file.
Also, when I run the bash command from Cygwin, I get:
georgec@ATGIS25 /cygdrive/P/2012/Job_044_DM_Radio_Propogation/Working/FinalPropogation/TRC_Longlands
$ bash process_text.sh
: No such file or directory: /cygdrive/P/2012/Job_044_DM_Radio_Propogation/Working/FinalPropogation/TRC_Longlands
cp: cannot create regular file `processing/trc_longlands.txt\r': No such file or directory
: No such file or directory: txt_processing
: No such file or directoryds.txt
but the files are created in the root directory.
Why is there a "." after the directory name? How can the outputs be given a .txt extension?

If you want to just prepend the first line of the original file to all but the first of the splits, you can do something like:
$ cat > a
h
1
2
3
4
5
6
7
^D
$ split -l 3
$ split -l 3 a 1
$ ls
1aa 1ab 1ac a
$ mv 1aa 21aa
$ for i in 1*; do head -n1 21aa|cat - $i > 2$i; done
$ for i in 21*; do echo ---- $i; cat $i; done
---- 21aa
h
1
2
---- 21ab
h
3
4
5
---- 21ac
h
6
7
Obviously, the first file will have one line fewer than the middle parts and the last part might be shorter, too, but if that's not a problem, this should work just fine. Of course, if your header has more lines, just change head -n1 to head -nX, X being the number of header lines.
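Since the question ultimately needs a Python step for the ArcGIS model, here is a minimal Python sketch of the same idea: keep the field-name line, write it on top of every part, and record a manifest of original file, new file name and line range. The file name, part size and output naming below are placeholders, not your actual setup.

import os

SRC = "trc_longlands.txt"        # placeholder: the cleaned input file
CHUNK = 1000000                  # lines per part, matching split -l 1000000

def write_part(base, part, header, lines):
    out = "%s_%03d.txt" % (base, part)
    with open(out, "w") as dst:
        dst.write(header)        # field names go on top of every part
        dst.writelines(lines)
    return out

base = os.path.splitext(SRC)[0]
with open(SRC) as src, open(base + "_manifest.txt", "w") as manifest:
    header = src.readline()      # line 1 of the original = field names
    part, buf, start, lineno = 0, [], 2, 1
    for lineno, line in enumerate(src, 2):
        buf.append(line)
        if len(buf) == CHUNK:
            part += 1
            out = write_part(base, part, header, buf)
            manifest.write("%s\t%s\t%d-%d\n" % (SRC, out, start, lineno))
            start, buf = lineno + 1, []
    if buf:                      # the last part may be shorter
        out = write_part(base, part + 1, header, buf)
        manifest.write("%s\t%s\t%d-%d\n" % (SRC, out, start, lineno))

The manifest records line numbers from the original file, so it can be fed back to the model that re-joins the simplified polygon datasets.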
Hope this helps.

Related

Make single instance of `sed` with multiple filenames skip to next file

The next command in sed skips to the next line, but with multiple files there doesn't seem to be any command to skip to the next file.
Is there any workaround using only a single invocation of sed?
Demonstration of the problem:
Make two simple 3-number data files:
seq 3 > three ; seq 10 1 13 > thirteen
Show that sed handles multiple files (by finding all lines ending with 3 and printing the filenames) and is somewhat aware of them as distinct objects:
sed -n '/3$/{p;F}' three thirteen
Output:
3
three
13
thirteen
This next attempt to print both last lines doesn't work, however; or rather, it works as though both files were a single stream:
sed -n '$p' three thirteen
Output:
13
See if your version supports the -s option:
$ seq 3 > three ; seq 10 1 13 > thirteen
$ sed -n '$p' three thirteen
13
$ sed -n '2p' three thirteen
2
$ sed -sn '$p' three thirteen
3
13
$ sed -sn '2p' three thirteen
2
11
From man sed:
-s, --separate
consider files as separate rather than as a single continuous long stream.
When using the -i option, GNU sed uses -s by default.
In case the -s option is not available, here's an alternative with perl:
$ perl -ne 'print if eof' three thirteen
3
13
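If neither GNU sed's -s option nor perl is at hand, the same per-file behaviour is easy to sketch in Python (the file names are just the ones from the example; any argument list works):

import sys

# print the last line of each file, treating every file separately
for name in sys.argv[1:] or ["three", "thirteen"]:
    with open(name) as f:
        lines = f.readlines()
    if lines:
        print(lines[-1], end="")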

Replace first line in directory files

I would like to execute this make target to first replace the first line of all CSV files inside the directory and then replace the # with "," on the other lines.
The second command is working fine and does what it is supposed to do, but the first one only replaces the first line of the first file.
Could anyone give me a hand with that?
csv:
	$(DOCKER_RUN) npm run csv-generator
	make format-csv
format-csv:
	@sed -i '' '1 s/^.*$$/"bar","repository"/g' $(CURDIR)/foo/npm/*.csv
	@sed -i '' 's/\(.*\)#/\1","/g' $(CURDIR)/foo/npm/*.csv
The reason that the first sed command "fails" is that sed doesn't reset the line counter between input files (on your system, and neither on my Mac OS X machine, see comments):
$ cat test1
a
b
g
$ cat test2
aa
bb
cc
$ sed -n '=' test1 test2 # the '=' sed command outputs line numbers
1
2
3
4
5
6
This is why the first sed command isn't doing what you want it to do; it only affects the first file's first line.
The solution is to loop over the files and call sed for each of them (untested in Makefile):
@for f in $(CURDIR)/foo/npm/*.csv; do \
	sed -i '' '1 s/^.*$$/"bar","repository"/g' $$f; \
done
Using find and xargs will also work, just make sure that find isn't picking up files further down in the folders.
EDIT: In light of the comments on this answer, I would recommend avoiding the use of sed -i on multiple files altogether and converting both statements into for-loops (in this case, they may be collapsed into one loop with two statements):
@for f in $(CURDIR)/foo/npm/*.csv; do \
	sed -i '' '1 s/^.*$$/"bar","repository"/g' $$f; \
	sed -i '' 's/\(.*\)#/\1","/g' $$f; \
done
In my experience, using for-loops in Makefiles is far more common than using find and xargs. This is probably due to incompatibilities between find and xargs versions across Unices. Explicit loops also make the Makefile a lot easier to read.
I managed to solve it with:
@find $(CURDIR)/foo/npm -name "*.csv" -type f | xargs -L 1 sed -i '' '1 s/^.*$$/"bar"/g'
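If you would rather sidestep the sed -i portability issues entirely, the same two edits can be sketched in Python (the glob path and replacement strings mirror the Makefile above; note this replaces every # on the non-header lines, which is what the question asks for):

import glob

for path in glob.glob("foo/npm/*.csv"):        # adjust to the real directory
    with open(path) as f:
        lines = f.readlines()
    if not lines:
        continue
    lines[0] = '"bar","repository"\n'          # rewrite the whole first line
    lines[1:] = [line.replace("#", '","') for line in lines[1:]]
    with open(path, "w") as f:
        f.writelines(lines)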

Delete lines by pattern in specific range of lines

I want to remove lines from a file by regex pattern using sed, just like in this question: Delete lines in a text file that contain a specific string, but only inside a range of lines (not in the whole file). I want to do it starting from some line number till the end of the file.
This is how I've done it in combination with tail:
tail -n +731 file|sed '/some_pattern/d' >> file
then manually remove the edited range in the file from the previous step.
Is there a shorter way to do it with sed only?
Something like sed -i '731,1000/some_pattern/d' file?
You can use this sed:
sed -i.bak '731,1000{/some_pattern/d}' yourfile
Test:
$ cat a
1
2
3
13
23
4
5
$ sed '2,4{/3/d}' a
1
2
23
4
5
You need the $ address to match the end of the file. With GNU sed:
sed -i '731,${/some_pattern/d;}' file
Note that this can be slower than tail -n +number, because sed will start processing at the start of the file instead of doing an lseek() like tail.
(With BSD sed you need sed -i '' ...)
sed is for simple substitutions on individual lines, that is all. For anything even marginally more interesting, an awk solution will be clearer, more robust, portable, maintainable, extensible and better in just about every other desirable attribute of software.
Given this sample input file:
$ cat file
1
2
3
4
1
2
3
4
1
2
3
4
The following script will print every line except lines containing the number 3 that occur after the 6th line of the input file:
$ awk '!(NR>6 && /3/)' file
1
2
3
4
1
2
4
1
2
4
Want to only do the deletion between lines 6 and 10? No problem:
$ awk '!(NR>6 && NR<10 && /3/)' file
1
2
3
4
1
2
4
1
2
3
4
Want the skipped lines written to a log file? No problem:
awk 'NR>6 && /3/{print > "log";next} {print}' file
Written to stderr?
awk 'NR>6 && /3/{print | "cat>&2";next} {print}' file
Want a count of how many lines you deleted also written to stderr?
awk 'NR>6 && /3/{print | "cat>&2"; cnt++; next} {print} END{print cnt | "cat>&2"}' file
ANYTHING you want to do additionally or differently will be easy and build on what you start with. Try doing any of the above, or just about anything else, with a sed script that satisfies your original requirement.
awk to the rescue!
awk '!(NR>=731 && /pattern/)' input > output
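The same filter is also only a few lines of Python, if that fits the surrounding tooling better (the line number and pattern are the ones from the question; the file names are placeholders):

import re

START = 731                            # first line (1-based) where deletion applies
pattern = re.compile(r"some_pattern")  # placeholder pattern

with open("input.txt") as src, open("output.txt", "w") as dst:
    for lineno, line in enumerate(src, 1):
        if lineno >= START and pattern.search(line):
            continue                   # drop matching lines from line 731 onwards
        dst.write(line)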

Extract every nth number from a txt file

So I have a txt file where I need to extract every third number and print it to a separate file using the Terminal. The txt file is just a long list of numbers, tab delimited:
18 25 0 18 24 5 18 23 5 18 22 8.2 ...
I know there is a way to do this using sed or awk, but so far I've only been able to extract every third line by using:
awk 'NR%3==1' testRain.txt > rainOnly.txt
So here's the answer (or rather, the answer I utilized!):
xargs -n1 < input.txt | awk '!(NR%3)' > output.txt
This gives you an output.txt that has every third number of the original file on a separate line.
A quick pipe line to extract every 3rd number:
$ xargs -n1 < file | sed '3~3!d'
0
5
5
8.2
If you don't want each number on a newline throw the result back through xargs:
$ xargs -n1 < file | sed '3~3!d' | xargs
0 5 5 8.2
Use redirection to store the output in a new file:
$ xargs -n1 < file | sed '3~3!d' | xargs > new_file
With awk using a simple for loop you could do:
$ awk '{for(i=3;i<=NF;i+=3)print $i}' file
0
5
5
8.2
or (adds a trailing tab):
$ awk '{for(i=3;i<=NF;i+=3)printf "%s\t",$i;print ""}' file
0 5 5 8.2
Or by setting the value of RS (adds trailing newline):
$ awk '!(NR%3)' RS='\t' file
0
5
5
8.2
$ awk '!(NR%3)' RS='\t' ORS='\t' file
0 5 5 8.2
You can print every third character by substituting the next two with nothing, globally. When the count straddles a newline, using Perl might be the simplest solution:
perl -p000 -e 's/(.)../$1/gs'
If you want the first, fourth, etc. character from every line, a line-oriented tool like sed suffices:
sed 's/\(.\)../\1/g'
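In Python the same character-level decimation is just a slice; a short sketch that, like the Perl version, counts straight across newlines:

import sys

text = sys.stdin.read()
print(text[::3], end="")    # keep the 1st, 4th, 7th, ... character of the stream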
Using grep -P
grep -oP '([^\t]+\t){2}\K[^\t\n]+' file
0
5
5
8.2
This might work for you (GNU sed):
sed -r 's/(\S+\s){3}/\1/g;s/\s$//' file
@user2718946
Your solution was close, but here you are without xargs.
awk 'NR%3==1' RS=" " file
18
18
18
18
Different start:
awk 'NR%3==0' RS=" " file
0
5
5
8.2
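For a pure-Python take on the same task (file names are placeholders), split on whitespace and slice out every third number:

with open("input.txt") as src, open("output.txt", "w") as dst:
    numbers = src.read().split()    # tab/space separated values
    for value in numbers[2::3]:     # every third number: the 3rd, 6th, 9th, ...
        dst.write(value + "\n")

For the sample line this writes 0, 5, 5 and 8.2, matching the outputs above.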

strip the last and first character from a String

Is it fairly easy to strip the first and last character from a string using awk/sed?
Say I have this string:
( 1 2 3 4 5 6 7 )
I would like to strip parentheses from it.
How should I do this?
sed way
$ echo '( 1 2 3 4 5 6 7 )' | sed 's/^.\(.*\).$/\1/'
1 2 3 4 5 6 7
awk way
$ echo '( 1 2 3 4 5 6 7 )' | awk '{print substr($0, 2, length($0) - 2)}'
1 2 3 4 5 6 7
POSIX sh way
$ var='( 1 2 3 4 5 6 7 )'; var="${var#?}"; var="${var%?}"; echo "$var"
1 2 3 4 5 6 7
bash way
$ var='( 1 2 3 4 5 6 7 )'; echo "${var:1: -1}"
1 2 3 4 5 6 7
If you use bash, then use the bash way.
If not, prefer the POSIX sh way; it is faster than loading sed or awk.
Other than that, you may also be doing other text processing that you can combine with this, so depending on the rest of the script you may benefit from using sed or awk in the end.
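For completeness, the trimming is just slicing in Python (a small sketch; the string is the one from the question, and .strip() additionally drops the spaces left next to the parentheses):

s = "( 1 2 3 4 5 6 7 )"
print(s[1:-1])           # drops exactly the first and last character
print(s[1:-1].strip())   # '1 2 3 4 5 6 7' (also trims the leftover spaces)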
Why doesn't this work: sed '..' s_res.temp > s_res.temp ?
This does not work, as the redirection > will truncate the file before it is read.
To solve this you have some choices:
What you really want to do is edit the file. sed is a stream editor, not a file editor.
ed, though, is a file editor (the standard one, too!). So, use ed:
$ printf '%s\n' "%s/^.\(.*\).$/\1/" "." "wq" | ed s_res.temp
Use a temporary file, and then mv it to replace the old one:
$ sed 's/^.\(.*\).$/\1/' s_res.temp > s_res.temp.temp
$ mv s_res.temp.temp s_res.temp
Use the -i option of sed. Note that -i is not POSIX; the form below is GNU sed's (BSD sed needs -i ''):
$ sed -i 's/^.\(.*\).$/\1/' s_res.temp
Abuse the shell (not really recommended):
$ (rm test; sed 's/XXX/printf/' > test) < test
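For comparison, the safe pattern behind these approaches (read everything before you reopen the file for writing) looks like this in Python (a sketch; the file name comes from the example and the slice mirrors the s/^.\(.*\).$/\1/ substitution):

with open("s_res.temp") as f:
    lines = [line.rstrip("\n") for line in f]   # read everything before truncating
with open("s_res.temp", "w") as f:
    for line in lines:
        f.write(line[1:-1] + "\n")              # drop the first and last character of each line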
On Mac OS X (latest version 10.12, Sierra) bash is stuck at version 3.2.57, which is quite old. One can always install bash using brew and get version 4.x, which supports the substring expansion needed for the bash way above to work.
There is a collection of bash versions and respective changes, compiled on the bash-hackers wiki.
To remove the first and last characters from a given string, I like this sed:
sed -e 's/^.//' -e 's/.$//'
#         ^^          ^^
#     first char   last char
See an example:
sed -e 's/^.//' -e 's/.$//' <<< "(1 2 3 4 5 6 7)"
1 2 3 4 5 6 7
And also a perl way:
perl -pe 's/^.|.$//g'
If I want to remove the first (1) character and the last two (2) characters using sed:
Input: "t2.large",
Output: t2.large
sed -e 's/^.//' -e 's/..$//'