I am cleaning up a CSV dataset. I only want to keep records in which all fields are complete and have the right type of values. This is what I tried:
sed -r '{
/regex_pattern/!d
more commands follow...
}' $1
The program works just fine and does what it is supposed to do. The problem is that it also removes the very first line (the header line), since it does not match regex_pattern. I know there is a way to specify the range in which a command should apply, for example:
sed '2,$ s/A/a/'
will do substitutions on data skipping the header line. Based on this logic I tried:
sed -r '{
2,$/regex_pattern/!d
more commands follow...
}' $1
so that the header line would be left untouched; however, this code does not run at all. So what (and why) would be the right command to do what I am intending?
As an example, imagine my csv file is fruits.csv and that my regex_pattern is [0-9]+,[0-9]+
apples,oranges
20,5
7,3
,4
a,b
12,22
When I call the .sh script that contains the sed commands, it should output:
apples,oranges
20,5
7,3
12,22
So, note that:
Header line was not deleted even though it does not match the regex_pattern.
Line number 4, i.e. ",4" was deleted as it does not match the regex_pattern.
Line number 5, i.e. "a,b" was deleted as it does not match the regex_pattern.
Any help is very much appreciated and I wish to thank you all in advance.
Kind regards.
You could write it like this, matching the whole line, starting at the second line:
sed -r '
2,${/^[0-9]+,[0-9]+$/!d}
' file
Output
apples,oranges
20,5
7,3
12,22
If you also want to allow single numbers or more than just 2 comma separated numbers:
sed -r '
2,${/^[0-9]+(,[0-9]+)*$/!d}
' file
Using sed
$ sed '2,${/[0-9]\+,[0-9]\+/!d}' input_file
apples,oranges
20,5
7,3
12,22
any one of these should work in gawk, mawk 1/2, or macOS nawk:
mawk 'NF-_^(NF==NR)' FS='^[0-9]+,[0-9]+$'
nawk '(NF!=NR)!=NF' FS='^[0-9]+,[0-9]+$'
gawk 'NF-(NF!~NR)' FS='^[0-9]+,[0-9]+$'
apples,oranges
20,5
7,3
12,22
More concisely:
mawk -F'[0-9]+,[0-9]+' '(NF<NR)-NF' # using FS
gawk '/[0-9]+,[0-9]+/^+(NF<NR)' # not using FS
nawk '(NF<NR)<=/([0-9]+,?){2}/' # same approach, rev. order
mawk '(NF~NR)-/[0-9]+,[0-9]+/' # truly fringe but
# concise syntax
nawk '(NF~NR)!=/([0-9]+,?){2}/' # same approach, to
# circumvent nawk peculiarities
sed is a bad choice for working with CSVs since it doesn't have any inbuilt functionality for working with fields, nor literal strings, nor variables, doesn't use EREs by default (all of the answers you have so far will only work with GNU sed), etc. To do what you specifically want with any awk in any shell on every Unix box is simply:
$ awk 'NR==1 || /[0-9]+,[0-9]+/' file
apples,oranges
20,5
7,3
12,22
which says "if the current line number (stored in NR) is 1 or the regexp matches the current line contents then print the line". Anything else you want to do with your CSV will also be easier with awk than with sed.
Meh, I would just preserve the first line.
sed -r '
1{p;d}
/regex_pattern/!d
more commands follow...
' "$1"
or run it not for first line:
1!{
/regex_pattern/!d
more commands follow...
}
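As a concrete sketch, here is the first form applied to the question's fruits.csv (the regex is filled in from the question's example; GNU sed is assumed for -r):

```shell
# Recreate the sample fruits.csv from the question
printf '%s\n' 'apples,oranges' '20,5' '7,3' ',4' 'a,b' '12,22' > fruits.csv

# 1{p;d} prints the header and skips the rest of the script for it;
# every other line is deleted unless it matches the pattern
sed -r '
1{p;d}
/^[0-9]+,[0-9]+$/!d
' fruits.csv
```

This prints the header followed by 20,5, 7,3 and 12,22, exactly as the question asks.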
This might work for you (GNU sed):
sed -E '1!{/^[0-9]+,[0-9]+$/!d}' file
If it is not the first line, delete any line that does not consist of a pair of comma-separated natural numbers.
Alternative:
sed -E '1b;/^[0-9]+,[0-9]+$/!d' file
Or:
sed -nE '1p;1b;/^[0-9]+,[0-9]+$/p' file
Let's say I have a line #SYM
I need to replace it with all lines from file1.txt
Is it possible to do that with sed?
I have tried sed 's/#SYM/file1.txt/' updater
But that doesn't work, because I need to load file1.txt as a string, and I do not know how to do that.
EDIT: I believe that there could be a way to do it in a shell script somehow.
EDIT2: I also just tried this:
#!/bin/bash
value=$(<tools/symlink)
sed -i 's/#SYM/$value/' META-INF/com/google/android/updater-script
Use r command:
sed -e '/#SYM/ {r tools/symlink' -e 'd}' META-INF/com/google/android/updater-script
/#SYM/ {r tools/symlink if a line contains #SYM, append the contents of tools/symlink
d} then delete the matching line
the two commands are separated using -e option because everything after r is considered as part of filename
Add the -i option once you are satisfied that it is working.
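A self-contained sketch of the same technique, with throwaway file names standing in for the real paths:

```shell
# A template containing the #SYM placeholder, and the file to splice in
printf '%s\n' 'before' '#SYM' 'after' > updater-script
printf '%s\n' 'line one' 'line two' > symlink

# r queues the file's contents for output at the end of the cycle,
# so they still appear even though d deletes the matching line
sed -e '/#SYM/ {r symlink' -e 'd}' updater-script
```

The output is before, line one, line two, after: the placeholder line has been replaced by the file's contents.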
On a Unix system I am trying to add a new line in a file using sed or perl but it seems I am missing something.
Suppose my file has multiple lines of text, always ending like this: {TNG:}}${1:F01.
I am trying to find a way to add a new line after the }$, so that {1 always starts on a new line.
I tried it by escaping $ sign using this:
perl -e '$/ = "\${"; while (<>) { s/\$}\{$/}\n{/; print; }' but it does not work.
Any ideas will be appreciated.
give this a try:
sed 's/{TNG:}}\$/&\n/' file > newfile
sed by default uses BRE, that is, the {}s are literal characters; but we must escape the $. (The \n in the replacement is a GNU sed extension.)
kent$ cat f
{TNG:}}${1:F01.
kent$ sed 's/{TNG:}}\$/&\n/' f
{TNG:}}$
{1:F01.
With perl:
$ cat input.txt
line 1 {TNG:}}${1:F01
line 2 {TNG:}}${1:F01
$ perl -pe 's/TNG:\}\}\$\K/\n/' input.txt
line 1 {TNG:}}$
{1:F01
line 2 {TNG:}}$
{1:F01
(Read up on the -p and -n options in perlrun and use them instead of trying to do what they do in a one-liner yourself)
How would I delete all lines (from a text file) which contain only two dots, with random data between the dots. Some lines have three or more dots, and I need those to remain in the file. I would like to use sed.
Example Dirty File:
.dirty.dig
.please.dont.delete.me
.delete.me
.dont.delete.me.ether
.nnoooo.not.meee
.needto.delete
Desired Output:
.please.dont.delete.me
.dont.delete.me.ether
.nnoooo.not.meee
It would be simpler to use awk here:
$ awk -F. 'NF!=3' ip.txt
.please.dont.delete.me
.dont.delete.me.ether
.nnoooo.not.meee
-F. use . as delimiter
NF!=3 print all lines where number of input fields is not equal to 3
this will retain lines like abc.xyz
to retain only lines with more than 2 dots, use awk -F. 'NF>3' ip.txt
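To see the difference between NF!=3 and NF>3, here is a quick sketch using an invented one-dot line alongside lines from the question:

```shell
# One-dot, two-dot and four-dot lines
printf '%s\n' 'abc.xyz' '.delete.me' '.please.dont.delete.me' > ip.txt

# NF!=3 keeps everything except two-dot lines, so abc.xyz survives
awk -F. 'NF!=3' ip.txt

# NF>3 keeps only lines with more than two dots, so abc.xyz is dropped
awk -F. 'NF>3' ip.txt
```

With -F. a line with two dots splits into exactly 3 fields, which is what both conditions test.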
sed '/^[^.]*\.[^.]*\.[^.]*$/d' file
Output:
.please.dont.delete.me
.dont.delete.me.ether
.nnoooo.not.meee
See: The Stack Overflow Regular Expressions FAQ
sed is for making substitutions; to just Globally search for a Regular Expression and Print the result there's a whole other tool designed for exactly that purpose and even named after it - grep.
grep -v '^[^.]*\.[^.]*\.[^.]*$' file
or with GNU grep for EREs:
$ grep -Ev '^[^.]*(\.[^.]*){2}$' file
.please.dont.delete.me
.dont.delete.me.ether
.nnoooo.not.meee
I don't have a sed handy to double-check, but
/^[^.]*\.[^.]*\.[^.]*$/d
should match and delete all lines that have two dots with non-dot strings before and between them.
This might work for you (GNU sed):
sed 's/\./&/3;t;s//&/2;T;d' file
If s/\./&/3 succeeds (3 or more dots), t branches to the end and the line is printed. Otherwise, if s//&/2 fails (fewer than 2 dots; the empty regex reuses the previous one), T branches to the end and the line is printed. Otherwise the line has exactly 2 dots and d deletes it.
Another way:
sed -r '/^([^.]*\.[^.]*){2}$/d' file
I have 'file1' with (say) 100 lines. I want to use sed or awk to print lines 23, 71 and 84 (for example) to 'file2'. Those 3 line numbers are in a separate file, 'list', with each number on a separate line.
When I use either of these commands, only line 84 gets printed:
for i in $(cat list); do sed -n "${i}p" file1 > file2; done
for i in $(cat list); do awk 'NR==x {print}' x=$i file1 > file2; done
Can a for loop be used in this way to supply line addresses to sed or awk?
This might work for you (GNU sed):
sed 's/.*/&p/' list | sed -nf - file1 >file2
Use list to build a sed script.
You need to put the > after the loop in order to capture everything; since you are using it inside the loop, the file gets overwritten on every iteration, so inside the loop you would need >> instead.
Good practice is to use > outside the loop, so the file is not reopened for writing on every loop iteration.
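A sketch of the corrected loop on toy data, with the redirection moved outside the loop (while read is used instead of for so each line number is read cleanly):

```shell
# Toy data: a 5-line file1 and a list of wanted line numbers
printf 'line %d\n' 1 2 3 4 5 > file1
printf '%s\n' 2 4 > list

# One redirection for the whole loop: file2 is opened (and truncated) once
while read -r i; do
  sed -n "${i}p" file1
done < list > file2

cat file2
```

file2 now contains line 2 and line 4, not just the last match.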
However, you can do everything in awk without a for loop.
awk 'NR==FNR{a[$1]++;next}FNR in a' list file1 > file2
You have to use >> (append to the file); you are currently overwriting the file on each iteration, which is why you always get only line 84 in file2.
Try use,
for i in $(cat list); do sed -n "${i}p" file1 >> file2; done
With sed:
sed -n $(sed -e 's/^/-e /' -e 's/$/p/' list) input
given the example input, the inner command creates a string like this:
-e 23p
-e 71p
-e 84p
so the outer sed then prints the given lines.
You can avoid running sed/awk in a for/while loop altogether:
# store all lines numbers in a variable using pipe
lines=$(echo $(<list) | sed 's/ /|/g')
# print lines of specified line numbers and store output
awk -v lineS="^($lines)$" 'NR ~ lineS' file1 > out
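A runnable sketch of the same pipeline on toy data (cat list is used here in place of $(<list), which is a bashism, so the sketch also runs under plain sh):

```shell
# Toy data: a 5-line file1 and a list of wanted line numbers
printf 'line %d\n' 1 2 3 4 5 > file1
printf '%s\n' 2 4 > list

# Word splitting joins the numbers with spaces; sed turns them into 2|4
lines=$(echo $(cat list) | sed 's/ /|/g')

# NR ~ lineS matches the line number against the anchored alternation ^(2|4)$
awk -v lineS="^($lines)$" 'NR ~ lineS' file1 > out
cat out
```

This prints line 2 and line 4; NR is converted to a string and matched against the dynamic regex.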