My input is split across multiple lines; I want the output on a single line.
For example, the input is:
1|23|ABC
DEF
GHI
newline
newline
2|24|PQR
STU
LMN
XYZ
newline
Output:
1|23|ABC DEF GHI
2|24|PQR STU LMN XYZ
Well, here is one for awk:
$ awk -v RS="" -F"\n" '{$1=$1}1' file
RS="" puts awk in paragraph mode (records are separated by blank lines), -F"\n" makes each line a field, $1=$1 rebuilds the record with the default output field separator (a space), and the trailing 1 prints it.
Output:
1|23|ABC DEF GHI
2|24|PQR STU LMN XYZ
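A quick runnable check of that one-liner, recreating the sample input from the question (the file name `file` is from the example):

```shell
# Two records separated by blank lines, as in the question.
printf '1|23|ABC\nDEF\nGHI\n\n\n2|24|PQR\nSTU\nLMN\nXYZ\n\n' > file
# Paragraph mode joins each blank-line-separated block into one line.
awk -v RS="" -F"\n" '{$1=$1}1' file
```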
I altered some code from the SoloLearn app and got confused:
import re
pattern = r'(.+)(.+) \2'
match = re.match(pattern, 'ABC bca cab ABC')
if match:
    print('Match 1', match.group())
match = re.match(pattern, 'abc BCA cab BCA')
if match:
    print('Match 2', match.group())
match = re.match(pattern, 'abc bca CAB CAB')
if match:
    print('Match 3', match.group())
And I am getting this output:
Match 1 ABC bca ca
Match 3 abc bca CAB CAB
Any help?
I have a text file with the below format:
Text: htpps:/xxx
Expiry: ddmm/yyyy
object_id: 00
object: ABC
auth: 333
RequestID: 1234
Text: htpps:/yyy
Expiry: ddmm/yyyy
object_id: 01
object: NNN
auth: 222
RequestID: 3456
and so on
...
I want to delete all lines except those prefixed with "Expiry:", "object:" and "object_id:",
and then load the result into a table in PostgreSQL.
Would really appreciate your help on the above two tasks.
thanks
Nick
I'm sure there will be other methods, but here is an iterative approach that works if every object has the same format:
Text: htpps:/xxx
Expiry: ddmm/yyyy
object_id: 00
object: ABC
auth: 333
RequestID: 1234
Then, assuming a blank line separates consecutive objects, you can transform the above with
awk '{ printf "%s\n", $2 }' test.txt | tr '\n' ',' | sed 's/,,/\n/g' | sed '$ s/.$//'
and, for your example, it will generate the entries in CSV format:
htpps:/xxx,ddmm/yyyy,00,ABC,333,1234
htpps:/yyy,ddmm/yyyy,01,NNN,222,3456
The above pipeline does:
awk '{ printf "%s\n", $2 }': prints only the second field of each row (an empty line for each blank separator line)
tr '\n' ',': replaces newlines with commas
sed 's/,,/\n/g': turns each double comma (a former blank separator line) back into a newline, giving one record per line
sed '$ s/.$//': removes the trailing comma
Of course, this is probably an oversimplified example, but you can use it as a basis. Once the file is in CSV format you can load it with psql.
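For a concrete run, here is a sketch that builds a sample input and runs the pipeline end to end (it assumes the objects are separated by a blank line, and GNU sed for the \n in the replacement):

```shell
# Build a sample input file in the format from the question.
cat > test.txt <<'EOF'
Text: htpps:/xxx
Expiry: ddmm/yyyy
object_id: 00
object: ABC
auth: 333
RequestID: 1234

Text: htpps:/yyy
Expiry: ddmm/yyyy
object_id: 01
object: NNN
auth: 222
RequestID: 3456
EOF
# Run the pipeline and save the CSV result.
awk '{ printf "%s\n", $2 }' test.txt | tr '\n' ',' | sed 's/,,/\n/g' | sed '$ s/.$//' > test.csv
cat test.csv
```

From there, something along the lines of \copy my_table from 'test.csv' with (format csv) in psql would load it (the table name my_table and its columns are assumptions, not from the original post).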
I have a table with 3 columns - type, name, and code.
The code column contains the procedure/function source code.
I have exported it to a CSV file using the Import/Export option in pgAdmin 4 v5, but the code column does not stay within a single cell in the CSV file; its data spreads across many rows and columns.
I have the encoding set to UTF8, which normally works fine when exporting other tables.
Other settings: Format: csv, Encoding: UTF8. I have not changed any other settings.
Can someone explain how to export it properly?
An explanation of what you are seeing:
CREATE TABLE public.csv_test (
fld_1 character varying,
fld_2 character varying,
fld_3 character varying,
fld_4 character varying
);
insert into csv_test values ('1', E'line with line end. \n New line', 'test', 'dog');
insert into csv_test values ('2', E'line with line end. \n New line', 'test', 'dog');
insert into csv_test values ('3', E'line with line end. \n New line \n Another line', 'test2', 'cat');
insert into csv_test values ('4', E'line with line end. \n New line \n \t Another line', 'test3', 'cat');
select * from csv_test ;
fld_1 | fld_2 | fld_3 | fld_4
-------+-----------------------+-------+-------
1 | line with line end. +| test | dog
| New line | |
2 | line with line end. +| test | dog
| New line | |
3 | line with line end. +| test2 | cat
| New line +| |
| Another line | |
4 | line with line end. +| test3 | cat
| New line +| |
| Another line | |
\copy csv_test to csv_test.csv with (format 'csv');
\copy csv_test to csv_test.txt;
--fld_2 has line ends and/or tabs so in CSV the data will wrap inside the quotes.
cat csv_test.csv
1,"line with line end.
New line",test,dog
2,"line with line end.
New line",test,dog
3,"line with line end.
New line
Another line",test2,cat
4,"line with line end.
New line
Another line",test3,cat
-- In text format the line ends and tabs are shown and not wrapped.
cat csv_test.txt
1 line with line end. \n New line test dog
2 line with line end. \n New line test dog
3 line with line end. \n New line \n Another line test2 cat
4 line with line end. \n New line \n \t Another line test3 cat
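To confirm that nothing is actually lost in the CSV export, feed it to any CSV-aware reader: the quoted, multi-line field comes back as a single cell. A quick sketch of such a check (it assumes python3 is on the PATH; the csv module handles the quoted embedded newline):

```shell
# One logical CSV record spread over two physical lines, as in the export above.
cat > csv_test.csv <<'EOF'
1,"line with line end.
 New line",test,dog
EOF
# Print how many records and how many fields a CSV parser sees.
python3 -c 'import csv; rows = list(csv.reader(open("csv_test.csv"))); print(len(rows), len(rows[0]))'
```

A CSV-aware loader (including PostgreSQL's \copy ... with (format 'csv')) reads this back as one row of four columns.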
I want to replace all occurrences except the first.
I have txt file:
AAA
BBB
CCC
AAA
BBB
CCC
AAA
BBB
CCC
I want to get this:
AAA
BBB <-- stay same
CCC
AAA
XXX <-- 2nd find replaced
CCC
AAA
XXX <-- 3rd and nth find replaced
CCC
I'm looking for something similar to this, but for whole lines, not for words within lines:
sed -i 's/AAA/XXX/2' ./test01
Use branching:
sed -e'/BBB/ {ba};b; :a {n;s/BBB/XXX/;ba}'
I.e. on the first BBB we branch to :a; otherwise b without a label finishes the cycle and starts processing the next line.
Under :a, n prints the current line and reads in a new one, s/BBB/XXX/ replaces any BBB, and we branch back to a.
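The branching script can be exercised directly. Here is a runnable sketch, with the script split across -e options (one per label/branch) so the labels also parse in seds that require them to end a script line:

```shell
# Recreate the sample file from the question.
printf 'AAA\nBBB\nCCC\nAAA\nBBB\nCCC\nAAA\nBBB\nCCC\n' > test01
# First BBB passes through untouched; every later BBB becomes XXX.
sed -e '/BBB/ba' -e 'b' -e ':a' -e 'n' -e 's/BBB/XXX/' -e 'ba' test01
```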
The following awk may also help you with the same; it counts how many times the line has been seen and replaces BBB from the second occurrence onwards:
awk '$0=="BBB" && ++a[$0]>1{$0="XXX"} 1' Input_file
$ # replace all occurrences greater than s
$ # use gsub instead of sub to replace all occurrences in line
$ # whole line: awk -v s=1 '$0=="AAA" && ++c>s{$0="XXX"} 1' ip.txt
$ awk -v s=1 '/AAA/ && ++c>s{sub(/AAA/, "XXX")} 1' ip.txt
AAA
BBB
CCC
XXX
BBB
CCC
XXX
BBB
CCC
$ # replace exactly when occurrence == s
$ awk -v s=2 '/AAA/ && ++c==s{sub(/AAA/, "XXX")} 1' ip.txt
AAA
BBB
CCC
XXX
BBB
CCC
AAA
BBB
CCC
Further reading: Printing with sed or awk a line following a matching pattern
awk '/BBB/{c++;if(c >=2)sub(/BBB/,"XXX")}1' file
AAA
BBB
CCC
AAA
XXX
CCC
AAA
XXX
CCC
As long as your file does not contain null characters (\0), you can make sed treat the whole file as one big string by instructing it to separate records on the null character \0 instead of the default \n, using the -z option:
$ sed -z 's/BBB/XXX/2g' file66
AAA
BBB
CCC
AAA
XXX
CCC
AAA
XXX
CCC
/2g at the end means: from the second match onwards, replace globally.
You can combine -i with -z without problem.
I have a text file that looks like this:
AAA
BBB
CCC
AAA
DDD
EEE
It has a specific keyword, for example AAA. After encountering the keyword, I'd like to copy the following line and then write it a second time in my output file.
I want it to look like this:
AAA
BBB
BBB
CCC
AAA
DDD
DDD
EEE
Can anybody help me do this?
Sed can do it like this:
$ sed '/AAA/{n;p}' infile
AAA
BBB
BBB
CCC
AAA
DDD
DDD
EEE
This looks for the pattern (/AAA/), then reads the next line of input (n) and prints it (p). Because printing is the default action anyway, the line gets printed twice, which is what we want.
awk to the rescue!
$ awk 'd{print;d=0} /AAA/{d=1}1' file
AAA
BBB
BBB
CCC
AAA
DDD
DDD
EEE
Explanation
d{print;d=0}
if flag d is set, print the line and reset the flag,
/AAA/{d=1}
set a flag to duplicate the line for the given pattern,
1
and print all lines.
You can use perl for this
perl -e '$a = undef;
    while (<>) {
        chomp;
        if ($a eq "AAA") {
            print "$_\n";
        }
        print "$_\n";
        $a = $_;
    }' your_file.txt
This iterates through the file and prints each line; if the previous line was "AAA", the current line is printed twice.
I don't know whether you share my hatred of one-line programs, but this is entirely possible in Perl
$ perl -ne'print; print scalar <> x 2 if /AAA/' aaa.txt
output
AAA
BBB
BBB
CCC
AAA
DDD
DDD
EEE