I have a text file of mysqldump commands. It looks something like this...
INSERT INTO `agent_new` VALUES ('1', 'abc'), ('2', 'xyz'), ('3', 'pqr');
I need the last entry from the list of tuples. I also need the table name. So the expected output will be
INSERT INTO `agent_new` VALUES ('3', 'pqr');
The last row will always have a semicolon at the end, e.g. 'pqr');
I need to extract the last row to compare it with the actual database. I could use tail, but I would like to know if there is a better way.
Here's one way using sed:
sed 's/(.*(/(/' file
Results:
INSERT INTO `agent_new` VALUES ('3', 'pqr');
This might work for you (GNU sed):
sed "s/('[^']*', '[^']*'), //g" file
or:
sed 's/('\''.*'\''), //' file
Using delimiters, you should be able to pick through the stream and take what you want.
I haven't looked at the syntax in a while, so I don't want to write code that isn't 100% correct, but I recommend searching for some examples involving delimiters.
Sorry I couldn't be more help.
sed -E 's/\(.*\), //' filename
This should do what you want.
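For comparison, here is a hypothetical awk sketch of the same idea (not from the thread): split the line on the tuple separator "), (" so the last array element is the final tuple, then rebuild the statement from the first "(" onward.

```shell
printf '%s\n' "INSERT INTO \`agent_new\` VALUES ('1', 'abc'), ('2', 'xyz'), ('3', 'pqr');" |
awk '{
  n = split($0, t, /\), \(/)        # t[n] is the last tuple plus the closing ");"
  if (n > 1) sub(/\(.*/, "(" t[n])  # replace everything from the first "(" onward
  print
}'
```

Like the sed answers above, this assumes the values themselves never contain the literal sequence "), (".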
I have a CSV file that is causing me serious headaches going into Tableau. Some of the rows in the CSV are wrapped in a " " and some not. I would like them all to be imported without this (i.e. ignore it on rows that have it).
Some data:
"1;2;Red;3"
1;2;Green;3
1;2;Blue;3
"1;2;Hello;3"
Do you have any suggestions?
If you have a bash prompt hanging around...
You can use cat to output the file contents so you can make sure you're working with the right data:
cat filename.csv
Then, pipe it through sed so you can visually check that the quotes were deleted:
cat filename.csv | sed 's/"//g'
If the output looks good, use the -i flag to edit the file in place:
sed -i 's/"//g' filename.csv
All quotes should now be missing from filename.csv
If your data has quotes in it, and you want to only strip the quotes that appear at the beginning and end of each line, you can use this instead:
sed -i 's/^"\(.*\)"$/\1/' filename.csv
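A quick way to see the difference between the two substitutions, using a made-up line that contains an embedded quote:

```shell
printf '%s\n' '"1;2;say "hi";3"' | sed 's/"//g'            # -> 1;2;say hi;3
printf '%s\n' '"1;2;say "hi";3"' | sed 's/^"\(.*\)"$/\1/'  # -> 1;2;say "hi";3
```

The anchored version only strips the outer pair, leaving quotes inside the field values alone.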
It's not the most elegant way to do it in Tableau, but if you cannot remove the quotes in the source file, you could create calculated fields for the first and last columns that strip the quotation marks.
Right-click on the field for the first column and choose Create/Calculated Field.
Use this formula: INT(REPLACE([FirstColumn],'"',''))
Name the column accordingly.
Do the same for the last column.
This assumes the data you provided matches the data you actually work on, and that these fields are integer fields (hence the INT() usage). If they are string fields, you would want to make sure you don't remove quotation marks that belong to the field value.
I'm trying to format a postgres dump (pg_dump) to be able to import it using a JDBC connection. pg_dump exports text fields that contain newlines as just that, text with newlines, so when I later try to import using JDBC I reach the end of the line and the statement fails.
What I want to do is take the dump, pass it through sed, and escape all newlines, so that I end up with one INSERT statement per line. The problem is that I cannot just remove all newlines; I only want to remove newlines that do not match );\nINSERT INTO. Is there a simple way to do just this?
Update:
A sample would look like this:
INSERT INTO sometable (123, 'And here goes some text
with
newlines
in
it', 'some more fields');
and the result I'm looking for is something like this:
INSERT INTO sometable (123, 'And here goes some text\nwith\nnewlines\nin\nit', 'some more fields');
So that each INSERT statement ends up on a single line, with the string's newlines escaped.
Not a sed solution, but might the following work? It escapes the trailing newline of every line that does not end in ");":
perl -pe 's/\n/\\n/ unless /\);$/' test_dump.txt
You can do it in vim.
vim my_dump.sql
:%s/\();\)\@<!\n\(INSERT\)\@!/\\n/c
% .. do for all lines
s .. substitute
\n .. newline (Unix style; you are aware that Windows uses \r\n and classic Mac OS \r for line breaks?)
flags:
c .. Confirm each substitution (for testing first)
info on negative lookahead and lookbehind:
:help \@!
:help \@<!
sed normally operates on lines; it needs to go out of its way to replace line breaks.
Google for "sed multi-line replace" and you'll find a number of approaches.
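For completeness, one such multi-line sketch in GNU sed (assuming, as in the question, that every statement ends with ");" at end of line): keep appending the next line and escaping the newline until the pattern space ends in ");".

```shell
printf '%s\n' "INSERT INTO sometable (123, 'some text" "with" "newlines');" |
sed -e ':a' -e '/);$/!{N' -e 's/\n/\\n/' -e 'ba' -e '}'
```

This prints the three input lines joined into one, with the two newlines replaced by literal \n. The label/branch loop is a GNU sed idiom and may need adjusting on BSD sed.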
I am trying to insert a comma after the values on lines 1, 4, 8, etc. using sed:
sed '0-4 s/$/,/' in.txt > in2.txt
For some reason this isn't working so I was wondering if anyone has any solutions doing this using awk, sed, or any other methods.
The error I am getting is
sed: 1: "0-4 s/$/,/": invalid command code -
Currently my data looks like this:
City
Address
Zip Code
County
and I was trying to format it like this
City,
Address
Zip Code
County
Much appreciated.
0-4 indeed is not well-formed sed syntax. I would use awk for this, but it is easy to do it with either.
sed 's/$/,/;n;n;n' file
which substitutes one line and prints it, then prints the next three lines without substitution, then starts over from the beginning of the script; or
awk 'NR % 4 == 1 {sub(/$/,",")} {print}'
which does the substitution if the line number modulo 4 is 1, then prints unconditionally.
Sed's addressing modes are sometimes a tad disappointing; there is no standard way to calculate line offsets, relative or in reference to e.g. the end of the file. Of course, awk is more complex, but if you can only learn one or the other, definitely go for awk. (Or in this day and age, Python or Perl -- a much better investment.)
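Using the sample data from the question, both commands can be sanity-checked; the comma should land only after City:

```shell
printf '%s\n' 'City' 'Address' 'Zip Code' 'County' |
sed 's/$/,/;n;n;n'
printf '%s\n' 'City' 'Address' 'Zip Code' 'County' |
awk 'NR % 4 == 1 {sub(/$/,",")} {print}'
```

Both pipelines print "City," followed by the other three lines unchanged.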
This might work for you (GNU sed):
sed '1~4s/$/,/' file
I have a comma-separated CSV file that looks like:
customer1,customer2,,customer4,
,customer2,,customer4,
custome1,,customer3,,
I want to replace the null (empty) values between the commas with the word "unknown".
How can I do that?
sed -e 's/,,/,unknown,/g'
will work, except that it won't add unknown at the beginning or end of a line.
If you also want to add something if the first is missing (line starts with ,) or the last one is missing (line ends with ,), then you could do:
sed -e 's/^,/unknown,/' -e 's/,,/,unknown,/g' -e 's/,$/,unknown/'
I'm sure there is a more elegant way, but that works.
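One caveat worth knowing: since substitution matches cannot overlap, a run of adjacent empty fields (",,,") leaves every other one unfilled; repeating the middle substitution once more catches those. A sketch against the sample data:

```shell
printf '%s\n' 'customer1,customer2,,customer4,' ',customer2,,customer4,' 'custome1,,customer3,,' |
sed -e 's/^,/unknown,/' -e 's/,,/,unknown,/g' -e 's/,,/,unknown,/g' -e 's/,$/,unknown/'
```

Every empty field in the three sample lines comes out as unknown, including the leading and trailing ones.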
I've inherited a few dozen sql scripts that look like this:
select
column_a,
column_b,
column_c
from
my_table
To format them so they match the rest of our sql library, I'd like to change them to look like this:
select
column_a
,column_b
,column_c
from
my_table
where the commas start at the beginning of the line instead of at the end. I've taken a few passes at this in Perl, but haven't been able to get it to work just right.
Can any of you Perl gods provide some enlightenment here?
perl -pi.bak -0777 -wle's/,[^\n\S]*\n([^\n\S]*)/\n$1,/g' file1.sql file2.sql ...
The character class is any non-newline whitespace.
-0777 causes it to operate on whole files, not lines.