I have an XML file into which two values need to be passed dynamically. Can anyone please assist with my query?
#!/usr/bin/ksh
sed s/a/$1/b/$2/g FILE_PATH/FILE_A_INPUT.xml > FILE_PATH/FILE_A.xml
I used the above command in a .sh script, but it errored out:
RUN_THIS.sh 1 2
sed: Function s/a/1/g/b/2/g cannot be parsed
Try this instead:
sed "s/a/$1/g;s/b/$2/g" INPUT > OUTPUT
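Putting that fix back into the original script, a minimal sketch (using the file names from the question) would be:
```
#!/usr/bin/ksh
# Two separate substitutions joined with ';'. Double quotes let the
# shell expand $1 and $2 before sed parses the expression.
sed "s/a/$1/g;s/b/$2/g" FILE_PATH/FILE_A_INPUT.xml > FILE_PATH/FILE_A.xml
```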
I added a dummy column at the beginning of my data export to a CSV file, using a pipe '|' delimiter, to get rid of control characters and some specific string values, as shown below. The data comes from a Teradata FastExport using UTF-8:
```
y^CDUMMYCOLUMN|
<86>^ADUMMYCOLUMN|
<87>^ADUMMYCOLUMN|
<94>^ADUMMYCOLUMN|
{^ADUMMYCOLUMN|
_^ADUMMYCOLUMN|
y^CDUMMYCOLUMN|
[^ADUMMYCOLUMN|
k^ADUMMYCOLUMN|
m^ADUMMYCOLUMN|
<82>^ADUMMYCOLUMN|
c^ADUMMYCOLUMN|
<8e>^ADUMMYCOLUMN|
<85>^ADUMMYCOLUMN|
```
This is completely random, and not every row has these special characters. I'm sure I'm missing something here. I'm using sed to get rid of the dummy column and the control characters:
```
$ sed -e 's/.*DUMMYCOLUMN|//;/^$/d' data.csv > data_output.csv
```
After running this statement, these random values still remain:
```
<86>
<87>
<85>
<94>
<8a>
<85>
<8e>
```
I could have written a sed statement to remove the first three characters from each row, but this prefix does not appear in every row, and the row count is 400 million.
Current output:
y^CDUMMYCOLUMN|COLUMN1|COLUMN2|COLUMN3
<86>^ADUMMYCOLUMN|6218915846|36596|12
<87>^ADUMMYCOLUMN|9822354765|35325|33
t^ADUMMYCOLUMN|6788793999|111|12
g^ADUMMYCOLUMN|6090724004|7017|12
_^ADUMMYCOLUMN|IC-21357688806502|111|12
<8e>^ADUMMYCOLUMN|9682027117|35335|33
v^ADUMMYCOLUMN|6406807681|121|12
h^ADUMMYCOLUMN|6346768510|121|12
V^ADUMMYCOLUMN|6130452510|7017|12
Desired output:
COLUMN1|COLUMN2|COLUMN3
6218915846|36596|12
9822354765|35325|33
6788793999|111|12
6090724004|7017|12
IC-21357688806502|111|12
9682027117|35335|33
6406807681|121|12
6346768510|121|12
6130452510|7017|12
Please help.
Thank you.
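One possible approach (a sketch, not a verified fix): if the leftover tokens like <86> are single non-printable bytes that the terminal renders in angle brackets, a byte-wise pass in the C locale can strip both the prefix and any stray control or high-bit bytes. Note that this would also delete any legitimate non-ASCII characters in the data:
```
# Strip everything up to DUMMYCOLUMN|, then drop non-printable bytes
# and any lines left empty. LC_ALL=C makes sed operate byte-wise.
LC_ALL=C sed -e 's/.*DUMMYCOLUMN|//' -e 's/[^[:print:]]//g' -e '/^$/d' data.csv > data_output.csv
```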
I have a requirement to replace multiple columns of a CSV file with their Base64-encoded values. The encoding should be applied to some columns of the file, but the first line must be kept unaffected, as it contains the header. I have tried it for one column as below, but since I told it to proceed only after skipping the first line of the file, the header is missing:
gawk 'BEGIN { FS="|"; OFS="|" } NR >=2 { cmd="echo "$4" | base64 -w 0";cmd | getline x;close(cmd); print $1,$2,$3,x}' awktest
Output:
12|A|B|Qw==
13|C|D|RQ==
36|Z|V|VQ==
Questions: it is not showing the header in the output. What should I do to produce the header in the output? Also, can I use a loop here to replace multiple columns?
input:
10|A|B|C|5|T|R
12|A|B|C|6|eee|ff
13|C|D|E|9|dr|xrdd
36|Z|V|U|7|xc|xd
Required output:
10|A|B|C|5|T|R
12|A|B|encodedvalue|6|encodedvalue|ff
13|C|D|encodedvalue|9|encodedvalue|xrdd
36|Z|V|encodedvalue|7|encodedvalue|xd
Is this possible? I have researched a lot but could not find a proper explanation. I am new to shell. Kindly help. Many thanks!
It looks like you can just sequence conditionals. This may not be the best way of solving the header issue, but it's intuitive.
BEGIN { FS="|"; OFS="|" } NR ==1 {print} NR >=2 { cmd="echo "$4" | base64 -w 0";cmd | getline x;close(cmd); print $1,$2,$3,x}
As for using a loop to affect multiple columns: awk is technically its own language, and it does have C-style for loops of its own. But it's not clear you need a loop. If there's only a reasonable number of fields that need modifying, you can just parameterize the existing command by the field index and then pipe through however many instances of it. That won't be as performant as doing it all in a single pass of awk, but that's probably OK. A single-pass version with a loop might look like the sketch below.
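A minimal single-pass sketch, assuming the columns to encode are 4 and 6 (as in the required output) and that, like the original command, field values contain no shell metacharacters:
```
gawk 'BEGIN { FS = OFS = "|"; ncols = split("4 6", cols, " ") }
NR == 1 { print; next }                 # pass the header through untouched
{
    for (i = 1; i <= ncols; i++) {      # loop over the columns to encode
        c = cols[i]
        cmd = "printf \"%s\" " $c " | base64 -w 0"
        cmd | getline enc; close(cmd)
        $c = enc                        # overwrite the field in place
    }
    print                               # awk rebuilds the line with OFS
}' awktest
```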
I wrote a Perl script to convert from TEX format to JSON format.
Calling in the batch file:
perl -w C:\test\support.pl TestingSample.tex
This is working fine now.
The Perl script can receive its input from another program (which might be on any platform/technology) in two ways: either as a file (a *.tex file) or as the full content of a *.tex file, one or the other.
How can I receive the full content as the input to the Perl script?
Now my Perl script is:
my $texfile = $ARGV[0]; my $texcnt = "";
readFileinString($texfile, \$texcnt);
I am trying to update:
perl -w C:/test/support.pl --input "$texcnt" # content is input
I am receiving error message:
The command line is too long.
Could someone please advise?
First of all, regarding the error you're getting:
Perl (or your shell) is complaining that your input argument is too long.
Passing entire files as arguments to scripts is generally a bad idea anyway; for example, quotation-mark escaping etc. might not be handled, leaving a wide-open vulnerability in your entire system!
So the solution is to modify your script so that it can take the file as an argument (if that isn't already the case). If you really need an entire file's content passed along, I'd advise you to create a temporary file in /tmp/ (on Linux) or in your %TEMP% directory (on Windows), write the content into that file, and then give your support.pl script the new temp file as an argument.
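Another way to avoid the long command line altogether (a sketch of an alternative, not part of the advice above) is to have support.pl read from STDIN when no file argument is given, so the caller can pipe the content in:
```
#!/usr/bin/perl
use strict;
use warnings;

# Slurp the whole TEX input either from a named file or from STDIN.
my $texcnt = do {
    local $/;    # slurp mode: read everything at once
    if (@ARGV) {
        my $texfile = shift @ARGV;
        open my $fh, '<', $texfile or die "Cannot open $texfile: $!";
        <$fh>;
    } else {
        <STDIN>;    # e.g.: other_program | perl support.pl
    }
};
```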
I am writing a Perl script which has some if/else conditions. There is another .txt file in which I want to place the conditional statements that live in the if/else (in the Perl file), after a certain string. I did some searching for this, but most of the programs I found are based on merging two files. In my case, one file is the Perl file itself, in which the conditional statements exist, and the other is a text file to which I want to append those conditional statements after a certain string. My files look like this:
File 1
if ($n == 1 && $m == 1) {
    print ".include xyz.txt";
}
elsif ($n == 1 && $m == 0) {
    print ".include abc.txt";
}
...
File 2
lines....
lines....
*matching string
Here I want to append #.include xyz.txt
lines....
lines....
Can both files run simultaneously so that my conditional statements can be added to the other file? Or do I first have to capture the output from File 1 in a separate output file and then append it to the second file? Please help me out. Thanks.
Using perl from the command line:
perl -MFcntl=:seek -pe 'seek(ARGV,0,SEEK_END) if /match/ and !$c++' fil1 fil2
It skips to the fil2 file when it finds the string match within fil1, and !$c++ ensures that the skip occurs only once.
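If instead the goal is just to insert a fixed line after the matching string while keeping the rest of File 2 intact, a one-liner sketch (assuming the literal marker and text from the question) would be:
perl -pe '$_ .= "#.include xyz.txt\n" if /matching string/' fil2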
I need help to filter a part of text from my original logs:
<variable>
<status type="String"><![CDATA[-1]]></status>
<errorCode type="String"><![CDATA[[bpm]]]></errorCode>
<mensagens type="MensagemSistema[]">
<item>
<msg_err type="String"><![CDATA[ERROR1-This is error: - THIS TEXT IS VARIABLE.]]</msg_err>
<msg_err_stack type="String"><![CDATA[stack_trace]]></msg_err_stack>
</item>
</mensagens>
</variable>
The part that I want is:
<msg_err type="String"><![CDATA[ERROR1-This is error: - THIS TEXT IS VARIABLE.]]>
... and this text is variable.
I tried to perform this with sed, but I couldn't find an example of removing the text outside two strings. One more thing: this is on Unix.
Thanks in advance,
Tiago
You could try the sed command below:
$ echo '<msg_err type="String"><![CDATA[ERROR1-This is error 1.]]></msg_err>' | sed 's/.*\[\([^][]*\).*/\1/g'
ERROR1-This is error 1.
The greedy .*\[ skips to the last [ on the line, \([^][]*\) captures everything up to the next bracket, and the trailing .* drops the rest, so only the captured text survives.
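Applied to the full log from the question, you could first select the msg_err line and then perform the same substitution; a sketch:
sed -n '/<msg_err /s/.*\[\([^][]*\).*/\1/p' error.xml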
This looks like a job for an XML parser. The Perl module XML::Simple is capable of retrieving the data you want:
perl -MXML::Simple -e '$xml = XMLin(\*STDIN); print $xml->{mensagens}{item}{msg_err}{content};' < error.xml
Output:
ERROR1-This is error: - THIS TEXT IS VARIABLE.
Note that I added a > to close the CDATA in the msg_err tag, as I assumed this to be a typo.
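As an aside, XML::Simple's own documentation discourages its use in new code; the same extraction can be sketched with XML::LibXML and XPath, assuming that module is installed and the same CDATA typo is fixed:
perl -MXML::LibXML -e 'print XML::LibXML->load_xml(IO => \*STDIN)->findvalue("//msg_err");' < error.xml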