Tcl Script OR Perl?

I wish to replace the following Verilog code using a script.
assign x0 = in0 + in7;
I want to search for the "+" sign above and replace the whole line with the line below:
KSA_32 U1(.A(in0), .B(in7), .Sum(x0));
Any suggestions or a sample script for this?

If your Verilog file is able to fit comfortably in memory, you can simply do:
# Read in the file
set f [open $verilogfile r]
set contents [read $f]
close $f
# Perform the transform across the whole contents
regsub -all {assign\s+(\w+)\s*=\s*(\w+)\s*\+\s*(\w+);} $contents \
{KSA_32 U1(.A(\2), .B(\3), .Sum(\1));} contents
# Write the results out to a new file (different filename so you can check the results by hand)
set f [open $verilogfile.new w]
puts -nonewline $f $contents
close $f
The first and third blocks are standard Tcl patterns for file manipulation. The second is a standard regular-expression substitution, which I made by taking what you asked for and guessing which parts are templates. Note that the literal + needs to be escaped, and spaces are best matched as \s+ or \s*.

Related

Search for a match, then take the number after the match and add 4 to it; is this possible in Perl?

I am a beginner in Perl and I need to modify a txt file, keeping all the existing data and only adding 4 to every number associated with a specific tag (< COMPRESSED-SIZE >). The file has many lines and tags and looks like the excerpt below; I need to find all the < COMPRESSED-SIZE > tags and add 4 to the number next to each one:
< SOURCE-START-ADDRESS >01< /SOURCE-START-ADDRESS >
< COMPRESSED-SIZE >132219< /COMPRESSED-SIZE >
< UNCOMPRESSED-SIZE >229376< /UNCOMPRESSED-SIZE >
So I guess I need to do something like this: search for the keyword (the match), store the number 132219 in a variable, add 4 to it, and replace 132219 with the result 132223. The rest of the file must remain unchanged; only the numbers associated with this tag must change. I cannot search for the number instead of the tag, because the number can change while the tag always stays the same. I also need to find every tag with this name and add 4 to the number next to each. I already have code for finding something after a keyword, because I needed to search for another tag as well, but that script does something else: it adds a number in front of a keyword. I think I could reuse this code for what I need, but I do not know how to do the calculation and keep the rest of the file intact, or whether it is possible in Perl.
while (my $row = <$inputFileHandler>)
{
    if (index($row, $Data_Pattern) != -1) {
        my $extract = substr($row, index($row, $Data_Pattern) + length($Data_Pattern), length($row));
        my $counter_insert = sprintf "%08d", $counter;
        my $spaces = " " x index($row, $Data_Pattern);
        $data_to_send = "what i need to add" . $extract;
        print {$outs} $spaces . $Data_Pattern . $data_to_send;
        $counter = $counter + 1;
    }
    else {
        print {$outs} $row;
        next;
    }
}
Maybe you could help me with a block of code for my needs, $Data_Pattern is the match. Thank you very much!
This is a classic one-liner Perl task. Basically you would do something like
$ perl -i.bak -pe's/^< COMPRESSED-SIZE >\K(\d+)/$1 + 4/e' yourfile.txt
This will, in essence, replace your file with a new, edited version. That can be dangerous, especially if you are a Perl newbie. The -i switch is used here with the .bak extension, which saves a backup in yourfile.txt.bak. This does not make the operation safe, however, as running the command twice will overwrite the backup.
It is advisable to make a separate backup of the target file before using this command.
-i.bak edits the file "in place": the file is overwritten, and a backup of the original is created with the extension .bak.
-p wraps the code in a loop that reads each line of the named file, runs the code on it, and prints the line back out.
s/// the substitution operator, applied to every line of the file.
^ anchors the match at the beginning of the line.
\K keeps (does not replace) everything matched to its left.
(\d+) captures () one or more digits \d+ and stores them in $1.
/e treats the right-hand side of the substitution as a Perl expression and uses its result as the replacement string. In this case it adds 4 to your number and returns the sum.
The long version of this command is
while (<>) {
    s/^< COMPRESSED-SIZE >\K(\d+)/$1 + 4/e;
    print;
}
which can be placed in a file and run with the -i switch. (The -p switch supplies both the loop and the final print.)

Tcl command for removing a word multiple times

I want to convert a .v file to .vhd, so there are many lines of assign statements and I want to delete all the assign words. Is there a Tcl command for that? I am using Vivado 2018.2.
For example, I have this input text:
assign mem[0]=4'b0000;
assign mem[1]=4'b0001;
.................
assign mem[511]=4'b1010;
I want to delete all the assign words in one pass, as part of converting the .v file to VHDL.
In this case, we want to delete a word specifically, and not just the character sequence assign (who knows where else that might come up?). The right tool is the regsub command, because with it we can require that word boundaries be matched:
set content [regsub -all {\yassign\y} $content ""]
The \y in there matches a word boundary. Just what we need! (Replacing with the empty string is an obvious way to delete things.)
To turn this into a full conversion, we need to add in the code to read in the file and write it back out again.
set filename_in "something.V"
set filename_out "something.vhdl"
# Standard pattern for reading a whole file
set f [open $filename_in]
set content [read $f]
close $f
set content [regsub -all {\yassign\y} $content ""]
# Standard pattern for writing a whole file
set f [open $filename_out "w"]
puts -nonewline $f $content
close $f
The -nonewline is because when we read the content in, we kept all the line separators, and our transform hasn't touched them. We don't want to add any bonus ones at the end; that can get annoying if you're doing many transformations on a file.
If the file is very large, it can help a lot to process things a bit at a time. That's easy in this case because we are not trying to do a multi-line match.
set filename_in "something.V"
set filename_out "something.vhdl"
set f_in [open $filename_in]
set f_out [open $filename_out "w"]
while {[gets $f_in line] >= 0} {
set line [regsub -all {\yassign\y} $line ""]
puts $f_out $line
}
close $f_in
close $f_out
Note that this pattern can't write back to the same file.

Trouble with backslash in Tcl exec

I am writing a data import script in Tcl (from SQL Server to Postgres) and have to call command line unix tr to scrub out null characters in a data file. I write the data to a temp file and then use exec to process the file through tr.
The tr call I would like Tcl to generate looks like this on the command line:
tr -d '\000' < blah >blah.notnull
The Tcl code I use to make the above is this, with $STATE(TMP) holding the temp file:
set ret [catch {exec tr -d '\\000' < $STATE(TMP) > $STATE(TMP).clean}]
However, sometimes this doesn't work and the PostgreSQL COPY fails because of x00 characters. If I run the command line version on the file, then COPY succeeds.
Could someone help me out understanding the exec call and quoting and backslashes? I am a bit stumped.
The error message, a reformatted version of the PG error:
Problem with COPY on blahblah: PGRES_FATAL_ERROR, ERROR: invalid byte sequence for encoding "UTF8": 0x00
Annoyingly, the Tcl exec code often works, but not always.
(We are hand-rolling an import system using Tcl, Linux, BCP, SQL Server, etc., because all the off-the-shelf tools fail at the size of our data.)
Thanks to all who read or answer!
The thing is that Tcl doesn't ascribe any special meaning at all to single quotes. The equivalent in Tcl is braces, so use {\000} instead of '\000'. With what you wrote, you were sending three characters (a ', a NUL, and another ') in as that argument, and that causes all sorts of trouble since literal NUL characters don't go well as C strings.
Thus, you should be doing:
exec tr -d {\000} < blah >blah.notnull
or:
set ret [catch {
exec tr -d {\000} < $STATE(TMP) > $STATE(TMP).clean
}]
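As a sanity check from a plain shell, you can confirm what the tr invocation is supposed to do before wiring it into Tcl (a sketch; the file names blah and blah.notnull are just placeholders, as in the question):

```shell
# Create a small file containing embedded NUL bytes, strip them with tr,
# and look at the result. In the shell, '\000' (single quotes) is fine;
# the Tcl equivalent of those quotes is braces: {\000}.
printf 'a\000b\000c' > blah
tr -d '\000' < blah > blah.notnull
cat blah.notnull   # abc, with the NULs gone
```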
Tcl can also do that operation directly.
# Read binary data
set f [open $STATE(TMP) "rb"]
set data [read $f]
close $f
# Write transformed binary data
set f [open $STATE(TMP).clean "wb"]
puts -nonewline $f [string map [list \u0000 ""] $data]
close $f
[EDIT]: When the amount of data being transformed is large, it's better to do a bit at a time.
set fIn [open $STATE(TMP) "rb"]
set fOut [open $STATE(TMP).clean "wb"]
while true {
    # 128kB chunk size; a bit arbitrary, but big enough to be OK
    set data [read $fIn 131072]
    puts -nonewline $fOut [string map [list \u0000 ""] $data]
    # Check for EOF only after writing, so a final partial chunk
    # (where read returns data and sets EOF together) is not dropped
    if {[eof $fIn]} break
}
close $fIn
close $fOut
You could also use a Tcl 8.6 channel transform to do the work and then fcopy to move things over, but there wouldn't be much difference in performance.

Sed command inside TCL script

Help me understand the sed syntax. I removed the single quotes, but the code still does not work.
set id [open file.txt]
# send the request, get a lot of data
set tok [::http::geturl "http://example.com" -channel $id]
# cut out the necessary data between two words
exec sed s/{"data1":\(.*\)/data2\1/ $id
close $id
set ir [open file.txt]
set phone [read $ir]
close $ir
puts $phone
The problem is that I get data from a query of the following kind
{"id":3876,"form":"index","time":21,"data":"2529423","service":"Atere","response":"WAIT"}
The brace is an element of the language's syntax, and I need to cut out exactly the value between the word and the brace. How do I implement this in a script?
Your code is rather confused: (a) you are passing a file handle to the sed command, which is not going to work; and (b) you are passing an input channel to http rather than an output channel (try opening the file for writing).
As for the underlying problem: if you are receiving basic JSON data back as shown, you can either
a) use a JSON parser, such as tcllib's json module, or
b) convert the data to a form that Tcl can parse as a dictionary:
# Assuming the JSON data is in the $data variable, and there's no
# other data present. This also assumes the data is very basic
# there are no embedded commas. Many assumptions means this
# code is likely to break in the future. A JSON parser would
# be a better choice.
set data "\{"
append data {"id":3876,"form":"index","time":21,"data":"2529423","service":"Atere","response":"WAIT"}
append data "\}"
regsub -all {[{}:",]} $data { } data
set mydatadict $data
puts [dict get $mydatadict id]
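If all you actually want is the one value between "data":" and the closing quote, a plain regular-expression extraction also works, whether in Tcl's regexp or, as below, in the sed the question started from (a sketch; it assumes the value contains no escaped quotes, and pipes the sample response in rather than editing a file):

```shell
# Pull out just the "data" value from the response shown in the question.
# Prints: 2529423
printf '%s\n' '{"id":3876,"form":"index","time":21,"data":"2529423","service":"Atere","response":"WAIT"}' |
sed -E 's/.*"data":"([^"]*)".*/\1/'
```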
Edit:
For http processing:
set tok [::http::geturl "http://example.com"]
set data [::http::data $tok]
::http::cleanup $tok

MATLAB: How do you insert a line of text at the beginning of a file?

I have a file full of ascii data. How would I append a string to the first line of the file? I cannot find that sort of functionality using fopen (it seems to only append at the end and nothing else.)
The following is a pure MATLAB solution:
% write first line
dlmwrite('output.txt', 'string 1st line', 'delimiter', '')
% append rest of file
dlmwrite('output.txt', fileread('input.txt'), '-append', 'delimiter', '')
% overwrite on original file
movefile('output.txt', 'input.txt')
Option 1:
I would suggest calling some system commands from within MATLAB. One possibility on Windows is to write your new line of text to its own file and then use the DOS for command to concatenate the two files. Here's what the call would look like in MATLAB:
!for %f in ("file1.txt", "file2.txt") do type "%f" >> "new.txt"
I used the ! (bang) operator to invoke the command from within MATLAB. The command above appends the contents of "file1.txt" and then "file2.txt" to the file "new.txt". Keep in mind that you will probably have to end the first file with a newline character for things to append correctly.
Another alternative to the above command would be:
!for %f in ("file2.txt") do type "%f" >> "file1.txt"
which appends the contents of "file2.txt" to "file1.txt", resulting in "file1.txt" containing the concatenated text instead of creating a new file.
If you have your file names in strings, you can create the command as a string and use the SYSTEM command instead of the ! operator. For example:
a = 'file1.txt';
b = 'file2.txt';
system(['for %f in ("' b '") do type "%f" >> "' a '"']);
Option 2:
One MATLAB only solution, in addition to Amro's, is:
dlmwrite('file.txt',['first line' 13 10 fileread('file.txt')],'delimiter','');
This uses FILEREAD to read the text file contents into a string, concatenates the new line you want to add (along with the ASCII codes for a carriage return and a line feed/new line), then overwrites the original file using DLMWRITE.
I get the feeling Option #1 might perform faster than this pure MATLAB solution for huge text files, but I don't know that for sure. ;)
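For completeness, on Unix-like systems the same prepend-by-concatenation idea that the DOS for/type trick uses can be done with cat (a sketch; header.txt and body.txt are placeholder names, and the sample files are created inline just to make the example self-contained):

```shell
# Prepend the contents of header.txt to body.txt, the cat-and-rename way.
printf 'first line\n' > header.txt
printf 'old contents\n' > body.txt
cat header.txt body.txt > combined.txt && mv combined.txt body.txt
cat body.txt   # first line, then old contents
```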
How about using the frewind(fid) function to take the pointer to the beginning of the file?
I had a similar requirement and tried frewind() followed by the necessary fprintf() statement.
But a warning: it will overwrite whatever is on the first line. Since in my case I was the one writing the file, I put dummy data at the start of the file and then, at the end, let it be overwritten by the operations above.
By the way, I am still facing one problem with this solution: depending on the lengths of the dummy data and the actual data, the program either leaves part of the dummy data on the same line or pushes my new data to the second line.
Any tips in this regard are highly appreciated.