Let's say I have a 5,000-line RTL .sv file called main.sv, and inside there is an always_comb block, like so:
always_comb begin
//2000 lines of code here
end
I'm trying to cut and paste this big always_comb block into a separate file called sub.sv and replace it with
`include "sub.sv"
at the same place inside main.sv where the big always_comb once was, for better readability and conciseness.
The problem I'm having right now is that the VCS compile throws a syntax error on the sub.sv file I created; it simply says it does not expect "always_comb begin" on the first line. I guess (although I'm not sure) this is because VCS treats the file as a standalone .sv compilation unit and expects a module definition at the beginning.
I looked for other ways using macros online but cannot find an example for my case. What do you think is the correct or better way of doing this kind of in-place code substitution in SystemVerilog?
Rename your sub.sv file to sub.svh,
i.e. a SystemVerilog header.
Also, you don't need to compile this file on its own, since it contains no modules.
Also make sure to pass the directory containing this file to the tool (for VCS, with +incdir+<dir>) if it's in a different folder.
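For illustration, here is a minimal sketch of the resulting layout (the port and signal names are invented for the example):

// main.sv
module main (input logic a, b, output logic y);
  // ... other logic ...
  `include "sub.svh"  // textually expands to the always_comb block below
endmodule

// sub.svh -- no module header, just the code to be inserted
always_comb begin
  // 2000 lines of combinational logic
  y = a & b;
end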
I have been working on some product code to resolve an issue but am stuck on a line of code.
Can anyone help me understand what exactly does this command do?
perl -MText::CSV -lne 'BEGIN{$p = Text::CSV->new()} print join "|", $p->fields() if $p->parse($_)' /home/daily/${FULL_FILENAME} > /home/output.txt
I think it's meant to copy the file to my home location with some transformations, but I'm not sure exactly.
This is a slightly broken program that translates a comma-separated values (CSV) file to a pipe-separated values file.
The particular command-line switches are documented in perlrun. This is a "one-liner", so you can read about those to see what's going on there.
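Roughly, those switches de-sugar into an equivalent full program like this (a sketch of what the flags mean, not byte-for-byte what perl does internally):

use Text::CSV;
my $p = Text::CSV->new();                 # from the BEGIN block, runs once
while (<>) {                              # -n: loop over every input line
    chomp;                                # -l: strip the line ending on input...
    print join("|", $p->fields()), "\n"   # ...and add it back on output
        if $p->parse($_);
}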
The Text::CSV module deals with CSV files, and the program is parsing each line from the file and re-outputting it as a pipe-separated record.
But this program deals with each line as a complete record. That might be fine for you, but at some point you might end up with a literal value that has a newline in it, like a,"b\nc",d. Reading line-by-line then breaks the program, since the quotes appear to be unclosed within the first line. Not only that, it blindly concatenates the parsed fields without considering whether any of the fields should be quoted. It might be unlikely that a pipe character would be in the data, but the problem isn't its rarity; it's the consequences and costliness when it does show up.
The rewrite.pl example script in the related module Text::CSV_XS is a tool that could replace this one-liner. It reads the input properly and knows how to translate it correctly.
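A more robust version, sketched below, uses Text::CSV on the output side as well, so fields containing pipes or embedded newlines get quoted correctly (the output path is the one from the original command; the input path is taken from the command line):

use strict;
use warnings;
use Text::CSV;

# binary => 1 allows embedded newlines inside quoted fields; the writer's
# sep_char takes care of quoting any field that happens to contain a pipe
my $in  = Text::CSV->new({ binary => 1, auto_diag => 1 });
my $out = Text::CSV->new({ binary => 1, sep_char => '|', eol => "\n" });

open my $ifh, '<', $ARGV[0]           or die "open $ARGV[0]: $!";
open my $ofh, '>', '/home/output.txt' or die "open /home/output.txt: $!";

# getline() keeps reading until a record is complete, so multi-line
# quoted fields are handled correctly
while (my $row = $in->getline($ifh)) {
    $out->print($ofh, $row);
}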
Does MATLAB have the following capability: take source code that directly includes other .m files and output the source that would result from merging all included files?
For example, consider script_one.m:
% some matlab code
script_two
% more matlab code
I would like to programmatically generate the .m file that would result from copying and pasting the contents of script_two.m into script_one.m. This is difficult to do with normal scripting tools because I would essentially need a MATLAB symbol table to determine which identifiers correspond to sourceable scripts. I highly doubt that MATLAB provides such a facility, but am open to other ideas.
The "use case" is the need to modify the source (using sed) but the changes need to propagated to any dependent scripts, such as script_two.m. As I don't have a listing of the dependent scripts, they can only be identified by going through the source manually (and it needs to be done on a large number of dynamically created files).
Some details on the use case:
The main script (script_one) is called with dynamically created header files, e.g., matlab [args] -r 'some definitions; script_two; script_three; others; main_script();quit()'. This is run on machine A; for load balancing, it may need to be run instead on machines B, C, etc, which mount the file system of A at some point. Any paths in the included .m files (which are mainly used as headers) would need to be essentially chrooted to work on the new host. The simplest solution would be to preprocess the code which was generated for machine A, using sed to replace all paths for the new host (B, C, etc.). It can of course be solved by making the changes in matlab, but a sed one-liner is a more attractive solution in terms of parsimony.
In general, no, it's not possible in MATLAB. What you want is a language feature common to languages that require a compilation step before execution, but this is not MATLAB's language model, and therefore it is only doable via hacky wacky language abuse.
You could, conceivably, create a master script, which takes care of coordinating the generation of new source files, and executing them via eval():
% regenerate script_one.m on disk (the sed command is a placeholder)
[o,e] = system('<your sed command here, to generate script_one.m>');
% ... some more code
% execute the newly generated M-file; variables it defines land in this workspace
eval('script_one');
But I hope you see and agree that this turns into spaghetti really quickly.
Executing a script with changing contexts and parameters is exactly what the function language feature was invented for :)
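For instance, each generated header script could become a function that takes the host-specific base path as an argument, so nothing needs to be rewritten with sed when moving hosts (a sketch; the names are invented for the example):

% make_defs.m -- replaces a generated header script such as script_two.m
function defs = make_defs(base_dir)
    % every host-specific path is derived from the one argument
    defs.data_dir = fullfile(base_dir, 'data');
    defs.out_dir  = fullfile(base_dir, 'results');
end

The -r startup string then becomes something like matlab [args] -r "main_script(make_defs('/mnt/hostA')); quit()", with only the base path changing per machine.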
I currently have a mess of Perl code that includes something like a configuration.pm file, which exports a large number of variables that other modules use. The same module uses at least one module we wrote, call it Foo, in some of the helper methods provided by configuration.pm (they should be in a different module, but we're not ready to change that yet).
Currently it loads the module with something like this right near the top of the file:
BEGIN { push @INC, 'hard/coded/directory' }
use Module::Foo;
I'm trying to get rid of this hard-coded directory. I've already added a default configuration file for it to read data from. I moved the import down a bit and replaced the use with a require, something like this:
$script_directory = $config_data_from_file{'script_directory'};
push @INC, $script_directory;
require Module::Foo;
However, I want to add a command line argument to Main.pl to point to a different configuration file when I don't want to use the default one. My problem is that all the other modules expect configuration.pm to have loaded the configuration data and required Foo as soon as they include it, so I can't have configuration.pm wait to initialize until Main.pl is ready. The closest I can come up with is something like this:
package Configuration;

load_config_file('default/file/location');

sub load_config_file {
    my ($file) = @_;
    %config_data_from_file = read_file($file);
    $script_directory = $config_data_from_file{'script_directory'};
    push @INC, $script_directory;
    require Module::Foo;
    # load the rest
}
and have Main.pl call load_config_file again if a command line option changes the configuration file.
But this is a problem for two reasons. First, if my default script location doesn't exist, I still explode when I try to do the first import. Second, I'm requiring Foo twice, overwriting it, which could lead to issues if there are differences between the files. For that matter, adding the default script_directory to @INC should be avoided.
There are a few ways I could see to fix the problem: a way to more cleanly load a different version of a module to replace the old one, a way to make Foo delay its attempt to load until the first time it's used in the file, or a way to delay the load_config_file call until after I read the configuration file, for example. However, as a Perl newbie I don't know how to do any of them, and haven't had much luck finding out online.
I actually can do this now, either with a fragile load order that makes presumptions or by skipping ahead to a more thorough refactor of dozens of scripts to implement the long-term solution sooner (but I'm really afraid to touch that much code before I have a way to test it on my computer). However, I'm asking partially in hopes of learning more features of Perl I may find useful later: how would this be solved if I couldn't do the refactoring?
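For reference, the deferred-load idea mentioned above can be written as a runtime require inside whichever helper first needs Foo (a sketch; the helper name is invented):

# inside configuration.pm -- nothing is loaded at compile time
sub helper_that_needs_foo {
    # require runs at runtime and is a no-op after the first call,
    # so @INC only has to be correct by the time this is first used
    require Module::Foo;
    return Module::Foo::do_something(@_);
}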
If you want to give the configuration file as the first parameter, you can do something like this:
Main script:
#!perl
BEGIN {
use Configuration;
}
use Module::Foo;
... rest of script ...
Configuration.pm:
package Configuration;
load_config_file($ARGV[0] || 'default/file/location');
sub load_config_file {
    my ($file) = @_;
    %config_data_from_file = read_file($file);
    $script_directory = $config_data_from_file{'script_directory'};
    push @INC, $script_directory;
}
My solution in general was to look for my -f argument for a configuration file in my configuration.pm as soon as it is loaded, and load the configuration file right then if possible, while leaving the @ARGV variable untouched so that others could still parse it. This means we end up parsing command line args twice (actually three times), but that doesn't do any real harm. I am enforcing that the -f argument be predefined in any module that uses my configuration.pm, and I sort of require configuration.pm to be the very first module we include, but I consider that a minor expense. Anyone using our configuration.pm file for configuration arguments should want that behavior anyway.
I found AppConfig was the best module for handling this. My solution could be done without it, but AppConfig made it cleaner because it combines loading variables from a config file and from the command line. In fact, by pure accident, I ended up adding the ability to override any single variable directly from the command line (e.g., passing something like -script_dir /other/path overrides both the default and the config file).
My configuration.pm looks something like this (rewriting from memory, not exact):
use AppConfig;

my $conf = AppConfig->new({
    GLOBAL => {
        EXPAND   => AppConfig::EXPAND_VAR,
        ARGCOUNT => AppConfig::ARGCOUNT_ONE,
    },
});
$conf->define("script_dir", { DEFAULT => "/default/location" });
$conf->define("f", { ALIAS => "file|conf_file" });
# ...other defines here

# read the config file if the -f arg exists
parse_commandline_args();
$conf->file($conf->conf_file()) if defined $conf->conf_file();
# reread the command line so that arguments on it override those in the conf file
parse_commandline_args();

# at this point script_dir should be correct, so it is safe to include it
push @INC, $conf->script_dir();

sub parse_commandline_args {
    my $copy_of_args = [@ARGV];   # copy so AppConfig doesn't consume @ARGV
    $conf->args($copy_of_args);
}
My main.pl is practically untouched. I use configuration.pm near the top of the module and everything else just works. I still need to go through the scripts and change each module that is pulled in with use to be pulled in with require instead, so that configuration.pm has time to update @INC before it loads, but other than that the rest just works. Anywhere I want content from the configuration file, I can now just call $conf->variable().
The parse_commandline_args sub is important: passing @ARGV directly to $conf->args() would consume its contents, making them unavailable to later modules, like my main.pl. By copying the array first, we leave the original @ARGV untouched for later use.
I'm not sure I would recommend this from scratch; it feels wrong the way configuration.pm automagically does everything. But for keeping our ugly prototype functioning long enough to maintain it until we're funded to write the proper version (which I will not be doing in Perl), it will do.
I have an unusual requirement. I have a big config/Perl file in which I would like to change the value of one variable before my run. To avoid manually finding the variable and changing its value, I would like to write a Perl script to change it. Is it possible to do this without parsing every single line of the big Perl file, creating a temporary copy, and overwriting the old file?
Something is parsing this file at some point, right? Give it a list of things to substitute, and you can have it do the substitutions only when it needs them. This avoids a big pre-startup overhead and, if the config file is sparsely used, will result in a faster overall run.
So just make the thing reading it look for certain patterns to substitute, give it a file (or values passed in on the command line, or environment variables, or...) for the values it should use, and go from there.
If you don't have control over the parser, then there's not much to do. You could pre-process the config file once to determine EXACTLY where the substitutions need to be and write a faster processor, since it won't have to do any regular-expression parsing, just move a bunch of bytes as fast as your computer can move them into the new file with the substitutions in place.
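If the parser can be taught to expand placeholders, a minimal sketch of the first approach might look like this (the %%NAME%% marker syntax, the RUN_MODE variable, and the process_config_line hook are all invented for the example):

use strict;
use warnings;

# values to substitute into the config as it is read
my %values = ( RUN_MODE => $ENV{RUN_MODE} // 'test' );

open my $fh, '<', 'big_config.pl' or die "open big_config.pl: $!";
while (my $line = <$fh>) {
    $line =~ s/%%(\w+)%%/$values{$1}/g;   # expand placeholders on the fly
    process_config_line($line);           # hypothetical hook into your parser
}

No temporary copy or rewrite of the original file is needed, since the substitution happens in memory on each run.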
I'm using the Perl WWW::Mechanize package to fetch and process data from some websites. Usually my sequence of actions is as follows:
Fetch a webpage
$mech->get("$url");
Save the webpage contents in a variable (BTW, I'm not sure if it's the right way to save this amount of text inside a scalar, which, as far as I know, is supposed to be used for a single value)
my $list = $mech->content();
Use a subroutine that I've created to write the contents of the variable to a text file. (The writeToFile subroutine includes a few more features, like path and existing-file validations.)
writeToFile("$filename.tmp","$path",$list);
Process the text in the file created in the previous step by creating an additional file, saving the processed content there, and then deleting the initial temporary file.
What I wonder is whether it's possible to perform the processing directly inside the $list variable, before storing the text in a file. The whole process works as expected, but I don't really like the logic behind it, and it seems a bit inefficient as well, since I have to rewrite the same file multiple times.
EDIT:
Just to give a bit more information about what I'm actually after when I process the variable's contents. The data I fetch from the website in this case is actually a list of items separated by blank lines, and the first line is irrelevant to me. So what I'm doing while processing this data is two things:
Remove the empty (CRLF) lines
Remove the first line if it includes a particular text.
Ideally I want to save the processed list (no blank lines and first line removed) to a file without creating any additional files on the way. To save the file I would like to use the writeToFile sub (which I wrote), since it also validates whether such a file already exists (if a file is saved before final processing, writeToFile will always rewrite the existing file).
Hope it makes sense.
You're looking for split. The pattern depends on what you need: use (?<=\n) to split at a newline character and keep it. If keeping it doesn't matter, use \R to match all sorts of line breaks.
foreach my $line (split qr/\R/, $mech->content) {
…
}
Now the obligatory HTML-parsing-with-regex admonishment: if you get HTML source with Mechanize, parsing it line-by-line does not make much sense. You probably want to process the HTML-stripped text version of the document instead, or pass the HTML source to a parser such as Web::Query to declaratively get at the pieces you need.
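Putting that together with the processing described in the edit, a sketch of doing everything in memory before a single write might look like this (writeToFile, $filename, and $path are the asker's; the "particular text" pattern is a stand-in):

my @lines = split /\R/, $mech->content;

# drop the first line if it contains the particular text
shift @lines if @lines && $lines[0] =~ /particular text/;

# drop the blank lines, then write the result exactly once -- no temp file
my $processed = join "\n", grep { /\S/ } @lines;
writeToFile($filename, $path, $processed);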