Perl: how to include a pre-defined variable list

Is it possible to have a Perl file that defines a list of variables with certain values, and then have the main Perl script include that file?
Or is there any other Perl approach that would accomplish this?
Thanks!

Store those values in a config file and retrieve them by parsing the config in your main Perl script.
Refer to the following link for an example.
https://perlmaven.com/reading-configuration-files-in-perl
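For illustration, a minimal sketch of this approach (file names, variable names, and values below are placeholders, not taken from the linked article): keep the definitions in a separate file and pull it in with require (or do).

# config.pl -- holds the variable definitions
our $db_host = 'localhost';
our $db_port = 5432;
1;   # a required file must end with a true value

# main.pl -- includes the definitions from config.pl
use strict;
use warnings;

our ($db_host, $db_port);   # declare the variables defined in config.pl
require './config.pl';

print "connecting to $db_host:$db_port\n";

A dedicated config-file format (as in the linked article) is preferable once the list grows, since it keeps the data out of executable code.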

Related

Message passing between two Perl files

I have 2 Perl files which cannot be merged and have to be run separately. My first file does certain initialization of parameters which are used by my second file, which performs some testing. Now I want to use the parameters initialized in the first file in the second file, so how can I do that?
I am writing a Perl script for software testing. I need to write two files: one is an initialization file which does all the initialization, and the second contains the test sequence to execute, which uses the initialized parameters. I need to run both files separately; execution-wise, my first file runs first and then my second file runs.
I am thinking of using an XML file, where the first file logs the parameters to the file and the second file reads the parameters from it. Is there any better way to do this?
If your initialization produces only plain key-value pairs, then any way of serialising data will suffice. Otherwise XML is probably the worst option for your case. You might need to put in a lot of effort to get the same data structure in your second script. This happens because, by default, XML modules do not know what should be an attribute, a child node, or an array of nodes. For example, passing a one-element array of hashes to XML from the first script might turn into just a single hash in your second script. The results depend heavily on the XML modules, the options you pass to them, and the data itself.
JSON shouldn't have such issues. It might involve unnecessary type conversions, but you shouldn't really notice them.
Storable guarantees that you get the same data in your second script.
You might find Data::Dumper to be an easier solution. But it has some security issues, since you need to execute its output in your second script.
None of the above are meant to be used with data containing self-references, or with anything other than scalars, arrayrefs, and hashrefs.
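For example, a minimal Storable-based sketch of the two scripts might look like this (the file name params.stor and the parameter names are placeholders):

# First script: initialize the parameters and save them
use strict;
use warnings;
use Storable qw(nstore);

my %params = (db_user => 'tester', timeout => 30);
nstore(\%params, 'params.stor');

# Second script: load the parameters and run the tests
use strict;
use warnings;
use Storable qw(retrieve);

my $params = retrieve('params.stor');
print "timeout is $params->{timeout}\n";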

Can I make a module from a bunch of single-function scripts?

We've accumulated a bunch of scripts, each of which looks and feels like a cmdlet, i.e. it has a set of declared params and then it immediately calls a Main function which does the work, calling private sub-functions within.
An example is Remove-ContentLine.ps1 which just spits out the contents of a file or piped input except for lines matching some pattern.
So they're like little "function-scripts".
Is there any way I can aggregate these scripts into a module while also keeping them exactly as they are in files?
Edit
If your hunch is that it's easier to just copy-paste and refactor them into a .psm1, then just say so ;)
You ask:
Is there any way I can aggregate these scripts into a module while also keeping them exactly as they are in files?
But I am certain that is not what you really want. If you kept them exactly as they are, all of your code would immediately execute when you load the module! Rather, I think what you want is that each of your scripts should be contained within a function; that group of functions is then loaded when you import the module; and you can then execute any of your functions on demand.
The process is very straightforward, and I have written an extensive article on just how to do that (Further Down the Rabbit Hole: PowerShell Modules and Encapsulation) but I will summarize here:
(1) Edit each file to wrap the entire contents in a function and conclude with exporting the function. I would suggest naming the function after the file name. Thus, Remove-ContentLine.ps1 should now look like this:
function Remove-ContentLine()
{
# original content of Remove-ContentLine.ps1 here
}
Export-ModuleMember Remove-ContentLine
(2) Decide on a name for your module and create a directory of that name. Let's call it MyModule. Within the MyModule directory, create a subdirectory to place all your .ps1 files; let's call that ScriptCmdlets.
(3) Create a module file MyModule.psm1 within MyModule whose contents will be exactly this:
# Dot-source every .ps1 under ScriptCmdlets, skipping any *.Tests.* files
Resolve-Path $PSScriptRoot\ScriptCmdlets\*.ps1 |
    ? { -not ($_.ProviderPath.Contains(".Tests.")) } |
    % { . $_.ProviderPath }
Yes, every module (.psm1) file I write contains that identical code!
(4) Create a module manifest MyModule.psd1 within MyModule using the New-ModuleManifest cmdlet.
Then to use your module, just use Import-Module. But I urge you to review my article for more details to gain a better understanding of the process.
I doubt you can if the scripts already execute something (a "main"). If they just expose a function, like Remove-ContentLine for Remove-ContentLine.ps1, you could dot-source all the scripts in a single script to aggregate them, or use the ScriptsToProcess = @() section when working with a module manifest.
I think it would be best to refactor the functions from within each .ps1 into a proper module. It should be essentially just copy/pasting the scripts into a single .psm1 file and creating a .psd1 for it. Be sure to check for and properly handle anything that is set in the script or global scopes, and make sure there are no naming conflicts between functions.
If you have Sapien PowerShell Studio, there is a 'New Module from Functions' option in the File menu which would help automate the bulk of this for you.

how to create a Doxygen link to the same file

I would like to write a Doxygen comment that names the file in which the comment occurs. Rather than write the filename explicitly, I would like Doxygen to supply me with it. Thus, if I change the name of the file, or move some of the content into a different file, I don't need to change hard-coded instances of the name.
For a concrete example, let's say I'm adding comments to functions in array.hpp, and I want the comment for certain functions to say "This function should only be used within array.hpp." I want to be able to write
/**
* This function should only be used within #thisfile.
*/
where #thisfile is a Doxygen expression that translates into array.hpp within the file array.hpp.
I've looked at the Doxygen documentation, including "Automatic link generation/Links to files" and the entire "Special Commands" section, but I haven't found what I'm looking for. Does such functionality exist?
Note that essentially the same question was asked on the Doxygen mailing list a few weeks ago. It has not received any replies.
General
As far as I know such functionality does not exist out-of-the-box. But you can add it by configuring an INPUT_FILTER in your Doxyfile. The path to the file is passed as an argument to the filter by doxygen. This can be used by the filter to replace your keyword (for example #thisfile) with the path to the file.
Below I give an example of how to implement this with bash. A solution for other shells or Windows should be quite similar.
Example for bash
Write a short bash script infiltrate_filename.sh:
#!/bin/bash
# Replace #thisfile with the file's path relative to the working directory.
pathToScript=`pwd`"/"
sed -e "s:#thisfile:${1/$pathToScript/}:g" "$1"
This script strips the working directory from the path to the file. The resulting string is used to replace the keyword of your choice (here: #thisfile).
Make your script executable: chmod +x infiltrate_filename.sh
Set the INPUT_FILTER in your Doxyfile to INPUT_FILTER = ./infiltrate_filename.sh
That's it! 🎉 Now you can use #thisfile in your documentation blocks and it will be replaced by the path to the file. As the paths are relative to Doxygen's working directory they will automatically be linked to the file.
Notes
This solution assumes that the filter script is located in the working directory of doxygen (for example ~/my_project) and that the INPUT files are in subdirectories of the working directory (for example ~/my_project/src/foo/bar).
I have tested this on a minimal working example. I am not a bash or sed expert, so this solution may be improvable.

Multiple functions in one fish file

If I put a file called myfunc.fish in a directory called functions, and it includes a single function called myfunc, then fish will locate it if I type myfunc as a command.
What about if I want to have a bunch of short functions in one file? How do I "include" them?
source is how you include files.
Say you have a collection of functions thing1, thing2, etc. in a single file ~/mystuff/things.fish that you want to make available. Two good approaches are:
You can use the autoloading machinery: make the files functions/thing1.fish, functions/thing2.fish, etc. each with the same contents:
source ~/mystuff/things.fish
But a simpler approach is to just put that source line into your ~/.config/fish/config.fish file. Then it will be executed for each session.

How to use environment variables exported by child shell script in parent Perl script?

From my Perl script, I am calling a child shell script.
There are a few DB environment variables which are exported by the child shell script.
But when I try to use those in the Perl script, they are not set. Here is my code:
my $commandLine = ". SetConnection.sh -n $TaskName";
system $commandLine;
my $dbConnectString = "$ENV{'DB_USER'}/$ENV{'DB_PASSWORD'}";
print "$dbConnectString";
Please suggest.
TL;DR
Exported variables are inherited by child processes from the parent. You can't modify the environment of the parent process from the child directly, but you can certainly exchange data using files, pipes, or other forms of interprocess communication.
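If writing a file feels too heavyweight, a pipe-style sketch along these lines is also possible (the -n mytask argument and variable names are placeholders; it assumes SetConnection.sh exports DB_USER and DB_PASSWORD):

use strict;
use warnings;

# Source the child script and echo the variables in the *same* shell,
# then capture that shell's output in the parent.
my $cmd = q(bash -c '. ./SetConnection.sh -n mytask >/dev/null 2>&1; echo "$DB_USER:$DB_PASSWORD"');
chomp(my $out = qx($cmd));
my ($user, $pass) = split /:/, $out, 2;
print "user=$user\n";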
Source a Perl File Holding Variables
The easiest solution is to have the child process write a file that can then be sourced by the parent. For example, security issues aside, SetConnection.sh could write to a file like /tmp/variables.pl, which you could then source as a Perl script inside the parent script.
For example, consider the following file, presumably written by the child process:
# /tmp/foo.pl
$foo='bar';
Now you require the file in your parent script:
$ perl -e 'require "/tmp/foo.pl"; print "$foo\n"'
bar
This isn't really very secure, but it does work. Think of it as similar to eval, with race conditions and file-access issues added on top. Nevertheless, it's a very pragmatic solution.
Use a Real Configuration File
Alternatively, you could use a format like JSON, YAML, or CSV (created any way you like, including by your child process) to create a configuration file which you could then parse for values. This is generally the best approach, but your use case may vary.
The benefit of this approach is that you can validate and sanitize values, and don't need to worry about the security or uniqueness of temp files. It's really the right way to do these things, but will require much more work on your part.
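A minimal JSON sketch of this approach (the file name /tmp/db_config.json and its keys are placeholders, assuming the child script writes them):

use strict;
use warnings;
use JSON::PP;   # in core since Perl 5.14

# The child script writes something like:
#   { "DB_USER": "tester", "DB_PASSWORD": "secret" }
open my $fh, '<', '/tmp/db_config.json' or die "Cannot open config: $!";
my $config = JSON::PP->new->decode(do { local $/; <$fh> });
close $fh;

my $dbConnectString = "$config->{DB_USER}/$config->{DB_PASSWORD}";
print "$dbConnectString\n";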