PowerShell script to execute MSI with parameters [closed] - powershell

Closed. This question needs to be more focused. It is not currently accepting answers. Closed 5 years ago.
How do I write a PowerShell script for the attached screenshots? I am trying to run and configure an installer via PowerShell.

There is no guarantee that this will be possible, as a Windows Installer package can be (incorrectly) authored in a way that rejects it. But in many cases, it suffices to find the feature names, property names, and property values involved in the settings you wish to control.
Note that to approach 100% correctness, you will have to consider all properties that get set during the normal UI sequence, or consult documentation provided by the author of the package.
Feature Selection
You can specify which features will be installed by listing their names in the special properties: ADDLOCAL, REMOVE, ADDSOURCE, etc. To find their names, consult documentation, or examine the Feature table of the .msi file.
Note that doing this can evade attempts the author might have made to ensure proper selection of features (for some meaning of proper), and it will also set the property Preselected, which offers the author a way to tweak the behavior in this situation.
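If you have Python at hand, its msilib module (Windows-only; removed from the standard library in Python 3.13) can dump the Feature table without an MSI editor. Here is a minimal sketch, assuming the package is saved as the.msi; the Control and CheckBox tables discussed below can be queried the same way:
import msilib  # Windows-only standard-library module (removed in Python 3.13)

db = msilib.OpenDatabase("the.msi", msilib.MSIDBOPEN_READONLY)
view = db.OpenView("SELECT `Feature`, `Title` FROM `Feature`")
view.Execute(None)
while True:
    try:
        record = view.Fetch()
    except msilib.MSIError:  # some Python versions raise at the end of the result set
        break
    if record is None:       # others return None instead
        break
    print(record.GetString(1), "-", record.GetString(2))
view.Close()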
Configuration
Windows Installer dialog boxes are all defined in the Dialog table and Control table, but the results can also be augmented by the ControlEvent table and custom actions. In the common case you can look at the Control table to identify the Property name associated with the values you wish to set. For example, an Edit control with Property value TESTPROP will set the property TESTPROP to whatever you type into its associated edit box. Check boxes cross reference the CheckBox table for the value they will set or clear (often simply 1), etc. Radio button groups cross reference the RadioButton table for their values.
Sometimes intermediate values may be set by the controls, and their eventual target properties are then adjusted by a custom action invoked by a Control Event or a sequenced Custom Action.
Alternatives
Instead of poking into the inner bowels of the installation package you are trying to automate, documentation from its author is often much safer. When that's missing, you can find a tool that helps you generate a response transform (much easier to use, but fallible), a tool that helps you explore the tables I described above (harder, but gives as complete an understanding as you can get), or you can create and examine a verbose log file created while installing the package with the settings you desire (middle of the road).
The log file will include all the properties and feature names you need to set on a command line, but probably will include several you do not need to set. Figuring out which is which can still be difficult.
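For example, a verbose log can be produced by installing interactively with logging switched on (the log file name is your choice):
msiexec /i the.msi /l*v install.log
Afterwards, search install.log for the PROPERTY CHANGE entries to see which properties the UI sequence set.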
End Result
In the end, you will end up with a command line including some of the following parts. The names and values I do not know are placeholders (the.msi, desired,features,here, SOMEPROP1, and so on); substitute the correct values. I've also added parentheticals to explain what each part does; those should be omitted. I've broken the command line across multiple screen lines for clarity.
msiexec /i the.msi /qn (install the.msi silently)
ADDLOCAL="desired,features,here" (select these features)
REMOVE="undesired,features,here" (optional; explicitly unselect these features)
SOMEPROP1="some value1" (specify the value of one or more properties)
SOMEPROP2="some value2"
SOMEPROP3="some value3" (specify more properties as necessary)
Note that property names are almost always going to be upper case and may include underscores. Feature names can be mixed case but will not include spaces. Property values can be anything, and must be quoted if they include spaces or other special characters.
If you plan to use this across multiple systems, note that differences in the target system can affect the correct command lines to use. This is relatively rare; when it happens, it's likely to be due to differing versions of Windows or differing availability of optional dependencies.
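If you specifically want this wrapped in a PowerShell script rather than typed as a raw command line, one common pattern is shown below; it is only a sketch, and the feature and property names are the same placeholders used above:
$msiArgs = @(
    '/i', 'the.msi', '/qn',
    'ADDLOCAL="desired,features,here"',
    'SOMEPROP1="some value1"'
)
$process = Start-Process -FilePath 'msiexec.exe' -ArgumentList $msiArgs -Wait -PassThru
exit $process.ExitCode  # 0 = success; 3010 = success, reboot required
The -Wait and -PassThru switches let the script block until the installation finishes and then surface msiexec's exit code.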

Related

Nagios - adding spaces/tabs/newlines between arguments of Service Check Command

For better readability, I would like to add spaces around the arguments in a service's check command. Running a Nagios check outputs an error, as it seems to want everything on one line, like so:
[screenshot: one-line check_command definition]
However, I would like to specify the check command like so:
[screenshot: check command split across separate lines]
It’s not possible. Nagios parses configuration attributes by the logic “attribute value \n” so that’s how it has to look.
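For the record, this is roughly what the one-line form looks like in the object configuration (the host, service, and thresholds here are hypothetical; arguments to the command are separated by !):
define service {
    use                  generic-service
    host_name            myhost
    service_description  Disk usage
    check_command        check_disk!80%!90%!/var
}
However many macros or arguments the command takes, the check_command attribute and its entire value must stay on that single line.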
Other options include breaking it up into multiple check commands, thereby reducing the number of macros per command. It's hard to say, since you don't share the actual text.
On that note, do not share text in pictures. Ever. For any reason. Copy the text and format it as monospaced. If it contains sensitive information, redact it from the text; don't draw on a picture. Sharing text as pictures is a great way to become very unpopular with colleagues, support personnel, and anyone else you come in contact with who has to access the actual text itself, or who just wants to be able to read it (they may use a screen reader).

ISPF edit macro, how to prevent "Browse substituted" from occurring?

Steve Ives provided ALLMEM code to run an edit macro against all members of a PDS; see here: How can I run ISPF Edit Macros in Batch
Some members in my PDS are too large (by default) for edit/view and suffer "Browse substituted" on the line:
Address 'ISPEXEC' 'EDIT DATAID('data1')',
                  'MEMBER('member1') MACRO('workmac')'
Since Browse cannot run edit macros, the MACRO('workmac') part never comes into play, no END command is issued to return execution to the loop in ALLMEM, and the overall batch execution stops until I manually press PF3.
Is there any way I can force TSO to stay in EDIT mode for these large members?
Is there any way I can force TSO to stay in EDIT mode for these large members?
Maybe.
ISPF Edit has an LRECL limit. If the members that are too large exceed it, there isn't much you can do about that. If you want to engage in radical notions like splitting each record in two so they are editable, editing them, then reassembling each record pair back into a single record, that's a separate issue.
But maybe the problem isn't your LRECL, but the number of records. You might be able to do something about that.
You could try increasing the REGION parameter for your batch job in which you are running your ISPF Edit macro. I don't know if your personal ISPF settings matter in an ISPF batch job, but you could type EDITSET in an ISPF Edit session and ensure that the value for "Maximum initial storage allowed for Edit and View" is 0, just in case it matters.
Be advised that this may fix your problem, but it's possible your members are simply too big for ISPF Edit. In that case, you must find an alternate mechanism. Since you already have an edit macro, perhaps you could alter it, substituting your own code for ISPF Edit services, and run that code against your data. Perhaps this is an opportunity to learn the marvelous features of your SORT utility. Or awk. Lots of options.
If it is only certain members, then it is not an LRECL issue but strictly size. As cschneid mentioned, you can try to maximize the storage available to Edit. However, if the member is really large, then you would eventually hit a storage limit. Currently Edit or View will switch to Browse in that case. If you are running batch, this presents a problem, as you describe: there is nothing that will keep it in Edit. RC=4 is already a documented return code for Browse being substituted, but in batch you probably end up in a display loop.
One possible solution would be to have your own copy of ISRBROBA in ISPPLIB and have it set .RESP = END in the )INIT or )PROC section, so as to force an END if Browse gets used. Since it is a batch job, it is unlikely you would need the normal version of ISRBROBA. You would just make sure that your panel library is concatenated first.

At which lines in my MATLAB code is a variable accessed?

I am defining a variable at the beginning of my source code in MATLAB. Now I would like to know at which lines this variable affects something. In other words, I would like to see all lines in which that variable is read. This should include not only all accesses in the current function, but also possible accesses in sub-functions that use this variable as an input argument. That way, I can quickly see where a change to this variable has any influence.
Is there any possibility to do so in MATLAB? A graphical marking of the corresponding lines would be nice but a command line output might be even more practical.
You can always use "Find Files" to search for a certain keyword or expression. In my R2012a/Windows version it is in Edit > Find Files..., with the keyboard shortcut [CTRL] + [SHIFT] + [F].
The result will be a list of lines where the searched string is found, in all the files found in the specified folder. Please check out the options in the search dialog for more details and flexibility.
Later edit: thanks to @zinjaai, I noticed that @tc88 asked that this tool also track the effect of the variable inside the functions/sub-functions. I think this is:
1. Very difficult to achieve. The problem of running through all the possible values and branching on every possible conditional expression is... well, it is hard. I think it is halting-problem-hard.
2. In 90% of the cases, the assumption that the output of a function is influenced by the input holds. And the input and the output are part of the same statement (assigning the result of a function), so looking for where the variable is used as an argument should suffice to identify which output variables are affected.
3. There are perverse cases where functions alter arguments that are handle-type (because the argument is not copied, but referenced). This side effect breaks assumption 2, and is one of the main reasons for point 1. Outlining the cases when these side effects take place is, again, hard; it is better to assume that all of them are modified.
4. Some other cases are inherently undecidable, because they depend not on the computer's state but on the state of the "outside world". For example, suppose one calls uigetfile: the function returns a char when the user selects a file, and a double when the user chooses not to select one. Obviously the two cases will be treated differently. How could you know which variables are created/modified before the user decides?
In conclusion: I think that human intuition, plus the MATLAB Debugger (for run time), Find Files (for quickly searching where a variable is used), and depfun (for quickly identifying function dependencies), is way cheaper. But I would like to be wrong. :-)

Argument passing strategy - environment variables vs. command line

Most of the applications we developers write need to be externally parametrized at startup. We pass file paths, pipe names, TCP/IP addresses, etc. So far I've been using the command line to pass these to the application being launched. I had to parse the command line in main and direct the arguments to where they're needed, which is of course good design, but is hard to maintain for a large number of arguments. Recently I've decided to use the environment-variable mechanism. Environment variables are global and accessible from anywhere, which is less elegant from an architectural point of view, but limits the amount of code.
These are my first (and possibly quite shallow) impressions on both strategies but I'd like to hear opinions of more experienced developers -- What are the ups and downs of using environment variables and command line arguments to pass arguments to a process? I'd like to take into account the following matters:
design quality (flexibility/maintainability),
memory constraints,
solution portability.
Remarks:
Ad. 1. This is the main aspect I'm interested in.
Ad. 2. This is a bit pragmatic. I know of some limitations on Windows, which are currently huge (over 32 kB for both the command line and the environment block). I guess this is not an issue, though, since you should just use a file to pass tons of arguments if you need to.
Ad. 3. I know almost nothing of Unix, so I'm not sure whether both strategies are similarly usable there. Elaborate on this if you please.
1) I would recommend avoiding environmental variables as much as possible.
Pros of environmental variables
easy to use because they're visible from anywhere. If lots of independent programs need a piece of information, this approach is a whole lot more convenient.
Cons of environmental variables
hard to use correctly because they're visible (delete-able, set-able) from anywhere. If I install a new program that relies on environmental variables, are they going to stomp on my existing ones? Did I inadvertently screw up my environmental variables when I was monkeying around yesterday?
My opinion
use command-line arguments for those arguments which are most likely to be different for each individual invocation of the program (e.g., n for a program which calculates n!)
use config files for arguments which a user might reasonably want to change, but not very often (e.g., display size when the window pops up)
use environmental variables sparingly -- preferably only for arguments which are expected not to change (e.g., the location of the Python interpreter)
your point They are global and accessible from anywhere, which is less elegant from architectural point of view, but limits the amount of code reminds me of justifications for the use of global variables ;)
My scars from experiencing first-hand the horrors of environmental variable overuse
two programs we need at work, which can't run on the same computer at the same time due to environmental clashes
multiple versions of programs with the same name but different bugs -- brought an entire workshop to its knees for hours because the location of the program was pulled from the environment, and was (silently, subtly) wrong.
2) Limits
If I were pushing the limits of either what the command line can hold, or what the environment can handle, I would refactor immediately.
I've used JSON in the past for a command-line application which needed a lot of parameters. It was very convenient to be able to use dictionaries and lists, along with strings and numbers. The application only took a couple of command line args, one of which was the location of the JSON file.
Advantages of this approach
didn't have to write a lot of (painful) code to interact with a CLI library -- it can be a pain to get many of the common libraries to enforce complicated constraints (by 'complicated' I mean more complex than checking for a specific key or alternation between a set of keys)
don't have to worry about the CLI library's requirements for the order of arguments -- just use a JSON object!
easy to represent complicated data (answering "What won't fit into command-line parameters?"), such as lists
easy to use the data from other applications -- both to create and to parse programmatically
easy to accommodate future extensions
Note: I want to distinguish this from the .config-file approach -- this is not for storing user configuration. Maybe I should call this the 'command-line parameter-file' approach, because I use it for a program that needs lots of values that don't fit well on the command line.
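A minimal sketch of this parameter-file approach in Python (the file layout and key names are made up for illustration):
import argparse
import json

# The command line stays tiny: just the location of the parameter file.
parser = argparse.ArgumentParser()
parser.add_argument("--params", required=True, help="path to a JSON parameter file")
args = parser.parse_args()

with open(args.params) as f:
    params = json.load(f)

# Dictionaries and lists come through intact -- no CLI gymnastics needed.
print(params["pipe_name"])
print(params["input_files"])  # a list that would be clumsy as command-line flags
Invoked as: myprogram --params run42.json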
3) Solution portability: I don't know a whole lot about the differences between Mac, PC, and Linux with regard to environmental variables and command line arguments, but I can tell you:
all three have support for environmental variables
they all support command line arguments
Yes, I know -- it wasn't very helpful. I'm sorry. But the key point is that you can expect a reasonable solution to be portable, although you would definitely want to verify this for your programs (for example, are command line args case sensitive on any platforms? on all platforms? I don't know).
One last point:
As Tomasz mentioned, it shouldn't matter to most of the application where the parameters came from.
You should abstract reading parameters using Strategy pattern. Create an abstraction named ConfigurationSource having readConfig(key) -> value method (or returning some Configuration object/structure) with following implementations:
CommandLineConfigurationSource
EnvironmentVariableConfigurationSource
WindowsFileConfigurationSource - loading from a configuration file under C:\Documents and Settings\...
WindowsRegistryConfigurationSource
NetworkConfigurationSource
UnixFileConfigurationSource - loading from a configuration file under /home/user/...
DefaultConfigurationSource - defaults
...
You can also use the Chain of Responsibility pattern to chain sources in various configurations, like: if a command-line argument is not supplied, try the environment variable, and if everything else fails, return defaults.
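To make this concrete, here is a minimal Python sketch of the Strategy plus Chain of Responsibility combination (the class and key names are illustrative, not from any particular library):
import os
import sys

class ConfigurationSource:
    def read_config(self, key):
        raise NotImplementedError

class CommandLineConfigurationSource(ConfigurationSource):
    # Expects arguments of the form --key=value.
    def read_config(self, key):
        prefix = "--{}=".format(key)
        for arg in sys.argv[1:]:
            if arg.startswith(prefix):
                return arg[len(prefix):]
        return None

class EnvironmentVariableConfigurationSource(ConfigurationSource):
    def read_config(self, key):
        return os.environ.get(key)

class DefaultConfigurationSource(ConfigurationSource):
    def __init__(self, defaults):
        self.defaults = defaults
    def read_config(self, key):
        return self.defaults.get(key)

class ChainedConfigurationSource(ConfigurationSource):
    # Chain of Responsibility: the first source that knows the key wins.
    def __init__(self, *sources):
        self.sources = sources
    def read_config(self, key):
        for source in self.sources:
            value = source.read_config(key)
            if value is not None:
                return value
        return None

config = ChainedConfigurationSource(
    CommandLineConfigurationSource(),
    EnvironmentVariableConfigurationSource(),
    DefaultConfigurationSource({"pipe_name": "default_pipe"}),
)
print(config.read_config("pipe_name"))
Run it as myprogram --pipe_name=foo, as pipe_name=foo myprogram, or with neither to get the default.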
Ad 1. This approach not only allows you to abstract reading configuration, but you can also easily change the underlying mechanism without any effect on client code. You can even use several sources at once, falling back or gathering configuration from different sources.
Ad 2. Just choose whichever implementation is suitable. Of course some configuration entries won't fit into, for instance, command-line arguments.
Ad 3. If some implementations aren't portable, have two, one silently ignored/skipped when not suitable for a given system.
I think this question has been answered rather well already, but I feel like it deserves a 2018 update. An unmentioned benefit of environment variables is that they generally require less boilerplate code to work with, which makes for cleaner, more readable code. However, a major disadvantage is that they remove a layer of isolation between different applications running on the same machine. I think this is where Docker really shines. My favorite design pattern is to exclusively use environment variables and run the application inside a Docker container, which removes the isolation issue.
I generally agree with previous answers, but there is another important aspect: usability.
For example, in git you can create a repository with the .git directory outside the working tree. To specify it, you can use the command-line argument --git-dir or the environment variable GIT_DIR.
Of course, if you change the current directory to another repository or inherit environment variables in scripts, you will get it wrong. But if you need to type several git commands against a detached repository in one terminal session, this is extremely handy: you don't need to repeat the --git-dir argument.
Another example is GIT_AUTHOR_NAME. It seems that it doesn't even have a command-line counterpart (though git commit has an --author argument). GIT_AUTHOR_NAME overrides the user.name and author.name configuration settings.
In general, using command-line arguments or environment variables is equally simple on UNIX: one can use a command-line argument
$ command --arg=myarg
or an environmental variable in one line:
$ ARG=myarg command
It is also easy to capture command line arguments in an alias:
alias cfg='git --git-dir=$HOME/.cfg/ --work-tree=$HOME' # for dotfiles
alias grep='grep --color=auto'
In general most arguments are passed through the command line. I agree with the previous answers that this is more functional and direct, and that environmental variables in scripts are like global variables in programs.
GNU libc says this:
The argv mechanism is typically used to pass command-line arguments specific to the particular program being invoked. The environment, on the other hand, keeps track of information that is shared by many programs, changes infrequently, and that is less frequently used.
Apart from what was said about the dangers of environment variables, there are good use cases for them. GNU make has very flexible handling of environment variables (and thus is well integrated with the shell):
Every environment variable that make sees when it starts up is transformed into a make variable with the same name and value. However, an explicit assignment in the makefile, or with a command argument, overrides the environment. (-- and there is an option to change this behaviour) ...
Thus, by setting the variable CFLAGS in your environment, you can cause all C compilations in most makefiles to use the compiler switches you prefer. This is safe for variables with standard or conventional meanings because you know that no makefile will use them for other things.
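For example (matching the quoted behaviour):
$ CFLAGS='-g -O0' make       # from the environment: most makefiles pick this up
$ make CFLAGS='-g -O0'       # as a command argument: overrides the environment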
Finally, I would stress that what matters most for a program is not the programmer's experience, but the user's. Maybe you included that in the design aspect, but internal and external design are pretty different entities.
And a few words about programming aspects. You didn't write what language you use, but let's imagine your tools allow you the best possible argument parsing. In Python I use argparse, which is very flexible and rich. To get the parsed arguments, one can use a command like
args = parser.parse_args()
args can be further split into parsed arguments (say args.my_option), but I can also pass them as a whole to my function. This solution is absolutely not "hard to maintain for a large number of arguments" (if your language allows that). Indeed, if you have many parameters and they are not used during argument parsing, pass them in a container to their final destination and avoid code duplication (which leads to inflexibility).
And the very final comment: it's much easier to parse environment variables than command-line arguments. An environment variable is simply a pair, VARIABLE=value. Command-line arguments can be much more complicated: they can be positional or keyword arguments, or subcommands (like git push). They can capture zero or several values (recall the command echo and flags like -vvv). See argparse for more examples.
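To see the difference in code, compare the two in Python (the variable and flag names are made up):
import argparse
import os

# Environment variable: a single dictionary lookup.
debug = os.environ.get("MYAPP_DEBUG", "0")

# Command-line flag: even one repeatable flag needs a parser.
parser = argparse.ArgumentParser()
parser.add_argument("-v", "--verbose", action="count", default=0,
                    help="repeat for more verbosity, e.g. -vvv")
args = parser.parse_args()

print(debug, args.verbose)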
And one more thing: your worry about memory is a bit concerning. Don't write overly general programs. A library should be flexible, but a good program is useful without any arguments. If you need to pass a lot, it is probably data, not arguments. How to read data into a program is a much more general question, with no single solution for all cases.

How do you include authorship and version info into your MATLAB functions [closed]

Closed. This question is opinion-based. It is not currently accepting answers. Closed 5 years ago.
This question is related to my previous one: MATLAB m-file help formatting.
What do you usually write to describe authorship of your own function? Do you put it at the end of the function body or right after the help text before any code?
How do you include version information? Is it possible to automatically update version after function modification?
This is what I usually include:
% My Name <my#email>
% My company
% Created: September 2010
% Modified: October 2010
Please share your thoughts and ideas.
I have a function in the MATLAB Central File Exchange that helps you document your function in a standard way, and works with version control software (CVS and Subversion; not git) to automatically update the author field and time of modification.
You just type new at the command prompt, then the name of the function, and it sorts out the rest.
The basic template for documentation that I use is
function [outputArgs] = TestFunction(inputArgs)
%TESTFUNCTION Summary of this function goes here
%
% [OUTPUTARGS] = TESTFUNCTION(INPUTARGS) Explain usage here
%
% Examples:
%
% Provide sample usage code here
%
% See also: List related files here
% $Author: rcotton $ $Date: 2010/10/01 18:23:52 $ $Revision: 0.1 $
% Copyright: Health and Safety Laboratory 2010
(You'll obviously want a different company in your copyright statement.)
The first line of the help documentation is known as the H1 line, and is used by the function lookfor, among others. It is important that this comes straight after the function definition line.
If you have different use cases (maybe with and without optional arguments), then you should describe each one.
The Examples: and See also: lines are formatted in a way that works with the help report generator. (I've just spotted a bug - the year should be before the company name in the copyright line. Fix on its way.)
$Author:, etc., are formatted for use with CVS/SVN. Since git uses hashes of files, you can't change the content of the file without git thinking that it's been updated.
We keep our code in a Subversion repository and use the keywords functionality for writing this sort of information into the header comments of the m-file when it is committed to the repo. We put a block of comments right after the initial function line in (most of) our m-files.
If you are not using a source code control system then:
You really should start using one right now.
You could write a script (in Matlab, why not) for maintaining the comment information you require, and implement some process to ensure that you run the script when necessary.
We don't generally put modification dates or histories in our source files, the repository tracks changes for us. Which is another reason you should be using one.
And, while you are thinking about all this, if you haven't already done so: check out Matlab's publish functionality.
EDIT: @yuk: I guess from your mention of TortoiseSVN that you are working on Windows. It's a couple of years since I installed Subversion on my Windows PC; I don't recall any problems at all with the installation, and I'm not qualified to help you debug yours -- but there are plenty on SO who are.
As for keeping version (etc) info up to date, there's no scripting required. You simply include the special strings, such as $Rev$, which Subversion recognises as keywords in the locations in your files (probably in the comments) where you want the version information (etc). This is well explained in the Subversion Book.
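For example, keyword expansion has to be switched on per file via the svn:keywords property (the file name is just an example):
$ svn propset svn:keywords "Author Date Rev" myFunction.m
From the next commit onwards, Subversion rewrites $Author$, $Date$ and $Rev$ in place wherever they appear in the file.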
As for the Matlab documentation, I think the publish (and related) features are well explained in the Desktop Tools and Development Environment handbook which is available on-line at the Mathworks web-site.
As High Performance Mark points out, some form of source code control is ideal for handling this situation. However, if you are adding information by hand here are a few pointers:
I would definitely include a line stating the MATLAB version your code was written in, or perhaps which versions you know it works for.
When adding the information, you have to leave a blank (non-comment) line between it and the help comment block if you don't want it displayed when the user views the help contents, like so:
function myFunction
%# Help text for function

%# Your information
Unless you do want it displayed with the help. Then just make one big comment block.