Typing fully qualified type names in PowerShell?

I want to reduce typing System and other root namespaces. For example:
[string] $ComputerName = [Environment]::MachineName
vs.
[System.String] $ComputerName = [System.Environment]::MachineName
Another example:
[Net.Dns]::GetHostEntry("192.168.1.1")
vs.
[System.Net.Dns]::GetHostEntry("192.168.1.1")
Are there any reasons or specific situations where typing System and similar parent namespaces is required?
I often wonder why there is a System namespace at all, since everything is inside that namespace; what's the deal with it? It seems like nonsense. What is the term System supposed to mean, anyway? It doesn't refer to the operating system here, but to everything in the .NET Framework.
I assume that there may be exceptions when calling static methods, but I don't know C#, so I'm unable to answer this myself.

In somewhat jumbled order:
It seems like nonsense. What is the term System supposed to mean, anyway?
The "System" in question is the .NET runtime and framework. The original idea, as far as I understand, was that code that doesn't ship with .NET would fall under different namespaces - but multiple Microsoft-authored components built on top of .NET has since made use of System parent namespace - including all the APIs that make up PowerShell itself.
I often wonder why there is a System namespace at all, since everything is inside that namespace; what's the deal with it?
Not everything is inside the System namespace, but, as mentioned above, everything that ships with the runtime or base class library is - which is exactly why PowerShell automatically resolves type literals even if you omit System. from a qualified type name. PowerShell is already trying to help you reduce typing.
Are there any reasons and specific situation when typing System and similar parent namespaces is required?
Yes - when the parent namespace is not System.
The first example that comes to mind is the .NET wrapper classes for the Win32 registry API on Windows:
$HKLM = [Microsoft.Win32.RegistryKey]::OpenBaseKey([Microsoft.Win32.RegistryHive]::LocalMachine, [Microsoft.Win32.RegistryView]::Default)
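PowerShell's implicit resolution only ever tries prepending System., so names under other roots cannot be shortened that way. A quick illustrative sketch (the comments paraphrase the behavior):
[Win32.RegistryKey]            # fails: PowerShell only tries System.Win32.RegistryKey
[Microsoft.Win32.RegistryKey]  # works: fully qualified name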
Now, for the actual question:
I want to reduce typing System and other root namespaces
You can't add custom namespace prefixes (like System) to PowerShell's type-name resolution, but in PowerShell 5 and up you can declare automatic resolution of type names in specific namespaces with the using namespace directive.
Without using namespace:
[System.Net.Dns]::GetHostEntry("192.168.1.1")
With using namespace:
using namespace System.Net
[Dns]::GetHostEntry("192.168.1.1")
When used in a script, any using directives must precede anything else in the file, including the param block.
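For example, a minimal script sketch (the file name and parameter are hypothetical):
# myscript.ps1
using namespace System.Net
param([string] $HostName = 'localhost')
[Dns]::GetHostEntry($HostName)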
using namespace directives work in an interactive session as well, provided that you issue each one as a separate statement:
PS> using namespace System.Net
PS> [Dns] # still works!

By the way, PowerShell has a lot of "type accelerators" like [datetime]; can you make your own, like [dt]?

Related

What is the scope of using namespace in PowerShell?

When specifying using namespace within a PowerShell module or script file, is there a scope associated with the directive into which the namespace symbols are introduced?
Or are the namespace symbols introduced into the global session state, bypassing the PowerShell scope modifiers?
using namespace System.Management.Automation.Runspaces
For example, if we specify using namespace inside a module, it would be great if it applied to the module scope only.
Likewise, if we specify using namespace inside a script file, it would be great if it applied to that script only, unless dot-sourced.
I didn't test any of this and was not able to find documentation regarding this question.
A using namespace statement follows PowerShell's usual dynamic scoping rules[1]:
It takes effect in the scope in which it is placed and all descendant scopes.
If a script file (*.ps1) is dot-sourced (e.g. . ./file.ps1), the statement takes effect in the caller's scope.
Since modules have their own scope domains ("session states") that are connected only to the global scope, a using namespace statement in a module doesn't affect any module-external callers.
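To make the dot-sourcing rule concrete, a small interactive sketch (the file name is hypothetical):
PS> Get-Content ./file.ps1
using namespace System.Net
PS> . ./file.ps1   # dot-sourced: the directive takes effect in the caller's scope
PS> [Dns]          # now resolves here, too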
Caveat: As of PowerShell 7.1, you cannot use name-only type literals that rely on using namespace statements (e.g., [Command] for [System.Management.Automation.Runspaces.Command]) in [OutputType([...])] attributes of exported functions - they work fine in parameter declarations and function bodies - as detailed in GitHub issue #13768, which you've discovered.
[1] As of this writing, the conceptual about_Scopes help topic doesn't explicitly discuss the dynamic nature of PowerShell's scoping as such (as distinct from the lexical scoping you would find in languages such as C#). In simple terms, PowerShell's dynamic scoping means that, inside a given scope domain (non-module code vs. a given module's code), code called from a given caller is affected by the runtime state of that caller, such as by using statements, Set-StrictMode settings, and the visibility of the caller's variables.

Why does an automation framework require a proxy function for every PowerShell cmdlet?

In my new project team, they have written a proxy function for each PowerShell cmdlet. When I asked the reason for this practice, they said that it is the normal way an automation framework is written. They also said that if a PowerShell cmdlet changes, we do not need to worry; we can just change one function.
I have never seen PowerShell cmdlets' functionality or names change.
For example, the SQL PowerShell module previously shipped as a snap-in and later changed to a module, but the cmdlets stayed the same. There was no change in cmdlet signatures; at most, extra arguments may have been added.
Because of these proxy functions, even small tasks take a long time. Is their fear baseless or justified? Is there any incident where a PowerShell cmdlet's name or parameters changed?
I guess they want to be extra safe. PowerShell does have breaking changes here and there, but I doubt that what your team is doing would be impacted by them (given how rare these events are). For instance, my several-years-old scripts continue to function properly to the present day (and they were mostly developed against PS 2-3).
I would say that this is overengineering, but I can't really blame them for it.
4c74356b41 makes some good points, but I wonder if there's a simpler approach.
Bear with me while I restate the situation, just to ensure I understand it.
My understanding of the issue is that usage of a certain cmdlet may be strewn about the code base of your automation framework.
One day, in a new release of PowerShell or that module, the implementation changes; it could be internal only, or the parameters (the signature) or even the cmdlet name could change.
The problem then, is you would have to change the implementation all throughout your code.
So proxy functions don't prevent this issue; a breaking change will still break your framework, but the idea is that fixing it will be simpler, because you can fix your own proxy function implementation in one place and all of the calling code is fixed with it.
Other Options
Because of the way command discovery works in PowerShell, you can override existing commands by defining functions or aliases with the same name.
So for example let's say that Get-Service had a breaking change and you used it all over (no proxy functions).
Instead of changing all your code, you can define your own Get-Service function, and the code will use that instead. It's basically the same thing you're doing now, except you don't have to implement hundreds of "empty" proxy functions.
For better naming, you can name your function Get-FrameworkService (or something) and then just define an alias for Get-Service to Get-FrameworkService. It's a bit easier to test that way.
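A hedged sketch of that pattern (the pass-through body is a stand-in for whatever fix you actually need):
function Get-FrameworkService {
    [CmdletBinding()]
    param([string[]] $Name = '*')
    # Call the original via its module-qualified name to bypass the alias
    Microsoft.PowerShell.Management\Get-Service -Name $Name
}
# Aliases are resolved before functions and cmdlets, so this redirects existing calls
Set-Alias -Name Get-Service -Value Get-FrameworkService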
One disadvantage of this is that reading the code could be unclear: when you see Get-Service somewhere, it's not immediately obvious that it may have been overridden, and it becomes a bit less straightforward to call the original version when you really want it.
For that, I recommend importing all of the modules you'll be using with -Prefix and then making all (potentially) overridable calls use the prefix, so there's a clear demarcation.
This even works with a lot of the "built-in" commands, so you could re-import the module with a prefix:
Import-Module Microsoft.PowerShell.Utility -Prefix Overridable -Force
TL;DR
So the short answer:
avoid making lots and lots of pass-thru proxy functions
import all modules with prefix
when needed create a new function to override functionality of another
then add an alias for prefixed_name -> override_function
Import-Module Microsoft.PowerShell.Utility -Prefix Overridable -Force
Compare-OverridableObject $a $b
No need for a proxy here; later when you want to override it:
function Compare-CanonicalObject { <# Stuff #> }
New-Alias Compare-OverridableObject Compare-CanonicalObject
Anywhere in the code that you see a direct call like:
Compare-Object $c $d
Then you know: either this intentionally calls the current implementation of that command (which in other places could be overridden), or this command should never be overridden.
Advantages:
Clarity: looking at the code tells you whether an override could exist.
Testability: writing tests is clearer and easier for overridden commands because they have their own unique name.
Discoverability: all overridden commands can be discovered by searching for aliases with the right name pattern, e.g. Get-Alias *-Overridable*
Much less code
All overrides and their aliases can be packaged into modules
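For that last point, a minimal packaging sketch (the module file name is hypothetical):
# Overrides.psm1
function Compare-CanonicalObject { <# Stuff #> }
New-Alias -Name Compare-OverridableObject -Value Compare-CanonicalObject
Export-ModuleMember -Function Compare-CanonicalObject -Alias Compare-OverridableObject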

Puppet Class: define a variable which lists all files in a directory

I'm defining my own Puppet class, and I was wondering if it is possible to have an array variable which contains a list of all files in a specific directory. I was hoping for a syntax similar to the one below, but I haven't found a way to make it work.
$dirs = Dir.entries('C:\\Program Files\\Java\\')
Does anyone know how to do this in a Puppet file?
Thanks!
I was wondering if it is possible to have an array variable which contains a list of all files in a specific directory.
Information about the current state of the machine to be configured is conveyed to the catalog compiler via facts. These are available to your classes as top-scope variables, and Puppet (or Facter, actually) provides ways to define your own custom facts. That's a link into the Facter 3 manual, but similar applies to earlier versions. Do not overlook the rest of the Facter documentation, which has more relevant information on this topic.
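For instance, on a Windows agent you could expose the listing as an external fact: Facter's facts.d mechanism accepts executable scripts (including PowerShell scripts on Windows) that print key=value lines. A hedged sketch; the file name java_dirs.ps1 and the fact name are made up:
# java_dirs.ps1 - dropped into the agent's facts.d directory
$entries = (Get-ChildItem 'C:\Program Files\Java').Name -join ','
Write-Output "java_dirs=$entries"
The result is then available in your class as $facts['java_dirs'].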
On the other hand, information about the machine providing catalog-building services -- the master in a master / agent setup -- can be obtained by writing and calling a custom function. This is rarely what you actually want, but it's worth mentioning because you might one day want a custom function for some other purpose.

Looking for best practices on PowerShell using module, Import-Module, dot-sourcing, and Add-PSSnapin

Background: I am looking for best practices for building a PowerShell framework of my own. As a former .NET programmer, I like to keep source files small and organize code into classes and libraries.
Question: I am totally confused by using module, Import-Module, dot-sourcing, and Add-PSSnapin. Sometimes it works; sometimes it does not. Some includes work when running from ISE/VS2015 but fail when running via cmd powershell -command "& './myscript.ps1'". I want to include/import classes and functions. I would also like to use type and namespace aliases. Using them together with includes produces even weirder results, but sometimes they somehow work.
Edit: let me be more specific:
Local project case (all files in one dir): main.ps1, common_autorun.ps1, _common.psm1, _specific.psm1.
How do I include these modules in the main script using relative paths?
_specific.psm1 also relies on _common.psm1.
There are ScriptBlocks passed between modules that may contain calls to classes defined in the parent context.
common_autorun.ps1 contains solely type accelerators and namespace imports, as described here.
Modules contain mainly classes with static methods, as I am not yet used to the PowerShell style of programming where functions do not have predictable return values.
As I understand it, my problems are related to context and scope. Unfortunately, these PowerShell concepts are not well documented for v5 classes.
Edit2: Simplified sample:
_common.psm1 contains watch
_specific.psm1 contains getDiskSpaceInfoUNC
main.ps1 contains just:
watch 'getDiskSpaceInfoUNC "\\localhost\d$"'
What includes/imports should I put into these files for this code to work both in ISE and via powershell.exe -command "& './main.ps1'"?
Of course, this works perfectly when both functions are defined in main.ps1.
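For reference, one pattern that typically works in both hosts is to anchor paths on $PSScriptRoot and surface classes with using module; a hedged sketch, assuming the file layout above:
# main.ps1
using module .\_common.psm1     # 'using module' (PS 5+) is what makes classes visible; Import-Module alone does not
using module .\_specific.psm1
. "$PSScriptRoot\common_autorun.ps1"   # dot-source the accelerators into this scope
watch 'getDiskSpaceInfoUNC "\\localhost\d$"'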

PowerShell naming conventions/scoping

I want to create a PowerShell function/cmdlet which installs (and one that uninstalls) a web application: copies files, creates an app pool, creates the web application, sets up all kinds of IIS properties, does some web.config modifications, etc. I'm confused about how I should name it. PowerShell has this verb-noun naming convention, and it's all nice, but the names I want to use (New-WebApplication, etc.) are already taken by the WebAdministration module (which this new function will use internally). Is there a nice way to scope my functions to make it clear that they come from a different module? Like mymodule.New-WebApplication, My-New-WebApplication, New-MyWebApplication? Or I could call it Install-WebApplication, but that could lead to confusion because of reusing the same name.
I just ran into this recently for a similar issue. This could have many opinionated answers, but this one addresses "the way to scope my functions to make it clear that it's a different module."
You could use the -Prefix parameter of Import-Module
Import-Module mymodule -Prefix Super
So when you go to use your cmdlet you would call it with
New-SuperWebApplication
Alternatively, you can also explicitly call the cmdlet with the module path
mymodule\New-WebApplication
I agree with Matt's answer, but I wanted to offer another perspective.
I wrote a module whose intention was specifically to recreate the functionality of an existing cmdlet. I named my function differently, but I also exported functions from the module that allow the caller to override the existing cmdlet with mine (using an alias, which is resolved first) and then to undo that process.
This allowed someone to explicitly call the function, without needing -Prefix or the \ syntax, by using the new name in new code, but it also allowed my function to serve as a drop-in replacement in existing code by invoking a single new command.
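A hedged sketch of that override/undo pattern (the function and cmdlet names here are hypothetical, not the module's actual API):
function Enable-CmdletOverride {
    # Aliases are resolved before cmdlets, so this shadows the original
    Set-Alias -Name Resolve-DnsName -Value Resolve-DnsNameFixed -Scope Global
}
function Disable-CmdletOverride {
    Remove-Item -Path Alias:\Resolve-DnsName   # undo: remove the shadowing alias
}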
Here's that module if you want to take a look:
DnsCmdletFixes