Why do PowerShell class properties require $this within their methods?

PSVersion: 5.1.17134.858
I haven't done a whole lot of work with PowerShell's classes, but here's a simple example of what I'm running into:
class xNode{
    [uint64]$MyID=0
    static [uint64]$ClassID=0
    xNode(){
        $MyID = [xNode]::ClassID++
    }
    [String] ToString(){return "xNode: $MyID"}
}
This doesn't parse; it gives two errors:
line 6 ($MyID = ...): "Cannot assign property, use '$this.MyID'."
line 9 (..$MyID"): "Variable is not assigned in the method."
I'm trying to use the class's property named $MyID, and this usage appears to be in line with the example given in the help documentation (get-help about_Classes). When I copied their whole example at the end out to a file and tried to run it, I got the same errors for $Head, $Body, $Title, and so on. Of course I can force it to work by adding $this:
class xNode{
    [uint64]$MyID=0
    static [uint64]$ClassID=0
    xNode(){
        $this.MyID = [xNode]::ClassID++
    }
    [String] ToString(){return "xNode: $($this.MyID)"}
}
However, I'd rather not have to keep typing $this. all over the place; is there maybe some environment setting or something I've overlooked?
(Note: to get it to work at the command line, I also needed to remove all of the blank lines.)
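For what it's worth, a quick sanity check that the $this-qualified version behaves as expected (a sketch; results shown as comments, assuming the class above is loaded):
$a = [xNode]::new()
$b = [xNode]::new()
$a.ToString()       # xNode: 0
$b.ToString()       # xNode: 1
[xNode]::ClassID    # 2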

However, I'd rather not have to keep typing $this. all over the place; is there maybe some environment setting or something I've overlooked?
No, it's working as advertised: you can't reference instance properties inside a class method without the $this variable.
Why do PowerShell class properties require $this within their methods?
(The following is what I'd call "qualified speculation", i.e. not an officially sourced explanation.)
PowerShell classes are a bit tricky from an implementer's point of view, because a .NET property behaves differently than, say, a parameter in a PowerShell scriptblock or a regular variable.
This means that when the language engine parses, analyzes and compiles the code that makes up your PowerShell Class, extra care has to be taken as to which rules and constraints apply to either.
By requiring all "instance-plumbing" to be routed through $this, this problem of observing one set of rules for class members vs everything else becomes much smaller in scope, and existing editor tools can continue to (sort of) work with very little change.
From a user perspective, requiring $this also helps prevent mishaps like accidentally overwriting instance members when creating new local variables.
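As a sketch of that last point (hypothetical Counter class; behavior as observed in PS 5.1), an assignment without $this always creates a method-local variable, so the instance member can't be clobbered by accident:
class Counter{
    [int]$Count=0
    [void] Bump(){
        $Count = 42       # creates a method-local variable; the property is untouched
        $this.Count++     # only an explicit $this reference mutates the instance
    }
}
$c = [Counter]::new()
$c.Bump()
$c.Count    # 1, not 42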

Related

Why does an automation framework require a proxy function for every PowerShell cmdlet?

In my new project team, they have written a proxy function for each PowerShell cmdlet. When I asked the reason for this practice, they said it is the normal way an automation framework is written. They also said that if a PowerShell cmdlet changes, we don't need to worry; we can just change the one function.
I have never seen a PowerShell cmdlet's functionality or name change.
For example, the SQL PowerShell module previously shipped as a snap-in and later changed to a module, but the cmdlets stayed the same. There was no change in cmdlet signatures, beyond perhaps extra arguments being added.
Because of these proxy functions, even small tasks take a long time. Is their fear baseless or justified? Is there any incident where a PowerShell cmdlet's name or parameters changed?
I guess they want to be extra safe. PowerShell does have breaking changes here and there, but I doubt that what your team is doing would be impacted by those (given how rare such events are). For instance, my several-years-old scripts continue to function properly up to the present day (and they were mostly developed against PS 2-3).
I would say that this is overengineering, but I can't really blame them for it.
4c74356b41 makes some good points, but I wonder if there's a simpler approach.
Bear with me while I restate the situation, just to ensure I understand it.
My understanding of the issue is that usage of a certain cmdlet may be strewn about the code base of your automation framework.
One day, in a new release of PowerShell or of that module, the implementation changes; it could be internal only, it could be the parameters (signature), or even the cmdlet name that changes.
The problem then, is you would have to change the implementation all throughout your code.
So proxy functions don't prevent this issue; a breaking change will still break your framework. The idea is that fixing it becomes simpler: you fix your own proxy function implementation, in one place, and then all of the code is fixed.
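To make that concrete, a minimal hand-rolled proxy might look something like this (Get-ServiceProxy is a hypothetical name; the framework code would call the proxy rather than the cmdlet directly):
function Get-ServiceProxy {
    param([string[]]$Name = '*')
    # if Get-Service's signature ever changes, this body is the only place to fix
    Get-Service -Name $Name
}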
Other Options
Because of the way command discovery works in PowerShell, you can override existing commands by defining functions or aliases with the same name.
So for example let's say that Get-Service had a breaking change and you used it all over (no proxy functions).
Instead of changing all your code, you can define your own Get-Service function, and the code will use that instead. It's basically the same thing you're doing now, except you don't have to implement hundreds of "empty" proxy functions.
For better naming, you can name your function Get-FrameworkService (or something) and then just define an alias for Get-Service to Get-FrameworkService. It's a bit easier to test that way.
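That could look something like this (a sketch; Get-FrameworkService and its parameters are placeholders):
function Get-FrameworkService {
    param([string[]]$Name = '*')
    # adapted implementation goes here; a module-qualified call still reaches
    # the original cmdlet, bypassing the alias below
    Microsoft.PowerShell.Management\Get-Service -Name $Name
}
# aliases take precedence over cmdlets, so existing Get-Service calls now
# resolve to the override
Set-Alias -Name Get-Service -Value Get-FrameworkService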
One disadvantage of this is that reading the code can be unclear: when you see Get-Service somewhere, it's not immediately obvious that it could have been overridden, which makes it a bit less straightforward if you really wanted to call the original version.
For that, I recommend importing all of the modules you'll be using with -Prefix and then making all (potentially) overridable calls use the prefix, so there's a clear demarcation.
This even works with a lot of the "built-in" commands, so you could re-import the module with a prefix:
Import-Module Microsoft.PowerShell.Utility -Prefix Overridable -Force
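With that in place, the utility cmdlets are also exposed under prefixed names; the prefix is inserted before the noun (a quick sketch):
Get-OverridableDate                # resolves to Microsoft.PowerShell.Utility\Get-Date
Compare-OverridableObject $a $b    # resolves to Compare-Object $a $b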
TL;DR
So the short answer:
avoid making lots and lots of pass-thru proxy functions
import all modules with prefix
when needed create a new function to override functionality of another
then add an alias for prefixed_name -> override_function
Import-Module Microsoft.PowerShell.Utility -Prefix Overridable -Force
Compare-OverridableObject $a $b
No need for a proxy here; later when you want to override it:
function Compare-CanonicalObject { <# Stuff #> }
New-Alias Compare-OverridableObject Compare-CanonicalObject
Anywhere in the code that you see a direct call like:
Compare-Object $c $d
Then you know: either this intentionally calls the current implementation of that command (which in other places could be overridden), or this command should never be overridden.
Advantages:
Clarity: looking at the code tells you whether an override could exist.
Testability: writing tests is clearer and easier for overridden commands because they have their own unique name
Discoverability: all overridden commands can be discovered by searching for aliases with the right name pattern i.e. Get-Alias *-Overridable*
Much less code
All overrides and their aliases can be packaged into modules

Object type changes on import/export in Powershell

I have been banging up against this for several hours now and I'm hoping that someone can help point me in the right direction.
I'm developing a few custom PowerShell cmdlets, and one of the supporting classes is a User object. Several of my cmdlets either emit or consume List<User>.
This has worked very well so far, but I hit a serious snag when I tried to serialize one of the lists. The export seems to work fine; I look at the file (csv, clixml, etc.) and it looks the way I expect it to with type User. However, when I re-import it, the type seems to change to CSV:Class.User or Deserialized.Class.User. Obviously, this causes a problem when it's fed into a cmdlet that expects the standard User class.
Is there a good way to fix this? I suspected that changing my cmdlets to expect some interface instead of List<User> would probably do the trick, but I can't figure out what interface that should be. And I can find no switch on the import commands to specify class names.
Any help would be greatly appreciated.
Welcome to PowerShell's extended type system. :-) BTW, you will also get back state-only deserialized objects when your objects are passed across a remoting session. You can query the PSObject's TypeNames collection, looking for Deserialized.Class.User, to determine whether you have a deserialized version of your type. The same goes for the CSV version.
You could create a couple of factory methods or clone-style constructors on your User class that take a PSObject which is some type of User (CSV or Deserialized) and create a regular Class.User object from it.
Just be aware that certain operations may not make sense in the deserialization case. Take a Process object as an example: you can call Kill on it, and if the object came from the same machine, that will work (assuming correct privs). However, if you were to call Kill on a Process object from another machine, that's not going to work; hence the special deserialized objects, which are primarily just data (property) containers.
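A rough sketch of that factory idea (ConvertTo-User, the property names, and the Class.User constructor are all assumptions; adapt them to what your class actually exposes):
function ConvertTo-User {
    param([Parameter(Mandatory)][psobject]$InputObject)
    $names = $InputObject.PSObject.TypeNames
    if ($names -contains 'Deserialized.Class.User' -or
        $names -contains 'CSV:Class.User') {
        $user = New-Object Class.User      # assumes a default constructor
        $user.Name = $InputObject.Name     # copy the state-only properties back
        $user.Id   = $InputObject.Id
        $user                              # emit a live Class.User
    }
    else {
        $InputObject                       # already a live Class.User
    }
}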

Why the heck is Rails 3.1 / Sprockets 2 / CoffeeScript adding extra code?

Working with Rails 3.1 (rc5), I'm noticing that for any CoffeeScript file I include, Rails (or Sprockets) adds initializing JavaScript at the top and bottom. In other words, a blank .js.coffee file gets output looking like this:
(function() {
}).call(this);
This is irritating because it screws up my JavaScript scope (unless I really don't know what I'm doing). I generally separate all of my JavaScript classes into separate files, and I believe that having that function code wrapping my classes puts them out of scope from one another. Or, at least, I can't seem to access them, as I am continually getting undefined errors.
Is there a way to override this? It seems like this file in sprockets has to do with adding this code:
https://github.com/sstephenson/sprockets/blob/master/lib/sprockets/jst_processor.rb
I understand that wrapping everything in a function might seem like an added convenience, as then nothing runs until the DOM is loaded, but as far as I can tell it just messes up my scope.
Are you intending to put your objects into the global scope? I think CoffeeScript usually wraps code in anonymous functions so that it doesn't accidentally leak variables into the global scope. If there's not a way to turn it off, your best bet would probably be to specifically add anything you want to be in the global scope to the window object:
window.myGlobal = myGlobal;
It seems to be a javascript best practice these days to put code inside a function scope and be explicit about adding objects to the global scope, and it's something I usually see CoffeeScript do automatically.
You don't want to put everything into the global scope. You want a module or module-like system where you can namespace things so you don't collide with other libraries. Have a read of
https://github.com/jashkenas/coffee-script/wiki/Easy-modules-with-coffeescript

Does powershell have a method_missing()?

I have been playing around with the dynamic abilities of PowerShell, and I was wondering something:
Is there anything in PowerShell analogous to Ruby's method_missing(), where you can set up a 'catch-all method' to dynamically handle calls to non-existent methods on your objects?
No, not really. I suspect that the next version of PowerShell will become more in line with the dynamic dispatch capabilities added to .NET 4 but for the time being, this would not be possible in pure PowerShell.
Although I do recall that there is a component model, similar to .NET's TypeDescriptor, for creating objects that provide properties and methods dynamically to PowerShell. This is how XML elements are able to be treated like objects, for example. But it is poorly documented, if at all, and in my experience a lot of the types/methods needed to integrate with it are marked internal.
You can emulate it, but it's tricky. The technique is described in Lee Holmes' book and boils down to two scripts: Add-RelativePathCapture http://poshcode.org/2131 and New-CommandWrapper http://poshcode.org/2197.
The essence is: you can override any cmdlet via New-CommandWrapper. Thus you can redefine Out-Default, which is implicitly invoked at the end of almost every command (excluding commands with explicit formatters like Format-Table at the end). In the new Out-Default you check whether the last command threw an exception saying that no method or property was found, and that is where you insert your method_missing logic.
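A simplified sketch of the detection half (this hand-rolls the proxy with a steppable pipeline instead of using New-CommandWrapper; the 'MethodNotFound' error id check and the warning are assumptions meant to illustrate where the hook goes):
function Out-Default {
    [CmdletBinding()]
    param([Parameter(ValueFromPipeline)] $InputObject)
    begin {
        # forward everything to the real Out-Default
        $steppable = { Microsoft.PowerShell.Core\Out-Default }.GetSteppablePipeline($MyInvocation.CommandOrigin)
        $steppable.Begin($PSCmdlet)
    }
    process { $steppable.Process($InputObject) }
    end {
        $steppable.End()
        $last = $global:Error | Select-Object -First 1
        if ($last -and $last.FullyQualifiedErrorId -eq 'MethodNotFound') {
            # your method_missing logic goes here
            Write-Warning "Unhandled method call: $($last.Exception.Message)"
        }
    }
}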
You could use try/catch within PowerShell 2.0:
http://blogs.technet.com/b/heyscriptingguy/archive/2010/03/11/hey-scripting-guy-march-11-2010.aspx

lua - capturing variable assignments

Ruby has this very interesting functionality: when you create a class with Class.new and assign it to a constant (uppercase), the language "magically" sets up the name of the class so that it matches the constant.
# This is ruby code
MyRubyClass = Class.new(SuperClass)
puts MyRubyClass.name # "MyRubyClass"
It seems Ruby "captures" the assignment and sets the name on the anonymous class.
I'd like to know if there's a way to do something similar in Lua.
I've implemented my own class system, but for it to work I've got to specify the same name twice:
-- This is Lua code
MyLuaClass = class('MyLuaClass', SuperClass)
print(MyLuaClass.name) -- MyLuaClass
I'd like to get rid of that 'MyLuaClass' string. Is there any way to do this in Lua?
When assigning to global variables you can set a __newindex metamethod for the table of globals to catch assignments of class variables and do whatever is needed.
You can eliminate one of the mentions of MyLuaClass...
> function class(name,superclass) _G[name] = {superclass=superclass} end
> class('MyLuaClass',33)
> =MyLuaClass
table: 0x10010b900
> =MyLuaClass.superclass
33
>
Not really. Lua is not an object-oriented language. It can behave like one sometimes, but far from always. Classes are not special values in Lua; a table has the value you put in it, no more. The best you can do is set the key in _G from the class function and eliminate having to take the return value.
I guess that if it REALLY, REALLY bothers you, you could use debug.traceback(), get a stack trace, find the calling file, and parse it to find the variable name. Then set that. But that's more than a little overkill.
With respect at least to Lua 5.2: you can capture assignments to (a) the global table of a Lua state, as mentioned in a previous reply, and (b) any other Lua object whose __index and __newindex metamethods have been substituted (by replacing its metatable). I can confirm this, as I'm currently using both techniques to hook and redirect assignments made by Lua scripts to external C/C++ resource management.
There is a gotcha with regard to reading the values back, though: the trick is to NOT let them actually be set in the Lua state. As soon as they exist there, your hooks will no longer be called, so if you want to go down this path, you need to capture ALL get/set attempts and NEVER store the values in the Lua state.