PowerShell circular dependencies

I have a scenario where I THINK a circular dependency is the right answer. In my actual code I have a class that contains some data that is used for token replacements. That class contains other classes for the data, and contains a method that does the lookup of token values and returns the value. However, those dependent classes need to be validated, and some of those values are dependent on lookups. So, I have mocked up this code to validate the approach.
class Tokens {
    [Collections.Specialized.OrderedDictionary]$primaryData
    [Collections.Specialized.OrderedDictionary]$secondaryData
    Tokens ([Secondary]$secondary) {
        $this.primaryData = [Ordered]@{
            '1' = 'One'
            '2' = 'Two'
            '3' = 'Three'
            '4' = 'Four'
            '5' = 'Five'
        }
        $this.secondaryData = $secondary.secondaryData
    }
    [String] GetToken ([String]$library, [String]$item) {
        return $this.$library.$item
    }
    [Void] SetToken ([String]$library, [String]$item, [String]$value) {
        $this.$library.$item = $value
    }
    [String] ToString () {
        [Collections.Generic.List[String]]$toString = [Collections.Generic.List[String]]::new()
        foreach ($key in $this.primaryData.Keys) {
            $toString.Add("$key : $($this.primaryData.$key)")
        }
        foreach ($key in $this.secondaryData.Keys) {
            $toString.Add("$key : $($this.secondaryData.$key)")
        }
        return [string]::Join("`n", $toString)
    }
}
class Secondary {
    [Collections.Specialized.OrderedDictionary]$secondaryData
    Secondary () {
        $this.secondaryData = [Ordered]@{
            'A' = 'a'
            'B' = 'b'
            'C' = 'c'
            'D' = 'd'
            'E' = 'e'
        }
    }
    [Void] Update ([Tokens]$tokensReference) {
        $tokensReference.SetToken('secondaryData', 'A', 'A')
        $tokensReference.SetToken('secondaryData', 'B', "$($tokensReference.GetToken('secondaryData', 'A')) and $($tokensReference.GetToken('secondaryData', 'B'))")
    }
    [String] ToString () {
        [Collections.Generic.List[String]]$toString = [Collections.Generic.List[String]]::new()
        foreach ($key in $this.secondaryData.Keys) {
            $toString.Add("$key : $($this.secondaryData.$key)")
        }
        return [string]::Join("`n", $toString)
    }
}
CLS
$secondary = [Secondary]::new()
$tokens = [Tokens]::new($secondary)
$secondary.Update($tokens)
Write-Host "$($tokens.ToString())"
This is working exactly as expected. However, the idea of circular dependency injection has my hair standing on end; it could be a real problem, or at least a code smell. So, my question is: am I on the right track, or is this a dead end and I just haven't found that end yet? Given that PowerShell isn't "fully" object oriented yet, I imagine there could be some uniquely PS-related issues, and everything I have found searching for "PowerShell circular dependency" talks about removing them. So far I haven't found anything about when it is appropriate and how to do it well.
And, assuming it is a valid approach, is there anything obvious in this simplified implementation that could lead to problems or could be better done some other way?
EDIT: OK, so perhaps I need to refine my vocabulary a bit too. I was thinking circular dependency since Secondary is a dependency (or member, perhaps) of Tokens, and then I update Secondary from inside Secondary, via the method in Tokens, while referencing data that lives in Secondary.
To clarify (I hope) the ultimate goal, these lookups are for program specific data, which I have in XML files. So, for example the data file for Autodesk Revit 2021 would include these three items
<item id="GUID" type="literal">{7346B4A0-2100-0510-0000-705C0D862004}</item>
<item id="installDataKey" type="registryKeyPath">HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall\[Revit 2021~GUID]</item>
<item id="displayName" type="registryPropertyValue" path="installDataKey">Revit 2021</item>
In actual use I would want to get the DisplayName property found in the key defined in <item id="installDataKey">, and if the value matches the value in <item id="displayName"> then I might also look for the value of the DisplayVersion property in the same key, and make decisions based on that. But because there are new versions every year, and 20+ different software packages to address, managing these data files is a pain. So I want to validate the data I have in the files against a machine that actually has the software installed, to be sure my data is correct. Autodesk is famous for changing things for no good reason, and often for some very customer-hostile reasons. So, things like referencing the GUID as data and reusing it as a token, i.e. the [Revit 2021~GUID] above, save effort. During the validation process only, I would want to set the GUID, then do the standard token replacement to convert HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall\[Revit 2021~GUID] to HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall\{7346B4A0-2100-0510-0000-705C0D862004}, and, should that key actually be found, use it to validate about 20 other registry and file paths as well as the actual value of DisplayName. If everything validates I will sign the XML; in actual use, signed XML will basically treat everything as a literal and no validation is done, or rather it was prevalidated.
But before that point, a reference to [product~installDataKey] when the current product is Revit 2021 would resolve first to HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall\[Revit 2021~GUID], then to HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall\{7346B4A0-2100-0510-0000-705C0D862004}, at which point the code could use it as a registry path and see if Revit 2021 is in fact installed.
So, 99.9% of the time, I would instantiate Tokens with a constructor that just includes the full dataset and move on. But on the .1% of occasions where I am validating the data itself, I need to be able to read the xml, set the value for that GUID and immediately use the lookup to validate that Autodesk hasn't done something stupid like move some data out of the GUID key and into a secondary key. They have done that before.
I also might want to replace HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall with a token like [windows~uninstallKey] just to make life easier there too.
Hopefully that makes some sense. It's a mess to be sure, but anything to do with Autodesk is a mess.
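To make the token-replacement mechanics above concrete, here is a minimal sketch of the kind of recursive resolution described: a loop that keeps replacing [library~item] tokens until none remain. This is an illustration only, not the asker's actual implementation; the function name Resolve-TokenString and the nested-hashtable lookup are assumptions.

```powershell
# Hypothetical sketch: resolve [library~item] tokens recursively until none remain.
function Resolve-TokenString {
    param(
        [string]$Text,
        [hashtable]$Lookup   # e.g. @{ 'Revit 2021' = @{ 'GUID' = '{...}' } }
    )
    $pattern = '\[(?<lib>[^~\]]+)~(?<item>[^\]]+)\]'
    while ($Text -match $pattern) {
        $lib   = $Matches['lib']
        $item  = $Matches['item']
        $value = $Lookup[$lib][$item]
        if ($null -eq $value) { throw "Unresolved token [$lib~$item]" }
        # Replace the literal token text with its looked-up value
        $Text = $Text -replace [regex]::Escape("[$lib~$item]"), $value
    }
    return $Text
}

$lookup = @{
    'Revit 2021' = @{
        'GUID'           = '{7346B4A0-2100-0510-0000-705C0D862004}'
        'installDataKey' = 'HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall\[Revit 2021~GUID]'
    }
}
# Resolves in two passes: installDataKey -> path containing [Revit 2021~GUID] -> full literal path
Resolve-TokenString -Text '[Revit 2021~installDataKey]' -Lookup $lookup
```

Note the nested token resolves on the second pass of the while loop, which is what makes the [Revit 2021~GUID] indirection in the XML work.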

Related

Recursively searching / comparing nested Hashtables?

Let me start by apologising that the below data structure probably isn't written correctly. I dynamically created the hashes in the code and I'm not very good at trying to represent what gets created.
$Sailings = @{
    'Arrivals' = @{
        $DynamicKey_booking_Ref = @{
            'GoingTo' = 'Port1';
            'Scheduled' = '09:05';
            'Expected' = '10:09';
            'Status' = 'Delayed'
        }
    };
    'Departures' = @{
        $DynamicKey_booking_Ref = @{
            'ArrivingFrom' = 'Port1';
            'Scheduled' = '09:05';
            'Expected' = '09:05';
            'Status' = 'OnTime'
        }
    }
}
I typically access the data like this (hopefully that confirms the structure I'm using):
$Sailings.Arrivals.PDH083.GoingTo which returns "Port1"
$Sailings.Arrivals.PDH083.Scheduled which returns "09:05"
where PDH083 in this example is a dynamically created key based on a booking ref.
What I am trying to do is compare this structure with another identical structure but with potentially different values. E.g. are these two elements the same?
$Sailings.Arrivals.PDH083.GoingTo = "Port1"
$Output.Arrivals.PDH083.GoingTo = "Port5555"
If they're not the same capture the difference and the path/key that was different. Then report on them at the end.
What I'm struggling to write is a recursive loop that can walk down to the last element and then compare it to the $output. While my hashes are fixed now, I'd like to allow for the possibility that more nested hashes might be added lower down at a later date. Is this something that can be done easily?
I've used
($Sailings.Arrivals.keys | ? {$Output.Arrivals.keys -notcontains $_})
to show missing keys. I can see I can use .values, but it's still an element at a time, and I just can't fathom a similarly efficient way of doing this for the values.
Here is a semi recursive example, which returns string representations of the paths for all the leaves in the tree. Hope it gives you ideas:
$Sailings = @{
    'Arrivals' = @{
        'PDH083' = @{
            'GoingTo' = 'Port1'
            'Scheduled' = '09:05'
            'Expected' = '10:09'
            'Status' = 'Delayed'
        }
    }
    'Departures' = @{
        'PDH083' = @{
            'ArrivingFrom' = 'Port1'
            'Scheduled' = '09:05'
            'Expected' = '09:05'
            'Status' = 'OnTime'
        }
    }
}
function Roam($arg, $result="") {
    if(!($arg -is [Hashtable])) {
        return "$result/$arg"
    }
    foreach($pair in $arg.GetEnumerator()) {
        Roam $pair.value "$result/$($pair.key)"
    }
}
Roam $Sailings
The if is the stop condition, the first thing you should ask when designing recursive operations: when do I have a result?
Suppose you are standing at the roots of a huge tree, and you have been given the task of mapping routes to every single leaf of that tree: every turn to the left or right on the branches from the trunk to a leaf. Overwhelming, eh? But think instead about finding the route to a single leaf, whichever one from the myriad.
You start climbing and reach the first branch. Will you turn to the left or right? Doesn't matter. You decide to take the left branch, and write down on a piece of paper left. You reach the next branch, and for the fun of it turn right, writing down right. After a few branches you are carrying notes like left, right, right, left, right and so on, until (because trees don't usually produce loops) eventually there is no more climbing, only a leaf in front of you.
You've done it! You mapped the whole (and only) path to this single leaf, and can now jump down (hoping the tree is not too tall) and present the paper containing the route to your adoring friends. Reaching the leaf was the stop condition.
But how about the other leaves? Imagine you have the weird superpower of cloning yourself. When you reach a branch, you clone yourself and the paper you are carrying. If you turn right, you add that right turn to your notes, while the clone goes left and writes down left instead. At the next branch you again clone yourself and the paper, and write down your chosen direction, and the clone does the same for its direction. You don't have to worry about the clones (maybe you are a clone yourself!); just repeat until you've reached your one leaf and can jump down.
The $result argument is that piece of paper; originally it doesn't come from anywhere, it's empty.
Because all the leaves in your data structure are strings, you could also write the if statement like:
if($arg -is [String])
How about GetEnumerator? Hashtables are normally not ordered in PowerShell; we can't pick the first or second or sixth pair. But your data structure branches in more directions than left and right, so the Hashtable has to be enumerated into a sequence, as a crowd of people into a queue, so we can use a foreach loop to send our clones down their paths. (We could replace that loop with recursion, but let it be.)
So in the function call Roam $pair.value "$result/$($pair.key)", the first argument is the branch ahead, and the second is the piece of paper to which we just added the current direction.
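To connect this back to the original comparison question: one way (a sketch, with illustrative names, not the only approach) is to collect leaf paths and values into a dictionary per tree, then diff the two dictionaries.

```powershell
# Sketch: collect "path -> leaf value" into a hashtable, then compare two trees.
function Get-LeafMap($arg, $path = "", $map = @{}) {
    if ($arg -isnot [Hashtable]) {
        # Stop condition: a leaf; record its full path and value
        $map[$path] = $arg
        return $map
    }
    foreach ($pair in $arg.GetEnumerator()) {
        [void](Get-LeafMap $pair.Value "$path/$($pair.Key)" $map)
    }
    return $map
}

$a = @{ Arrivals = @{ PDH083 = @{ GoingTo = 'Port1' } } }
$b = @{ Arrivals = @{ PDH083 = @{ GoingTo = 'Port5555' } } }

$mapA = Get-LeafMap $a
$mapB = Get-LeafMap $b
foreach ($key in $mapA.Keys) {
    if ($mapB[$key] -ne $mapA[$key]) {
        "$key differs: '$($mapA[$key])' vs '$($mapB[$key])'"
        # -> /Arrivals/PDH083/GoingTo differs: 'Port1' vs 'Port5555'
    }
}
```

Because the maps are flat, keys missing from either side fall out of the same comparison, and deeper nesting added later needs no code changes.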
Recommendation: you don't need to work through all of it; even the first few chapters are enlightening. Structure and Interpretation of Computer Programs https://mitpress.mit.edu/sicp/full-text/book/book.html

SelectNodes in XPathNodeList

Given an XPathNodeList derived with something like this
$settingsNodes = $temp.xml.SelectNodes('/Settings/*')
And given a particular node Name or ID, I can iterate the XPathNodeList, test for a match and increment a counter. However, I wonder if there is something more elegant, along the lines of
$count = $settingsNodes.SelectNodes("//$nodeName")
That doesn't work because an XPathNodeList doesn't have a SelectNodes method, and I can't seem to find anything that does work. Googling XPathNodeList SelectNodes returns all sorts of references to SelectNodes as a method of XmlNode, but nothing on XPathNodeList.
My specific use means I am almost certainly never looping through more than a few hundred to perhaps a thousand nodes, so maybe it really doesn't matter; it just seems like there is probably a more graceful solution and I just haven't found it.
EDIT: For additional context.
In one condition I might have this XML and I just want to catch and log the duplicate UserLogFilePath node.
<Settings>
<JobsXML>[Px~Folder]\Resources\jobs.xml</JobsXML>
<JobLogFilePath>[Px~Folder]\Logs</JobLogFilePath>
<JobLogFileName>[Px~StartTime] Job [Px~Job][Px~Error]</JobLogFileName>
<MachineLogFilePath>[Px~Folder]\Logs</MachineLogFilePath>
<MachineLogFileName>[Px~Now] [Px~Action] [Px~Set] on [Px~Computer][Px~Error][Px~Validation]</MachineLogFileName>
<ResetLogFilePath>[Px~Folder]\Logs</ResetLogFilePath>
<ResetLogFileName>[Px~Now] [Px~Action] on [Px~Computer][Px~Error][Px~Validation]</ResetLogFileName>
<UserLogFilePath>[Px~Folder]\Logs</UserLogFilePath>
<UserLogFilePath>C:\Program Files</UserLogFilePath>
<UserLogFileName>[Px~User] on [Px~Computer]</UserLogFileName>
<UserContextMember></UserContextMember>
<UserContextNotMember>Administrators</UserContextNotMember>
</Settings>
If there are no duplicates the entire Settings node is imported into my XML variable in memory.
Later, I might have this XML and I want to catch duplicate IDs, both in and outside of any Product_Group.
<Definitions>
<Products>
<Product_Group id="Miscellaneous">
<Product id="ADSK360">
<DefaultShortcut>Autodesk 360.lnk</DefaultShortcut>
<ProgramDataFolder>C:\ProgramData\Autodesk\Autodesk ReCap</ProgramDataFolder>
<ProgramFolder>C:\Program Files\Autodesk\Autodesk Sync</ProgramFolder>
<ShortcutPath>C:\ProgramData\Microsoft\Windows\Start Menu\Programs\Autodesk</ShortcutPath>
</Product>
<Product id="ADSK360">
<DefaultShortcut>Autodesk 360.lnk</DefaultShortcut>
<ProgramDataFolder>C:\ProgramData\Autodesk\Autodesk ReCap</ProgramDataFolder>
<ProgramFolder>C:\Program Files\Autodesk\Autodesk Sync</ProgramFolder>
<ShortcutPath>C:\ProgramData\Microsoft\Windows\Start Menu\Programs\Autodesk</ShortcutPath>
</Product>
</Product_Group>
<Product id="ADSK360">
<DefaultShortcut>Autodesk 360.lnk</DefaultShortcut>
<ProgramDataFolder>C:\ProgramData\Autodesk\Autodesk ReCap</ProgramDataFolder>
<ProgramFolder>C:\Program Files\Autodesk\Autodesk Sync</ProgramFolder>
<ShortcutPath>C:\ProgramData\Microsoft\Windows\Start Menu\Programs\Autodesk</ShortcutPath>
</Product>
</Products>
</Definitions>
If there are no duplicates the Product_Groups are eliminated (it's just a convenience for users managing their XML) and all the Products are imported into the Products node in memory.
Subsequent product files are checked first for internal duplicates, and if none are found checked for duplicates already in the main XML, and if none found the file's XML is merged. This repeats for potentially a hundred or more total files in some cases.
The current loop-based approach is inelegant, especially for the internal duplicates test. I make an XPathNodeList, say of all the Products, in the candidate XML. This is where SelectNodes is nice, because it can find Products whether or not they are in a Product_Group. Then I loop through each product and test it against the whole XPathNodeList. Ugly, because a list of only 10 nodes means 100 times through the loop. When testing the candidate against the final XML it's more efficient, but still ugly.
EDIT #2:
Taking a stab at using Select, I have this correctly finding duplicate nodes.
$settingsNodes = $temp.xml.SelectNodes('/Settings/*')
Write-Host "$($settingsNodes.count)"
Write-Host "$(($settingsNodes | select -unique).count)"
But how does one find only duplicate IDs? Better yet, duplicate IDs with the same node name, since a different node name but the same ID would actually not be a duplicate. -Unique is a switch, I see, so my guess is I am about to learn something more about pipelining, because I need to extract the IDs to pipe to Select -Unique, which isn't Where-Object. How does one just pull the IDs in this case?
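One hedged possibility, sketched on a trimmed version of the Settings example above: Group-Object can bucket the nodes by name (or by a computed name-plus-id key) and keep only the groups with more than one member. The sample XML here is illustrative, not the full file.

```powershell
# Sketch: find duplicate element names by grouping instead of looping.
[xml]$xml = @"
<Settings>
  <UserLogFilePath>A</UserLogFilePath>
  <UserLogFilePath>B</UserLogFilePath>
  <UserLogFileName>C</UserLogFileName>
</Settings>
"@

# Group the child nodes by element name; any group with Count > 1 is a duplicate
$dupes = $xml.SelectNodes('/Settings/*') |
    Group-Object -Property Name |
    Where-Object { $_.Count -gt 1 }

$dupes.Name   # -> UserLogFilePath
```

For the Products case, grouping on a computed key should (untested assumption) catch same-name-same-id duplicates while leaving same-id-different-name nodes alone, e.g. Group-Object { "$($_.Name)|$($_.id)" }.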
OK, gut test here. I think maybe I am trying to be a bit too clever when in fact there is a much easier solution. To wit...
$settingsNodes = $temp.xml.SelectNodes('/Settings/*')
$unique = $duplicate = @()
foreach ($node in $settingsNodes) {
    if ($unique -contains $node.Name) {
        $duplicate += $node.Name
    } else {
        $unique += $node.Name
    }
}
Write-Host "u: $unique"
Write-Host " "
Write-Host "d: $duplicate"
Swap Name for ID and it works for that too. The SelectNodes will take care of eliminating the Product_Group. And similarly I can build an array of the nodes in Final to test against with the now unique list of candidate nodes.
So, am I missing something that is going to bite me? Or should I just go ahead and kick myself? ;)

How to modify internalname in Sharepoint 2010 list field

I need to change the internal name for a SharePoint 2010 list item field. For some reason, our migration software renamed it during the 2007->2010 migration, and this field is referenced by other processes, so we need the internal name back to the original. This field exists in over 200 lists in the migrated site, so we need a means to do this programmatically; PowerShell preferred.
$newInternalName = "yourInternalFieldName"
$displayName = "oldDisplayName"
$SPWebApp = Get-SPWebApplication "http://yourwebapp"
foreach ($site in $SPWebApp.Sites)
{
    foreach ($web in $site.AllWebs)
    {
        foreach ($currList in $web.Lists)
        {
            foreach ($fld in $currList.Fields) # you could potentially use a different command here to get the field more efficiently
            {
                if ($fld.Title -eq $displayName)
                {
                    # The boolean in the parameter list is for required/non-required field
                    $currList.Fields.Add($newInternalName, [Microsoft.SharePoint.SPFieldType]::Text, $false) | Out-Null
                    # I'm assuming you want to keep the display name the same
                    $newFld = $currList.Fields.GetFieldByInternalName($newInternalName)
                    $newFld.Title = $displayName
                    $newFld.Update()
                    foreach ($item in $currList.Items)
                    {
                        $item[$newInternalName] = $item[$displayName]
                        $item.Update()
                    }
                    # optional: delete the unwanted column
                    # $currList.Fields.Delete($displayName)
                    # $currList.Update()
                    break # since you've already fixed the column for this list, no need to keep going through the fields
                }
            }
        }
        $web.Dispose()
    }
    $site.Dispose()
}
As far as I know, you cannot change the internal name of a field once it has been created, so here I create a new field with the correct internal name and copy over the values. I don't have access to a server with PowerShell today so I wasn't able to test this, but it should be very close to what you need. You may need to tweak it a bit based on what type of field you're dealing with, whether you want to use a different Add overload, and/or whether you want to delete the old field.
The enumeration to select the type of the field in the Add function is defined here:
http://msdn.microsoft.com/en-us/library/office/microsoft.sharepoint.spfieldtype(v=office.14).aspx
There are a couple of overloads for the Add function, so if the one I used doesn't work for you, you might want to use one of the others:
http://msdn.microsoft.com/en-us/library/office/aa540133(v=office.14).aspx

Perl access to Amazon marketplace offers via the product API

I need to use the Amazon Product API via perl to get a list of third party new and used (marketplace) offers given an ASIN. I need to be able to see prices and whether each offer is fulfilled by amazon (eligible for prime/super saver shipping). I've scoured the Net::Amazon module, and I don't see any way to do this.
Has anyone done anything similar?
So I was looking in to this myself, and it looks like items offered by Amazon have the attribute 'IsEligibleForSuperSaverShipping' => '1' in the offers. So this would be something like:
my $ua = Net::Amazon->new(
    associate_tag => 'derpderp',
    token         => 'morederp',
    secret_key    => 'herpaderp',
);
my $rsp = $ua->search( asin => 'B000FN65ZG' );
if ($rsp->is_success()) {
    if (my $o = $rsp->{xmlref}->{Items}->{Offers}) {
        foreach my $offer (keys %{ $o }) {
            if ($o->{$offer}->{IsEligibleForSuperSaverShipping}) {
                # This offer is fulfilled by Amazon.com
            } # if
        } # foreach
    }
}
else {
    die "Error: ", $rsp->message(), "\n";
}
note, as of writing (04-Nov-13), that ASIN is fulfilled through Amazon; it may not be in the future.
The problem here is that Net::Amazon generates its accessors magically (and not using Class::Accessor, but it's old, so we can forgive it…). I am not sure what the correct accessor is for the individual offers within the {Items} element above. Reaching into your object is kind of fraught with peril, but in this case, finding the right accessor should not be so hard (given it is automagically generated), and barring that, I think you can feel comfortable reaching right into the object.
Also, reaching out to the module's author, Mike Schilli, or its current maintainer, Christopher Boumenot, might be worth doing, especially if this is something that's in the result from Amazon consistently and could just be added to the API. The problem with this is that the return from Amazon is kind of variable. Quoting the perldoc,
Methods vary, depending on the item returned from a query. Here's the most common ones. They're all accessors, meaning they can be used like Method() to retrieve the value or like Method($value) to set the value of the field.
This makes it tricky to assume that you can test the return for super-saver-shipping-ness because it may just not have that key in the structure returned.

HOP::Lexer with overlapping tokens

I'm using HOP::Lexer to scan BlitzMax module source code to fetch some data from it. One particular piece of data I'm currently interested in is a module description.
Currently I'm searching for a description in the format of ModuleInfo "Description: foobar" or ModuleInfo "Desc: foobar". This works fine. But sadly, most modules I scan have their description defined elsewhere, inside a comment block. Which is actually the common way to do it in BlitzMax, as the documentation generator expects it.
This is how all modules have their description defined in the main source file.
Rem
bbdoc: my module description
End Rem
Module namespace.modulename
This also isn't really a problem. But the line after the End Rem also contains data I want (the module name). This is a problem, since the two token definitions now overlap: after the first one has been matched, the lexer continues from where it left off (the position in the content being scanned), meaning the token for the module name won't match anything.
Yes, I've made sure my order of tokens is correct. It just doesn't seem possible (somewhat understandable) to move the cursor back a line.
A small piece of code for fetching the description from within a Rem-End Rem block which is above a module definition (not worked out, but working for the current test case):
[ 'MODULEDESCRIPTION',
    qr/[ \t]*\bRem\n(?:\n|.)*?\s*\bEnd[ \t]*Rem\nModule[\s\t]+/i,
    sub {
        my ($label, $value) = @_;
        $value =~ /bbdoc: (.+)/;
        [$label, $1];
    }
],
So in my test case I first scan for a single comment, then the block above (MODULEDESCRIPTION), then a block comment (Rem-End Rem), module name, etc.
Currently the only solution I can think of is to set up a second lexer only for the module description, though I would prefer not to. Is what I want even possible at all with HOP::Lexer?
Source of my Lexer can be found at https://github.com/maximos/maximus-web/blob/develop/lib/Maximus/Class/Lexer.pm
I've solved it by adding (a slightly modified version of) the MODULEDESCRIPTION token. Inside the subroutine I simply filter out the module name and return an arrayref with four elements, which I later iterate over to create a nice usable array of tokens and their values.
Solution is again at https://github.com/maximos/maximus-web/blob/develop/lib/Maximus/Class/Lexer.pm
Edit: Or let me just paste the piece of code here
[ 'MODULEDESCRIPTION',
    qr/[ \t]*\bRem\R(?:\R|.)*?\bEnd[ \t]*Rem\R\bModule[\s\t]\w+\.\w+/i,
    sub {
        my ($label, $value) = @_;
        my ($desc) = ($value =~ /\bbbdoc: (.+)/i);
        my ($name) = ($value =~ /\bModule (\w+\.\w+)/i);
        [$label, $desc, 'MODULENAME', $name];
    }
],