Dealing with files in PSUnit - PowerShell

I'm writing a PowerShell script which is going to go out into a client's current source control system and do a mass rename on all of the files so that they follow a new naming convention.
Being the diligent TDD developer that I am, I started by putting together a PSUnit test case. At first I was thinking that I would pass a string for the file name into my function (along with a couple of other relevant parameters) and then return a string. Then it occurred to me that I am going to need to break the file name apart into an extension and a base name. Since System.IO.FileInfo already has that functionality, I thought: why not just pass in a file object instead of a string?
If I do that, however, I don't see how I can write my PSUnit test without it being reliant on an external resource (in this case, the file must exist for me to get the FileInfo object - or does it?).
Is there a "clean" way to handle this? How are others approaching these issues?
Thanks for any help or advice!

My suggestion is: be pragmatic and pass in the base name and the extension as two separate strings. For convenience, you can provide an overload that accepts a FileInfo.
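For illustration, a minimal sketch of that shape; the function name Get-NewFileName and the lower-case/hyphen convention are assumptions, not anything from the question:

function Get-NewFileName {
    param(
        [Parameter(Mandatory=$true)][string]$BaseName,
        [Parameter(Mandatory=$true)][string]$Extension
    )
    # Hypothetical convention: lower-case everything, spaces become hyphens.
    ($BaseName.ToLower() -replace ' ', '-') + $Extension
}

# Convenience wrapper that unpacks a FileInfo. Note that constructing
# [System.IO.FileInfo]'C:\temp\My File.TXT' does not require the file to
# exist on disk, so even this path can be tested without touching the
# file system.
function Get-NewFileNameFromFile {
    param([Parameter(Mandatory=$true)][System.IO.FileInfo]$File)
    Get-NewFileName -BaseName ([System.IO.Path]::GetFileNameWithoutExtension($File.Name)) `
                    -Extension $File.Extension
}

A PSUnit test can then assert on plain strings - for example, that Get-NewFileName -BaseName 'My File' -Extension '.txt' returns 'my-file.txt' - with no external resource involved.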


How to protect a PowerShell file and call a single function

I've been having this problem for a while now, and Google has its limits.
I'm writing a PowerShell file that contains several generic functions.
I use the functions in various scripts, and now I want to let other people at my work use them as well.
The problem is that, due to sensitive operations, I want to lock and protect the script (compile it to a DLL, EXE, etc.).
How do I create a PowerShell library like a C# DLL?
One option I tried, but could not figure out how to take further, was to compile the script to an executable (.exe) using PowerGUI, but then I cannot access the functions in it, let alone pass parameters to them.
Hope you understood me :)
Thank you.
You don't. Rather than trying to obscure this information (if you compile the scripts, they can be decompiled and your "protected" resources will no longer be protected), remove the sensitive values entirely and make them parameters of your functions. This both protects your "sensitive" data and makes the code much more reusable.
You can then package your functions into a module.
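A minimal sketch of that refactoring; the function and parameter names here are illustrative assumptions:

# Before: server names and credentials were hard-coded in the script body.
# After: the caller supplies them, so nothing sensitive lives in the file.
function Invoke-SensitiveOperation {
    param(
        [Parameter(Mandatory=$true)][string]$Server,
        [Parameter(Mandatory=$true)][System.Management.Automation.PSCredential]$Credential
    )
    # ... the generic logic, with no embedded secrets ...
}

Save the functions in a file such as MyTools.psm1, and consumers can load and call them directly:

Import-Module .\MyTools.psm1
Invoke-SensitiveOperation -Server 'prod01' -Credential (Get-Credential)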

Object type changes on import/export in PowerShell

I have been banging up against this for several hours now and I'm hoping that someone can help point me in the right direction.
I'm developing a few custom PowerShell cmdlets, and one of the supporting classes is a User object. Several of my cmdlets either emit or consume List<User>.
This has worked very well so far, but I hit a serious snag when I tried to serialize one of the lists. The export seems to work fine; I look at the file (CSV, CliXml, etc.) and it looks the way I expect it to, with type User. However, when I re-import it, the type seems to change to CSV:Class.User or Deserialized.Class.User. Obviously, this causes a problem when it's fed into a cmdlet that expects the standard User class.
Is there a good way to fix this? I suspected that changing my cmdlets to expect some interface instead of List<User> would probably do the trick, but I can't figure out what interface that should be. And I can find no switch on the import methods to specify class names.
Any help would be greatly appreciated.
Welcome to PowerShell's extended type system. :-) BTW, you will also get back state-only deserialized objects when your objects are passed across a remoting session.
You can query the PSObject's TypeNames collection, looking for Deserialized.Class.User, to determine whether you have a deserialized version of your type. The same goes for the CSV version. You could create a couple of factory methods or clone-style constructors on your User class that take a PSObject that is some flavor of User (CSV or Deserialized) and create a regular Class.User object.
Just be aware that certain operations may not make sense in the deserialization case. For instance, using a Process object as an example, you can call Kill on a Process object, and if the object came from the same machine that would work (assuming correct privileges). However, if you were to call Kill on a Process object from another machine, that's not going to work - hence the special deserialized objects that are primarily just data (property) containers.
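A sketch of that detection-and-rehydration idea; Class.User and its Name/Email properties are assumptions standing in for your real type:

function ConvertTo-User {
    param([Parameter(Mandatory=$true)][psobject]$InputObject)

    # A live instance can pass straight through.
    if ($InputObject -is [Class.User]) { return $InputObject }

    # Deserialized and CSV variants keep the original name in TypeNames,
    # e.g. 'Deserialized.Class.User' or 'CSV:Class.User'.
    if ($InputObject.PSObject.TypeNames | Where-Object { $_ -match 'Class\.User$' }) {
        $user = New-Object Class.User
        $user.Name  = $InputObject.Name    # copy the state back over
        $user.Email = $InputObject.Email
        return $user
    }

    throw "Input is not a User-shaped object."
}

Cmdlets that consume List<User> can then run their input through a converter like this before doing any work that requires the real type.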

Loading parameter files for different data sets

I need to analyse several sets of data which are associated with different parameter sets (one single set of parameters for each set of data). I'm currently struggling to find a good way to store these parameters such that they are readily available when analysing a specific dataset.
The first thing I tried was saving them in a script file parameters.m in the data directory and loading them with run([path_to_data,'/parameters.m']). I understand, however, that this is not good coding practice, and it also gave me scoping problems (I think), as changes in parameters.m were not always reflected in my workspace variables. (Workspace variables were only updated after clear all and rerunning the code.)
A cleaner solution would be to define a function parameters() in each data directory, but then I would need to add the directory to the search path. I also fear I might run into namespace collisions if I don't give the functions unique names. On the other hand, using unique names is not very practical...
Is there a better solution?
So define a struct or cell array called parameters and store it in the data directory it belongs in. I don't know what your parameters look like, but ours might look like this:
parameters.relative_tolerance = 10e-6;
parameters.absolute_tolerance = 10e-6;
parameters.solver_type = 3;
% ... and so on ...
and I can write
save('parameter_file', 'parameters')
or even
save('parameter_file', '-struct', 'parameters', 'field1', ..., 'fieldN')
The online help explains how to use -struct to store fields from a structure as individual variables, should that be useful to you.
Once you've saved the parameters, you can read them back with the load command; load('parameter_file') restores the parameters variable into the workspace.
To sum up: create a variable (most likely a struct or cell array) called parameters and save it in the data directory for the experiment it refers to. You then have all the usual MATLAB tools for reading, writing and investigating the parameters as well as the data. I don't see a need for a solution more complicated than this (though your parameters may be complicated themselves).

How can I get the list of properties that MSBuild was invoked with?

Given this command:
MSBuild.exe build.xml /p:Configuration=Live /p:UseMerge=true /p:EnableUpdateable=false
how can I form a string like this in my build script:
UseMerge=true;EnableUpdateable=true
where I might not know which properties were used at the command line.
What are you going to do with the list?
There's no built-in "properties that came via the command line" feature, a la splatting in PowerShell 2.0.
Remember that properties can come from environment variables and/or other scripts.
Also, you stripped one of the params out in your example.
In general, if one is trying to chain to another command, one uses defaulting (Condition attributes on elements in PropertyGroups) and validation (Messages conditional on the presence of options), and then either creates a new property or embeds the params you want to pass into a string.
Here's hoping someone has a nice, neat example of a more general way to do this, but I doubt it.
As covered in http://www.simple-talk.com/dotnet/.net-tools/extending-msbuild/, one can dump out the parameters passed by using /v:diag on the command line (but that's obviously not what you're after).
Have a look in the Common.targets files - you'll find lots of cases of chaining that involve manually building up lists to pass on to subservient tasks.
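Since MSBuild itself won't report which properties came from the command line, one workaround is to build the /p: string on the calling side and reuse it. A sketch in PowerShell, assuming MSBuild.exe is on the PATH and using the property names from the question:

# Keep the command-line properties in one table so the same data can be
# passed to MSBuild and also logged or forwarded elsewhere.
$props = @{ Configuration = 'Live'; UseMerge = 'true'; EnableUpdateable = 'false' }
$propString = ($props.GetEnumerator() | ForEach-Object { "$($_.Key)=$($_.Value)" }) -join ';'

& MSBuild.exe build.xml "/p:$propString"
Write-Host "Invoked with: $propString"   # the exact list you were after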

How to deal with changing feature and product names in source code?

What is a good strategy for dealing with changing product and feature names in source code. Here's the situation I find myself in over and over again (most of you can relate?)...
Product name starts off as "DaBomb"
Major features are "Exploder", "Lantern" and "Flag".
Time passes, and the Feature names are changed to "Boom", "Lighthouse" and "MarkMan"
Time passes, and the product name changes to "DaChronic"
...
...
Blah, blah, blah...over and over and over
And now we have a large code base with 50 different names sprinkled around the directory tree and source files, most of which are obsolete. Only the veterans remember what each name means, its full etymological history, etc.
What is the solution to this mess?
Clarification: I don't mean the names that customers see, I mean the names of directories, source files, classes, variables, etc. that the developers see where the changing product and feature names get woven into.
Given your clarification that you "don't mean the names that customers see, [you] mean the names of directories, source files, classes, variables, etc. that the developers see", yeah, this can be an annoying problem.
The teams I've been on have coped best when we've had a policy of always using only one name for each thing in the code base. If the name changes later on, we either stay with the old name in the code or migrate all instances of the old name to the new name. The important thing is never to start using the new name in the code until all instances of the old name have been migrated. That way you only ever have to keep two names for something in your head: the "old name", used in the code, and the name everyone else uses.
We've also often chosen a very generic/descriptive name for things when starting out if we know the "brand name" is likely to change.
I consider renaming to better naming conventions just another form of refactoring. Create a branch, perform the renames, run unit/integration tests, commit, merge, repeat. It's all about process control to keep consistency in the project.
The solution to the mess is to not create it in the first place. Once a code path is named, there's rarely a good reason to change it and never a good reason to use a new name alongside the old one. When "Exploder" becomes "Boom", you have two choices: Either keep using Exploder exclusively, and never mention Boom anywhere, or change all instances of Exploder to Boom and then continue on using Boom exclusively and never mention Exploder again.
If you're using both Exploder and Boom in the same code base, you're doing it wrong.
Also, I know you clarified that you're not talking about the user-visible names, but, if you start out working with your own internal names which are relevant to what the code does and completely independent of what marketing wants to call the product/feature, then this is much less likely to become an issue. If you're already referring to Exploder internally as TNT, then what difference does it make if Exploder gets changed to Boom?
How do you deal with localization? Same thing: same method.
We use an internal and an external name. It can be as simple as a static variable definition like
public static final String EXPLODER = "Boom";
And in code you'll always use the reference to EXPLODER. The same goes for path names and the like - hard-coding those paths in different places is a no-go anyway. If some guy starts digging through internal stuff (like JS sources or ini files or whatever), who cares if they discover Exploder?
Just use internal names, and ignore changes to marketing/official names: https://softwareengineering.stackexchange.com/a/208578/55472.