I am trying to get something like this to work but cannot figure it out.
(Get-Item env:userprofile\AppData\Local\Microsoft\OneDrive\OneDrive.exe).VersionInfo.FileVersion
I am getting an error that it does not exist, though I know it does.
If I run the same thing with a known logged-in user, like below,
(Get-Item c:\users\jonesb\AppData\Local\Microsoft\OneDrive\OneDrive.exe).VersionInfo.FileVersion
I get the version information I am looking for. I will be running this script on thousands of machines, and I don't know who will be logged in to each machine. Please advise.
env:userprofile refers to an item on the Env: PSDrive, which you can access with cmdlets like Get-Item, but it is not expanded to the variable's value when used inside a path. What you need to do is use the variable syntax $env:userprofile:
(Get-Item $env:userprofile\AppData\Local\Microsoft\OneDrive\OneDrive.exe).VersionInfo.FileVersion
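For illustration, here is the difference between addressing the Env: drive as a path and using the variable syntax:
# Get-Item on the Env: drive returns the environment variable as an item
# (a name/value pair), not a filesystem path:
Get-Item Env:USERPROFILE

# The variable syntax returns only the value, which can then be used to build a path:
$env:USERPROFILE
Join-Path $env:USERPROFILE 'AppData\Local\Microsoft\OneDrive\OneDrive.exe'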
Is there a way to find a list of script files that reference a given module (.psm1)? In other words, get all files that, in their script code, use at least one of the cmdlets defined in the module.
Because of module auto-loading in PowerShell 3.0 and above, most of my script files don't have an explicit Import-Module MODULE_NAME anywhere in the code, so I can't search on that text.
I know I can use Get-ChildItem -Path '...' -Recurse | Select-String 'TextToSearchFor' to search for a particular string inside files, but that's not the same as searching for any reference to any cmdlet of a module. I could search for every single cmdlet in my module, but I was wondering if there is a better way.
Clarification: I'm only looking inside of a controlled environment where I have all the scripts in one file location.
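For reference, the brute-force search mentioned above could be scripted roughly like this; it is only a sketch, and the module name and script folder are placeholders:
# Collect the names of all commands exported by the module (placeholder name),
# build one regex that matches any of them, and search every script for it.
$commands = (Get-Command -Module MyModule).Name
$pattern  = ($commands | ForEach-Object { [regex]::Escape($_) }) -join '|'

Get-ChildItem -Path 'C:\Scripts' -Filter *.ps1 -Recurse |
    Select-String -Pattern $pattern -List |
    Select-Object -ExpandProperty Path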
Depending on the scenario, the call stack could be interesting to play around with. In that case you need to modify the functions you want to find out about so that they gather information about the call stack at runtime and log it somewhere. Over time you might collect enough logs to make some good assumptions.
function yourfunction {
    # Get-PSCallStack returns the current call stack: index 0 is this
    # function, index 1 is whoever called it.
    $stack = Get-PSCallStack
    if ($stack.Count -gt 1) {
        $stack[1] # log this to a file or whatever you need
    }
}
This might not work at all in your scenario, but I thought I'd throw it in there as an option.
Let's say I wrote a PowerShell script that includes this command:
Get-ChildItem -Recurse
But instead I wrote:
Get-ChildItem -Re
To save time. Now suppose that, after some time passes and I upgrade my PowerShell version, Microsoft decides to add a parameter to Get-ChildItem called "-Return" that, for example, returns True or False depending on whether any items are found.
In that hypothetical scenario, do I have to edit all my former scripts to ensure they will function as expected? I understand Microsoft's attempt to save me typing time, but this is my concern, and therefore I will probably always try to write out the complete parameter name.
Unless of course you know something I don't. Thank you for your insight!
This sounds more like a rant than a question, but to answer:
In that hypothetical scenario, do I have to edit all my former scripts to ensure they will function as expected?
Yes!
You should always use the full parameter names in scripts (or any other snippet of reusable code).
Automatic resolution of partial parameter names, aliases, and other shortcuts is great for convenience when using PowerShell interactively. It lets us fire up powershell.exe and do:
ls -re *.ps1|% FullName
when we want to find the path to all scripts in the profile. Great for exploration!
But if I were to incorporate that functionality into a script I would do:
Get-ChildItem -Path $Home -Filter *.ps1 -Recurse |Select-Object -ExpandProperty FullName
not just for the reasons you mentioned, but also for consistency and readability - if a colleague of mine comes along and maybe isn't familiar with the shortcuts I'm using, he'll still be able to discern the meaning and expected output from the pipeline.
Note: There are currently three open issues on GitHub to add warning rules for this in PSScriptAnalyzer - I'm sure the project maintainers would love a hand with this :-)
I am facing some problems in PowerShell. I want a PowerShell command to search multiple directories for a file.
The file name is held in a variable, like $VM_DISK = "VM_DISK.vhdx", and PowerShell should search in that manner so that if the file exists in a folder such as C:\VM_DISK\ the script exits.
I have already tried Get-ChildItem, but it doesn't seem to work when I put my variable in it. Here is an example:
$VM_DISK= "Example.vhdx"
$search=Get-ChildItem -Path C:\VM_DISK\* -Filter $VM_DISK -Recurse
if ($search -eq $VM_DISK) {write-host "Goodbye!" exit} else {write-host "Continue"}
I just can't seem to figure out why this isn't working; I hope someone can.
You need to alter your if statement.
if ($search.Name -contains $VM_Disk)
This way you are comparing an array of names (which is what you want: names of objects, not the objects themselves) to the name of a particular object (a string, basically).
Honestly, though, that test makes little sense in your case, since $search will either include $VM_DISK or be null if nothing was found.
So the proper way to test would be if ($search) (just as Mathias advised), which checks whether anything was returned at all, and that is basically what you are trying to do.
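Put together, a minimal version of the script with that test might look like this (same paths and names as in the question):
$VM_DISK = 'Example.vhdx'
$search  = Get-ChildItem -Path 'C:\VM_DISK\*' -Filter $VM_DISK -Recurse

# $search holds the matching file object(s), or $null if nothing was found,
# so testing the variable itself is enough.
if ($search) {
    Write-Host 'Goodbye!'
    exit
}
else {
    Write-Host 'Continue'
}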
I'm working with Octopus, and I need to add to one of my PowerShell scripts the ability to modify an Octopus Parameter (not Variable...).
In short, my website deploys to two folders, alternating between them, and I have to keep track of which one was used. My idea is to set a parameter so that every run of the script reads the current value and therefore knows where to deploy the new release.
I also tried some stuff such as
$OctopusParameters['Destination']=$Number
and
Set-OctopusVariable -Name 'Destination' -Value $Number
but without success.
I hope I've been clear enough; thanks in advance to everyone who replies.
You might want to try setting an environment variable on the machine for this. It will persist between deployments.
Edit:
I can't format this well in a comment; you probably want something like this:
$destination = [Environment]::GetEnvironmentVariable("Destination", "Machine")
# change $destination to its opposite value here
[Environment]::SetEnvironmentVariable("Destination", $destination, "Machine")
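A sketch of the full toggle, with 'FolderA' and 'FolderB' standing in for the two real deployment folders (hypothetical names):
# Read the last destination from the machine-level environment variable
$destination = [Environment]::GetEnvironmentVariable("Destination", "Machine")

# Flip to the other folder for this deployment (hypothetical folder names)
$destination = if ($destination -eq "FolderA") { "FolderB" } else { "FolderA" }

# Persist the new value; writing Machine-scope variables requires an elevated session
[Environment]::SetEnvironmentVariable("Destination", $destination, "Machine")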
I'd like to use PowerShell to check whether an IIS Web Application exists (or potentially some other kind of item). I can do this with Get-Item, but that reports an error if the item doesn't exist, which is misleading to the user running the script - it looks like something went wrong when actually everything is fine.
How do I do this without an error?
The Test-Path cmdlet is designed specifically for this: it determines whether the items of a path exist. It returns a Boolean value and does not generate an error.
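For the IIS case in the question, a sketch assuming the WebAdministration module is available and using a hypothetical site/application path:
Import-Module WebAdministration   # provides the IIS: drive

# Hypothetical path; Test-Path returns $true or $false and raises no error
if (Test-Path 'IIS:\Sites\Default Web Site\MyApp') {
    Write-Host 'Web application exists'
}
else {
    Write-Host 'Web application does not exist'
}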
The Get-Item cmdlet (and similar ones) can be used too, but not directly. One way has already been proposed: use -ErrorAction SilentlyContinue. It might be important to know that this still generates an error; it just does not display it. Check the error collection $Error after the command and you will find the error there.
Just for information
There is a funny way to avoid this error (it also works with some other cmdlets, like Get-Process, that do not have a Test-Path alternative). Suppose we want to check for the existence of an item "MyApp.exe" (or a process "MyProcess"). The following commands return nothing for missing targets and, at the same time, generate no errors:
Get-Item "[M]yApp.exe"
Get-Process "[M]yProcess"
These cmdlets do not generate errors for wildcard targets, and here we use wildcards that happen to match only a single item.
Use the command: Get-Item blah -ErrorAction SilentlyContinue
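For example, a sketch of using that to branch without surfacing an error ('blah' stands in for the real path):
# Returns the item if it exists, otherwise $null; the error is suppressed
# but still recorded in $Error, as noted above.
$item = Get-Item 'blah' -ErrorAction SilentlyContinue
if ($item) { 'Item exists' } else { 'Item does not exist' }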