Invoke-WebRequest without OutFile? - powershell

I used Invoke-WebRequest in PowerShell to download a file without using the -OutFile parameter, and, from the documentation here, the file should've ended up in the directory I was in. However, there is nothing. The response was OK, and no error was shown.
What could've happened to that file? Am I mistaken about how Invoke-WebRequest should work without an Out parameter?
Thanks!
Note: I know I can easily download the file using the parameter, but it's pretty big and I'd like to make sure it isn't clogging up disk space somewhere I don't need it.

From the linked docs:
By default, Invoke-WebRequest returns the results to the pipeline.
That is, in the absence of -OutFile no file is created.
(If you don't capture or redirect the output, it will print to the host (console).)
As techguy1029 notes in a comment, the current directory only comes into play if you do use -OutFile but specify a mere file name rather than a path.
As an aside: the output sent to the pipeline is a response object whose type derives from WebResponseObject, whereas -OutFile saves only the response body (the equivalent of the .Content property).
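To see the difference, a quick sketch (the URL is just a placeholder): capture the response object and inspect it, or write the body to disk explicitly.
$response = Invoke-WebRequest -Uri 'http://www.example.org/'
$response.StatusCode    # e.g. 200
$response.Content       # the response body (what -OutFile would have saved)
# To put the body on disk after the fact:
$response.Content | Set-Content -Path .\page.html
# ...or simply repeat the request with -OutFile:
Invoke-WebRequest -Uri 'http://www.example.org/' -OutFile .\page.html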

Let's talk about what the Microsoft documentation says for Invoke-WebRequest:
"-OutFile: Specifies the output file for which this cmdlet saves the response body. Enter a path and file name. If you omit the path, the default is the current location."
The key point here is that if the path is omitted, the current location is used.
-OutFile is a parameter of type String.
To save to the current location, the usage would be:
Invoke-WebRequest "http://Test.com/test.pdf" -OutFile "Test.pdf"
or, with a custom path:
Invoke-WebRequest "http://Test.com/test.pdf" -OutFile "C:\Test\Test.pdf"

Related

Download File on Webpage via Windows CMD/Power Shell

Just as the title states, I'd like to download a file from the internet, specifically the download on this webpage. I have looked into using Invoke-WebRequest, curl, and certutil. All of these options download the HTML of the site. The specific URL of the download looks like this: https://bluetoothinstaller.com/bluetooth-command-line-tools/BluetoothCLTools-1.2.0.56.exe.
Calling things like the following just downloads the HTML:
Invoke-WebRequest -Uri 'https://bluetoothinstaller.com/bluetooth-command-line-tools/BluetoothCLTools-1.2.0.56.exe' -OutFile 'test.exe'
Alternatively, if anyone knows how to get the download link out of the HTML, please do share.
I'd prefer it if the solution did not require any additional software, but am flexible.
Thanks!
Looking at some code I wrote near the end of last year, I found this line:
(New-Object System.Net.WebClient).DownloadFile($URL, $ZipFile)
In my case I was trying to download the latest SQLite and it worked. In your case you will probably want to rename the $ZipFile variable to something like $ExeFile.
The command to build the file path/name, and define where I wanted the file saved, was this:
$ZipFile = "$PSScriptRoot\$(Split-Path -Path $URL -Leaf)"
As for extracting the file's download path from a webpage, I haven't done that yet. It is something I aim to do, but it will be a while before I get around to figuring that out.
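Putting those two lines together, a minimal sketch (using the URL from the question; $PSScriptRoot assumes the code runs from a saved script file):
# Derive the local file name from the URL, then download it with WebClient.
$URL     = "https://bluetoothinstaller.com/bluetooth-command-line-tools/BluetoothCLTools-1.2.0.56.exe"
$ExeFile = "$PSScriptRoot\$(Split-Path -Path $URL -Leaf)"
(New-Object System.Net.WebClient).DownloadFile($URL, $ExeFile)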
The following worked for me; note the comment on OutFile. You might also find something useful on the Network tab of your browser's dev tools.
$params = @{
    UseBasicParsing = $true
    Uri = "https://bluetoothinstaller.com/bluetooth-command-line-tools/BluetoothCLTools-1.2.0.56.exe"
    Headers = @{
        Referer = "https://bluetoothinstaller.com/bluetooth-command-line-tools/download.html"
    }
    OutFile = 'path/to/download.exe' # Change this
}
Invoke-RestMethod @params

Copy File From Teams to File Server Using Powershell

I am trying to copy an xlsx file from my Teams channel to a location on a file server.
I've seen various articles online that suggest Invoke-WebRequest "https://teams.microsoft.com/l/file/rest of URL here" -OutFile C:\Test\CricketQuiz.xlsx. While this works in terms of being able to see the file at the desired location, I can't actually open it because I get an error.
I get the same error when I try the approach suggested in this article: https://blog.jourdant.me/post/3-ways-to-download-files-with-powershell
$url = "https://teams.microsoft.com/l/file/rest of my URL here"
$output = "C:\Test\SportsQuiz.xlsx"
$start_time = Get-Date
$wc = New-Object System.Net.WebClient
$wc.DownloadFile($url, $output)
I'm guessing this is something relatively straightforward to resolve for those with more experience.
The problem here is that the link you've got (the Teams link) is not a direct link to the file at all; it's a link to an embedded version of the file inside the Teams client (basically like a deep link). To actually download the file, try the following:
1. From the URL you've got, parse out the "objectUrl" part of the query string. As an example, I have:
https://teams.microsoft.com/l/file/[guid]?tenantId=[guid2]&fileType=xlsx&objectUrl=https%3A%2F%2F[tenantname].sharepoint.com%2Fsites%2FHR%2FShared%2520Documents%2FEmployee%2520Sentiment%2520Analysis.xlsx&serviceName=recent
You want (in my example): https%3A%2F%2F[tenantname].sharepoint.com%2Fsites%2FHR%2FShared%2520Documents%2FEmployee%2520Sentiment%2520Analysis.xlsx
2. Then you need to query-string decode this, to get (e.g.) https://[tenantname].sharepoint.com/sites/HR/Shared%20Documents/Employee%20Sentiment%20Analysis.xlsx
3. Finally, use the PnP-PowerShell module's Get-PnPFile to download the file. This is itself a few steps:
3.1 You need to connect the session using Connect-PnPOnline, and you need to connect to the right "SPWeb". In this case, that would be Connect-PnPOnline https://[tenantname].sharepoint.com/sites/HR
3.2 After that you can download the file, but you need to URL-decode the path again to get rid of %20 and similar, something like:
Get-PnPFile -Url "/Shared Documents/Employee Sentiment Analysis.xlsx" -AsFile -Path "c:\temp\"
This will give you a copy of Employee Sentiment Analysis.xlsx (in my example) inside c:\temp.
Obviously this can all be automated (the query-string decoding, the Connect-PnPOnline credentials, and so on), but hopefully this gets you on the right path.
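A sketch of the whole flow, assuming the PnP PowerShell module is installed; the tenant, site, and file names are placeholders based on the example above, and the authentication parameter depends on your module version:
Add-Type -AssemblyName System.Web
$teamsLink = 'https://teams.microsoft.com/l/file/abc123?tenantId=xyz&fileType=xlsx&objectUrl=https%3A%2F%2Fcontoso.sharepoint.com%2Fsites%2FHR%2FShared%2520Documents%2FEmployee%2520Sentiment%2520Analysis.xlsx&serviceName=recent'
# Step 1: pull objectUrl out of the query string (ParseQueryString decodes it once).
$query     = ([uri]$teamsLink).Query.TrimStart('?')
$objectUrl = [System.Web.HttpUtility]::ParseQueryString($query)['objectUrl']
# Step 2: decode once more so %20 becomes a space.
$fileUrl   = [System.Web.HttpUtility]::UrlDecode($objectUrl)
# -> https://contoso.sharepoint.com/sites/HR/Shared Documents/Employee Sentiment Analysis.xlsx
# Step 3: connect to the right site and download the file.
Connect-PnPOnline -Url 'https://contoso.sharepoint.com/sites/HR' -Interactive   # older module versions use a different auth parameter
Get-PnPFile -Url '/Shared Documents/Employee Sentiment Analysis.xlsx' -AsFile -Path 'C:\temp'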

Invoke-WebRequest and Hebrew characters

I already tried the registry hack for PowerShell to support Hebrew characters. I can type Hebrew with no problem, but for some reason content containing Hebrew returned from Invoke-WebRequest comes back as gibberish.
Here's the site URL I'm attempting to query:
https://www.hometheater.co.il/vt278553.html
Update:
It looks like the content type being returned ALWAYS has charset Windows-1255, which is probably the issue.
This seems to be not only an issue of having to specify the encoding, but also of the shell not being able to display it correctly. If you save the content to a file with the encoding specified and open it in a decent text editor (not Notepad, but e.g. Notepad++), you will see that it has been parsed correctly.
Invoke-WebRequest -Uri "https://www.hometheater.co.il/vt278553.html" -ContentType "text/plain; charset=Windows-1255" -OutFile content.txt
We can also test that the in-memory presentation is correct by reading it and writing it to another file:
Get-Content .\content.txt | Set-Content test.txt
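If you want the correctly decoded text in memory rather than only in a file, one approach (a sketch, not from the answer above) is to decode the raw response bytes with the Windows-1255 code page yourself:
$response = Invoke-WebRequest -Uri "https://www.hometheater.co.il/vt278553.html"
$bytes    = $response.RawContentStream.ToArray()
$text     = [System.Text.Encoding]::GetEncoding('Windows-1255').GetString($bytes)
# Re-save as UTF-8 so editors such as Notepad++ display the Hebrew correctly:
Set-Content -Path .\content-utf8.txt -Value $text -Encoding UTF8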

Display all content with Invoke-WebRequest

So I decided to start using PowerShell rather than Command Prompt, and I wanted to run curl. The output was very different, and then I discovered that curl is an alias for Invoke-WebRequest in PowerShell.
Using PowerShell's curl in the same way as real curl, I only get part of the content displayed.
I have seen that I can put the output of PowerShell's curl into a variable and then use $variable.Content to display all of the content, but that seems like extra work compared to real curl.
Is there an option to show all of the content directly? I can't see one in the help.
Unlike the curl command-line utility, Invoke-WebRequest returns an object with various properties, of which the content of the requested document is just one. You can get the content in a single statement by expanding the property like this:
Invoke-WebRequest 'http://www.example.org/' | Select-Object -Expand Content
or by getting the property value via dot-notation like this:
(Invoke-WebRequest 'http://www.example.org/').Content
Alternatively you could use the Windows port of curl:
& curl.exe 'http://www.example.org/'
Call the program with its extension to distinguish it from the alias curl for Invoke-WebRequest.
Well, if you are bothered by the extra typing, this is the shortest way to achieve that (at least the shortest I can think of):
(iwr google.tt).content
Something like this?
$res=Invoke-WebRequest "https://www.google.fr/"
#to view html of body
$res.ParsedHtml.body.innerHTML
#to view text of body
$res.ParsedHtml.body.innerText

Using an answer file with a PowerShell script

I have a PowerShell script with a number of 'params' at the start:
param(
[switch] $whatif,
[string] $importPath = $(Read-Host "Full path to import tool"),
[string] $siteUrl = $(Read-Host "Enter URL to create or update"),
[int] $importCount = $(Read-Host "Import number")
)
Is there any way I can run this against an answer file to avoid entering the parameter values every time?
I don't quite get the reason for the question. All you have to do to call your script is something like:
.\script.ps1 -whatif -importPath import_path -siteUrl google.com -importCount 1
The Read-Host calls are there as defaults, to be executed (and the values read and assigned to the parameters) only if you don't specify the values. As long as you have the above command (saved in a file so that you can copy and paste it into the console, run it from another script, or whatever), you don't have to enter the values again and again.
Start by setting the function or script up to accept pipeline input.
[CmdletBinding(SupportsShouldProcess=$True,ConfirmImpact='Low')]
param(
    [Parameter(Mandatory=$True,ValueFromPipelineByPropertyName=$True)]
    [string] $importPath,
    [Parameter(Mandatory=$True,ValueFromPipelineByPropertyName=$True)]
    [string] $siteUrl,
    [Parameter(Mandatory=$True,ValueFromPipelineByPropertyName=$True)]
    [int] $importCount
)
Notice that I removed your manually-created -whatif. No need for it - I'll get to it in a second. Also note that Mandatory=$True will make PowerShell prompt for a value if it isn't provided, so I removed your Read-Host.
Given the above, you could create an "answer file" that is a CSV file. Make an importPath column, a siteURL column, and an importCount column in the CSV file:
importPath,siteURL,importCount
"data","data",1
"x","y",2
Then do this:
Import-CSV my-csv-file.csv | ./My-Script
Assuming your script is My-Script.ps1, of course.
Now, to -whatif. Within the body of your script, do this:
if ($PSCmdlet.ShouldProcess($target)) {
    # do whatever your action is here
}
This assumes you're doing something to $target, which might be a path, a computer name, a URL, or whatever. It's the thing you're modifying in your script. Put your modification actions/commands inside that if construct. Doing this, along with the SupportsShouldProcess() declaration at the top of the script, will enable -whatif and -confirm support. You don't need to code those parameters yourself.
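As a sketch of how these pieces can sit together inside My-Script.ps1 (the Write-Verbose line just stands in for your real work), wrap the body in a process block so it runs once for every CSV row piped in:
process {
    if ($PSCmdlet.ShouldProcess($siteUrl)) {
        Write-Verbose "Importing '$importPath' into '$siteUrl' (import #$importCount)"
        # ... the actual import work goes here ...
    }
}
With that in place, Import-CSV my-csv-file.csv | ./My-Script.ps1 -WhatIf previews what would happen for each row without changing anything.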
What you're building is called an "Advanced Function," or if it's just a script then I guess it'd be an "Advanced Script." Utilizing pipeline input parameters in this fashion is the "PowerShell way of doing things."
To my knowledge, PowerShell doesn't have a built-in understanding of answer files. You'll have to pass the values in somehow or read them yourself from the answer file.
Wrapper. You could write another script that calls this script with the same parameters you want to use every time. You could also make a wrapper script that reads the values from the answer file and then passes them in.
Optional Parameters. Or you could change the parameters to use defaults that indicate no parameters were passed, then check for a file of a specific name to read values from. If the file isn't found, then prompt for the values.
If the format of the answer file is flexible (i.e., you're only going to be using it with this PowerShell script), you could get much closer to the behavior of an actual answer file by writing it as a PowerShell script itself and dot-sourcing it.
if (Test-Path 'myAnswerFile') {
    . 'myAnswerFile'
    # process whatever was sourced from the answer file, if necessary
} else {
    # prompt for values
}
It still requires removing the Read-Host calls from the parameters of the script.
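For illustration, such an answer file could be nothing more than plain variable assignments whose names match the script's parameters (the file name and values here are made up):
# myAnswerFile.ps1 (hypothetical): dot-sourcing this supplies the values the script would otherwise prompt for.
$importPath  = 'C:\Tools\ImportTool.exe'
$siteUrl     = 'https://example.com/newsite'
$importCount = 1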
Following on from Joel, you could set up a different parameter set based around the switch -answerfile.
If that's set, the function will look for an answer file and parse through it (as he said, you'll need to do that yourself). If it's not set and the others are, then the function is used with the parameters given. A minor benefit I see is that you can still have the parameters mandatory when used that way.
Matt