How can I download a report using wget

I am trying to use wget to automate the download of a file that is generated by a report server (PDF format). However, the problem I am having is that the file name is never known (it is generated randomly by the server), and the URL accepts parameters that will change, e.g. Date=, Name=, ID=, etc.
For example, if I were to pass http://url.com/date=&name=&id= in Internet Explorer, I would get a download dialog prompting me to download a file named xyz123.pdf.
Is it possible to use wget to pass these parameters to the report server and automatically download the generated PDF file?

Just put the full URL in quotes; wget should go and fetch the file:
wget "http://url.com/date=foo&name=baa&id=baz"
Thanks,
//P
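Since the server picks the file name (xyz123.pdf in the example) and advertises it via the Content-Disposition response header, wget's --content-disposition option can save the file under that server-supplied name. The parsing involved boils down to something like this Python sketch (the header value shown is a made-up example, not one from the actual report server):

```python
import re

def filename_from_content_disposition(header):
    """Pull the server-suggested file name out of a
    Content-Disposition header value, if one is present."""
    match = re.search(r'filename="?([^";]+)"?', header)
    return match.group(1) if match else None

# A header like the report server might send for the generated PDF:
print(filename_from_content_disposition('attachment; filename="xyz123.pdf"'))
# prints: xyz123.pdf
```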

Related

Aborting wget on file found (Windows)

I am trying to use wget for Windows (on Windows 7) to find and download a file that I don't know the full name of (I have a partial name, and I know the form of the unknown part of the name). I am using an input file with a list of the possible file names, and I want to abort wget when the file is found (the rest of the possibilities will give 404 errors). How can I cause wget to abort automatically when that one file is found?
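The "stop at the first hit" logic can be expressed independently of how each candidate is probed: loop over the candidate names and return as soon as a probe succeeds. A minimal Python sketch — in practice the exists callback would issue an HTTP request (e.g. by checking wget's exit status per URL); the names and probe below are hypothetical stand-ins:

```python
def first_found(candidates, exists):
    """Try candidate file names in order and stop at the first
    one the probe reports as present (the rest would 404)."""
    for name in candidates:
        if exists(name):
            return name
    return None

# Hypothetical partial-name candidates; the lambda stands in for a
# real HTTP existence check.
names = ["report_001.pdf", "report_002.pdf", "report_003.pdf"]
print(first_found(names, lambda n: n == "report_002.pdf"))
# prints: report_002.pdf
```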

Download file from Google Drive with PowerShell

While trying to download a file with PowerShell, I have the following:
$client = new-object System.Net.WebClient
$client.DownloadFile($AGRIDATAMISCURL,$TESTAGRIDATAMISCZIP)
Where $AGRIDATAMISCURL is a URL that looks like "https://drive.google.com/file/d/<...>" and $TESTAGRIDATAMISCZIP looks like "C:\test\A.zip"
This script doesn't return an error but the file it downloads is basically an HTML file with a prompt to sign in to Google. Is there another way to download a file that is "shared with me"?
Share the file first
Files in Google Drive must be made available for sharing before they can be downloaded. There's no security context when running from PowerShell, so the file download fails. (To check this, rename the downloaded file with a `.html` extension and view it in a text editor.)
Note: the following solution assumes that the links are to non-security-critical files, or that the links will only be given to those who can be trusted with access (the links are HTTPS, so they are encrypted in transmission). The alternative is to programmatically authenticate with Google, something not addressed in this answer.
To share the file, in Google Drive:
1. Right-click the file and choose Get shareable link
2. Turn link sharing on
3. Click Sharing settings
4. Ensure that Anyone with the link can view is selected (note that in corporate environments, the link must be shared with those outside the organization in order to bypass having to log in)
Then download programmatically
Then code such as the following can be used to download the file (in this case with Windows PowerShell):
# Download the file
$zipFile = "https://drive.google.com/uc?export=download&id=1cwwPzYjIzzzzzzzzzzzzzzzzzzzzzzzz"
Invoke-WebRequest -Uri $zipFile -OutFile "$($env:TEMP)\myFile.doc"
Replace 1cwwPzYjIzzzzzzzzzzzzzzzzzzzzzzzz with the ID from the shareable link obtained in the sharing steps above.
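Extracting that ID from the shareable link can itself be scripted. A small Python sketch, assuming the usual https://drive.google.com/file/d/<ID>/view link shape (the ID below is fake):

```python
def drive_download_url(share_link):
    """Turn a Google Drive shareable link into the direct
    uc?export=download form used above."""
    file_id = share_link.split("/file/d/")[1].split("/")[0]
    return "https://drive.google.com/uc?export=download&id=" + file_id

print(drive_download_url(
    "https://drive.google.com/file/d/1cwwPzYjIabc123/view?usp=sharing"))
# prints: https://drive.google.com/uc?export=download&id=1cwwPzYjIabc123
```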

upload files to github directory using github api

I want to upload files from my system to a directory in a GitHub repo using the API. Is there an API endpoint that allows me to do that?
You should use the GitHub CRUD API, which was introduced in May 2013.
It includes:
File Create
PUT /repos/:owner/:repo/contents/:path
File Update
PUT /repos/:owner/:repo/contents/:path
File Delete
DELETE /repos/:owner/:repo/contents/:path
There's definitely a way to do it. This is how I did it:
curl -i -X PUT -H 'Authorization: token 9xxxxxxxxxxxxxxxxxxxxxxxe2' \
  -d '{"message": "uploading a sample pdf",
       "content": "bXkgbm…"
      }' \
  https://api.github.com/repos/batman/toys/contents/sample.pdf
Where the content property is the base64-encoded contents of the file. I used this tool to encode my PDF file: https://www.freeformatter.com/base64-encoder.html
Note that "batman" is the owner, "toys" is my repo, "contents" has to be there verbatim, and sample.pdf is the name you want your file to be uploaded as.
In short, stick to this format: /repos/:owner/:repo/contents/:path
And you can run the identical step for any of these files:
PNG (.png)
GIF (.gif)
JPEG (.jpg)
Log files (.log)
Microsoft Word (.docx), PowerPoint (.pptx), and Excel (.xlsx) documents
Text files (.txt)
PDFs (.pdf)
ZIP (.zip, .gz)
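Instead of an online encoder, the request body can be built in a few lines of Python — the content property must be the raw file bytes, base64-encoded (the path and commit message here are placeholders):

```python
import base64
import json

def github_upload_payload(path, message):
    """Build the JSON body for PUT /repos/:owner/:repo/contents/:path.
    GitHub expects 'content' to be the file's bytes, base64-encoded."""
    with open(path, "rb") as f:
        encoded = base64.b64encode(f.read()).decode("ascii")
    return json.dumps({"message": message, "content": encoded})
```

The resulting string is exactly what goes after curl's -d flag in the command above.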
Good luck.
Btw, I have these same details added on here: http://www.simplexanswer.com/2019/05/github-api-how-to-upload-a-file/

Download and save image from a website using wget

How do I download and save a particular image from the following web page using wget?
http://www-nass.nhtsa.dot.gov/nass/cds/GetBinary.aspx?SceneView&ImageID=509617654
I tried this
"C:\Program Files (x86)\GnuWin32\bin\wget" -r -P "C:\temp\" -A jpeg,jpg,bmp,gif,png "http://www-nass.nhtsa.dot.gov/nass/cds/GetBinary.aspx?SceneView&ImageID=509617654"
But the image was not downloaded and saved. I am using Windows 7. I guess I am not getting the image because the web page is not a proper HTML page (it has no .html or .asp etc. extension). Am I correct?
Not exactly. A file extension is not required for URLs that return HTML (e.g. http://google.com/).
By inspecting the HTML source (ignoring that the page contains invalid HTML, with a <script> tag between <head> and <body>), we can see that it uses JavaScript to alter the image's src attribute on page load (why, who knows...) to /GetBinary.aspx?Scene&ImageID=509617654&CaseID=&Version= (relative to the HTML page).
As wget can't execute JavaScript, this approach will never work as-is.
However, the actual image URL does return a JPEG image. You'll have to rename the downloaded file, though, because the web server (IIS) is misconfigured: for that URL it returns the header
Content-Type: E:\Sites\NASS\CDS\/img/jpg
which is invalid and causes file-association problems when downloading in most browsers and clients.
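A valid Content-Type value has the type/subtype shape, and the Windows path above fails even a loose syntax check. A quick Python sketch of such a check:

```python
import re

def is_valid_content_type(value):
    """Loosely validate a Content-Type header value:
    token '/' token, optionally followed by parameters."""
    return re.fullmatch(r"[\w.+-]+/[\w.+-]+(;.*)?", value) is not None

print(is_valid_content_type("image/jpeg"))                   # True
print(is_valid_content_type(r"E:\Sites\NASS\CDS\/img/jpg"))  # False
```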
To prove the image is there, you can try downloading it directly with wget:
wget "http://www-nass.nhtsa.dot.gov/nass/cds/GetBinary.aspx/GetBinary.aspx?Scene&ImageID=509617654&CaseID=&Version=" -O image.jpg

wget download name

I'd like to write a function that, given a URL, returns the name of the file downloaded by wget URL.
I don't understand the behavior of wget very well. If I do wget on python.org, www.python.org, http://www.python.org, or http://www.python.org/, the name of the file downloaded is index.html.
However, if I do www.python.org/about, the name of the file downloaded is about, instead of index.html.
The reason your wget fetches index.html in the first cases is that index.html is the default "home page" the server points to. python.org, www.python.org, http://www.python.org, and http://www.python.org/ aren't files, so the server points wget to index.html. It points your browser there too, though you don't usually see it. www.python.org/about is a different page, so it makes sense that the file it downloads has a different name.
Might I recommend the man page for wget if you want to know how it works? If the name of the downloaded file concerns you, you can change it via the -O option.
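wget's default naming rule can be approximated in a few lines: take the last path segment of the URL, falling back to index.html when the path is empty or ends in a slash. A simplified Python sketch — it ignores query strings, --content-disposition, and the .1/.2 suffixes the real wget appends for duplicate names:

```python
from urllib.parse import urlparse

def wget_default_name(url):
    """Approximate the file name `wget URL` would save to."""
    path = urlparse(url).path
    name = path.rsplit("/", 1)[-1]
    return name if name else "index.html"

print(wget_default_name("http://www.python.org/"))       # index.html
print(wget_default_name("http://www.python.org/about"))  # about
```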