Trying to download jpg images using wget

I am trying to download jpg images from one of our suppliers. I know the image locations so for example I use:
wget http://fastserve.horizonhobby.com/ProdInfo/SPM/450/SPM1511-450.jpg
This was working fine for a while, but then I started getting certificate-check errors.
When I appended "check_certificate = off" to my wget config file (.wgetrc), the download would come through corrupt.
Any help with this would be greatly appreciated.
Thanks
Tom

That link redirects to a login page, so I imagine your wget can only fail. You'd better look into curl and its login options.

You need to get past the authentication step first.
The link you provided is not reachable without authenticating.
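A minimal sketch of the login-then-download flow with curl (the login URL and form field names here are guesses, so check the site's actual login form; -k skips certificate verification, like wget's check_certificate = off):
# Log in once and store the session cookies (URL and field names are assumptions)
curl -k -c cookies.txt -d 'username=USER&password=PASS' https://fastserve.horizonhobby.com/login
# Reuse the cookies for the actual image request
curl -k -b cookies.txt -o SPM1511-450.jpg http://fastserve.horizonhobby.com/ProdInfo/SPM/450/SPM1511-450.jpg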

Related

Download message from Google group

I need to download an archived google group.
The following link, for example, is one of the messages of that group.
https://groups.google.com/forum/#!topic/sci.aeronautics/ViFtpXfVm7M
The problem is, what I see in the browser does not appear in the downloaded web page.
With my very limited knowledge, it seems to me that the content is dynamically created by JavaScript. Or else, could it be that the downloaded files, which have the so-called 'mbox' extension, are encrypted?
What I've tried so far
First tries
Simple download
wget https://groups.google.com/d/topic/sci.aeronautics/ViFtpXfVm7M
With mirror
wget --mirror https://groups.google.com/d/topic/sci.aeronautics/ViFtpXfVm7M
Assuming it's encrypted
With cookies.
wget --load-cookies=cookies.txt https://groups.google.com/d/topic/sci.aeronautics/ViFtpXfVm7M
Got Thunderbird set up with my Gmail account and tried opening the file; it did not open correctly.
Assuming the content was javascript generated
Downloaded using PhantomJS
https://askubuntu.com/questions/411540/how-to-get-wget-to-download-exact-same-web-page-html-as-browser
Downloaded using PhantomJS with a different script
https://gist.github.com/giocomai/247d54e097b5083e2451
Used scripts available from GitHub
https://github.com/henryk/gggd
https://github.com/icy/google-group-crawler
But none of these have worked so far.
Can anyone please shed some light on how to download this page, with its messages, as a readable HTML or txt file?
Cheers
AyyoSalli
You could use https://groups.google.com/forum/feed/sci.aeronautics/msgs/atom.xml?num=100 to get some of the posts - but it only gets roughly half the posts in this case.
And it has all the messages from all topics together.
View it in Firefox or Classic Opera to see it directly in a more human-readable form.
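For instance, to save a local copy of that feed (quoting the URL so the shell leaves the query string alone; the num parameter appears to cap how many entries are returned):
wget -O sci.aeronautics.atom.xml 'https://groups.google.com/forum/feed/sci.aeronautics/msgs/atom.xml?num=100'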
But since you say you already got a file in standard mbox format, what exactly is wrong with it - did you attempt to import it into a locally installed email or news client (like Thunderbird)?
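If you just want to sanity-check the mbox file before importing it, one quick option is to open it read-only with mutt (messages.mbox below is a placeholder for your actual file name):
# -R opens the mailbox read-only, -f points at the mbox file
mutt -R -f messages.mbox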

Cannot upload files for a release on GitHub

I want to create a release for my GitHub project, but when I try to upload my binary (which is a .zip file) to the release, I get the following error message:
Something went really wrong, and we can’t process that file.
I get the same error message if I try to upload some other files (e.g. my readme file). What could be wrong?
Check first whether this issue persists across all browsers.
I have seen this error message before, where the upload succeeded with Chrome, but not Firefox.
For the record, in my case this was due to an obvious mistake: I had to authorize a GitHub script in NoScript.
If someone encounters this problem in China, and is using ShadowSocks, try turning ShadowSocks off.
I was trying to upload an image file in PNG format in the Chrome browser, and I was getting the same message. As suggested in one of the answers, I turned off the Windows firewall and it worked.
But my mind was not satisfied, so I turned the Windows firewall back on and converted the image to JPG format, and it got uploaded.
But this stubborn mind was still not satisfied and told me to upload it in .png format only. By chance, the Properties tab of the PNG image got opened, and at the bottom of the tab it showed:
"This file came from another computer and might be blocked".
So I unblocked it and it got uploaded.
I got that error due to my company's firewall, which prevents all file uploads. Might this be the cause of your problem?
The problem occurred during file upload because I was using a VPN which doesn't allow file uploads. I disconnected from the VPN and then tried uploading, and it worked absolutely fine. So my advice is to check your firewall/VPN settings, as many organizations don't allow file uploads to GitHub due to security concerns.
I got the same problem in the Opera browser (Ubuntu 18.04.1).
Then I realized that my browser did not have permission to access the partition where my files were located. I moved the files I wanted to upload to GitHub into a partition on my HDD that Opera could access, and then I was able to upload them.
It worked like a charm. Hope this helps.
Clear your browser's cookies and cache. After clearing them, files will upload to GitHub without error.
In my case, I was re-uploading two zip files to an existing release. One succeeded and the other failed. I tried both Chrome and Firefox.
My solution was to use the hub command-line tool.
I used the command below to edit the existing release and attach files to it:
hub release edit --draft=false --attach FILE TAG
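For example, with placeholder names (substitute your own asset and tag; the empty -m is meant to skip the editor without changing the existing release notes):
hub release edit -m "" --attach ./myapp-v1.2.0.zip v1.2.0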
For some reason, I had to change the file extension from .png to .jpg and it worked.
I had the same problem, which I solved by disabling my firewall. I hope this works for you.
In my case, it was my anti-virus (AVG) preventing the upload.
In my case, Avast antivirus was blocking the objects-origin.githubusercontent.com site, and made the upload fail.
After disabling the antivirus, I was able to upload the files.
Well! That is because you are uploading images that you downloaded from the internet; in my case, I had used a favicon generated from a website. To solve the problem:
First of all, head to the image you downloaded from the internet, right-click on it, and click Properties.
Check the Unblock checkbox, as shown in image 1 (the one with the security details).
Then click on the Details tab, as shown in image 2, and click "Remove Properties and Personal Information".
Then check "Remove the following properties from this file", as shown in image 3.
Then click OK, and then OK again.
This should solve your issue; it worked perfectly for me.
Files can also fail to upload because your network provider doesn't allow it; use a different network or a VPN to upload the files.

Fancybox works locally but not when uploaded to server

Hi, I'm hoping you guys can help me. I put together a Fancybox gallery which works beautifully locally, but when I uploaded it to the server, only 2 images work when clicked. I'm at a loss as to why the others won't load; all images are in the same folder and the paths are the same.
Any ideas?
By using the browser debug tools, we can see that your server's returning "404 not found" errors for all the images. It looks like your image paths are img\thumb3.jpg; web servers use forward slashes as a path separator. If you change your gallery so the images are img/thumb3.jpg, it should all work fine.
The reason this works locally but not when uploaded is because you're working on Windows, and your web server probably isn't. :)
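If the gallery markup lives in one file, a blunt but quick fix is a one-line substitution (a sketch; gallery.html is a placeholder name, this assumes backslashes only occur in those image paths, and -i edits in place with GNU sed - on BSD/macOS use -i ''):
# replace every backslash with a forward slash in the gallery page
sed -i 's#\\#/#g' gallery.html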

SWFUpload System IO error - progress bar getting stuck

After reading Steve Sanderson's post on SWFUpload,
http://blog.stevensanderson.com/2008/11/24/jquery-ajax-uploader-plugin-with-progress-bar/
I implemented SWFUpload on a site I am working on. Some users are getting a variety of issues where the progress bar gets stuck, or they get error message 2038 with error code -220 (System IO error). This is not related to certificates, as in the test below both addresses can be accessed with http or https.
I haven't been able to reproduce most of these errors; however, when trying to upload large images over 2 MB,
it works fine on the test site, but not on the live one.
UPDATE: I had posted examples here, now removed as the links don't work.
Both sites are hosted on AppHarbor with exactly the same code.
The limit for image uploads should be 10 MB, and I have successfully uploaded larger images than the one posted here.
What could be the cause of this?
Can I ask what language the rest of the site is written in?
My first thought is that if it's an IO error, the server could be running out of space. Run:
df -h
on the servers and see what you get. Remember that all file uploads are written to /tmp before being moved to where you want them, so if that fills up, uploads stop.
This turned out to be a configuration setting at the load balancer level. We have a dedicated load balancer with AppHarbor so we can offer full SSL support. It had not been set up to allow requests of 10 MB; they have changed it now.
Just don't forget to set the php.ini parameters:
session cookies to on,
and session.use_only_cookies to off.
In the JS plugin, the session is then passed this way:
post_params: {
    // pass the PHP session ID explicitly, since the Flash uploader does not send browser cookies
    <?php echo "'".ini_get('session.name')."':'".session_id()."'"; ?>
}
Furthermore, don't forget to check the list of image extensions handled by your JS plugin.
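For reference, those php.ini lines would look something like this (a sketch using the standard directive names; the point is that the session ID is then also accepted from POST data):
; keep normal cookie-based sessions enabled
session.use_cookies = 1
; but don't require cookies, so the ID posted by the Flash uploader is accepted
session.use_only_cookies = 0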

Using wget to download all the hulkshare/mediafire linked files on a page

So I've been trying to set up wget to download all the mp3s from www.goodmusicallday.com. Unfortunately, rather than the mp3s being hosted by the site, the site puts them up on www.hulkshare.com and then links to the download pages. Is there a way to use the recursive and filtering abilities of wget to make it go to each hulkshare page and download the linked mp3?
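Something like this is what I have in mind (a sketch; the domain list and accept pattern are my guesses from the wget manual):
# recurse two levels, allow spanning onto hulkshare.com, keep only mp3 files
wget -r -l 2 -H -D goodmusicallday.com,hulkshare.com -A '*.mp3' http://www.goodmusicallday.com/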
Any help is much appreciated
So, a friend of mine actually figured out an awesome way to do this. The one-liner below scrapes the page for each soundFile URL and downloads it, naming the saved file from the matching titles parameter. Just enter it in a terminal:
IFS="";function r { echo $1|sed "s/.*$2=\([^\'\"\&;]*\).*/\1/";};for l in `wget goodmusicallday.com -O-|grep soundFile`;do wget -c `r $l soundFile` -O "`r $l titles`";done
I guess not!
I have tried on several occasions to do scripted downloads from MediaFire, but in vain.
That's the reason why they don't have a simple download link and instead attach a timer to it!
If you look carefully, you will notice that the download links (I mean the actual file-hosting server) are not on www.mediafire.com, but rather on something like download666.com.
So I don't think it is possible with wget!
wget can only save the day if the download links are simple HTML links, i.e. plain a tags.
Regards,