All the files on my IIS intranet site become corrupted when they are downloaded.
If I open them on the server they are OK.
If I copy them to my PC (transferring the file with File Explorer) they are OK.
But if I create an intranet page with a link to them and then download them with my browser, the files that arrive on my PC are slightly larger and corrupted.
It seems like IIS is corrupting them during the file transfer.
Thanks
Are you calling binmode on your output file handles? If you don't set the output streams to binary mode, then plain newlines (\n, or more accurately \x0a) will be converted to Windows-style CR-LF (\x0d\x0a); that conversion would explain both the size increase and the corruption.
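For instance, if the downloads happen to be served by a Perl CGI script (an assumption on my part; the file path variable and the content type below are just illustrative), a minimal sketch would be:
open(my $in, '<', $file_path) or die "open failed: $!";  # $file_path is illustrative
binmode($in);        # read the source file as raw bytes
binmode(STDOUT);     # stop \x0a from being expanded to \x0d\x0a on the way out
print "Content-Type: application/octet-stream\r\n\r\n";
my $buf;
print $buf while read($in, $buf, 65536);   # stream the file in 64 KB chunks
close($in);
The same principle applies in any language that distinguishes text mode from binary mode: open both the source file and the response stream as binary.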
Have you checked that the files you are trying to download have registered MIME types in IIS? If the MIME types aren't set, you can add them fairly easily in IIS.
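For example, on IIS 7 or later you could register an unmapped extension from an elevated command prompt (the .dat extension and the generic MIME type here are placeholders for whatever file types you are serving):
%windir%\system32\inetsrv\appcmd set config /section:staticContent /+"[fileExtension='.dat',mimeType='application/octet-stream']"
The same mapping can also be added through the MIME Types feature in IIS Manager.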
I am trying to use the fastText model crawl-300d-2M-subword.zip from the official page on my Windows machine, but the download keeps failing within the last few KB.
I managed to download the zip file onto my Ubuntu server using wget, but the zipped file is corrupted whenever I try to unzip it. Example of what I am getting:
unzip crawl-300d-2M-subword.zip
Archive: crawl-300d-2M-subword.zip
inflating: crawl-300d-2M-subword.vec
inflating: crawl-300d-2M-subword.bin bad CRC ff925bde (should be e9be08f7)
It is always the file crawl-300d-2M-subword.bin, which is the one I am interested in, that has problems when unzipping.
I have tried both approaches many times with no success. It seems to me that no one has had this issue before.
I've just downloaded & unzipped that file with no errors, so the problem is likely unique to your system's configuration, tools, or its network-path to the download servers.
One common problem that's sometimes not prominently reported by a tool like wget is a download that keeps ending early, resulting in a truncated local file.
Is the zip file you received exactly 681,808,098 bytes long? (That's what I get.)
What if you try another download tool instead, like curl?
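For example (a rough sketch on the Ubuntu box; the URL is a placeholder for the official download link), you can compare the local size against the expected 681,808,098 bytes and let curl resume and retry a partial download:
ls -l crawl-300d-2M-subword.zip        # compare against the expected 681,808,098 bytes
curl -L -O -C - --retry 5 https://example.com/crawl-300d-2M-subword.zip   # -C - resumes an interrupted download, --retry retries transient failures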
Sometimes if repeated downloads keep failing in the same way, it's due to subtle misconfiguration, bugs, or corruption unique to the network path from your machine to the peer (download origin) machine.
Can you do a successful download of the zip file (of the full size noted above) to anywhere else, and then transfer it from that secondary location to where you really want it? (Such a relay between different endpoints might not trigger the same problems.)
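For instance (the hostname is a placeholder), if a machine with a cleaner network path can fetch the full file, you could pull it over from there afterwards:
scp otherhost:crawl-300d-2M-subword.zip .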
If you're having problems on both a Windows machine and an Ubuntu server, are they both on the same local network, perhaps subject to the same network issues – either bugs or policies that cut a particular long download short?
So I have the jQuery File Upload script running on my web server, with a target URL pointing to a local Mac server with Apache/PHP. When I upload a file, the script shows the progress bar and completes, and the file arrives in the correct directory on the Mac, but the file is just an assortment of numbers, like:
1390865006
or
1390865033
It should be noted that those files are about the correct size, but even if I add .jpg to the end of the file name they won't open as images.
Am I missing some crucial something somewhere? I hope it's not my Mac being too old; I didn't see any PHP version requirement for jQuery File Upload, but I am running 5.2.8.
Thanks
* EDIT *
This was resolved by upgrading PHP to 5.5.8
I have a file server running Ubuntu 12.04 and Samba 3.6.3. A Samba share is mapped to a drive on a Windows 8 machine.
When copying a test file to a local drive (an SSD, so not the bottleneck here), it is very slow through Explorer. It is similarly slow when downloading the file through Internet Explorer. When downloading through Firefox (by entering the file URI), however, it is more than 10x as fast.
What's going on here? I know that Samba is not fast, but I thought that was generally the case when dealing with lots of small files, where its request logic is very inefficient. The test file was 826 MB.
Removing custom "socket options" line in smb.conf (the Samba configuration file) solved it for me.
It seems that it's best to leave that option blank nowadays, since it will calculate optimal values itself. Firefox seemed to be either using its own SMB protocol settings, or ignoring those set by the Samba server.
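The offending line is usually an old copy-pasted tuning example; the exact values in your file may differ, but it typically sits in the [global] section of /etc/samba/smb.conf and looks something like:
socket options = TCP_NODELAY SO_RCVBUF=8192 SO_SNDBUF=8192
Delete or comment out that line and restart Samba so the defaults take effect (on Ubuntu this is usually sudo service smbd restart).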
On my Mac I mounted a shared drive using WebDAV by going to "Finder > Go > Connect to Server".
Now, when I try to view the files using TextWrangler or TextEdit I can see the PHP code that I want to edit.
However, if I try to use an IDE like NetBeans/Eclipse/TextMate and create a new project with my shared drive as the "Existing sources" folder, I cannot see the PHP code.
Instead I see the HTML output of the files, as if I were viewing them through a web browser. Also, if I try to view a file that isn't normally accessible (a command line script), I see the output as if it had been called from the command line.
The weird thing is that if I use TextMate to edit a single file from the shared drive, I can see the PHP code I am trying to edit. It just doesn't work as a project.
Any suggestions or solutions on how I can use an IDE to edit files over WebDAV? And why do my IDEs display the rendered content instead of the actual file on the file system?
I'm not a specialist at all, but I seem to remember that WebDAV clients do send GET requests.
If I'm correct, your server may not be able to discriminate between a plain HTTP GET and a WebDAV GET, and thus renders your .php files. Why this would work one way with a project and another way with individual files is not clear, though.
Do you also get rendered files when you add files to your project manually?
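One way to check that theory (a rough diagnostic; the server name and path are placeholders) is to request the same .php file directly over HTTP and see what comes back:
curl -i http://yourserver/site/index.php
If that returns rendered HTML while the Finder mount shows source, the difference lies in how each client fetches the file rather than in the file itself.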
I'm trying to automate uploading and downloading from an FTP site using cURL inside MATLAB, but I'm having difficulties. Essentially I want one computer continuously uploading new files to the FTP site, and, since there is a disk quota on the FTP site, I want another computer continuously downloading and removing those same files from it.
Easy enough, but my problem arises from wanting to make sure that I don't download a file that is still being uploaded, thereby resulting in an incomplete file.
First off, is there a way in cURL to make it so that the file wouldn't be available for download from the ftp site until the entire file has been uploaded?
One way around this is that I could upload files to one directory, and once they are finished uploading, then I could transfer them to a "Finished" directory on the ftp site. Then the download program would only look for files inside that "Finished" directory. However, I don't know how to transfer files within an ftp site using cURL.
Is it possible to transfer files between directories on an ftp site using cURL without having to download the file first?
And if anyone else has better ideas on how to perform this task, I'd love to hear em!
Thanks!
You can upload the files using a special temporary name and rename them when the upload is done, and have the download client only download files with the final, "upload completed" name style.
Or you can move them between directories, just as you say (which is essentially a rename as well, just changing the directory part of the name).
With command-line curl, you can send "raw" FTP commands after the upload with the -Q option, and you can even find a tiny example in the curl FAQ: http://curl.haxx.se/docs/faq.html#Can_I_use_curl_to_delete_rename
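A rough sketch of both halves (host, credentials, file names and directories are all placeholders):
# uploader: transfer under a temporary name, then rename/move it once the upload finishes
curl -T data.bin --user user:pass ftp://ftp.example.com/incoming/data.bin.part -Q "-RNFR incoming/data.bin.part" -Q "-RNTO Finished/data.bin"
# downloader: fetch a completed file, then delete it from the server
curl --user user:pass -O ftp://ftp.example.com/Finished/data.bin -Q "-DELE Finished/data.bin"
The leading dash in "-RNFR" and the other quoted commands tells curl to send them after the transfer rather than before it.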