Is there a way to make NetBeans file uploads not fail on a single error?

Is there a way to make NetBeans's FTP uploads less error-sensitive (reconnect and retry on error)?
It is simply unable to handle large numbers of files. If I upload 1000 files and there is some sort of failure at the 400th file, it will fail for the rest of the files, and it will not record which files failed. So I have to upload ALL the files again, even the ones that uploaded successfully on the previous attempt, and then it fails again and again.

This post describes changing a NetBeans FTP setting to Passive. Doing so reduced many of my transfer errors.
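If switching to Passive still leaves a batch half-failed, one possible workaround (outside NetBeans, sketched with placeholder host, credentials and paths) is to push the files with a per-file curl loop, so each file gets its own retries and a single failure neither aborts the rest nor goes unrecorded:
# Hypothetical fallback outside NetBeans: upload one file at a time.
cd /path/to/project                                  # placeholder: local web root
: > /tmp/failed-uploads.txt
find . -type f | while read -r f; do
  curl --ftp-create-dirs --retry 5 --retry-delay 10 \
       -T "$f" "ftp://ftp.example.com/public_html/${f#./}" \
       --user "USER:PASS" \
    || echo "$f" >> /tmp/failed-uploads.txt          # record files that still failed
done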

Related

crawl-300d-2M-subword.zip corrupted or cannot be downloaded

I am trying to use the fastText model crawl-300d-2M-subword.zip from the official page on my Windows machine, but the download fails in the last few KB.
I managed to download the zip file onto my Ubuntu server using wget, but the archive is corrupted whenever I try to unzip it. Example of what I am getting:
unzip crawl-300d-2M-subword.zip
Archive: crawl-300d-2M-subword.zip
inflating: crawl-300d-2M-subword.vec
inflating: crawl-300d-2M-subword.bin
bad CRC ff925bde (should be e9be08f7)
It is always the file crawl-300d-2M-subword.bin, the one I am interested in, that has problems in the unzipping.
I have tried both ways many times with no success. It seems to me no one has had this issue before.
I've just downloaded & unzipped that file with no errors, so the problem is likely unique to your system's configuration, tools, or its network-path to the download servers.
One common problem that's sometimes not prominently reported by a tool like wget is a download that keeps ending early, resulting in a truncated local file.
Is the zip file you received exactly 681,808,098 bytes long? (That's what I get.)
What if you try another download tool instead, like curl?
Sometimes if repeated downloads keep failing in the same way, it's due to subtle misconfiguration bugs/corruption unique to the network path from your machine to the peer (download origin) machine.
Can you do a successful download of the zip file (at the full size given above) anywhere else, and then transfer it from that secondary location to where you really want it? Such a relay between different endpoints might not trigger the same problems.
If you're having problems on both a Windows machine and an Ubuntu server, are they both on the same local network, perhaps subject to the same network issues, whether bugs or policies that cut a particularly long download short?
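For what it's worth, here is a quick way to check for truncation and recover on the Ubuntu side; the URL is an assumption, so substitute whatever link the official page actually gives you:
stat -c '%s' crawl-300d-2M-subword.zip    # should print 681808098 per the size above
# Resume the partial download instead of starting over (-C - picks up where it stopped):
curl -L -O -C - "https://dl.fbaipublicfiles.com/fasttext/vectors-english/crawl-300d-2M-subword.zip"
unzip -t crawl-300d-2M-subword.zip        # test all CRCs without extracting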

Invoke-WebRequest can't download properly

I had been using Invoke-WebRequest in my script for some time, and today for the first time I had a problem with it: when I run the script, it starts to download a ZIP file and finishes without warnings, but when I try to open the file, WinRAR gives me an error and I can see that the file size is smaller than it should be. I tried several times and got the same result. Then I downloaded the file with Google Chrome just to make sure the file itself is not corrupted, and it downloaded (with the correct file size) and opened without any issues.
A couple of hours later I ran the script again and this time it downloaded the file correctly. So now I am not sure I can trust Invoke-WebRequest, since it seems it can sometimes produce an incomplete download.
Maybe someone can point me to how I can avoid this situation? Or would it be better to use another method?
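I can't say what truncated your particular download, but one defensive pattern, sketched here with curl.exe (bundled with recent Windows 10/11; the URL and file name are placeholders), is to let the tool retry transient failures and then compare the result against the server's advertised Content-Length before trusting it:
curl.exe -L --fail --retry 5 --retry-delay 10 -o package.zip "https://example.com/package.zip"
curl.exe -sIL "https://example.com/package.zip"   # note the Content-Length header
# compare that Content-Length with the size of package.zip on disk before using the file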

cURL ftp transfer scenario

I'm trying to automate uploading and downloading from an FTP site using cURL inside MATLAB, but I'm having difficulties. Essentially I want one computer continuously uploading new files to the FTP site, yet since there is a disk quota on it, I want another computer continuously downloading and removing those same files from the FTP site.
Easy enough, but my problem arises from wanting to make sure that I don't download a file that is still being uploaded, thereby resulting in an incomplete file.
First off, is there a way in cURL to make it so that the file wouldn't be available for download from the ftp site until the entire file has been uploaded?
One way around this is that I could upload files to one directory, and once they are finished uploading, then I could transfer them to a "Finished" directory on the ftp site. Then the download program would only look for files inside that "Finished" directory. However, I don't know how to transfer files within an ftp site using cURL.
Is it possible to transfer files between directories on an ftp site using cURL without having to download the file first?
And if anyone else has better ideas on how to perform this task, I'd love to hear them!
Thanks!
You can upload the files using a special name and then rename them when done, and have the download client only download files with that special "upload completed" name style.
Or you can move them between directories just as you say (which is essentially a rename as well, just changing the directory too).
With command-line curl, you can perform "raw" FTP commands after the upload with the -Q option, and you can even find a tiny example in the curl FAQ: http://curl.haxx.se/docs/faq.html#Can_I_use_curl_to_delete_rename
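A minimal sketch of that rename-on-completion idea with command-line curl (host, credentials and paths are placeholders): -Q with a leading dash sends the quoted FTP command after the transfer, so the file only appears under its final name once the upload has finished.
curl -T report.csv --user "USER:PASS" \
     "ftp://ftp.example.com/incoming/report.csv.part" \
     -Q "-RNFR incoming/report.csv.part" \
     -Q "-RNTO finished/report.csv"
The download side would then only ever look at the finished names, never the .part ones.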

Synchronizing with live server via FTP - how to FTP to different folder then copy changes

I'm trying to think of a good solution for automating the deployment of my .NET website to the live server via FTP.
The problem with using a simple FTP deployment tool is that FTPing the files takes some time. If I FTP directly into the website application's folder, the website has to be taken down whilst I wait for all the files to be transferred. What I do instead is manually FTP to a separate folder, then, once the transfer is completed, manually copy and paste the files into the real website folder.
To automate this process I am faced with a number of challenges:
I don't want to FTP all the files - I only want to FTP those files that have been modified since the last deployment. So I need a program that can manage this.
The files should be FTPed to a separate directory, then copied into the correct destination once the transfer is complete.
Correct security permissions need to be retained on the directories. If a directory is copied over, I need to be sure that the permissions will be retained (this could probably be solved by rerunning a script that applies the correct permissions).
So basically I think that the tool I'm looking for would do an FTP sync via a temporary directory.
Are there any tools that can manage these requirements in a reliable way?
I would prefer to use rsync for this purpose. But since it seems you are using Windows here, some more effort is needed: Cygwin or something similar.
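If SSH access to the server happens to be available (an assumption; host and paths below are placeholders), the rsync route could look roughly like this: only changed files go over the wire into a staging directory, and the switch into the live folder is one quick copy.
rsync -avz --delete ./publish/ deploy@www.example.com:/srv/staging/site/
ssh deploy@www.example.com 'cp -a /srv/staging/site/. /srv/www/site/'
# re-run your permissions script against the live folder afterwards if needed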

Why does CGI.pm upload an old revision of a file on a successful new file upload?

I am using CGI.pm version 3.10 for file uploads in Perl. I have a Perl script which uploads the file, and my application keeps track of different revisions of the uploaded document with a check-in/check-out facility.
Steps to reproduce:
Do a checkout (download a file) using my application (which is web based and uses Apache).
Log out of the current user session.
Log in again with the same credentials and then check in (upload) a new file.
Output:
Upload successful
Perl upload script shows the correct uploaded data
New revision of the file created
The output is correct and expected except for one case, which is the issue:
Issue:
The content of the newly uploaded file is the same as the content of the last uploaded revision in the DB.
I am using a temp folder for copying the new content, and if I print the new content in the upload script it comes out correct. I have no limit on the CGI upload size. It seems it fails somewhere in the CGI environment; it might be the version I am using. I am not using taint mode.
Can anybody help me understand what the possible reason might be?
Sounds like you're getting the old file name stuck in the file upload field. Not sure if that can happen for filefield, but this is a feature for other field types.
Try adding the -nosticky pragma, e.g., use CGI qw(-nosticky :all);. Another pragma to try is -private_tempfiles, which should prevent the user from "eavesdropping" even on their own uploads.
Of course, it could be that you need to localize (my) some variable or add -force to the filefield.
I found the issue. The destination path of the copied file was not correct. This was because one of my application's events maps the path of the copied file to a different directory, and that path is stored in the user session. This happens only when I run that event just before starting the upload script, which is why it was hard to catch. Since the upload script is designed to pick up the newly copied file from the same path, it always ends up uploading the same file to the DB as another revision, while the newly copied file is left at the new path.
Solved by mapping the correct path before the upload.
Thanks