Samba read speeds very slow through Explorer, but OK through Firefox

I have a file server running Ubuntu 12.04 and Samba 3.6.3. A Samba share is mapped to a drive on a Windows 8 machine.
When copying a test file to a local drive (an SSD, so not a bottleneck here), the transfer is very slow through Explorer. It is similarly slow when downloading the file through Internet Explorer. When downloading through Firefox (by entering the file URI), however, it is more than 10x as fast, as the image below shows.
What's going on here? I know that Samba is not fast, but I thought that was mainly the case when dealing with lots of small files, where its request logic is very inefficient. The test file was 826 MB.

Removing the custom "socket options" line in smb.conf (the Samba configuration file) solved it for me.
It seems best to leave that option unset nowadays, since Samba will calculate optimal values itself. Firefox seemed to be either using its own SMB protocol settings or ignoring those set by the Samba server.
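For reference, the sort of hand-tuned line to look for is shown below; the exact values are only an illustration of a commonly copied tuning entry, not the asker's actual config. After deleting it (or leaving socket options unset), restart Samba, e.g. with sudo service smbd restart on Ubuntu 12.04.

[global]
# A legacy tuning line like this caps the socket buffers at 8 KB and can
# badly hurt throughput on modern kernels; removing it lets Samba
# auto-tune the values itself.
socket options = TCP_NODELAY SO_RCVBUF=8192 SO_SNDBUF=8192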

Related

crawl-300d-2M-subword.zip corrupted or cannot be downloaded

I am trying to use the fastText model crawl-300d-2M-subword.zip from the official page on my Windows machine, but the download fails within the last few KB.
I managed to download the zip file onto my Ubuntu server using wget, but the archive turns out to be corrupted whenever I try to unzip it. Example of what I am getting:
unzip crawl-300d-2M-subword.zip
Archive: crawl-300d-2M-subword.zip
inflating: crawl-300d-2M-subword.vec
inflating: crawl-300d-2M-subword.bin bad CRC ff925bde (should be e9be08f7)
It is always the file crawl-300d-2M-subword.bin, the one I am interested in, that has problems during unzipping.
I have tried both ways many times with no success. It seems to me that no one has had this issue before.
I've just downloaded and unzipped that file with no errors, so the problem is likely unique to your system's configuration, tools, or its network path to the download servers.
One common problem that's sometimes not prominently reported by a tool like wget is a download that keeps ending early, resulting in a truncated local file.
Is the zip file you received exactly 681,808,098 bytes long? (That's what I get.)
What if you try another download tool instead, like curl?
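Both checks are quick to run from a shell. The URL below is the download link listed on the fastText page as I recall it, so verify it against the official page before relying on it.

# 1. Is the local copy the full 681,808,098 bytes?
ls -l crawl-300d-2M-subword.zip

# 2. Test the archive's CRCs without extracting anything
unzip -t crawl-300d-2M-subword.zip

# 3. Retry with curl (-C - resumes a partial file, -L follows redirects)
curl -L -C - -O https://dl.fbaipublicfiles.com/fasttext/vectors-english/crawl-300d-2M-subword.zip

# wget can likewise resume a truncated download with -c
wget -c https://dl.fbaipublicfiles.com/fasttext/vectors-english/crawl-300d-2M-subword.zip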
Sometimes if repeated downloads keep failing in the same way, it's due to subtle misconfiguration bugs/corruption unique to the network path from your machine to the peer (download origin) machine.
Can you successfully download the zip file (at its full size, per the above) somewhere else entirely, then transfer it from that secondary location to where you really want it? (Such a relay between different endpoints might not trigger the same problems.)
If you're having problems on both a Windows machine and an Ubuntu server, are they both on the same local network, and perhaps subject to the same network issues (either bugs, or policies that cut a long download short)?

Keep VS Code from creating new directories when file system unmounts

I am running VS Code on a Windows machine, inside Ubuntu on WSL 2, which mounts a remote drive from a Linux server using FUSE. This lets me edit comfortably in VS Code while I run the documents on the server, and it generally works great. However, if I am editing and my computer briefly loses its Internet connection, the FUSE mount is lost. If I am in the middle of editing and don't notice, then when I save, VS Code sees nothing in the directory, creates a chain of directories, and saves the file locally, which is not what I want.
For example, I might be editing a file under the mount folder, which is the remote mount, say mount/somedir/somedir2/someFile.txt. If the Internet connection drops, the remote filesystem is unmounted. If I then press Ctrl-S, VS Code sees only an empty folder called mount. It creates a somedir directory, then a somedir2 directory, then a someFile.txt file, and saves there. It is often some time before I catch the problem, and while it is resolvable, I end up with two versions of the same file (one on my computer and one on the server); reconciling them is a pain and, if I do it wrong, can cost me work and data (which has happened).
Is there a way to tell VS Code to give an error message when attempting to save a file to a suddenly nonexistent directory, rather than creating it automatically for me? That would make my life much easier.
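This doesn't answer the VS Code side, but it illustrates the mechanism: once the FUSE mount drops, the mount point is just an ordinary empty local directory, which is why a save silently recreates the whole path. A quick sanity check from a WSL shell, assuming the mount point is ~/mount (a placeholder for your actual path):

# mountpoint exits non-zero when ~/mount is no longer a mounted filesystem
mountpoint -q ~/mount || echo "WARNING: ~/mount is not mounted - saves will go to the local disk"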

Jupyter dashboard freezes when opening path which contains many files

I'm using the Jupyter dashboard to browse files on a remote Linux server. But it often becomes slow or even freezes when opening a directory containing many files (maybe thousands). Is my problem common? Are there any extensions to solve this, maybe by letting the user browse page by page?
Thank you for answering my question.

Corrupted files when downloaded from IIS

All files on my IIS intranet become corrupted when downloaded.
If I open them on the server, they are OK. If I copy them to my PC (transferring the file with File Explorer), they are OK. But if I create an intranet page with a link to them and then download them with my browser, they arrive on my PC slightly larger and corrupted.
It seems like IIS is corrupting them during the file transfer.
Thanks
Are you calling binmode on your output file handles? If you don't set the output streams to binary mode, then plain newlines (\n, or more precisely \x0a) will be converted to Windows-style CR-LF (\x0d\x0a); that conversion would explain both the size increase and the corruption.
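One quick way to confirm that theory from the client side (the file names below are placeholders): the downloaded copy should be larger by exactly one byte per newline in the original, and the first difference should be an inserted carriage return.

# Compare sizes: CR-LF conversion adds one byte for every newline
ls -l original.bin downloaded.bin

# Report the first differing byte; with newline mangling it will be an
# inserted 0x0d (carriage return) in the downloaded copy
cmp downloaded.bin original.bin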
Have you checked that the files you are trying to download are supported MIME types in IIS? If the MIME types aren't set, you can add them fairly easily in IIS.
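For example, a missing static-content MIME type can be registered on the server from an elevated command prompt with appcmd; the .foo extension and MIME type below are placeholders for whatever file type is affected.

REM Register a MIME type so IIS will serve files with this extension
%windir%\system32\inetsrv\appcmd.exe set config /section:staticContent ^
 /+"[fileExtension='.foo',mimeType='application/octet-stream']"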

Online space to store files using the command line

I require a small amount of online space (free) where I can upload and download a few files automatically using a script.
The space requirement is around 50 MB.
It should be something that can be automated, so I can set it to run without manual interaction, i.e. no GUI.
I have a dynamic IP and no technical background in setting up a server.
Any help would be appreciated. Thanks.
A number of online storage services provide 1-2 GB of space for free, and several of them have command-line clients. For example, SpiderOak, which I use, has a client that can run in headless (non-GUI) mode to upload files, and there's even a way to download files from it with wget or curl.
You just set things up once in GUI mode, then put files into the configured directory and run SpiderOak with the right options; the files get uploaded. Then you either download ('restore') all or some of the files via another SpiderOak call, or fetch them via HTTP.
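A rough sketch of that workflow is below. The backup directory name and the batch-mode flag are from memory of the old SpiderOak client and may differ by version (check SpiderOak --help), and the share URL is a placeholder.

# Drop the file into a directory already selected for backup in the GUI,
# then run the client once in non-GUI batch mode (e.g. from cron)
cp report.tar.gz ~/SpiderOak_backup/
SpiderOak --batchmode

# Later, fetch it again via a share/download link with curl or wget
curl -O https://example.com/your-spideroak-share-link/report.tar.gz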
About the same applies to Dropbox, but I have no experience with that.
www.bshellz.net gives you a free Linux shell account. I think everyone gets 50 MB, so you're in luck!