Online space to store files using the command line

I require a small amount of online space (free) where I can
upload/download a few files automatically using a script.
The space requirement is around 50 MB.
It should be possible to automate this so I can set
it to run without manual interaction, i.e. no GUI.
I have a dynamic IP and no technical knowledge of setting up a server.
Any help would be appreciated. Thanks.

A number of online storage services provide 1-2 GB of space for free, and several of them have command-line clients. For example, SpiderOak, which I use, has a client that can run in a headless (non-GUI) mode to upload files, and there is even a way to download files from it with wget or curl.
You just set things up once in GUI mode, then put files into the configured directory and run SpiderOak with the right options; the files get uploaded. You then either download ('restore') all or some of the files via another SpiderOak call, or fetch them via HTTP.
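A minimal sketch of that workflow, assuming the classic SpiderOak client's --batchmode headless option and an HTTP share link for retrieval (both the flag and the URL format have varied between client versions, so check --help; the share URL below is illustrative):

# upload: sync the directories configured in the GUI, then exit (no GUI)
SpiderOak --batchmode
# download: fetch a file over HTTPS from a SpiderOak share link (illustrative URL)
wget https://spideroak.com/browse/share/yourname/yourshare/somefile.txt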
About the same applies to Dropbox, but I have no experience with that.

www.bshellz.net gives you a free shell account running Linux. I think everyone gets 50 MB, so you're in luck!
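Assuming the shell account is reachable over SSH, a minimal sketch of scripted upload/download with scp (hostname, username, and paths are illustrative):

# upload a file to the shell account
scp report.txt user@bshellz.net:backup/
# download it back later, e.g. from a cron job
scp user@bshellz.net:backup/report.txt .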


crawl-300d-2M-subword.zip corrupted or cannot be downloaded

I am trying to use the fastText model crawl-300d-2M-subword.zip from the official page on my Windows machine, but the download fails in the last few KB.
I managed to download the zip file onto my Ubuntu server using wget, but the zipped file turns out to be corrupted whenever I try to unzip it. Example of what I get:
unzip crawl-300d-2M-subword.zip
Archive:  crawl-300d-2M-subword.zip
  inflating: crawl-300d-2M-subword.vec
  inflating: crawl-300d-2M-subword.bin
  bad CRC ff925bde  (should be e9be08f7)
It is always the file crawl-300d-2M-subword.bin, the one I am interested in, that has problems in the unzipping.
I have tried both approaches many times with no success. It seems to me no one has had this issue before.
I've just downloaded & unzipped that file with no errors, so the problem is likely unique to your system's configuration, tools, or its network path to the download servers.
One common problem that's sometimes not prominently reported by a tool like wget is a download that keeps ending early, resulting in a truncated local file.
Is the zip file you received exactly 681,808,098 bytes long? (That's what I get.)
What if you try another download tool instead, like curl?
Sometimes, if repeated downloads keep failing in the same way, it's due to subtle misconfiguration bugs or corruption unique to the network path from your machine to the peer (download origin) machine.
Can you do a successful download of the zip file (of full size per above) to anywhere else?
Then, transfer it from that secondary location to where you really want it? (Such a relay between different endpoints might not trigger the same problems.)
If you're having problems on both a Windows machine and an Ubuntu server, are they both on the same local network, perhaps subject to the same network issues – either bugs, or policies that cut a particular long download short?
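For example, a hedged sketch of a resumable download plus the checks above, using the URL the official page linked to at the time of writing (verify it there first):

# resumable download: -L follows redirects, -C - continues a partial file
curl -L -C - -O https://dl.fbaipublicfiles.com/fasttext/vectors-english/crawl-300d-2M-subword.zip
# compare the local size against the expected 681,808,098 bytes
ls -l crawl-300d-2M-subword.zip
# test all CRCs in the archive without extracting anything
unzip -t crawl-300d-2M-subword.zip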

Where are JupyterLite notebooks located locally on Windows?

I am using JupyterLite, which is a JupyterLab distribution that runs entirely in the browser.
However, after clearing the browser history, the files are no longer visible.
Please let me know how I can retrieve the *.ipynb files from my Windows machine.
I have already checked %AppData% and I don't see any *.ipynb files.
The files are stored in, well... the browser. Specifically, in IndexedDB or localStorage. This means that the physical location on disk depends entirely on the browser that you use, rather than on the operating system, and will likely be inaccessible (for an average user) without decoding binary blobs.
For example, in Chrome you can check the path to the application data using chrome://version/ (under Profile Path), and in that directory there should be an IndexedDB folder. Then you need to find a sub-folder corresponding to the domain from which you accessed JupyterLite, for example https_jupyterlite.readthedocs.io_0.indexeddb.leveldb; there you will find LevelDB database files with the .ldb extension and a MANIFEST file (with the pointer to the current version in the CURRENT file). The details of how to extract the blobs are outside the scope of this answer, but have a look at How to access Google Chrome's IndexedDB/LevelDB files?.
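For example, on Windows with a default Chrome install, a hedged sketch of locating those folders from the command line (profile paths vary; confirm yours via chrome://version/ first):

:: list the per-origin IndexedDB folders of the default Chrome profile
dir "%LocalAppData%\Google\Chrome\User Data\Default\IndexedDB"
:: then look for a sub-folder such as https_jupyterlite.readthedocs.io_0.indexeddb.leveldb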
However, you can use files from your file system directly in JupyterLite, without worrying about in-browser storage, via the jupyterlab-filesystem-access extension, which uses the File System Access API; note that this API is not available in Firefox yet.
As noted by @Wayne, all of this is still quite experimental (both as in "using the newest browser APIs" and as in "the team of developers is still figuring out the way forward, please help by providing kind feedback and contributing").

NetBeans - Open remote folder/new project

I have been using NetBeans for several months now and like it a lot. I am trying to find a way to create a project which accesses live files on my server to make changes. When I create a project using a remote source, it starts downloading all the server files to my computer. This would be just fine, except for the fact that (a) the server has a few gigs of files on it and (b) there are two of us who will be making changes on the server.
In the past, I have worked with IDEs that just open an FTP or SFTP connection, download the file you want to edit, and then upload that file back to the server when you save it. Preferably, this is what I would like NetBeans to do.
I have tried adding an FTP folder in Windows, but NetBeans won't open it. I have tried using Swish and setting up an SFTP folder, but NetBeans won't find the Swish folder at all.
On a side note, I understand that what I am doing is horrible practice, but it is a small site and I am usually the only one working on it. I haven't worked on the website in the past several weeks and just thought it would be easier to get direct access than to re-download the entire server's worth of code/images/videos/etc. Any help would be appreciated.
NetBeans does not support what you want to do. However, if you put your site under Subversion/Git (revision control), you could check out the content, modify it locally, and push the modifications back to the remote.
This would also help you avoid code clashes when your friends work on the website too.
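A minimal sketch of that approach, assuming SSH access to the server (all hostnames and paths are illustrative, not NetBeans-specific): a bare repository plus a post-receive hook that checks pushed content out into the web root.

# on the server (one-time setup)
git init --bare ~/site.git
printf '#!/bin/sh\ngit --work-tree=/var/www/site --git-dir=$HOME/site.git checkout -f\n' > ~/site.git/hooks/post-receive
chmod +x ~/site.git/hooks/post-receive

# on your machine: clone, edit locally, push to go live
git clone user@example.com:site.git
cd site
# ...edit files, then:
git add -A && git commit -m "Update pages" && git push origin master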
Actually, NetBeans supports this for PHP projects.
Just choose:
PHP Application from Remote Server
(but Git is the best solution anyway, as it gives you version control as well; the above is useful if you want your server files to be updated when you just press Ctrl+S)

Synchronizing with live server via FTP - how to FTP to a different folder, then copy changes

I'm trying to think of a good solution for automating the deployment of my .NET website to the live server via FTP.
The problem with using a simple FTP deployment tool is that FTPing the files takes some time. If I FTP directly into the website application's folder, the website has to be taken down while I wait for the files to all be transferred. What I do instead is manually FTP to a separate folder, then, once the transfer is completed, manually copy and paste the files into the real website folder.
To automate this process I am faced with a number of challenges:
I don't want to FTP all the files - I only want to FTP those files that have been modified since the last deployment. So I need a program that can manage this.
The files should be FTPed to a separate directory, then copied and pasted into the correct destination once complete.
Correct security permissions need to be retained on the directories. If a directory is copied over, I need to be sure that the permissions are retained (this could probably be solved by rerunning a script that applies the correct permissions).
So basically, I think the tool I'm looking for would do an FTP sync via a temporary directory.
Are there any tools that can manage these requirements in a reliable way?
I would prefer to use rsync for this purpose. But since you seem to be using Windows here, some more effort is needed: Cygwin or something similar.
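A hedged sketch of that staged approach with rsync over SSH (hosts and paths are illustrative; under Cygwin the same commands apply):

# 1) push only the changed files into a staging directory on the server
rsync -avz --delete ./site/ user@example.com:/var/www/staging/
# 2) server-side, mirror staging into the live folder (a fast local copy)
ssh user@example.com 'rsync -a --delete /var/www/staging/ /var/www/live/'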

Web Farm file distribution

I'm looking for a way to move files from my machine to several servers on a web farm.
I currently use Beyond Compare to move the files over; one BC session for each server on the farm. I'm fine with this because BC is fast and I like the control I have.
Our business gave me a new requirement: automatic distribution of image files. I've read a little about DFS, but I'm not sure that is the route I want to take. I want the files to actually end up on the servers.
Are there any tools for this out there? A scripting option, perhaps?
We use Robocopy to copy identical configuration files to all servers in a Citrix farm; it works great.
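A hedged sketch of how that can be scripted in a batch file (server names and shares are illustrative; note /MIR mirrors the source and deletes extra files on the target, so aim it carefully):

:: fan one source directory out to every server in the farm
for %%S in (web01 web02 web03) do robocopy C:\build\images \\%%S\wwwroot\images /MIR /R:2 /W:5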
Have a look into MSDeploy and MSBuild; there may also be a newer version of MSDeploy available.
At my workplace I think we use WanSync, though I couldn't be 100% sure because I'm not in that department.