Hugo Server not rebuilding on file changes - ubuntu-16.04

I have a project with a folder size of around 1.6 GB. When I try to build the project by running the following (on an Ubuntu server with 8 GB of memory):
hugo server --bind=0.0.0.0
Watching for changes in /root/hugo/{content,layouts,static}
Watching for config changes in /root/hugo/config.toml
Serving pages from memory
Running in Fast Render Mode. For full rebuilds on change: hugo server --disableFastRender
Web Server is available at http://localhost:1313/ (bind address 0.0.0.0)
Press Ctrl+C to stop
It takes around 20 minutes and 7 GB of RAM to start, and after that, if I change a file (e.g. index.md in the content folder), it does not rebuild.
On the same server, if I build a sample project and change a file, it shows:
Change detected, rebuilding site ..
What may be the reason? Is it because of the huge memory consumption?
Thanks.

The memory consumption sounds like the probable cause. For each .md file, Hugo will generate several .html files. There are the content pages themselves, the tag and category pages, and the navigation-type pages that redirect to the content pages from equivalent URLs. The generated HTML files are also generally bigger than the input .md files.
On top of that, you have the memory which is being used by Hugo for the conversion, and also the memory used by the rest of your OS and your other apps. I wouldn't be at all surprised if all of that took you over 8GB of memory usage, and once you go over that, your OS will start swapping memory out to the hard disk, which will slow your computer down considerably.
How much space does your site take up on disk when you render it normally with hugo instead of hugo server? That will give you an idea of how much memory you would be taking up.
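To get that figure, a minimal sketch (assuming the default public/ output directory) is to run a plain build and measure the result:
hugo
du -sh public/
Whatever du reports is a rough lower bound on what hugo server has to hold when it serves pages from memory.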

Related

crawl-300d-2M-subword.zip corrupted or cannot be downloaded

I am trying to use the fastText model crawl-300d-2M-subword.zip from the official page on my Windows machine, but the download fails within the last few KB.
I managed to download the zip file onto my Ubuntu server using wget, but the zipped file is corrupted whenever I try to unzip it. Example of what I am getting:
unzip crawl-300d-2M-subword.zip
Archive: crawl-300d-2M-subword.zip
inflating: crawl-300d-2M-subword.vec
inflating: crawl-300d-2M-subword.bin bad CRC ff925bde (should be e9be08f7)
It is always the file crawl-300d-2M-subword.bin, which is the one I am interested in, that has problems during unzipping.
I have tried both approaches many times, with no success. It seems to me that no one has had this issue before.
I've just downloaded and unzipped that file with no errors, so the problem is likely unique to your system's configuration, tools, or its network path to the download servers.
One common problem that's sometimes not prominently reported by a tool like wget is a download that keeps ending early, resulting in a truncated local file.
Is the zip file you received exactly 681,808,098 bytes long? (That's what I get.)
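On the Ubuntu server you can check the exact byte count and test the archive without extracting it, for example:
ls -l crawl-300d-2M-subword.zip
unzip -t crawl-300d-2M-subword.zip
unzip -t reads every entry and reports any CRC mismatches, so a truncated or corrupted download shows up immediately.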
What if you try another download tool instead, like curl?
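For instance, curl can resume a partial download, which helps if the connection keeps dropping near the end; the URL below is the one listed on the official fastText downloads page, so double-check it there before relying on it:
curl -L -C - -o crawl-300d-2M-subword.zip https://dl.fbaipublicfiles.com/fasttext/vectors-english/crawl-300d-2M-subword.zip
Here -L follows redirects, -C - resumes from wherever the previous attempt stopped, and -o names the output file.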
Sometimes if repeated downloads keep failing in the same way, it's due to subtle misconfiguration bugs/corruption unique to the network path from your machine to the peer (download origin) machine.
Can you do a successful download of the zip file (of full size per above) anywhere else, and then transfer it from that secondary location to where you really want it? Such a relay between different endpoints might not trigger the same problems.
If you're having problems on both a Windows machine and an Ubuntu server, are they both on the same local network, perhaps subject to the same network issues (either bugs, or policies that cut a particularly long download short)?

Jupyter dashboard freezes when opening path which contains many files

I'm using the Jupyter dashboard to browse files on a remote Linux server, but it often becomes slow or even freezes when opening a directory containing many files (maybe thousands). Is my problem common? Are there any extensions to solve this, maybe by letting the user browse page by page?
Thank you for answering my question.

Changes in conf/server.xml does not seem to have any effect during runtime

Here's what I know:
When uploading files given by users, we should put them in a folder outside the deployment folder. Let me call it D:\uploads.
We should (somehow) add that folder (D:\uploads) as a web app context.
Here's what I did:
I upload my files to the folder D:\uploads.
I tried adding the web app context as mentioned here, by adding the following line to TOMCAT_DIR/conf/server.xml:
<Context docBase="D:\uploads" path="/uploads"/>
But that doesn't have any effect. When visiting http://localhost:8080/uploads/file.png or http://localhost:8080/uploads I get an HTTP Status 404 error.
So what I want to know:
What did I do wrong? How can I add my upload folder to Tomcat?
Is there a better approach when it comes to uploading files? I'm wondering what I should change if I want to deploy my application to another server where there's no D:\uploads.
Change the docBase attribute. Use D:/uploads (with slash) instead of D:\uploads (with backslash).
When dealing with files in Java, you can safely use / (slash, not backslash) on all platforms.
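In other words, the element above would become (only the path separator changes):
<Context docBase="D:/uploads" path="/uploads"/>
Also note that conf/server.xml is only read when Tomcat starts, so restart Tomcat after editing it.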
Regarding the differences you mentioned in the comments when starting Tomcat from the IDE versus from bin/startup.bat: it's very likely that when you start Tomcat from the IDE, it is not using the same context.xml your standalone Tomcat is using. Just review the Tomcat settings in the IDE.
How to store uploaded files is a common topic on Stack Overflow. Just look around and you'll be surprised at how popular this topic is.
If you aren't happy storing your files in D:/uploads, or other servers will need to access the files, you could consider storing them somewhere on your network. Depending on your requirements, you can have one dedicated server to store your files, or just share the folder that contains the files on your current server. The right decision will always depend on your requirements.

Samba read speeds very slow through Explorer, but OK through Firefox

I have a file server running Ubuntu 12.04 and Samba 3.6.3. A Samba share is mapped to a drive on a Windows 8 machine.
When copying a test file to a local drive (an SSD, so not a bottleneck here), it is very slow when done through Explorer. It is similarly slow when downloading the file through Internet Explorer. When downloading through Firefox (by entering the file URI), however, it is more than 10x as fast.
What's going on here? I know that Samba is not fast, but I thought that was mainly the case when dealing with lots of small files, where its request logic is very inefficient. The test file was 826 MB.
Removing the custom "socket options" line in smb.conf (the Samba configuration file) solved it for me.
It seems that it's best to leave that option unset nowadays, since Samba will calculate optimal values itself. Firefox seemed to be either using its own SMB protocol settings, or ignoring those set by the Samba server.
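For reference, the kind of line to look for in the [global] section of smb.conf is something like the following (the exact values vary, since people copy them from different tuning guides); removing it, or commenting it out as shown, lets Samba pick its own defaults:
[global]
;   socket options = TCP_NODELAY SO_RCVBUF=8192 SO_SNDBUF=8192
After editing smb.conf, restart the Samba daemon (e.g. sudo service smbd restart on Ubuntu) for the change to take effect.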

online space to store files using commandline

I require a small amount of online space (free) where I can upload/download a few files automatically using a script. The space requirement is around 50 MB.
This should be something I can automate, so that it runs without manual interaction, i.e. no GUI.
I have a dynamic IP and no expertise in setting up a server.
Any help would be appreciated. Thanks.
A number of online storage services provide 1-2 GB of space for free, and several of those have command-line clients. For example, SpiderOak, which I use, has a client that can run in a headless (non-GUI) mode to upload files, and there's even a way to download files from it with wget or curl.
You just set things up once in GUI mode, then put files into the configured directory and run SpiderOak with the right options; the files get uploaded. Then you either download ('restore') all or some of the files via another SpiderOak call, or get them via HTTP.
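As a rough sketch of what an automated run could look like (the flag names --headless and --batchmode should be verified against your installed client's --help output, and ~/SpiderOak/backup is just a hypothetical watched directory):
cp report.txt ~/SpiderOak/backup/
SpiderOak --headless --batchmode
The idea is simply to drop files into the directory the GUI setup is watching, then invoke the client once in non-GUI mode so it syncs and exits, which makes it easy to call from cron.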
About the same applies to Dropbox, but I have no experience with that.
www.bshellz.net gives you a free shell running Linux. I think everyone gets 50 MB, so you're in luck!