Memory Limit Hit - railstutorial.org

Memory Limit Hit following railstutorial.org
Hi,
I am following the Ruby on Rails Tutorial using the Cloud9 IDE; however, I keep getting a "Memory Limit Hit" warning as I work through it. I am on the free tier, which comes with 512 MB of RAM, and memory usage is constantly in the red zone.
I have tried killing some processes, but as soon as I resume the tutorial the error comes back.
Thanks
Michael

Short answer: Spring has a bug that causes it to use up memory by spawning too many processes.
From your c9 shell, run
pkill -9 -f spring
and restart your rails server.
Longer answer: the tutorial describes this in some detail in Chapter 3.
Box 3.4 on this page https://www.railstutorial.org/book/static_pages covers it nicely.
Incidentally, I found that I couldn't fix this by simply restarting my c9 session: Cloud9 is very good at persisting the state of your virtual server, including all of the extra Spring processes.
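If you want to check the pileup yourself, something like this works from the c9 shell (a sketch; DISABLE_SPRING is an environment variable recognized by the spring gem):
ps aux | grep '[s]pring'     # list any running Spring processes
pkill -9 -f spring           # kill them all
DISABLE_SPRING=1 rails server -b $IP -p $PORT
Disabling Spring trades slightly slower rails/rake commands for predictable memory use, which is a reasonable trade on a 512 MB box.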

Related

CPU usage of JBoss JVM goes up to 99% and stays there

I am load testing my application with JMeter, and I have a situation where the CPU usage of the application's JVM goes to 99% and stays there. The application still works; I am able to log in and do some activity, but it is understandably slower.
Details of environment:
Server: AMD Opteron, 2.20 GHz, 8 cores, 64-bit, 24 GB RAM. Windows Server 2008 R2 Standard
Application server: jboss-4.0.4.GA
JAVA: jdk1.6.0_25, Java HotSpot(TM) 64-Bit Server VM
JVM settings:
-Xms1G -Xmx10G -XX:MaxNewSize=3G -XX:MaxPermSize=12G -XX:+UseConcMarkSweepGC -XX:+UseParNewGC -XX:+UseCompressedOops -Dsun.rmi.dgc.client.gcInterval=1800000 -Dsun.rmi.dgc.server.gcInterval=1800000
Database: MySQL 5.6 (on a different machine)
Jmeter: 2.13
My scenario is that I have 20 users log into the application and perform normal activity that should not generate a huge load. Some minutes into the process, the CPU usage of the JBoss JVM goes up and never comes back down, and it stays like that until the JVM is killed.
I found a few posts about CPU at 100%, but none of them matched my situation, and I could not find a solution.
Any suggestions on what to do would be great.
Regards,
Sreekanth.
To understand the root cause of the high CPU utilization, we need to look at the CPU data and thread dumps at the same time.
Capture 5-6 thread dumps while the issue is occurring, and capture CPU consumption on a thread-by-thread basis over the same window.
Generally the root cause of this kind of CPU issue is a thread-level problem: BLOCKED threads, long-running threads, deadlocks, long-running loops, and so on. It can usually be pinned down by going through the thread stacks.
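As a sketch of the capture step (PID 1234 stands in for your JBoss java process; jstack ships with the JDK):
jstack -l 1234 > threaddump_1.txt
(wait about 10 seconds, then repeat for threaddump_2.txt and so on, until you have 5-6 dumps)
For the thread-by-thread CPU side on Windows, Process Explorer shows per-thread CPU for the java process; take the TID of the hottest thread, convert it to hex, and search the dumps for the matching nid=0x... entry to find the culprit stack.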

Meteor crashing

My Meteor app is crashing with the following error:
Unexpected mongo exit code null. Restarting.
=> Exited from signal: SIGKILL
/home/ron/.meteor/packages/meteor-tool/.1.1.3.4sddkj++os.linux.x86_64+web.browser+web.cordova/mt-os.linux.x86_64/dev_bundle/lib/node_modules/fibers/future.js:245
throw(ex);
^
Error: Unable to allocate ArrayBuffer.
This is followed by a call-stack trace.
What is causing this?
Thanks!
This error is probably caused by your operating environment. If it's not able to allocate an ArrayBuffer, you may not have enough RAM, or some other service may be blocking Meteor from allocating memory.
This error can occur on the smallest DigitalOcean droplet, if that's what you're using.
It's generally recommended that you have 1 GB of free RAM for Meteor to work properly in development mode.
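Also worth noting: the "Exited from signal: SIGKILL" line suggests the kernel's out-of-memory (OOM) killer may have terminated the bundled mongod. On Linux you can check the kernel log for that (the exact message varies by kernel version):
dmesg | grep -i 'killed process'
grep -i 'out of memory' /var/log/syslog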
Something you could use is a swap file to supplement your RAM.
Real RAM can be supplemented with virtual memory, although it won't be as fast. On Linux this OS feature is normally provided by a swap partition; on Windows, by a paging file. You can also get it on Linux without repartitioning by using swapspace (or by creating a traditional swap file):
sudo apt-get install swapspace
Whichever option you choose will create swap for you, and it should let your Meteor app start up.
Just be aware that this will be slower than real RAM, but it will definitely work.
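If you would rather create the swap file by hand, here is a minimal sketch (assuming you want 1 GB of swap at /swapfile):
sudo fallocate -l 1G /swapfile    # reserve 1 GB (use dd if fallocate is unavailable)
sudo chmod 600 /swapfile          # swap files should not be world-readable
sudo mkswap /swapfile             # format it as swap
sudo swapon /swapfile             # enable it immediately
free -h                           # verify the new swap shows up
Add an entry to /etc/fstab if you want it to survive reboots.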

Seeking examples of scripts/syntax for testing MongoDB with YCSB

I'm testing the performance of MongoDB on a single system using YCSB. I'd like to get a sense of the performance using SSDs compared to spinning disks.
I have CentOS, MongoDB, and YCSB installed. I have stumbled around a bit with basic examples, but I have yet to see a step-by-step guide that goes from this setup through loading, running, and reviewing the results. I keep seeing bits and pieces, but not enough to get me up and running.
If anyone could please provide a command line for these steps, it would be most appreciated!
Thanks
Here's a guide on how to run the Yahoo! Cloud Serving Benchmark (YCSB) against MongoDB.
https://github.com/samanca/YCSB/tree/master/mongodb
https://github.com/brianfrankcooper/YCSB/wiki
A working example using Python and Java to test MongoDB:
https://github.com/richcar58/MongoDBTools/blob/master/RunYcsb/runycsb/fabfile.py
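And in case it helps, here is a minimal command-line sketch of the load/run/review cycle, assuming YCSB is unpacked in the current directory, mongod is listening on localhost:27017, and the target database is named ycsb (an example name):
# load the data set using workloada (a 50/50 read/update mix)
./bin/ycsb load mongodb -s -P workloads/workloada -p mongodb.url=mongodb://localhost:27017/ycsb -p recordcount=1000000 > load.log
# run the workload against the loaded data
./bin/ycsb run mongodb -s -P workloads/workloada -p mongodb.url=mongodb://localhost:27017/ycsb -p operationcount=1000000 > run.log
# review: overall throughput and per-operation latencies are summarized at the end
tail -n 40 run.log
Repeat the same pair of commands with the data files on the SSD and then on the spinning disk (e.g., by pointing mongod's dbpath at each device) to get your comparison.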

How can I find out why mongodb crashes?

My mongodb instance, running on an Ubuntu machine, has recently started crashing at random times; it usually stays up for a day or so. The mongo log has no trace of the crash, just the last operation and then the restart of the server. I need some guidance in finding the problem, since the log doesn't have any information. Is there another log I should be looking at?
The setup is fairly straightforward: a single instance (no sharding) of mongodb 2.2 running on an Ubuntu box with a pretty much default install.
The only recent change, which seems to coincide with this in timing, is that I replaced some simple map-reduce executions with the aggregation framework.
Thanks.
There is an unofficial MongoDB tool called mtools; try downloading it and running it against your logs. It can tell you why the server went down, how many times it restarted, and many more details.
You can get it on GitHub.
The logs can show the actual cause of the crash. The common path is:
/var/log/mongodb/mongod.log
or use this command
db.adminCommand( { getLog : "global" } )
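A minimal sketch of the mtools route, assuming Python/pip is available and the default log path:
pip install mtools
mloginfo /var/log/mongodb/mongod.log --restarts    # lists each restart it finds in the log
Also, since your mongod.log shows no trace of the crash, a very common culprit is the Linux OOM killer, which leaves its evidence in the kernel log rather than in mongod's own log:
grep -i 'out of memory' /var/log/syslog
dmesg | grep -i 'killed process'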

How to grab a full memory dump of a process with large memory usage

I am hosting IIS-based web service applications on a Windows 2008 64-bit system running on a quad-core machine with 8 GB of RAM. I have run into a couple of instances where w3wp.exe was using 7.6 GB of memory and nothing else on the system was responding, including RDP. Right-clicking the process in Task Manager and creating a dump froze the system and all of its threads for a long time (close to 30 minutes). When the freeze occurred during off hours, we let the dump run for a while (close to an hour), but it still didn't complete. In the interest of getting the system up, we had to kill IIS.
I tried other tools such as procexp and DebugDiag to create a full memory dump, and all had the same result.
So, what tool does the community use to grab dump files quickly, or without freezing all the threads? I realize the latter might be a rhetorical question. But what are the options for generating such a large dump file without locking up the system for a long time?
IMO you shouldn't have to wait until the process memory grows to 8 GB; with something like 3-4 GB you should already be able to detect the memory leak.
Procdump has an option based on a memory threshold:
-m Memory commit threshold in MB at which to create a dump of the process.
I would use this option to dump the memory of the process.
An SSD would also help the dump write faster.
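For example, a minimal sketch with hypothetical paths (-ma requests a full memory dump; the process can also be specified by PID if several w3wp.exe instances are running):
procdump -ma -m 4000 w3wp.exe c:\dumps\w3wp_leak.dmp
This arms procdump to write the dump automatically once committed memory crosses about 4 GB, so you capture the leak well before the process reaches 7.6 GB and the system stops responding.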
WPA, a.k.a. xperf (http://msdn.microsoft.com/en-us/performance/cc825801.aspx), is a powerful tool for diagnosing applications. You will get the call stack of the culprit allocation. You don't have to collect a dump, it is non-invasive, and it does not put much load on production systems.
Complete step-by-step information is available here: http://msdn.microsoft.com/en-us/library/ff190906(v=VS.85).aspx
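For an already-running process, a minimal sketch based on the heap-tracing steps in that MSDN article (1234 is a stand-in for the w3wp.exe PID):
xperf -start HeapSession -heap -Pids 1234 -stackwalk HeapAlloc+HeapRealloc
(reproduce the memory growth, then stop the session and merge the trace)
xperf -stop HeapSession -d heapdata.etl
Opening heapdata.etl in WPA/xperfview shows which call stacks own the outstanding heap allocations.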