Why is my Joomla 1.5 website giving this error? Fatal error: Allowed memory size of 134217728 bytes exhausted

My website is intermittently giving me the following error:
Fatal error: Allowed memory size of 134217728 bytes exhausted (tried to allocate 80 bytes) in /home//public_html/libraries/joomla/database/database/mysqli.php on line 478
Upon researching I found that there are lots of modules / extensions that could probably cause this.
It is a shared hosting package and following are the PHP specifications (taken from Joomla Admin):
PHP Version 5.2.17
memory_limit 512M (local) 64M (Master Value - not sure what that means)
This problem keeps recurring and I have been unable to solve it even after disabling all of the plugins we use. I am not sure what to do. Will upgrading to a better server with a higher PHP memory_limit help?
Link to my website:
bit.ly/RAKDtx

Try putting this in configuration.php:
ini_set('memory_limit', '128M');
OR put this in .htaccess file:
php_value memory_limit 128M
And maybe you can change that value to 256M if 128M doesn't work. Good luck.
And about Master value:
The value shown in the “Local” column is the memory limit you are working with. If no Local value is set, then the Master value is your memory limit.
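To double-check which limit actually applies, you can ask PHP directly (a quick sketch, not Joomla-specific; note that the command line can load a different php.ini than the web server, so the value shown in Joomla's admin PHP information page is the authoritative one for the site):
php -r 'echo ini_get("memory_limit"), PHP_EOL;'   # limit as the PHP CLI sees it
php -i | grep -i memory_limit                     # shows the local and master values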

I also had the recurring memory limit issue, so I raised it twice, from 64 to 128 and then to 256 MB (paying more to my provider); a couple of months later the problem was back. The whole site was desperately slow, too.
I disabled JV Counter, and everything's back up and running again!
Some bad coding to rewrite, buddies!

Related

Neo4j (Windows) - can't increase heap memory size for Neo4jImport tool

I tried to batch import a graph database with about 40 million nodes and 20 million relationships but I get an out-of-memory error (this has been documented already, I know). On Windows, I am using the import tool like so:
neo4jImport --into SemMedDB.graphdb --nodes nodes1.csv --nodes nodes2.csv --relationships edges.csv
I have 16 GB of RAM but Neo4j only allocates 3.5 GB of max heap memory while I still have about 11 GB of free RAM. To try to avoid the out-of-memory error, I followed some suggestions online, created a conf folder in my C:\program files\Neo4j folder, and created a neo4j-wrapper.conf file with the heap values set to:
wrapper.java.initmemory=10000
wrapper.java.maxmemory=10000
Also, I set my neo4j properties file page cache setting to:
dbms.pagecache.memory=5g
The problem is, when I restart my neo4j application and try to import again, it still says 3.5 GB of max heap space and 11 GB free RAM... why doesn't Neo4j recognize my settings?
Note: I've tried downloading the zip version of Neo4j in order to use the PowerShell version of the import tool, but I run into the same issue: I change my configuration settings and Neo4j does not recognize them.
I would really appreciate some help with this... thanks!
I cannot tell for Windows, but on Linux neo4j-wrapper.conf is not used by the neo4j-import tool. Instead you can pass additional JVM parameters using the JAVA_OPTS environment variable (again, Linux syntax here):
JAVA_OPTS="-Xmx10G" bin/neo4j-import
To validate that approach, add -XX:+PrintCommandLineFlags to the above. At the beginning of the output you should see a line similar to
-XX:InitialHeapSize=255912576 -XX:MaxHeapSize=4094601216
-XX:+PrintCommandLineFlags -XX:+UseCompressedClassPointers
-XX:+UseCompressedOops -XX:+UseParallelGC
If that one shows up, using JAVA_OPTS is the way to go.
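Putting the two together with the import command from the question (still Linux syntax, so treat it as a sketch rather than Windows-specific advice):
JAVA_OPTS="-Xmx10G -XX:+PrintCommandLineFlags" bin/neo4j-import --into SemMedDB.graphdb --nodes nodes1.csv --nodes nodes2.csv --relationships edges.csv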
I found a solution. What ultimately allowed me to change the heap size for the Neo4jImport tool was to open the neo4jImport.bat file (path is C:\Program Files\neo4j\bin) in a text editor (this required me to change permissions first) and change the "set EXTRA_JVM_ARGUMENTS=-Dfile.encoding=UTF-8" line to
set EXTRA_JVM_ARGUMENTS=-Dfile.encoding=UTF-8 -Xmx10G -Xms10G -Xmn2G
Now, when I run Neo4jImport in the neo4j shell, it shows a heap size of 9.75 GB.
Generally Neo4jImport shouldn't rely on a large heap; it will use whatever heap is available and then use whatever off-heap is available. However, some amount of "boilerplate" memory needs to be there for the machinery to work properly. Recently there was a fix (coming in 2.3.3) reducing the heap usage of the import tool, and that would certainly have helped here.

Out Of Memory Error with larger file uploads

I've had some issues with larger file uploads to my Jetty server.
Trace.
I'm uploading as multipart/form-data and fetching the file from the request using Scalatra's FileUploadSupport (as below):
class foo extends ScalatraServlet with FileUploadSupport {
  configureMultipartHandling(MultipartConfig(maxFileSize = Some(1073741824)))

  post("/upload") {
    // {1}
    ... // (validation and user login with Scentry)
    ... // (transactionally posts meta info to Elasticsearch and writes the video to the filesystem)
  }
}
I have a battery of tests and have no issue running this with smaller files (~50 MB), and even 300-400 MB files work when running the server on localhost.
However, when I host my server on a remote machine I have some transport-specific issues, and (when debugging) I never reach a breakpoint at {1}.
Researching the problem, I found this, which suggests that reuse of the same HTTP connection may be causing the issue. Following their advice I added the code below to my servlet, and on analyzing the response headers I can confirm it "took":
before("/*") {
response.addHeader("Connection", "close")
}
My research also showed some issues with having too many form keys, however the form in question has only 4 keys, and I don't see the issue on localhost or with smaller files on the remote machine.
This upload is occurring over HTTPS (in case it's relevant) with a CA-signed certificate and is accepted by Google Chrome as a secure, private connection.
When setting up the server connector, I changed the idle timeout value in case that was causing any issues:
httpsConnector.setIdleTimeout(300000)
Despite these modifications, I have yet to overcome this problem and I appreciate any advice you might have.
EDIT 1 - SOLVED:
I believed I had already assigned 4 GB of heap space and therefore that this was a nonsense error. It turns out the version I ran on localhost was launched via IntelliJ, which has its own ideas about heap size.
When running it on the remote machine with "sbt run" I had neglected to include the fork option in my build.scala. Consequently sbt ignored my JVM options (you can't set options on a running JVM, presumably) and I was running with a 300 MB heap.
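For anyone hitting the same thing, a minimal sketch of the missing piece (fork and javaOptions are standard sbt keys; the 4 GB heap is just an example value, not the exact setting used here). Either add fork := true and javaOptions += "-Xmx4G" to the build definition, or pass them on the command line:
# fork a fresh JVM for the application so the -Xmx option actually applies
sbt 'set fork := true' 'set javaOptions += "-Xmx4G"' run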
Your trace shows an out-of-memory exception. Evidently you are running out of memory when uploading the files.
Have you tried increasing your heap size? Is that the difference between your remote and local servers?

Server logs: looking for an endless redirect loop

The homepage (just the homepage) of one of my Drupal websites constantly redirects when the site is visited. It tends to happen randomly, and I don't understand why it would do this. I talked a bit with the Drupal community and it is said to be a server issue, not Drupal.
Error 310 (net::ERR_TOO_MANY_REDIRECTS): There were too many redirects.
I don't currently have cPanel access to check the server logs. I am somewhat fluent in the terminal and I have root SSH access to the server.
Where and what commands would I have to run to find and access the logs that could possibly help me figure out where to start with fixing this? Would they just be located in /var/? What would I be looking for once I get access to the logs, just a steady stream of the duplicated IP address that it keeps being redirected to?
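Not a full answer, but as a starting point these are the usual locations and commands (a sketch; the exact paths depend on the distribution and how Apache/PHP were installed):
ls /var/log/httpd/ /var/log/apache2/ 2>/dev/null    # see which log directory exists
tail -f /var/log/apache2/error.log                  # Debian/Ubuntu error log; watch it while reproducing the redirect
tail -f /var/log/httpd/error_log                    # RHEL/CentOS equivalent
grep ' / HTTP' /var/log/apache2/access.log | awk '{print $9}' | sort | uniq -c    # status codes served for the homepage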
Found out this IS a Drupal Commerce Kickstart core issue.
Found the following errors in the PHP logs:
PHP Warning: Unknown: Input variables exceeded 1000. To increase the limit change max_input_vars in php.ini
PHP Fatal error: Unsupported operand types in public_html/dev/profiles/commerce_kickstart/modules/contrib/search_api_db/service.inc on line 970
Got the redirect loop to stop after increasing max_input_vars to 9000. I feel it's more of a band-aid fix, though, so I'm taking this further into the Commerce Kickstart community.
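For reference, the change itself is a single line. In php.ini (restart Apache afterwards):
max_input_vars = 9000
Or, where the host allows per-directory overrides with mod_php (this depends on the hosting setup), the equivalent in .htaccess, in the same style as the memory_limit example further up:
php_value max_input_vars 9000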

Segmentation fault when starting G-WAN 3.12.26 32-bit on linux fc14

I have a fc14 32 bit system with 2.6.35.13 custom compiled kernel.
When I try to start G-WAN I get a "Segmentation fault". I've made no changes, just downloaded and unpacked the files from the G-WAN site.
In the log file I have:
"[Wed Dec 26 16:39:04 2012 GMT] Available network interfaces (16)"
which is not true; on the machine I have around 1k interfaces, mostly ppp interfaces.
I think the crash has something to do with detecting interfaces/IP addresses, because in the log after the above line I have 16 lines with IPs belonging to the fc14 machine and after that about 1k lines with "0.0.0.0" or "random" IP addresses.
I ran G-WAN 3.3.7 64-bit on an fc16 machine with about the same number of interfaces and had no problem; well, it still reported a wrong number of interfaces (16), but it did not crash, and in the log file I got only 16 lines, with the IP addresses belonging to the fc16 machine.
Any ideas?
Thanks
I have around 1k interfaces mostly ppp interfaces
Only the first 16 will be listed as this information becomes irrelevant with more interfaces (the intent was to let users find why a listen attempt failed).
This is probably due to the long 1K list; many things have changed internally after the allocator was redesigned from scratch. Thank you for reporting the bug.
I also confirm the comment which says that the maintenance script crashes. Thanks for that.
Note that bandwidth shaping will be modified to avoid the newer Linux syscalls, so the GLIBC 2.7 requirement will be waived.
...with a custom compiled kernel
As a general rule, check again on a standard system like Debian 6.x before asking a question: there is enough room for trouble with a known system; there's no need to add custom system components.
Thank you all for the tons(!) of emails received these two last days about the new release!
I had a similar "Segmentation fault" error; mine happens any time I go to 9+ GB of RAM. The exact same machine at 8 GB works fine, and at 10 GB it doesn't even report an error, it just returns to the prompt.
Interesting behavior... Have you tried adjusting the amount of RAM to see what happens?
(running G-WAN 4.1.25 on Debian 6.x)

Why does Apache complain that CGI.pm has panicked at line 4001 due to a memory wrap?

This error, according to the logs, is caused by a 5-year-old Perl script that merely grabs data from MySQL via a simple SQL select and displays it.
It's running on my dev machine, which is a MacBook Pro with 8 GB of RAM running the stock Apache.
Once in a while, once or twice a month, I get the following error for no apparent reason:
panic: memory wrap at /System/Library/Perl/5.10.0/CGI.pm line 4001.
Apache refuses to run the script again and only a reboot of the OS will make Apache relent. The OS says there's 3+ GB of free memory when it happens, so it's not a low-memory issue. Luckily this doesn't happen on the production Debian 5 server.
What's a memory wrap? And what causes it?
I hit this bug as well in a slightly different circumstance. PerlMonks, as ever, has just saved me probably days of work:
http://www.perlmonks.org/?node_id=823389
the problem lies in the way osx ties up other resources. a simple sleep will give the os time to close and open. restart or graceful will go in conflict.
apachectl stop
sleep 2
apachectl start
This is late, but the Perl distributed by MacPorts does not have this problem, if that is an option.
mu is too short's answer, which was unfortunately posted as a comment:
perldiag says that "panic: memory wrap" means "Something tried to allocate more memory than possible". A bit of googling suggests that this isn't a CGI.pm problem but an occasional problem with Perl 5.10 and OS X.