PermGen space almost full, using JBoss 4.2.2GA - jboss

I've got OutOfMemoryError: PermGen space, and I've tried many times to change the setting in jboss/bin/run.conf, but I still can't see any change after restarting JBoss.
I'm using JBoss 4.2.2GA.
OS: Linux (CentOS)
JVM: 1.5.2 HotSpot Server, 64-bit
Please, any suggestions?
Heap Configuration:
MinHeapFreeRatio = 40
MaxHeapFreeRatio = 70
MaxHeapSize = 1073741824 (1024.0MB)
NewSize = 2686976 (2.5625MB)
MaxNewSize = -65536 (-0.0625MB)
OldSize = 1835008 (1.75MB)
NewRatio = 2
SurvivorRatio = 8
PermSize = 21757952 (20.75MB)
MaxPermSize = 88080384 (84.0MB)
Heap Usage:
PS Young Generation
Eden Space:
capacity = 332922880 (317.5MB)
used = 39076184 (37.265953063964844MB)
free = 293846696 (280.23404693603516MB)
11.73730805164247% used
From Space:
capacity = 12582912 (12.0MB)
used = 0 (0.0MB)
free = 12582912 (12.0MB)
0.0% used
To Space:
capacity = 12386304 (11.8125MB)
used = 0 (0.0MB)
free = 12386304 (11.8125MB)
0.0% used
PS Old Generation
capacity = 691994624 (659.9375MB)
used = 159954680 (152.54467010498047MB)
free = 532039944 (507.39282989501953MB)
23.115017725918054% used
PS Perm Generation
capacity = 76742656 (73.1875MB)
used = 75870592 (72.3558349609375MB)
free = 872064 (0.8316650390625MB)
98.8636515264731% used

Check whether you have a run.conf file in your server configuration directory (default, or maybe production).
At least in JBoss EAP 4.3, each profile can have its own run configuration file. If you find such a file, that is where you have to edit the -XX:MaxPermSize parameter.
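For example, appending something along these lines to that run.conf should raise the limit (the 256m value is only an illustration; size it to your application):
JAVA_OPTS="$JAVA_OPTS -XX:MaxPermSize=256m"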

You need to make sure your JBoss instance is properly configured for PermGen. I don't know why the default configs don't come with these settings in them; for some reason you need to just magically know this stuff. Here's a link on how to configure it: http://blog.dahanne.net/2009/08/12/jboss-and-java-lang-outofmemoryerror-permgen-space/

Related

Program and Run PIC18 with pickit4 on linux

I am on Ubuntu Linux and the target is a PIC18F47J53.
I basically want to program the chip and then let it run, from the command line, using a PICkit 4.
Using ipecmd (from MPLAB X IDE v5.45), this is my command:
/opt/microchip/mplabx/v5.45/sys/java/zulu8.40.0.25-ca-fx-jre8.0.222-linux_x64/bin/java -jar /opt/microchip/mplabx/v5.45/mplab_platform/mplab_ipe/ipecmd.jar -TPPK4 /P18F47J53 -M -F"/path_to_myfile.hex" -W
This is my output
DFP Version Used : PIC18F-J_DFP,1.4.41,Microchip
*****************************************************
Connecting to MPLAB PICkit 4...
Currently loaded versions:
Application version............00.06.66
Boot version...................01.00.00
Script version.................00.04.17
Script build number............db473af2f4
Tool pack version .............1.6.961
PICkit 4 is supplying power to the target (3.25 volts).
Target device PIC18F47J53 found.
Device Revision Id = 0x1
*****************************************************
Calculating memory ranges for operation...
Erasing...
The following memory area(s) will be programmed:
program memory: start address = 0x0, end address = 0x3ff
program memory: start address = 0x1fc00, end address = 0x1fff7
configuration memory
Programming/Verify complete
Program Report
30-Jan-2021, 12:54:41
Device Type:PIC18F47J53
Program Succeeded.
Operation Succeeded
All good, and it takes about 12 seconds. However, after that the PICkit 4 turns off the target power, and the PICkit LED is blue (I guess the "ready" state).
The main question is: how can I make the PICkit 4 keep powering the board? Is there a specific parameter? (I cannot find one in the readme.html.)
If I use the MPLAB X IPE GUI to program, programming is much quicker (3 or 4 seconds), the PICkit LED is yellow, and the target is left powered on. (I selected "release from reset".)
I have tried to get a log with as many details as possible, but I cannot see the commands sent to the PICkit 4.
Any ideas? Thanks
I realize that it's been a while since you asked, but I'll put the answer here for anyone who needs it: add -OL to your command-line options.
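With the command from the question, that would look like this (the only change is the added -OL flag, shown here at the end):
/opt/microchip/mplabx/v5.45/sys/java/zulu8.40.0.25-ca-fx-jre8.0.222-linux_x64/bin/java -jar /opt/microchip/mplabx/v5.45/mplab_platform/mplab_ipe/ipecmd.jar -TPPK4 /P18F47J53 -M -F"/path_to_myfile.hex" -W -OL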

gsutil multiprocessing and multithreading do not sustain CPU usage & copy rate on a GCP instance

I am running a script to copy millions of images (2.4 million to be exact) from several GCS buckets into one central bucket, with all buckets in the same region. I was originally working from one CSV file but broke it into 64 smaller ones so each process can iterate through its own file and not wait for the others. When the script launches on a 64-vCPU, 240 GB memory instance on GCP, it runs fine for about an hour and a half. In 75 minutes, 155 thousand files were copied over, with CPU usage registering a sustained 99%. After this, the CPU usage drastically declines to 2% and the transfer rate falls significantly. I am really unsure why this happens. I am keeping track of files that fail by creating blank files in an errors directory, so there is no write lock when writing to a central error file. Code is below. It is not a spacing or syntax error; some spacing got messed up when I copied it into the post. Any help is greatly appreciated.
Thanks,
Zach
import os
import subprocess
import csv
from multiprocessing.dummy import Pool as ThreadPool
from multiprocessing import Pool as ProcessPool
import multiprocessing

gcs_destination = 'gs://dest-bucket/'
source_1 = 'gs://source-1/'
source_2 = 'gs://source-2/'
source_3 = 'gs://source-3/'
source_4 = 'gs://source-4/'

def copy(img):
    try:
        imgID = img[0]        # extract name
        imgLocation = img[9]  # extract its location on gcs
        print imgID + " " + imgLocation
        source = ""
        if imgLocation == '1':
            source = source_1
        elif imgLocation == '2':
            source = source_2
        elif imgLocation == '3':
            source = source_3
        elif imgLocation == '4':
            source = source_4
        print str(os.getpid())
        # copy the tarball for this image into the central bucket,
        # giving each process its own gsutil state directory
        command = "gsutil -o GSUtil:state_dir=.{} cp {}{}.tar.gz {}".format(os.getpid(), source, imgID, gcs_destination)
        prog = subprocess.call(command, shell=True)
        if prog != 0:
            # record the failure by touching a blank file in the errors directory
            command = "touch errors/{}_{}".format(imgID, imgLocation)
            os.system(command)
    except:
        print "Doing nothing with the error"

def split_into_threads(csv_file):
    with open(csv_file) as f:
        csv_f = csv.reader(f)
        pool = ThreadPool(15)
        pool.map(copy, csv_f)

if __name__ == "__main__":
    # Read in the 64 pre-split CSV files, one per process
    file_names = [None] * 64
    for i in range(0, 64):
        file_names[i] = 'split_origin/origin_{}.csv'.format(i)
    process_pool = ProcessPool(multiprocessing.cpu_count())
    process_pool.map(split_into_threads, file_names)
For gsutil, I agree strongly with the multithreading suggestion of adding -m. Further, composite uploads (-o) may be unnecessary and undesirable, as the images are not GBs each in size and need not be split into shards. They're likely in the X-XX MB range.
Within your Python function, you are calling gsutil commands, which are in turn calling further Python functions. It should be cleaner and more performant to leverage the Google-made client library for Python, available below. gsutil is built for interactive CLI use rather than for calling programmatically.
https://cloud.google.com/storage/docs/reference/libraries#client-libraries-install-python
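A minimal sketch of that approach, assuming the bucket names from the question and the google-cloud-storage package (the object name is hypothetical):

from google.cloud import storage

client = storage.Client()
source_bucket = client.bucket('source-1')  # bucket names taken from the question
dest_bucket = client.bucket('dest-bucket')

# copy one object from a source bucket into the central bucket
blob = source_bucket.blob('IMG_0001.tar.gz')  # hypothetical object name
source_bucket.copy_blob(blob, dest_bucket, 'IMG_0001.tar.gz')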
Also, for gsutil, see your ~/.boto file and look at the multiprocessing and multithreading values. Beefier machines can handle more threads and processes. For reference, I work from my MacBook Pro with 1 process and 24 threads. I use an Ethernet adapter and hardwire into my office connection, and I get incredible performance to internal SSD (>450 Mbps). That's megabits, not bytes. The transfer rates are impressive nonetheless.
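Those values live in the [GSUtil] section of ~/.boto; a sketch (the numbers are just an illustration, tune them to your machine):

[GSUtil]
parallel_process_count = 1
parallel_thread_count = 24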
I strongly recommend you use the -m flag on gsutil to enable multi-threaded copying.
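For example (bucket paths taken from the question):

gsutil -m cp gs://source-1/*.tar.gz gs://dest-bucket/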
Also, as an alternative, you can use the Storage Transfer Service [1] to move data between buckets.
[1] https://cloud.google.com/storage/transfer/

Xcode 8.2: Unable to load configuration data from specified path / permission error in Mac OSX App:

I have a macOS app that I have previously been able to test. However, when I run tests now, they work once and then fail with the error below in the console, and I need to do some drastic things to get them working again.
If I change the location of my Derived Data folder and clean the build folder, it will usually work once or twice, but when I run tests again the error comes back.
Any ideas what I can do to fix it permanently? The output below is pretty much Greek to me.
I have tried the following:
moving the derived data into Documents
installing a new Xcode from the App Store
deleting and re-adding certificates and profiles
2017-01-15 16:41:51.247064 XXXXXX[51736:892136] Unable to load
configuration data from specified path
/var/folders/59/7ylv57053bv3c0rbbcc1mcg40000gp/T/com.apple.dt.XCTest/FDF2A461-45D7-4E64-B650-602DF0725CA7/remote-container/tmp/XXXXXXTests-FDF2A461-45D7-4E64-B650-602DF0725CA7.xctestconfiguration;
error: You don’t have permission. 2017-01-15 16:41:51.247221
XXXXXX[51736:892136] IDEBundleInjection Arguments: (
"/Users/XXXXXX/XXXXXX/XXXXXX/XXXXXX-eghnritsumpbbqgylbzrxqfximew/Build/Products/Debug/XXXXXX.app/Contents/MacOS/XXXXXX",
"-NSTreatUnknownArgumentsAsOpen",
NO,
"-ApplePersistenceIgnoreState",
YES ) 2017-01-15 16:41:51.248336 XXXXXX[51736:892136] IDEBundleInjection Environment: {
"APP_SANDBOX_CONTAINER_ID" = "com.XXXXXX.XXXXXX";
"Apple_PubSub_Socket_Render" = "/private/tmp/com.apple.launchd.hKPiBBDAAG/Render";
"CFFIXED_USER_HOME" = "/Users/XXXXX/Library/Containers/com.XXXXXX.XXXXXX/Data";
"DTX_CONNECTION_SERVICES_PATH" = "/Applications/Xcode.app/Contents/SharedFrameworks/DTXConnectionServices.framework";
"DYLD_FRAMEWORK_PATH" = "/Users/XXXXX/XXX/XXXXXXX/XXXXXX-eghnritsumpbbqgylbzrxqfximew/Build/Products/Debug:/Users/XXXXX/XXX/XXXXXXX/XXXXXX-eghnritsumpbbqgylbzrxqfximew/Build/Products/Debug:/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/Library/Frameworks";
"DYLD_INSERT_LIBRARIES" = "";
"DYLD_LIBRARY_PATH" = "/Users/XXXXX/XXX/XXXXXXX/XXXXXX-eghnritsumpbbqgylbzrxqfximew/Build/Products/Debug:/Users/XXXXX/XXX/XXXXXXX/XXXXXX-eghnritsumpbbqgylbzrxqfximew/Build/Products/Debug:/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/Library/Frameworks";
HOME = "/Users/XXXXX/Library/Containers/com.grant.XXXXXX/Data";
LOGNAME = XXXXX;
MallocNanoZone = 1;
NSUnbufferedIO = YES;
"OS_ACTIVITY_DT_MODE" = YES;
PATH = "/Applications/Xcode.app/Contents/Developer/usr/bin:/usr/bin:/bin:/usr/sbin:/sbin";
PWD = "/Users/XXXXX/XXX/XXXXXXX/XXXXXX-eghnritsumpbbqgylbzrxqfximew/Build/Products/Debug";
SHELL = "/bin/bash";
"SSH_AUTH_SOCK" = "/private/tmp/com.apple.launchd.dNK7oacOAX/Listeners";
TMPDIR = "/var/folders/59/7ylv57053bv3c0rbbcc1mcg40000gp/T/com.grant.XXXXXX/";
USER = XXXXX;
XCInjectBundleInto = "/Users/XXXXX/XXX/XXXXXXX/XXXXXX-eghnritsumpbbqgylbzrxqfximew/Build/Products/Debug/XXXXXX.app/Contents/MacOS/XXXXXX";
"XCODE_DBG_XPC_EXCLUSIONS" = "com.apple.dt.xctestSymbolicator";
XCTestConfigurationFilePath = "/var/folders/59/7ylv57053bv3c0rbbcc1mcg40000gp/T/com.apple.dt.XCTest/FDF2A461-45D7-4E64-B650-602DF0725CA7/remote-container/tmp/XXXXXXTests-FDF2A461-45D7-4E64-B650-602DF0725CA7.xctestconfiguration";
"XPC_FLAGS" = 0x0;
"XPC_SERVICE_NAME" = "com.apple.dt.Xcode.23100";
"__CF_USER_TEXT_ENCODING" = "0x1F6:0x0:0x2";
"__XCODE_BUILT_PRODUCTS_DIR_PATHS" = "/Users/XXXXX/XXX/XXXXXXX/XXXXXX-eghnritsumpbbqgylbzrxqfximew/Build/Products/Debug";
"__XPC_DYLD_FRAMEWORK_PATH" = "/Users/XXXXX/XXX/XXXXXXX/XXXXXX-eghnritsumpbbqgylbzrxqfximew/Build/Products/Debug";
"__XPC_DYLD_LIBRARY_PATH" = "/Users/XXXXX/XXX/XXXXXXX/XXXXXX-eghnritsumpbbqgylbzrxqfximew/Build/Products/Debug";
}
In Xcode 9 I found a solution that worked for me.
Go to Xcode > File > Project Settings... (or Workspace Settings...)
Select New Build System (Preview) as Build System under Shared Settings.
Make sure to select Use Shared Setting under Per-User Project Settings.
I too logged a bug with Apple; I experienced it on Xcode 9. However, I then played with it some more and found that by changing the Derived Data folder to Custom and disabling code coverage in the Test configuration of my scheme, the error went away. It seems that some combination of these two caused the issue.
I have logged a bug with Apple, as it appears that no one else is getting this error.
Edit: Elise has filed Apple bug #34737491. If you are experiencing this, please raise a bug and reference that ticket so Apple can see how big the impact is.

Why doesn't Play Framework 2.5 respect JVM memory settings in sbt?

So I've been struggling to set memory settings for Play inside sbt with:
javaOptions ++= Seq("-Xmx11G", "-Xms3G")
But it seems like it's not respecting them.
When I print them:
val mb = 1024*1024
//Getting the runtime reference from system
val runtime = Runtime.getRuntime
println("##### Heap utilization statistics [MB] #####")
//Print used memory
println("Used Memory:" + (runtime.totalMemory() - runtime.freeMemory()) / mb)
//Print free memory
println("Free Memory:" + runtime.freeMemory() / mb)
//Print total available memory
println("Total Memory:" + runtime.totalMemory() / mb)
//Print Maximum available memory
println("Max Memory:" + runtime.maxMemory() / mb)
Here is what I see:
##### Heap utilization statistics [MB] #####
Used Memory:270
Free Memory:657
Total Memory:928
Max Memory:928
I tried the suggestion here of setting _JAVA_OPTIONS, but the issue with this is that it gives me the following error:
No java installations was detected.
Please go to http://www.java.com/getjava/ and download
Any ideas what to do?
(Assuming fork is set to true.) If you're using the Play application startup script in production mode, the recommended way is to pass the settings as command-line arguments to the script; otherwise the default JVM settings will be used. Here you are working in dev mode using sbt run or activator run, so you need to do effectively the same thing.
Solution 1:
You can pass the arguments on command-line:
$ sbt run -J-Xms3G -J-Xmx11G
Solution 2:
Starting with sbt 0.13.6, you can add an .sbtopts file in your project root directory to set JVM flags. This is probably a nicer way because it makes your project self-contained.
Here's a sample .sbtopts:
-J-Xms3G
-J-Xmx11G
Here's the output of $ sbt run (or activator run):
##### Heap utilization statistics [MB] #####
Used Memory: 364
Free Memory: 4062
Total Memory: 4426
Max Memory: 10012
You can read more about the options and usage here.
Note: If this was an SBT project instead of Play, javaOptions defined in build.sbt would apply directly.
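For a plain sbt project, that would look something like this in build.sbt (a sketch, assuming sbt 0.13.x syntax and a forked run task):

fork in run := true
javaOptions in run ++= Seq("-Xms3G", "-Xmx11G")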

Problems with unixODBC and FreeTDS config

I have been working on this for way too long and can't seem to figure it out. I am sure I have something wrong in my freetds.conf, odbc.ini or odbcinst.ini. I can connect to my MSSQL 2008 server using tsql, but still can't with isql or, of course, through PHP.
I am on CentOS 5.6.
Can anyone offer some assistance?
Thanks!
Shawn
This is in my sqltrace.log:
[ODBC][12249][1347850711.939084][__handles.c][459]
Exit:[SQL_SUCCESS]
Environment = 0x1b5fc6c0
[ODBC][12249][1347850711.939149][SQLAllocHandle.c][375]
Entry:
Handle Type = 2
Input Handle = 0x1b5fc6c0
[ODBC][12249][1347850711.939187][SQLAllocHandle.c][493]
Exit:[SQL_SUCCESS]
Output Handle = 0x1b5fcff0
[ODBC][12249][1347850711.939231][SQLConnect.c][3654]
Entry:
Connection = 0x1b5fcff0
Server Name = [MSSQL_DSN][length = 9 (SQL_NTS)]
User Name = [InetIndyArtsRemote][length = 18 (SQL_NTS)]
Authentication = [**********][length = 10 (SQL_NTS)]
UNICODE Using encoding ASCII 'ISO8859-1' and UNICODE 'UCS-2LE'
DIAG [01000] [FreeTDS][SQL Server]Unexpected EOF from the server
DIAG [01000] [FreeTDS][SQL Server]Adaptive Server connection failed
DIAG [S1000] [FreeTDS][SQL Server]Unable to connect to data source
[ODBC][12249][1347850711.949640][SQLConnect.c][4021]
Exit:[SQL_ERROR]
[ODBC][12249][1347850711.949694][SQLFreeHandle.c][286]
Entry:
Handle Type = 2
Input Handle = 0x1b5fcff0
[ODBC][12249][1347850711.949735][SQLFreeHandle.c][337]
Exit:[SQL_SUCCESS]
[ODBC][12249][1347850711.949773][SQLFreeHandle.c][219]
Entry:
Handle Type = 1
Input Handle = 0x1b5fc6c0
freetds.conf:
# $Id: freetds.conf,v 1.12 2007/12/25 06:02:36 jklowden Exp $
#
# This file is installed by FreeTDS if no file by the same
# name is found in the installation directory.
#
# For information about the layout of this file and its settings,
# see the freetds.conf manpage "man freetds.conf".
# Global settings are overridden by those in a database
# server specific section
[global]
# TDS protocol version
tds version = 8.0
# Whether to write a TDSDUMP file for diagnostic purposes
# (setting this to /tmp is insecure on a multi-user system)
dump file = /tmp/freetds.log
debug flags = 0xffff
dump file append = yes
# Command and connection timeouts
; timeout = 10
; connect timeout = 10
# If you get out-of-memory errors, it may mean that your client
# is trying to allocate a huge buffer for a TEXT field.
# Try setting 'text size' to a more reasonable limit
text size = 64512
[IndyArtsDB]
host = xxx.xx.xxx.xx
port = 1433
tds version = 8.0
client charset = UTF-8
odbc.ini:
[MSSQL_DSN]
Driver=FreeTDS
Description=IndyArts DB on Rackspace
Trace=No
Server=xxx.xx.xxx.xx
Port=1433
Database=DBName
odbcinst.ini:
[ODBC]
DEBUG=1
TraceFile=/home/ftp/sqltrace.log
Trace=Yes
[FreeTDS]
Description=MSSQL Driver
Driver=/usr/local/lib/libtdsodbc.so
UsageCount=1
Looking at your sqltrace.log, it looks to me like an authentication error - you get that "Unexpected EOF from the server" message immediately after authenticating.
Is there any chance the remote server is blocking connections from your CentOS server, either completely or on port 1433? Any chance the client charset = UTF-8 line in your freetds.conf is causing the problem?
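As a quick sanity check (replace the placeholder address with your real server IP), you can confirm that the port is reachable and that the DSN itself authenticates:

telnet xxx.xx.xxx.xx 1433
isql -v MSSQL_DSN InetIndyArtsRemote 'your-password'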
This is my (working) setup on an Ubuntu 12.04 (Precise Pangolin) machine.
Here is my /etc/odbc.ini file:
[xyz]
Description = XYZ Server
Driver = freetds
Database = MyDB
ServerName = xyz
TDS_Version = 8.0
And my /etc/odbcinst.ini file:
[freetds]
Description = MS SQL database access with Free TDS
Driver = /usr/lib/i386-linux-gnu/odbc/libtdsodbc.so
Setup = /usr/lib/i386-linux-gnu/odbc/libtdsS.so
UsageCount = 1
And finally my /etc/freetds/freetds.conf file:
[global]
# TDS protocol version
; tds version = 4.2
# Whether to write a TDSDUMP file for diagnostic purposes
# (setting this to /tmp is insecure on a multi-user system)
; dump file = /tmp/freetds.log
; debug flags = 0xffff
# Command and connection timeouts
; timeout = 10
; connect timeout = 10
# If you get out-of-memory errors, it may mean that your client
# is trying to allocate a huge buffer for a TEXT field.
# Try setting 'text size' to a more reasonable limit
text size = 64512
# A typical Sybase server
[egServer50]
host = symachine.domain.com
port = 5000
tds version = 5.0
# A typical Microsoft server
[egServer70]
host = ntmachine.domain.com
port = 1433
tds version = 7.0
# The XYZ database
[xyz]
host = XYZ
port = 1433
tds version = 8.0
It looks like the TDS protocol version numbers in FreeTDS have since been renamed: 8.0 is now 7.1, and 9.0 is now 7.2.
See http://www.freetds.org/userguide/choosingtdsprotocol.htm
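So on a newer FreeTDS the server entry would look something like this (same host and port as above; only the version string changes):

[xyz]
host = XYZ
port = 1433
tds version = 7.1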