How to increase heap space in Eclipse

I have a project in which I want to implement around 6000 else-if statements, but when I paste the code into my Java file I get two errors and the text cannot be pasted into the file. Is there a way to store the else-if statements the way strings are saved in Strings.xml, or can the code be split into two class files?
Here are some screenshots of the errors I get:
error 1 http://pho.to/4I2Zh
error 2 http://pho.to/4I2c8

The large heap behaves the same as the normal heap until your app needs more memory for a task. Put this attribute in the application tag of your manifest (it requires API level 11 or higher):
android:largeHeap="true"
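For context, a minimal sketch of where the attribute sits in AndroidManifest.xml (the other attributes and contents here are just placeholders):
<application
    android:label="@string/app_name"
    android:largeHeap="true">
    <!-- activities, services, etc. -->
</application>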
For Eclipse, edit the eclipse.ini file and add these lines:
-Xms40m
-Xmx512m
If these lines are already present, just change the values.
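Note that JVM options in eclipse.ini only take effect if they appear after the -vmargs line; everything above -vmargs is passed to the Eclipse launcher rather than to the JVM. The tail of the file typically ends up looking something like this (the values are just examples):
-vmargs
-Xms40m
-Xmx512m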

Related

How to generate a 10000-line test file from an original file with 10 lines?

I want to test an application with a file containing 10000 lines of records (plus header and footer lines). I have a test file with 10 lines now, so I want to duplicate these lines 1000 times. I don't want to write C# code in my app to generate that file (it is only for testing), so I am looking for a different, simple way to do it.
What kind of tool can I use to do that? CMD? A Visual Studio/VS Code extension? Any thoughts?
If your data is textual, load the 10 records from your test file into an editor. Select all, copy, and paste at the end of the file. Repeat until the file has 10000+ lines.
This procedure requires ceil(log_2(1000)) = 10 doubling cycles in your case; in general, ceil(log_2(<target_number_of_lines>/<base_number_of_lines>)).
Alternative (large files)
Modern editors should not have performance problems here, but the same principle can be applied with the cat CLI command. Assuming you copy the original file to a file named dup0.txt, proceed as follows:
cat dup0.txt dup0.txt >dup1.txt
cat dup1.txt dup1.txt >dup0.txt
which leaves you with four times the original number of lines in dup0.txt. Repeat the pair of commands until you reach the target size.
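If you would rather script the duplication than repeat the copy/paste or cat steps, a small throwaway Java class along these lines would also do it (the file names base.txt and big.txt are placeholders):
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.List;

public class DuplicateLines {
    public static void main(String[] args) throws Exception {
        // Read the 10 base lines once, then write them 1000 times -> 10000 lines.
        List<String> base = Files.readAllLines(Paths.get("base.txt"));
        StringBuilder out = new StringBuilder();
        for (int i = 0; i < 1000; i++) {
            for (String line : base) {
                out.append(line).append(System.lineSeparator());
            }
        }
        // Header and footer lines would have to be added separately.
        Files.write(Paths.get("big.txt"), out.toString().getBytes());
    }
}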

Cannot open text file containing 20 million lines

I want to open a large text file (400MB) which contains 20 million lines of domain addresses.
The file opens fine when I open it normally, but when I open it in Eclipse I get an error.
I tried setting -Xms6000M but it's not working!
Does anyone have a solution to this problem?
Use the current version of Eclipse instead of the outdated one you have.
If required, increase -Xmx in the eclipse.ini file.

CDT uses too much memory

I have an AUTOSAR project (1.1k sources) which I want to index using the C/C++ Indexer plugin in Eclipse Oxygen (4.7.3). After I got an out-of-heap-space error with -xmx4g, I wanted to see how much memory it really needs, so I configured -xmx10g, yet it wasn't enough.
Taking a snapshot with jvisualvm.exe from JDK 1.8, I see 7 GB of char[] objects kept in memory.
After about 10 minutes of running, the indexer hadn't gotten past the first of the 1.1k files to analyze.
What do I have to do to get such a problem fixed?
Or where should I look to find the root cause?
The best way to get such a problem fixed is to reduce your project to a minimal set of files that reproduce the problem, and then file a CDT bug with the files attached.
The reduction can be done using binary search: delete half the files in your project, and see if the problem persists. If so, delete half of the remaining files, and so on. (It helps to consider dependency order when choosing which files to delete, i.e. avoid deleting a file before deleting files that depend on it.) When you only have a few files left, you can perform the binary search on their contents. Ideally you arrive at a minimal reproducing testcase of maybe 100-200 lines spread out over 1-3 files, at which point you can rename identifiers to be generic and post the code.
I would suggest testing with the latest release (CDT 9.5.2) before doing this, to make sure you're not running into an issue that has already been fixed.
Are you sure -xmx is accepted, or should it rather be -Xmx?
I usually use the following in eclipse.ini:
-Xms512m
-Xmx4096m
1.1k sources doesn't sound like much (we have far more), but on the other hand some generated files can eat up a lot of memory and performance, e.g. Rte.c and Rte_*.h files (our Rte.c is about 100k LOC). Together with CDT's AST-based syntax and semantic highlighting, that eats up memory as well as performance.

SQL Developer won't attempt to import an .xlsx file because it's too large

I have two .xlsx files that total 1.6 million rows, and I'm trying to import these things into SQL Developer.
I right-click the table name, select "Import Data...", then select my file, and nothing happens. It logs my attempt to open this file in the "File - Log" output.
Two separate attempts to import the same file are logged there. When I click one of them, I get the following message:
However, I know that this warning is not true, because my attempts at importing a smaller .xlsx file are successful. So I figured the problem was just that the file size is too large, and tried to change the memory available. I went into "C:\Users\User\Documents\sqldeveloper\sqldeveloper\bin" and edited sqldeveloper.conf, changing one existing value to
AddVMOption -XX:MaxPermSize=2048M
and added another value
AddVMOption -Xmx2048M
Which helps the Java VM according to this source:
http://codechief.wordpress.com/2008/07/30/configuring-oracle-sql-developer-for-large-files-fix-out-of-memory-errors/
But this did nothing for me, and I still receive the same errors. I am using SQL Developer version 3.2.20.09 but I have also tried this on 4.0.1.14 to the same effect.
Many thanks!
I tried looking into SQL*Loader. Apparently you should be able to right-click a table > Import Data > Next, and there should be an option to generate SQL*Loader files.
Unfortunately, not only did the import wizard not open for my large .xlsx files, but the SQL*Loader option was not even present for smaller .xlsx files, or even .xls for that matter.
In the end, I decided to convert my .xlsx files into .csv and import those instead. This worked for all but 4 rows of my 1.6 million, and gave me the insert statements for those 4, of which 2 worked when run with no additional modifications.
For large volumes of data, add the line "AddVMOption -Xmx4096M" to the sqldeveloper.conf file, and if that still does not work, convert your file to *.csv format instead of *.xls. It will work fine.

File too large for Eclipse

So... I was generating queries and pasted one particularly long one into Eclipse; Eclipse encountered a heap error and then crashed. I thought, no big deal, I can just go back in and delete a bunch, except that every time I try to look at the file now, Eclipse will just crash with either a heap error or a GC overhead limit error. I plan on just deleting this from outside Eclipse, but I really want to know if there are any clever ways of attacking this problem. It's less than 14 megs, and I didn't really think that Eclipse would have a problem with it. Any insights on why?
First, make sure you are invoking a generic text editor. Either name the file with an extension of .txt, or make sure you don't have something like an SQL editor associated with the file type. (Check in Window | Preferences | General | Editors | File Associations). Another possibility is to right-click the file, choose Open With.., then choose 'Text Editor'.
Second, you might need to start Eclipse with a bit more JVM heap, depending on what else is in the workspace. You do this either by adding command-line arguments to your invocation (eclipse -vmargs -Xmx1000M) or by editing your eclipse.ini. You can read more details at
http://wiki.eclipse.org/FAQ_How_do_I_increase_the_heap_size_available_to_Eclipse%3F