Optimal Performance tuning STS on Mac OS X - spring-tool-suite

What are the optimal performance tuning settings to put in my sts.ini file to ensure STS runs well on my Mac?
I am looking to optimize two machines. One is a MacBook Pro with 16GB RAM and a 6-core 2.6GHz i7 processor; the other is a dual-core 2.2GHz machine with 8GB RAM.
I am looking to get faster overall speed from STS. The thing that really slows me down is the change event handler process. When it starts running, everything slows down.
There are quite a few one-off guides around for optimal performance tuning of the Spring Tool Suite. Some are written for a Windows platform and some for an OS X platform. Since STS runs on the JVM I thought the optimal settings would work in either environment.
I haven't seen a well-done list of performance tuning options. It would be nice to see whether the configuration should change based on system properties such as RAM, processor, and number of cores.
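As a starting point, sts.ini uses the same format as eclipse.ini: launcher options first, then JVM flags after `-vmargs`. The flags below are standard JVM options, but the values are assumptions rather than measured optima; scale the heap with your RAM (for example `-Xmx4g` on the 16GB machine, `-Xmx2g` on the 8GB one):

```
-vmargs
-Xms1g
-Xmx4g
-XX:+UseG1GC
-XX:ReservedCodeCacheSize=256m
-Xverify:none
```

`-Xms` equal to a large fraction of `-Xmx` avoids repeated heap resizing; `-Xverify:none` skips bytecode verification at startup (note it is deprecated on recent JDKs).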

Related

Is there any way to disable a region of RAM for use?

Lately I have been experiencing general crashes and freezes, so I ran MemTest86, which failed. It seems a small portion of RAM has faulty bits, which is likely the cause.
Is there some way to disable this region of memory either in BIOS or in the OS (Win10, currently)?
The firmware might technically support something to exclude faulty RAM; but if it does it's not working.
I don't think Windows supports anything to exclude faulty RAM.
Linux does support this, but only if the memory isn't used before the kernel sets up its memory management. The problem would be installing an OS, since the installer will probably use the faulty memory.
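For the Linux route, the kernel's `memmap=<size>$<start>` boot parameter marks a physical range as reserved so the allocator never touches it. A sketch, assuming MemTest86 reported a faulty range starting at 0x12340000 and 64 KiB long (both values are placeholders for whatever your test reports):

```
# /etc/default/grub -- the '$' must be escaped so GRUB passes it through
GRUB_CMDLINE_LINUX_DEFAULT="quiet memmap=64K\$0x12340000"
# then: sudo update-grub && reboot
# verify after boot: the range should show as "reserved" in /proc/iomem
```

If the faulty bits are scattered rather than contiguous, you may need several `memmap=` entries, one per range.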
If you can get Linux to work, then you can install Windows inside a virtual machine running on Linux. Of course then there's still no way to determine how long it's going to last before more RAM becomes faulty.
In most cases, the easiest and safest option is to replace the faulty RAM.

Dymola 2018 performance on Linux (xubuntu)

The issue that I experience is that when running simulations (same IBPSA/AixLib-based models) on Linux I get a significant performance drop (simulation time is about doubled) in comparison to a Windows 8 machine. Below you find the individual specs of the two machines. In both cases I use Cvode solver with equal settings. Compilation is done with VC14.0 (Win) or GCC (Xubuntu).
Is this issue familiar to anyone, or can anyone suggest what the reason might be?
Win 8:
Intel Xeon @ 2.9GHz (6 logical processors)
32 GB RAM
64-Bit
Xubuntu 16.04 VM:
Intel Xeon @ 3.7GHz (24 logical processors)
64 GB RAM
64-Bit
Thanks!
In addition to the checklist in the comments, also consider enabling hardware virtualization support if not already done.
In general gcc tends to produce slower code than Visual Studio when optimization is off. To turn on optimization, one could try adding the following line (note it is the letter O, not zero, in -O2):
CFLAGS=$CFLAGS" -O2"
at the top of insert/dsbuild.sh.
The reason for not having it turned on by default is to avoid lengthy compilations and bloated binaries. For industrial-sized models these are real issues.

Eclipse is extremely slow on Fedora 24

I am running Fedora 24 (with GNOME 3) and just installed the Eclipse CDT package from the Fedora repositories (Eclipse CDT Neon.1). It turned out to be extremely slow, but only when writing code or scrolling; the rest of the UI works perfectly and really quickly.
I have done some research about the topic and seemingly this problem is usually related to the GTK backend, and is commonly solved by running eclipse under GTK2 instead of GTK3. However, this has not helped me at all. The options I have tried are:
export SWT_GTK3=0
And
eclipse --launcher.GTK_version 2 (also tried in eclipse.ini)
These two options effectively switch to GTK2, which is noticeable because the graphical appearance changes.
I also tried older versions of Eclipse (Juno, Kepler and Luna) and the problem still exists, probably even worse. I have increased the memory size for the JVM to 3GB and the problem remains.
The underlying hardware is a 6 core Intel Xeon (12 virtual cores with Hyperthreading) and 32 GB of RAM, so I assume this should not represent a problem.
I also noticed that while scrolling or writing, one of my cores goes to 70%-100% utilization, which explains the lag, but I don't know how to solve it.
Is there any other option I can try?
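One way to narrow down what that spinning core is doing is to look at per-thread CPU use and then map the hot thread to a Java stack. A sketch, assuming Linux with procps and a JDK on the PATH; `$$` (this shell) is used as a stand-in PID so the command is runnable as-is, substitute the Eclipse PID from `pgrep -f eclipse`:

```shell
pid=$$   # stand-in; replace with the Eclipse PID
# list threads of the process with their CPU share
ps -L -p "$pid" -o tid,pcpu,comm
# with a JDK, jstack <pid> dumps Java stacks; the hot TID above appears
# in the dump as nid=0x<tid-in-hex>, pointing at the busy thread
```

If the busy thread turns out to be a native GTK/SWT thread rather than a Java one, that would support the GTK-backend theory.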

Enable AMD-virtualization

About three weeks ago, I ran into a problem launching the WP (Windows Phone) emulator. After troubleshooting, I found that the virtualization option on my laptop is not working.
Laptop spec. (Acer 4253):
CPU: AMD E-350, Zacate 40nm Technology
OS: Windows 10 Pro 64-bit
RAM: 4.00GB DDR3 @ 532MHz
I downloaded Speccy to check the virtualization info, since nothing related to virtualization appears in the BIOS settings, and I found that "Hyper-threading" is not supported! Any help?
Hyper-threading is an Intel-only technology; AMD doesn't have hyperthreading on any of its processors, even the AMD FX generation.
Hyper-threading (officially called Hyper-Threading Technology or HT Technology, and abbreviated as HTT or HT) is Intel's proprietary simultaneous multithreading (SMT) implementation used to improve parallelization of computations.
As for virtualization: you wrote it correctly in the title, but you wrote "visualization" everywhere else... processors don't have visualization. :)
Everything in your WP configuration looks OK; you shouldn't worry about the AMD parameters because they are fine. You just have to configure and run the program the same as for any AMD processor, which is essentially the same as for Intel; programs have almost no configuration differences between the two.
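If you want to check for hardware virtualization support directly rather than via Speccy, the CPU flags tell you: AMD-V shows up as "svm" and Intel VT-x as "vmx". A sketch from a Linux live USB (on Windows 10 itself, Task Manager's Performance > CPU tab shows a "Virtualization" line instead):

```shell
# count the virtualization flags advertised by the CPU
# (grep -c exits non-zero on 0 matches, hence the || true)
count=$(grep -c -E 'svm|vmx' /proc/cpuinfo || true)
if [ "$count" -gt 0 ]; then
    echo "virtualization flags present"
else
    echo "no virtualization flags"
fi
```

Note that even when the CPU supports it, the firmware can still leave it disabled, which is why a missing BIOS toggle matters.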

How can developers make use of Virtualization?

Where can virtualization techniques be applied by an application developer? How can virtualization be applied on a day-to-day basis?
I would like to hear from veteran developers who make use of it. I am interested in the following things:
How it helps in development.
How it could be used for testing purposes.
What are the recommended practices.
The main benefit, in my view, is that in a single machine, you can test an application in:
Different OSs, in case your app is multiplatform
Different configurations, like testing a client in one machine and a server in the other, or trying different parameters
Different performance characteristics, like with minimal CPU and RAM, and with multicore and high amounts of RAM
Additionally, you can provide VM images to distribute preconfigured applications, whether for testing or for running applications in virtualized environments where that makes sense (for apps which do not demand much power).
Can't say I'm a veteran developer, but I've used virtualization extensively when environments need to be controlled. That goes for:
Development: not only is it really useful to have VMs about for different deployment environments (e.g. browser versions, Windows XP / Vista / 7) but especially for maintenance it's handy to have a VM with the right development tools configured for a particular job.
Testing: this is where VMs really shine: it's great to have different deployment environments that can be set back to a known good configuration and multiple server instances running in parallel to test load balancing.
I've also found it useful to have a standard test image available that I can run locally to verify that a fix works. If it doesn't then I can roll back to the previous snapshot with no problems.
I've been using Virtual PC running Windows XP to test products I'm developing. I have clients who still need XP support while my primary dev environment is Vista (haven't had time to jump to Win7 yet), so having a virtual setup for XP is a big time saver.
Before each client drop, I build and test on my Vista dev machine then fire up VPC with XP, drag the binaries to the XP guest OS (enabled by installing Virtual PC additions on the guest OS) and run my tests there. I use the Undo disk feature of Virtual PC so I can always start with a clean XP image. This process would have been really cumbersome without virtualization.
I can now dump my old PCs at the local PC Recycle with no regrets :)
Some sort of test environment: if you are debugging malware (either writing it or developing a remedy against it), it is not wise to use the real OS. The only possible disadvantage is that the malware can detect that it is being run under virtualization. :( One way it can do so is that VM engines emulate only a finite set of hardware.
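On the detection point: one of the simplest giveaways is the "hypervisor" CPUID flag, which most VM engines expose to guests and which Linux kernels surface in /proc/cpuinfo. A minimal check, assuming a Linux guest or host:

```shell
# the "hypervisor" flag is set inside most VMs and absent on bare metal
if grep -q hypervisor /proc/cpuinfo; then
    echo "likely virtualized"
else
    echo "likely bare metal"
fi
```

More determined malware looks at emulated device IDs and timing quirks as well, so this flag is only the crudest of the detection tricks.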