Saving a file in Eclipse makes the processor work hard

I am using Eclipse Juno. Every time I save a file, Eclipse consumes 100% of the processor.
Here is a snapshot from the top command:
Tasks: 303 total, 1 running, 301 sleeping, 1 stopped, 0 zombie
%Cpu(s): 31,2 us, 1,4 sy, 0,0 ni, 65,6 id, 0,4 wa, 0,0 hi, 1,4 si, 0,0 st
KiB Mem: 8077332 total, 5122068 used, 2955264 free, 509476 buffers
KiB Swap: 8252412 total, 0 used, 8252412 free, 2242736 cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
3816 iwan 20 0 1141m 410m 35m S 100,9 5,2 59:00.47 eclipse
3882 iwan 20 0 594m 162m 52m S 2,3 2,1 6:09.30 skype
2646 iwan 20 0 309m 82m 32m S 2,0 1,0 9:05.18 compiz
3894 iwan 20 0 851m 171m 42m S 2,0 2,2 3:00.66 thunderbird
1305 root 20 0 266m 68m 55m S 1,3 0,9 7:55.87 Xorg
Any ideas?

Apply any available updates. If problems continue, keep an eye on http://bugs.eclipse.org/402018.
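If you want to see what Eclipse is actually doing while it pegs a core, one rough approach (my own sketch, assuming the PID 3816 from the top output above and a JDK that provides jstack) is to find the busiest JVM thread and match it against a thread dump:
# Per-thread CPU usage for the Eclipse JVM (PID taken from the top output above)
top -H -p 3816
# Dump all Java thread stacks, then convert the busy thread's PID to hex
# and search for the matching "nid=0x..." entry in the dump
jstack 3816 > eclipse-threads.txt
printf '0x%x\n' 4242        # 4242 is a placeholder for the busy thread's PID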


Yocto separate home partition; best practice for fstab generation

I've started experimenting with Yocto, starting from my evaluation kit manufacturer's repository. Since I want to have a read-only root, I want to move /home to a separate read/write partition.
I can create the partitions on the SD image with a custom wic.in:
part u-boot --source rawcopy --sourceparams="file=imx-boot" --ondisk mmcblk --no-table --align ${IMX_BOOT_SEEK}
part / --source rootfs --ondisk mmcblk --fstype=ext4 --label root --align 8192 --fixed-size=4096M --exclude-path=home/
part /home --source rootfs --rootfs-dir=${IMAGE_ROOTFS}/home --ondisk mmcblk --fstype=ext4 --label home --align 8192 --fixed-size=2048M
bootloader --ptable msdos
However, the home partition is not used as /home; it is mounted as a normal data partition under /run/media.
root@imx8mp-var-dart:/# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
mmcblk2 179:0 0 14.6G 0 disk
`-mmcblk2p1 179:1 0 14.6G 0 part /run/media/mmcblk2p1
mmcblk2boot0 179:32 0 4M 1 disk
mmcblk2boot1 179:64 0 4M 1 disk
mmcblk1 179:96 0 14.8G 0 disk
|-mmcblk1p1 179:97 0 4G 0 part /
`-mmcblk1p2 179:98 0 4G 0 part /run/media/mmcblk1p2
The default fstab (the file located in poky's layer) doesn't have the new /home entry, so that's the expected behavior:
# stock fstab - you probably want to override this with a machine specific one
/dev/root / auto defaults 1 1
proc /proc proc defaults 0 0
devpts /dev/pts devpts mode=0620,ptmxmode=0666,gid=5 0 0
tmpfs /run tmpfs mode=0755,nodev,nosuid,strictatime 0 0
tmpfs /var/volatile tmpfs defaults 0 0
# uncomment this if your device has a SD/MMC/Transflash slot
#/dev/mmcblk0p1 /media/card auto defaults,sync,noauto 0 0
I've "stolen" the wic.in configuration from [here][1]. A comment in the same thread states that "WIC will automatically add an entry in fstab". But that doesn't seems the case...
How to let Yocto/WIC populate fstab correctly, so that the home partition is actually used as HOME?
Another approach is to add a manually generated fstab in a custom layer.
/dev/root / auto defaults 1 1
proc /proc proc defaults 0 0
devpts /dev/pts devpts mode=0620,ptmxmode=0666,gid=5 0 0
tmpfs /run tmpfs mode=0755,nodev,nosuid,strictatime 0 0
tmpfs /var/volatile tmpfs defaults 0 0
/dev/mmcblk1p2 /home auto defaults 0 0
I've done that, and it works, although I don't really like having to hard-code the SD card partition name. For example, if I add a third partition between root and home, the home partition becomes mmcblk1p3 and I have to edit fstab as well.
It looks like the root partition just points to /dev/root; I suppose that works because the partition is labeled root. (On a side note, /dev/root doesn't exist in the root filesystem; I checked with ls -la /dev.) I tried /dev/home, but it didn't work.
Is there a way to declare the home partition in fstab without using the "low level" SD partition name, but something more generic/portable?
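One possibility (my own sketch, not something from the linked thread) is to stop referring to the device node at all and mount by filesystem label: the wks above already labels the partition with --label home, and an fstab entry using LABEL=home keeps working even if the partition numbering changes, as long as the mount implementation in the image understands labels (util-linux mount does; a very minimal BusyBox mount may not). The fstab itself can be shipped from a custom layer via a base-files bbappend, roughly like this:
# recipes-core/base-files/base-files_%.bbappend in a custom layer
# (base-files already installs an fstab from its SRC_URI, so shipping our own
#  copy under files/ is enough; older releases spell it FILESEXTRAPATHS_prepend)
FILESEXTRAPATHS:prepend := "${THISDIR}/files:"

# recipes-core/base-files/files/fstab -- only the added line shown:
LABEL=home           /home                ext4       defaults              0  2
Recent wic versions also have --use-uuid/--use-label options for the part command that are supposed to generate PARTUUID/LABEL based fstab entries, but I haven't verified how they behave with a setup like this.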

CentOS not using available memory

I have CentOS installed on a server with 64 GB of memory, and it seems as if memory usage is being suppressed.
I came to this conclusion by running an insert statement that inserts 10 million rows into a PostgreSQL table, in both a TimescaleDB and a standard PostgreSQL instance hosted on Docker.
I monitored the insert process in three different ways:
docker stats for the timescaledb container:
CONTAINER CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
timescaledb 73.14% 10.42 MiB / 62.75 GiB 0.02% 8.46 kB / 8.39 kB 0 B / 15.1 GB 12
top gives the following:
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
16298 avahi 20 0 16.2g 762356 759908 R 41.5 1.2 0:22.72 postgres
16127 avahi 20 0 16.2g 693080 691968 S 4.3 1.1 0:01.29 postgres
16129 avahi 20 0 16.2g 17748 16712 S 2.3 0.0 0:00.87 postgres
1578 root 30 10 1232780 86976 11568 S 0.7 0.1 0:46.34 osqueryd
17014 root 20 0 162264 2480 1596 R 0.7 0.0 0:00.03 top
928 root 20 0 90608 3212 2352 S 0.3 0.0 0:03.47 rngd
16128 avahi 20 0 16.2g 132064 131016 S 0.3 0.2 0:00.18 postgres
free -h gives the following:
total used free shared buff/cache available
Mem: 62G 1.0G 58G 1.1G 3.1G 56G
Swap: 62G 0B 62G
I know that TimescaleDB is an extension of PostgreSQL that comes with its own memory configuration, but the TimescaleDB Docker container sets these values automatically for you (for instance, effective_cache_size is set to 48 GB as opposed to the 4 GB default that PostgreSQL ships with). I also ran a similar process with Apache Spark with 16 GB assigned to the worker, and it ran into an OOM error. Additionally, I ran a similar test on a different, smaller VM, and there memory usage increased as expected. All of this leads me to believe that it's a CentOS config setting I am missing somewhere, and nothing to do with TimescaleDB/PostgreSQL.
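For reference, the values the running instance actually uses can be checked directly (the container name and user below are assumptions); note that effective_cache_size is only a planner hint, while shared_buffers is what PostgreSQL actually allocates up front:
# Ask the running instance what it actually uses
docker exec -it timescaledb psql -U postgres -c "SHOW shared_buffers;"
docker exec -it timescaledb psql -U postgres -c "SHOW effective_cache_size;"
docker exec -it timescaledb psql -U postgres -c "SHOW work_mem;"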
I have added vm.overcommit_memory = 2 and vm.overcommit_ratio = 95 to /etc/sysctl.conf and ran sysctl -p to apply the settings, but this didn't make a difference. The relevant sysctl settings are now:
kernel.shmall = 8224280
kernel.shmmax = 33686650880
kernel.shmmni = 4096
vm.overcommit_memory = 2
vm.overcommit_ratio = 95
Below is the output from cat /proc/meminfo
MemTotal: 65794240 kB
MemFree: 61098656 kB
MemAvailable: 59252660 kB
Buffers: 2120 kB
Cached: 3467144 kB
SwapCached: 0 kB
Active: 2817620 kB
Inactive: 884816 kB
Active(anon): 1109220 kB
Inactive(anon): 234708 kB
Active(file): 1708400 kB
Inactive(file): 650108 kB
Unevictable: 0 kB
Mlocked: 0 kB
SwapTotal: 65535996 kB
SwapFree: 65535996 kB
Dirty: 88 kB
Writeback: 0 kB
AnonPages: 233188 kB
Mapped: 1175120 kB
Shmem: 1110756 kB
Slab: 204044 kB
SReclaimable: 142700 kB
SUnreclaim: 61344 kB
KernelStack: 7232 kB
PageTables: 14672 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
WritebackTmp: 0 kB
CommitLimit: 128040524 kB
Committed_AS: 18709300 kB
VmallocTotal: 34359738367 kB
VmallocUsed: 408824 kB
VmallocChunk: 34325399548 kB
Percpu: 9216 kB
HardwareCorrupted: 0 kB
AnonHugePages: 96256 kB
CmaTotal: 0 kB
CmaFree: 0 kB
HugePages_Total: 0
HugePages_Free: 0
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
DirectMap4k: 133604 kB
DirectMap2M: 66965504 kB
Is there something I can try to increase my memory usage? Is there a config setting that I am missing somewhere?
Thanks in advance for any help.
PostgreSQL also makes use of the "unused" memory, because it does buffered I/O. That "unused" memory is used by the kernel to cache files, and on a database server those will be the database files. That way, I/O requests from PostgreSQL can be served from the kernel cache rather than causing actual disk reads.
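One way to see this in practice (a small sketch, with no specific values implied) is to watch the page cache grow while the insert is running:
# buff/cache in the free output should climb as PostgreSQL writes data
watch -n 5 free -h

# or sample it periodically; the "cache" column grows while "free" shrinks
vmstat 5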

PulseAudio: Play samples at a set volume

I have built a Raspberry Pi music player with volume control. The issue is that when the volume is very low or very high, the sound effects (button-press bleeps, etc.) are also too weak or too loud. The Raspberry Pi uses PulseAudio (as a system daemon), and this is its PulseAudio setup:
# pactl list short
0 module-udev-detect
1 module-alsa-card device_id="0" name="platform-soc_sound" card_name="alsa_card.platform-soc_sound" namereg_fail=false tsched=yes fixed_latency_range=no ignore_dB=no deferred_volume=yes use_ucm=yes card_properties="module-udev-detect.discovered=1"
2 module-native-protocol-unix auth-cookie-enabled=0
3 module-stream-restore
4 module-device-restore
5 module-default-device-restore
6 module-bluetooth-policy
7 module-bluetooth-discover
8 module-bluez5-discover
9 module-rescue-streams
10 module-always-sink
11 module-switch-on-connect
0 alsa_output.platform-soc_sound.analog-stereo module-alsa-card.c s16le 2ch 44100Hz RUNNING
0 alsa_output.platform-soc_sound.analog-stereo.monitor module-alsa-card.c s16le 2ch 44100Hz IDLE
1 0 0 protocol-native.c s24-32le 2ch 44100Hz
0 protocol-native.c mpd
48 protocol-native.c pactl
0 startup float32le 2ch 44100Hz 3.279
1 beep_60 float32le 1ch 44100Hz 0.119
2 beep_70 float32le 1ch 44100Hz 0.119
3 beep_60_70 float32le 1ch 44100Hz 0.166
4 error float32le 2ch 44100Hz 0.702
5 bt float32le 1ch 44100Hz 0.264
0 alsa_card.platform-soc_sound module-alsa-card.c
I play the samples using:
pactl play-sample startup
This command can take an additional parameter, namely the PulseAudio sink on which to play.
The solution seems to be to somehow create a second sink, attach it to the ALSA card, and use that to play the samples; I assume this sink would have its own volume control.
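A sketch of that idea with standard PulseAudio modules (nothing here has been verified on this exact player, and it assumes flat volumes are disabled in daemon.conf, as Debian-based images usually do): route the music through a remap sink and move the volume control there, so the real ALSA sink can stay at a fixed level and samples played on it are unaffected.
# Virtual sink for the music player, layered on the real ALSA sink
pactl load-module module-remap-sink sink_name=music \
      master=alsa_output.platform-soc_sound.analog-stereo

# The player's volume control should now act on the "music" sink ...
pactl set-sink-volume music 30%

# ... while the hardware sink stays at a fixed, sensible level
pactl set-sink-volume alsa_output.platform-soc_sound.analog-stereo 85%

# Samples played directly on the hardware sink are no longer scaled by the music volume
pactl play-sample startup alsa_output.platform-soc_sound.analog-stereo
The music player (mpd in the listing above) then has to be pointed at the music sink, for example via the sink option of its PulseAudio output.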

What do these Windbg error messages mean?

I'm trying to do a !heap -s in Windbg to get heap information. When I attempt it I get the following output:
Heap Flags Reserv Commit Virt Free List UCR Virt Lock Fast
(k) (k) (k) (k) length blocks cont. heap
-----------------------------------------------------------------------------
00000000005d0000 08000002 512 28 512 10 3 1 0 0
Error: Heap 0000000000000000 has an invalid signature eeffeeff
Front-end heap type info is not available
Front-end heap type info is not available
Virtual block: 0000000000000000 - 0000000000000000 (size 0000000000000000)
HEAP 0000000000000000 (Seg 0000000000000000) At 0000000000000000 Error: Unable to read virtual block
0000000000000000 00000000 0 0 0 0 0 0 1 0
-----------------------------------------------------------------------------
I can't find any reference as to what the unusual error/not available lines mean.
Can someone please give me a summary as to why I'm not getting an expected list of heaps?
The only thing I execute prior to !heap -s is !wow64exts.sw, because the process dumps are from a 32-bit process but were created by the 64-bit Task Manager.
After testing with the 32-bit and 64-bit Task Managers, it appears that process dumps of 32-bit processes created by the 64-bit Task Manager can only be debugged successfully in some areas, even when using !wow64exts.sw in Windbg to switch to 32-bit debugging.
That extension allows call stacks to be reviewed correctly, but !heap -s does not appear to work correctly under it; instead you end up with the errors shown in the question.
For example, the output from a process dump of the 32 bit process using the 32 bit Task Manager:
0:000> !heap -s
NtGlobalFlag enables following debugging aids for new heaps:
stack back traces
LFH Key : 0x06b058a2
Termination on corruption : DISABLED
Heap Flags Reserv Commit Virt Free List UCR Virt Lock Fast
(k) (k) (k) (k) length blocks cont. heap
-----------------------------------------------------------------------------
031b0000 08000002 1024 236 1024 2 13 1 0 0 LFH
001d0000 08001002 1088 188 1088 18 9 2 0 0 LFH
01e30000 08001002 1088 160 1088 4 3 2 0 0 LFH
03930000 08001002 256 4 256 2 1 1 0 0
038a0000 08001002 64 16 64 13 1 1 0 0
-----------------------------------------------------------------------------
The output from a process dump of the 32 bit process using the 64 bit Task Manager without !wow64exts.sw:
0:000> !heap -s
NtGlobalFlag enables following debugging aids for new heaps:
stack back traces
LFH Key : 0x000000b406b058a2
Termination on corruption : ENABLED
Heap Flags Reserv Commit Virt Free List UCR Virt Lock Fast
(k) (k) (k) (k) length blocks cont. heap
-------------------------------------------------------------------------------------
0000000001f70000 08000002 512 28 512 10 3 1 0 0
0000000000020000 08008000 64 4 64 1 1 1 0 0
-------------------------------------------------------------------------------------
The output from a process dump of the 32 bit process using the 64 bit Task Manager with !wow64exts.sw:
0:000> !wow64exts.sw
Switched to 32bit mode
0:000:x86> !heap -s
NtGlobalFlag enables following debugging aids for new heaps:
stack back traces
LFH Key : 0x000000b406b058a2
Termination on corruption : ENABLED
Heap Flags Reserv Commit Virt Free List UCR Virt Lock Fast
(k) (k) (k) (k) length blocks cont. heap
-----------------------------------------------------------------------------
0000000001f70000 08000002 512 28 512 10 3 1 0 0
Error: Heap 0000000000000000 has an invalid signature eeffeeff
Front-end heap type info is not available
Front-end heap type info is not available
Virtual block: 0000000000000000 - 0000000000000000 (size 0000000000000000)
HEAP 0000000000000000 (Seg 0000000000000000) At 0000000000000000 Error: Unable to read virtual block
0000000000000000 00000000 0 0 0 0 0 0 1 0
-----------------------------------------------------------------------------
Those were all taken from the same process.

Comprehensive methods of viewing memory usage on Solaris [closed]

On Linux, the "top" command shows a detailed but high level overview of your memory usage, showing:
Total Memory, Used Memory, Free Memory, Buffer Usage, Cache Usage, Swap size and Swap Usage.
My question is, what commands are available to show these memory usage figures in a clear and simple way? Bonus points if they're present in the "Core" install of Solaris. 'sar' doesn't count :)
Here are the basics. I'm not sure that any of these count as "clear and simple" though.
ps(1)
For process-level view:
$ ps -opid,vsz,rss,osz,args
PID VSZ RSS SZ COMMAND
1831 1776 1008 222 ps -opid,vsz,rss,osz,args
1782 3464 2504 433 -bash
$
vsz/VSZ: total virtual process size (kb)
rss/RSS: resident set size (kb, may be inaccurate(!), see man)
osz/SZ: total size in memory (pages)
To compute byte size from pages:
$ sz_pages=$(ps -o osz -p $pid | grep -v SZ )
$ sz_bytes=$(( $sz_pages * $(pagesize) ))
$ sz_mbytes=$(( $sz_bytes / ( 1024 * 1024 ) ))
$ echo "$pid OSZ=$sz_mbytes MB"
vmstat(1M)
$ vmstat 5 5
kthr memory page disk faults cpu
r b w swap free re mf pi po fr de sr rm s3 -- -- in sy cs us sy id
0 0 0 535832 219880 1 2 0 0 0 0 0 -0 0 0 0 402 19 97 0 1 99
0 0 0 514376 203648 1 4 0 0 0 0 0 0 0 0 0 402 19 96 0 1 99
^C
prstat(1M)
PID USERNAME SIZE RSS STATE PRI NICE TIME CPU PROCESS/NLWP
1852 martin 4840K 3600K cpu0 59 0 0:00:00 0.3% prstat/1
1780 martin 9384K 2920K sleep 59 0 0:00:00 0.0% sshd/1
...
swap(1)
"Long listing" and "summary" modes:
$ swap -l
swapfile dev swaplo blocks free
/dev/zvol/dsk/rpool/swap 256,1 16 1048560 1048560
$ swap -s
total: 42352k bytes allocated + 20192k reserved = 62544k used, 607672k available
$
top(1)
An older version (3.51) is available on the Solaris companion CD from Sun, with the disclaimer that this is "Community (not Sun) supported".
More recent binary packages are available from sunfreeware.com or blastwave.org.
load averages: 0.02, 0.00, 0.00; up 2+12:31:38 08:53:58
31 processes: 30 sleeping, 1 on cpu
CPU states: 98.0% idle, 0.0% user, 2.0% kernel, 0.0% iowait, 0.0% swap
Memory: 1024M phys mem, 197M free mem, 512M total swap, 512M free swap
PID USERNAME LWP PRI NICE SIZE RES STATE TIME CPU COMMAND
1898 martin 1 54 0 3336K 1808K cpu 0:00 0.96% top
7 root 11 59 0 10M 7912K sleep 0:09 0.02% svc.startd
sar(1M)
And just what's wrong with sar? :)
# echo ::memstat | mdb -k
Page Summary Pages MB %Tot
------------ ---------------- ---------------- ----
Kernel 7308 57 23%
Anon 9055 70 29%
Exec and libs 1968 15 6%
Page cache 2224 17 7%
Free (cachelist) 6470 50 20%
Free (freelist) 4641 36 15%
Total 31666 247
Physical 31256 244
"top" is usually available on Solaris.
If not then revert to "vmstat" which is available on most UNIX system.
It should look something like this (from an AIX box)
vmstat
System configuration: lcpu=4 mem=12288MB ent=2.00
kthr memory page faults cpu
----- ----------- ------------------------ ------------ -----------------------
r b avm fre re pi po fr sr cy in sy cs us sy id wa pc ec
2 1 1614644 585722 0 0 1 22 104 0 808 29047 2767 12 8 77 3 0.45 22.3
the colums "avm" and "fre" tell you the total memory and free memery.
a "man vmstat" should get you the gory details.
Top can be compiled from sources or downloaded from sunfreeware.com. As previously posted, vmstat is available (I believe it's in the core install?).
The command free is nice. It takes a short while to understand the "+/- buffers/cache" line, but the idea is that cache and buffers don't really count when evaluating how much memory is "free", since they can be dropped right away. Therefore, to see how much free (and used) memory you really have, you need to subtract the cache/buffer usage, which is conveniently done for you.
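As a worked illustration (the numbers are invented), older versions of free print the adjustment on a separate line:
$ free -m
             total       used       free     shared    buffers     cached
Mem:          7886       7680        206        110        480       3100
-/+ buffers/cache:       4100       3786
Swap:         8059          0       8059
Here 7680 - 480 - 3100 = 4100 MB is what applications really use, and 206 + 480 + 3100 = 3786 MB is what would be free if the kernel dropped its caches. Newer versions of free roll roughly the same idea into an "available" column.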