Matlab - Smoothing crooked and aliased lines in plots

I'm sure this has an easy solution, but I can't find a way out of it. When I plot a thick line in Matlab and print it at 550 dpi (-r550), I get the crooked line seen below. I tried the 'smooth' command to no avail. Here's the code:
plot(x,y1,'b','LineWidth',8);hold on
plot(x,y2,'r','LineWidth',8);hold on
print -djpeg -r550 figure1
Here are the blue line's values (y1):
y1 = [1.9 1.81 1.73 1.65 1.63 1.6 1.65 1.59 1.54 1.52 1.47 1.52 1.53 1.48 1.44 1.43 1.39 1.38 1.34 1.33 1.33 1.32 1.29 1.26 1.23 1.22 1.24 1.23 1.21 1.22 1.22 1.2 1.25 1.25 1.22 1.22 1.2 1.18 1.19 1.17 1.15 1.13 1.15 1.13 1.11 1.09 1.08 1.07 1.12 1.1 1.1 1.08 1.08 1.07 1.05 1.04 1.03 1.01 1.01 1.01 1.01 1 1 1 1 1 0.99 1.01 1.01 1.01 1 0.99 0.98 0.98 0.98 0.97 0.97 0.97 0.97 0.97 0.97 0.96 0.96 0.96 0.96 0.95 0.95 0.99 0.98 1 0.99 0.98 0.98 0.98 0.98 0.98 0.97 0.97 0.97 0.96 0.95 0.94 0.94 0.93 0.93 0.93 0.92 0.93 0.92 0.91 0.92 0.92 0.91 0.92 0.93 0.92 0.91 0.91 0.91 0.9 0.89 0.89 0.89 0.88 0.88];
Any help to make it look nice and smooth? Thanks!
--------- UPDATE: WITH -dmeta ---------
I used a resolution of 600 dpi. The resulting figure doesn't look bad, but in a Word file or PowerPoint it doesn't show as well. Any ideas?

If you work in Windows, a good option is to export to a Windows Enhanced Metafile (.emf) instead of JPEG:
print -dmeta figure1
Because it is a vector format, it also looks much better in MS Office documents.
You can always convert the EMF to JPEG later if required.
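For example, a minimal sketch that reuses the plotting code from the question (figure name as in the question):
plot(x,y1,'b','LineWidth',8); hold on
plot(x,y2,'r','LineWidth',8);
print -dmeta figure1   % writes figure1.emf, a vector file that scales cleanly in Word/PowerPoint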

This problem is caused by how MATLAB renders objects when saving figures.
One thing you could try is the HG2 update to MATLAB (link). It is a major improvement to how visually appealing graphics are; however, it can cause MATLAB to crash.
A workaround that you might find good enough is to add a marker plot for each data line, such as:
plot(x,y1,'.b','MarkerSize',24);
Placing a marker at each node will fill in the rough edges of the plot. You might have to play around with the marker size a little.
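For example, a combined sketch based on the question's code (the marker size is only a starting point to tune):
plot(x,y1,'b','LineWidth',8); hold on
plot(x,y1,'.b','MarkerSize',24);   % markers round off the jagged joints of the thick line
plot(x,y2,'r','LineWidth',8);
plot(x,y2,'.r','MarkerSize',24);
print -djpeg -r550 figure1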

Related

Errorbar formatting in Octave

I was making a two-dimensional plot in Octave; my code is given below:
A=dlmread('data.txt');
x=A(:,1);
y=A(:,2);
err=A(:,3);
errorbar(x,y,err,'or','markerfacecolor','r','markersize',5)
The issue is that markerfacecolor and markersize are not working. How can I solve this?
The error shown in the command window is as follows:
error: errorbar: data argument 5 must be numeric
error: called from
__errplot__ at line 44 column 7
errorbar at line 184 column 10
rbar at line 5 column 1
This code plots fine if I remove markerfacecolor and markersize; that is, it produces output without those properties instead of showing an error in the command window. Please help.
The file data.txt is of this form
1.0 3.1 0.21
2.0 4.1 0.29
3.1 5.2 0.42
4.0 6.1 0.53
4.9 7.7 0.63
6.0 8.0 0.72
6.0 9.0 0.75
7.0 13.1 0.21
8.0 23.1 0.21
9.0 29.3 0.21
10.0 30.1 8.21
11.1 28.7 2.1
12.0 23.1 2.2
13.1 18.1 1.61
Octave is treating the extra arguments as another data series (hence "data argument 5 must be numeric"), so set markerfacecolor and markersize after drawing your error bars, i.e.:
h = errorbar(x,y,err,'or');
set(h,'markerfacecolor','r','markersize',5);
Result:

Postgres memory performance vs sharedfileset in pgsql_tmp directory

We are running a Postgres cluster with 32 CPUs and 84 GB RAM in production, with a database size close to ~800 GB.
It also has a replica set up with streaming WAL replication to another host kept as a hot standby.
Linux 2.6.32-642.6.2.el6.x86_64 (CentOS 6.4, upgrade pending)
Postgres Version - 11.11
Since last month we have seen some weird behaviour: the instance's memory usage drops roughly every 80 hours, even though no cron jobs run at that moment (our cron jobs run at specified times: hourly/daily/weekly/monthly).
At the same time we see huge (~70 GB) temp files, e.g. pgsql_tmp90128.0.sharedfileset/i231326of1048576.p2.0, being created and not getting deleted, even after Postgres starts working fine again about 10 minutes later.
My related settings for Postgres memory utilization are the following:
work_mem 8 MB
temp_buffers 8 MB
shared_buffers 22 GB
max_connections 2500
During the window we see system CPU shoot up and then come down again once the memory is freed.
$ sar -f /var/log/sa/sa17 -s 08:36:00 -e 08:47:00
08:36:21 AM CPU %user %nice %system %iowait %steal %idle
08:37:21 AM all 18.66 0.01 26.05 9.54 0.00 45.74
08:38:21 AM all 5.21 0.01 81.71 2.11 0.00 10.96
08:39:21 AM all 2.71 0.01 96.90 0.04 0.00 0.34
08:40:08 AM all 1.55 0.00 98.44 0.00 0.00 0.00
08:41:08 AM all 9.96 0.02 38.49 2.20 0.00 49.34
08:42:08 AM all 0.12 0.00 1.50 1.63 0.00 96.74
08:43:08 AM all 0.10 0.01 1.32 1.79 0.00 96.79
08:44:08 AM all 0.20 0.01 2.25 0.76 0.00 96.79
08:45:08 AM all 1.74 0.01 1.83 5.89 0.00 90.53
08:46:08 AM all 10.09 0.01 7.37 18.90 0.00 63.63
Average: all 5.11 0.01 34.18 4.36 0.00 56.34
RAM usage pattern
$ sar -r -f /var/log/sa/sa17 -s 08:36:00 -e 08:47:00
08:36:21 AM kbmemfree kbmemused %memused kbbuffers kbcached kbcommit %commit
08:37:21 AM 430888 88277480 99.51 12176 23869496 79753764 87.83
08:38:21 AM 420612 88287756 99.53 7212 23425316 80174100 88.29
08:39:21 AM 424276 88284092 99.52 3552 23246368 81653952 89.92
08:40:08 AM 417172 88291196 99.53 4316 22819904 82023344 90.33
08:41:08 AM 84692800 4015568 4.53 978760 1124852 25077588 27.62
08:42:08 AM 78749436 9958932 11.23 2098912 1139416 25067128 27.61
08:43:08 AM 73598564 15109804 17.03 3065228 1140756 25067516 27.61
08:44:08 AM 70213124 18495244 20.85 3175984 1141168 25067016 27.61
08:45:08 AM 56661972 32046396 36.13 3179640 13233168 27063188 29.80
08:46:08 AM 30585252 58123116 65.52 3207036 30364932 32130264 35.38
Average: 39619410 49088958 55.34 1573282 14150538 48307786 53.20
I have the following questions in this respect:
What could be the probable cause of this behaviour of memory usage dropping at regular intervals?
What are these temp files, and is it safe to delete them manually?
Is there a bug in Postgres such that they are not getting cleaned up automatically? (related thread)
I see several runs of CHECKPOINT, and even then the temp files are not getting cleared.
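For reference, cumulative temp-file activity per database can be checked from pg_stat_database (a generic sketch, not specific to this incident):
-- temp_files / temp_bytes are cumulative counters since the last stats reset
SELECT datname, temp_files, temp_bytes
FROM pg_stat_database
ORDER BY temp_bytes DESC;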

CentOS not using available memory

I have CentOS installed on a server with 64 GB of memory, and it seems as if memory usage is being suppressed.
I came to this conclusion by running an insert statement where I insert 10 million rows into a Postgres table in both a TimescaleDB and a standard Postgres instance hosted on Docker.
I monitored the insert process in three different ways:
Docker stats timescaledb:
CONTAINER CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
timescaledb 73.14% 10.42 MiB / 62.75 GiB 0.02% 8.46 kB / 8.39 kB 0 B / 15.1 GB 12
top gives the following:
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
16298 avahi 20 0 16.2g 762356 759908 R 41.5 1.2 0:22.72 postgres
16127 avahi 20 0 16.2g 693080 691968 S 4.3 1.1 0:01.29 postgres
16129 avahi 20 0 16.2g 17748 16712 S 2.3 0.0 0:00.87 postgres
1578 root 30 10 1232780 86976 11568 S 0.7 0.1 0:46.34 osqueryd
17014 root 20 0 162264 2480 1596 R 0.7 0.0 0:00.03 top
928 root 20 0 90608 3212 2352 S 0.3 0.0 0:03.47 rngd
16128 avahi 20 0 16.2g 132064 131016 S 0.3 0.2 0:00.18 postgres
free -h gives the following
total used free shared buff/cache available
Mem: 62G 1.0G 58G 1.1G 3.1G 56G
Swap: 62G 0B 62G
I know that TimescaleDB is an extension of Postgres which comes with its own memory configuration, but the Docker container of TimescaleDB configures this automatically for you (for instance, effective_cache_size is set at 48 GB as opposed to the default 4 GB that Postgres ships with). I also ran a similar process with Apache Spark with 16 GB assigned to the worker and it ran into an OOM error. Additionally, I did a similar test on a different, smaller VM and the memory usage increased as expected. All of this leads me to believe that it's a CentOS config setting that I am missing somewhere, and nothing to do with Timescale/Postgres.
I have set vm.overcommit_memory = 2 and vm.overcommit_ratio = 95 in /etc/sysctl.conf (full list below) and ran sysctl -p to apply the settings, but this didn't make a difference.
kernel.shmall = 8224280
kernel.shmmax = 33686650880
kernel.shmmni = 4096
vm.overcommit_memory = 2
vm.overcommit_ratio = 95
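(Note that the overcommit settings do appear to be active: with vm.overcommit_memory = 2 the kernel computes CommitLimit = SwapTotal + overcommit_ratio% of MemTotal, here 65535996 kB + 0.95 × 65794240 kB ≈ 128040524 kB, which matches the CommitLimit line in the meminfo output below.)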
Below is the output from cat /proc/meminfo
MemTotal: 65794240 kB
MemFree: 61098656 kB
MemAvailable: 59252660 kB
Buffers: 2120 kB
Cached: 3467144 kB
SwapCached: 0 kB
Active: 2817620 kB
Inactive: 884816 kB
Active(anon): 1109220 kB
Inactive(anon): 234708 kB
Active(file): 1708400 kB
Inactive(file): 650108 kB
Unevictable: 0 kB
Mlocked: 0 kB
SwapTotal: 65535996 kB
SwapFree: 65535996 kB
Dirty: 88 kB
Writeback: 0 kB
AnonPages: 233188 kB
Mapped: 1175120 kB
Shmem: 1110756 kB
Slab: 204044 kB
SReclaimable: 142700 kB
SUnreclaim: 61344 kB
KernelStack: 7232 kB
PageTables: 14672 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
WritebackTmp: 0 kB
CommitLimit: 128040524 kB
Committed_AS: 18709300 kB
VmallocTotal: 34359738367 kB
VmallocUsed: 408824 kB
VmallocChunk: 34325399548 kB
Percpu: 9216 kB
HardwareCorrupted: 0 kB
AnonHugePages: 96256 kB
CmaTotal: 0 kB
CmaFree: 0 kB
HugePages_Total: 0
HugePages_Free: 0
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
DirectMap4k: 133604 kB
DirectMap2M: 66965504 kB
Is there maybe something I can try to increase my memory usage? Is there maybe a config setting that I am missing somewhere?
Thanks in advance for any help.
PostgreSQL also uses "unused" memory, because it uses buffered I/O. So this "unused memory" is used by the kernel to cache files – in the case of a database server, these will be database files. That way, I/O requests by PostgreSQL can be served from the kernel cache rather than causing disk I/O requests.
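A quick way to see this effect (the file path below is just a placeholder for any large file on the server):
free -h                                   # note the buff/cache column
cat /path/to/a/large/file > /dev/null     # reading a file pulls it into the kernel page cache
free -h                                   # buff/cache grows; 'available' stays roughly the same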

How to check which process is occupying the most disk I/O in Solaris

I am trying to check which process is taking up the most disk I/O on my Solaris server, as it is running very slowly. Need help.
iostat -xtc
extended device statistics tty cpu
device r/s w/s kr/s kw/s wait actv svc_t %w %b tin tout us sy wt id
sd1 75.9 979.9 113.3 3524.9 0.0 5.4 5.1 0 69 0 53 1 2 0 97
nfs1 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0
It is quite hard with the standard tools; the iotop DTrace script from Brendan Gregg can help you.
Here is the script:
http://www.brendangregg.com/DTrace/iotop
Here is the explanatory paper:
http://www.brendangregg.com/Solaris/paper_diskubyp1.pdf
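A typical invocation might look like this (run as root; check the script's usage output, since options vary between versions of the DTraceToolkit):
chmod +x iotop
./iotop 5    # summarize disk I/O by process, sampling every 5 seconds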

Saving a file in Eclipse makes the processor work hard

I am using Eclipse Juno. Every time I save a file, Eclipse consumes 100% CPU.
Here is a snapshot from the top command:
Tasks: 303 total, 1 running, 301 sleeping, 1 stopped, 0 zombie
%Cpu(s): 31,2 us, 1,4 sy, 0,0 ni, 65,6 id, 0,4 wa, 0,0 hi, 1,4 si, 0,0 st
KiB Mem: 8077332 total, 5122068 used, 2955264 free, 509476 buffers
KiB Swap: 8252412 total, 0 used, 8252412 free, 2242736 cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
3816 iwan 20 0 1141m 410m 35m S 100,9 5,2 59:00.47 eclipse
3882 iwan 20 0 594m 162m 52m S 2,3 2,1 6:09.30 skype
2646 iwan 20 0 309m 82m 32m S 2,0 1,0 9:05.18 compiz
3894 iwan 20 0 851m 171m 42m S 2,0 2,2 3:00.66 thunderbird
1305 root 20 0 266m 68m 55m S 1,3 0,9 7:55.87 Xorg
Any ideas?
Apply any available updates. If problems continue, keep an eye on http://bugs.eclipse.org/402018 .