I have a server with great specs: dual 6-core 3.3 GHz CPUs, 196 GB of RAM, and RAID 10 across four 10K SAS drives. I wrote a script that downloads each of the North America files and processes them one by one rather than the entire region all at once.
processList.sh:
wget http://download.geofabrik.de/north-america/us/alabama-latest.osm.pbf -O ./geoFiles/north-america/us/alabama-latest.osm.pbf
osm2pgsql -d gis --create --slim -G --hstore --tag-transform-script ~/src/openstreetmap-carto/openstreetmap-carto.lua -C 2000 --number-processes 15 -S ~/src/openstreetmap-carto/openstreetmap-carto.style ./geoFiles/north-america/us/alabama-latest.osm.$
while read -r in; do
  wget "http://download.geofabrik.de/$in" -O "./geoFiles/$in"
  osm2pgsql -d gis --append --slim -G --hstore --tag-transform-script ~/src/openstreetmap-carto/openstreetmap-carto.lua -C 2000 --number-processes 15 -S ~/src/openstreetmap-carto/openstreetmap-carto.style "./geoFiles/$in"
done < maplist.txt
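The same loop can be sketched in Python with explicit error checking (a sketch, not a drop-in replacement: it assumes maplist.txt holds paths relative to the Geofabrik root, e.g. north-america/us/alaska-latest.osm.pbf, and mirrors the osm2pgsql flags above):

```python
import subprocess
from pathlib import Path

GEOFABRIK = "http://download.geofabrik.de"
CARTO = Path.home() / "src/openstreetmap-carto"

def build_commands(region):
    """Return (wget, osm2pgsql) argument lists for one region path,
    mirroring the flags used in processList.sh above."""
    local = Path("geoFiles") / region
    wget = ["wget", f"{GEOFABRIK}/{region}", "-O", str(local)]
    osm2pgsql = [
        "osm2pgsql", "-d", "gis", "--append", "--slim", "-G", "--hstore",
        "--tag-transform-script", str(CARTO / "openstreetmap-carto.lua"),
        "-C", "2000", "--number-processes", "15",
        "-S", str(CARTO / "openstreetmap-carto.style"),
        str(local),
    ]
    return wget, osm2pgsql

def process_list(listfile="maplist.txt"):
    """Download and import each region in sequence; check=True stops
    the run on the first failed download or import."""
    for line in Path(listfile).read_text().splitlines():
        region = line.strip()
        if not region:
            continue
        for cmd in build_commands(region):
            subprocess.run(cmd, check=True)
```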
At first it processes nearly 400K points/second, then slows to 10K or less.
osm2pgsql version 0.96.0 (64 bit id space)
Using lua based tag processing pipeline with script /root/src/openstreetmap-carto/openstreetmap-carto.lua
Using projection SRS 3857 (Spherical Mercator)
Setting up table: planet_osm_point
Setting up table: planet_osm_line
Setting up table: planet_osm_polygon
Setting up table: planet_osm_roads
Allocating memory for dense node cache
Allocating dense node cache in one big chunk
Allocating memory for sparse node cache
Sharing dense sparse
Node-cache: cache=2000MB, maxblocks=32000*65536, allocation method=11
Mid: pgsql, cache=2000
Setting up table: planet_osm_nodes
Setting up table: planet_osm_ways
Setting up table: planet_osm_rels
Reading in file: ./geoFiles/north-america/us/alabama-latest.osm.pbf
Using PBF parser.
Processing: Node(5580k 10.7k/s) Way(0k 0.00k/s) Relation(0 0.00/s))
I applied the performance tuning from https://wiki.openstreetmap.org/wiki/Osm2pgsql/benchmarks for PostgreSQL:
shared_buffers = 14GB
work_mem = 1GB
maintenance_work_mem = 8GB
effective_io_concurrency = 500
max_worker_processes = 8
max_parallel_workers_per_gather = 2
max_parallel_workers = 8
checkpoint_timeout = 1h
max_wal_size = 5GB
min_wal_size = 1GB
checkpoint_completion_target = 0.9
random_page_cost = 1.1
min_parallel_table_scan_size = 8MB
min_parallel_index_scan_size = 512kB
effective_cache_size = 22GB
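For comparison, the usual community rules of thumb for a dedicated database server can be encoded in a small helper (a hypothetical sketch; the fractions are generic starting points from common PostgreSQL tuning guidance, not values specific to osm2pgsql):

```python
def suggest_memory_settings(ram_gb):
    """Rule-of-thumb PostgreSQL memory settings for a dedicated server.

    Percentages follow common community guidance (shared_buffers about a
    quarter of RAM, effective_cache_size around three quarters); they are
    starting points, not measured optima.
    """
    return {
        "shared_buffers": f"{ram_gb // 4}GB",
        "effective_cache_size": f"{ram_gb * 3 // 4}GB",
        # very large values give diminishing returns, so cap at 8GB
        "maintenance_work_mem": f"{min(ram_gb // 8, 8)}GB",
    }

print(suggest_memory_settings(196))
```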
Though it starts out well, it deteriorates within about 20 seconds. Any idea why? I looked at top, but it didn't reveal much:
top - 22:48:46 up 3:11, 2 users, load average: 3.49, 4.03, 3.38
Tasks: 298 total, 1 running, 297 sleeping, 0 stopped, 0 zombie
%Cpu(s): 0.0 us, 0.0 sy, 0.0 ni, 87.5 id, 12.5 wa, 0.0 hi, 0.0 si, 0.0 st
KiB Mem : 19808144+total, 19237500+free, 780408 used, 4926040 buff/cache
KiB Swap: 29321212 total, 29321212 free, 0 used. 19437014+avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
16156 root 20 0 50.819g 75920 8440 S 0.7 0.0 0:02.81 osm2pgsql
16295 root 20 0 42076 4156 3264 R 0.3 0.0 0:00.27 top
1 root 20 0 37972 6024 4004 S 0.0 0.0 0:07.10 systemd
2 root 20 0 0 0 0 S 0.0 0.0 0:00.02 kthreadd
3 root 20 0 0 0 0 S 0.0 0.0 0:00.05 ksoftirqd/0
5 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 kworker/0:0H
6 root 20 0 0 0 0 S 0.0 0.0 0:00.58 kworker/u64:0
8 root 20 0 0 0 0 S 0.0 0.0 0:01.79 rcu_sched
9 root 20 0 0 0 0 S 0.0 0.0 0:00.00 rcu_bh
10 root rt 0 0 0 0 S 0.0 0.0 0:00.05 migration/0
11 root rt 0 0 0 0 S 0.0 0.0 0:00.03 watchdog/0
The load average was high even though nothing was listed as using the CPU. Here are the results from iotop:
Total DISK READ : 0.00 B/s | Total DISK WRITE : 591.32 K/s
Actual DISK READ: 0.00 B/s | Actual DISK WRITE: 204.69 K/s
TID PRIO USER DISK READ DISK WRITE SWAPIN IO> COMMAND
28638 be/4 root 0.00 B/s 0.00 B/s 0.00 % 0.60 % [kworker/u65:1]
20643 be/4 postgres 0.00 B/s 204.69 K/s 0.00 % 0.10 % postgres: wal writer process
20641 be/4 postgres 0.00 B/s 288.08 K/s 0.00 % 0.00 % postgres: checkpointer process
26923 be/4 postgres 0.00 B/s 98.55 K/s 0.00 % 0.00 % postgres: root gis [local] idle in transaction
1 be/4 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % init
2 be/4 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [kthreadd]
3 be/4 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [ksoftirqd/0]
5 be/0 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [kworker/0:0H]
6 be/4 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [kworker/u64:0]
8 be/4 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [rcu_sched]
9 be/4 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [rcu_bh]
10 rt/4 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [migration/0]
11 rt/4 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [watchdog/0]
12 rt/4 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [watchdog/1]
I was writing a PowerShell script using a pipeline with a Process block and it started doing something unexpected: listing all the running processes and then dumping the script contents. I kept minimizing the script to try to figure out what was going on and ended up with this:
[CmdletBinding()]
Param()
$varname = "huh"
Process
{
# nothing here
}
So it looks like this:
PS /Volumes/folder> cat ./test.ps1
[CmdletBinding()]
Param()
$varname = "huh"
Process
{
# nothing here
}
PS /Volumes/folder> pwsh ./test.ps1
NPM(K) PM(M) WS(M) CPU(s) Id SI ProcessName
------ ----- ----- ------ -- -- -----------
0 0.00 0.00 0.00 0 639
0 0.00 0.00 0.00 1 1
0 0.00 0.00 0.00 60 60
0 0.00 0.00 0.00 61 61
0 0.00 0.00 0.00 65 65
0 0.00 0.00 0.00 67 67
0 0.00 0.00 0.00 68 68
0 0.00 0.00 0.00 69 69
0 0.00 0.00 0.00 71 71
0 0.00 0.00 0.00 73 73
0 0.00 0.00 0.00 75 75
0 0.00 25.60 75.82 68475 1 Activity Monito
0 0.00 11.74 97.63 1053 1 Adobe Crash Han
0 0.00 11.76 97.62 1084 1 Adobe Crash Han
0 0.00 11.69 97.64 1392 1 Adobe Crash Han
0 0.00 112.50 83.59 973 1 Adobe Desktop S
0 0.00 11.94 97.31 986 1 AdobeCRDaemon
0 0.00 16.95 105.99 966 1 AdobeIPCBroker
0 0.00 61.52 168.92 721 1 Adobe_CCXProces
0 0.00 18.57 3.01 454 1 adprivacyd
0 0.00 16.46 23.16 700 1 AGMService
0 0.00 13.65 4.43 701 1 AirPlayUIAgent
--snip--
0 0.00 9.11 12.72 89003 …03 VTDecoderXPCSer
0 0.00 13.32 4.69 418 1 WiFiAgent
0 0.00 12.21 1.58 543 543 WiFiProxy
# nothing here
I haven't done much in PowerShell for a long time, so if this is something stupidly simple I'm going to laugh, but I couldn't find anything searching the net.
Can someone tell me what's happening?
In order to use a process block (possibly alongside a begin, end, and, in v7.3+, the clean block), there must not be any code OUTSIDE these blocks - see the conceptual about_Functions help topic.
Therefore, remove $varname = "huh" from the top-level scope of your function body (possibly move it into one of the aforementioned blocks).
As for what you tried:
By having $varname = "huh" in the top-level scope of your function body, you've effectively turned the function into one whose code runs in an implicit end block only.
process - because it is on its own line - was then interpreted as a command, which - due to the best-avoided default-verb logic - was interpreted as an argument-less call to the Get-Process cmdlet.
The output therefore included the list of all processes on your system.
The { ... } on the subsequent lines was then interpreted as a script block literal. Since that script block wasn't invoked, it was implicitly output, which results in its stringification: its verbatim content, excluding { and }, resulting in output of the following string:
# nothing here
I am using Ubuntu 20.04 and PostgreSQL 12.
I have 128 GB of memory and a 1 TB SSD. The CPU is an i7 (16 cores, 20 threads).
I wrote a simple C++ program that connects to PostgreSQL and generates tile maps (just map images).
It's similar to osmtilemaker.
Once the program starts, it takes several hours to several months to finish the job.
For the first 4-5 hours, it runs well.
I monitored memory usage, and it does not occupy more than 10% of total memory at most.
Here is the screenshot of the top command:
Tasks: 395 total, 16 running, 379 sleeping, 0 stopped, 0 zombie
%Cpu(s): 70.3 us, 2.9 sy, 0.0 ni, 26.8 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
MiB Mem : 128565.5 total, 111124.7 free, 11335.7 used, 6105.1 buff/cache
MiB Swap: 2048.0 total, 2003.6 free, 44.4 used. 115108.6 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
2217811 postgres 20 0 2369540 1.7g 678700 R 99.7 1.3 25:27.36 postgres
2217407 postgres 20 0 2393448 1.7g 678540 R 99.0 1.4 25:32.04 postgres
2217836 postgres 20 0 2352936 1.7g 679348 R 98.0 1.3 25:26.22 postgres
2217715 postgres 20 0 2368268 1.7g 680144 R 97.7 1.3 25:29.78 postgres
2217684 postgres 20 0 2384308 1.7g 679248 R 97.3 1.4 25:29.49 postgres
2217539 postgres 20 0 2386156 1.7g 680124 R 97.0 1.4 25:30.46 postgres
2216651 postgres 20 0 2429348 1.8g 678128 R 95.7 1.4 26:05.99 postgres
2217025 postgres 20 0 2396408 1.7g 679292 R 94.4 1.4 25:51.85 postgres
2238487 postgres 20 0 1294752 83724 54024 R 14.3 0.1 0:00.43 postgres
2238488 postgres 20 0 1294968 219304 189116 R 14.0 0.2 0:00.42 postgres
2238489 postgres 20 0 1294552 85624 56068 R 12.6 0.1 0:00.38 postgres
2062928 j 20 0 861492 536088 47396 S 6.6 0.4 19:18.64 mapTiler
2238490 postgres 20 0 1290132 73244 48084 R 6.3 0.1 0:00.19 postgres
2238491 postgres 20 0 1289876 73064 48160 R 6.3 0.1 0:00.19 postgres
928763 postgres 20 0 1181720 61368 59300 S 0.7 0.0 11:59.45 postgres
1306124 j 20 0 19668 2792 2000 S 0.3 0.0 0:06.84 screen
2238492 postgres 20 0 1273864 49192 40108 R 0.3 0.0 0:00.01 postgres
2238493 postgres 20 0 1273996 50172 40852 R 0.3 0.0 0:00.01 postgres
1 root 20 0 171468 9564 4864 S 0.0 0.0 0:09.40 systemd
2 root 20 0 0 0 0 S 0.0 0.0 0:00.02 kthreadd
I used 8 threads in the program, so 8 postgres processes use a lot of CPU,
but memory usage is always below 10%.
But after 4-5 hours, the oom-killer killed the postgres processes and the program stopped running.
Here is the result from dmesg:
[62585.503398] oom-kill:constraint=CONSTRAINT_NONE,nodemask=(null),
cpuset=/,mems_allowed=0,global_oom,
task_memcg=/system.slice/system-postgresql.slice/postgresql#12-main.service,
task=postgres,pid=463942, uid=129
[62585.503406] Out of memory: Killed process 463942 (postgres)
total-vm:19010060kB, anon-rss:17369476kB, file-rss:0kB,
shmem-rss:848380kB, UID:129 pgtables:36776kB oom_score_adj:0
It looks like an out-of-memory error.
But how can that happen when I have more than 100 GB of free memory?
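The dmesg record itself carries useful numbers: the killed backend had roughly 17 GB of anonymous RSS, far above the ~10% steady state seen in top. A small parser for these oom-killer fields (a hypothetical helper; the field names are taken from the output above) makes such records easier to read:

```python
import re

def parse_oom_line(line):
    """Extract the kB-valued fields from an 'Out of memory' dmesg record
    and convert them to whole MB."""
    fields = dict(re.findall(r"(\S+?):(\d+)kB", line))
    return {name: int(kb) // 1024 for name, kb in fields.items()}

record = ("Out of memory: Killed process 463942 (postgres) "
          "total-vm:19010060kB, anon-rss:17369476kB, file-rss:0kB, "
          "shmem-rss:848380kB")
print(parse_oom_line(record))
```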
A full import of https://download.geofabrik.de/europe-latest.osm.pbf gets stuck as soon as it gets to ways.
Processing: Node(2605985k 456.3k/s) Way(638k 0.18k/s) Relation(0 0.0/s)
I have tried it both with osm2pgsql v1.20, which comes with Nominatim v3.5.1, and with osm2pgsql 1.3.0, the most up-to-date version.
I'm running the import on a 96-core machine with 50GB of RAM and a 750GB SSD. It has previously worked very quickly for other regions. My --osm2pgsql-cache is set to 25000, which is enough to hold the entire pbf file in memory.
My postgres.conf is:
# Defaults from postgres db init
max_connections = 100 # (change requires restart)
dynamic_shared_memory_type = posix # the default is the first option
min_wal_size = 80MB
log_timezone = 'Etc/UTC'
datestyle = 'iso, mdy'
timezone = 'Etc/UTC'
lc_messages = 'C.UTF-8' # locale for system error message
lc_monetary = 'C.UTF-8' # locale for monetary formatting
lc_numeric = 'C.UTF-8' # locale for number formatting
lc_time = 'C.UTF-8' # locale for time formatting
# Based on https://wiki.openstreetmap.org/wiki/PostgreSQL
shared_buffers = 4GB
maintenance_work_mem = 10GB
autovacuum_work_mem = 1GB
work_mem = 256MB
checkpoint_timeout = 10min
max_wal_size = 25GB
Output of top as well:
top - 03:21:56 up 7:36, 0 users, load average: 1.10, 1.09, 1.04
Tasks: 19 total, 1 running, 18 sleeping, 0 stopped, 0 zombie
%Cpu(s): 0.1 us, 0.0 sy, 0.0 ni, 99.7 id, 0.2 wa, 0.0 hi, 0.0 si, 0.0 st
MiB Mem : 87092.1 total, 30475.4 free, 27253.4 used, 29363.3 buff/cache
MiB Swap: 0.0 total, 0.0 free, 0.0 used. 54874.0 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
38 postgres 20 0 4404008 4.1g 4.1g S 1.7 4.8 1:30.25 postgres
169 postgres 20 0 4392784 4.1g 4.1g D 1.3 4.8 4:41.63 postgres
165 nominat+ 20 0 27.2g 24.7g 11836 S 1.0 29.1 31:39.34 osm2pgsql
39 postgres 20 0 4391716 4.0g 4.0g S 0.3 4.7 0:23.79 postgres
1 root 20 0 8700 3572 3276 S 0.0 0.0 0:00.01 init.sh
36 postgres 20 0 4391580 127672 125776 S 0.0 0.1 0:00.97 postgres
40 postgres 20 0 4391580 21996 20080 S 0.0 0.0 1:31.52 postgres
41 postgres 20 0 4392120 8544 6328 S 0.0 0.0 0:00.33 postgres
42 postgres 20 0 71768 4808 2912 S 0.0 0.0 0:13.07 postgres
43 postgres 20 0 4392004 6920 4828 S 0.0 0.0 0:00.00 postgres
62 nominat+ 20 0 78824 25384 19972 S 0.0 0.0 0:00.03 setup.php
67 postgres 20 0 4410852 36408 32900 S 0.0 0.0 0:00.01 postgres
164 nominat+ 20 0 2612 608 540 S 0.0 0.0 0:00.00 sh
168 postgres 20 0 4410044 4.1g 4.1g S 0.0 4.8 62:42.30 postgres
171 postgres 20 0 4427500 1.9g 1.9g S 0.0 2.3 4:04.50 postgres
When I run the following code and kill it immediately (that is, make it exit abnormally), MongoDB's CPU usage goes extremely high (around 100%):
#-*- encoding:UTF-8 -*-
import threading
import time
import pymongo
single_conn = pymongo.Connection('localhost', 27017)
class SimpleExampleThread(threading.Thread):
    def run(self):
        print single_conn['scrapy'].zhaodll.count(), self.getName()
        time.sleep(20)

for i in range(100):
    SimpleExampleThread().start()
VIRT RES SHR S %CPU %MEM TIME+ COMMAND
696m 35m 6404 S 1181.7 0.1 391:45.31 mongod
My MongoDB version is 2.2.3. While MongoDB was working well, I ran "strace -c -p <pid>" for one minute, which gave the following output:
% time seconds usecs/call calls errors syscall
------ ----------- ----------- --------- --------- ----------------
33.50 0.322951 173 1867 nanosleep
33.19 0.319950 730 438 recvfrom
21.16 0.203969 16 12440 select
12.13 0.116983 19497 6 restart_syscall
0.02 0.000170 2 73 write
0.00 0.000016 0 146 sendto
0.00 0.000007 0 73 lseek
0.00 0.000000 0 2 read
0.00 0.000000 0 3 open
0.00 0.000000 0 3 close
0.00 0.000000 0 2 fstat
0.00 0.000000 0 87 mmap
0.00 0.000000 0 2 munmap
0.00 0.000000 0 1 pwrite
0.00 0.000000 0 3 msync
0.00 0.000000 0 29 mincore
0.00 0.000000 0 73 fdatasync
------ ----------- ----------- --------- --------- ----------------
100.00 0.964046 15248 total
When MongoDB's CPU usage went very high (around 100%), the same command gave the following output:
% time seconds usecs/call calls errors syscall
------ ----------- ----------- --------- --------- ----------------
29.12 5.064230 3088 1640 nanosleep
28.83 5.013239 27851 180 recvfrom
22.72 3.950399 658400 6 restart_syscall
19.30 3.356491 327 10268 select
0.02 0.004026 67 60 sendto
0.01 0.001000 333 3 msync
0.00 0.000269 9 30 write
0.00 0.000125 4 30 fdatasync
0.00 0.000031 10 3 open
0.00 0.000000 0 2 read
0.00 0.000000 0 3 close
0.00 0.000000 0 2 fstat
0.00 0.000000 0 30 lseek
0.00 0.000000 0 57 mmap
0.00 0.000000 0 2 munmap
0.00 0.000000 0 1 pwrite
0.00 0.000000 0 14 mincore
------ ----------- ----------- --------- --------- ----------------
100.00 17.389810 12331 total
Also, if I run "lsof", there are many sockets with the description "can't identify protocol". I don't know what is going wrong. Is this a bug in MongoDB?
Thanks!
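The two strace -c summaries can be compared mechanically: per-call latency is the column that explodes (nanosleep 173 → 3088 µs/call, recvfrom 730 → 27851 µs/call). A hypothetical parser for this output format makes the comparison explicit:

```python
def parse_strace_summary(text):
    """Map syscall name -> usecs/call from `strace -c` data rows.

    Data rows look like: %time, seconds, usecs/call, calls, [errors,] syscall.
    Header, ruler, and 'total' rows are skipped.
    """
    out = {}
    for line in text.splitlines():
        parts = line.split()
        if len(parts) >= 5 and parts[0].replace(".", "").isdigit():
            out[parts[-1]] = int(parts[2])
    return out

before = parse_strace_summary(
    "33.50 0.322951 173 1867 nanosleep\n33.19 0.319950 730 438 recvfrom")
after = parse_strace_summary(
    "29.12 5.064230 3088 1640 nanosleep\n28.83 5.013239 27851 180 recvfrom")
for name in before:
    # per-call slowdown factor between the healthy and degraded runs
    print(name, round(after[name] / before[name], 1))
```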
What's the best method for benchmarking the performance of my various templates when using Template::Toolkit?
I want something that will break down how much cpu/system time is spent processing each block or template file, exclusive of the time spent processing other templates within. Devel::DProf, for example, is useless for this, since it simply tells me how much time is spent in the various internal methods of the Template module.
It turns out that Googling for template::toolkit profiling yields the best result: an article from November 2005 by Randal Schwartz. I can't copy and paste any of the article here due to copyright, but suffice it to say that you simply grab his source and load it as a module after Template, like so:
use Template;
use My::Template::Context;
And you'll get output like this to STDERR when your script runs:
-- info.html at Thu Nov 13 09:33:26 2008:
cnt clk user sys cuser csys template
1 0 0.06 0.00 0.00 0.00 actions.html
1 0 0.00 0.00 0.00 0.00 banner.html
1 0 0.00 0.00 0.00 0.00 common_javascript.html
1 0 0.01 0.00 0.00 0.00 datetime.html
1 0 0.01 0.00 0.00 0.00 diag.html
3 0 0.02 0.00 0.00 0.00 field_table
1 0 0.00 0.00 0.00 0.00 header.html
1 0 0.01 0.00 0.00 0.00 info.html
1 0 0.01 0.01 0.00 0.00 my_checklists.html
1 0 0.00 0.00 0.00 0.00 my_javascript.html
1 0 0.00 0.00 0.00 0.00 qualifier.html
52 0 0.30 0.00 0.00 0.00 referral_options
1 0 0.01 0.00 0.00 0.00 relationship_block
1 0 0.00 0.00 0.00 0.00 set_bgcolor.html
1 0 0.00 0.00 0.00 0.00 shared_javascript.html
2 0 0.00 0.00 0.00 0.00 table_block
1 0 0.03 0.00 0.00 0.00 ticket.html
1 0 0.08 0.00 0.00 0.00 ticket_actions.html
-- end
Note that blocks as well as separate files are listed.
This is, IMHO, much more useful than the CPAN module Template::Timer.
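Output in this format is also easy to post-process. As an illustration, a hypothetical Python sketch that picks out the templates with the most user CPU time from a report like the one above:

```python
def top_templates(report, n=3):
    """Return the n (user_cpu, template) pairs with the most user time.

    Expects rows in the 'cnt clk user sys cuser csys template' format
    shown above; the header row is skipped.
    """
    rows = []
    for line in report.splitlines():
        parts = line.split()
        if len(parts) == 7 and parts[0].isdigit():
            rows.append((float(parts[2]), parts[6]))
    return sorted(rows, reverse=True)[:n]

report = """cnt clk user sys cuser csys template
52 0 0.30 0.00 0.00 0.00 referral_options
1 0 0.08 0.00 0.00 0.00 ticket_actions.html
1 0 0.06 0.00 0.00 0.00 actions.html"""
print(top_templates(report))
```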