How to read profile results of PostgreSQL JIT

According to the PostgreSQL docs, turning jit_profiling_support on makes PostgreSQL generate the data needed to profile JIT-compiled functions with perf.
https://www.postgresql.org/docs/12/runtime-config-developer.html
If LLVM has the required functionality, emit the data needed to allow perf to profile functions generated by JIT. This writes out files to $HOME/.debug/jit/;
Now I set jit_profiling_support to on and execute some queries.
testdb=# show jit_profiling_support;
jit_profiling_support
-----------------------
on
(1 row)
Sure enough, it generates a file under $HOME/.debug/jit/.
/home/postgres/.debug/jit/llvm-IR-jit-20191216-a64065:
total 12
drwx------ 2 postgres postgres 4096 Dec 16 13:47 .
drwx------ 12 postgres postgres 4096 Dec 17 14:08 ..
-rw------- 1 postgres postgres 2030 Dec 16 13:47 jit-3880.dump
However, I can't read this dump file. I tried to read it with perf, but that failed (see below).
$ perf report -v -i /home/postgres/.debug/jit/llvm-IR-jit-20191216-a64065/jit-3880.dump
magic/endian check failed
incompatible file format (rerun with -v to learn more)
How can I read this file?

You need to run "perf inject" so that the JIT symbols show up in "perf report".
perf inject -j -i perf.data -o perf.jitted.data
perf report -i perf.jitted.data
This should work.
More details: PostgreSQL uses the LLVM compiler suite for JIT. LLVM generates code at runtime and also dumps symbol information into binary files ending in *.dump under the JITDUMPDIR directory. The perf tool consumes these symbols during the inject step to display correct profiling info.
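For reference, an end-to-end session might look like the sketch below; the backend PID and file names are placeholders, and -k 1 makes perf record use the monotonic clock that the jitdump timestamps are based on:
# record the PostgreSQL backend that runs the JIT-compiled query
perf record -k 1 -o perf.data -p 3880
# run the query in that backend, stop perf record with Ctrl-C,
# then inject the JIT symbols and open the report
perf inject -j -i perf.data -o perf.jitted.data
perf report -i perf.jitted.data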

Related

Multiple entries for synchronous_standby_names

I am trying to achieve synchronous streaming to a Barman server, and I need to add an entry to postgresql.conf for this parameter, which already has an entry. I tried a few variations, but none of them work. Any ideas? I also tried '&&', but in vain.
synchronous_standby_names='ANY 1 (*)',barman-wal-archive
2022-06-10 16:50:54.272 BST [11241-43] # app= LOG: syntax error in
file "/var/lib/pgsql/13/data/postgresql.conf" line 22, near token ","
2022-06-10 16:50:54.272 BST [11241-44] # app= LOG: configuration file
"/var/lib/pgsql/13/data/postgresql.conf" contains errors; no changes
were applied
The syntax you are using is not valid, and there is no way to specify that Barman must be kept synchronous in addition to any one of the other standbys. The best you can do is
synchronous_standby_names = 'FIRST 2 ("barman-wal-archive", standby1, standby2, standby3)'
(You have to double quote all names that are not standard SQL identifiers, for example if they contain -.)
Then PostgreSQL will always keep Barman synchronized, as well as the first available standby server. But transactions won't fail if Barman is not available, which seems to be what you want.
Keep just
synchronous_standby_names='ANY 1 (*)'
and set
synchronous_commit = on
or
synchronous_commit = remote_write
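After editing postgresql.conf, you can reload the configuration and check the result from psql, for example:
SELECT pg_reload_conf();
SHOW synchronous_standby_names;
SHOW synchronous_commit;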

Cannot create a backup of a Firebird database because of errors

When backing up a Firebird database (gbak -g -ig), I get the following error:
gbak: writing data for table ORDERS
gbak: ERROR:message length error (encountered 532, expected 528)
gbak: ERROR:gds_$receive failed
gbak:Exiting before completion due to errors
When I run gfix with different parameters (-v -full, -mend, -ignore), I get the message:
Summary of validation errors
Number of index page errors : 540
In the firebird.log file I see these lines:
PC (Server) Thu Sep 20 08:37:01 2018
Database: E:\...GDB
Index 2 is corrupt on page 134706 level 1. File: ..\..\..\src\jrd\validation.cpp, line: 1699
in table COMPONENTS (197)
However, the database itself works without problems.
Please help me to fix the error and make a backup.
(I need the backup to migrate to a 64-bit server.)

Filtering OpenStreetMap data for PostGIS

I am creating a PostGIS database and want to use filtered OpenStreetMap data.
For this I have tried the following process:
Downloaded the planet.osm.bz2 file from https://planet.osm.org/
Unpacked to *.osm using bzip2
Filtered the file using osmfilter through the command prompt
Uploaded the filtered *.osm file to my database using osm2pgsql in command prompt
For my first attempt I filtered for land area only.
However, in step 4 using osm2pgsql, I receive the following error in the command prompt: "Osm2pgsql failed due to ERROR: XML parsing error at line 3137102, column 61: not well-formed (invalid token)"
As shown from the command prompt on my windows computer:
Z:\OpenStreetMap>osm2pgsql -U postgres -W -m -d osm -p filteredland -S "C:\Program Files (x86)\HOTOSM\share\default.style" filteredland2.osm
osm2pgsql version 0.92.0 (64 bit id space)
Password:
Using built-in tag processing pipeline
Using projection SRS 3857 (Spherical Mercator)
Setting up table: filteredland_point
Setting up table: filteredland_line
Setting up table: filteredland_polygon
Setting up table: filteredland_roads
Allocating memory for sparse node cache
Node-cache: cache=800MB, maxblocks=12800*65536, allocation method=1
Mid: Ram, scale=100
Reading in file: filteredland2.osm
Using XML parser.
Processing: Node(1230k 61.5k/s) Way(0k 0.00k/s) Relation(0 0.00/s)node cache: stored: 1233078(100.00%), storage efficiency: 50.00% (dense blocks: 0, sparse nodes: 1233078), hit rate: -nan(ind)%
Osm2pgsql failed due to ERROR: XML parsing error at line 3137102, column 61: not well-formed (invalid token)
I have also attempted two alternate routes, which also failed:
Downloading the planet.pbf -> Converting to .o5m using osmconvert -> Filtering using osmfilter
Downloading the planet.pbf -> Converting to .osm using osmconvert -> Filtering using osmfilter (gave warnings) -> Using osm2pgsql to transfer to database
Does anyone know how to avoid this error, or have experience with filtering the planet.osm file and uploading it to PostGIS?
I suggest using Osmium instead of osmfilter; it doesn't require converting the planet to a different format first and can natively output PBF data, which osm2pgsql can process directly. It's faster, too.
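As a rough sketch (the tag filter and file names are only illustrative), the Osmium-based pipeline could look like this:
# filter the planet directly in PBF format with osmium-tool
osmium tags-filter planet-latest.osm.pbf nwr/natural=coastline -o filteredland.osm.pbf
# load the filtered PBF straight into PostGIS; osm2pgsql reads PBF natively
osm2pgsql -U postgres -W -d osm -p filteredland -S default.style filteredland.osm.pbf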

MongoDB: mongoimport loses connection when importing big files

I have some trouble importing a JSON file to a local MongoDB instance. The JSON was generated using mongoexport and looks like this. No arrays, no hardcore nesting:
{"_created":{"$date":"2015-10-20T12:46:25.000Z"},"_etag":"7fab35685eea8d8097656092961d3a9cfe46ffbc","_id":{"$oid":"562637a14e0c9836e0821a5e"},"_updated":{"$date":"2015-10-20T12:46:25.000Z"},"body":"base64 encoded string","sender":"mail#mail.com","type":"answer"}
{"_created":{"$date":"2015-10-20T12:46:25.000Z"},"_etag":"7fab35685eea8d8097656092961d3a9cfe46ffbc","_id":{"$oid":"562637a14e0c9836e0821a5e"},"_updated":{"$date":"2015-10-20T12:46:25.000Z"},"body":"base64 encoded string","sender":"mail#mail.com","type":"answer"}
If I import a 9MB file with ~300 rows, there is no problem:
[stekhn latest]$ mongoimport -d mietscraping -c mails mails-small.json
2015-11-02T10:03:11.353+0100 connected to: localhost
2015-11-02T10:03:11.372+0100 imported 240 documents
But if I try to import a 32MB file with ~1300 rows, the import fails:
[stekhn latest]$ mongoimport -d mietscraping -c mails mails.json
2015-11-02T10:05:25.228+0100 connected to: localhost
2015-11-02T10:05:25.735+0100 error inserting documents: lost connection to server
2015-11-02T10:05:25.735+0100 Failed: lost connection to server
2015-11-02T10:05:25.735+0100 imported 0 documents
Here is the log:
2015-11-02T11:53:04.146+0100 I NETWORK [initandlisten] connection accepted from 127.0.0.1:45237 #21 (6 connections now open)
2015-11-02T11:53:04.532+0100 I - [conn21] Assertion: 10334:BSONObj size: 23592351 (0x167FD9F) is invalid. Size must be between 0 and 16793600(16MB) First element: insert: "mails"
2015-11-02T11:53:04.536+0100 I NETWORK [conn21] AssertionException handling request, closing client connection: 10334 BSONObj size: 23592351 (0x167FD9F) is invalid. Size must be between 0 and 16793600(16MB) First element: insert: "mails"
I've heard about the 16MB limit for BSON documents before, but since no row in my JSON file is bigger than 16MB, this shouldn't be a problem, right? When I do the exact same (32MB) import on my local computer, everything works fine.
Any ideas what could cause this weird behaviour?
I guess the problem is about performance; anyway, here are some ways you can solve it:
You can use the mongoimport option -j. Try incrementing it if it does not work with 4, i.e. 4, 8, 16, depending on the number of cores you have in your CPU.
mongoimport --help
-j, --numInsertionWorkers= number of insert operations to run
concurrently (defaults to 1)
mongoimport -d mietscraping -c mails -j 4 < mails.json
Or you can split the file and import all the files.
I hope this helps you.
Looking a little more, this is a bug in some versions:
https://jira.mongodb.org/browse/TOOLS-939
Here is another solution: you can change the batchSize, which defaults to 10000; reduce the value and test:
mongoimport -d mietscraping -c mails < mails.json --batchSize 1
Quite old, but I struggled with the same issue.
If you want to import big files, especially remotely with Compass or from a program, just add
&wtimeoutMS=0
to your connection string. This removes the timeout on write operations.
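For example, a connection string with the parameter appended might look like this (host, credentials and database name are placeholders):
mongodb://user:password@remote-host:27017/mietscraping?w=majority&wtimeoutMS=0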

Mongodump and mongorestore; field not found

I'm trying to dump a database from another server (this works fine), then restore it on a new server (this does not work fine).
I first run:
mongodump --host -d
This creates a folder dump/db which contains all of the bson documents.
Then in the dump folder, I'm running:
mongorestore -d dbname db
This works and iterates through the files, but I get this error on dbname.system.users
Wed May 23 02:08:05 { key: { _id: 1 }, ns: "dbname.system.users", name: "_id_" }
Error creating index dbname.system.usersassertion: 13111 field not found, expected type 16
Any ideas how to resolve this?
If they really are different versions, use the --noIndexRestore option, and create all indexes after that.
Any chance the source and destination are different versions?
In any case, to get around this, restore the collections individually using the -c flag to the target DB and then build the indexes afterward. The system collection is the one used for indexes, so it is fairly easy to recreate - try it last once everything else has been restored, and if it still fails you can always just recreate the relevant indexes.
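Following the --noIndexRestore suggestion above, the restore-then-reindex sequence could look roughly like this (database, collection and field names are placeholders):
# restore the data but skip index creation
mongorestore --noIndexRestore -d dbname db
# then rebuild the indexes from the mongo shell, e.g.
# > db.mycollection.ensureIndex({ someField: 1 })   // createIndex() in newer shells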
The issue could also be caused by this bug in older versions of Mongo (in my case it was 2.0.8):
https://jira.mongodb.org/browse/SERVER-7181
Basically, you get 13111 field not found, expected type 16 error when it should actually be prompting you to enter your authentication details.
An example of how I fixed it:
root@precise64:/# mongorestore /backups/demand/ondemand.05-24-2013T114223/
connected to: 127.0.0.1
[REDACTED]
Fri May 24 11:48:15 going into namespace [test.system.indexes]
Fri May 24 11:48:15 { key: { _id: 1 }, ns: "test.system.users", name: "_id_" }
Error creating index test.system.usersassertion: 13111 field not found, expected type 16
# Error when not giving username and password
root@precise64:/# mongorestore -u fakeuser -p fakepassword /backups/demand/ondemand.05-24-2013T114223/
connected to: 127.0.0.1
[REDACTED]
Fri May 24 11:57:11 /backups/demand/ondemand.05-24-2013T114223/test/system.users.bson
Fri May 24 11:57:11 going into namespace [test.system.users]
1 objects found
# Works fine when giving username and password! :)
Hope that helps anyone whose issue doesn't get fixed by the previous 2 replies!
This can also happen if you are trying to mongorestore into MongoDB 2.6+ and the dump you are trying to restore contains a system.users collection in any database other than admin. In MongoDB 2.2 and 2.4 the system.users collections could occur in any database. The auth schema migration associated with MongoDB 2.6 moved all users into the system.users collection in the admin database, but left behind the system.users collections in the other databases (MongoDB 2.6 just ignores these). This seems to cause this assertion when importing into MongoDB 2.6.
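If that is the cause, one workaround is to delete the leftover system.users dumps from the non-admin database directories before restoring; a minimal sketch with illustrative paths (the .metadata.json file may or may not exist depending on the mongodump version):
rm dump/dbname/system.users.bson
rm dump/dbname/system.users.metadata.json
mongorestore -d dbname dump/dbname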