Postgres import issues - huge table shows 0 rows

I have an issue with a database import. Basically, I am doing this on a postgres 9.6 (production):
/usr/bin/pg_dump mydb | /bin/gzip | /usr/bin/ssh root@1.2.3.4 "cat > /root/20210130.sql.gz"
And on a remote machine I am importing on Postgres 11 like this:
step 1: import schema from a beta machine with Postgres 11
step 2: import data from that export like this: time zcat 20210130.sql.gz | psql mydb
The issue that I have is that one of the tables has 0 rows even though it uses a lot of disk space.
In the original db:
table_schema | table_name | row_estimate | total | index | toast | table
--------------------+--------------------------+--------------+------------+------------+------------+------------
public | test | 5.2443e+06 | 18 GB | 13 GB | 8192 bytes | 4864 MB
In the new db:
table_schema | table_name | row_estimate | total | index | toast | table
--------------------+--------------------------+--------------+------------+------------+------------+------------
public | test | 0 | 4574 MB | 4574 MB | 8192 bytes | 16 kB
What is going on here? How can I fix it?
I can't import the entire DB again because it needed ~7 hours to do the import.
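Since a full 7-hour re-import is not an option, one possibility is to copy just the affected table. A minimal sketch, reusing the hosts and database name from the question and assuming the broken table is public.test and can be truncated on the new server first:

# On the production server (Postgres 9.6): dump only the "test" table, data only,
# and stream it to the new server the same way as the original full dump.
/usr/bin/pg_dump --table=public.test --data-only mydb \
  | /bin/gzip \
  | /usr/bin/ssh root@1.2.3.4 "cat > /root/test_only.sql.gz"

# On the new server (Postgres 11): clear the empty table, then load only that dump.
psql mydb -c "TRUNCATE TABLE public.test;"
time zcat /root/test_only.sql.gz | psql mydb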

Related

pg_wal directory is very big

I have a Postgres cluster with 3 nodes: ETCD+Patroni+Postgres13.
Recently the pg_wal folder started growing constantly; it now contains 5127 files. After searching the internet, I found an article advising me to pay attention to the following database parameters (their values at the time were as follows):
archive_mode off;
wal_level replica;
max_wal_size 1G;
postgres=# SELECT * FROM pg_replication_slots;
-[ RECORD 1 ]-------+------------
slot_name | db2
plugin |
slot_type | physical
datoid |
database |
temporary | f
active | t
active_pid | 2247228
xmin |
catalog_xmin |
restart_lsn | 2D/D0ADC308
confirmed_flush_lsn |
wal_status | reserved
safe_wal_size |
-[ RECORD 2 ]-------+------------
slot_name | db1
plugin |
slot_type | physical
datoid |
database |
temporary | f
active | t
active_pid | 2247227
xmin |
catalog_xmin |
restart_lsn | 2D/D0ADC308
confirmed_flush_lsn |
wal_status | reserved
safe_wal_size |
All other functionality of the Patroni cluster works (switchover, reinit, replication).
root@srvdb3:~# patronictl -c /etc/patroni/patroni.yml list
+ Cluster: mobile (7173650272103321745) --+----+-----------+
| Member | Host | Role | State | TL | Lag in MB |
+--------+------------+---------+---------+----+-----------+
| db1 | 10.01.1.01 | Replica | running | 17 | 0 |
| db2 | 10.01.1.02 | Replica | running | 17 | 0 |
| db3 | 10.01.1.03 | Leader | running | 17 | |
+--------+------------+---------+---------+----+-----------+
Patroni configuration (patronictl edit-config):
loop_wait: 10
maximum_lag_on_failover: 1048576
postgresql:
  parameters:
    checkpoint_timeout: 30
    hot_standby: 'on'
    max_connections: '1100'
    max_replication_slots: 5
    max_wal_senders: 5
    shared_buffers: 2048MB
    wal_keep_segments: 5120
    wal_level: replica
  use_pg_rewind: true
  use_slots: true
retry_timeout: 10
ttl: 100
Please help, what could be the problem?
This is what I see in pg_stat_archiver:
postgres=# select * from pg_stat_archiver;
-[ RECORD 1 ]------+------------------------------
archived_count | 0
last_archived_wal |
last_archived_time |
failed_count | 0
last_failed_wal |
last_failed_time |
stats_reset | 2023-01-06 10:21:45.615312+00
With wal_keep_segments set to 5120, it is completely normal to have 5127 WAL segments in pg_wal, because PostgreSQL will always retain at least 5120 old WAL segments. If that is too many for you, reduce the parameter. Since you are using replication slots, the only disadvantage of reducing it is that you might only be able to use pg_rewind shortly after a failover.
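A minimal sketch of how the parameter could be lowered in this Patroni setup, using the config path from the question; the target value and the data directory path are only illustrative assumptions, and note that PostgreSQL 13 exposes this setting as wal_keep_size (in megabytes) rather than wal_keep_segments:

# See how much space pg_wal currently occupies on the leader
# (the data directory path below is an assumption; adjust to your layout).
du -sh /var/lib/postgresql/13/main/pg_wal

# Lower the retention through Patroni's dynamic configuration: edit-config opens
# the cluster config in $EDITOR; change wal_keep_segments (e.g. 5120 -> 512) and save.
patronictl -c /etc/patroni/patroni.yml edit-config

# Verify what the server is actually running with (on PostgreSQL 13 the
# equivalent setting is wal_keep_size, measured in MB).
psql -U postgres -c "SHOW wal_keep_size;"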

PostgreSQL insert performance - why would it be so slow?

I've got a PostgreSQL database running inside a docker container on an AWS Linux instance. I've got some telemetry running, uploading records in batches of ten. A Python server inserts these records into the database. The table looks like this:
postgres=# \d raw_journey_data ;
Table "public.raw_journey_data"
Column | Type | Collation | Nullable | Default
--------+-----------------------------+-----------+----------+---------
email | character varying | | |
t | timestamp without time zone | | |
lat | numeric(20,18) | | |
lng | numeric(21,18) | | |
speed | numeric(21,18) | | |
There aren't that many rows in the table; about 36,000 presently. But committing the transactions that insert the data is taking about a minute each time:
postgres=# SELECT pid, age(clock_timestamp(), query_start), usename, query
FROM pg_stat_activity
WHERE query != '<IDLE>' AND query NOT ILIKE '%pg_stat_activity%'
ORDER BY query_start desc;
pid | age | usename | query
-----+-----------------+----------+--------
30 | | |
32 | | postgres |
28 | | |
27 | | |
29 | | |
37 | 00:00:11.439313 | postgres | COMMIT
36 | 00:00:11.439565 | postgres | COMMIT
39 | 00:00:36.454011 | postgres | COMMIT
56 | 00:00:36.457828 | postgres | COMMIT
61 | 00:00:56.474446 | postgres | COMMIT
35 | 00:00:56.474647 | postgres | COMMIT
(11 rows)
The load average on the system's CPUs is zero and about half of the 4GB system RAM is available (as shown by free). So what causes the super-slow commits here?
The insertion is being done with SQLAlchemy:
db.session.execute(import_table.insert([
    {
        "email": current_user.email,
        "t": row.t.ToDatetime(),
        "lat": row.lat,
        "lng": row.lng,
        "speed": row.speed,
    }
    for row in data.data
]))
Edit: updated with the state column included:
postgres=# SELECT pid, age(clock_timestamp(), query_start), usename, state, query
FROM pg_stat_activity
WHERE query NOT ILIKE '%pg_stat_activity%'
ORDER BY query_start desc;
pid | age | usename | state | query
-----+-----------------+----------+-------+--------
32 | | postgres | |
30 | | | |
28 | | | |
27 | | | |
29 | | | |
46 | 00:00:08.390177 | postgres | idle | COMMIT
49 | 00:00:08.390348 | postgres | idle | COMMIT
45 | 00:00:23.35249 | postgres | idle | COMMIT
(8 rows)
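If the commits really are what is slow, the usual suspect is flush (fsync) latency on the volume backing the data directory rather than the INSERT itself. A minimal sketch of checks that could narrow this down, assuming the container is named pg, that PGDATA is the default /var/lib/postgresql/data, and that the standard PostgreSQL server tooling is available inside the image (all of these are assumptions):

# Measure raw fsync throughput on the volume PostgreSQL writes to.
docker exec -it pg pg_test_fsync -f /var/lib/postgresql/data/fsync_test

# Check the current durability settings that govern how COMMIT waits on the WAL flush.
docker exec -it pg psql -U postgres -c "SHOW synchronous_commit;"
docker exec -it pg psql -U postgres -c "SHOW fsync;"

If pg_test_fsync reports only a handful of syncs per second, the slow commits come from the storage layer rather than from PostgreSQL or SQLAlchemy.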

Unable to create new database

When I create a new MySQL db, SlashDB's test connection fails.
Here is how I log into mysql:
$ mysql -u 7stud -p
Enter password:
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 15
Server version: 5.5.5-10.4.13-MariaDB Homebrew
Copyright (c) 2000, 2015, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql> show databases;
+--------------------+
| Database |
+--------------------+
| chat |
| ectoing_repo |
| ejabberd |
| information_schema |
| mydb |
| mysql |
| performance_schema |
| test |
+--------------------+
8 rows in set (0.00 sec)
mysql> use mydb;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A
Database changed
mysql> show tables;
+----------------+
| Tables_in_mydb |
+----------------+
| cheetos |
| greetings |
| mody |
| people |
+----------------+
4 rows in set (0.00 sec)
mysql> select * from people;
+----+--------+------+
| id | name | info |
+----+--------+------+
| 1 | 7stud | abc |
| 2 | Beth | xxx |
| 3 | Diane | xyz |
| 4 | Kathy | xyz |
| 5 | Kathy | xyz |
| 6 | Dave | efg |
| 7 | Tom | zzz |
| 8 | David | abc |
| 9 | Eloise | abc |
| 10 | Jess | xyz |
| 11 | Jeffsy | 2.0 |
| 12 | XXX | xxx |
| 13 | XXX | xxx |
+----+--------+------+
13 rows in set (0.00 sec)
In the slashdb form for creating a new database, here is the info I entered:
Hostname: 127.0.0.1
Port: 80
Database Login: 7stud
Database Password: **
Database Name: mydb
Then I hit the "Test Connection" button, whereupon I get a spinning wheel, which disappears after a few minutes, but no "Connection Successful" message. What am I doing wrong?
Now, I'm using port 3306:
mysql> SHOW GLOBAL VARIABLES LIKE 'PORT';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| port | 3306 |
+---------------+-------+
1 row in set (0.00 sec)
but when slashdb tries to connect, I get the error:
Host localhost:3306 is not accessible
Your port is wrong in the database connection. You said that your MySQL is configured on port 3306, but you also posted your SlashDB config for the database with port 80. Please change that to 3306.
Also, check whether you need to enable remote access to MySQL. Even if SlashDB is running on the same machine as the MySQL database, it uses TCP/IP to connect.
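A quick way to confirm the database is reachable the same way SlashDB will reach it is to force the mysql client to use TCP/IP rather than the local socket. A minimal sketch, reusing the host, port, and user from the question:

# Force a TCP connection (for localhost the mysql client otherwise uses the Unix socket).
mysql --protocol=TCP -h 127.0.0.1 -P 3306 -u 7stud -p -e "SELECT 1;"

# If that fails, check that MySQL/MariaDB is actually listening on TCP port 3306.
netstat -an | grep 3306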

Enabling Parallelization in Spark with Partition Pushdown in MemSQL

I have a columnstore table in MemSQL that has a schema similar to the one below:
CREATE TABLE key_metrics (
    source_id TEXT,
    date TEXT,
    metric1 FLOAT,
    metric2 FLOAT,
    …
    SHARD KEY (source_id, date) USING CLUSTERED COLUMNSTORE
);
I have a Spark application (running with Spark Job Server) that queries the MemSQL table. Below is a simplified form of the kind of Dataframe operation I am doing (in Scala):
sparkSession
  .read
  .format("com.memsql.spark.connector")
  .options(Map("path" -> "dbName.key_metrics"))
  .load()
  .filter(col("source_id").equalTo("12345678"))
  .filter(col("date").isin(Seq("2019-02-01", "2019-02-02", "2019-02-03"): _*))
I have confirmed by looking at the physical plan that these filter predicates are being pushed down to MemSQL.
I have also checked that there is a pretty even distribution of the partitions in the table:
+---------------+-------------+--------------+--------+------------+
| DATABASE_NAME | TABLE_NAME  | PARTITION_ID | ROWS   | MEMORY_USE |
+---------------+-------------+--------------+--------+------------+
| dbName        | key_metrics | 0            | 784012 | 0          |
| dbName        | key_metrics | 1            | 778441 | 0          |
| dbName        | key_metrics | 2            | 671606 | 0          |
| dbName        | key_metrics | 3            | 748569 | 0          |
| dbName        | key_metrics | 4            | 622241 | 0          |
| dbName        | key_metrics | 5            | 739029 | 0          |
| dbName        | key_metrics | 6            | 955205 | 0          |
| dbName        | key_metrics | 7            | 751677 | 0          |
+---------------+-------------+--------------+--------+------------+
My question is regarding partition pushdown. It is my understanding that with it, we can use all the cores of the machines and leverage parallelism for bulk loading. According to the docs, this is done by creating as many Spark tasks as there are MemSQL database partitions.
However when running the Spark pipeline and observing the Spark UI, it seems that there is only one Spark task that is created which makes a single query to the DB that runs on a single core.
I have made sure that the following properties are set as well:
spark.memsql.disablePartitionPushdown = false
spark.memsql.defaultDatabase = "dbName"
Is my understanding of partition pushdown incorrect? Is there some other configuration that I am missing?
Would appreciate your input on this.
Thanks!
SingleStore credentials have to be the same on all nodes to take advantage of partition pushdown.
If the credentials already match across all nodes, try installing the latest version of the Spark connector, because this often comes down to compatibility issues between the Spark connector and SingleStore.
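A quick way to check the first point is to connect to every aggregator and leaf directly with the same credentials the Spark job uses; MemSQL/SingleStore speaks the MySQL wire protocol, so the plain mysql client works for this. The host names, user, and password below are hypothetical placeholders:

# Repeat for every aggregator and leaf host in the cluster; all of them must
# accept the exact user/password the Spark connector is configured with,
# otherwise the connector falls back to a single query through the aggregator.
for host in memsql-master memsql-leaf-1 memsql-leaf-2; do   # hypothetical host names
  mysql -h "$host" -P 3306 -u spark_user -p'secret' -e "SELECT @@hostname;"
done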

How to get database size accurately in postgresql?

We are running PostgreSQL 9.1. We previously had over 1 billion rows in one table, and those rows have since been deleted. However, the \l+ command still reports the database size inaccurately (it reports 568 GB, but in reality it is much, much smaller).
The proof that 568 GB is wrong is that the individual table sizes don't add up to that number: as you can see, the top 20 relations total 4292 MB, and the remaining 985 relations are all well below 10 MB each. In fact, they all add up to less than about 6 GB.
Any idea why PostgreSQL has so much bloat? If that is confirmed, how can I remove it? I am not very familiar with VACUUM; is that what I need to do? If so, how?
Much appreciated.
pmlex=# \l+
List of databases
Name | Owner | Encoding | Collate | Ctype | Access privileges | Size | Tablespace | Description
-----------------+----------+----------+-------------+-------------+-----------------------+---------+------------+--------------------------------------------
pmlex | pmlex | UTF8 | en_US.UTF-8 | en_US.UTF-8 | | 568 GB | pg_default |
pmlex_analytics | pmlex | UTF8 | en_US.UTF-8 | en_US.UTF-8 | | 433 MB | pg_default |
postgres | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 | | 5945 kB | pg_default | default administrative connection database
template0 | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 | =c/postgres +| 5841 kB | pg_default | unmodifiable empty database
| | | | | postgres=CTc/postgres | | |
template1 | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 | =c/postgres +| 5841 kB | pg_default | default template for new databases
| | | | | postgres=CTc/postgres | | |
(5 rows)
pmlex=# SELECT nspname || '.' || relname AS "relation",
pmlex-# pg_size_pretty(pg_relation_size(C.oid)) AS "size"
pmlex-# FROM pg_class C
pmlex-# LEFT JOIN pg_namespace N ON (N.oid = C.relnamespace)
pmlex-# WHERE nspname NOT IN ('pg_catalog', 'information_schema')
pmlex-# ORDER BY pg_relation_size(C.oid) DESC;
relation | size
-------------------------------------+---------
public.page_page | 1289 MB
public.page_pageimagehistory | 570 MB
pg_toast.pg_toast_158103 | 273 MB
public.celery_taskmeta_task_id_key | 233 MB
public.page_page_unique_hash_uniq | 140 MB
public.page_page_ad_text_id | 136 MB
public.page_page_kn_result_id | 125 MB
public.page_page_seo_term_id | 124 MB
public.page_page_kn_search_id | 124 MB
public.page_page_direct_network_tag | 124 MB
public.page_page_traffic_source_id | 123 MB
public.page_page_active | 123 MB
public.page_page_is_referrer | 123 MB
public.page_page_category_id | 123 MB
public.page_page_host_id | 123 MB
public.page_page_serp_id | 121 MB
public.page_page_domain_id | 120 MB
public.celery_taskmeta_pkey | 106 MB
public.page_pagerenderhistory | 102 MB
public.page_page_campaign_id | 89 MB
...
...
...
pg_toast.pg_toast_4354379 | 0 bytes
(1005 rows)
Your options include:
1). Ensuring autovacuum is enabled and set aggressively.
2). Recreating the table as I mentioned in an earlier comment (create-table-as-select + truncate + reload the original table; see the sketch after this list).
3). Running CLUSTER on the table if you can afford to be locked out of that table (exclusive lock).
4). VACUUM FULL, though CLUSTER is more efficient and recommended.
5). Running a plain VACUUM ANALYZE a few times and leaving the table as-is, so that new data eventually fills the free space back up.
6). Dumping and reloading the table via pg_dump.
7). pg_repack (though I haven't used it in production).
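A minimal sketch of option 2, assuming the bloated relation is public.page_page (taken from the size listing above) and that writes to it can be paused while this runs:

# Rebuild the table's heap by copying the live rows out and back.
# TRUNCATE returns the bloated space to the operating system; indexes and
# constraints on page_page are preserved because the table itself is kept.
psql pmlex <<'SQL'
BEGIN;
CREATE TABLE page_page_copy AS SELECT * FROM page_page;
TRUNCATE page_page;
INSERT INTO page_page SELECT * FROM page_page_copy;
DROP TABLE page_page_copy;
COMMIT;
SQL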
It will likely look different if you use pg_total_relation_size instead of pg_relation_size. pg_relation_size doesn't give the total size of the table; see
https://www.postgresql.org/docs/9.5/static/functions-admin.html#FUNCTIONS-ADMIN-DBSIZE
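A minimal sketch of the same listing using pg_total_relation_size, which includes indexes and TOAST data and should come much closer to the 568 GB that \l+ reports:

psql pmlex <<'SQL'
SELECT n.nspname || '.' || c.relname AS relation,
       pg_size_pretty(pg_total_relation_size(c.oid)) AS total_size
FROM pg_class c
LEFT JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE n.nspname NOT IN ('pg_catalog', 'information_schema')
ORDER BY pg_total_relation_size(c.oid) DESC
LIMIT 20;
SQL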