We've moved to Google Cloud SQL, created a couple of databases, and imported lots of data. Along the way, a fairly large number of queries were interrupted here and there, which left behind some garbage in the form of temp files. Storage usage went far above 1 TB.
postgres=> SELECT datname, temp_files AS "Temporary files", pg_size_pretty(temp_bytes) AS "Size of temporary files" FROM pg_stat_database;
    datname    | Temporary files | Size of temporary files
---------------+-----------------+-------------------------
 cloudsqladmin |               0 | 0 bytes
 template0     |               0 | 0 bytes
 postgres      |               0 | 0 bytes
 template1     |               0 | 0 bytes
 first         |           33621 | 722 GB
 second        |               9 | 3399 MB
 third         |          293313 | 153 GB
(7 rows)
According to the results of the query above, we have ~1 TB of potentially useless files. That raises a couple of questions:
How do we identify temp files not used by any running query?
How do we remove them, given that Postgres is managed by Google Cloud SQL?
As per the PostgreSQL documentation, the field temp_bytes is defined as:
Total amount of data written to temporary files by queries in this
database. All temporary files are counted, regardless of why the
temporary file was created, and regardless of the log_temp_files
setting.
Meaning that the number is the sum of the temporary file sizes since the creation of the database (or since the last pg_stat_reset()), not the current temp file usage.
The current usage could be determined with the generic file access functions on a self-managed instance, but in Cloud SQL a normal user cannot execute SELECT pg_ls_dir('base/pgsql_tmp'), as that is reserved for superusers.
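For reference, on a self-managed instance where superuser (or, from PostgreSQL 12, the pg_monitor role) is available, something along these lines can list the temp files currently on disk; pg_ls_tmpdir() exists from PostgreSQL 12 onward, while older versions have to fall back to pg_ls_dir():
-- PostgreSQL 12+: current temp files of the default tablespace, with size and mtime
SELECT name, pg_size_pretty(size) AS pretty_size, modification
FROM pg_ls_tmpdir()
ORDER BY size DESC;
-- older versions (superuser only): just list the temp directory
SELECT pg_ls_dir('base/pgsql_tmp');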
As you said, Cloud SQL is a managed service, so at the moment there is no way to see the current temp file usage.
One thing that will definitely clear the number you see is pg_stat_reset(), though, as said before, that figure is a historical total, not current temp file usage.
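If you just want the counter to start from zero again (for example, to see how much new temp traffic a given workload produces), the statistics of the current database can be reset; note this clears all pg_stat_database counters, not only temp_files/temp_bytes:
-- reset statistics counters for the current database
SELECT pg_stat_reset();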
One thing guaranteed to clean out the temp files themselves is restarting the database instance, since part of the startup process is wiping the base/pgsql_tmp directory.
I have imported a local PostgreSQL database to a managed cluster on Digital Ocean. It will be used with a Python app that will also be hosted on Digital Ocean. I used pg_dump and pg_restore for the import. Now, to make sure the import was successful, I am running some psql queries and commands via my macOS terminal app, which is set up with zsh and connects via a shell script that prompts me for host, database name, port, user, and password. I can connect to the managed cluster this way, and I can execute some queries with no problem, while others cause errors. For example:
my_support=> \dt
              List of relations
 Schema |     Name      | Type  |  Owner
--------+---------------+-------+---------
 public | ages          | table | doadmin
 public | articles      | table | doadmin
 public | challenges    | table | doadmin
 public | cities        | table | doadmin
 public | comments      | table | doadmin
 public | messages      | table | doadmin
 public | relationships | table | doadmin
 public | topics        | table | doadmin
 public | users         | table | doadmin
(9 rows)
my_support=> \dt+
sh: more: command not found
my_support=>
Also:
my_support=> SELECT id,sender_id FROM messages;
id | sender_id
----+-----------
1 | 1
2 | 2
3 | 4
4 | 1
5 | 2
(5 rows)
my_support=> SELECT * FROM messages;
sh: more: command not found
my_support=>
So the terminal app seems to dislike certain characters, such as * and +, but I can't find any documentation that tells me I should escape them, or how. I tried putting a backslash in front of them, but it did not work. What's more confusing is that these very same queries succeed when I connect to my LOCAL copy of the database, using the very same terminal app, launched from the very same shell script.
In case it's helpful, here's what I see in the CLI when I connect:
psql (14.1, server 14.2)
SSL connection (protocol: TLSv1.3, cipher: <alphanumeric string here>, bits: 256, compression: off)
Type "help" for help.
my_support=>
Does it matter that my local PostgreSQL version is 14.1 and the server is 14.2? I'm assuming the "server" refers to the hosted environment, but it seems like something as basic as "SELECT * FROM" should not be version-dependent.
Ultimately what matters is whether my Python app (which uses the psycopg library to talk to PostgreSQL) can run those queries, and I haven't tested that yet. But it sure would be handy to test things on the managed cluster using my local terminal app.
BTW, I have an open ticket with Digital Ocean to answer this question, but I find SO to be faster and more helpful in most cases.
psql is trying to use a pager to display results that are longer than the number of lines in the terminal. The error message
more: command not found
indicates that the pager (more) it tries to use is not available. You can turn off the use of a pager:
\pset pager off
or set a different command to be used as the pager; see the psql manual for details.
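For example, either of the following works from within psql (assuming less is installed on the client machine); putting the line into ~/.psqlrc makes it permanent:
-- point psql at a pager that actually exists on the client
\setenv PAGER less
-- or simply keep the pager off for every session via ~/.psqlrc
\pset pager off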
Summary
I failed to import CSV files into a table on PostgreSQL.
Even though it says that the import completed successfully, no rows were created.
How did this happen, and how can I fix this? Thank you.
Details
1. The CSV file I (failed to) import looks like this (screenshot: 1. CSV file imported):
| number | ticket | category | question | answer | url | note |
|--------|------------|-----------|--------------------------|-----------------------|----------------|----------|
| 1 | #0000000 | Temp>123 | *confirming* | Would you...? | https:///....a | - |
| 2 | #1234567 | AAA / BBB | "a" vs "b" | If A, "a". If B, "b". | https:///....b | #0000000 |
| 3 | #1234567-2 | AAA>abc | Can we do sth using "a"? | Yes, blah blah blah. | https:///....b | - |
And this is the table on PostgreSQL:
number : numeric
ticket : char
category : char[]
question : char
answer : char
url : char
note : char
2. The message after the import
Even though it says that the import was "successfully completed",
when I hit "More details" on the import pop-up (3. Message - Completed), it shows:
--command " "\\copy public.test (\"number\", ticket, category, question, answer, url, note) FROM '/Users/alice/Desktop/test.csv' DELIMITER ',' CSV HEADER ENCODING 'UTF8' QUOTE '\"' ESCAPE '''';""
3. The message when I made sure that the file was actually imported
When I click "Count Rows", it says "Table rows counted: 0"
I tried the following query in the Query Tool for the table, and it shows no rows:
SELECT * FROM (table name)
For reference
The Postgres log was created, but only the header is in it.
Screenshots: 1. CSV file imported / 2. Import Preference / 3. Message - Completed / 4. No row created / 5. postgres_log
After changing the name of a column from "number" to "consecutive", the error message showed up in Query Tool (not in Import/Export)
Tried the Query Tool instead of Import/Export
--> the situation didn’t change
Changed the first column name from "number" to "consecutive" in both the CSV and the Postgres table
--> the situation didn’t change
Tried Query Tool
copy public.test (consecutive, ticket, category, question, answer, url, note) FROM '/Users/alice/Desktop/test5.csv' DELIMITER ',' CSV HEADER ENCODING 'UTF8' QUOTE '"' ESCAPE '''';
--> the situation didn’t change
Tried Query Tool
copy public.test (consecutive, ticket, category, question, answer, url, note) FROM '/Users/alice/Desktop/test5.csv' DELIMITER ',' CSV HEADER ENCODING 'UTF8'
--> got an error message
ERROR: could not open file "/Users/alice/Desktop/test5.csv" for reading: Permission denied
HINT: COPY FROM instructs the PostgreSQL server process to read a file. You may want a client-side facility such as psql's \copy.
SQL state: 42501
Check the column settings in the tab shown in the 2. Import Preference image, to the right of the Options tab; there you should set the column order to match your file.
Also check "More details" in 3. Message - Completed.
This is a file permission issue.
Open a shell terminal, go to the directory where the data file is stored, and run chmod +rx * and retry loading the data file into your DB.
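Alternatively, the HINT in the error message points at the usual fix when the file lives on the client machine: run the copy client-side with psql's \copy, so that psql reads the file instead of the server process. A sketch using the column list and path from the question (the whole \copy command must stay on a single line):
\copy public.test (consecutive, ticket, category, question, answer, url, note) FROM '/Users/alice/Desktop/test5.csv' WITH (FORMAT csv, HEADER true, ENCODING 'UTF8')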
I'm trying to find out how much stress PostgreSQL puts on disks, and the results are kind of discouraging so far. Please take a look at the methodology; apparently I'm missing something or calculating the numbers the wrong way.
Environment
PostgreSQL 9.6.0-1.pgdg16.04+1 is running inside a separate LXC container with Ubuntu 16.04.1 LTS (kernel version 4.4.0-38-generic, ext4 filesystem on top of SSD), has only one client connection from which I run tests.
I disabled autovacuum to prevent unnecessary writes.
Written bytes are calculated with the following command; I want the total number of bytes written by all PostgreSQL processes (including the WAL writer):
pgrep postgres | xargs -I {} cat /proc/{}/io | grep ^write_bytes | cut -d' ' -f2 | python -c "import sys; print sum(int(l) for l in sys.stdin)"
Tests
Database commands are marked with #, and the write_bytes sum taken after each command is marked with →. The test case is simple: a table with just one int4 column filled with 10,000,000 values.
Before every test I run a set of commands to free disk space and prevent additional writes:
# DELETE FROM test_inserts;
# VACUUM FULL test_inserts;
# DROP TABLE test_inserts;
Test #1: Unlogged table
As the documentation states, changes to an UNLOGGED table are not written to the WAL, so it's a good place to start:
# CREATE UNLOGGED TABLE test_inserts (f1 INT);
→ 1526276096
# INSERT INTO test_inserts SELECT generate_series(1, 10000000);
→ 1902977024
The difference is 376700928 bytes (~359 MB), which sort of makes sense (ten million 4-byte integers plus row headers, page overhead, and other costs), but still looks like a lot, almost 10x the raw data size.
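One way to sanity-check that figure is to compare the write_bytes delta with what the table actually occupies on disk right after the INSERT; a quick sketch using the standard size functions:
-- on-disk size of the freshly loaded table (heap plus FSM/VM/TOAST) and of its indexes
SELECT pg_size_pretty(pg_table_size('test_inserts'))   AS table_size,
       pg_size_pretty(pg_indexes_size('test_inserts')) AS index_size;
If the on-disk size is close to the ~359 MB delta, the overhead is mostly tuple headers and page structure rather than extra writes.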
Test #2: Unlogged table with primary key
# CREATE UNLOGGED TABLE test_inserts (f1 INT PRIMARY KEY);
→ 2379882496
# INSERT INTO test_inserts SELECT generate_series(1, 10000000);
→ 2967339008
The difference is 587456512 bytes (~560MB).
Test #3: regular table
# CREATE TABLE test_inserts (f1 INT);
→ 6460669952
# INSERT INTO test_inserts SELECT generate_series(1, 10000000);
→ 7603630080
Here the difference is already 1142960128 bytes (~1090 MB).
Test #4: regular table with primary key
# CREATE TABLE test_inserts (f1 INT PRIMARY KEY);
→ 12740534272
# INSERT INTO test_inserts SELECT generate_series(1, 10000000);
→ 14895218688
Now the difference is 2154684416 bytes (~2054 MB), and after about 30 seconds an additional ~100 MB was written.
For this test case I made a breakdown by processes:
Process | Bytes written
/usr/lib/postgresql/9.6/bin/postgres | 0
\_ postgres: 9.6/main: checkpointer process | 99270656
\_ postgres: 9.6/main: writer process | 39133184
\_ postgres: 9.6/main: wal writer process | 186474496
\_ postgres: 9.6/main: stats collector process | 0
\_ postgres: 9.6/main: postgres testdb [local] idle | 1844658176
Any ideas or suggestions on how to measure the values I'm looking for correctly? Maybe it's a kernel bug? Or does PostgreSQL really do that many writes?
Edit: To double-check what write_bytes means, I wrote a simple Python script which confirmed that this value really is the number of bytes written.
Edit 2: On PostgreSQL 9.5, test #1 showed 362577920 bytes and test #4 showed 2141343744 bytes, so it's not about the PG version.
Edit 3: Richard Huxton mentioned the Database Page Layout article, and I'd like to elaborate: I agree with the storage cost. It includes a 24-byte row header, 4 bytes of data, and 4 bytes of alignment padding (rows are usually aligned to 8 bytes), which gives 32 bytes per row; with that many rows it comes to about 320 MB per table, which is roughly what I got in test #1.
I could assume the primary key in that case is about the same size as the data, and that in test #4 both the data and the PK are also written to WAL. That gives something like 360 MB × 4 = 1.4 GB, which is still less than the result I got.
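To separate WAL volume from heap and index writes, the WAL generated by a single statement can be measured from the same psql session; a sketch for 9.6 (on PostgreSQL 10+ the functions are named pg_current_wal_lsn() and pg_wal_lsn_diff()):
-- remember the current WAL position in a psql variable
SELECT pg_current_xlog_location() AS wal_before \gset
INSERT INTO test_inserts SELECT generate_series(1, 10000000);
-- WAL bytes produced by the INSERT above
SELECT pg_size_pretty(pg_xlog_location_diff(pg_current_xlog_location(), :'wal_before'));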
Sometimes one gets a message like:
Sugar CRM 6.4.5 Files May Only Be Used With A Sugar CRM 6.4.5 Database.
I am wondering how Sugar determines what version of the database it is using. In the above case, I get the following output:
select * from config where name='sugar_version';
+----------+---------------+-------+
| category | name | value |
+----------+---------------+-------+
| info | sugar_version | 6.4.5 |
+----------+---------------+-------+
1 row in set (0.00 sec)
cat config.php |grep sugar_version
'sugar_version' => '6.4.5',
Given the above output, I am wondering how to debug the output "Sugar CRM 6.4.5 Files May Only Be Used With A Sugar CRM 6.4.5 Database.": Sugar seems to think the files are not of version 6.4.5 even though the sugar_version is 6.4.5 in config.php; where should I look next?
There are two options for this issue:
Option 1: Update your database to the latest version.
Option 2: Follow the steps below and change the SugarCRM config version.
mysql> select * from config where name ='sugar_version';
+----------+---------------+---------+----------+
| category | name | value | platform |
+----------+---------------+---------+----------+
| info | sugar_version | 7.7.0.0 | NULL |
+----------+---------------+---------+----------+
1 row in set (0.00 sec)
Update your SugarCRM version to the appropriate one:
mysql> update config set value='7.7.1.1' where name ='sugar_version';
Query OK, 1 row affected (0.00 sec)
Rows matched: 1 Changed: 1 Warnings: 0
The above commands seem to be correct. Sugar seems to check that config.php and the config table in the database contain the same version. In my case I was making the mistake of using the wrong database -- so if you're like me and tend to have your databases mixed up, double check in config.php that 'dbconfig' is indeed pointing to the right database.
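A quick way to rule that out is to connect with the same credentials that config.php uses and check which database the session actually lands in and what version it stores; for example:
-- which database is this connection actually using?
SELECT DATABASE();
-- and what version does that database claim to be?
SELECT category, name, value FROM config WHERE name = 'sugar_version';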
I am thinking of migrating my website to Google Cloud SQL, and I signed up for a free account (D32).
Upon testing on a table with 23k records, performance was very poor. I read that if I moved from the free account to a full paid account I would get access to faster CPU and HDD... so I did.
Performance is still VERY POOR.
I have been running my own MySQL server for years now, upgrading as needed to handle more and more connections and to gain raw speed (needed because of a legacy application). I heavily optimize tables and configuration, make heavy use of the query cache, etc.
A few pages of our legacy system run over 1.5k queries per page. Currently I have been able to push the MySQL query time (execution plus pulling the data) down to 3.6 seconds for all those queries, meaning MySQL takes about 0.0024 seconds per query to execute and return the values: not the greatest, but acceptable for those pages.
I upload a table involved in many of those queries to Google Cloud SQL and notice that an INSERT already takes SECONDS to execute instead of milliseconds, but I think it might be the sync vs. async setting. I change it to async and the execution time for the insert doesn't seem to change. Not a big problem for now; I am only testing queries at the moment.
I run a simple SELECT * FROM <table> and notice that it takes over 6 seconds. I think maybe the query cache needs to warm up, so I try again, and this time it takes 4 seconds (excluding network traffic). I run the same query on my backup server after a restart, with no connections at all, and it takes less than 1 second; running it again, 0.06 seconds.
Maybe the problem is the cache, or the result set being too big... let's try a smaller subset:
select * from <table> limit 5;
to my server: 0.00 seconds
GCS: 0.04
So I decide to try a trivial SELECT on an empty table, with no records at all, just created with a single column:
to my server: 0.00 seconds
GCS: 0.03
Profiling doesn't give any insight, except that the query cache is not running on Google Cloud SQL and that the query execution itself seems faster, yet overall it is not...
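For reference, the two profiles below were presumably collected along these lines (SHOW PROFILE is deprecated in newer MySQL releases but still available on both servers):
SET profiling = 1;              -- enable per-query profiling for this session
SELECT * FROM <table> LIMIT 5;  -- the query under test
SHOW PROFILES;                  -- list recent queries and their query IDs
SHOW PROFILE;                   -- step-by-step timings for the most recent query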
My Server:
mysql> show profile;
+--------------------------------+----------+
| Status | Duration |
+--------------------------------+----------+
| starting | 0.000225 |
| Waiting for query cache lock | 0.000116 |
| init | 0.000115 |
| checking query cache for query | 0.000131 |
| checking permissions | 0.000117 |
| Opening tables | 0.000124 |
| init | 0.000129 |
| System lock | 0.000124 |
| Waiting for query cache lock | 0.000114 |
| System lock | 0.000126 |
| optimizing | 0.000117 |
| statistics | 0.000127 |
| executing | 0.000129 |
| end | 0.000117 |
| query end | 0.000116 |
| closing tables | 0.000120 |
| freeing items | 0.000120 |
| Waiting for query cache lock | 0.000140 |
| freeing items | 0.000228 |
| Waiting for query cache lock | 0.000120 |
| freeing items | 0.000121 |
| storing result in query cache | 0.000116 |
| cleaning up | 0.000124 |
+--------------------------------+----------+
23 rows in set, 1 warning (0.00 sec)
Google Cloud SQL:
mysql> show profile;
+----------------------+----------+
| Status | Duration |
+----------------------+----------+
| starting | 0.000061 |
| checking permissions | 0.000012 |
| Opening tables | 0.000115 |
| System lock | 0.000019 |
| init | 0.000023 |
| optimizing | 0.000008 |
| statistics | 0.000012 |
| preparing | 0.000005 |
| executing | 0.000021 |
| end | 0.000024 |
| query end | 0.000007 |
| closing tables | 0.000030 |
| freeing items | 0.000018 |
| logging slow query | 0.000006 |
| cleaning up | 0.000005 |
+----------------------+----------+
15 rows in set (0.03 sec)
Keep in mind that I connect to both servers remotely from a machine located in VA, while my own server is located in Texas (even if that should not matter much).
What am I doing wrong? Why do simple queries take this long? Am I missing or misunderstanding something here?
As of right now I won't be able to use Google Cloud SQL, because a page with 1,500 queries would take far too long (roughly 1,500 × ~0.03 s of per-query overhead ≈ 45 seconds).
I know this question is old but....
Cloud SQL has poor support for MyISAM tables; it's recommended to use InnoDB.
We had poor performance when migrating a legacy app; after reading through the docs and contacting the paid support, we had to migrate the tables to InnoDB (see the sketch below). The lack of a query cache was also a killer.
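A sketch of what that migration boils down to (legacy_table is a placeholder; run the ALTER for each MyISAM table found):
-- find which tables in the current schema are still MyISAM
SELECT table_name, engine
FROM information_schema.tables
WHERE table_schema = DATABASE()
  AND engine = 'MyISAM';
-- convert one of them (placeholder name) to InnoDB
ALTER TABLE legacy_table ENGINE = InnoDB;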
You may also find later on that you need to tweak the MySQL conf via the 'flags' in the Google console. One example is 'wait_timeout', which is set too high by default (IMO).
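The current value can be checked from any client session before deciding whether to override it with a flag, e.g.:
SHOW VARIABLES LIKE 'wait_timeout';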
Hope this helps someone :)
The query cache is not yet a feature of Cloud SQL, which may explain the results. However, I recommend closing this question, as it is quite broad and doesn't fit the format of a neat and tidy Q&A. There are just too many variables not mentioned here, and it isn't clear what a decisive "answer" would look like for such a general optimization question with so many variables at play.