What is the cost to DROP DATABASE in Google Cloud SQL?

Google Cloud SQL has a price for I/O operations.
What is the cost of a DROP DATABASE operation? E.g., is it a function of the size of the database, or a fixed cost?
Similar questions for DROP TABLE as well as deleting an entire instance.

Google Cloud SQL currently uses innodb_file_per_table=OFF, so all the data is stored in the system tablespace. When a large database is dropped, all the InnoDB pages associated with it are put on the list of free pages. This only requires updating the InnoDB pages that hold the bitmap of free pages, so the number of I/O operations should be small. I just did a test and dropping a 60+ GiB database took about 18 seconds.
Dropping a table incurs the same cost.
Deleting an instance doesn't cost anything. :-)
Note that, due to the use of innodb_file_per_table=OFF, the on-disk size of the database will not decrease.
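For reference, a minimal sketch of the check and the operation being discussed (big_db is just a placeholder name, not from the question):

-- confirm how table data is stored on the instance
SHOW VARIABLES LIKE 'innodb_file_per_table';
-- dropping the database marks its InnoDB pages as free inside the system tablespace,
-- but the tablespace file itself does not shrink
DROP DATABASE big_db;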

Related

Is there a limit on the import in Google Cloud SQL?

I am new to Google Cloud SQL and have created a schema on Cloud SQL. I have imported (using the import option in the Google GUI) a CSV file of 5M (unique) rows into this table, but only 0.5M rows show up. Not sure if there is a limit or if I am missing something.
P.S. I also have enough free storage available.
Yes, there's a row size limit of 65,535 bytes in MySQL, even if your storage engine is capable of storing larger rows. This may be the reason why your table only displays a limited number of rows. There are several factors that affect the row size limit, like the storage engine (InnoDB/MyISAM) and the page header and trailer data used by the storage engine. This is based on the MySQL documentation on row size limits.
Since Cloud SQL supports current and previous versions of MySQL (as well as PostgreSQL and SQL Server), the row size limit for those versions is also applicable.
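To see the 65,535-byte limit itself in action, here is a small illustrative example (the table and column names are invented); MySQL rejects a definition whose declared row length exceeds the limit:

CREATE TABLE row_size_demo (
    c1 VARCHAR(20000),
    c2 VARCHAR(20000),
    c3 VARCHAR(20000),
    c4 VARCHAR(20000)
) ENGINE=MyISAM CHARACTER SET latin1;
-- fails with ERROR 1118: Row size too large. The maximum row size for the
-- used table type, not counting BLOBs, is 65535.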

How much shared memory does Postgresql need per max_locks_per_transaction?

I have a PostgreSQL 10 database with about 300k tables in 23k schemas. I am trying to upgrade to PostgreSQL 13 using pg_upgradecluster. This is failing while attempting to dump all the schemas:
pg_dump: error: query failed: ERROR: out of shared memory
HINT: You might need to increase max_locks_per_transaction.
pg_dump: error: query was: LOCK TABLE "a45119740"."activity_hrc" IN ACCESS SHARE MODE
Is setting max_locks_per_transaction to 300k something that can be done? I haven't been able to find anything explaining how much shared memory this might need. The machine has 64G of RAM.
(I understand that I need to change my db design; I have been backing up one schema at a time until now, so I wasn't aware of this problem.)
Your lock table needs to be big enough to lock all your tables and metadata tables.
Since the lock table has room enough for
max_locks_per_transaction * (max_connections + max_prepared_transactions)
locks, all you need to do is set max_locks_per_transaction big enough that the lock table can hold the locks your pg_dump and the rest of your workload need.
To answer the question of how much space each entry in the lock table needs: that can vary by architecture, but in general it is sizeof(LOCK) + sizeof(LOCKTAG), which is 168 bytes on my Linux system.
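As a rough worked example (the max_connections value of 100 below is an assumption, not something from the question): to hold about 300k table locks in the single pg_dump transaction, you would need roughly 300000 / (100 + 0) = 3000 locks per transaction, and at about 168 bytes per slot the whole lock table only amounts to roughly 300000 * 168 bytes, i.e. about 50 MB of shared memory.

-- sized for max_connections = 100, max_prepared_transactions = 0
ALTER SYSTEM SET max_locks_per_transaction = 3000;
-- the new value only takes effect after a server restart; verify afterwards with:
SHOW max_locks_per_transaction;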

Using existing Cloud SQL Table loads slow; Creating Table through AppMaker in Cloud SQL loads fast

I am using a Cloud SQL database with one table of ~700 records with ~100 fields and it takes 8-10 seconds to load an individual record from the database into AppMaker. This is how long it takes even when I have set the Query Page size to "1".
To test whether this was an issue with my database, I created an External Model (i.e., a new table) in the database through AppMaker. The new table created through AppMaker loads in less than 2 seconds, which is a fairly typical load speed for AppMaker.
Why would my pre-existing table load slowly whereas the table created in the SQL DB through AppMaker loads quickly?
The only idea that pops into my mind is that the databases are in different regions. App Maker's documentation recommends creating the database instance in the us-central region to achieve the best performance. Some time ago I tried to pair App Maker with a database in a European region and it worked fairly slowly.

Postgres - Is it necessary to create tablespace in my case?

I have a mobile/web project using pg9.3 as the database and Linux as the server.
The data won't be huge, but it will increase as time goes on.
Thinking long term, I want to know about the following.
Questions:
1. Is it necessary for me to create a tablespace for my database, or should I just use the default one?
2. If I create a new tablespace, what is the proper location on Linux to create the folder, and why?
3. If I don't create it now and wait until I have to, will it then be easy to migrate the db with its data to a new tablespace?
Just use the default tablespace and do not create new tablespaces. Tablespaces are only useful if you have multiple physical disks, so you can define which data is stored on which physical disk. The directory where your data is located is not that important for the workings of Postgres, so if you only have one disk it is useless to use tablespaces.
Should your data grow beyond the capacity of one disk, you will have to perform a full data migration anyway to move it to another physical disk, so you can configure tablespaces at that time.
The idea behind defining which data is located on which disk (with tablespaces) is that you can do things like putting a big, rarely used table on a slow disk and putting a very intensively used table on a separate, faster disk. But I assume you're not there yet, so don't overcomplicate things.
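For completeness, should you ever reach that point, a tablespace is simple to set up (the /ssd1/pgdata path is only an example; the directory must already exist and be owned by the postgres OS user):

CREATE TABLESPACE fastspace LOCATION '/ssd1/pgdata';
-- move an existing table onto the new disk
ALTER TABLE big_table SET TABLESPACE fastspace;
-- or create new objects there directly
CREATE TABLE new_table (id int) TABLESPACE fastspace;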

No merge tables in MariaDB - options for very large tables?

I've been happily using MySQL for years, and have followed the MariaDB fork with interest.
The server for one of my projects is reaching end of life and needs to be rehosted, likely to CentOS 7, which includes MariaDB.
One of my concerns is the lack of the merge table feature, which I use extensively. We have a very large (at least by my standards) data set on the order of 100M records / 20 GB (with most data compressed) and growing. I've split this into read-only compressed MyISAM "archive" tables organized by data epoch, and a regular MyISAM table for current data and inserts. I then span all of these with a merge table (a sketch of this layout follows the list below).
The software working against this database is written so that it figures out which table to retrieve data from for the timespan in question, and if the timespan spans multiple tables, it queries the overlying merge table.
This does a few things for me:
Queries are much faster against the smaller tables - unfortunately, the index needed for the most typical query and for preventing duplicate records is relatively complicated
Frees the user from having to query multiple tables and assemble the results when a query spans multiple tables
Allowing > 90% of the data to reside in the compressed tables saves a lot of disk space
I can back up the archive tables once - this saves tremendous time, bandwidth and storage on the nightly backups
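As a sketch of the layout described above (the table and column names are invented for illustration), the classic MySQL setup looks roughly like this, with the myisampack/myisamchk command-line tools used to compress the closed-out epochs:

-- identical read-only archive tables plus a writable current table
CREATE TABLE archive_2013 (ts DATETIME, sensor INT, val DOUBLE,
    PRIMARY KEY (sensor, ts)) ENGINE=MyISAM;
CREATE TABLE archive_2014 LIKE archive_2013;
CREATE TABLE current_data LIKE archive_2013;
-- the merge table spans all of them; inserts are routed to the last listed table
CREATE TABLE all_data (ts DATETIME, sensor INT, val DOUBLE, INDEX (sensor, ts))
    ENGINE=MERGE UNION=(archive_2013, archive_2014, current_data) INSERT_METHOD=LAST;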
Any suggestions for how to handle this without merge tables? Does any other table type offer the compressed, read-only option that MyISAM does?
I'm thinking we may have to go with separate tables and live with the additional complication of changing all the code in the multiple projects that use this database.
MariaDB 10 introduced the CONNECT storage engine that does a lot of different things. One of the table types it provides is TBL, which is basically an expansion of the MERGE table type. The TBL CONNECT type is currently read only, but you should be able to just insert into the base tables as needed. This is probably your best option but I'm not very familiar with the CONNECT engine in general and you will need to do a bit of experimentation to decide if it will work.
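A minimal sketch of what that could look like, reusing the invented table names from the earlier example (I have not verified how the TBL type behaves with compressed MyISAM tables):

-- the CONNECT engine usually has to be installed first
INSTALL SONAME 'ha_connect';
-- a TBL table that unions the underlying tables, much like a MERGE table;
-- if no columns are given, CONNECT takes them from the first table in the list
CREATE TABLE all_data_tbl ENGINE=CONNECT table_type=TBL
    table_list='archive_2013,archive_2014,current_data';
-- reads go through all_data_tbl; inserts continue to target current_data directly
SELECT COUNT(*) FROM all_data_tbl;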