Db2 on IBM Cloud: How to find and expand the table space?

I use DB2 on IBM Cloud. I got a message:
Unable to allocate new pages in table space "XXXXX"
May I know what the table space is for the free plan and the Flex plan, and how to expand the space?

If it's Db2 on Cloud's free plan, your tablespace would be <username>space1.
For the free version you are limited to that one tablespace and cannot increase it yourself.
Flex is a different story where you can increase it through a management page.
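If you just want to see how full that tablespace is, a rough sketch against the Db2 monitoring functions (using the placeholder tablespace name from your error message; substitute your real one) could look like this:

-- Report used vs. total pages per tablespace; -2 means "all members".
SELECT tbsp_name,
       tbsp_page_size,
       tbsp_total_pages,
       tbsp_used_pages,
       DEC(100.0 * tbsp_used_pages / NULLIF(tbsp_total_pages, 0), 5, 2) AS pct_used
FROM TABLE(MON_GET_TABLESPACE(NULL, -2))
WHERE tbsp_name = 'XXXXX';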

Related

Is there a limit on the import in Google Cloud SQL?

I am new to Google Cloud SQL and have created a schema on Cloud SQL. I have imported (using the import option in the Google GUI) a CSV file of 5M (unique) rows into this table, but only 0.5M rows show up. I am not sure if there is a limit or if I am missing something.
P.S. I also have enough free storage available.
Yes, there is a row size limit of 65,535 bytes (roughly 64 KB) in MySQL, even if your storage engine is capable of storing larger rows. This may be the reason why your table only displays a limited number of rows. Several factors affect the row size limit, such as the storage engine (InnoDB/MyISAM) and the page header and trailer data used by the storage engine. This is based on the MySQL documentation on row size limits.
Since Cloud SQL supports current and previous versions of MySQL (as well as PostgreSQL and SQL Server), the row size limit for those versions is also applicable.
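As a rough illustration of that limit (the table and column names here are made up), a definition whose columns add up to more than 65,535 bytes is rejected regardless of how much storage you have:

-- Fails with "ERROR 1118: Row size too large. The maximum row size for the
-- used table type, not counting BLOBs, is 65535."
CREATE TABLE row_limit_demo (
  a VARCHAR(10000), b VARCHAR(10000), c VARCHAR(10000),
  d VARCHAR(10000), e VARCHAR(10000), f VARCHAR(10000),
  g VARCHAR(6000)
) ENGINE = InnoDB CHARACTER SET latin1;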

SAP / Db2 for LUW 11.1: Finding the process that is creating a high number of TLOG entries

I have a problem with replicating data from Db2 LUW (the database for an SAP ECC app).
The replication procedure utilizes the low-level Db2 log reader API (db2ReadLog):
https://www.ibm.com/docs/en/db2/11.1?topic=apis-db2readlog-read-log-records
Recently I can no longer read the logs in a timely manner (I connect over Express Route between Db2 and the software that reads the log, so the network is very good).
What I have been able to identify so far is a 4-hour interval in which log reading was getting slower. I was also replicating AUFK (an SAP ECC table) and saw an enormous number of UPDATEs every 4 hours. We were able to find the responsible job on SAP ECC and disable it.
Now I am spotting a different interval (probably another job on a different table).
Is there a way (in SAP ECC or directly on Db2) to verify which table has the most DML (writes) or is causing high consumption of log space?
I would like to get statistics like a top 10:
TABLE Name || % of Daily Log utilization
or
TABLE Name || No of Inserts || No of Updates || No of Deletes
It would be even better to get the stats by hour so I can associate them with some job that is running.
Note: I don't have access to Db2 or SAP ECC myself and need to give the team some kind of guideline on what I expect them to run on the SAP ECC or Db2 side.
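One hedged starting point for the Db2 side, assuming the team can use the MON_GET_TABLE monitoring function (available on Db2 10.5/11.1), is to rank tables by write activity. Note these counters are cumulative since database activation, so an hourly breakdown would require sampling the query periodically and diffing the results:

-- Top 10 tables by writes since the database was activated; -2 = all members.
SELECT tabschema,
       tabname,
       SUM(rows_inserted) AS rows_inserted,
       SUM(rows_updated)  AS rows_updated,
       SUM(rows_deleted)  AS rows_deleted
FROM TABLE(MON_GET_TABLE(NULL, NULL, -2))
GROUP BY tabschema, tabname
ORDER BY SUM(rows_inserted) + SUM(rows_updated) + SUM(rows_deleted) DESC
FETCH FIRST 10 ROWS ONLY;

This does not directly give the percentage of daily log space per table, but the tables with the most INSERTs/UPDATEs/DELETEs are usually the biggest log producers.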

Aurora PostgreSQL Temporary storage issue for index creation

We have migrated some of the data from MS SQL Server to PostgreSQL and are using an r6g.large Aurora PostgreSQL RDS instance. We transferred the data to the PostgreSQL instance using DMS; the table size is around 183 GB and it has around 1.5 billion records. Now we are trying to create a primary key on an Id column, but it is failing with the error below:
ERROR: could not write to file "base/pgsql_tmp/pgsql_tmp18536.30": No space left on device
CONTEXT: SQL statement "ALTER TABLE public.tbl_actions ADD CONSTRAINT tbl_actions_pkey PRIMARY KEY (action_id)"
PL/pgSQL function inline_code_block line 10 at SQL statement
SQL state: 53100
When we looked at the documentation, we found that index creation uses the temporary storage of the instance, which for r6g.large is 32 GiB. For this huge table that storage is not sufficient, hence the index creation fails with the above error.
Is there any workaround to solve this without having to upgrade the instance type, maybe by changing some values in the parameter group or option groups?
To me, this looks like the storage has run out and not the RAM. You can check this using the Monitoring tab under the heading "Free Storage Space" on the RDS instance in AWS Console.
Try this:
To increase storage for a DB instance:
1. Sign in to the AWS Management Console and open the Amazon RDS console at https://console.aws.amazon.com/rds/.
2. In the navigation pane, choose Databases.
3. Choose the DB instance that you want to modify.
4. Choose Modify.
5. Enter a new value for Allocated storage. It must be greater than the current value.
More details here:
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_PIOPS.Storage
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_Troubleshooting.html#CHAP_Troubleshooting.Storage
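For reference, a hedged way for the team to see how much temporary file space PostgreSQL itself has been using, and to let more of the index build happen in memory before spilling to pgsql_tmp (the 2GB value below is purely illustrative and may still not be enough for 1.5 billion rows):

-- Cumulative temporary-file usage for the current database since stats reset.
SELECT datname, temp_files, temp_bytes
FROM pg_stat_database
WHERE datname = current_database();

-- Session-level setting; a larger sort memory reduces spill to temp files.
SET maintenance_work_mem = '2GB';
ALTER TABLE public.tbl_actions
  ADD CONSTRAINT tbl_actions_pkey PRIMARY KEY (action_id);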

What is the cost to DROP DATABASE in Google Cloud SQL?

Google Cloud SQL has a price for I/O operations.
What is the cost of a DROP DATABASE operation? E.g., is it a function of the size of the database, or a fixed cost?
Similar questions for DROP TABLE as well as deleting an entire instance.
Google Cloud SQL currently uses innodb_file_per_table=OFF, so all the data is stored in the system table space. When a large database is dropped, all the associated InnoDB pages are put on the list of free pages. This only requires updating the InnoDB pages that hold the bitmap of free pages, so the number of I/O operations should be small. I just did a test, and dropping a database of 60+ GiB took about 18 seconds.
Dropping a table incurs the same kind of cost.
Deleting an instance doesn't cost anything. :-)
Note that, due to the use of innodb_file_per_table=OFF, the on-disk size of the database will not decrease.
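You can confirm the setting on your instance yourself, for example:

-- Shows OFF when all tables live in the shared system tablespace (ibdata1);
-- in that case DROP DATABASE frees pages for reuse but the file never shrinks.
SHOW VARIABLES LIKE 'innodb_file_per_table';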

Local vs Heroku Postgres speed

I have a local database with a single table which has roughly 5.5 million records. I created a new database (Basic Plan) on Heroku Postgres and restored the local dump into it. Firing up psql and doing some queries I noticed that the speed is significantly lower than locally. I then provisioned another database with the Crane plan and the numbers are similarly bad.
Here are some numbers:
select count(*) from table;
Local: 1216.744 ms
Heroku (Basic): 4697.073 ms
Heroku (Crane): 2972.302 ms
select column from table where id = 123456;
Local: 0.249 ms
Heroku (Basic): 127.557 ms
Heroku (Crane): 137.617 ms
How are these huge differences possible? Could this be entirely related to hardware differences? Is there any easy way to increase the throughput?
The 120-130 ms for the single-row select is likely indicative of network latency, as Lukos suggests, and you can reduce this by running your queries closer to the database server. However, the 2-3 seconds of latency is more likely to do with database speed -- specifically, I/O throughput. This is why Heroku emphasizes that the difference between their database offerings has to do with cache size.
Heroku Postgres stores your data on Amazon EBS. (This is revealed in an incident report, and would incidentally explain the 1 TB limit too.) EBS performance is a bit like a rollercoaster; much more so than local disks. Some reads can be fast, and others can be slow. Sometimes the whole volume drops to 100 KB/s, sometimes it maxes out the interconnect.
In my experience hosting databases in EC2, I found that running RAID 10 over EBS smoothed out performance differences. I don't believe Heroku does this, as it would greatly increase costs above the database plan price points. AWS recently announced provisioned IOPS for EBS that would allow you to pay for dedicated capacity, further increasing predictability but also increasing costs.
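One hedged way to separate server-side execution time from network latency and I/O effects is to let the server time the query itself, for example (substitute your real table name for the placeholder used in the question):

-- The reported execution time is spent on the server only, so any gap between
-- it and the wall-clock time seen from psql is round-trip latency.
EXPLAIN (ANALYZE, BUFFERS) SELECT count(*) FROM your_table;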
So you are asking if it is possible for an online database service to be slower than a local database? Yes, of course it will be slower; it has all the network latency to deal with.
Is there an easy way to speed it up? Apart from the obvious, like faster network links and moving your office next door to Heroku's data centre: no.
I created a new database (Basic Plan) on Heroku Postgres and restored the local dump into it.
It looks like you were playing with different DBs at the same time.
select column from table where id = 123456;
Local: 0.249 ms
Heroku (Basic): 127.557 ms
Heroku (Crane): 137.617 ms
I strongly suspect that your Heroku app and Heroku DB are not hosted in the same region. Both need to be in the same region, either in the US or in the EU.
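A quick, hedged way to check whether round-trip latency is the culprit is psql's client-side timing on a query that does essentially no work:

-- In psql, first run: \timing on
-- A trivial query's reported time is almost pure network round-trip;
-- if this alone takes ~100 ms, the region/placement is the problem.
SELECT 1;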