Is there a limit for data in MySQL Workbench? - mysql-workbench

I'm working on a script to push data into MySQL through MySQL Workbench. While exploring Workbench, I saw an option that says "Limit to 50000 rows" and can be changed from "Don't Limit" up to 50000. Is this just to limit the data/rows being displayed for each table, or will only 50000 rows be stored per table? Is there a limit to the data or rows that can be stored in each table, or will it keep saving data as long as my hard disk has enough space? Also, is there a way to check how much space each table uses? Thanks.

This limit option simply applies a LIMIT clause to queries that support it (unless the user has already added such a clause).
This is a simple measure to help new users avoid the common pitfall of unconditionally running a SELECT on a large data set. More advanced users usually disable this option and manually apply a LIMIT clause where needed.
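As a rough illustration of what this means in practice (the table name orders and schema mydb below are placeholders, and the exact rewritten query depends on the Workbench version):

-- What you type in the SQL editor:
SELECT * FROM orders;

-- What Workbench effectively runs while the limit option is active:
SELECT * FROM orders LIMIT 0, 50000;

The data stored on disk is unaffected by this option. To see how much space each table uses, a standard query against information_schema works:

SELECT table_name,
       ROUND((data_length + index_length) / 1024 / 1024, 2) AS size_mb
FROM information_schema.tables
WHERE table_schema = 'mydb';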

Related

Increase copy and index speed in RDS PostgreSQL

I'm ingesting billions of rows into a PostgreSQL table using a COPY statement in a SQL script. Once the table is built, I need to add a couple of indexes. This is the only thing I'm doing with the database right now, so I would like to optimize it for copying/indexing. I heard it is best to adjust the maintenance_work_mem parameter value, but when I look at the value in RDS I see:
maintenance_work_mem = GREATEST({DBInstanceClassMemory*1024/63963136},65536)
The database instance I am using is a db.r6g.12xlarge, so it has 384 GB of memory. What do you think I should set the value to? Is adjusting the parameter in Configuration > Parameter group the right place?
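For what it's worth, here is a hedged sketch of how the setting could be inspected and raised only for the session that runs the index builds, rather than editing the parameter group globally (the 2GB value and the index/table names are purely illustrative):

-- See what the RDS formula currently resolves to:
SHOW maintenance_work_mem;

-- Raise it for this session only, then build the index:
SET maintenance_work_mem = '2GB';
CREATE INDEX idx_big_table_col ON big_table (col);

-- Return to the parameter-group default afterwards:
RESET maintenance_work_mem;

maintenance_work_mem is a session-settable parameter, so this does not require a reboot or a parameter group change.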

Limit size of temporary tables (PostgreSQL)

I'm managing a PostgreSQL database server for some users who need to create temporary tables. One user accidentally sent a query with ridiculously many outer joins, and that completely filled the disk up.
PostgreSQL has a temp_file_limit parameter but it seems to me that it is not relevant:
It should be noted that disk space used for explicit temporary tables, as opposed to temporary files used behind-the-scenes in query execution, does not count against this limit.
Is there a way then to put a limit on the size on disk of "explicit" temporary tables? Or limit the row count? What's the best approach to prevent this?
The only way to limit a table's size in PostgreSQL is to put it in a tablespace on a file system of an appropriate size.
Since temporary tables are created in the default tablespace of the database you are connected to, you have to place your database in that size restricted tablespace. To keep your regular tables from being limited in the same way, you'd have to explicitly create them in a different, less limited tablespace. Make sure that your user has no permissions on that less limited tablespace.
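A minimal sketch of that setup, assuming a deliberately small file system mounted at /mnt/small_fs and a larger one at /mnt/big_fs (all paths, object names, and the role alice are placeholders):

-- Tablespace on the small file system; this is where temporary tables will end up
CREATE TABLESPACE limited_ts LOCATION '/mnt/small_fs/pgdata';

-- Tablespace on the larger file system for the regular tables
CREATE TABLESPACE big_ts LOCATION '/mnt/big_fs/pgdata';

-- Make the size-restricted tablespace the database default
CREATE DATABASE appdb TABLESPACE limited_ts;

-- Regular tables are created explicitly in the larger tablespace ...
CREATE TABLE important_data (id bigint, payload text) TABLESPACE big_ts;

-- ... and ordinary users get no CREATE permission there
-- (CREATE on a tablespace is not granted to PUBLIC by default;
--  just make sure it was never granted to the user)
REVOKE CREATE ON TABLESPACE big_ts FROM alice;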
This is a rather unappealing solution, so maybe you should rethink your requirement. After all, the user could just as well fill up the disk by inserting the data into a permanent table.

Is there any size limitation of SQL result?

I am using Azure PostgreSQL, and I have a lot of files saved with the bytea data type in a table. In my project, I will execute SQL queries to fetch these files.
Sometimes a query will involve multiple files, so the result of a single SQL query can be large. My question: is there a data size limit on the result of one SQL query? Should I apply some limit here? Any suggestion is appreciated.
There is no limit for the size of a result set in PostgreSQL.
However, many clients cache the whole result set in memory, which can easily lead to an out-of-memory condition on the client side.
There are ways around that:
Use cursors and fetch the result row by row or in batches, as sketched below. That should work with any client API.
With the C API (libpq), you could activate single-row mode.
With JDBC, you could set the fetch size.
Note that this means that you could get a runtime error from the database server in the middle of processing a result set.
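A hedged sketch of the cursor approach in plain SQL, with placeholder table and column names (with JDBC you would get the same effect by setting the fetch size instead):

BEGIN;

-- Declare a cursor over the potentially huge result set
DECLARE file_cur CURSOR FOR
    SELECT id, file_data
    FROM file_store
    WHERE batch_id = 42;

-- Fetch and process a manageable chunk at a time instead of the whole set
FETCH 100 FROM file_cur;   -- repeat until no more rows are returned
FETCH 100 FROM file_cur;

CLOSE file_cur;
COMMIT;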

DB2 updated rows since last check

I want to periodically export data from db2 and load it in another database for analysis.
In order to do this, I would need to know which rows have been inserted/updated since the last time I've exported things from a given table.
A simple solution would probably be to add a timestamp column to every table and use that as a reference, but I don't have such a column at the moment, and I would like to avoid adding it if possible.
Is there any other solution for finding the rows which have been added/updated after a given time (or something else that would solve my issue)?
There is an easy option for a timestamp in Db2 (for LUW) called
ROW CHANGE TIMESTAMP
This column is managed by Db2 and can be defined as HIDDEN, so existing SELECT * FROM queries will not retrieve the new column, which would otherwise cause extra costs.
Check out the Db2 CREATE TABLE documentation.
This functionality was originally added for optimistic locking, but it can be used for situations like this as well.
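A hedged sketch of how this could look (the table sales and column rct are placeholders; verify the exact syntax against the documentation above):

-- Add an implicitly hidden row change timestamp to an existing table
ALTER TABLE sales
    ADD COLUMN rct TIMESTAMP NOT NULL
        GENERATED ALWAYS FOR EACH ROW ON UPDATE AS ROW CHANGE TIMESTAMP
        IMPLICITLY HIDDEN;

-- SELECT * still returns only the original columns; the export job
-- filters on the hidden timestamp explicitly:
SELECT *
FROM sales
WHERE ROW CHANGE TIMESTAMP FOR sales > TIMESTAMP('2024-01-01-00.00.00');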
There is a similar concept for Db2 for z/OS - you would have to check that out yourself, as I have not tried it there.
Of course there are other ways to solve this, such as replication.
That is not possible if you do not have a timestamp column. With a timestamp, you can tell which rows are new or modified.
You can also use the Time Travel feature to get the new values, but that also implies a timestamp column.
Another option is to put the tables in append mode and then get the rows after a given one. However, this is not reliable after a reorg, and it affects performance and space utilisation.
One possible option is to use SQL replication, but that needs extra tables for staging.
Finally, another option is to read the logs with the db2ReadLog API, but that implies development work. Just applying the archived logs to the new database is also possible; however, that database will remain in roll-forward pending state.

Which will have better performance in DB2?

I need to populate a table from a master table that has 2 billion records. The insert needs to satisfy some conditions, and some columns have to be calculated before the rows are inserted.
I have two options, but I don't know which one to follow to improve performance.
Option 1
Create a cursor that filters the master table with the conditions, fetch the records one by one for calculation, and then insert them into the child table.
Option 2
Insert first using the conditions, and then do the calculation using an UPDATE statement.
Please assist.
Using a cursor to get the data, perform the calculation, and then insert into the database will be time consuming, since it involves data connections and I/O for each retrieval and insertion (for both databases).
Databases are usually better with bulk operations, so Option 2 will definitely give you better performance. Option 2 is also better for troubleshooting (as the process is cleanly separated - step 1: insert, step 2: calculate), whereas with Option 1 an error in the middle of the process forces you to redo all the steps again.
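A hedged sketch of Option 2 in set-based form (the table names, filter, and calculation below are placeholders):

-- Step 1: bulk insert the rows that satisfy the conditions
INSERT INTO child_table (id, amount)
SELECT m.id, m.amount
FROM   master_table m
WHERE  m.status = 'ACTIVE';      -- example filter condition

-- Step 2: compute the derived columns with one set-based UPDATE
UPDATE child_table
SET    amount_with_tax = amount * 1.2;   -- example calculation

With 2 billion source rows, committing in ranges (for example by key intervals) may be needed to keep the transaction log manageable, but the principle stays the same.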
Opening a cursor and inserting records one by one might have serious performance issues at volumes on the order of a billion rows, especially if you have a weak network between your database tier and app tier. The fastest way to do this could be to use the Db2 export utility to download the data, let the program manipulate the data from the file, and later load the file back into the child table. Apart from the file-based option, you can also consider the following approaches:
1) Write an SQL stored procedure (no need to ship the data out of the database to make changes); see the sketch after this answer.
2) If you are using Java/JDBC, use the batch update feature to update multiple records at the same time.
3) If you are using a tool like Informatica, turn on its bulk load feature.
Also see the IBM developerWorks article on improving insert performance. The article is a little bit older, but the concepts are still valid: http://www.ibm.com/developerworks/data/library/tips/dm-0403wilkins/
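A minimal sketch of option 1, a SQL stored procedure that keeps the whole transformation inside the database (all object names, the filter, and the calculation are placeholders):

CREATE OR REPLACE PROCEDURE load_child_table ()
LANGUAGE SQL
BEGIN
    -- Filter and calculate in a single set-based statement,
    -- without moving the data out of the database
    INSERT INTO child_table (id, amount, amount_with_tax)
    SELECT m.id,
           m.amount,
           m.amount * 1.2        -- example calculation
    FROM   master_table m
    WHERE  m.status = 'ACTIVE';  -- example filter condition
END

It would then be invoked with CALL load_child_table().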