Error 1105 in SQL Server - sql-server-2012-express

I'm using SQL Server 2012 as the database server.
Right now the .mdf file size is around 10 GB.
Whenever I perform any transaction against this database, SQL Server throws the error below:
Error Number: 1105
Error Message: Could not allocate space for object 'dbo.tblsdr'.'PK_tblsdr_3213E83F0AD2A005' in database 'hwbsssdr' because the 'PRIMARY' filegroup is full.
Create disk space by deleting unneeded files, dropping objects in the filegroup, adding additional files to the filegroup, or setting autogrowth on for existing files in the filegroup.
There is almost 400 GB of free space available on my disk.
Can anyone tell me what the issue is and how I can solve it?

Since you are using the Express edition of SQL Server 2012, there is a limit of 10 GB per database, so that is your problem.
By the way, this error does not necessarily mean the disk is running out of space.
If a database has MAXSIZE set to a specific value and the database reaches that value, every subsequent transaction will report the error from your question.
So, if you are sure that you have enough disk space for the next transactions, check the MAXSIZE property of your database by executing the following code:
use master;
go
exec sp_helpdb [YourDatabase]
If you want to change the MAXSIZE property of the database, you can do so with the following code:
alter database [YourDatabase]
modify file
(
name = 'YourDatabaseFile',
maxsize = X MB
)
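You can also check the current size and growth limit of each file directly from sys.database_files. A quick sketch: size and max_size are reported in 8 KB pages, so dividing by 128 converts to MB, and max_size = -1 means unlimited growth:
use [YourDatabase];
go
select name,
       size / 128 as current_size_mb,
       case max_size
           when -1 then 'unlimited'
           else cast(max_size / 128 as varchar(20)) + ' MB'
       end as max_size
from sys.database_files;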

Explanation
The specified filegroup has run out of free space.
Action
To gain more space, you can free disk space on any disk drive containing a file in the full filegroup, allowing the files in the group to grow. Or you can gain space by adding a data file to the specified database.
Freeing disk space
You can free disk space on your local drive or on another disk drive. To move the data files in the filegroup with an insufficient amount of free disk space to a different disk drive:
1. Detach the database by executing sp_detach_db.
2. Move the data files to the other drive.
3. Attach the database by executing sp_attach_db, pointing to the moved files.
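In T-SQL this sequence looks roughly like the following (the file paths are hypothetical, and note that sp_attach_db is deprecated in later versions in favor of CREATE DATABASE ... FOR ATTACH):
exec sp_detach_db 'hwbsssdr';
-- move hwbsssdr.mdf and hwbsssdr_log.ldf to the drive with free space, then:
exec sp_attach_db 'hwbsssdr',
    'E:\Data\hwbsssdr.mdf',
    'E:\Data\hwbsssdr_log.ldf';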
Using a data file
Another solution is to add a data file to the specified database using the ADD FILE clause of the ALTER DATABASE statement, or to enlarge an existing data file using the MODIFY FILE clause of the ALTER DATABASE statement, specifying the SIZE and MAXSIZE options.
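As a rough sketch of both options (the file name, path, and sizes here are hypothetical, and on Express edition neither option will get you past the 10 GB per-database cap):
-- add a second data file to the default (PRIMARY) filegroup
alter database hwbsssdr
add file
(
    name = 'hwbsssdr_data2',
    filename = 'E:\Data\hwbsssdr_data2.ndf',
    size = 1GB,
    filegrowth = 256MB
);

-- or enlarge the existing data file
alter database hwbsssdr
modify file
(
    name = 'hwbsssdr',
    size = 12GB
);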

Related

How to resolve Amazon RDS Postgresql instance's DiskFull error?

We have a very small database for storing some relational data in an Amazon RDS instance. The version of the PostgreSQL engine is 12.7.
There are a number of Lambda functions in AWS in the same region that access this instance for inserting records. In the process, some join queries are also used. We use the psycopg2 Python library to interact with the DB. Since the size of the data is very small, we have used a t2.small instance with 20 GB storage and 1 CPU. In production, however, a t2.medium instance has been used. Auto scaling has not been enabled.
Recently, we have started experiencing an issue with this database. After the Lambda functions run for a while, at some point they time out. This is because the database takes too long to return a response, or sometimes throws a DiskFull error as follows:
DiskFull
could not write to file "base/pgsql_tmp/pgsql_tmp1258.168": No space left on device
I have referred to this documentation to identify the cause: Troubleshoot RDS DiskFull error
Following are the queries for checking the DB file size:
SELECT pg_size_pretty(pg_database_size('db_name'));
The response of this query is 35 MB.
SELECT pg_size_pretty(SUM(pg_relation_size(oid))) FROM pg_class;
The output of the above query is 33 MB.
As we can see, the DB file size is very small. However, on checking the size of the temporary files, we see the following:
SELECT datname, temp_files AS "Temporary files",temp_bytes AS "Size of temporary files" FROM pg_stat_database;
If we look at the size of the temporary files, it's roughly 18.69 GB, which is why the DB is throwing a DiskFull error.
Why is the PostgreSQL instance not deleting the temporary files after the queries have finished? Even after rebooting the instance, the temporary file size is the same (although rebooting is not a feasible solution anyway, as we want the DB to delete the temporary files on its own). Also, how do I avoid the DiskFull error, as I may want to run more Lambda functions that interact with the DB?
Just for additional information, I am including some RDS monitoring graphs for CPU Utilisation and Free Storage Space, taken while the DB slowed down.
From these, I am guessing that we probably need to enable auto scaling, as the CPU Utilisation hits 83.5%. I would highly appreciate it if someone shared some insights and helped in resolving the DiskFull error and identifying why the temporary files are not deleted.
One of the join queries the lambda function runs on the database is:
SELECT DISTINCT
    scl1.*, scl2.date_to AS compiled_date_to
FROM
    logger_main_config_column_name_loading
JOIN
    column_name_loading ON column_name_loading.id = logger_main_config_column_name_loading.column_name_loading_id
JOIN
    sensor_config_column_name_loading ON sensor_config_column_name_loading.column_name_loading_id = column_name_loading.id
JOIN
    sensor_config_loading AS scl1 ON scl1.id = sensor_config_column_name_loading.sensor_config_loading_id
INNER JOIN (
    SELECT id, hash, min(date_from) AS date_from, max(date_to) AS date_to
    FROM sensor_config_loading
    GROUP BY id, hash
) AS scl2
    ON scl1.id = scl2.id AND scl1.hash = scl2.hash AND scl1.date_from = scl2.date_from
WHERE
    logger_main_config_loading_id = %(logger_main_config_loading_id)s;
How can this query be optimized? Will running smaller queries in a loop be faster?
pg_stat_database does not show the current size and number of temporary files; it shows cumulative historical data. So your database has had 145 temporary files since the statistics were last reset.
Temporary files get deleted as soon as the query is done, no matter if it succeeds or fails.
You get the error because you have some rogue queries that write enough temporary files to fill the disk (perhaps some forgotten join conditions). To avoid the out-of-space condition, set the parameter temp_file_limit to a reasonable value and reload PostgreSQL (on RDS, parameters are changed through the DB parameter group rather than in postgresql.conf).
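On PostgreSQL 12 you can also list the temporary files that exist right now (as opposed to the cumulative counters in pg_stat_database) and cap the temporary space a single process may use. A sketch, with '5GB' as an arbitrary example value; on RDS, set temp_file_limit through the DB parameter group, and pg_ls_tmpdir() requires the pg_monitor role:
-- temporary files currently on disk (PostgreSQL 12+)
SELECT name, size, modification FROM pg_ls_tmpdir();

-- current per-process limit; -1 means unlimited
SHOW temp_file_limit;

-- on a self-managed server: cap temp file usage, then reload the config
ALTER SYSTEM SET temp_file_limit = '5GB';
SELECT pg_reload_conf();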

db2 Restored database size vs source database size

When a DB2 database is restored and recovered from backups:
is the restored database a physical copy, i.e., identical block for block with the source database (as it was at the time of backup) and of identical size to the source database?
or
is the restored database a logical copy, where data file blocks are reorganized and coalesced (so that most of the unused, fragmented free space in the data files has been removed), often causing the restored database to have a smaller storage footprint?
It is a page-for-page physical copy, but only of the used extents of pages in each table space. You cannot change the logical contents of the data during a restore, but you can alter the layout of the physical persistent storage.
There are also some changes you can make during the restore process that affect the persistently stored state of the system, such as a redirected restore altering the table space definitions or storage groups, replacing the DB history file, changing the encryption method in use, or upgrading the DB to a new release level.
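For example, a redirected restore that moves one table space to different storage might look roughly like this (database name, backup timestamp, table space ID, and path are all hypothetical; PATH is for SMS containers, while DMS containers take FILE or DEVICE with a size):
RESTORE DATABASE MYDB FROM /backups TAKEN AT 20240101120000 INTO MYDB REDIRECT;
SET TABLESPACE CONTAINERS FOR 2 USING (PATH '/fastdisk/ts2');
RESTORE DATABASE MYDB CONTINUE;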
It's a page-for-page physical copy. (This is why you can't, for example, back up on a little-endian system and restore onto a big-endian system.)

Vacuuming Postgresql has used up disk space

I have just run VACUUM on a Postgres table to try to recover disk space; however, the result is that all the disk space has been consumed. Does VACUUM create log files or transaction logs that can be deleted?
I'm assuming you performed a VACUUM FULL, as the standard VACUUM just marks the space in the data file that is associated with deleted records as free so that Postgres can reuse it for new records. It doesn't release the space to the operating system.
VACUUM FULL does release space, but it does this by copying all the information it wants to keep from the data file into a new data file, and only when complete does it delete the old data file. As such, VACUUM FULL requires extra space while it is running.
http://www.postgresql.org/docs/current/static/sql-vacuum.html
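To illustrate the difference on a hypothetical table:
-- marks dead-row space as reusable inside the table; returns nothing to the OS
VACUUM mytable;

-- rewrites the table into a new file and deletes the old one when finished,
-- so it temporarily needs extra disk space roughly the size of the table
VACUUM FULL mytable;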

ASA 8 database.db file size doesn't increase

I'm running an app on an ASA 8 database. At the beginning, with a 'clean' database, the size was 9776 KB. Now, after a while and being populated with 30 tables with lots of data, the size is still 9776 KB.
My question is: is there a ceiling on how much data can be added to this DB, or will this file automatically increase in size when it needs to?
Thanks in advance
Alex
I have been running an app on ASA 8 for years. Even a database that is empty of data has a certain size because of its internal data structures. This is also influenced by the page size of the database.
When you begin adding data, the database may be able to fit the data into the existing pages without needing to extend the file size for some time, but you should soon see the database file grow.
ASA writes new or updated data to a log file in the same directory as the database file (the default option). The changes from this log file are applied to the main database file when the database server executes a so-called checkpoint. Check the server log messages; a message should inform you about checkpoints happening. After a checkpoint, the timestamp of the database file is updated. Check this too.
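If you want to verify this from SQL, SQL Anywhere exposes database properties you can query. A sketch (I believe these property names exist in ASA 8, but check your version's documentation):
SELECT DB_PROPERTY('PageSize') AS page_size_bytes,
       DB_PROPERTY('FileSize') AS db_file_size_in_pages;

-- force a checkpoint so pending changes in the log are applied to the .db file
CHECKPOINT;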

How to set the table space of a sub select in PostgreSQL

I have a rather large INSERT query, and upon running it my slow disks fill up towards 100%, at which point I receive:
Transaction aborted because DBD::Pg::db do failed: ERROR: could not write to hash-join temporary file: No space left on device
Sounds believable. I have a fast drive with lots of space on it that I could use instead of the slow disk, but I don't want to make the fast disk the default tablespace or move the table I'm inserting into to the fast disk. I just want the data blob that is generated as part of the INSERT query to be on the fast disk's tablespace. Is this possible in PostgreSQL, and if so, how?
Version 9.1
You want the temp_tablespaces configuration directive. See the docs:
Temporary files for purposes such as sorting large data sets are also
created in these tablespaces
You must CREATE TABLESPACE the tablespace(s) before using them in an interactive SET temp_tablespaces command.
SET LOCAL temp_tablespaces may be used to set it only for the current transaction.
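Putting it together, a minimal sketch (the tablespace name, path, and tables are hypothetical stand-ins for your real objects):
-- once, as a superuser; the directory must already exist and be writable by postgres
CREATE TABLESPACE fasttemp LOCATION '/mnt/fastdisk/pg_temp';

-- per transaction: sort and hash-join temp files now go to the fast disk
BEGIN;
SET LOCAL temp_tablespaces = 'fasttemp';
INSERT INTO slow_disk_table
SELECT * FROM big_source_table;
COMMIT;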