Google Cloud SQL wrong instance size - google-cloud-sql

A few days ago the reported instance size in Cloud SQL got stuck. There was a lot of data (~164 GB), but two days ago the database was thoroughly maintained and cleaned, so it now takes up much less space (~10-20 GB, I think). The Developers Console still shows "163.9 GB of 250 GB". I don't know how to force a size update, and restarting the instance did not help.
There is also something strange with backups on the "Operations" tab:
Aug 17, 2014 7:10:07 AM Backup Done An unknown error occurred
On "Overview" tab the last backup in the list is from August 14. So the backups are broken.
May be, there is a hidden link between these two.
Any thoughts?

Those are probably unrelated.
See: MySQL InnoDB not releasing disk space after deleting data rows from table.
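In short, deleting rows does not shrink InnoDB's files on disk. If innodb_file_per_table is enabled, rebuilding the large tables hands the freed space back to the filesystem; a minimal sketch, with the table name as a placeholder (the console figure may still lag behind the real usage):
-- For InnoDB, OPTIMIZE TABLE is mapped to a table rebuild plus analyze;
-- with innodb_file_per_table the per-table .ibd file is rewritten at its real size.
OPTIMIZE TABLE my_big_table;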
You will need to contact Cloud SQL support directly about the backup issue. Email them at cloud-sql@google.com and include the instance name.

Related

Is it possible to take an incremental backup of the database in OpenERP?

I take a daily backup of the PostgreSQL database in OpenERP using a cron job. Each day's dump comes to around 50 MB. Taking a full 50 MB dump every day will consume a large amount of hard disk space, and I don't want that to happen. I want to take an incremental database backup every day instead. Can anyone help me? Thanks in advance.
Check the format of the backups to ensure OpenERP is compressing them. If not, you can compress them manually, or use pg_dump or pgAdmin III to do the backups.
If you want point-in-time backups then you will need to get into PostgreSQL checkpoints and WAL log shipping, and I would think carefully before going down that road.
The other thing to note is that OpenERP stores attachments in the database, so if you have a lot of attached documents or emails with attachments, these will sit in the ir_attachment table and cause your backups to grow quickly. You can check its size as shown below.
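A quick way to see how much of the database the attachments account for, assuming the standard OpenERP table name ir_attachment:
-- Total size of the attachments table, including its indexes and TOAST data
SELECT pg_size_pretty(pg_total_relation_size('ir_attachment'));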

SQL Server 2008 R2 log files filled up the drive

So some of us devs are starting to take over the management of some of our SQL Server boxes as we upgrade to SQL Server 2008 R2. In the past, we've manually reduced the log file sizes by using
USE [databaseName]
GO
DBCC SHRINKFILE('databaseName_log', 1)
BACKUP LOG databaseName WITH TRUNCATE_ONLY
DBCC SHRINKFILE('databaseName_log', 1)
and I'm sure you all know that TRUNCATE_ONLY has been deprecated.
So the solution I've found so far is to set the recovery model to SIMPLE, shrink the log, then set it back... however, this one got away from us before we could get there.
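For reference, that sequence looks roughly like this (database name, log file name, and backup path are placeholders; switching to SIMPLE breaks the log backup chain and isn't possible while the database is mirrored):
USE [databaseName];
ALTER DATABASE [databaseName] SET RECOVERY SIMPLE;
-- Shrink the log file; the target size is in MB
DBCC SHRINKFILE (N'databaseName_log', 1);
ALTER DATABASE [databaseName] SET RECOVERY FULL;
-- Take a full backup afterwards to restart the log backup chain
BACKUP DATABASE [databaseName] TO DISK = N'D:\Backups\databaseName.bak';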
Now we've got a full disk, and the mirroring that is going on is stuck in a half-completed, constantly erroring state where we can't alter any databases. We can't even open half of them in Object Explorer.
From reading about it, the way to stop this from happening in the future is to have a maintenance plan set up (whoops :/ ), but while we can create one, we can't start it with no disk space and SQL Server stuck in its erroring state (Event Viewer shows it recording roughly 5 errors per second... this has been going on since last night).
Anyone have any experience with this?
So you've kind of got a perfect storm of bad circumstances here in that you've already reached the point where SQL Server is unable to start. Normally at this point it's necessary to detach a database and move it to free space, but if you're unable to do that you're going to have to start breaking things and rebuilding.
If you have a mirror and a backup that is up to date, you're going to need to blast one unlucky database on the disk to get the instance back online. Once you have enough space, take emergency measures to break any mirrors necessary, get the log files back to a manageable size, and shrink them (a sketch follows).
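Breaking the mirror and reclaiming log space might look roughly like this once the instance is responsive again (database, file, and path names are placeholders; only do this with verified backups in hand):
USE [databaseName];
-- Remove mirroring for the affected database so the log can be managed
ALTER DATABASE [databaseName] SET PARTNER OFF;
-- Back up the log so its inactive portion can be released, then shrink the file (target in MB)
BACKUP LOG [databaseName] TO DISK = N'D:\Backups\databaseName_log.trn';
DBCC SHRINKFILE (N'databaseName_log', 1024);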
The above is very much emergency recovery and you've got to triple check that you have backups, transaction log backups, and logs anywhere you can so you don't lose any data.
Long term, to manage the mirror you need to make sure that your mirrors remain synchronized, that full and transaction log backups are being taken, and potentially reconfigure each database on the instance with a maximum log file size so that the sum of all log files does not exceed the available volume space (see the sketch below).
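Capping a log file's growth might look like this (database name, logical file name, and size limit are placeholders):
-- Limit the log file to 10 GB so a runaway log cannot fill the volume
ALTER DATABASE [databaseName]
MODIFY FILE (NAME = N'databaseName_log', MAXSIZE = 10240MB);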
Also, I would double check that your system databases are not on the same volume as your database data and log files. That should help with being able to start the instance when you have a full volume somewhere.
Bear in mind, if you are having to shrink your log files on a regular basis then there's already a problem that needs to be addressed.
Update: If everything is on the C: drive, then consider reducing the size of the page file to get enough space to bring the instance back online. I'm not sure what your setup is here.

Do not understand the file structure of pgsql

Can someone provide a bit of clarification?
I understand that the /base folder contains a data folder for each database. In pgAdmin, I have 13 databases listed under 1 server. In the /base folder, there are 14 folders, so I assumed that is 1 per database and 1 for the server, equalling 14.
I do not know how to tell which folder is for which database. However, only one has a lot of data. When I search for large files on my system, this is displayed:
16M: /var/lib/pgsql/9.2/data/base/18642/18652
13M: /var/lib/pgsql/9.2/data/base/18642/18751
1.0G: /var/lib/pgsql/9.2/data/base/21719/21804
12M: /var/lib/pgsql/9.2/data/base/21719/21806
15M: /var/lib/pgsql/9.2/data/base/21719/21750
20M: /var/lib/pgsql/9.2/data/base/21719/21837
118M: /var/lib/pgsql/9.2/data/base/21719/21834
Now, if this one (21719) is actually the only running database used by staff, when I archive it with pg_dump, the size of the dump is approximately 6 GB. The size of the dump and the data listed above do not match.
Can someone shed some light on my confusion?
Thanks a bunch.
This came out of trying to find out why almost 700 GB of drive space is being used when the only things on the drive are PostgreSQL and an occasional runaway VNC error log that eats up space (I figured out how to solve that). However, over 60% of my drive is still used, I cannot account for it, and that is how I came across these data sizes in PostgreSQL.
Thanks for any insight that can be provided on the PostgreSQL data files.
I do not know how to tell which folder is for which database
The folder name is the OID of the database, which you can get with the following SQL query, along with each database's size according to the SQL engine:
select oid, datname, pg_database_size(datname) from pg_database;
If there are 13 databases and 14 folders, the additional folder is probably the pgsql_tmp directory used for temporary files. pgAdmin's concept of a server does not come into play inside a particular server's data directory.
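If you also want to map one of the large files back to a table or index, connect to the database whose OID matches the folder name and look it up in pg_class; a sketch (available in the 9.x series):
-- Largest relations in the current database, with the on-disk file each one uses
select relname, pg_relation_filepath(oid) as file_path,
       pg_size_pretty(pg_relation_size(oid)) as size
from pg_class
order by pg_relation_size(oid) desc
limit 10;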
Also, as said in the comments, the dump size may be greater than the size on disk because the database compresses some data on disk while a plain dump does not. It can also be smaller, since a dump doesn't contain any index data. On the whole, knowing the size on disk does not help much in predicting the size of the SQL dump, and vice versa.

MongoDB repair/compact/index infinite-like behaviour

From the start,
I have a collection with about 51 million records. I was indexing one of the fields when the reported progress started to climb above 100% (it reached 800% complete), so I cancelled it and figured I must have some database corruption.
So I validated the collections and found they were OK. Nonetheless I tried a compact and a repair, and this is what I found in the temp folder (for the repair) or my db folder (for the compact):
I used to have 'collection.0' through 'collection.14', but checking after 14 hours I found it had counted up to 'collection.64' and I had to cancel it. Based on my previous experience, it seems highly unusual for this to be normal behaviour.
Previously the database size was 20 GB, and during this compact it increased to well over 100 GB because of this.
What could be wrong and how would I fix my database?
In the console, each one shows:
Allocating new datafile ... Size: 2047MB, took 107 seconds
(for each of the additional files, 15 through 64)

TFS database size - version control

I have TFS installed on a single server and am running out of space on the disk. (We've been using the instance for about 2 years now.)
Looking at the tables in SQL Server, the culprit seems to be the tbl_Content table, which is at 70 GB. If I do a get of the entire source tree for all projects, it is only about 8 GB of data.
Is this just all the file histories? A 10:1 ratio seems like a lot for just histories... since I would think the deltas would be very small.
Does anyone know if that is a reasonable size given 8 GB of source (and 2 years of activity)? And if not, what should I look at to 'fix' it?
Thanks
I can't help with the ratio question at the moment, sorry. For a short-term fix you might check to see if there is any space within the DB files that can be freed up. You may have done this already, but if not:
SELECT name, size/128.0 - CAST(FILEPROPERTY(name, 'SpaceUsed') AS int)/128.0 AS AvailableSpaceInMB
FROM sys.database_files;
If the statement above shows space you want to recover, you can look into a one-time DBCC SHRINKDATABASE or DBCC SHRINKFILE (a sketch follows), along with scheduling a routine SQL maintenance plan that may include defragmenting the database.
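A one-time shrink might look roughly like this; the database name below is a placeholder for your TFS collection database, and the number is the percentage of free space to leave behind:
-- Shrink all files in the database, leaving 10% free space for future growth
DBCC SHRINKDATABASE (N'Tfs_DefaultCollection', 10);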
DBCC SHRINKDATABASE and DBCC SHRINKFILE aren't things you should do on a regular basis, because SQL Server needs some "swap" space to move things around for optimal performance. So neither should be relied upon as your long term fix, and both could cause some noticeable performance degradation of TFS response times.
JB
Are you seeing data growth every day, even when no activity occurs on the system? If the answer is yes, are you storing any binaries outside of the 8GB of source somewhere?
The reason I ask is that if TFS is unable to calculate a delta, or if a file exceeds the size limit for delta generation, TFS stores the entire binary file again. I don't have the link with me, but I have it on my work machine; it describes this scenario and how to fix it, in case this is the cause of your problems.