What the title says -
Msg 9002, Level 17, State 4, Line 1
The transaction log for database 'tempdb' is full. To find out why space in the log cannot be reused, see the log_reuse_wait_desc column in sys.databases
The query in question first pulls some rows from a database on the linked server (matching strings in the server I'm querying from), stores them in a table, then uses this table to pull more matching records from the linked server. This is what I get on the second part.
The basic question is: is there something else hiding in this error message? Which tempdb is it, my tempdb or the linked server's tempdb? I don't think mine can be the problem, as increasing the size doesn't help, the recovery model is simple, autogrowth is on, etc.
You first need to check your SQL Server's tempdb: does the drive that holds tempdb and its log have plenty of free disk space? They might be on two different drives. I would expect such an error to write a message in the SQL Server error log - have you checked there as well, around the time of the problem? You then need to do the same on the remote server, if you have access.
Whether it's tempdb or a user/application database, just because the recovery model is simple doesn't mean that the transaction log won't grow - and take all the disk space! It does make it less likely, but long-running transactions can cause it to "blow".
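If it helps, these two statements show the column the error message points at and how full each log actually is - worth running on your server and, if you can, on the linked server too ('mydb' below is just a placeholder for your own database name):

-- Why log space cannot be reused (LOG_BACKUP, ACTIVE_TRANSACTION, ...)
SELECT name, recovery_model_desc, log_reuse_wait_desc
FROM sys.databases
WHERE name IN ('tempdb', 'mydb');

-- Size and percentage used of every transaction log
DBCC SQLPERF (LOGSPACE);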
Related
I want to log test data into a PostgreSQL database. This will be a new table each day, named something like testtable_16122017.
At the end of each day this table will be backed up to another SQL server. So I don't need synchronisation and all that, since the table will be created once, filled with data and never changed again.
Can this be done with Slony, or do I just have to put a few lines in the test rig software so that at the end of each day the table is pushed to the backup database?
Why all that? There is a LabVIEW PC at the test rig and an SQL server on the LAN, so if anything else breaks down we still keep the test data. For further analysis and backup it will be sent to another server later on. Also, I have to worry less about performance.
Thanks!
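To make the "few lines at the end of each day" idea concrete, something like this is what I have in mind - only a sketch, assuming the backup server is also PostgreSQL and that the file is copied between the two machines outside the database; all table and path names are made up:

-- On the test-rig database: dump the finished daily table to a flat file
COPY testtable_16122017 TO '/tmp/testtable_16122017.csv' WITH (FORMAT csv, HEADER);

-- Move the file to the backup server (network share, scp, ...), then there:
CREATE TABLE testtable_16122017 (LIKE testtable_template INCLUDING ALL);
COPY testtable_16122017 FROM '/tmp/testtable_16122017.csv' WITH (FORMAT csv, HEADER);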
I have a .csv file, comma-delimited (located at C:/). I am using the DB2 LOAD utility to load data present in the CSV file in a DB2 table.
LOAD CLIENT FROM C:\Users\somepath\FileName.csv of del
MODIFIED BY NOCHARDEL COLDEL, insert into SchemaName.TABLE_NAME;
The CSV file has 25 rows. After the utility completed I got an error message for NOCHARDEL. My table has all 25 rows properly loaded. Now, when I try to execute an insert/update/delete statement on any of the tables present in that schema, I am getting the following error.
Lookup Error - DB2 Database Error: ERROR [55039] [IBM][DB2/AIX64] SQL0290N Table space access is not allowed.
Could you please help me figure out whether I am making a mistake or missing a parameter that is causing a lock on the table?
Earlier, while loading a file, a similar situation occurred, where the DBA confirmed that the table space in question was in the "load in progress" state.
Changes generated by the DB2 LOAD utility are not logged (one of the side-effects of its high performance). If the database crashes immediately after the load it will be impossible to recover the table that was loaded by replaying log records, because there are no such records. For this reason the tablespace containing the loaded table is automatically placed in the BACKUP PENDING mode, forcing you to take a backup of that tablespace or the entire database to ensure it is fully recoverable.
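For reference, clearing BACKUP PENDING is just an ordinary tablespace (or full database) backup; from the command line it would look roughly like this (database, tablespace and path names are placeholders):

BACKUP DATABASE mydb TABLESPACE (ts_data) ONLINE TO /db2/backups;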
There are options that you can specify for the LOAD command that can help you avoid this situation in the future:
NONRECOVERABLE -- this option does not place the tablespace into the BACKUP PENDING mode, but, as its name implies, the table you're loading to becomes non-recoverable in case of a crash, and your only option in that situation will be to drop and re-create the table.
COPY YES -- this option saves a copy of the data being loaded, which can be used during roll-forward recovery to reproduce the LOAD, so the tablespace is not left in the BACKUP PENDING mode.
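Applied to the command from the question, those options would slot in like this (untested sketch; the COPY target directory is made up and must be accessible to the server):

LOAD CLIENT FROM C:\Users\somepath\FileName.csv of del
MODIFIED BY NOCHARDEL COLDEL, insert into SchemaName.TABLE_NAME NONRECOVERABLE;

LOAD CLIENT FROM C:\Users\somepath\FileName.csv of del
MODIFIED BY NOCHARDEL COLDEL, insert into SchemaName.TABLE_NAME COPY YES TO C:\db2copies;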
If you are only loading 25 records, I suggest you use the IMPORT utility instead -- it does not have these restrictions because it is fully logged (at the price of lower performance, which for 25 records won't matter).
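The IMPORT equivalent of the command above would be along these lines (again, just a sketch):

IMPORT FROM C:\Users\somepath\FileName.csv of del
MODIFIED BY NOCHARDEL COLDEL, insert into SchemaName.TABLE_NAME;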
Thanks @mustaccio. I had 60 million rows to insert. I was using 25 as a sample to check the outcome.
To add another point, we later came to know that this is a known DB2 bug that keeps the load in the "in progress" state (DB2 is unable to acknowledge that the load has completed and the session remains open indefinitely) and places the table space in the backup pending state.
Recovery is the only option to release the table space once it is in pending state.
This issue is fixed in fix pack 10 as per the DB2 team (we are yet to deploy and test it). Meanwhile, the NONRECOVERABLE keyword is working fine for us.
The reason why your table is stuck in the LOAD IN PROGRESS state is the NOCHARDEL error happening at the end of the LOAD.
Have you tried restarting the database? This should reinitialize all table spaces and remove any rogue states.
http://www-01.ibm.com/support/docview.wss?uid=swg1IC65395
http://www-01.ibm.com/support/docview.wss?uid=swg21427102
We need an audit log in the product that we are creating. We use SQL Server 2008 R2. I learned that the LDF file keeps a complete log of all transactions that were made*.
I've found ApexSQL Log; this tool analyses the LDF file and provides a GUI. It's a great demonstration of what's possible, but it's expensive. More info: http://www.apexsql.com/sql_tools_log.aspx
Do you know of other programs that can analyse LDF files? Or perhaps other methods to provide audit-trail functionality? I know that it's possible to create triggers, but if it isn't necessary to add things to my database schema then I would rather not do it.
*Only if you select the full recovery model.
How about the new Change Data Capture (CDC) functionality in R2? Doesn't that serve your purpose?
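If CDC does fit (it needs an edition that supports it), enabling it is only a couple of system procedure calls - roughly like this, with the database, schema and table names being examples:

USE MyAppDb;
EXEC sys.sp_cdc_enable_db;
EXEC sys.sp_cdc_enable_table
    @source_schema = N'dbo',
    @source_name = N'Orders',
    @role_name = NULL;
-- Changes are then exposed in cdc.dbo_Orders_CT and through cdc.fn_cdc_get_all_changes_dbo_Orders().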
When it comes to the information stored in an LDF file, make sure to form a full log chain. A log chain is a continuous sequence of transaction log backups: it starts with a full database backup, followed by all subsequent log backups up through the auditing point. If the chain becomes broken, only the transactions in the logs up to the last backup before the missing one can be shown with full information (e.g. a schema and object name, or a row history).
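In T-SQL terms, keeping that chain intact is simply a full backup followed by uninterrupted log backups, for example (database name and paths are placeholders):

-- Start (or restart) the log chain
BACKUP DATABASE MyAppDb TO DISK = N'D:\backup\MyAppDb_full.bak';
-- Then back up the log regularly, and never switch to the SIMPLE recovery model in between
BACKUP LOG MyAppDb TO DISK = N'D:\backup\MyAppDb_log_01.trn';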
Unlike INSERT and DELETE operations, which are fully logged in the LDF files, UPDATE operations are logged minimally: SQL Server doesn't log the complete before and after row states, only the incremental change that occurred to the row. For example, if the word "log" was updated to "blog", SQL Server will, in the general case, only log the addition of the letter "b" at index 0. This is enough for its purpose of ensuring ACID, but not enough to easily show the before and after states of the row. So, in order to understand what change really occurred, you have to reconstruct the context in which it happened from the rest of the transaction log and/or backup and online database data.
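You can see this for yourself with the undocumented, unsupported fn_dblog function, which exposes the raw log records of the current database (best tried on a test server):

SELECT [Current LSN], Operation, Context, AllocUnitName, [Transaction ID]
FROM fn_dblog(NULL, NULL)
WHERE Operation = N'LOP_MODIFY_ROW';   -- the typical record for an in-place UPDATE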
I know this has been asked here, but my question is slightly different. When the DataSet was designed with the disconnected principle in mind, what feature was provided to handle unexpected termination of the application, say a power failure, a Windows hang, or a system exception leading to a restart? Say the user has entered some 100 rows and they are modified in the DataSet alone. Usually the DataSet is written back to the database when the application closes or at timed intervals.
In the old days, when programming in VB 6.0, all interaction took place directly with the database, so each successful transaction committed itself automatically. How can that be done using DataSets?
DataSets are never meant for direct access to the database; they are a disconnected model only. There is no intent that they be able to recover from machine failures.
If you want to work live against the database you need to use DataReaders and issue DbCommands against the database directly for changes. This will, of course, increase the load on your database server.
You have to balance the two for most applications. If you know a user just entered vital data as a new row, execute an insert command to the database, and put a copy in your local cached DataSet. Then your local queries can run against the disconnected data, and inserts are stored immediately.
A DataSet can be serialized very easily, so you could implement your own regular backup to disk by using serialization of the DataSet to the filesystem. This will give you some protection, but you will have to write your own code to check for any data that your application may have saved to disk previously and so on...
You could also ignore DataSets and use SqlDataReaders and SqlCommands for the same sort of 'direct access to the database' you are describing.
I have a Sybase SQL Anywhere 11.0.1 database that I am using to sync with an Oracle Consolidated Database.
I know that the SQL Anywhere database keeps track of all of the changes that are made to it so that it knows what to synchronize with the consolidated database. My question is whether or not there is a SQL command that will tell you if the database has changes to sync.
I have a mobile application and I want to show a little flag to the user anytime they have made changes to the handheld that need to be synced. I could just create another table to track that stuff myself but I would much rather just ping the database and ask it if it has changes that need to be synced.
There's nothing automatic to tell you that there is data to synchronize. In addition to Ben's suggestion, another idea would be to query the SYS.SYSSYNC table at the remote database to get an idea of whether there might be changes. The following statement returns a result set that shows a simple status of your last synchronization:
select ss.site_name, sp.publication_name, ss.log_sent, ss.progress
from sys.syssync ss, sys.syspublication sp
where ss.publication_id = sp.publication_id
and ss.publication_id is not null
and ss.site_name is not null
If progress < log_sent, then the status of the last synchronization is unknown. The last upload may or may not have been applied at the consolidated, because the upload was sent, but no response was received from the MobiLink server. In this case, suggesting a synch isn't a bad idea.
If progress = log_sent, then the last synch was successful. Knowing this, you could check the value of db_property('CurrentRedoPos'), which will return the current log offset of the remote database. If this value is significantly higher than the progress value, there have been many operations applied to the database since the last synchronization, so there's a good chance that there is data to synchronize. There are lots of reasons why even a large difference in progress and db_property('CurrentRedoPos') could result in no actual data needing synchronization.
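Putting those two checks together, something like this (untested sketch, SQL Anywhere syntax) gives a rough "there may be something to upload" signal:

select ss.site_name,
       ss.progress,
       ss.log_sent,
       cast(db_property('CurrentRedoPos') as bigint) as current_redo_pos,
       cast(db_property('CurrentRedoPos') as bigint) - ss.progress as log_written_since_sync
from sys.syssync ss
where ss.publication_id is not null
and ss.site_name is not null

A large log_written_since_sync suggests, but does not prove, that there are unsynchronized changes, for the reasons below.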
The download from the ML Server is applied by dbmlsync after the progress value at the remote is updated by dbmlsync when the upload is confirmed by the ML Server. Operations applied in the download by dbmlsync are not synchronized back to the ML Server, so the entire offset range could just be the last download that was applied. This could be worked around by tracking the current log offset in the sp_hook_dbmlsync_end hook when the exit code value in the #hook_dict table is zero (a rough sketch follows below). That would tell you the log offset of the database after the download was applied, and you could then compare the saved value with the current log offset.
All the operations in the transaction log could be operations on tables that are not synchronized.
All the operations in the transaction log could have been rolled back.
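A rough sketch of that hook-based workaround (the tracking table is made up, and the exact #hook_dict row names should be checked against the dbmlsync event hook documentation):

create table sync_progress_tracking (
    last_offset_after_sync bigint
);

create procedure sp_hook_dbmlsync_end()
begin
    -- Only record the offset if the synchronization succeeded (exit code 0)
    if exists (select 1 from #hook_dict
               where name = 'exit code' and value = '0') then
        delete from sync_progress_tracking;
        insert into sync_progress_tracking
            select cast(db_property('CurrentRedoPos') as bigint);
    end if;
end;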
My solution is not ideal. Tracking the changes to synchronized tables yourself is the best solution, but I thought I could offer an alternative that might be OK for your needs, with the advantage that you are not triggering an extra action on every operation performed on a synchronized table.
The mobile database doesn't keep track of when the last sync was; the MobiLink server keeps all of that information in the MobiLink tables of the consolidated database.
Since synchronization only transfers necessary information, you could simply initiate a sync. If there's nothing to sync, then very little data will be used by your application.
As a side note, SQL Anywhere has its own SO clone which is monitored by Sybase engineers. If anyone knows for sure, it'll be them.
As of SQL Anywhere 17, SAP PM maps to a local Sybase database that contains a TTRANSACTION_UPLOAD table, so to determine if a synchronization is necessary we simply query this table to see if it has any records that need to be sync'd to the HANA consolidation database.
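So on such a system the "anything to sync?" check reduces to a count against that table, something along the lines of:

-- non-zero means there are records waiting to be uploaded to the consolidated database
select count(*) as pending_uploads
from TTRANSACTION_UPLOAD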