Sybase SQL Anywhere: check if synchronization is needed?

I have a Sybase SQL Anywhere 11.0.1 database that I am using to sync with an Oracle Consolidated Database.
I know that the SQL Anywhere database keeps track of all of the changes that are made to it so that it knows what to synchronize with the consolidated database. My question is whether or not there is a SQL command that will tell you if the database has changes to sync.
I have a mobile application and I want to show a little flag to the user anytime they have made changes to the handheld that need to be synced. I could just create another table to track that stuff myself but I would much rather just ping the database and ask it if it has changes that need to be synced.

There's nothing automatic to tell you that there is data to synchronize. In addition to Ben's suggestion, another idea would be to query the SYS.SYSSYNC table at the remote database to get an idea of whether there might be changes. The following statement returns a result set that shows a simple status of your last synchronization:
select ss.site_name, sp.publication_name, ss.log_sent,ss.progress
from sys.syssync ss, sys.syspublication sp
where ss.publication_id = sp.publication_id
and ss.publication_id is not null
and ss.site_name is not null
If progress < log_sent, then the status of the last synchronization is unknown. The last upload may or may not have been applied at the consolidated database, because the upload was sent but no response was received from the MobiLink server. In this case, suggesting a sync isn't a bad idea.
If progress = log_sent, then the last sync was successful. Knowing this, you could check the value of db_property('CurrentRedoPos'), which will return the current log offset of the remote database. If this value is significantly higher than the progress value, there have been many operations applied to the database since the last synchronization, so there's a good chance that there is data to synchronize. There are, however, several reasons why even a large difference between progress and db_property('CurrentRedoPos') could mean no actual data needs synchronizing:
The download from the MobiLink server is applied by dbmlsync only after dbmlsync updates the progress value at the remote, which happens when the MobiLink server confirms the upload. Operations applied in the download by dbmlsync are not synchronized back to the MobiLink server, so the entire offset range could simply be the last download being applied. You can work around this by recording the current log offset in an sp_hook_dbmlsync_end hook when the 'exit code' value in the #hook_dict table is zero (a rough sketch follows below). That gives you the log offset of the database just after the download was applied, and you can compare that saved value with the current log offset instead.
All the operations in the transaction log could be operations on tables that are not synchronized.
All the operations in the transaction log could have been rolled back.
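As a rough sketch of the checks described above, assuming a single publication; the tracking table name is illustrative, and the hook uses the #hook_dict pattern described in the SQL Anywhere documentation:
-- Compare the confirmed synchronization progress with the current end of the
-- transaction log. A large gap only suggests there may be data to upload,
-- for the reasons listed above.
SELECT ss.site_name,
       sp.publication_name,
       ss.progress,
       db_property('CurrentRedoPos') AS current_redo_pos
  FROM sys.syssync ss
  JOIN sys.syspublication sp ON ss.publication_id = sp.publication_id
 WHERE ss.site_name IS NOT NULL;

-- Record the log offset after each successful synchronization, so offsets
-- produced by applying a download are not mistaken for local changes.
-- The table name sync_offset_track is an illustrative assumption.
CREATE TABLE sync_offset_track (
    last_offset  VARCHAR(40),
    recorded_at  TIMESTAMP DEFAULT CURRENT TIMESTAMP
);

CREATE PROCEDURE sp_hook_dbmlsync_end()
BEGIN
    -- #hook_dict is supplied by dbmlsync; 'exit code' is '0' on success.
    IF (SELECT value FROM #hook_dict WHERE name = 'exit code') = '0' THEN
        INSERT INTO sync_offset_track (last_offset)
        VALUES (db_property('CurrentRedoPos'));
    END IF;
END;
Comparing the most recently saved offset with db_property('CurrentRedoPos') then gives a better hint than comparing against progress alone.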
My solution is not ideal. Tracking the changes to synchronized tables yourself is the best solution, but I thought I could offer an alternative that might be OK for your needs, with the advantage that you are not triggering an extra action on every operation performed on a synchronized table.

The mobile database doesn't keep track of when the last sync was; the MobiLink server keeps all of that information in the MobiLink tables of the consolidated database.
Since synchronization only transfers necessary information, you could simply initiate a sync. If there's nothing to sync, then very little data will be used by your application.
As a side note, SQL Anywhere has its own SO clone which is monitored by Sybase engineers. If anyone knows for sure, it'll be them.

As of SQL Anywhere 17, SAP PM maps to a local Sybase database that contains a TTRANSACTION_UPLOAD table, so to determine whether a synchronization is necessary we simply query this table to see if it has any records that need to be synchronized to the HANA consolidated database.
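A minimal check along those lines; the answer doesn't give column names, so a plain row count is the safest form:
-- Any rows waiting in the upload table suggest a synchronization is needed.
SELECT COUNT(*) AS pending_rows FROM TTRANSACTION_UPLOAD;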

Related

How to rollback an update in PostgreSQL

While editing some records in my PostgreSQL database using SQL in the terminal (on Ubuntu Lucid), I made a wrong update.
Instead of -
update mytable set start_time='13:06:00' where id=123;
I typed -
update mytable set start_time='13:06:00';
So, all records are now having the same start_time value.
Is there a way to undo this change? There are some 500+ records in the table, and I do not know what the start_time value for each record was.
Is it lost forever?
I'm assuming it was a transaction that's already committed? If so, that's what "commit" means: you can't go back.
Some data may be recoverable if you're lucky. Stop the database NOW.
Here's an answer I wrote on the same topic earlier. I hope it's helpful.
This might be too: Recover deleted rows in PostgreSQL.
Unless the data is absolutely critical, just restore from backups, it'll be lots easier and less painful. If you didn't have backups, consider yourself soundly thwacked.
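For next time, a small sketch of guarding a risky update with an explicit transaction, so the effect can be inspected and rolled back before it becomes permanent:
-- Nothing is permanent until COMMIT; ROLLBACK undoes the whole block.
BEGIN;
UPDATE mytable SET start_time='13:06:00' WHERE id=123;
SELECT id, start_time FROM mytable WHERE id=123;  -- verify the effect
ROLLBACK;  -- or COMMIT; once the result looks right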
If you catch the mistake and immediately bring down any applications using the database and take it offline, you can potentially use Point-in-Time Recovery (PITR) to replay your Write Ahead Log (WAL) files up to, but not including, the moment when the errant transaction was made. This would return the database to the state it was in prior, thus effectively 'undoing' that transaction.
As an approach for a production application database it has a number of obvious limitations, but there are circumstances in which PITR may be the best option available, especially when critical data loss has occurred. However, it is of no value if archiving was not already configured before the corruption event.
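As a very rough sketch of the settings involved (the exact file and parameter names depend on the PostgreSQL version; older releases put these in recovery.conf, and the archive path and target time below are purely illustrative):
# Replay archived WAL up to just before the bad transaction.
restore_command = 'cp /mnt/wal_archive/%f %p'
recovery_target_time = '2012-01-03 13:05:59'
recovery_target_inclusive = 'false'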
https://www.postgresql.org/docs/current/static/continuous-archiving.html
Similar capabilities exist with other relational database engines.

@@DBTS and BinaryFormatter

I have written a client that uses the SyncFramework to coordinate the consolidating of data in a hub and spoke architecture warehousing application.
When the sync transactions are processed, the Sync Framework updates a specified anchor table with the value of @@DBTS, indicating when the last sync was processed and uploaded to the server.
I would like to offer as part of this scenario the ability to allow one client to relay the data on behalf of one of the others.
This would be used in cases where one client may not be able to make contact with the warehouse; its database could be retrieved and synchronized by a client that does have access to the warehouse (exchanged as a database backup on DVD or USB flash media).
The problem with this theory is that without the SentAnchor being set on the client database when the snapshot is retrieved, the next time this process is performed, the whole database is replicated a second time.
What I would like to do, is when I grab a snapshot of the client database, update its SentAnchor so the next time I grab a copy the sync framework will know its SentAnchor as if it had actually communicated with the server.
So my first impulse was to simply update the anchor table and set the SentAnchor to @@DBTS; however, the problem with that is that the Sync Framework inserts the same value in a different format: it runs it through the BinaryFormatter first.
Same intrinsic value, different headers, so when I try just updating with the value of @@DBTS, the Sync Framework errors out trying to convert it back from the format it expects to have written itself.
What I would like to do is set, via a T-SQL statement, the same format for @@DBTS that the Sync Framework uses; I do not want to have to write an application to execute a single SQL statement if this can be done in the statement already being executed to create the backup.
Something like...
USE MyDB
GO
BACKUP DATABASE MyDb
TO DISK = 'F:\01032012MyDb.bak'
WITH FORMAT,
NAME = '20120103 Full Backup of MyDb'
GO
UPDATE Anchor SET SentAnchor = @@DBTS
GO
Essentially replacing @@DBTS above with whatever is needed to get the same value into the correct format that the Sync Framework will use.
Servers are 2008R2 Express.
The problem with setting the SentAnchor is that you might actually miss uploading changes. By setting the value, you have effectively told the Sync Framework that it has sent changes up to that value of @@DBTS.
I suggest you explore using the SqlSyncProvider instead.

How can I audit with the Microsoft SQL Server LDF file?

We need an audit log in the product that we are creating. We use SQL Server 2008 R2. I learned that the LDF file keeps a complete log of all transactions that were made*.
I've found ApexSQL Log; this tool analyses the LDF file and provides a GUI. It's a great demonstration of what's possible, but it's expensive. More info: http://www.apexsql.com/sql_tools_log.aspx
Do you know of other programs that can analyse LDF files? Or perhaps other methods to provide audit-trail functionality? I know that it's possible to create triggers, but if it isn't necessary to add things to my database schema then I would rather not do it.
*Only if you select the full recovery model.
How about the new Change Data Capture (CDC) functionality in R2? Doesn't that serve your purpose?
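If CDC fits, a rough sketch of enabling it (on 2008 R2 it requires an edition that supports CDC; the database, schema, and table names are illustrative):
-- Enable Change Data Capture for a database and one table.
USE MyAuditedDb;
GO
EXEC sys.sp_cdc_enable_db;
GO
EXEC sys.sp_cdc_enable_table
    @source_schema = N'dbo',
    @source_name   = N'Orders',
    @role_name     = NULL;
GO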
When it comes to the information stored in an LDF file, make sure to form a full log chain. A log chain is a continuous sequence of transaction log backups: it starts with a full database backup, followed by all subsequent log backups up through the auditing point. If it becomes broken, only the transactions in the logs up to the last backup before the missing one can be shown with full information (e.g. a schema and object name, or a row history).
Unlike INSERT and DELETE operations, which are fully logged in the LDF files, UPDATE operations are logged minimally: SQL Server doesn't log complete before and after row states, only the incremental change that occurred to the row. For example, if the word "log" was updated to "blog", SQL Server will, in the general case, only log the addition of the letter "b" at index 0. This is enough for its purpose of ensuring ACID, but not enough to easily show the before and after states of the row. So, in order to understand what change really occurred, you have to reconstruct the context in which the change occurred from the rest of the transaction log and/or backup and online database data.
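A minimal sketch of keeping that log chain unbroken (the database name and paths are illustrative):
-- A full backup starts the chain; regular log backups continue it.
BACKUP DATABASE MyAuditedDb TO DISK = 'F:\MyAuditedDb_full.bak';
GO
BACKUP LOG MyAuditedDb TO DISK = 'F:\MyAuditedDb_log_001.trn';
GO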

What happens to my dataset in case of unexpected failure?

I know this has been asked here, but my question is slightly different. When the DataSet was designed with the disconnected principle in mind, what feature was provided to handle unexpected termination of the application, say a power failure, a Windows hang, or a system exception leading to a restart? Say the user has entered some 100 rows and the changes exist only in the DataSet. Usually the DataSet is written back to the database at application close or at regular intervals.
In the old days, programming with VB 6.0, all interaction took place directly with the database, so each successful transaction committed itself automatically. How can that be done using DataSets?
DataSets are not for direct access to the database; they are a disconnected model only. There is no intent that they be able to recover from machine failures.
If you want to work live against the database you need to use DataReaders and issue DbCommands against the database live for changes. This of course will increase your load on the database server though.
You have to balance the two for most applications. If you know a user just entered vital data as a new row, execute an insert command to the database, and put a copy in your local cached DataSet. Then your local queries can run against the disconnected data, and inserts are stored immediately.
A DataSet can be serialized very easily, so you could implement your own regular backup to disk by using serialization of the DataSet to the filesystem. This will give you some protection, but you will have to write your own code to check for any data that your application may have saved to disk previously and so on...
You could also ignore DataSets and use SqlDataReaders and SqlCommands for the same sort of 'direct access to the database' you are describing.

Database last updated?

I'm working with SQL 2000 and I need to determine which of these databases are actually being used.
Is there a SQL script I can use to tell me the last time a database was updated? Read? Etc.?
I Googled it, but came up empty.
Edit: the following targets the issue of finding, after the fact, the last access date. With regard to figuring out who is using which databases, this can definitively be monitored with the right filters in SQL Profiler. Beware, however, that Profiler traces can get quite big (and hence slow/hard to analyze) when the filters are not adequate.
Changes to the database schema, i.e. addition of table, columns, triggers and other such objects typically leaves "dated" tracks in the system tables/views (can provide more detail about that if need be).
However, unless the data itself includes timestamps of some sort, there are typically very few sure-fire ways of knowing when data was changed, unless the recovery model keeps all such changes in the log. In that case you need some tools to "decompile" the log data...
With regards to detecting "read" activity... A tough one. There may be some computer-forensic like tricks, but again, no easy solution I'm afraid (beyond the ability to see in server activity the very last query for all still active connections; obviously a very transient thing ;-) )
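As an illustration of the "dated" tracks mentioned above, on SQL 2000 the object creation dates are easy to pull (sysobjects has no modification date, so this only shows when objects were added):
-- Most recently created user tables, procedures, triggers, and views.
SELECT name, xtype, crdate
FROM sysobjects
WHERE xtype IN ('U', 'P', 'TR', 'V')
ORDER BY crdate DESC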
I typically run the profiler if I suspect the database is actually used. If there is no activity, then simply set it to read-only or offline.
You can use a transaction log reader to check when data in a database was last modified.
With SQL 2000, I do not know of a way to know when the data was read.
What you can do is put a trigger on login to the database, track when the login is successful, and track associated variables to find out who or what application is using the DB.
If your database is fully logged, create a new transaction log backup and check its size. The log backup will have a small, fixed length if no changes were made to the database since the previous transaction log backup, and it will be larger if there were changes.
This is not a very exact method, but it can be easily checked, and might work for you.
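A rough sketch of that check; the database name and path are illustrative, and msdb records the size of each backup so the most recent log backups can be compared:
-- Take a log backup, then compare recent log backup sizes recorded in msdb.
BACKUP LOG MyOldDb TO DISK = 'F:\MyOldDb_check.trn'
GO
SELECT TOP 5 database_name, backup_finish_date, backup_size
FROM msdb.dbo.backupset
WHERE database_name = 'MyOldDb' AND type = 'L'
ORDER BY backup_finish_date DESC
GO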