by Pavel Císař

Firebird 3 introduces a pseudocolumn named RDB$RECORD_VERSION that can be retrieved for a record in any table or single-table view, in the same way as RDB$DB_KEY returns the record's physical key. The value returned is the number of the transaction that created the record version, as viewed from the current transaction.

Note that the monitoring (MON$) tables always return NULL for this column.

A transaction in snapshot (concurrency) isolation mode produces stable RDB$RECORD_VERSION values from one query to the next, whereas a transaction in read committed isolation mode might return different values if concurrent transactions were committed between subsequent queries.

Changes within the current transaction are reflected in subsequent queries. For these records, RDB$RECORD_VERSION returns the ID of the current transaction, rather than the ID of the transaction that committed the record.
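
A minimal sketch of this behaviour, using the fdb driver and the standard employee.fdb sample database (the same assumptions as the larger example further below):

import fdb

con = fdb.connect(dsn="localhost:employee.fdb", user="sysdba", password="masterkey")
cur = con.cursor()

# The pseudo-column is selected in the same way as RDB$DB_KEY
cur.execute("select rdb$db_key, rdb$record_version, COUNTRY from COUNTRY")
print(cur.fetchone())

# ID of the current transaction, for comparison
cur.execute("select current_transaction from rdb$database")
tra_id = cur.fetchone()[0]

# After a change made by this transaction, RDB$RECORD_VERSION reports
# the ID of the current transaction
cur.execute("update COUNTRY set CURRENCY = CURRENCY where COUNTRY = 'USA'")
cur.execute("select rdb$record_version from COUNTRY where COUNTRY = 'USA'")
print(cur.fetchone()[0] == tra_id)   # True

con.rollback()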

RDB$RECORD_VERSION could be very useful in certain situations. For example:

  1. Preventing lost updates caused by concurrent read committed transactions.
  2. Incremental backup.
  3. Detecting data changes for caching purposes or changes performed by specific transaction(s).

1. Scenario - Preventing lost updates

It is quite common for interactive applications to use two transactions for data handling: a read-only read committed transaction to present an up-to-date view of the database state for the end user and a read/write transaction to write modified data back to the database. Typically, in interactive sessions, end users may leave data forms open for hours. The dual-transaction approach avoids problems with garbage collection and performance degradation caused by long version chains when transactions run for a very long time.

However, when two users modify the same record simultaneously, the update that commits first is lost without warning. The sequence of events that leads to a lost update would be as follows:

  1. User A: reads the most recent data using an R/O read committed transaction.
  2. User B: reads the most recent data using an R/O read committed transaction, seeing the same record values.
  3. User B: changes some values and writes the row back to the database, using the primary key or RDB$DB_KEY as the selector for the update, and commits the transaction.
  4. User A: changes some values on the same row, writes it back to the database using the primary key or RDB$DB_KEY as the selector for the update, and commits the transaction. The changes made by User B are overwritten without notice.

To avoid such lost updates, the application developer has to employ extra measures to detect changes performed by other users. One option is to use all columns and their previously read values as the selectors for the update. This method is not very efficient, and it does not work at all where the table contains BLOB columns.

Another option is to use a special column, such as an auto-incrementing number or a timestamp, that acts as a record version. The old value of this column is used as an additional selector for the update. Thus, if the row has been modified by another transaction in the meantime, the number of affected rows returned by the engine is zero, and the user can be notified to re-read the row and update it again if the change is still applicable.
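
As an illustration, this is a hedged sketch of that traditional approach; the ITEM table, its REC_VERSION column and all values are hypothetical:

import fdb

con = fdb.connect(dsn="localhost:mydb.fdb", user="sysdba", password="masterkey")
cur = con.cursor()

# Read phase: remember the version value together with the data
cur.execute("select ID, NAME, REC_VERSION from ITEM where ID = ?", [1])
row_id, name, old_version = cur.fetchone()

# Write phase: the remembered version acts as an extra selector
# and is bumped on success
cur.execute("update ITEM set NAME = ?, REC_VERSION = REC_VERSION + 1 "
            "where ID = ? and REC_VERSION = ?",
            ["New name", row_id, old_version])
if cur.rowcount != 1:
    print("Row changed by another transaction - re-read and retry")
else:
    con.commit()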

With the RDB$RECORD_VERSION pseudo-column always available, the special column is no longer necessary. Even using RDB$DB_KEY as the row identifier instead of the primary key is safe, and the write will be faster.

Python example:

import fdb

print "Connection A..."
conA = fdb.connect(dsn="localhost:employee.fdb", user="sysdba", password="masterkey")
transA_ro = conA.trans(fdb.ISOLATION_LEVEL_READ_COMMITED_RO)
transA_rw = conA.trans(fdb.ISOLATION_LEVEL_READ_COMMITED)
readA = transA_ro.cursor()
writeA = transA_rw.cursor()

print "Connection B..."
conB = fdb.connect(dsn="localhost:employee.fdb", user="sysdba", password="masterkey")
transB_ro = conB.trans(fdb.ISOLATION_LEVEL_READ_COMMITED_RO)
transB_rw = conB.trans(fdb.ISOLATION_LEVEL_READ_COMMITED)
readB = transB_ro.cursor()
writeB = transB_rw.cursor()

print "A - Reading data..."
transA_ro.begin()
readA.execute("select rdb$db_key, rdb$record_version, a.* from COUNTRY A")
rowsA = readA.fetchallmap()

print "B - Reading data..."
transB_ro.begin()
readB.execute("select rdb$db_key, rdb$record_version, a.* from COUNTRY A")
rowsB = readB.fetchallmap()

print "B - Updating data from first row..."
keyB = rowsB[0]['DB_KEY']
rverB = rowsB[0]['RDB$RECORD_VERSION']
currencyB = rowsB[0]['CURRENCY']

transB_rw.begin()
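# The record version read earlier acts as an extra selector: if another
# transaction has changed the row in the meantime, no row matches and
# rowcount stays zero.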
writeB.execute("update COUNTRY set CURRENCY = ? where rdb$db_key = ? and db$record_version = ?",
               [currencyB.upper(),keyB,rverB])
if writeB.rowcount != 1:
  print "B - Data changed by another transaction"
else:
  transB_rw.commit()
  print "B - Committed."

print "A - Updating data from first row..."
keyA = rowsA[0]['DB_KEY']
rverA = rowsA[0]['RDB$RECORD_VERSION']
currencyA = rowsA[0]['CURRENCY']

transA_rw.begin()
writeA.execute("update COUNTRY set CURRENCY = ? where rdb$db_key = ? and db$record_version = ?",
               [currencyA.title(),keyA,rverA])
if writeA.rowcount != 1:
  print "A - Data changed by another transaction"
  transA_rw.rollback()
else:
  transA_rw.commit()
  print "A - Committed."

Output from this sample script:

Connection A...
Connection B...
A - Reading data...
B - Reading data...
B - Updating data from first row...
B - Committed.
A - Updating data from first row...
A - Data changed by another transaction

2. Scenario - Incremental backup

Replication and incremental backup have similar characteristics and could use the same method to select data for transfer. Theoretically, such a method could utilize RDB$RECORD_VERSION as the selector. However, there are reasons why RDB$RECORD_VERSION is not very efficient for replication:

  1. A pseudo-column cannot be indexed. Retrieval of rows for replication would thus require a natural scan that reads all rows in the table. For big databases, the non-indexed reads could significantly limit how often replication runs can be performed.
  2. Simply using the ID of the last transaction that performed a replication run to specify the range of transaction IDs for the next run is not feasible. Any transactions more recent than the Oldest Active Transaction (OAT) could be either active or committed, so the only transactions safe for replication are those that the current (replication) transaction determines to lie between the one "last seen" and the Oldest Active. The transaction "last seen" by a replication run would have an ID of zero for the first-ever run, and subsequently the ID of the OAT from the previous run.

The limitation is thus that any long-running transaction effectively blocks replication, just as it blocks garbage collection.

This query could retrieve the OAT seen by the current transaction:

SELECT MON$OLDEST_ACTIVE
FROM MON$TRANSACTIONS
WHERE MON$TRANSACTION_ID = current_transaction

While the constraints discussed present a significant limit on the usefulness of RDB$RECORD_VERSION for replication purposes, they might be less of a problem for incremental backups, which are typically performed infrequently and outside working hours. However, because RDB$RECORD_VERSION can only be used to select newly inserted or updated records, deleted rows require special handling. An audit table, managed by triggers, could achieve this.
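
A minimal sketch of such handling, assuming a hypothetical table T with an integer primary key ID (the audit table and trigger names are illustrative only):

import fdb

con = fdb.connect(dsn="localhost:mydb.fdb", user="sysdba", password="masterkey")

# Audit table that records the keys of deleted rows
con.execute_immediate(
    "create table T_DELETED (ID integer not null, "
    "DELETED_AT timestamp default current_timestamp)")
con.commit()

# AFTER DELETE trigger that fills the audit table
con.execute_immediate(
    "create trigger T_AD for T active after delete as "
    "begin insert into T_DELETED (ID) values (old.ID); end")
con.commit()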

It is important to note that incremental backup must be performed in a SNAPSHOT (concurrency) transaction to ensure that the backed-up data remains consistent with the state of the database at the start of the backup.
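
Putting the pieces together, one incremental run might look like the following hedged sketch; the table T and its T_DELETED audit table are the hypothetical ones from above, and persisting last_seen between runs is left out:

import fdb

con = fdb.connect(dsn="localhost:mydb.fdb", user="sysdba", password="masterkey")

# SNAPSHOT (concurrency) transaction keeps the harvested data consistent
tra = con.trans(fdb.ISOLATION_LEVEL_SNAPSHOT)
tra.begin()
cur = tra.cursor()

# OAT as seen by this transaction - the upper bound for this run
cur.execute("select MON$OLDEST_ACTIVE from MON$TRANSACTIONS "
            "where MON$TRANSACTION_ID = current_transaction")
oat = cur.fetchone()[0]

last_seen = 0  # in practice, restored from the previous run's bookkeeping

# Inserted or updated rows committed by transactions in the safe range
cur.execute("select rdb$record_version, t.* from T t "
            "where rdb$record_version >= ? and rdb$record_version < ?",
            [last_seen, oat])
for row in cur.fetchallmap():
    pass  # write the row to the incremental backup

# Deletions recorded by the audit trigger fall into the same range
cur.execute("select ID from T_DELETED "
            "where rdb$record_version >= ? and rdb$record_version < ?",
            [last_seen, oat])
for row in cur.fetchall():
    pass  # record the deletion in the incremental backup

tra.commit()
# oat becomes last_seen for the next run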

3. Scenario - Detecting data changes

Sometimes, for performance reasons, it is desirable to cache data from a database at the client side or in a middle tier. An example would be a cache of rows from a lookup table. Utilizing Firebird events and RDB$RECORD_VERSION together enables this remote caching to be maintained quite efficiently.
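
For instance, a client-side cache of a lookup table could be refreshed as in the hedged sketch below; the LOOKUP table and the LOOKUP_CHANGED event (posted by a server-side trigger via POST_EVENT) are assumptions, and the simple "highest version seen" watermark ignores the OAT subtleties discussed in the previous scenario:

import fdb

con = fdb.connect(dsn="localhost:mydb.fdb", user="sysdba", password="masterkey")
cur = con.cursor()

# Initial load of the cache; remember the highest record version seen
cur.execute("select rdb$record_version, l.* from LOOKUP l")
cache = cur.fetchallmap()
last_version = max(row['RDB$RECORD_VERSION'] for row in cache)
con.commit()

# Block until a server-side trigger posts the LOOKUP_CHANGED event
conduit = con.event_conduit(['LOOKUP_CHANGED'])
conduit.wait()
conduit.close()

# Refresh only the rows changed since the cache was built
cur.execute("select rdb$record_version, l.* from LOOKUP l "
            "where rdb$record_version > ?", [last_version])
changed = cur.fetchallmap()  # merge these rows into the cache
con.commit()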

4. Scenario - Batch processing

RDB$RECORD_VERSION could also be useful for batch/flow processing in producer/consumer schemes that use shared table(s). For example, several worker threads or applications might store batches of data in a common work-log table that another thread or application later processes and deletes. In that situation, RDB$RECORD_VERSION can serve as the batch identifier, removing the need to create and maintain a separate column for that purpose.
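
A hedged sketch of that scheme, in which the WORK_LOG table and the hand-over of the batch ID from producer to consumer are assumptions:

import fdb

con = fdb.connect(dsn="localhost:mydb.fdb", user="sysdba", password="masterkey")
cur = con.cursor()

# Producer: store a batch and remember its own transaction ID as the batch ID
cur.execute("select current_transaction from rdb$database")
batch_id = cur.fetchone()[0]
for payload in ("job 1", "job 2", "job 3"):
    cur.execute("insert into WORK_LOG (PAYLOAD) values (?)", [payload])
con.commit()  # batch_id is handed over to the consumer, e.g. via a queue

# Consumer: process and delete exactly that batch
cur.execute("select PAYLOAD from WORK_LOG where rdb$record_version = ?", [batch_id])
for (payload,) in cur.fetchall():
    pass  # process the payload
cur.execute("delete from WORK_LOG where rdb$record_version = ?", [batch_id])
con.commit()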
