Vulcan - Work in Progress: Lock Manager

From Jim Starkey on the Firebird Development List 3rd December 2004

There were two fundamental deficiencies in the lock manager: the inability of a "lock owner" to wait on more than one lock request, and the absence of any mechanism to wake up a thread (the two are not unrelated).

A non-functional change I made was to rename lhb, own, lbl, and lrq to LockHeader, LockOwner, LockBlock, and LockRequest, respectively. I also added a LockEvent, to be discussed later on. The names are not only more descriptive, but they also collate cleanly in Visual Studio.

The first functional change in LockOwner was to replace the (offset) pointer to the "pending" request with a self-relative que of pending requests. Besides the definition of the structures themselves, this involved replacing each reference to the pending request with either an insert_tail, a remove_que, or a loop through the list of pending requests.
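To make the que operations concrete, here is a minimal sketch, not Vulcan's actual code: it assumes links are stored as offsets from the base of the shared region (so the table stays valid when the region is mapped at different addresses in different processes), and the names Region, abs, and rel are invented for the example. insert_tail and remove_que mirror the operations named above.

```cpp
#include <cassert>
#include <cstdint>

using srq_ptr = uint32_t;               // offset from the shared region base

// A queue link: forward/backward are offsets, never raw pointers.
struct srq {
    srq_ptr forward;
    srq_ptr backward;
};

// Stand-in for the shared lock table region (names are illustrative).
struct Region {
    alignas(alignof(srq)) char base[4096];

    srq* abs(srq_ptr off)               // offset -> absolute pointer
    {
        return reinterpret_cast<srq*>(base + off);
    }

    srq_ptr rel(srq* node)              // absolute pointer -> offset
    {
        return static_cast<srq_ptr>(reinterpret_cast<char*>(node) - base);
    }

    // Insert 'node' at the tail of the circular queue headed by 'head'.
    void insert_tail(srq* head, srq* node)
    {
        srq* prev = abs(head->backward);
        node->forward = prev->forward;  // == offset of head
        node->backward = head->backward;
        prev->forward = rel(node);
        head->backward = rel(node);
    }

    // Unlink 'node' from whatever queue it is on.
    void remove_que(srq* node)
    {
        abs(node->backward)->forward = node->forward;
        abs(node->forward)->backward = node->backward;
        node->forward = node->backward = rel(node);
    }
};
```

Because every link is an offset, inserting and removing requests never stores a process-local address into the shared table, which is the whole point of the self-relative representation.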

A second problem was code that tried to modify the lock table without acquiring it "for performance reasons". In a multithreaded environment it is critical that shared objects be protected by synchronization objects "for reliability reasons." To make sure I got them all, I added a module static, LOCK_table, as the pointer to the shared lock table. The lock table acquire() sets LOCK_table to point to the lock table, while the lock table release() sets it to -1, which should cause stray, undisciplined references to make their presence known. A closely related problem was a general sloppiness about whether a called function released the table at the end of its operation. That required entirely too much ESP for my taste, so I normalized things a bit. In general, guys that are called with the lock table acquired don't release it.
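The acquire/release discipline can be sketched roughly like this. All names and types except LOCK_table are assumptions for illustration (the real latch is a cross-process synchronization object, not a std::mutex); the point is the poisoned pointer, which turns a stray unsynchronized reference into an immediate fault rather than silent corruption.

```cpp
#include <cassert>
#include <mutex>

struct LockHeader { int activity; };    // stand-in for the real table header

// Stand-ins: the real table lives in shared memory behind a real latch.
static LockHeader  table_storage;
static std::mutex  table_mutex;

// The module static: valid only between acquire() and release().
static LockHeader* LOCK_table = reinterpret_cast<LockHeader*>(-1);

LockHeader* acquire()
{
    table_mutex.lock();
    LOCK_table = &table_storage;        // pointer becomes valid while held
    return LOCK_table;
}

void release()
{
    // Poison the pointer so any code that touches the table without
    // acquiring it faults immediately instead of corrupting shared state.
    LOCK_table = reinterpret_cast<LockHeader*>(-1);
    table_mutex.unlock();
}
```

A dereference of LOCK_table outside an acquire/release pair lands on address -1 and crashes on the spot, which is exactly the behavior you want from "make their presence known."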

The third step (still underway) is to change events. In the past, a wakeup event was statically assigned to an owner block, which will never fly in the brave new world of many threads. I've made a new lock table object type, LockEvent, that can be allocated and released as needed. LockEvents are allocated through and released to an owner. If an owner doesn't have any on hand, it tries to get one from a free list hanging off the table header block; otherwise it allocates a new object in the lock header space. The owner then initializes the event and returns it. Once initialized, a lock event stays with its owner until the owner is deleted, at which point it goes to a free event list hanging off the header.
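The allocation cascade might look roughly like this sketch; the container types, function names, and heap allocation are assumptions for illustration (the real objects live in the shared lock header space, not on the C++ heap), but the three-tier policy matches the description above: reuse an event on hand, then pull from the header free list, then allocate fresh.

```cpp
#include <cassert>
#include <vector>

struct LockEvent { bool initialized = false; };

struct TableHeader {
    std::vector<LockEvent*> free_events;    // events from deleted owners
};

struct LockOwner {
    std::vector<LockEvent*> events;         // events this owner holds

    LockEvent* allocateEvent(TableHeader& header)
    {
        if (!events.empty())                // 1. reuse one already on hand
            return events.back();

        LockEvent* event;
        if (!header.free_events.empty()) {  // 2. recycle from header free list
            event = header.free_events.back();
            header.free_events.pop_back();
        } else {                            // 3. allocate a new object
            event = new LockEvent;
        }
        event->initialized = true;          // owner initializes the event
        events.push_back(event);            // stays with owner until deletion
        return event;
    }

    // On owner deletion, its events go back to the header free list.
    void destroy(TableHeader& header)
    {
        for (LockEvent* e : events)
            header.free_events.push_back(e);
        events.clear();
    }
};
```

Because events return to the header free list on owner deletion rather than being destroyed, a busy lock table reaches a steady state where event allocation stops growing the header space.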

The lock event mechanism uses the AsyncEvent encapsulation of the ISC_event mechanism. Among other things, the AsyncEvent class is platform independent, so I expect a great deal of platform-specific code to disappear from the lock manager.
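For readers unfamiliar with the pattern, here is a minimal, hypothetical sketch of what such a platform-independent event encapsulation can look like, built on std::condition_variable rather than the actual ISC_event primitives; none of AsyncEvent's real interface is reproduced here. ISC-style events use a generation counter: a waiter snapshots the count, then blocks until a post advances it, which avoids lost wakeups when the post arrives before the wait.

```cpp
#include <cassert>
#include <condition_variable>
#include <mutex>
#include <thread>

class Event {
    std::mutex mtx;
    std::condition_variable cv;
    unsigned long count = 0;          // generation counter

public:
    // Snapshot the counter before deciding to wait.
    unsigned long clear()
    {
        std::lock_guard<std::mutex> guard(mtx);
        return count;
    }

    // Wake everyone currently waiting on this event.
    void post()
    {
        {
            std::lock_guard<std::mutex> guard(mtx);
            ++count;
        }
        cv.notify_all();
    }

    // Block until a post happens after the given snapshot. If the post
    // already landed between clear() and wait(), this returns immediately.
    void wait(unsigned long snapshot)
    {
        std::unique_lock<std::mutex> guard(mtx);
        cv.wait(guard, [&] { return count != snapshot; });
    }
};
```

The clear/post/wait split is what lets one portable class replace a pile of per-platform semaphore code: the platform differences collapse into whatever primitive backs the condition variable.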

I'm planning to replace all of the old conditionally compiled semaphore management crud with the single lock event mechanism, which should further reduce the code size and resource utilization of the lock manager.

It's still a pig. But when I'm done, it will be a smarter, cleaner, faster pig.