OSRI Architecture

by Jim Starkey

In my efforts to pry the layers apart, it's become clear that an understanding of the OSRI architecture is far from universal. I thought it might make sense to take a few minutes and review it.

The Philosophy

Everything changes. Companies come and go, networks come and go, processor families come and go, standards come and go. Of the four operating systems supported by the initial InterBase version, two of the three companies that built them are out of business, none of the operating systems is still supported, and two of the three supported networks have disappeared from the planet. Yet things (and people) need to work together now, in the future, and during the transition.

Networks are large and complex, populated by machines of many architectures and manufacturers. The only way to cope with this degree of change is to implement a strong architecture providing a stable interface to database clients while supporting multiple implementations, versions, platforms, and connectivity solutions.

The Model

The foundation of the OSRI interface is that functional objects are represented by abstract objects referenced by handles, and that operations are performed by sending messages to and receiving messages from these objects. The primary objects are the database object (created by attachDatabase and createDatabase, destroyed by detachDatabase), the transaction object (created by startTransaction, destroyed by commit and rollback), and the request object (created by compileRequest, destroyed by releaseRequest). Requests are executed by temporarily binding the request to a transaction. Data flows by messages sent to and received from the running request.

The model is intentionally abstract. The client creates, interacts with, and destroys local objects which, in turn, communicate through plumbing to the dispatch, communication, and gateway components and to the actual database engine. Each component understands the mechanics but not the semantics of the architecture. The local dispatch component, the Y-valve, knows how to poll subsystems during an attach, how to encapsulate the subsystem handle in a Y-valve handle, and how to route subsequent interactions to that object. Things that don't concern the Y-valve are just passed along by length and address. Parameters are generally passed as self-describing messages that a component can handle, modify, extend, strip out, or simply pass through, depending on requirements. All components are required to ignore parameters that they don't understand.

The Application Program Interface

The API was once formally specified by the OSRI (Open Systems Relational Interface), published by InterBase (and suppressed by Borland). OSRI is a slight generalization of DSRI (DEC Standard Relational Interface), published, then suppressed, by Digital Equipment Corporation. OSRI differs from DSRI in the internal format of status vectors, the format of the date data type, and prefix symbols. Calling sequences, codes, semantics, and general architecture are identical.

OSRI requires that all entry points be published -- OSRI forbids backdoors. A prospective service must conform to the standards and conventions of the architecture. Client-side layered services that use the published standard are not considered part of OSRI. The helper function "isc_vtov", for example, is provided for client convenience but is not considered part of OSRI.

All objects are represented by opaque handles of type (void*) (the original specification was 32 bits, but that has to go). A handle is a proxy for an object, not the object itself. It is absolutely forbidden for any OSRI client or component to make any assumption about the internal structure of a handle. A handle may be a pointer to an internal object, an index into an array, a random number used to enter a hash table, or virtually anything else.

All OSRI calls take as their first argument a 20-longword status vector to receive error codes and return an integer status indicating success or failure. Status codes indicate success or failure, not state. A null status vector indicates that, in case of error, the OSRI function is to post the error to stderr and terminate the process. OSRI calls do not throw exceptions.

OSRI calls pass all interface object handles by reference. Handles must be zero before object creation and are reset to zero when an object is destroyed. All other data is passed either by value or as the length and address of a formatted message. Parameter messages are self-describing and prefixed by a version number. Individual parameters are represented by a parameter code, a value length, and the actual value, which may be either ASCII or numeric. If numeric, the value is represented least significant byte first.

OSRI originally didn't allow any component to retain addresses from its clients. This was relaxed to allow an option for the Y-valve to retain and zero the addresses of handles. There have probably been subsequent violations that need to be corrected.


The OSRI client API is implemented by a subsystem-independent dispatch layer (called the Y-valve) which, in turn, passes each call to one or more subsystems. The Y-valve, in general, is a thin transmission layer. The Y-valve does, however, manage multi-database transactions and certain information calls. The Y-valve must be thread safe. The Y-valve must operate without information as to the characteristics of its subsystems. The Y-valve executes an attachDatabase call by polling its subsystems in a prescribed order until a particular subsystem reports a successful attachment. The Y-valve is forbidden to communicate with any subsystem by anything other than a parallel analog of the OSRI architecture. In the original InterBase implementation, the subsystem APIs were defined as differently prefixed OSRI calls. In the Vulcan implementation, the subsystem API is a formal C++ class.

Sitting under the Y-valve are zero or more subsystems (no subsystems would make for a very boring instantiation of the architecture). The subsystems may be remote interfaces (network transmission layers), central server interfaces (specialized interprocess communication), gateways to other database systems, or engines (sometimes called access methods). Depending on system configuration, a client-side Y-valve may have only a remote interface. A server, architecturally no different from any other client, may have one or more engines (the current release, maybe a prior release for transitional compatibility, and maybe a beta version), as well as a remote interface to handle double-indirect attachments or other cross-network gateways.

It is explicitly legal for any party to interpose a second architecturally conforming Y-valve, platform characteristics permitting, between an application program and another Y-valve to support third (fourth?) party subsystems, logging, additional security, debugging, or any other desired services. No component, in other words, may make any assumptions about its caller.

Each subsystem is logically independent and capable of release independently of any particular Y-valve. No subsystem is forbidden to call back into the system Y-valve, but it may not make any assumptions about the internal structure or behavior of the Y-valve.


Firebird 1.5 currently mixes internal and external code in single modules and is grossly out of compliance with the architecture.

The original InterBase implementation of OSRI allowed a QLI user on an Apollo system to open an Rdb/VMS database on a VAX, and a Datatrieve user on a VAX to attach to an InterBase database running on an Apollo. The flow of control in the first case was:

  1. QLI called gds_$attach_database in the Apollo Y-valve.
  2. The Y-valve called the remote interface that detected a TCP node name in the attach string. The remote interface opened a connection to that node, a VMS system.
  3. The InterBase TCP server on the VMS system received the attach call from the remote interface and passed it to the InterBase VMS Y-valve.
  4. The InterBase VMS Y-valve passed the call to the Rdb gateway (a proper DSRI client).
  5. The Rdb gateway passed the call to the DEC Rdb Y-valve.
  6. The Rdb Y-valve passed the call to Rdb/VMS which sooner or later did something and returned success.

The opposite path was even more interesting.

  1. VAX Datatrieve called RDB$ATTACH_DATABASE. By logical name trickery, the call was fielded by the InterBase DSRI Y-valve.
  2. The InterBase DSRI Y-valve, having politely offered the attachment to the bona fide DEC Rdb Y-valve, passed the call to its InterBase gateway (a proper DSRI subsystem and a proper OSRI client).
  3. The InterBase VMS gateway passed the call to the InterBase VMS OSRI Y-valve.
  4. The InterBase VMS OSRI Y-valve passed the call to the TCP remote interface.
  5. The InterBase VMS TCP remote interface passed the call over the wire to the InterBase Apollo remote server.
  6. The InterBase Apollo remote server passed the call to the InterBase Apollo Y-valve.
  7. The InterBase Apollo Y-valve passed the call to the InterBase engine.

This level of connectivity and transparency can be built, maintained, and extended only by rigid adherence to the architecture and layering.

Implementation Rules

Some hard and fast rules must be respected by all developers at all times. Among the rules are:

  • Thou shall not use global variables. Ever. This means you.
  • Thou shall not use module static variables without a damn good reason and then only if protected by formal synchronization primitives. (Read-only initialized structures are ok).
  • Thou shall not commingle engine or remote code with user callable code. User visible code does not belong in any subsystem and subsystem code does not belong in the Y-valve.
  • Thou shall not trespass into other subsystems.
  • Thou shall not attempt to bypass the Y-valve.
  • Thou shall not try to introduce ESP into the formal API. Dumb and predictable trumps cute.