Reinier Olislagers wrote:

I'm trying to document the Firebird/Interbase connector library for the FreePascal language.

I came across the BLOB segment size and understood it to mean that, when writing BLOBs, you need to write in chunks no larger than the segment size.

I have looked at the Firebird Language Reference Update and the IB 6 Language Reference, and have grepped through the doc directory of a FB 2.5 install.

The IB 6 language reference mentions:

BLOB [SUB_TYPE {int | subtype_name}] [SEGMENT SIZE int]

but doesn't explain what the segment size actually stands for.
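
As an example (the table and column names below are made up purely to illustrate the syntax), such a column could be declared as:

CREATE TABLE documents (body BLOB SUB_TYPE TEXT SEGMENT SIZE 4096);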

Presumably having a larger segment size might improve performance for reading/writing large BLOBs, but might lead to bigger storage requirements per BLOB?

It also says the BLOB segment size is limited to 64 KB.

I did find this post by Helen Borrie:

http://tech.groups.yahoo.com/group/firebird-support/message/94611
> > From Firebird's perspective, can any amount of data be written
> > to a blob in a single operation? Without segmenting it?

Broadly, yes. In the modern era the segmenting of blobs occurs at the server
side, according to some algorithm determined by page size and probably also
whether blob data is stored on blob pages or data pages. It is not
affected by any segment size you define for your blob column so you can safely
forget about it. By default the segment size will be 80 bytes but the engine
doesn't care about that, either.

Here is an excerpt of the code that writes blobs (I hope I got the right part):

// Write as many full segments of BlobSegmentSize bytes as fit.
while BlobBytesWritten < (BlobSize-BlobSegmentSize) do
  begin
  isc_put_segment(@FStatus[0], @blobHandle, BlobSegmentSize, @s[(i*BlobSegmentSize)+1]);
  inc(BlobBytesWritten,BlobSegmentSize);
  inc(i);
  end;
// Write whatever is left over (the final, possibly partial, segment).
if BlobBytesWritten <> BlobSize then
  isc_put_segment(@FStatus[0], @blobHandle, BlobSize-BlobBytesWritten, @s[(i*BlobSegmentSize)+1]);

My questions:

  1. Do Firebird users need to know more about segment size (and if so, what ;) ), or is it indeed just a relic of the past?
  2. If relevant, is there some default segment size?
  3. If relevant, has the max segment size changed since IB6?
  4. Seeing Helen's post, can the code be changed to just output the entire BLOB in one go or have I misunderstood?

Ann W. Harrison answers:

Your understanding is wrong. The segment size was a suggestion for higher-level tools that wanted a hint about what size of blob chunk would be convenient to handle. To the best of my knowledge, nothing uses it now. Even when it was used, it did not limit the size of the segments that could be passed in and out.

That interface was designed in 1982 - think about the increase in available RAM since then.

Pass the largest chunks that are convenient to handle, remembering that in various places there are 16-bit integers that describe the length of things. Passing larger chunks has no effect on the storage size.
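
As a sketch of what that advice looks like against the excerpt above (it reuses the same FStatus, blobHandle, s and BlobSize; the 32 KB ChunkSize is an arbitrary choice that comfortably fits the 16-bit length argument):

const
  ChunkSize = 32768; // arbitrary; anything that fits in a 16-bit length works
var
  BytesWritten, Len: Integer;
begin
  BytesWritten := 0;
  while BytesWritten < BlobSize do
    begin
    // Write either a full chunk or whatever remains, whichever is smaller.
    Len := BlobSize - BytesWritten;
    if Len > ChunkSize then
      Len := ChunkSize;
    isc_put_segment(@FStatus[0], @blobHandle, Len, @s[BytesWritten+1]);
    inc(BytesWritten, Len);
    end;
end;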

As you pass in segments of a blob, they are written to empty pages in cache by concatenating the segments if necessary. If the total size of the blob is less than a page, it will be written on a data page. If not, Firebird builds a vector of page numbers that hold the parts of the blob, writes out the blob pages, then writes the vector to a data page. If the vector itself won't fit on a page, Firebird writes the vector to a series of pages, keeping a vector of blob/vector pages.
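
As a rough illustration (ignoring page and record headers): with an 8 KB page size, a 100 KB blob would be spread over about 13 blob pages, and the data page would hold only the blob's metadata plus the vector of those 13 page numbers; only for far larger blobs does the vector itself spill over onto pages of its own.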

To sum it up:

  1. It's a relic of ancient times.
  2. The whole blob segmentation mechanism was a way to get around the tiny memory sizes of machines in the early 1980s.


