Designing Client/Server Systems Right the First Time

by David S. Linthicum, Borland Developers Conference 1995.


Remember the good old days when computing was always a centralized concept? The days when application programs, the interface, and the data all lived together in the isolated world of the mainframe computer? Long gone are those days. Client/server and other distributed processing solutions are the new battle cries in an industry fully committed to the "downsizing" movement. Yet the sheer complexity of client/server computing puts a new emphasis on the design and planning process. This paper is for application developers considering a move to client/server. It provides an overview of what it takes to design, develop, and implement client/server applications.

Working with Complexity

Designing a client/server system requires a very different approach than designing for its centralized predecessor. The key difference is that you design for more than one system, with far-reaching implications for the design and development of an application. Your design needs to account for such things as the location of processing, the communications between the processes, and a variety of additional architectural issues that range from the user interface to the operating systems. Application integration happens at the client side, not at the back end. We handle growth with increased distribution rather than by increasing machine size. Moreover, in most client/server systems, the client and the server (and sometimes the application server) will likely be different systems. The client/server designer and developer must therefore understand several environments, and how to make the connections between them.

There is a right way and a wrong way to design a client/server system. You'll surely fail if you design your client/server computing environment with outdated legacy design techniques. This paper will look at application design methods for client/server, CASE, server sizing, configuring the client, network and middleware planning, and how to tie them all together. I will guide you through the issues related to designing client/server systems, with the goal of thinking ahead to get it right the first time.

Downsizing not Downgrading

Client/server technology is not a panacea, and it does not help if it is not put to proper use. Client/server is just a logical concept, a model or a paradigm for information processing. The concept promises revolutionary changes in the way we process information, but delivering on that promise takes a lot of preparation and practical understanding of the technology. The primary reason organizations enter the client/server world is the promise of the technology and the benefits it delivers. Among these benefits are:

  • Potential for increased speed
  • Potential for decreased cost
  • A better user interface
  • Flexibility

Designing a client/server system is complex because these systems have many more components than their centralized predecessors. The networks, operating systems, database servers, user interfaces, front-end development tools, middleware, processor types, standards, and design methodologies are all in the client/server mix. As a rule of thumb: A client/server system is only as effective as its least effective component. Therefore, when designing a client/server system, all these components require your undivided time and attention. Overlook a component in the planning process and your entire system suffers.

When designing a client/server system it makes sense to divide the system, and thus the client/server design, into four logical layers: the server, the network, the client, and the application. Each layer includes many components.

The Application

The application is how the user perceives your client/server system. An application exploits the user interface of the client for presentation to the user, and the server for data services and some processing. Client/server applications have certain attributes that make them much different from traditional centralized applications, and therefore require a different design process. The traditional top-down waterfall approach to applications development does not work for client/server development.

Client/server application development often uses an object-oriented (OO) approach in lieu of a structured one. Object-oriented programming tools, such as Borland C++ and Delphi, allow developers to maximize code reuse, thus making standard interface development and maintenance on distributed systems a bit easier. Of course, object orientation brings with it a new paradigm, and thus a new way of thinking.

Objects enhance the traditional life cycle by knocking down the barriers between the stages of development, providing a single object-oriented model for use at every stage. Moreover, object technology provides specific features (methodologies, languages, and development environments) that assist in object-oriented development.

Object-oriented analysis is the process of looking at all potential objects within an organization to define their characteristics and relationships. This is a process of building abstract models of the current situation and of the items that need improvement. After analysis, object-oriented system design takes place. Here you translate the system requirements into the actual structure of the system. Usually the analysis and design process uses formal analysis notation as described by a particular methodology, such as Peter Coad and Ed Yourdon's object-oriented analysis method, or Grady Booch's object-oriented analysis and design method. Many others are available, even more than in the structured analysis and design world.

The distribution of program code and the user interface has also changed considerably. Traditionally, the interface accounted for about 25% of the application, with the code that does the work taking up the remainder. In contrast, a client/server application devotes roughly 75% of the application to the interface. Another aspect of application development on client/server systems is that the application is divided between the client and the server, a concept known as "application partitioning." Most client/server systems place the majority of the application logic on the client system (the "fat client") while the server provides data and some application processing (using stored procedures and triggers). A client/server design methodology, or "methodology facilitator tool," can walk a developer through the partitioning process. This will result in very different designs, depending on the technology employed.

A methodology facilitator tool for client/server design provides a procedure and an automated facility to define and control the various analysis and design activities for your organization. In a sense, these tools are composites of other methods, including the best portions of the structured analysis and design world, object-oriented methods, data modeling, and even some methods to make technology decisions (networking, computers, operating systems, etc.). These tools show you how to apply each design concept to the process of designing your client/server application. When building a client/server system, there is a greater need for the step-by-step approach to design which these methods offer. Methodologies act as an expediter for a project, guiding the project leader through each stage of the design. Third-party vendors and big consulting firms who specialize in client/server development also sell and support these tools. Some tools that offer such services include:

  • Ernst & Young's Navigator Series
  • James Martin's The Client/Server Methodology (TCSM)
  • Andersen Consulting's Foundation

These methodologies and the tools that automate them are expensive. Take time to look at what each tool offers before you make a purchasing decision. Often you may find that developing your own method is a better fit for your organization, especially if your needs are unique. For instance, many organizations that have special business requirements, hospitals for example, now develop an overall client/server strategy that includes custom methods. Sometimes organizations use a "cherry-picking" method. Using this approach, organizations examine all the available methodologies, and pick and choose which pieces work best for them.

There are several features to look for when selecting the methodology that's right for you. First, make sure there is a rich array of built-in methods which will allow the greatest degree of flexibility. The built-in methods should include various structured analysis and design techniques, prototyping features, object-oriented analysis and design, as well as the client/server aspects (application partitioning, networking, database, etc.). You should be able to change the methods to meet the particular needs of your organization. Another key feature is the ability for the tools to communicate with your development environment. Finally, these tools should help the developer select the technology as well as track the progress of the development effort for the project.

Server Sizing

While the application design process is ongoing, someone has to pick a server platform and a database server to run on that platform. Selecting the wrong server platform means you run the risk of exhausting the capacity of the server. There are many pressing questions: What database server meets our requirements? What processor is appropriate? What operating system provides better database server support and performance? Can we increase capacity in the future if needs change? To begin addressing these questions, one should look toward the tried and true concept of capacity planning.

Many organizations respond to the server requirement by overestimating or underestimating the server's capacity. A server that is much too big for the job is a waste of money that takes the savings out of downsizing. A server that is too small will bring overall system performance down to a snail's pace. It really boils down to simple mathematics. For example, if a client request consumes 4 percent of a 10-MIPS server's processing capacity, it is logical that the server can handle up to 25 clients. The magic number is the requirement of 0.4 MIPS per request, usually derived through testing. Using this figure, we can make further assumptions if the client load for the design increases. For instance, if we need to support 100 clients, then a server running at 40 MIPS is appropriate. Or, if we're supporting a large-scale organization with 1,000 clients, then it's time to look for a server that does a screaming 400 MIPS. Of course technology limits your solutions. In most cases, when you reach a server's capacity, it's time to find another server to share the load. When processing moves to other servers, you'll start the game of relocating or partitioning data to split the processing load between the servers. In addition, designers and developers can employ new multiprocessing database servers, such as Sybase's Navigation Server or Oracle's Parallel Query Option. TP monitors are useful as well. They're able to funnel database server requests, allowing many clients to use only a few database connections, thus reducing the load on the database server.
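
The sizing arithmetic above can be written out in a few lines of code. This is an illustrative back-of-the-envelope model, not a capacity-planning tool; the 0.4-MIPS-per-request figure is the one derived in the text, and in practice it must come from testing your own application.

```python
# Back-of-the-envelope server sizing using the figures from the text:
# each client request consumes 0.4 MIPS, so capacity scales linearly
# with the number of clients.

def required_mips(clients, mips_per_request=0.4):
    """Server capacity (in MIPS) needed to support a given client count."""
    return clients * mips_per_request

def max_clients(server_mips, mips_per_request=0.4):
    """Clients a server of a given size can handle (whole clients)."""
    return int(round(server_mips / mips_per_request))

print(max_clients(10))      # a 10-MIPS server handles 25 clients
print(required_mips(100))   # 100 clients call for 40.0 MIPS
print(required_mips(1000))  # 1,000 clients call for 400.0 MIPS
```

The linear model breaks down once you exceed what a single server can deliver, which is exactly the point at which the text suggests partitioning data across multiple servers or funneling requests through a TP monitor.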

In the real world, each client/server application has different server processing requirements. Your database server vendor can help you determine the optimal server size. However, your best bet is to make sure your server will not collapse under the stress of your client/server system. Server benchmarking or other testing activities are the keys. Factoring in your own results using your application, your network, and your clients, allows you to more accurately determine how your server will perform when operational.

Database servers such as InterBase provide a high-performance database for client/server systems. InterBase is an ANSI SQL-92-compliant relational database server that provides simultaneous access to multiple databases, query optimization, and support for Binary Large Object (BLOB) data types. In addition, InterBase provides declarative referential integrity, support for stored procedures and triggers, and advanced transaction management features including an automatic two-phase commit.

Another problem when one attempts to determine server capacity is the fact that any single client could potentially consume the majority of the server's resources. On a large system this is the most troubling case, since a runaway client consumes far more than the per-request share the sizing math assumes. This is one of those things that is difficult to plan for, although it's always a good idea to build in some extra capacity for those inevitable problem clients. After figuring out the correct server size, add 10% to allow for such fluctuation.

After determining the processor power for the server, it's time to look for an operating system to run on the server. Your operating system should provide advanced features that include preemptive multitasking, multi-threading, virtual memory management, and high-performance I/O. Unix is king of the server operating systems due to its ability to run on small to very large platforms, as well as its advanced operating features. However, many are finding that database servers work just as well on Novell's NetWare (as an NLM), or on other advanced operating systems such as Microsoft's Windows NT.

Configuring the Client

Many client/server system designers make the mistake of neglecting the client. To further complicate the problem, the clients sit on desktops instead of in computer rooms, which makes them difficult to control. In reality, the client is where the application processing really happens, and the client interface and application are how the user perceives your client/server system. A poorly performing client, or a client with a bad interface, makes for an unsuccessful client/server development effort. There are three things to think about when building a client: the processor, the operating system, and the user interface.

Client/server by its nature can support a variety of operating systems and a variety of processors at the client level. This makes the client/server architecture very flexible. Since the client should perform most of the processing, it is not a good idea for your organization to recycle those old IBM XTs as clients. If you're using Intel clients, a 486 or Pentium processor is a better fit.

From a support standpoint, it is preferable to use only one operating system in the client community. However, if multiple operating systems are in your future, make sure they can support the processing load the client will require. Most clients use a Graphical User Interface such as Microsoft Windows for DOS systems, or Motif for Unix. Exploiting the interface is a simple matter of selecting the correct front-end development software to drive the application. GUI products such as Delphi, Paradox, and dBase provide a quick and easy graphical development environment as well as built-in database server connections.

The Network and Middleware

Planning and configuring the proper network for your client/server system requires some knowledge of networking in general, and some analysis time. The network component of the client/server system (including the physical network connection, network protocol, and middleware layer) is responsible for moving requests from the client to the server, then transporting the results of the requests back to the client.

When many clients share a network, the traffic between the client and server depends on the number of clients. Without an adequate network, client/server performance suffers. For example, a particular network becomes congested if over 2,000 packets per second move from the client to the server. Therefore we can figure out that if the average transaction size is 20 packets, a system that supports 10 clients and generates one transaction per second will consume 10 percent of the network's capacity. Get the idea? We learn that the network will need some additional capacity if the number of clients reaches 100. This is a simple network sizing calculation.
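
The simple network sizing calculation above can also be sketched in code. The figures are the ones from the text (congestion at 2,000 packets per second, 20 packets per transaction, one transaction per client per second); your own network's numbers will differ.

```python
# Network sizing using the figures from the text: the network congests at
# 2,000 packets/second, a transaction averages 20 packets, and each client
# generates one transaction per second.

def network_utilization(clients, tx_per_second=1.0, packets_per_tx=20,
                        capacity_pps=2000):
    """Fraction of network capacity consumed by a given client load."""
    return (clients * tx_per_second * packets_per_tx) / capacity_pps

print(network_utilization(10))   # 10 clients use 10% of capacity (0.1)
print(network_utilization(100))  # 100 clients saturate the network (1.0)
```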

The key assets that a network provides to a client/server system are reliability, speed, and bandwidth. As with sizing your server, determining the proper network performance is just a matter of mathematics. The topic of network sizing is complex. There are a number of books written exclusively on this topic, and automated tools are available to help model your network.

Selecting the correct network to support your client/server system is a logical process. Many additional factors are important to keep in mind. For example, to determine reliability you need to consider the mean time between failures for routers, hubs, network interface cards (NICs) and other networking equipment. You also need to look at your existing network to determine compatibility and interoperability with other systems that you may need to talk to now, or in the future.

An additional aspect of network design is selection of a network protocol. A protocol is just a set of standard rules and procedures that allow computers to speak to one another. Major protocols include NetBIOS, SPX/IPX, APPC, and TCP/IP. When selecting network protocols, you should consider the types of systems you are connecting. For example, if you connect to a database server running under Unix, TCP/IP is already available on the Unix side. If you use a NetWare file server as your database server, then IPX/SPX is your protocol of choice.

You need to consider middleware as well. Middleware provides easy network and operating system access via a common interface mechanism that spans front-end and back-end processes. Using middleware, application developers need not understand the underlying network protocol or native operating system interfaces. There are several types of middleware, including remote procedure calls (RPCs), message-oriented middleware (MOM), database middleware, and object request brokers (ORBs). Each type of middleware brings its own set of advantages and deficiencies. An example of a solid middleware product is Borland's IDAPI, or Integrated Database Application Programming Interface. Developers can use IDAPI to access various remote database servers using many supported protocols (SPX/IPX, NetBIOS, TCP/IP). IDAPI is the access layer through which Delphi and other Borland products reach the Borland Database Engine (BDE).
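
The value of middleware's common interface can be illustrated with a small sketch. This is not IDAPI or any real product; every class and method name here is hypothetical, chosen only to show the shape of the idea: the application issues requests through one interface while interchangeable transports hide the protocol.

```python
# Illustrative middleware sketch (hypothetical names, not a real product):
# the application sees one interface; transports hide the protocol.

class Transport:
    """Abstract protocol layer; subclasses would wrap TCP/IP, IPX/SPX, etc."""
    def send(self, request: str) -> str:
        raise NotImplementedError

class LoopbackTransport(Transport):
    """Stand-in transport that 'answers' locally, for demonstration only."""
    def send(self, request: str) -> str:
        return f"result-of({request})"

class Middleware:
    """The common interface the application sees, whatever the transport."""
    def __init__(self, transport: Transport):
        self.transport = transport

    def query(self, sql: str) -> str:
        return self.transport.send(sql)

mw = Middleware(LoopbackTransport())
print(mw.query("SELECT * FROM customers"))  # result-of(SELECT * FROM customers)
```

Swapping `LoopbackTransport` for a TCP/IP-backed or IPX/SPX-backed transport would leave the application code untouched, which is precisely the insulation middleware promises.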

Another aspect of network design is throughput modeling. All shared services experience a phenomenon known as queuing delay, in which one or more components drags down overall throughput. A client/server system is a point-to-point system: the slowest component determines the performance of the whole, whether it is the graphical interface, a network card, the network itself, the server, or the protocols. To determine the throughput of the entire system, it's a good idea to map things out, taking all components into account.
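
The point-to-point argument above suggests a simple throughput model: the end-to-end rate is bounded by the minimum over all components in the chain. A minimal sketch, with hypothetical throughput figures chosen purely for illustration:

```python
# Throughput modeling sketch: in a point-to-point chain, the slowest
# component bounds end-to-end performance.

def system_throughput(components):
    """components maps name -> throughput (transactions/sec); minimum wins."""
    return min(components.values())

def bottleneck(components):
    """Name of the slowest component in the chain."""
    return min(components, key=components.get)

# Hypothetical component figures for illustration only:
chain = {"client GUI": 50, "NIC": 400, "network": 120, "server": 80}
print(system_throughput(chain))  # 50
print(bottleneck(chain))         # client GUI
```

A model like this makes it obvious where an upgrade pays off: raising the server from 80 to 200 transactions per second would change nothing here, because the client GUI is the bottleneck.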

Putting it all Together

There are many things to consider when designing a client/server system. Now that you have a basic overview of some of the concepts, let's arrange those concepts into a set of activities that can design and construct a client/server system for your organization. Please note that some of these activities can occur concurrently (if it makes sense to do so), while others are dependent upon the completion of other activities.

Select a Method - Before the design process begins you need to select the method you will employ to develop your client/server system. Will it be an off-the-shelf client/server design method, or will you build your own? A method outlines your procedures and the tools you will use during the design process, including requirements, analysis, design, construction, and some technology decisions.

Gather requirements - Gather all possible information that will help design or re-design the new or existing system. This entails user interviews, documentation review, examination of database structures and the current system, and so on. If you're using a client/server design method, you would employ the tools and procedures of that method.

Analysis - Make sense of the information you gathered. What functions are occurring? Who performs these functions? Again, your method of choice dictates how to perform the analysis and what tools to employ.

Design - Using the information from the Analysis stage, it's time to design the application in detail. Screen design, program logic, objects, and the physical database design are typical outputs from these activities.

Server and Client Selection - Armed with the knowledge of the number of clients and the types of applications, you can now select the server and the client that have the capacity to meet the needs of the organization.

Network and Middleware - Design a network to handle the overhead the server and the clients will place upon it. Here we're making network hardware, connection, and protocol decisions. We must also make the proper middleware decisions that allow processes running on the clients and servers to communicate with adequate performance.

Performance Modeling - Before finalizing the design, create a performance model that takes all the components into account. It is less expensive to spot problems on paper than it is to correct them after implementation.

This is a step-by-step process, but not a rigid one: you may return to any activity to make changes in that portion of the design for whatever reason. Designing a client/server system is, to a large degree, a matter of trial and error.

Once the above activities are complete, it's time to take your design and build the client/server application. Here we have a few additional decisions, such as which development tools or programming languages to use to build the software. Once the system construction is complete, system testing must occur. During this process, the new front-end and back-end software, the network, the clients, and the server are all tested together. Any bugs or other problems are identified and corrected. After testing the system thoroughly, legacy data moves to the new system. At this point it may be a good idea to do some parallel testing, just to make sure the old system and the new system return identical results.
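
The parallel-testing step can be automated with a simple comparison of result sets. This is a sketch under assumed data: the two result functions below are hypothetical stand-ins for the same query run against the legacy system and the new system.

```python
# Parallel-testing sketch: run the same query against both systems and
# flag rows that appear in one result set but not the other.

def legacy_results():
    # Hypothetical rows from the old system
    return [("1001", "Smith", 250.00), ("1002", "Jones", 125.50)]

def new_system_results():
    # Hypothetical rows from the new client/server system
    return [("1001", "Smith", 250.00), ("1002", "Jones", 125.50)]

def parallel_diff(old_rows, new_rows):
    """Rows missing from, or unexpectedly present in, the new system."""
    old, new = set(old_rows), set(new_rows)
    return {"missing_from_new": old - new, "unexpected_in_new": new - old}

diff = parallel_diff(legacy_results(), new_system_results())
if not diff["missing_from_new"] and not diff["unexpected_in_new"]:
    print("parallel test passed: both systems agree")
```

Running both systems side by side until the diff stays empty over real workloads is what gives you the confidence to retire the legacy system.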

If your organization has the available resources, it's also a good idea to create a system pilot. A system pilot tests your design as you go. It entails purchasing some of the hardware, software, and networks you plan to use and putting them through their paces.

Although client/server computing has brought a great deal of power, value, and flexibility to the data processing world, it comes with some baggage. The complexity of this type of system and its reliance on numerous components is a nightmare for system designers from the traditional world of centralized processing. However, client/server design, like any other system design activity, is just a matter of taking everything related to the system into account. Client/server means there is more to account for. Where system designers were once just application development or database development specialists, the client/server designer must be more of a generalist, having knowledge of networking, computer hardware, operating systems, database servers, development tools, and more. It is not a simple world anymore.


References

  1. Barbara Bochenski, Implementing Production-Quality Client/Server Systems, New York, NY: John Wiley & Sons, Inc.
  2. Robert Orfali, Dan Harkey, and Jeri Edwards, Essential Client/Server Survival Guide, New York, NY: Van Nostrand Reinhold.
  3. David Linthicum, "Moving Away from the Network, Using Middleware," DBMS, January 1994.
  4. David Linthicum, "Reconsidering Message Middleware," DBMS, March 1995.
  5. David Linthicum, "Client/Server Design Strategy," DBMS, April 1994.
  6. David Linthicum, Windows Connectivity Secrets, San Mateo, CA: IDG Books Worldwide, Inc.
  7. David Vaskevitch, Client/Server Strategies, San Mateo, CA: IDG Books Worldwide, Inc.
  8. Paul Renaud, Introduction to Client/Server Systems - A Practical Guide for Systems Professionals, New York, NY: John Wiley & Sons, Inc.