An Architecture for NIC Driver Optimization
A White Paper
Stephen Ricca, Director of LAN Engineering
Introduction
A network interface card (NIC) combines hardware and software driver
functionality to provide a pipeline between the host computing system and
a communications network. As such, a NIC's primary function is to
move data between the chosen network type and the system, abiding by the
various protocols required on both sides. The NIC hardware typically
connects to the system through a native I/O bus, usually one specified by an industry standard. This makes it possible for independent vendors to build a NIC that operates in a wide range of different systems.
The NIC software driver allows the host OS, and its related applications
and protocols, to communicate with the NIC. Depending on the type
of customer, the driver is delivered in one of two forms: an executable,
binary, plug-and-play form, or a source code form. A unique driver
is typically developed for each target OS. A typical host OS generally
supports several
different protocol stacks. A properly designed driver should
enable multiple protocol stacks to share the NIC hardware and network media
transparently. Regardless of the specific target host computing system
and OS type, the following general, high level model shows how a NIC software
driver fits into the overall system.
General NIC Software Architecture Model
Due to computer system manufacturers' frequent use of industry standard
system I/O buses, a single NIC product is expected to operate across a
very wide range of systems with diverse hardware architectures, and many
different OSs and protocol environments. Hence, each NIC typically
requires many different drivers. This considerably complicates the
NIC software
architecture and driver development process.
In this article we discuss some of the important attributes of a NIC
driver architecture and development methodology. Then we describe
RNS's custom VDA software technology that was developed as a universal
solution to these needs.
Key Architecture and Design Attributes
Establishing a core software architecture and design that promotes software
module reuse across the extremely diverse set of system, OS, protocol and
network environments presents a substantial challenge to the NIC developer.
To further complicate the problem, various OS vendors have independently developed their own models, modular architectures and software interface specifications, designed to:
- de-couple the network device drivers from the protocol stacks;
- provide a framework for third party NIC developers to develop network products independent of the particular protocols and system services;
- allow multiple network and protocol types to coexist in the system;
- provide flexibility to add new network and protocol technologies as they are established.
Three of the most prominent models and software interface specifications
used today for NIC drivers are:
- Open Data-Link Interface (ODI), developed by Novell
- Network Driver Interface Specification (NDIS), developed by Microsoft
- Data Link Provider Interface (DLPI), developed by AT&T as part of the STREAMS architecture.
The following table presents a summary characterization of some of the
popular host OSs, and how they utilize the above models and driver interface
specifications. It should be noted that some of these OSs actually
support multiple driver interface types; the one that is listed is the
most commonly used.
Characterization of Operating Systems
Full STREAMS/DLPI-Based:
  Solaris 2.x, SVR4 UNIX, DG-UX, OSF-1, UnixWare, pSOS+ Open
STREAMS In Kernel But No DLPI Through Protocol Stack and Drivers:
  Solaris 1.x, HP-UX, HP-RT 2.x, SCO UNIX, AIX, LynxOS 2.x
NDIS-Based:
  Windows for WG, Windows NT/95, OS/2
ODI-Based:
  NetWare
Other:
  VxWorks, pSOS+, HP-RT 1.1, PDOS, OS-9, MacOS Sys. 7, MacOS Sys. 7.5
A successful, high performance NIC driver architecture and design methodology
must possess the following key attributes:
Efficient Portability Across OSs: The NIC driver must be easily
portable across multiple OS types, including those referenced above.
In addition, as
system vendors introduce maintenance releases of existing OSs, major
new versions of existing OSs, and brand new OSs, the driver for an existing
NIC should be easily ported to the new OSs. In order to accomplish
this, all system OS dependencies need to be isolated behind a clearly defined,
generic interface.
Efficient Portability Across NIC Types: The NIC driver must be
portable across multiple system hardware architectures including many different
types of system I/O buses (e.g. VMEbus, PCI Local Bus, etc.)
and host processor technologies. In addition, the driver must be
easily portable across multiple network technologies (e.g. FDDI,
Fast Ethernet, ATM, ISDN, etc.). In order to accomplish this, all system
and network hardware dependencies need to be isolated behind a clearly
defined, generic interface.
Support for Multiple Transport Protocols: The NIC driver must
be designed to be independent of the upper layer protocol stack.
Most OSs these days contain support for multiple transport layer protocol
stacks and can be extended to include newer protocol stacks (e.g.
XTP). The customer should have the flexibility to use the protocol
stack that meets their requirements, without having to make changes to
the NIC driver. In order to accomplish this, all system protocol
stack dependencies need to be isolated behind a clearly defined, generic
interface.
Time-To-Market: Customers demand that NICs support the latest
network technologies, computing systems and OS versions quickly after they
are
announced. Given the proliferation of different system architectures,
OSs and network technologies, meeting customer requirements is becoming
increasingly difficult. To meet these requirements and provide
support for a large number of different target environments, the NIC software
architecture should be modular and scalable so that already tested core
modules can be reused intact or easily ported across different NIC products
(i.e. different system I/O bus architectures, OSs and network types).
Although the drivers are based on reused code modules, the quality, performance and functionality of the end product should not be sacrificed (i.e. the drivers should perform as well as traditional, custom, hand-crafted drivers).
Ease and Quality of Support and Maintenance: Since a single NIC
hardware device could have ten or more different software drivers, long-term
driver maintenance and customer support become even more critical.
The drivers should be as easy to support and maintain as possible.
In addition, each type of customer has a slightly different set of product software features and related documentation and support requirements.
For example, end users typically require a turnkey, binary, plug-and-play
product. OEMs and System Integrators may require this type of product,
but more typically they require a source-code based product since they
are interested in doing their own porting, or customizing the code to meet
their unique needs. Therefore, the drivers should be available in
both binary/plug-and-play and source kit forms.
RNS VDA Software Solution
VDA is RNS's custom Virtual Driver Architecture and core technology,
specifically designed to provide the above attributes. In short,
VDA
provides a core software framework which optimizes the design and porting
of NIC software drivers for different types of customers, independent of
the specific system hardware architecture, OS and protocol types.
The first thing to note about VDA is that it is based on a source code
management and "build" environment which improves the NIC software
maintainability by providing a common, unified framework for managing the various code modules, and the different compilers and linkers required to create executable drivers for the various target OSs.
VDA can be generically modeled as shown below. It is a modular
architecture with clearly defined and documented, generic interfaces between
each of the modules. The shaded modules are architected to be transferable
intact between various target OS and NIC hardware environments (i.e.
they are port-independent). The design and functionality of the unshaded
modules depends on the specific OS and NIC hardware being used (i.e.
they are port-dependent).
VDA High Level Model
Port-Independent Core Modules:
To accomplish the goal of efficient portability and core module reuse
across OSs and NIC types, the NIC driver is developed around two major,
port-independent, portable, core modules:
The Data Link Protocols (DLP) Module provides the functions required
to transmit and receive data packets between the transport/network level
(ISO-OSI Layers 3 and 4) protocol stack(s) and the NIC hardware, including
facilities for multiplexing data paths between multiple protocol stacks
and multiple NICs, supporting multicast addresses, passing network address
and various network management statistics, etc. In OS environments where the Layer 3/4 protocols understand the target network interface (i.e. understand the Layer 2 packet/header format), the DLP module may do no
more than provide a direct connection between the Protocol Stack Dependent
and NIC Hardware Dependent Modules. In OS environments where the
Layer 3/4 protocols do not understand the target network Layer 2 packet/header
format, the DLP Module includes facilities to reformat the packets and
generate the required MAC/LLC/SNAP headers. For example, some OS
environments still do not understand FDDI packets; in these cases the DLP
module translates between FDDI and Ethernet packet formats.
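As a rough illustration of this kind of translation, the following sketch builds the FDDI MAC and 802.2 LLC/SNAP headers needed to carry an Ethernet II frame on an FDDI network; the structure layout and routine name are illustrative assumptions, not the actual RNS DLP code.

/*
 * Minimal sketch of Ethernet-to-FDDI header translation using LLC/SNAP
 * encapsulation.  The structure layout and routine name are illustrative
 * assumptions, not the actual RNS DLP implementation.
 */
#include <stdint.h>
#include <string.h>

struct fddi_llc_hdr {
    uint8_t fc;               /* FDDI frame control (asynchronous LLC frame) */
    uint8_t dst[6];           /* destination MAC address                     */
    uint8_t src[6];           /* source MAC address                          */
    uint8_t dsap, ssap;       /* 802.2 LLC SNAP SAPs (0xAA)                  */
    uint8_t ctrl;             /* unnumbered information (0x03)               */
    uint8_t oui[3];           /* SNAP OUI 00-00-00 (encapsulated Ethernet)   */
    uint8_t ethertype[2];     /* original Ethernet type, network byte order  */
};

/* Build the FDDI MAC + LLC/SNAP header for an outgoing Ethernet II frame. */
static void dlp_eth_to_fddi(struct fddi_llc_hdr *h, const uint8_t dst[6],
                            const uint8_t src[6], uint16_t ethertype)
{
    h->fc = 0x50;                        /* LLC frame, 48-bit addresses */
    memcpy(h->dst, dst, 6);
    memcpy(h->src, src, 6);
    h->dsap = 0xAA;
    h->ssap = 0xAA;
    h->ctrl = 0x03;
    memset(h->oui, 0, 3);
    h->ethertype[0] = (uint8_t)(ethertype >> 8);   /* e.g. 0x0800 for IP */
    h->ethertype[1] = (uint8_t)(ethertype & 0xFF);
}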
The Network Management Protocols (NMP) Module provides any local as
well as global Layer 2 node management protocols/functions required by
the target network interface. Example local management functions
include node configuration, initialization, connection/disconnection from
the media, failure detection/recovery, error and management statistics
gathering, etc. Example global management functions include providing a
data exchange channel for management communication with other nodes on
the network and reporting various management information as required.
Some networks such as FDDI and ATM use these protocols to provide facilities
for connecting to, disconnecting from and operating on the network.
The seamless portability of this module is especially important because
as the network specification matures and the protocol standards are enhanced,
the required code changes can be made, tested and verified once in a single
base of code, and then propagated across all the various target NIC and
OS environments.
Port-Dependent Modules:
The general function of the port-dependent modules is to "glue" the
portable, port-independent, core modules to a specific target NIC and system
OS
combination. The details of how this is done are specific to each different NIC/OS combination.
The Protocol Stack Dependent (PSD) Module translates between the data
structures and other interface requirements of the native OS services
and Layer 3/4 protocols, and those of the core DLP module. The PSD
module is an "active" module which drives the core DLP module as a result
of requests from other portions of the system (i.e. native data transfer
applications or system services). Additionally, the PSD module includes
any initialization functions which are required by the specific OS, such
as routines called at boot time to integrate the driver with the rest of
the system. The PSD module is developed by RNS for each supported
target OS.
The Management I/O Dependent (MID) Module translates between the network
management-related data structures and other interface requirements of
the native OS and applications, and those of the core NMP module (if one
is required for the specific network interface). The MID module is
also an "active" module which drives the core NMP module as a result of
requests from other portions of the system (i.e. native network management
applications or system services). Additionally, the MID module includes
any initialization functions which are required by the specific OS.
The MID module is developed by RNS, when required by the network interface,
for each supported target OS.
The PSD and MID software operate together, utilizing functions of the
other software modules and the native operating system, to ensure that the collective set of software drivers is properly hooked into the specific
system and protocol stacks.
The NIC Hardware Dependent (NHD) Module is developed for a specific,
target NIC hardware implementation. In general it contains four basic
types of facilities: data movement, interrupt management, MAC control
and PHY control. The NHD translates between the data structures and
other interface and signaling requirements of the NIC hardware, and those
of the core DLP and NMP modules, and also provides a mechanism for
passing data between these various elements. The NHD module allows
the DLP and NMP modules access to the underlying NIC hardware registers
using a standardized, virtual interface which is designed to be consistent
across various NICs (covering variations in both system I/O bus and network
types). The NHD includes knowledge of how to most efficiently access
and utilize the facilities of the NIC hardware. The NHD is generally
a "passive" module which reacts to requests from the core DLP and NMP modules.
The System Dependent Services (SDS) Module utilizes the native facilities
of the target OS to provide functions required by the DLP, NMP and NHD
modules. The SDS module is a passive entity, often implemented as a library
of routines which are "callable" by the other modules. The SDS provides
functions generally provided by the OS (e.g. timers, synchronization primitives, interrupt handling, memory allocation and mapping for NIC direct memory access (DMA) transfers, buffer structures, etc.), functions required
for network communications (e.g. manipulating the contents of data
buffers, providing buffers for received network packets), and generic library
support routines (e.g. displaying messages on a console, manipulating
queue structures, etc.).
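The sketch below suggests the kind of service library an SDS module presents to the port-independent core modules; the routine names and signatures are illustrative assumptions, and each port would implement them with the native OS facilities.

/*
 * Sketch of an SDS service library as seen by the core DLP, NMP and NHD
 * modules.  Names and signatures are illustrative assumptions; each port
 * implements them using native OS facilities.
 */
#include <stddef.h>
#include <stdint.h>

struct vmsg;                                     /* port-specific virtual message */

/* OS services: memory, DMA mapping, timers, synchronization */
void        *sds_malloc(size_t len);
void         sds_free(void *p, size_t len);
uint32_t     sds_dma_map(void *vaddr, size_t len);     /* returns a bus address */
void         sds_timeout(void (*fn)(void *), void *arg, unsigned int msec);
void         sds_lock(void);
void         sds_unlock(void);

/* network communication support */
struct vmsg *sds_rxbuf_alloc(size_t len);        /* buffer for a received packet */
void         sds_copy_data(struct vmsg *dst, const void *src, size_t len);

/* generic library support */
void         sds_printf(const char *fmt, ...);   /* console messages */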
Module Interfaces:
As with any well designed, modular architecture, standard, generic interfaces
must be clearly defined and documented between the various modules.
These interfaces serve to isolate the OS-specific, hardware-specific and
protocol-specific functions. In order to optimize the flexibility,
reuse and
portability across all the above target environments, these interface
definitions must be well thought out and adhered to. VDA defines
and adheres
to four such interfaces:
- The System Dependent Interface (SDI)
- The Protocol Dependent Interface (PDI)
- The NIC Hardware Dependent Interface (HDI)
- The Management Input/Output (MIO) Interface
Case Study: Windows NT Driver for PCI/FDDI NIC:
To date, RNS has used the VDA framework to develop many different drivers
for many different NICs and OSs. Currently supported NICs include
the 1156 and 1250 Series VMEbus/FDDI NICs, the 2200 Series PCI/FDDI NICs,
and the 2300 Series PCI/Fast Ethernet NICs. Currently supported OSs
include Solaris 1.x and 2.x, HP-UX and RT, DG-UX, NetWare, Windows NT and
95, MacOS System 7.5, SCO UNIX, UnixWare, AIX, OS/2, VxWorks, pSOS+, etc.
This section describes how VDA was used to create the Windows NT driver
for the RNS 2200 Series
PCI/FDDI NIC.
The VDA framework simplified this development task in the following
ways. First, it broke the development into several well defined components
which allowed the engineer working on the Windows NT driver to directly
leverage the efforts of other engineers simultaneously working on other
driver ports. In this fashion, due to the common architectural framework
of VDA, a small software team was able to manage the simultaneous creation
of 2200 drivers across the many different OSs referenced above, in roughly
a ten month period. Second, due to the reuse of core modules and
VDA's clearly specified interfaces, only a few small, NT-specific code modules had to be developed.
Since the FDDI network requires the Station Management (SMT) protocol,
the NMP module contains the upper layer Connection Management and Frame
Services portions of this protocol. The NMP module for the various
2200 drivers is exactly the same piece of code that was previously developed
for the 1250 Series NIC; this core code was reused without modification
in all of the 2200 Series ports. In the particular case of the NT
driver port, the other VDA port-independent module, the DLP, was not required
because the native NT protocol stacks provide the LLC and FDDI MAC protocols.
As a result, in NT the PSD module was glued directly above the NHD module.
SCO UNIX and NetWare driver ports for the 2200 NIC had been started
slightly ahead of the Windows NT port, allowing the 2200-specific NHD module
to be taken intact from those efforts. Thus, the Windows NT port
only consisted of developing the SDS module, the PSD module and the SMT-specific MID module. Because each of these modules had well documented interfaces,
the task was simplified even further. In addition, the quality of
the Windows NT driver was increased by leveraging the already completed
debugging efforts of the SCO UNIX and NetWare drivers.
The majority of the code for the Windows NT-specific SDS, PSD, and SMT
MID modules was simple to develop. The PSD module must translate between Windows NT entry points and the NHD module's entry points. As an example, the DriverSend function must call the rns_adap_xmit_req function. The SDS routines are mostly translations from services which the core VDA modules require into Windows NT equivalents. For example, the core modules call the SDS routine rns_timeout, which translates the parameters and invokes the Windows NT-specific routine KeSetTimer().
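A condensed sketch of these two translations is given below. DriverSend, rns_adap_xmit_req, rns_timeout and KeSetTimer are the names used above; the signatures, the vmsg type and the psd_build_vmsg() helper are assumptions made for illustration, not the actual RNS or NDIS prototypes.

/*
 * Sketch of the PSD and SDS translations described above.  Only KeSetTimer()
 * is a real Windows NT kernel routine; everything else is illustrative.
 */
#include <ntddk.h>

struct vmsg;                                       /* NT-specific virtual message */
extern struct vmsg *psd_build_vmsg(void *ndis_packet);     /* hypothetical helper */
extern int rns_adap_xmit_req(int unit, struct vmsg *msg);  /* core HDI entry point */

/* PSD: the NT send entry point funnels into the generic HDI transmit request. */
NTSTATUS DriverSend(void *ndis_packet)
{
    struct vmsg *msg = psd_build_vmsg(ndis_packet);
    return (rns_adap_xmit_req(0, msg) == 0) ? STATUS_SUCCESS : STATUS_UNSUCCESSFUL;
}

/* SDS: the generic timeout service realized with the NT kernel timer. */
void rns_timeout(PKTIMER timer, PKDPC dpc, ULONG msec)
{
    LARGE_INTEGER due;
    due.QuadPart = -((LONGLONG)msec * 10000);      /* relative time, 100 ns units */
    KeSetTimer(timer, due, dpc);
}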
Doing a new port using VDA has been made to sound quite easy, and it essentially is. However, there is one portion of each port which takes some thought: the virtual message.
The virtual message must be custom designed for each port. It is
the virtual message which allows the transmission and reception of messages
to be handled in an OS independent manner, without copying of data.
A custom message structure could have been created in VDA which would not
change from port to port. Although this would simplify the HDI interface
specification, it would also force us to copy all transmit and receive
data from the native OS message structure to our custom message structure,
which would impact performance. Instead, VDA allows the definition of the virtual message to be dependent on the target OS. In order to allow the HDI interface to be unchanged between OSs, all accesses to the virtual message needed to be made via routines supplied by the SDS module.
Thus, when doing a port, the type definition for the virtual message, and
a set of related routines and macros, must be supplied to manipulate the
virtual message. Virtual message routines include things such as
the ability to get the pointer to the data, set the length of the message,
and copy the message.
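To make this concrete, the following sketch shows the shape such a per-port virtual message interface could take; the typedef and routine names are illustrative assumptions rather than the actual SDS definitions.

/*
 * Sketch of the per-port virtual message interface described above.  Each
 * port defines the vmsg layout and supplies these accessors so the HDI can
 * stay unchanged across OSs.  Names are illustrative assumptions.
 */
#include <stddef.h>

typedef struct vmsg vmsg_t;          /* layout is defined per port (OS-specific) */

void   *vmsg_data(vmsg_t *msg);                    /* pointer to the packet data  */
size_t  vmsg_len(vmsg_t *msg);                     /* current message length      */
void    vmsg_set_len(vmsg_t *msg, size_t len);     /* set the message length      */
void    vmsg_copy(vmsg_t *dst, const vmsg_t *src); /* copy one message to another */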
Depending on the port, the design of the virtual message can vary greatly.
In the case of NetWare, the virtual message is simply defined to be a union of the TCB (Transmit Control Block) and RCB (Receive Control Block). Both of these structures contain a user-definable
area, which can be used to flag whether the virtual message is a TCB or
an RCB. The reason that the virtual message can be made so simple in NetWare
is that mapping memory for DMA transfers from the NIC is a simple address
translation. (In fact, in some versions there is no translation at
all.)
The virtual message for Windows NT required more complexity than that
of NetWare. The native Windows NT message consists of a PNDIS_PACKET
which points to one or more PNDIS_BUFFERs. Each PNDIS_BUFFER contains
a pointer to a contiguous virtual memory area which could be fragmented
into several pieces of physical memory. In order to transmit the
data associated with the PNDIS_PACKET, a pointer to each physical memory
area must be given to the NIC. The virtual message is used to accomplish
this task without putting Windows NT specific knowledge in the NHD core
module. The virtual message is defined to contain an array of physical
address pointers and a pointer to the PNDIS_PACKET. When the Windows
NT DriverSend routine is called, Windows NT specific routines are called
to load the physical addresses of the PNDIS_PACKET into the virtual message.
The virtual message is then given to the HDI for transmission. The HDI calls the SDS module to get the physical addresses from the virtual message without needing to be Windows NT specific.
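A possible layout for such a Windows NT transmit virtual message is sketched below; the field names and the fragment limit are assumptions made for illustration.

/*
 * Sketch of a Windows NT transmit virtual message: a pointer back to the
 * NDIS packet plus the physical address of each of its fragments.  Field
 * names and RNS_MAX_FRAGS are illustrative assumptions.
 */
#include <ndis.h>

#define RNS_MAX_FRAGS 16                           /* assumed fragment limit */

typedef struct nt_vmsg {
    PNDIS_PACKET          packet;                  /* original NT packet            */
    ULONG                 frag_count;              /* valid entries in phys[]/len[] */
    NDIS_PHYSICAL_ADDRESS phys[RNS_MAX_FRAGS];     /* per-fragment physical address */
    ULONG                 len[RNS_MAX_FRAGS];      /* per-fragment length in bytes  */
} nt_vmsg_t;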
The virtual message for network reception in Windows NT is simpler than
transmit because the Windows NT OS requires all receive buffers to be copied.
The receive virtual messages can be created in a simple format with the
data area pre-allocated and pre-mapped for DMA transfer. After the
message is received, it is copied into a Windows NT message.