Introduction to Linux
Linux is quite possibly the most important free software achievement
since the original Space War, or, more recently, Emacs. It has developed
into an operating system for business, education, and personal productivity.
Linux is no longer only for UNIX wizards who sit for hours in front of
a glowing console (although we assure you that many users fall into this
category).
Linux (pronounced with a short i, as in LIH-nucks) is a UNIX operating
system clone which runs on a variety of platforms, especially personal
computers with Intel 80386 or better processors. It supports a wide range
of software, from TeX, to the X Window System, to the GNU C/C++ compiler,
to TCP/IP. It's a versatile, bona fide implementation of UNIX, freely distributed
under the terms of the GNU General Public License (see Appendix C).
Linux can turn any 80386 or better personal computer into a workstation
that puts the full power of UNIX at your fingertips. Businesses install
Linux on entire networks of machines, and use the operating system to manage
financial and hospital records, distributed computing environments, and
telecommunications. Universities worldwide use Linux to teach courses on
operating system programming and design. Computing enthusiasts everywhere
use Linux at home for programming, productivity, and all-around hacking.
What makes Linux so different is that it is a free implementation of
UNIX. It was and still is developed cooperatively by a group of volunteers,
primarily on the Internet, who exchange code, report bugs, and fix problems
in an open-ended environment. Anyone is welcome to join the Linux development
effort. All it takes is interest in hacking a free UNIX clone, and some
programming know-how.
A brief history of Linux
UNIX is one of the most popular operating systems worldwide because
of its large support base and distribution. It was originally developed
at AT&T as a multitasking system for minicomputers and mainframes in
the 1970's, but has since grown to become one of the most widely-used operating
systems anywhere, despite its sometimes confusing interface and lack of
central standardization.
Many hackers feel that UNIX is the Right Thing--the One True Operating
System. Hence, the development of Linux by an expanding group of UNIX hackers
who want to get their hands dirty with their own system.
Versions of UNIX exist for many systems, from personal computers to
supercomputers like the Cray Y-MP. Most versions of UNIX for personal computers
are expensive and cumbersome.
Linux is a free version of UNIX developed primarily by Linus Torvalds
at the University of Helsinki in Finland, with the help of many UNIX programmers
and wizards across the Internet. Anyone with enough know-how and gumption
can develop and change the system. The Linux kernel uses no code from AT&T
or any other proprietary source, and much of the software available for
Linux was developed by the GNU project of the Free Software Foundation
in Cambridge, Massachusetts, U.S.A.
However, programmers from all over the world have contributed to the
growing pool of Linux software.
Linux was originally developed as a hobby project by Linus Torvalds.
It was inspired by Minix, a small UNIX system developed by Andy Tanenbaum.
The first discussions about Linux were on the Usenet newsgroup, comp.os.minix.
These discussions were concerned mostly with the development of a small,
academic UNIX system for Minix users who wanted more.
The very early development of Linux mostly dealt with the task-switching
features of the 80386 protected-mode interface, all written in assembly
code. Linus writes,
``After that it was plain sailing: hairy coding
still, but I had some devices, and debugging was easier. I started using
C at this stage, and it certainly speeds up development. This is also when
I started to get serious about my megalomaniac ideas to make `a better
Minix than Minix.' I was hoping I'd be able to recompile gcc under Linux
someday...
Two months for basic setup, but then only slightly
longer until I had a disk driver (seriously buggy, but it happened to work
on my machine) and a small file system. That was about when I made 0.01
available (around late August of 1991): it wasn't pretty, it had no floppy
driver, and it couldn't do much of anything. I don't think anybody ever
compiled that version. But by then I was hooked, and didn't want to stop
until I could chuck out Minix.''
No announcement was ever made for Linux version 0.01. The 0.01 sources
weren't even executable. They contained only the bare rudiments of the
kernel source and assumed that you had access to a Minix machine to compile
and experiment with them.
On October 5, 1991, Linus announced the first ``official'' version of
Linux, which was version 0.02. At that point, Linus was able to run bash
(the GNU Bourne Again Shell) and gcc (the GNU C compiler), but not much
else. Again, this was intended as a hacker's system. The primary focus
was kernel development--user support, documentation, and distribution had
not yet been addressed. Today, the Linux community still seems to treat
these issues as secondary to ``real programming''--kernel
development.
As Linus wrote in comp.os.minix,
``Do you pine for the nice days of Minix-1.1,
when men were men and wrote their own device drivers? Are you without a
nice project and just dying to cut your teeth on an OS you can try to modify
for your needs? Are you finding it frustrating when everything works on
Minix? No more all-nighters to get a nifty program working? Then this post
might be just for you.
As I mentioned a month ago, I'm working on
a free version of a Minix-look-alike for AT-386 computers. It has finally
reached the stage where it's even usable (though may not be, depending
on what you want), and I am willing to put out the sources for wider distribution.
It is just version 0.02...but I've successfully run bash, gcc, gnu-make,
gnu-sed, compress, etc. under it.''
After version 0.03, Linus bumped up the version number to 0.10, as more
people started to work on the system. After several further revisions,
Linus increased the version number to 0.95 in March, 1992, to reflect his
expectation that the system was ready for an ``official'' release soon.
(Generally, software is not assigned the version number 1.0 until it is
theoretically complete or bug-free.) Almost a year and a half later, in
late December of 1993, the Linux kernel was still at version 0.99.pl14
-- asymptotically approaching 1.0. At the time of this writing, the current
stable kernel version is 2.0 patchlevel 33, and version 2.1 is under development.
Most of the major, free UNIX software packages have been ported to Linux,
and commercial software is also available. More hardware is supported than
in the original kernel versions. Many people have executed benchmarks on
80486 Linux systems and found them comparable with mid-range workstations
from Sun Microsystems and Digital Equipment Corporation. Who would ever
have guessed that this ``little'' UNIX clone would grow up to take on the
entire world of personal computing?
System features
Linux supports most features found in other implementations of UNIX, plus
many that aren't found elsewhere.
Linux is a complete multitasking, multiuser operating system, as are
all other versions of UNIX. This means that many users can log into and
run programs on the same machine simultaneously.
The Linux system is mostly compatible with several UNIX standards (inasmuch
as UNIX has standards) at the source level, including IEEE POSIX.1, UNIX
System V, and Berkeley Software Distribution (BSD) UNIX. Linux was developed with
source code portability in mind, and it's easy to find commonly used features
that are shared by more than one platform. Much of the free UNIX software
available on the Internet and elsewhere compiles under Linux ``right out
of the box.'' In addition, all of the source code for the Linux system,
including the kernel, device drivers, libraries, user programs, and development
tools, is freely distributable.
Other specific internal features of Linux include POSIX job control
(used by shells like csh and bash), pseudoterminals (pty devices), and
support for dynamically loadable national or customized keyboard drivers.
Linux supports virtual consoles that let you switch between login sessions
on the same system console. Users of the screen program will find the Linux
virtual console implementation familiar.
The kernel can emulate 387-FPU instructions, and systems without a math
coprocessor can run programs that require floating-point math capability.
Linux supports various file systems for storing data, like the ext2
file system, which was developed specifically for Linux. The Xenix and
UNIX System V file systems are also supported, as well as the Microsoft
MS-DOS and Windows 95 VFAT file systems on a hard drive or floppy. The
ISO 9660 CD-ROM file system is also supported. We'll talk more about file
systems in chapters 2 and 4.
Linux provides a complete implementation of TCP/IP networking software.
This includes device drivers for many popular Token Ring and Ethernet cards,
SLIP (Serial Line Internet Protocol) and PPP (Point-to-Point Protocol),
which provide access to a TCP/IP network via a serial connection, PLIP
(Parallel Line Internet Protocol), and NFS (Network File System). The complete
range of TCP/IP clients and services is also supported, which includes
FTP, telnet, NNTP, and SMTP.
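To give a flavor of the programming interface beneath all this, here is a
minimal sketch of a TCP client written against the BSD sockets API, as found
on Linux. The address and port below (the standard ``echo'' service on the
local machine) are placeholders for illustration, and error handling is kept
to a minimum:

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

int main(void)
{
    struct sockaddr_in addr;
    char buf[64];
    int fd, n;

    fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(7);                      /* the "echo" service */
    addr.sin_addr.s_addr = inet_addr("127.0.0.1"); /* local machine */

    if (connect(fd, (struct sockaddr *) &addr, sizeof(addr)) < 0) {
        perror("connect");
        return 1;
    }
    write(fd, "hello\n", 6);          /* send a line... */
    n = read(fd, buf, sizeof(buf));   /* ...and read the echoed reply */
    if (n > 0)
        write(1, buf, n);             /* copy the reply to stdout */
    close(fd);
    return 0;
}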
The Linux kernel is developed to use protected-mode features of Intel
80386 and better processors. In particular, Linux uses the protected-mode,
descriptor-based memory-management paradigm, and other advanced features.
Anyone familiar with 80386 protected-mode programming knows that this chip
was designed for multitasking systems like UNIX. Linux exploits this functionality.
The kernel supports demand-paged executables: only those segments
of a program which are actually in use are read into memory from disk.
Also, copy-on-write pages are shared among executables. If several instances
of a program are running at once, they share physical memory, which reduces
overall usage.
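The effect of copy-on-write is easy to see with the fork() system call. In
the following sketch, parent and child share the same physical pages
immediately after the fork; the child's assignment then forces a private
copy of the affected page, so the parent's copy of the variable is
untouched:

#include <stdio.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

int value = 42;

int main(void)
{
    pid_t pid;

    pid = fork();
    if (pid == 0) {          /* child */
        value = 99;          /* writing forces a private copy of the page */
        printf("child sees %d\n", value);
        _exit(0);
    }
    wait(NULL);              /* parent: let the child finish first */
    printf("parent still sees %d\n", value);   /* prints 42 */
    return 0;
}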
In order to increase the amount of available memory, Linux also implements
disk paging. Up to one gigabyte of swap space may be allocated on disk
(up to 8 partitions of 128 megabytes each). When the system requires more
physical memory, it swaps inactive pages to disk, letting you run larger
applications and support more users. However, swapping data to disk is
no substitute for physical RAM, which is much faster.
The Linux kernel also implements a unified memory pool for user programs
and disk cache. All free memory is used by the cache, which is reduced
when running large programs.
Executables use dynamically linked, shared libraries, so that programs share
common code from a single library file on disk. This is not unlike the
SunOS shared library mechanism.
Executable files occupy less disk space, especially those which use many
library functions. There are also statically linked libraries for object
debugging and maintaining ``complete'' binary files when shared libraries
are not installed. The libraries are dynamically linked at run time, and
the programmer can use his or her own routines in place of the standard
library routines.
To facilitate debugging, the kernel generates core dumps for post-mortem
analysis. A core dump and an executable linked with debugging support allow
a developer to determine what caused a program to crash.
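For example, consider this deliberately broken program (the file name
crash.c is made up for the example). Compiled with debugging support, say
with gcc -g crash.c, the crash leaves a core file that can be loaded with
gdb a.out core and inspected with commands like backtrace:

int main(void)
{
    char *p = 0;     /* a null pointer... */

    *p = 'x';        /* ...dereferenced: segmentation fault, core dumped */
    return 0;
}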
Software features
Virtually every utility one would expect of a standard UNIX implementation
has been ported to Linux, including basic commands like ls, awk, tr, sed,
bc, and more. The familiar working environment of other UNIX systems is
duplicated on Linux. All standard commands and utilities are included.
Many text editors are available, including vi, ex, pico, jove, joe, and GNU
emacs, as well as variants like Lucid emacs, which incorporates extensions
for the X Window System. The text editor you're accustomed to using has more
than likely been ported to Linux.
The choice of a text editor is an interesting one. Many UNIX users prefer
``simple'' editors like vi. (The original author wrote this book with vi.)
But vi has many limitations due to its age, and modern editors like emacs
have gained popularity. emacs supports a complete, Lisp-based macro language
and interpreter, powerful command syntax, and other extensions. There are
emacs macro packages which let you read electronic mail and news, edit
directory contents, and even engage in artificially
intelligent psychotherapy sessions (indispensable for stressed-out
Linux hackers).
Most of the basic Linux utilities are GNU software. GNU utilities support
advanced features that are not found in the standard BSD and UNIX System V
programs. For example, the GNU vi clone, elvis, includes a
structured macro language that differs from the original implementation.
However, GNU utilities are intended to remain compatible with their BSD
and System V counterparts. Many people consider the GNU versions to be
superior to the originals.
A shell is a program which reads and executes commands from the user.
In addition, many shells provide features like job control, managing several
processes at once, input and output redirection, and a command language
for writing shell scripts. A shell script is a program in the shell's command
language and is analogous to an MS-DOS batch file.
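To make this concrete, here is a sketch in C of the loop at the heart of
every shell: read a command, start a child process to run it, and wait for
the child to finish. Real shells add argument parsing, redirection, job
control, and a scripting language on top of this loop (the prompt string
here is invented for the example):

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

int main(void)
{
    char line[256];

    for (;;) {
        printf("myshell> ");              /* prompt (name is made up) */
        fflush(stdout);
        if (fgets(line, sizeof(line), stdin) == NULL)
            break;                        /* end of input: exit the shell */
        line[strcspn(line, "\n")] = '\0'; /* strip the trailing newline */
        if (line[0] == '\0')
            continue;                     /* empty line: prompt again */
        if (fork() == 0) {                /* child: run the command */
            execlp(line, line, (char *) NULL);
            perror(line);                 /* reached only if exec fails */
            _exit(1);
        }
        wait(NULL);                       /* parent: wait for the child */
    }
    return 0;
}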
Many types of shells are available for Linux. The most important difference
between shells is the command language. For example, the C Shell (csh)
uses a command language similar to the C programming language. The classic
Bourne Shell (sh) uses another command language. The choice of a shell is
often based on the command language it provides, and determines, to a large
extent, the qualities of your working environment under Linux.
The GNU Bourne Again Shell (bash) is a variation of the Bourne Shell
which includes many advanced features like job control, command history,
command and filename completion, an emacs-like interface for editing command
lines, and other powerful extensions to the standard Bourne Shell language.
Another popular shell is tcsh, a version of the C Shell with advanced functionality
similar to that found in bash. Other shells include zsh, a small Bourne-like
shell; the Korn Shell (ksh); BSD's ash; and rc, the Plan 9 shell.
If you're the only person using the system and prefer to use vi and bash
exclusively as your editor and shell, there's no reason to install other
editors or shells. This ``do it yourself'' attitude is prevalent among
Linux hackers and users.
Text processing and word processing
Almost every computer user needs a method of preparing documents. In
the world of personal computers, word processing is the norm: editing and
manipulating text in a ``What-You-See-Is-What-You-Get'' (WYSIWYG) environment
and producing printed copies of the text, complete with graphics, tables,
and ornamentation.
Commercial word processors from Corel, Applix, and Star Division are
available in the UNIX world, but text processing, which is quite different
conceptually, is more common. In text processing systems, text is entered
in a page-description language, which describes how the text should be
formatted. Rather than enter text within a special word processing environment,
you can modify text with any editor, like vi or emacs. Once you finish
entering the source text (in the typesetting language), a separate program
converts the source to a format suitable for printing. This is somewhat
analogous to programming
in a language like C, and ``compiling'' the document into printable
form.
Many text processing systems are available for Linux. One is groff,
the GNU version of the classic troff text formatter originally developed
by Bell Labs and still used on many UNIX systems worldwide. Another modern
text processing system is TeX, developed by Donald Knuth of computer science
fame. Dialects of TeX, like LaTeX, are also available.
Text processors like TeX and groff differ mostly in the syntax of their
formatting languages. The choice of one formatting system over another
is based upon what utilities are available to satisfy your needs, as well
as personal taste.
Many people consider groff's formatting language to be a bit obscure
and find TeX more readable. However, groff produces ASCII output which
can be viewed on a terminal more easily, while TeX is intended primarily
for output to a printing device. Various add-on programs are required to
produce ASCII output from TeX formatted documents, or convert TeX input
to groff format.
Another program is texinfo, an extension to TeX which is used for software
documentation developed by the Free Software Foundation. texinfo can produce
printed output, or an online-browsable hypertext ``Info'' document from
a single source file. Info files are the main format of documentation used
in GNU software like emacs.
Text processors are used widely in the computing community for producing
papers, theses, magazine articles, and books. (This book is produced using
LaTeX.) The ability to process source language as a text file opens the
door to many extensions of the text processor itself. Because a source
document is not stored in an obscure format that only one word processor
can read, programmers can write parsers and translators for the formatting
language, and thus extend the system.
What does a formatting language look like? In general, a formatted source
file consists mostly of the text itself, with control codes to produce
effects like font and margin changes, and list formatting.
Consider the following text:
Mr. Torvalds:
We are very upset with your current plans to implement
post-hypnotic suggestions in the Linux terminal driver code. We feel this
way for three reasons:
1. Planting subliminal messages in the terminal driver is not only
immoral, it is a waste of time;
2. It has been proven that ``post-hypnotic suggestions'' are ineffective
when used upon unsuspecting UNIX hackers;
3. We have already implemented high-voltage electric shocks, as a security
measure, in the code for login.
We hope you will reconsider.
This text might appear in the LaTeX formatting
language as the following:
\begin{quote}
Mr. Torvalds:

We are very upset with your current plans to implement
{\em post-hypnotic suggestions\/} in the {\bf Linux} terminal driver code.
We feel this way for three reasons:
\begin{enumerate}
\item Planting subliminal messages in the terminal driver is not only
immoral, it is a waste of time;
\item It has been proven that ``post-hypnotic suggestions'' are
ineffective when used upon unsuspecting UNIX hackers;
\item We have already implemented high-voltage electric shocks, as a
security measure, in the code for {\tt login}.
\end{enumerate}
We hope you will reconsider.
\end{quote}
The author enters the text using any text editor and generates formatted
output by processing the source with LaTeX. At first glance, the typesetting
language may appear to be obscure, but it's actually quite easy to understand.
Using a text processing system enforces typographical standards when writing.
All the enumerated lists within a document will look the same, unless the
author modifies the definition of an enumerated list. The goal is to allow
the author to concentrate on the text, not typesetting conventions.
When writing with a text editor, one generally does not think about
how the printed text will appear. The writer learns to visualize the finished
text's appearance from the formatting commands in the source.
WYSIWYG word processors are attractive for many reasons. They provide
an easy-to-use visual interface for editing documents. But this interface
is limited to aspects of text layout which are accessible to the user.
For example, many word processors still provide a special format language
for producing complicated expressions like mathematical formulae. This
is text processing, albeit on a much smaller scale.
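For example, the quadratic formula, which would require a special equation
editor in most word processors, is written in LaTeX as:

\[ x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a} \]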
A not-so-subtle benefit of text processing is that you specify exactly
which format you need. In many cases, the text processing system requires
a format specification. Text processing systems also allow source text
to be edited with any text editor, instead of relying on format codes which
are hidden beneath a word processor's opaque user interface. Further, the
source text is easily converted to other formats. The tradeoff for this
flexibility and power is the lack of WYSIWYG formatting.
Some programs let you preview the formatted document on a graphics display
device before printing. The xdvi program displays a ``device independent''
file generated by the TeX system under X. Applications like xfig and gimp
provide WYSIWYG graphics interfaces for drawing figures and diagrams, which
are subsequently converted to text processing language for inclusion in
your document.
Text processors like troff were around long before WYSIWYG word processing
was available. Many people still prefer their versatility and independence
from a graphics environment.
Many text-processing-related utilities are available. The powerful METAFONT
system, which is used to design fonts for TeX, is included in the Linux
port of TeX. Other programs include ispell, an interactive spelling checker
and corrector; makeindex, which generates indices in LaTeX documents; and
many other groff- and TeX-based macro packages which format many types of
technical and mathematical texts. Conversion programs that translate TeX or
groff source into a myriad of other formats are also available.
A newcomer to text formatting is YODL, written by Karel Kubat. YODL
is an easy-to-learn language with filters to produce various output formats,
like LaTeX, SGML, and HTML.
Programming languages and utilities
Linux provides a complete UNIX programming environment which includes
all of the standard libraries, programming tools, compilers, and debuggers
which you would expect of other UNIX systems.
Standards like POSIX.1 are supported, which allows software written
for Linux to be easily ported to other systems. Professional UNIX programmers
and system administrators use Linux to develop software at home, then transfer
the software to UNIX systems at work. This not only saves a great deal
of time and money, but also lets you work in the comfort of your own home.
(One of the authors uses his system to develop and test X
Window System applications at home, which can be directly compiled
on workstations elsewhere.) Computer Science students learn UNIX programming
and explore other aspects of the system, like kernel architecture.
With Linux, you have access to the complete set of libraries and programming
utilities and the complete kernel and library source code.
Within the UNIX software world, systems and applications are often programmed
in C or C++. The standard C and C++ compiler for Linux is GNU gcc, which
is an advanced, modern compiler that supports C++, including AT&T 3.0
features, as well as Objective-C, another object-oriented dialect of C.
Besides C and C++, other compiled and interpreted programming languages
have been ported to Linux, like Smalltalk, FORTRAN, Java, Pascal, LISP,
Scheme, and Ada (if you're masochistic enough to program in Ada, we aren't
going to stop you). In addition, various assemblers for writing protected-mode
80386 code are available, as are UNIX hacking favorites like Perl (the
script language to end all script languages) and Tcl/Tk (a shell-like command
processing system which has support for
developing simple X Window System applications).
The advanced gdb debugger can step through a program one line of source
code at a time, or examine a core dump to find the cause of a crash. The
gprof profiling utility provides performance statistics for your program,
telling you where your program spends most of its execution time. As mentioned
above, the emacs text editor provides interactive editing and compilation
environments for various programming languages. Other tools include GNU
make and imake, which manage compilation of
large applications, and RCS, a system for source code locking and revision
control.
Finally, Linux supports dynamically linked, shared libraries (DLLs),
which result in much smaller binaries. The common subroutine code is linked
at run-time. These DLLs let you override function definitions with your
own code. For example, if you wish to write your own version of the malloc()
library routine, the linker will use your new routine instead of the one
in the libraries.
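As a sketch of how such an override might look, the following program
supplies its own malloc() and free(), and the linker resolves calls to
these versions instead of the ones in the C library. The allocator here is
deliberately naive: a ``bump'' allocator built on the sbrk() system call
that never reclaims memory:

#include <stddef.h>
#include <unistd.h>

void *malloc(size_t size)
{
    void *p;

    /* round the request up so returned blocks stay word-aligned */
    size = (size + sizeof(long) - 1) & ~(sizeof(long) - 1);
    p = sbrk(size);                 /* grab memory straight from the kernel */
    return (p == (void *) -1) ? NULL : p;
}

void free(void *ptr)
{
    (void) ptr;                     /* this toy allocator never frees */
}

int main(void)
{
    char *buf = malloc(16);         /* resolved to our malloc() above */

    if (buf != NULL)
        write(1, "got memory\n", 11);
    return 0;
}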
The design and philosophy of Linux
New users often have a few misconceptions and false expectations about
Linux. It is important to understand the philosophy and design of Linux
in order to use it effectively. We'll start by describing how Linux is
not designed.
In commercial UNIX development houses, the entire system is developed
under a rigorous quality assurance policy that utilizes source and revision
control systems, documentation, and procedures to report and resolve bugs.
Developers may not add features or change key sections of code on a whim.
They must validate the change as a response to a bug report and subsequently
``check in'' all changes to the source control system, so that the changes
may be reversed if necessary. Each developer is assigned one or more parts
of the system code, and only that developer can alter those sections of
the code while it is ``checked out'' (that is, while the code is under
his or her control).
Organizationally, a quality assurance department runs rigorous tests
on each new version of the operating system and reports any bugs. The developers
fix these bugs as reported. A complex system of statistical analysis is
used to ensure that a certain percentage of bugs are fixed before the next
release, and that the operating system as a whole passes certain release
criteria.
The software company, quite reasonably, must have quantitative proof
that the next revision of the operating system is ready to be shipped;
hence, the gathering and analysis of statistics about the performance of
the operating system. It is a big job to develop a commercial UNIX system,
often large enough to employ hundreds, if not thousands, of programmers,
testers, documenters, and administrative personnel. Of course, no two commercial
UNIX vendors are alike, but that is the general picture.
The Linux model of software development discards the entire concept
of organized development, source code control systems, structured bug reporting,
and statistical quality control. Linux is, and likely always will be, a
hacker's operating system. (By hacker, I mean a feverishly dedicated programmer
who enjoys exploiting computers and does interesting things with them.
This is the original definition of the term, in contrast to the connotation
of hacker as a computer wrongdoer, or outlaw.)
There is no single organization responsible for developing Linux. Anyone
with enough know-how has the opportunity to help develop and debug the
kernel, port new software, write documentation, and help new users. For
the most part, the Linux community communicates via mailing lists and Usenet
newsgroups. Several conventions have sprung up around the development effort.
Anyone who wishes to have their code included in the ``official'' kernel
mails it to Linus Torvalds. He will test and include the code in the kernel
as long as it doesn't break things or go against the overall design of
the system.
The system itself is designed using an open-ended, feature-minded approach.
The number of new features and critical changes to the system has recently
diminished, and the general rule is that a new version of the kernel will
be released every few weeks. Of course, this is a rough figure. New release
criteria include the number of bugs to be fixed, feedback from users testing
pre-release versions of the code, and the amount of sleep Linus Torvalds
has had this week.
Suffice it to say that not every bug is fixed, nor is every problem
ironed out between releases. As long as the revision appears to be free
of critical or recurring bugs, it is considered to be stable, and the new
version is released. The thrust behind Linux development is not to release
perfect, bug-free code: it is to develop a free UNIX implementation. Linux
is for the developers, more than anyone else.
Anyone who has a new feature or software application generally makes
it available in an alpha version--that is, a test version, for those brave
users who want to hash out problems in the initial code. Because the Linux
community is largely based on the Internet, alpha software is usually uploaded
to one or more Linux FTP sites (see Appendix B), and a message is posted
to one of the Linux Usenet newsgroups about how to obtain and test the
code. Users who download and test alpha software can then mail results,
bug fixes, and questions to the author.
After the initial bugs have been fixed, the code enters a beta test
stage, in which it is usually considered stable but not complete. It works,
but not all of the features may be present. The software may also go directly
to a final stage, in which the software is considered complete and usable.
Keep in mind that these are only conventions--not rules. Some developers
may feel so confident of their software that they decide it isn't necessary
to release alpha or test versions. It is always up to the developer to
make these decisions.
You might be amazed that such an unstructured group of volunteers,
programming and debugging a complete UNIX system, gets anything done at all.
As it turns out, this is one of the most efficient and motivated development
efforts ever employed. The entire Linux kernel is written from scratch,
without code from proprietary sources. It takes a huge amount of work to
port all the free software under the sun to Linux. Libraries are written
and ported, file systems are developed, and hardware drivers are
written for many popular devices--all due to the work of volunteers.
Linux software is generally released as a distribution, a set of prepackaged
software which comprises an entire system. It would be difficult for most
users to build a complete system from the ground up, starting with the
kernel, adding utilities, and installing all of the necessary software
by hand. Instead, many software distributions are available which include
everything necessary to install and run a complete system. There is no
single, standard distribution--there are many, and each has its own advantages
and disadvantages.
Differences between Linux and other operating systems
It is important to understand the differences between Linux and other
operating systems, like MS-DOS, OS/2, and the other implementations of
UNIX for personal computers. First of all, Linux coexists happily with
other operating systems on the same machine: you can run MS-DOS and OS/2
along with Linux on the same system without problems. There are even ways
to interact between various operating systems, as we'll see.
Why use Linux?
Why use Linux, instead of a well-known, well-tested, and well-documented
commercial operating system? We could give you a thousand reasons. One
of the most important, however, is that Linux is an excellent choice for
personal UNIX computing. If you're a UNIX software developer, why use MS-DOS
at home? Linux allows you to develop and test UNIX software on your PC,
including database and X
Window System applications. If you're a student, chances are that your
university computing systems run UNIX. You can run your own UNIX system
and tailor it to your needs. Installing and running Linux is also an excellent
way to learn UNIX if you don't have access to other UNIX machines.
But let's not lose sight of the bigger picture: Linux isn't only for
personal UNIX users. It is robust and complete enough to handle large
tasks, as well as distributed
computing needs. Many businesses--especially small ones--have moved their
systems to Linux in lieu of other UNIX-based workstation environments.
Universities have found that Linux is perfect for teaching courses in operating
systems design. Large, commercial software vendors have started to realize
the opportunities which a free operating system can provide.
Linux vs. MS-DOS
It's not uncommon to run both Linux and MS-DOS on the same system. Many
Linux users rely on MS-DOS for applications like word processing. Linux
provides its own analogs for these applications, but you might have a good
reason to run MS-DOS as well as Linux. If your dissertation is written
using WordPerfect for MS-DOS, you may not be able to convert it easily
to TeX or some other format. Many commercial applications for MS-DOS aren't
available for Linux yet, but there's no reason that you can't use both.
MS-DOS does not fully utilize the functionality of 80386 and 80486 processors.
On the other hand, Linux runs completely in the processor's protected mode,
and utilizes all of its features. You can directly access all of your available
memory (and beyond, with virtual RAM). Linux provides a complete UNIX interface
which is not available under MS-DOS. You can easily develop and port UNIX
applications to Linux, but under MS-DOS you are limited to a subset of
UNIX functionality.
Linux and MS-DOS are different entities. MS-DOS is inexpensive compared
to other commercial operating systems and has a strong foothold in the
personal computer world. No other operating system for the personal computer
has reached the level of popularity of MS-DOS, because spending $1,000 on
another operating system alone is hard to justify for many users.
Linux, however, is free, and you may finally have the chance to decide
for yourself.
You can judge Linux vs. MS-DOS based on your expectations and needs.
Linux is not for everybody. If you always wanted to run a complete UNIX
system at home, without the high cost of other UNIX implementations for
personal computers, Linux may be what you're looking for.
Linux vs. The Other Guys
A number of other advanced operating systems have become popular in
the PC world. Specifically, IBM's OS/2 and Microsoft Windows NT have become
popular for users upgrading from MS-DOS.
Both OS/2 and Windows NT are full-featured multitasking operating systems,
like Linux. OS/2, Windows NT, and Linux support roughly the same user interface,
networking, and security features. However, the real difference between
Linux and The Other Guys is the fact that Linux is a version of UNIX, and
benefits from contributions of the UNIX community at large.
What makes UNIX so important? Not only is it the most popular operating
system for multiuser machines, it is a foundation of the free software
world. Much of the free software available on the Internet is written specifically
for UNIX systems.
There are many implementations of UNIX from many vendors. No single
organization is responsible for its distribution. There is a large push
in the UNIX community for standardization in the form of open systems,
but no single group controls this design. Any vendor (or, as it turns out,
any hacker) may develop a standard implementation of UNIX.
OS/2 and Microsoft operating systems, on the other hand, are proprietary.
The interface and design are controlled by a single corporation, which
develops the operating system code. In one sense, this kind of organization
is beneficial because it sets strict standards for programming and user
interface design, unlike those found even in the open systems community.
Several organizations have attempted the difficult task of standardizing
the UNIX programming interface. Linux, in particular, is mostly compliant
with the POSIX.1 standard. As time goes by, it is expected that the Linux
system will adhere to other standards, but standardization is not the primary
goal of Linux development.
Linux vs. other implementations of UNIX
Several other implementations of UNIX exist for 80386 or better personal
computers. The 80386 architecture lends itself to UNIX, and vendors have
taken advantage of this.
Other implementations of UNIX for the personal computer are similar to
Linux. Almost all commercial versions of UNIX support roughly the same
software, programming environment, and networking features. However, there
are differences between Linux and commercial versions of UNIX.
Linux supports a different range of hardware than commercial implementations.
In general, Linux supports most well-known hardware devices, but support
is still limited to hardware which the developers own. Commercial UNIX
vendors tend to support more hardware at the outset, but the list of hardware
devices which Linux supports is expanding continuously.
Many users report that Linux is at least as stable as commercial UNIX
systems. Linux is still under development, but the two-pronged release
philosophy has made stable versions available without impeding development.
The most important factor for many users is price. Linux software is
free if you can download it from the Internet or another computer network.
If you do not have Internet access, you can still purchase Linux inexpensively
via mail order on diskette, tape, or CD-ROM.
Of course, you may copy Linux from a friend who already has the software,
or share the purchase cost with someone else. If you plan to install Linux
on a large number of machines, you need only purchase a single copy of
the software--Linux is not distributed with a ``single machine'' license.
The value of commercial UNIX implementations should not be demeaned.
In addition to the price of the software itself, one often pays for documentation,
support, and quality assurance. These are very important factors for large
institutions, but personal computer users may not require these benefits.
In any case, many businesses and universities have found that running Linux
in a lab of inexpensive personal computers is preferable to running a commercial
version of UNIX in a lab of workstations. Linux can provide workstation
functionality on a personal computer at a fraction of the cost.
Linux systems have traveled the high seas of the North Pacific, managing
telecommunications and data analysis for an oceanographic research vessel.
Linux systems are used at research stations in Antarctica. Several
hospitals maintain patient records on Linux systems.
Other free or inexpensive implementations of UNIX are available for
the 80386 and 80486. One of the best known is 386BSD, an implementation
of BSD UNIX for the 80386. The 386BSD package is comparable to Linux in
many ways, but which one is better depends on your needs and expectations.
The only strong distinction we can make is that Linux is developed openly,
and any volunteer can aid in the development process, while 386BSD is developed
by a closed team of programmers.
Because of this, serious philosophical and design differences exist
between the two projects. The goal of Linux is to develop a complete UNIX
system from scratch (and have a lot of fun in the process), and the goal
of 386BSD is in part to modify the existing BSD code for use on the 80386.
NetBSD is another port of the BSD NET/2 distribution to several machines,
including the 80386. NetBSD has a slightly more open development structure,
and is comparable to 386BSD in many respects.
Another project of note is HURD, an effort by the Free Software Foundation
to develop and distribute a free version of UNIX for many platforms. Contact
the Free Software Foundation (the address is given in Appendix C) for more
information about this project. At the time of this writing, HURD is still
under development.
Other inexpensive versions of UNIX exist as well, like Minix, an academic
but useful UNIX clone upon which early development of Linux was based.
Some of these implementations are mostly of academic interest, while others
are full-fledged systems.