This is a rough draft
Design features and decisions which adversely impact Windows/NT reliability
and ease of use
This white paper is an analysis of some design features and design decisions
which Microsoft's software engineers made while building Windows/NT 4.0.
These design decisions adversely affect the reliability of Windows/NT 4.0:
they make it an unreliable operating system. I will argue that an
unreliable operating system is fundamentally hard to use. Until many
of the fundamental design decisions are changed, Windows/NT is doomed to
crashes, hangs, and blue screens of death (BSOD).
My objective is to give Information Technology (IT) workers the information
they need to help make an informed decision when it comes time to select
a system to implement. I am trying to answer the question that so
many non-IT managers ask me: "Why not Microsoft?"
I am a certified Microsoft professional - I have been trained on Windows/95,
Windows/NT 3.51 and Windows/NT 4.0. I use Microsoft systems daily
in my work and at home. I'm also a shareholder of Microsoft's, because
my wife thinks that they will continue to make money (and I am inclined
to agree - remember VHS and Beta?). However, I also use UNIX and
Linux and VMS - all of these operating systems are better than Windows/95
and Windows/NT.
I am interested in a dialog on these issues - please contact me at jeff@www.jeffsilverman.ddns.net.
Reliability and ease of use
Any system (and here I am talking about any system, not just computer systems)
which is unreliable is by definition hard to use. Why? For
several reasons:
-
Users have to take the system out of service and either make repairs themselves
or hire somebody to make repairs for them
-
Users have to figure out work arounds to deal with the failures
-
Unreliable systems frequently require redundancy - each additional redundant
system increases the administration load
-
Unreliable systems frequently adversely affect other systems around them
- the effects of failure propagate unless measures are taken to contain
the effects of the failure.
Consider an unreliable luxury car - see Example 1, below. Now, software
is not a car. But try getting warranty service on your software!
Reliability is a worthwhile design goal. We know how to make software
reliable - see if Microsoft does any of these things:
-
Requirements should be clearly understood. Requirements should be of a
technical nature and not driven by business needs or political realities.
Requirements should be testable.
-
Source code should be written to coding standards which include meaningful
comments and airtight configuration control. Source code should be
available for inspection by the customer.
-
Documentation should be clear and readily available in a wide variety of
formats.
-
Software should be written in a reliable programming language such as Ada
or Java. Unreliable programming languages, such as "C" and FORTRAN,
should be avoided. Assembler should be avoided like poison.
Note that it is possible to write reliable code in C, and there are some
tricks you can use to make the software more reliable (such as passing
arguments as members of a struct instead of in an argument list - see the
sketch after this list), and process steps you should use to make your
software more reliable (such as running lint).
C and Perl are ubiquitous - that doesn't mean they are good, that means
they are there.
-
Version numbers should reflect major changes in design, minor changes in functionality,
and bug fixes (e.g. 3.5.2 would be the 3rd major change in design, the
5th set of minor changes to the 3rd major change, and the 2nd set of bug
fixes). Version numbers should never be driven by marketing considerations
- they should be used solely for configuration management.
-
Never make hidden changes to software
-
Never change shared code unless you have a reliable mechanism for transmitting
the fact of that change to all interested software
-
Never write self-modifying code
-
Backwards compatibility with previous versions is a worthy design goal,
because it allows you to take advantage of all the experience you've had
with prior versions. All of the test cases, test software,
and test procedures from prior versions can be applied to the current version.
-
Configuration data should be in a read-mostly database, and the process
for changing the configuration data should be different than the process
for accessing the configuration data. That way, the program can't
shoot itself in the foot. The configuration change program can be
made smaller and more reliable.
-
There should be clear separation between the operating system and the applications
which run under the operating system. Why? An OS is the program
with final authority over a computer system. So the applications
can depend on the OS. If the OS depends on an application (or several
applications) then the reliability of the OS is dependent on the reliability
of the apps. But we know applications are unreliable, which is why
they are applications.
-
To make a system reliable, keep it small and simple. That's almost a truism:
the bigger and more complicated it is, the less reliable it will
be.
-
The nomenclature of critical subsystems should not change. Otherwise,
confusion arises - what is what?
-
Open software development leads to reliable software - you have lots of
people looking over your shoulder to make sure that you've done it right.
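Here is the struct-passing trick from the list above as a minimal C sketch; the struct and function names are invented for illustration, not taken from any real program:

    #include <stdio.h>

    /* Hypothetical example: all of the parameters travel together in one
       struct, so adding a field later does not silently shift the meaning
       of positional arguments. */
    struct report_request {
        const char *title;
        int         year;
        int         month;
        int         day;
    };

    static void print_report(const struct report_request *req)
    {
        printf("%s: %04d-%02d-%02d\n", req->title, req->year, req->month, req->day);
    }

    int main(void)
    {
        struct report_request req = { "Warehouse inventory", 1998, 11, 10 };
        print_report(&req);   /* one argument instead of four easily-swapped ones */
        return 0;
    }

Running lint (or compiling with warnings turned all the way up) over code like this catches mismatched types that a long positional argument list would let slip through.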
Design Decisions
This section discusses some of the design decisions that Microsoft made
in the design of Windows/NT 4.0. In many respects Windows/NT 4.0
actually represents a step backwards from Windows/NT 3.51, and I know of
several shops which are still using NT 3.51 in 1998.
Spaces in file names
The problem
The traditional Microsoft microcomputer file name rules, also known as
8.3 filenames, were perfectly adequate for small microcomputers. Other
microcomputer systems that debuted in the late 1970s and early 1980s (RT-11,
RSX-11, CP/M, TRS-80) also had restrictive file name rules. However,
modern languages (Java and Ada) require the use of long file names with mixed
upper and lower case.
One solution
The UNIX filesystem stores file names as arbitrary length (up to some limit,
UNIX variant dependent) ASCII strings. The VMS filesystem stores
file names as strings of up to 39 characters with a 39 character
file type, and uses a hash algorithm to deal with mixed case issues.
Both file systems allow any character which is also a delimiter to appear
in a filename, if suitably escaped; however, nobody in their right mind
does this: it's too confusing and it breaks scripts.
Advantages
These schemes support the needs of modern languages such as Java and Ada,
and also make it easier for humans to identify the contents of files.
For example, WH981110.DAT is a lot more cryptic than WareHouse_1998_11_10.DATA
(and also the latter scheme is Y2K compliant).
Disadvantages
These schemes are less efficient. The savings in ease of use make
the performance penalty acceptable. The VMS filesystem solution has
virtually no penalty because the filenames are already hashed using a 16
bit code called Radix50.
The Microsoft solution
The Microsoft solution not only allows long names, which I have no problem
with, it also forces spaces into filenames (e.g. "Program Files" and
"My Documents"), which is my objection. It gives fits to
software which takes arguments through the command line. To see some
programs that do that, just look at the file type menu. In Windows
Explorer, go to the View menu, select Options..., then select File
Types. Your machine is different from my machine, so you may
have to poke at a file type or two, but eventually, you will come to something
that has open("%1") in it. The double quotes are required syntax
to protect the line parser from spaces in the filenames.
Advantages
This is easier for human beings (compare
WareHouse_1998_11_10.DATA
with
WareHouse 1998 11 10.DATA).
If you are a normal human being (I suspect that only geeks
like me are reading this far) the spaces are much easier to understand
than the underscores.
Disadvantages
If you have a utility that runs in line mode (from command.com) then filenames
with spaces may break it. For example, suppose you have a script that
compares two files, say,
diff %1 %2
That script will break if the first file, %1, is C:\program
files\test.exe or C:\my documents\test.txt. The workaround
is to enclose the arguments in quotes:
diff "%1" "%2"
but this is a feature that has a tendency to break a lot of existing
software, and it makes scripts more difficult to test. Your script may run
fine with a file called MyFile.txt but may break with a file called
My File.txt. This is an example of a data dependent error.
Data dependent errors are hard to identify and find. Programmers
who worry about such things try to code in ways that do not generate data
dependent errors. Operating system designers who worry about such
things come up with names that are guaranteed to be safe, such as program_files
and my_documents.
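To make the breakage concrete, here is a small C sketch of what a naive line-mode utility sees when a filename contains a space; the tokenizer below is illustrative only, not any particular vendor's parser:

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        /* What the utility receives when the caller forgets the quotes: */
        char cmdline[] = "diff My File.txt MyFile.txt";
        char *tok;
        int argc = 0;

        /* Split on spaces, the way a simple command-line parser does. */
        for (tok = strtok(cmdline, " "); tok != NULL; tok = strtok(NULL, " "))
            printf("argv[%d] = \"%s\"\n", argc++, tok);

        /* Prints four tokens, not three: "My" and "File.txt" arrive as two
           separate arguments, so the wrong files get compared (or none at all). */
        return 0;
    }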
The registry
The problem
The registry is a database of the configuration of the operating system
and any software that wants to use it. As implemented, it is a binary
flat database. Although they are generally not used directly, there are interactive
applications to modify the database, applications to back up and restore
the registry, and applications to repair the registry.
One solution
In UNIX, configuration files are stored as ASCII text, generally as lists
of variable=value pairs interspersed with comments.
They can be changed with a simple editor (vi) or with a script; in some
cases, the configuration files are scripts themselves, and do things like
loop or make decisions. Generally, programs can't modify their own
configuration files.
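As a rough illustration of how little machinery this format needs, here is a minimal C sketch that reads variable=value pairs and skips comment lines; the file name and its contents are hypothetical:

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        FILE *f = fopen("example.conf", "r");   /* hypothetical config file */
        char line[256];

        if (f == NULL) {
            perror("example.conf");
            return 1;
        }
        while (fgets(line, sizeof(line), f) != NULL) {
            char *eq;
            line[strcspn(line, "\n")] = '\0';    /* strip the trailing newline */
            if (line[0] == '#' || line[0] == '\0')
                continue;                        /* comment or blank line */
            eq = strchr(line, '=');
            if (eq != NULL) {
                *eq = '\0';
                printf("variable \"%s\" has value \"%s\"\n", line, eq + 1);
            }
        }
        fclose(f);
        return 0;
    }

A wizard can regenerate a file like this with a few lines of Perl, and a human can fix it with vi; both leave the comments in place.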
Advantages
The text file approach is actually very easy to use. Because the
files are generally read-only, the program can't change its own configuration
and get itself into trouble. The comments are important, too; they
can give a history of the file, discuss what problems the different settings
caused and how they got cured, and provide tutorial information.
It is very hard to mess up a UNIX configuration file in such a way
that the system is unbootable (it can be done - but you have to have privileges
and you really have to know what you're doing to even find the file you
have to mess up).
Disadvantages
Text files suggest a low level of automation - it is harder to write a
wizard program that generates a configuration (although wizards are appearing
- they are written in Perl and frequently they emit comments along with
the configuration details. A good example of a wizard is h2n, the
hosts-to-named converter).
Because the files are generally read-only, the program can't change
its own configuration and get itself out of trouble.
The Microsoft solution
The registry puts all of the configuration information in one place (well, actually
two places, but that's hidden from the user).
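For contrast, here is roughly what reading a single setting out of the registry looks like to a C program using the Win32 API (link with advapi32); the key and value names below are invented for illustration:

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        HKEY key;
        char value[256];
        DWORD size = sizeof(value);

        /* Open a (hypothetical) application key under HKEY_LOCAL_MACHINE. */
        if (RegOpenKeyExA(HKEY_LOCAL_MACHINE,
                          "SOFTWARE\\ExampleVendor\\ExampleApp",
                          0, KEY_READ, &key) != ERROR_SUCCESS) {
            fprintf(stderr, "cannot open the key\n");
            return 1;
        }
        /* Read one named value; the data comes back as raw bytes. */
        if (RegQueryValueExA(key, "InstallDir", NULL, NULL,
                             (LPBYTE)value, &size) == ERROR_SUCCESS)
            printf("InstallDir = %s\n", value);
        RegCloseKey(key);
        return 0;
    }

Note that there are no comments anywhere in this picture: the history and tutorial information that a UNIX configuration file can carry has nowhere to live.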
Advantages
Under Windows/NT 3.51 and Windows/3.1, the .ini files were limited to 64Kbytes
in size - about 32 pages of text. The registry gets around this limitation.
Because the file is binary, it is compact and parsing it is trivial.
Because there are only two files, it is easy to find them.
Disadvantages
64 Kbytes is a lot of configuration information, so the 64 Kbyte limit
is not much of a limit.
There are cautions all over the registry editor application which say
that changing some settings (they never say which ones) will cause the
machine to become unbootable. Those cautions are correct, by the
way - it is possible to break Windows/NT by changing the registry.
It is possible to set off a series of cascading failures in the .DLL files.
Further, it is possible for one application to break another application.
Both Microsoft and Netscape, for example, offer to change registry settings
to make their browser the default browser. This offer carries with
it the risk of breaking applications which rely on the default browser.
I do not fault Netscape for doing this because they are competing with
their operating system vendor and that can't be much fun.
Another disadvantage is that it is difficult to deal with parts of the
registry. See the DHCP story, below. For
example, if you want to backup and restore just part of the registry, you
have to use the registry editor; you can't use the backup software.
Why the disadvantages are unimportant
Moving the Video device drivers into the Kernel (NT only)
For purposes of this discussion, the microprocessor runs in 2 modes: user
mode and kernel mode. Kernel (not Kernal) mode is more privileged
and is used by the operating system and only the operating system to do
critical functions. User mode is less privileged and is used by both
the operating system and the applications. The microprocessor enters
Kernel mode when an interrupt occurs or when an application calls a system
service. The microprocessor leaves Kernel mode on return from an
interrupt or system service call. It takes some time, on the order of microseconds,
to make the transition from user mode to Kernel mode and back again.
Device drivers are programs that drive devices. Depending on
the system, the device driver might be part of the Kernel or part of the
user space.
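A rough way to see the cost of that transition for yourself is to time a plain function call against a trivial system call; this POSIX sketch is illustrative only, and the numbers vary widely by machine and by how the C library handles getpid():

    #include <stdio.h>
    #include <sys/time.h>
    #include <unistd.h>

    static int noop(void) { return 0; }    /* stays entirely in user mode */

    static double elapsed_us(struct timeval a, struct timeval b)
    {
        return (b.tv_sec - a.tv_sec) * 1e6 + (b.tv_usec - a.tv_usec);
    }

    int main(void)
    {
        enum { N = 1000000 };
        struct timeval t0, t1;
        volatile long sink = 0;
        int i;

        gettimeofday(&t0, NULL);
        for (i = 0; i < N; i++)
            sink += noop();                 /* user-mode call */
        gettimeofday(&t1, NULL);
        printf("function call: %.3f microseconds each\n", elapsed_us(t0, t1) / N);

        gettimeofday(&t0, NULL);
        for (i = 0; i < N; i++)
            sink += (long)getpid();         /* crosses into kernel mode */
        gettimeofday(&t1, NULL);
        printf("system call:   %.3f microseconds each\n", elapsed_us(t0, t1) / N);

        (void)sink;
        return 0;
    }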
The problem
One of the criticisms leveled at Windows/NT 3.51 is that it is slow.
One of the reasons why it is slow is that the video device drivers, which
are made by the video card manufacturers, were in the user space.
So each time the video device driver wanted to do something, it had to
call the operating system, incurring the transition delay twice each call.
Doing a context switch (going from User mode to Kernel mode or back) is
relatively slow on CISC (Complex Instruction Set Computer) machines.
One solution
Three acceptable solutions present themselves:
-
Move more of the driver code into the hardware itself. This is the
approach taken by the TI TMS34xxx chips and some graphics accelerator chips,
which actually are specialized graphics processors in their own right.
-
Develop new, faster OS primitives which reduce the number of system service
calls
-
Live with the problem. Windows/NT is intended for business environments
where reliability is a key performance parameter. Windows/98 is designed
for the home market, which is where the gamesters live. People doing
serious computer graphics work will probably use hardware graphic systems.
Advantages
Moving the driver code into the hardware itself would require more sophisticated
and more expensive hardware. Most people don't think graphics performance
is so key an issue that they are willing to spend the extra bucks to get
the additional pixels/second or triangles/second. For those people
who do, they are of course at liberty to do so. In either case, the operating
system mediates between the drivers and the hardware.
Redesigning the OS to make the interface more efficient would result
in a better operating system. Direct3D was a system which would do
that, but then Microsoft enveloped and killed it, replacing it with ActiveX.
Disadvantages
Moving functionality from the software to the hardware changes graphics
performance from an OS issue into a system issue. With faster hardware,
NT graphics will be faster. But so would Linux graphics, and by a
comparable amount.
Developing new faster OS primitives would require a major R&D effort,
and there are limits as to how fast you can make a graphics primitive.
Why the disadvantages are unimportant
Graphical performance is of greatest concern to gamers. Windows/NT
is not intended as a gamer OS - Windows/95 is. Games do strange and
wonderful things with the hardware that the operating system doesn't like.
In a home machine where the object is to have fun, that's fine. In
a business machine, fun is not an issue.
The Microsoft solution
Microsoft required that the device driver become part of the kernel.
Advantages
It's faster because the CPU doesn't have to do lots of context switches
when doing graphical I/O. Since the kernel is required to be in 32
bit mode, you don't have the overhead of thunking between 16 bit and 32
bit modes (although at this late date, that's kinda moot - nobody writes
16 bit code anymore).
Disadvantages
This required all of the vendors who were selling into the Windows/NT market
to rewrite their drivers. If the driver had an error, it had the
potential of crashing the machine. Microsoft's response to complaints
was that it wasn't their (Microsoft's) responsibility, that the source
of the problem was the buggy driver. The problem with the response
is that, by opening up the OS to code written by others, Microsoft sacrificed
reliability.
Using the GUI for everything
Every application has to have some way of communicating with whatever started
it, in order to find out what it needs to do. There are four kinds
of interfaces:
-
An Application Program Interface (API). The input comes through subroutine
calls or message passing from other programs.
-
A command line interface. The program's inputs come from switches with
optional values. This is the way most UNIX utilities work (see the sketch
after this list).
-
An interactive, terminal oriented interface. This is the way a lot
of programs work, for example, the vi and emacs editors on UNIX, or the
Lynx web browser. One of the reasons why DEC invented the VT100 terminal
was to provide support for these kinds of applications - they are still
very effective and very fast.
-
A Graphical User Interface (GUI). The input comes through a graphical
screen which in turn generally requires a pointing device (a mouse or similar).
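Here is a minimal sketch of the second kind of interface, a command line parsed with POSIX getopt(); the option letters and the default file name are made up for the example:

    #include <stdio.h>
    #include <unistd.h>

    int main(int argc, char *argv[])
    {
        int opt;
        int verbose = 0;
        const char *outfile = "out.dat";

        while ((opt = getopt(argc, argv, "vo:")) != -1) {
            switch (opt) {
            case 'v':
                verbose = 1;              /* a switch with no value */
                break;
            case 'o':
                outfile = optarg;         /* a switch that takes a value */
                break;
            default:
                fprintf(stderr, "usage: %s [-v] [-o file] [args...]\n", argv[0]);
                return 1;
            }
        }
        if (verbose)
            printf("writing to %s, %d remaining argument(s)\n",
                   outfile, argc - optind);
        return 0;
    }

Because everything the program needs arrives on one line, the same invocation can be typed by a person, embedded in a script, or generated by another program.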
The problem
Every one of the interfaces described above has its advantages and disadvantages.
Ordinary, computer phobic people, for example, prefer GUIs. Most
modern text processors use GUIs, so that people can see what they are going
to get as they work. The command line interface lends itself to incorporation
into scripts; so that clever users can develop their own automated techniques
for dealing with problems. The terminal interface is good for applications
which require a lot of fast, interactive data entry (for example, airline
reservation terminals, bank teller stations). Note that X-terminals
use this approach, and the window manager provides a cut and paste capability;
but the programs that run under the window manager are unaware of the cutting
and pasting going on. The API is important because it is a different
way of accomplishing automation goals, this time by software engineers.
One solution
The UNIX solution uses a combination of interface techniques. Sure,
there is a CLI, and there is also a GUI based on X. Or, you can use
curses or assume ANSI X3.64 (VT-100 or xterm) capabilities and build a text
mode application which still has screen layout capabilities.
Advantages
The CLI is useful for remote management, for scripting, and for systems
which are cost sensitive. You can run Linux on a machine with a CGA
video card and a wildly cheap CGA monitor; you can run Linux on a machine
with no video at all (although this requires some tricks with LILO).
Disadvantages
Sometimes, there is something you just can't do through the GUI but you
can do through the CLI; less often, there is something you can't do through
the CLI but you can do through the GUI.
X-windows has multiple window managers and the window managers provide
a different "look and feel" for how the windowing system works. Consequently,
different UNIXes feel different to their users, which makes training more
difficult.
Why the disadvantages are unimportant
The different levels of capability through the different UIs are a function
of the vendor's commitment to making everything doable by any method.
There might be a good (although perhaps unstated) reason for not implementing
something in one of the UIs.
The multiplicity of window managers is a problem. Part of the flexibility
of UNIX comes at the price of having to learn more.
The Microsoft solution
Just about everything is done using a GUI.
Advantages
It's easy for first time users to do small, simple things.
Disadvantages
The GUI interface doesn't scale well to do big things.
This was driven home to me one day when the DHCP
(Dynamic Host Configuration Protocol) server database became corrupted,
and we could not uncorrupt it because it was part of the registry instead
of a separate database.
The GUI interface doesn't work well in a crisis
Suppose that your server is chugging along busily serving and the video
card fails. If you replace the video card with an incompatible video
card, your machine might not boot. You can work around the problem
by booting into VGA 640x480 mode, but that gives you limited functionality;
and of course, once you fix your little problem, you have to reboot
to get into full blown operational mode. What is really ironic is
that you don't (or shouldn't) need a video card in a server machine!!!!
The GUI doesn't work well remotely
"A picture is worth a thousand words". That's true, but if it takes
a million words to transmit a picture, is that effective? No, especially
if the picture of the dynamic sort and you spend a lot of time waiting
for non-functional animation to transmit (for example, the animation when
you copy files or when you are searching for something - that's a killer
when done over the 'net). Remember: you want to administer your servers
both when things are going well and when things are malfunctioning.
So, for example, if some hacker has broken into your server and is dumping
your critical data files as fast as the router will route them, then will
you want to wait while those pretty screen displays update?
Microsoft's response is that you can use their remote management
software to manage your servers remotely, so that you don't have to use
a lot of bandwidth.
My response to Microsoft's response is that yes, you can do that, but
if and only if your servers implement the Microsoft proprietary remote
management controls. Otherwise, you have to use something like Symantec
pcAnywhere or Microsoft Terminal Server. By way of contrast, in the
UNIX world, any application that uses the command line or uses a VT-100
style (ANSI X3.64) interface is a candidate for remote administration.
That's just about everything.
DLLs (Dynamic Link Libraries)
One of the innovations Microsoft introduced during the development of Windows
was the Dynamic Link Library, or DLL. A DLL file allows a programmer to
share code, data, and other information between several programs.
The problem
Much has been written about the high cost of software and the need to share
software so that it only gets written once and then reused over and over
again. Examples of shared code include such plebeian things as the sine
and square root subroutines.
The next step, of course, is to share the routines in such a way that
only one copy of the code need be in physical RAM at any time. With virtual
memory machines, that's easy to do: all you have to do is tell the linker
that this code is going in a specific spot in virtual memory and that it
is shared. The linker then passes that information to the image activator,
which remembers where the shared pages are.
One solution
In the UNIX world, sharing is implemented only for executable
code, and the shared pages are read-only.
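As a concrete example of UNIX-style sharing, here is a minimal sketch that loads a shared library at run time and calls a routine from it (POSIX dlopen(); link with -ldl); the math library's file name varies from system to system, so treat "libm.so.6" as an assumption:

    #include <stdio.h>
    #include <dlfcn.h>

    int main(void)
    {
        double (*cosine)(double);
        void *lib = dlopen("libm.so.6", RTLD_LAZY);   /* shared, read-only code */

        if (lib == NULL) {
            fprintf(stderr, "%s\n", dlerror());
            return 1;
        }
        cosine = (double (*)(double))dlsym(lib, "cos");
        if (cosine == NULL) {
            fprintf(stderr, "%s\n", dlerror());
            dlclose(lib);
            return 1;
        }
        printf("cos(0) = %f\n", cosine(0.0));
        dlclose(lib);
        return 0;
    }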
Advantages
Disadvantages
Why the disadvantages are unimportant
The Microsoft solution
The DLLs not only contain read only code, but they contain read only data
and read/write data. There is this complicated intertwining scheme
using indexes to get from the entry points to the actual information.
So finding the symbols that the DLL defines or references is a hassle.
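The DLL equivalent, for comparison, looks like this; "user32.dll" and MessageBeep() are real Win32 names, but the program is just a sketch of the mechanism, not of any particular application:

    #include <windows.h>
    #include <stdio.h>

    typedef BOOL (WINAPI *beep_fn)(UINT);

    int main(void)
    {
        HMODULE dll = LoadLibraryA("user32.dll");   /* map the DLL into this process */
        beep_fn beep;

        if (dll == NULL) {
            fprintf(stderr, "cannot load user32.dll\n");
            return 1;
        }
        beep = (beep_fn)GetProcAddress(dll, "MessageBeep");  /* look up an entry point */
        if (beep != NULL)
            beep(0xFFFFFFFF);                       /* the default system beep */
        FreeLibrary(dll);
        return 0;
    }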
Advantages
One of the hot topics in computer science, for the past 20 years or so,
has been the ideal of reusable code. We look at how productive EEs
are, and we'd like our software development to be just as productive.
It is an inferiority complex.
Well, guess what: the reason why EEs are so productive is that whenever
they come to something complicated, they implement it in software!
Unfortunately, the kids at Microsoft had only been exposed to software
- they generally get hired right out of school and they haven't had any
exposure to the real world, yet. So they know how, but they don't
know why. So they implemented DLLs as a way to handle anything that
needs to be shared.
Disadvantages
Some things shouldn't be shared. Sometimes, if you take something that's
shared and change it, you break things for somebody else. For example,
suppose you and your S.O. share a car. The transmission needs replacing
and you decide to replace the automatic transmission with a stick shift.
Now, the car is perfectly good for you, but your S.O. finds it unusable.
Oops.
Now, suppose you did that without telling your S.O. Does that
sound like something Microsoft would do?
A Response to Microsoft's UNIX services for Windows/NT
See http://www.microsoft.com/ntworkstation/compare/singleDesktop/default.asp
for more details.
Price/performance: the great red herring
Everybody talks about price/performance as if dollars spent on CPU cycles
were the be-all and end-all of all things computing, even though we all
know that this isn't so. There are other factors to consider:
-
Reliability - which is both Mean Time Between Failures (MTBF) and Mean
Time to Repair (MTTR)
-
Ease of use
-
Scalability
-
Ability to remotely manage
-
Interoperability
-
Maturity of people and systems
Ponder this anecdote
My son has a 200 MHz Pentium II MMX with 48 Mbytes of RAM. I have
a 75 MHz 80486 with 24 Mbytes of RAM. His machine takes about 3 minutes
to boot, and my machine takes about 3.5 minutes to boot. Both of
us run the same version of Windows/95 and have about the same number of
items in our startups. Even though his machine is demonstrably faster
at tasks like simulation, VRML, and games, the boot times for our machines
are roughly similar. Why?
Both our machines have the same IDE disk drives, and the boot process
is I/O driven. If he upgraded his 200 MHz Pentium II to a 400 MHz
Pentium II, his boot time would probably decrease by perhaps 10%, even
though his machine is now twice as fast. The problem is that the
CPU isn't the rate determining step: disk I/O is.
Conclusion:
Studies of price/performance ratios that only look at CPU power will miss
the point and allow one to draw erroneous conclusions.
Reliability
Windows/NT, while more reliable than Windows/95, is not as reliable as
UNIX. It can't be, because of the design decisions that get in the
way of a reliable system.
Ease of use
Under ideal conditions, which is most of the time, a GUI is easier to use
than a command line interface. Even in the UNIX world, vi
has been more popular than ed or teco.
Interoperability
You'll notice that the Microsoft sales literature says that NT accounts
are exportable to UNIX; but the reverse isn't available. In particular,
they could have elected to include a Kerberos authentication client (the
code is publicly available) or an NIS client, but didn't. Instead,
they have some software which exports the NT security database on the
PDC to /etc/passwd.
The fact that you can export accounts from NT to UNIX but not import
accounts from UNIX to NT suggests that Microsoft is much more interested
in migrating you to NT than in interoperating with UNIX.
They want you to migrate control of the UNIX systems to NT. They
aren't interested in controlling the NT systems from UNIX. In my
mind, interoperability means peaceful coexistence. When Microsoft
asks me where I want to go today, I always ask for an NIS client for NT
and Windows/95 and Windows/98.
Scalability
UNIX scales. NT doesn't. NT will only run on an Intel microprocessor
or a DEC Alpha chip. Very few people run NT on Alphas, because if
you have a chip that hot, you wouldn't want to cripple it with a high-overhead,
low-reliability OS. By way of contrast, UNIX (Linux) will run quite
nicely on an Intel 80486 with 16 Mbytes of RAM. UNIX will run on
a PDP-11 with 16 Kwords of RAM - because it was written on a PDP-11 with
16 Kwords of RAM! Try running Windows/NT 4 on an 80486 - it can be
done, eventually. So UNIX scales better on the low end. But
UNIX also scales nicely at the high end. Since UNIX is written in
C, if you invent a new chip that is faster than a B-2 bomber, then all
you need to run UNIX on it is a "C" compiler. The "C" compiler is
written in "C", of course, and you can even cross compile. Do you
have a Cray supercomputer? There's a UNIX for it.
It gets more interesting. Part of scalability is the ability
to move large amounts of data through I/O pipes. Suppose you invent
a new kind of I/O device. It is far, far easier to write a UNIX
device driver than an NT device driver. Even Microsoft admits that,
in the Halloween memo (http://www.opensource.org/halloween2.html):
NatBro points out:
An important attribute to note which has led to volume drivers is the ease
with which you can write drivers for linux, and the relatively powerful
debugging infrastructure that linux has. Finding and installing the DDK,
and trying to hook up the kernel debugger and do any sort of interaction
with user-mode without tearing the NT system to bits is much more challenging
than writing the simple device-drivers for linux. Any idiot could write
a driver in 2 days with a book like "Linux Device Drivers" -- there is no
such thing as a 2-day device-driver for NT.
Why? In part, the UNIX designers wanted to keep it simple, because
they knew device drivers would be written by volunteers.
Examples
Example 1: the Unreliable Luxury Car
Consider an unreliable luxury car. Suppose the seat heater
breaks while the car is under warranty. So you take the car into
the shop and they tell you that they can fix it at no cost to you and they
will have the parts FedEx'd overnight and the car will be ready tomorrow.
But is this really at no cost to you? Your car has been out of commission
for 2 days for a minor repair. Assuming your luxury car cost $36,500
and you amortize it over 5 years (roughly $20 a day), those 2 days still cost
you $40 just for the value of the car that you didn't get to take advantage of. 3
weeks later, the tape deck breaks. Again, the car is still under
warranty, and of course, the tape deck is a completely different subsystem
from the seat heater. Again, you lose 2 days worth of value of the
car. Your car may be fun to drive, easy to park, have lots of legroom
in the backseat, plenty of power - but you can't take advantage of it because
the car is constantly breaking. In fact, if you need your car,
you might buy a second car just to have one when your luxury car breaks.
But now you need a friend to drive you to the garage to drop off and retrieve
the car.
Compare and contrast the experience of owning a luxury car with owning
a Volkswagen or a bicycle. Volkswagens, at least the old ones,
were deliberately kept lightweight and simple. That meant that VW
could use a smaller engine, which didn't need a radiator, which means
that you don't need to worry about freezing, antifreeze, leaks, maintenance
of the radiator and water pump, etc. VWs are very popular in Mexico
because of their simplicity - they don't require a lot of expensive infrastructure
to maintain them. In the United States, we have been unable to perceive
the true costs of luxurious, sophisticated cars.
Bicycles represent an even more extreme solution to the problem of urban
transportation. Bicycles are even cheaper to maintain and operate
than a Volkswagen. Most repairs can be done by the owner with relatively
simple tools in a modest shop (no electronic tools). In an urban
setting with heavy traffic, it is frequently as fast to bicycle as it is
to drive (have you noticed the large number of bicycle messengers?) and
it is easier and faster to fix a bike than a car.
Example 2: name service
The name service consists of two parts: a name resolver and a name server.
The name resolver is part of the operating system; it translates names
into IP addresses and handles other minor chores of a similar nature (Email addresses,
hardware configuration details, service ports, that sort of thing).
The name resolver is very simple.
The name server, by way of contrast, is more complicated. It
gets queried by the name resolvers and does the actual translation work.
It also deals with the issue of not knowing everything by organizing recursive
inquiries far and wide across the net to learn what the resolver is asking for.
It caches the information for speedy retrieval in the future. The
name server is complicated, so it is not part of the operating system,
but rather a task which runs under the operating system. The name
server can fail, and the name service protocols have a mechanism for dealing
with that failure.
Name service is an object lesson in system design - keep the system
critical functions small and minimal, and move the complexity to user space.
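From a program's point of view, "the name resolver is very simple" really does mean one library call; here is a minimal C sketch using the classic gethostbyname() interface, with an example host name the caller can override:

    #include <stdio.h>
    #include <netdb.h>
    #include <arpa/inet.h>

    int main(int argc, char *argv[])
    {
        const char *name = (argc > 1) ? argv[1] : "www.example.com";
        struct hostent *h = gethostbyname(name);   /* one call to the resolver */

        if (h == NULL) {
            fprintf(stderr, "lookup failed for %s\n", name);
            return 1;
        }
        printf("%s -> %s\n", name,
               inet_ntoa(*(struct in_addr *)h->h_addr_list[0]));
        return 0;
    }

All of the recursion, caching, and failure handling described above happens on the server side; the application never sees it.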
So how did Microsoft become so successful anyway?
Given that so much of their software is so bad, how did Microsoft become
one of the most valuable companies and how did Bill Gates and Paul Allen
become the richest men in the world?
Bill Gates had a truly brilliant idea
Bill Gates is a very intelligent man and was a gifted coder back in the
days when he wrote code. He had a genuinely brilliant idea: sell
the operating system at a deep discount in exchange for the customer (the
computer manufacturer) buying a copy for every machine built. Remember
that the cost per copy of software is almost zero, especially because Gates
either allowed or required each manufacturer to provide the documentation,
which was a rehash of Microsoft's documentation. Once Gates had assembled
a large user base, he provided an upgrade path with new Microsoft operating
systems that were (more or less) compatible with what had gone on before.
The situation became ugly when Microsoft deliberately put code in Windows
3.1 which would break Digital Research's DR-DOS. DR-DOS was a compatible
operating system that would do everything MS-DOS would, and more.
It cost less, and it passed every validation suite and ran every application
as fast and as well as MS-DOS did. Until Windows 3.1 came out.
At the time, everybody thought that DR-DOS had done a poor
job of emulating MS-DOS. Now, it has come out under subpoena that
Microsoft deliberately subverted a competitor by technical means (see
PC Week, August 31st, 1998, page 3).
The situation is not unprecedented
Believe it or not, the situation is not unique. Unfortunately, most
computer jocks are too busy studying computers (as I am supposed to
be doing; I'm actually engaging in avoidance behavior) to study history.
So let me give you some examples.
The railroad story
When railroads first became big businesses in the 1860s, there was a period
from about 1870 until about 1900 or so when their wealth and power and
ability to control were greatly feared. The railroads could (and
did) punish their enemies and help their friends through all sorts of tricks.
For example, here in Washington state, the first railroad line to Puget
Sound went from Portland to Tacoma. A branch line was built from
Tacoma to Seattle, which was the largest town in the state then as well
as now. However, the train to Seattle from Tacoma was scheduled to
leave about an hour before the train to Tacoma from back east was due to
arrive. Thus, it took an extra 24 hours to get to Seattle compared to
Tacoma, even though Seattle was only 30 miles away. This kind of
hanky panky was the direct cause of the Granger movement, which is the
foremost political effort of American farmers even today. In response,
the federal government created the Interstate Commerce Commission, or ICC,
which regulated railroads and later trucking and later airlines.
The problem with the ICC of course, is that it went too far in the
other direction. Railroads were not allowed to innovate, were not
allowed to drop unprofitable services, were not allowed to branch into
new markets. Since the end of government regulation in the early
1980s, the railroads have had a renaissance, with more traffic than ever
before. They know that if they become too pernicious, the government will
step in again and mess everything up.
The airline story
In the late 1920s, The Boeing Company became part of a corporation that
made airplanes (Boeing), engines for airplanes (Pratt and Whitney), trained
pilots and then used those pilots to fly airplanes (United Airlines).
In the 1930s, the Roosevelt administration forced the company to split
into the airline, engine manufacturer and airframe manufacturer.
For a while, it seemed like outrageous interference in private enterprise,
especially by a bunch of liberal democrats.
It turned out to be a Good Thing. Boeing became free to sell
airplanes to other airlines besides United, and it sold Clippers, for example,
to Pan Am. Pratt and Whitney became free to sell engines to other
airframe makers and it became the foremost aeroengine manufacturer in the
world. United became free to buy airplanes from other builders, especially
Douglas. Boeing, Pratt and United became dominant companies in their
fields even without being married to one another; perhaps because they
divorced one another.
Your point being....
A split Microsoft might be a stronger group of independent companies than
an integrated Microsoft.
-
The OS division would be free to concentrate on making their OSes really
rock solid, because then they could optimize the OS instead of the combination
of the OS and the apps that run on top of the OS.
-
The applications division would be free to concentrate on making their
applications run on lots of different platforms. Why not a Word for
Linux? Word for Solaris? Word for HP-UX? Word for MVS-XA?
-
The games division would be free to concentrate on making games for a wider
target audience. Why not a Flight Simulator for the Nintendo 64?
Other people feel the same way
See, for example,
-
Diomidis Spinellis. A critique of the Windows application programming interface.
Computer Standards & Interfaces, 20:1-8, November 1998.
-
Barton P. Miller, David Koski, Cjin Pheow Lee, Vivekananda Maganty, Ravi Murthy,
Ajitkumar Natarajan, and Jeff Steidl. Fuzz Revisited: A Re-examination of the
Reliability of UNIX Utilities and Services. UW-Madison Computer Sciences
Technical Report CS-TR-95-1268. Unfortunately, this report is in PostScript
and not in HTML, so it is hard to find and hard to view.
-
NT Religious Wars: Why Are DARPA Researchers Afraid of Windows NT? By Mark Berman.
-
Is NT paranoid or is Unix out to get it? We explain our theory about the timely
advantages of Unix in graphical terms. By Nicholas Petreley.
-
Will Windows NT develop into a super-OS or an unmanageable disaster? By
Nicholas Petreley.
-
http://apostols.org/tools.html
has some tools that will crash Windows by remote control. In some cases,
firewalls, routers, or patches will prevent the problem - are you safe from
these attacks? (Don't run these while doing critical work!)
-
The Internet knows all: check out the Operating
System Sucks-Rules-O-Meter, which uses Digital's
AltaVista to measure what computer experts the world over think about
various operating systems. There's no escaping it: Windows sucks, and Linux
rocks.
-
John Tibbetts and Barbara Bernstein wrote a side-by-side comparison
of Windows/NT and Linux in the June 28, 1999 issue of Information Week.
Summary and conclusion
The current situation, where Microsoft has the planet by the scrotum, is
clearly untenable and will end soon, either by government fiat or simply
because IS managers will start looking for Anybody But Microsoft solutions
(this has already begun). The trade press keeps reporting on the
death of the Network Computer and the death of Java; in fact, reports of
the demise of these Microsoft alternatives are vastly exaggerated. Even
the Redmondians are worried about Linux, as well they should be: Linux is better.