When computers were in their infancy, a common frustration was the error messages they delivered instead of results. One frequent error that I and many of my engineering colleagues made (though we had no excuse to repeat it as often as we did) was to input a number as a simple integer when the computer was looking for a number with a decimal point, or vice versa. Unlike us humans, who knew that if we typed 1 we meant 1.00, the computer did not recognize that a dollar was the same as 100 pennies. So the machine would give us a cryptic error message, which left us to figure out that we had omitted the decimal point.

Today's computers are more tolerant, with even the cheapest toy calculator understanding that when we punch in 1 we do mean 1.00. The most clever computer programs can even correct some of our errors. They can tell us the weather in washington, even if we do not capitalize it. They can automatically change a "teh" into "the" without requiring us to so much as backspace. But they won't warn us that we have mistakenly added an extra zero to the square footage of our basement rec room, and that the computer's calculation for the cost of new vinyl flooring is thus 10 times higher than it should be.

As computers have become more and more sophisticated, we have tended to eliminate or reduce the extent of human intervention. But as the Y2K problem illustrates, there is no substitute for common sense--unlike a computer, no one living in the late 20th century would assume that 00 stands for 1900 instead of 2000. The human mind would immediately question whether anyone's rec room (except Bill Gates's, perhaps) could be 5,000 square feet. But the computer doesn't know a basement from a basketball court. So, based on the erroneous calculation, we might decide to forgo the new vinyl flooring, and wash and wax those old linoleum tiles instead.

Computers are idiot savants. They do everything by the numbers, but if we do not give them exactly the right numbers for the problem at hand, they may not give us anything close to the right answer. Had we used an older model of portable computer known as a slide rule, which ran on human brain power, we would not likely have miscalculated how many tiles were needed to cover a floor. The slide rule required us to supply the decimal point or the right number of zeros ourselves, and so to keep a rough idea in our heads of how big the answer ought to be.

If an extra zero or a misplaced decimal point can lead to a wrong calculation, the transposition of numbers or letters can lead to a wrong location. My son was scheduled to fly from Champaign, Ill., to Portland, Maine, and back, making connections in Detroit each way. The electronic ticket issued to him had his itinerary as CMI-DTW-PWM-DTW-MCI. A clerk's typo had transposed the letters of the airport code for Champaign (CMI) into the code for Kansas City (MCI) on the final leg of his trip. Of course, the computer did not "suspect" that he was being misrouted. Fortunately, the error was caught in time by my wife--not from the electronic ticket, which existed only in the computer, but from the printed receipt that arrived only days before my son's trip was to begin.
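A simple continuity check, the sort of common sense the booking computer lacked, would have flagged that ticket. The sketch below is purely hypothetical (no airline's reservation code is public here); it only shows how little it takes to notice that a round trip does not end where it began.

```python
# Hypothetical sanity check for a round-trip itinerary, not any airline's
# actual reservation software: a round trip should end at its starting airport.

def check_round_trip(airports):
    """airports is the ticketed sequence of codes, e.g.
    ['CMI', 'DTW', 'PWM', 'DTW', 'MCI']."""
    problems = []
    if airports[-1] != airports[0]:
        problems.append(
            f"round trip starts at {airports[0]} but ends at {airports[-1]}"
        )
    return problems

# The misrouted ticket: the last code should have been CMI, not MCI.
print(check_round_trip(["CMI", "DTW", "PWM", "DTW", "MCI"]))
# ['round trip starts at CMI but ends at MCI']
```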

If computerized ticketing can book us places we don't want to go, computerized library catalogs can bring us books we don't want to read. Many of those books are kept in off-site storage, so it can take a day or two before we learn we have called for the wrong book. It is little consolation for the librarian to tell us that clerks transposed numbers and bibliographical data in the process of converting the old card catalog into an electronic version. Worse, library users who rely on the electronic database for their citations, rather than on the physical books, end up propagating those errors, which may lead future scholars still further astray.

Some human errors, aided and abetted by computers, have much more serious consequences than living with an ugly basement floor or losing a book in a library. Because computers control so much of the operation of the European-built A-320 Airbus, it is known as a fly-by-wire airplane. In 1992, as an Airbus was nearing Strasbourg, France, the pilot determined that a setting of 3.3 degrees down from the horizontal was appropriate for the landing approach. Investigators have speculated that the pilot might have punched the wrong key and entered a 3,300-foot-per-minute descent mode into the computer. This showed up on the display as 33, since such descent modes were displayed without the final zeros. Looking at the display, the pilot could have misread the 33 as 3.3--seeing what he thought he had entered--and so assumed that everything was in order. The incorrect setting caused the plane to descend too fast. It crashed into the mountains short of the runway, killing 87 of the 96 passengers and crew members on board, including the two pilots.
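The accident investigation described two autopilot settings that looked alike on the cockpit readout. As an illustration only, with made-up formatting rules rather than the A-320's actual display logic, the sketch below shows how a readout that drops units and trailing zeros can make 3.3 degrees and 3,300 feet per minute indistinguishable to the eye.

```python
# Illustrative sketch only, not the real Airbus display software: two very
# different descent settings that render as the same two digits once units
# and trailing zeros are dropped.

def display_flight_path_angle(degrees):
    # Angle shown in tenths of a degree: 3.3 degrees -> "33"
    return f"{round(degrees * 10):02d}"

def display_vertical_speed(feet_per_minute):
    # Descent rate shown in hundreds of feet per minute: 3300 -> "33"
    return f"{round(feet_per_minute / 100):02d}"

print(display_flight_path_angle(3.3))   # "33"
print(display_vertical_speed(3300))     # "33" -- identical on the screen
```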

In structural engineering, it is important to add together the effects of forces as diverse as gravity, traffic, water, wind, temperature and earthquakes. In the late 1970s, a flaw was discovered in a computer program that had been used for years to assess the ability of nuclear power plants to withstand such forces. The program contained a minus sign where a plus sign belonged. Thus, where stresses should have been added, they were subtracted, and the calculated total stress came out lower than a real earthquake would actually produce.
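To make the consequence of that single character concrete, here is a minimal sketch with hypothetical numbers, not the certification program itself, showing how a stray minus sign understates the combined stress and lets a plant appear to pass a safety check it should fail.

```python
# Hypothetical figures, not the actual plant-certification program: combining
# the stress from ordinary loads with the stress from an earthquake.
# One wrong sign and the total looks comfortably low.

ORDINARY_LOAD_STRESS = 12_000   # psi, made-up figure
EARTHQUAKE_STRESS = 7_000       # psi, made-up figure
ALLOWABLE_STRESS = 15_000       # psi, made-up figure

buggy_total = ORDINARY_LOAD_STRESS - EARTHQUAKE_STRESS    # minus where plus belonged
correct_total = ORDINARY_LOAD_STRESS + EARTHQUAKE_STRESS

print(buggy_total, "psi, passes?", buggy_total <= ALLOWABLE_STRESS)      # 5000 psi, True
print(correct_total, "psi, passes?", correct_total <= ALLOWABLE_STRESS)  # 19000 psi, False
```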

The implication was, of course, that the safety of all plants certified by this program was called into question. Fortunately, the error proved not to be critical enough to make a difference in overall safety, but it is reasonable to ask if other such errors lurk undiscovered. If so, they will not be caught by computers, but by some engineer who becomes skeptical of a particular calculation and looks into the computer program that performed it.

We can't blame all accidents on computers, of course. Sometimes we humans have no one to blame but ourselves. The post-accident investigation of the circumstances leading to the fatal 1986 launch of the space shuttle Challenger provided some insight into the nature of engineering thinking. It soon became clear that the engineers most directly involved with the technical performance of the booster rockets warned against launching the shuttle in the cold January weather. They pointed to problems that had been experienced with O-rings, and to a correlation of those problems with temperature. There was, however, no specific mathematical formula, physical law or computer simulation that the engineers could present to prove that Challenger would fail if launched in the cold; nor could they prove to themselves or to others that the booster rocket and external fuel tank would definitely leak and explode.

The managers, on the other hand, felt forced by their position to take into account more than technical details. They felt that they had to worry not only about the engineers' concerns, but about the political implications of postponing the launch of a space shuttle mission scheduled to be mentioned in President Reagan's imminent State of the Union address. The first teacher to be sent into space was aboard the Challenger, and so there was more than the usual interest in keeping the flight on schedule. Besides, so many managers and politicians must have thought, there had been two dozen successful shuttle flights and the technology appeared to be routine.

In the final analysis, the untrammeled optimism of the managers prevailed over the considered pessimism of the engineers, and the tragic outcome resulted. The subtle design errors that had been manifesting themselves in the burned O-rings--and which had given the engineers pause--were only fully understood after the fatal accident proved the detail to be a critical one.

It is unlikely under the circumstances that any computer model of the booster rockets could have been more persuasive to the managers than the engineers were. The space program's computers may have guided us to the moon and back, but the Challenger engineers brought the human element to their conclusions: they based their judgments on experience, common sense and intuition. Unfortunately, the managers did not get the message.

Computers do not make errors in judgment, because they do not make judgments in the human sense. Neither do they judge our errors; they merely flag those that they recognize as such. We are on our own when it comes to catching truly human errors. Computers are careful and obedient partners in an advanced technological society, but they cannot be relied upon to catch all of our mistakes or to do all our thinking for us. They will do what we tell them, but they will seldom do more.

We must rely upon ourselves to be sure that we are giving our computers the right input and to figure out for ourselves the correctness of what they give us back. A computer given an extra zero of input will invariably deliver an extra zero of output. If we forget such elementary rules about computer use, we are likely to make the biggest mistake of all--expecting error messages that will never be delivered.

Henry Petroski is A.S. Vesic Professor of Civil Engineering at Duke University. He is the author of "To Engineer Is Human" and other books. His latest, "The Book on the Bookshelf" (Knopf), will be published in September.