THROUGH FOUR decades of accelerated change, the computer industry has known one constant: the switch. From the mechanical relay to the vacuum tube to today's million-transistor chips, the binary switch -- on/off, yes/no, 0/1 -- has been the foundation of the entire structure.

By making those switches ever smaller, we have enormously increased the speed and power of individual chips, reducing the distance that signals have to travel and the amount of current required. But now the science of miniaturization is reaching the limits of conventional technology. New methods will be necessary to fabricate super-fast, multi-million-transistor chips to meet the computing needs of the '90s and to create the next generation of billion-transistor devices expected just after the turn of the century.

The growing demands of complex image-processing, communications, data management and scientific research will require unprecedented numbers of switches operating in the fastest possible time. For example, one reason for the current confusion over global warming is that even our best supercomputers aren't up to the job. Models of Earth's environment are so complex, and the variables so numerous, that they exceed existing capacities, forcing scientists to work with oversimplified simulations. But increasingly compact devices will also be needed to build the industrial equipment and consumer goods crucial to an information-based society: wristwatch telephones and pocket-sized computers that receive and transmit messages; dashboard screens providing drivers with detailed maps of cities; 3-D images for medicine and manufacturing; and new display technologies for the arts and education, to cite only a few.

In the future, as in the past, miniaturization will involve three fundamental problems: deciding what to make smaller; learning how to make it smaller; and making it work.

Small Is Beautiful

The easiest way to make a smaller integrated circuit is to shrink an existing design (see illustration), thus making its switches faster because the electrons have a shorter distance to travel. Since the early '70s, engineers have used a method called "scaling" to achieve ever-greater proportional reductions in size. As a result, sophisticated chips now have line widths narrower than 1 micron (a millionth of a meter, or about 1/75th the thickness of a human hair). The question vexing the industry today is: How far can this steady progress continue?
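
As a rough illustration of how scaling works -- with round, assumed numbers rather than the specifications of any actual chip -- the short sketch below applies the classic first-order scaling rules: shrink every dimension by a factor k, and switching delay drops by roughly k while the number of devices per unit area grows by k squared.

```python
# A minimal sketch of "constant-field" scaling. The starting values are
# illustrative, not measurements of any particular chip.

def scale_device(line_width_um, delay_ps, density_per_mm2, k):
    """Apply first-order scaling by a factor k > 1 to a device design."""
    return {
        "line width (um)": line_width_um / k,        # dimensions shrink by k
        "switching delay (ps)": delay_ps / k,        # shorter paths -> faster switches
        "devices per mm^2": density_per_mm2 * k**2,  # area per device shrinks by k^2
    }

# Example: shrinking a 1-micron design by a factor of 4 toward 0.25 micron.
print(scale_device(line_width_um=1.0, delay_ps=200.0, density_per_mm2=40_000, k=4))
```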

One difficulty involves keeping individual transistors insulated from one another. When line widths are reduced to around one-tenth of a micron, the insulator thickness scales down to 25 angstroms, or about seven atomic layers. At those dimensions, electrons can tunnel directly through the insulator, degrading operation.
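
A back-of-the-envelope check, assuming (for illustration only) that the gate insulator stays proportional to the line width and that one atomic layer is roughly 3.5 angstroms thick, shows how quickly the insulator approaches a handful of atoms:

```python
# Illustrative arithmetic for the insulator problem described above.
# Assume the insulator is roughly 250 angstroms thick at a 1-micron line
# width and scales in proportion as the line width shrinks.

LAYER_SPACING_ANGSTROMS = 3.5   # rough thickness of one atomic layer

def insulator_layers(line_width_um, oxide_per_micron_angstroms=250.0):
    thickness = oxide_per_micron_angstroms * line_width_um  # proportional scaling
    return thickness, thickness / LAYER_SPACING_ANGSTROMS

for width in (1.0, 0.5, 0.1):
    t, layers = insulator_layers(width)
    print(f"{width:4.1f}-micron lines -> oxide ~{t:5.0f} A (~{layers:4.1f} atomic layers)")
# At 0.1 micron the insulator is ~25 A -- only about seven atomic layers,
# thin enough for electrons to tunnel straight through.
```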

The impurities ("dopants") in the transistor's channel, whose concentration must be tailored to maintain the proper electric field intensity in the device, can also prove problematic. At a certain point, the number of impurity atoms in each channel becomes so small that random fluctuations in that number -- which are unavoidable even under the strictest manufacturing process controls -- are bound to make some of the millions of transistors inoperable, which could render the entire chip useless.
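
The statistics behind this worry can be sketched with assumed numbers: treat the dopant count as a Poisson process, so its random spread is roughly the square root of the count. The doping level and channel size below are illustrative, not those of any particular device.

```python
# Count the dopant atoms in a scaled-down channel and compare the
# statistical spread (~sqrt(N)) with the count itself.
import math

def dopant_statistics(channel_nm, doping_per_cm3):
    volume_cm3 = (channel_nm * 1e-7) ** 3          # cube-shaped channel, nm -> cm
    n = doping_per_cm3 * volume_cm3                # expected dopant count
    return n, math.sqrt(n) / n * 100               # count, percent fluctuation

for size in (1000, 250, 100):                      # channel edge in nanometers
    n, spread = dopant_statistics(size, doping_per_cm3=1e17)
    print(f"{size:5d}-nm channel: ~{n:9.0f} dopants, ~{spread:5.1f}% random spread")
# As the channel shrinks, the handful of remaining dopants fluctuates by a
# large fraction of its own value, so some transistors land out of spec.
```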

Ohm's law might also become an obstacle, since it decrees that as the cross section of a conductor decreases, its resistance increases. The higher resistance, in turn, slows down the signals propagating through the chip, reducing performance.
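
A rough numerical sketch, using an approximate resistivity for aluminum and otherwise invented dimensions, shows why: length shrinks by the scaling factor but cross-sectional area shrinks by its square, so resistance climbs.

```python
# Resistance of a wire is R = rho * L / A. Shrink every dimension by k and
# L falls by k while A falls by k^2, so R rises by k. Values are illustrative.

RHO_OHM_M = 2.7e-8          # resistivity of aluminum, ohm-meters (approximate)

def wire_resistance(length_um, width_um, thickness_um):
    area_m2 = (width_um * 1e-6) * (thickness_um * 1e-6)
    return RHO_OHM_M * (length_um * 1e-6) / area_m2   # ohms

# Scale a 1000-micron-long, 1 x 1 micron line down by factors of 2 and 4.
for k in (1, 2, 4):
    r = wire_resistance(1000 / k, 1 / k, 1 / k)
    print(f"scale 1/{k}: ~{r:7.1f} ohms")
# Resistance grows in step with the scaling factor -- and with it the delay
# of signals propagating through the chip.
```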

Can these and other problems be avoided -- or at least mitigated -- as circuits get smaller? Definitive solutions are not yet at hand, but various ideas and techniques are under investigation. For example, researchers are developing new semiconductor materials, such as gallium arsenide, in addition to silicon, as well as metallic interconnection lines with lower resistance and insulators that can withstand higher electric fields.

Another potentially attractive option is operating the circuits at liquid nitrogen temperature (77 Kelvin, or about -321 degrees Fahrenheit). The reduction in thermal vibrations of the atoms increases the speed with which charge carriers can move in a device, and reduces the resistance in the metallic conductors. Low-temperature operation has the additional virtue of decreasing extraneous "noise" in the circuits.
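
Two of those benefits can be put in rough numbers. The sketch below converts 77 Kelvin to Fahrenheit and applies the standard thermal-noise formula to an assumed resistance and bandwidth; the figures are illustrative only.

```python
# Johnson (thermal) noise: V_rms = sqrt(4 * kB * T * R * df). The resistance
# and bandwidth below are assumed values chosen purely for illustration.
import math

KB = 1.380649e-23            # Boltzmann constant, joules per kelvin

def kelvin_to_fahrenheit(t_k):
    return t_k * 9.0 / 5.0 - 459.67

def thermal_noise_volts(t_k, resistance_ohms=100.0, bandwidth_hz=1e9):
    return math.sqrt(4 * KB * t_k * resistance_ohms * bandwidth_hz)

for t in (300.0, 77.0):      # room temperature vs. liquid nitrogen
    print(f"{t:5.1f} K ({kelvin_to_fahrenheit(t):6.1f} F): "
          f"noise ~{thermal_noise_volts(t) * 1e6:5.1f} microvolts")
# Cooling from 300 K to 77 K cuts thermal noise by about a factor of two,
# since it scales as the square root of temperature.
```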

Finally, new circuit and device designs can lead to higher densities by overcoming other impediments to miniaturization. The electrical charges needed to activate switches have to be stored nearby; but that takes up valuable horizontal chip "real estate." Among the innovative approaches are "trench" capacitors, which make use of the device's internal vertical side walls to store a charge, as well as the use of multiple layers to run the metallic interconnection lines -- thus taking advantage of the third dimension to save space.
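
The geometric payoff can be sketched with made-up dimensions: a trench stores most of its charge on vertical side walls, so only its small opening consumes surface area.

```python
# Rough geometry behind the "trench" capacitor idea. All dimensions are
# illustrative, not those of any actual memory cell.

def planar_footprint(area_needed_um2):
    """A planar capacitor's footprint is simply the plate area it needs."""
    return area_needed_um2

def trench_footprint(area_needed_um2, opening_um=1.0, depth_um=5.0):
    """A square trench stores charge mostly on its four side walls."""
    sidewall_area = 4 * opening_um * depth_um + opening_um ** 2
    assert sidewall_area >= area_needed_um2, "trench too shallow for this capacitance"
    return opening_um ** 2           # only the opening consumes surface area

need = 20.0                          # square microns of storage area required
print("planar :", planar_footprint(need), "um^2 of chip surface")
print("trench :", trench_footprint(need), "um^2 of chip surface")
# The same storage area fits in a twentieth of the surface real estate by
# exploiting the vertical (third) dimension.
```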

If past experience is a reliable guide, these obstacles are not insuperable. "In spite of the three decades through which it has endured," says a leading physicist, "one feels that there must be a limit to miniaturization of semiconductor devices. However, all forecasts of a limit, or even of a slowing down of the rate of miniaturization, have proved too conservative. Nevertheless, a device must surely contain more than one atom."

But physical limits are only half of the story. It is equally important to overcome economic and technological limits in order to translate laboratory results into practical, large-scale manufacturing.

The Next Wave

How do you fabricate sub-micron devices, whose components measure a few thousandths the width of a human hair? How extendible are present-day semiconductor manufacturing processes?

Though there are more than 200 steps involved in making a chip, those involving lithography -- the process of transferring a circuit pattern onto a silicon wafer, much like a stencil -- are pivotal. The most important criteria are resolution, overlay, throughput and yield.

Resolving power determines the smallest feature size that can be defined on a wafer. Resolution is limited primarily by the wavelength of the light used, because it is diffracted -- spread out -- as it passes through the transparent openings in the mask. (See illustration.)
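
A common rule of thumb (the Rayleigh criterion) captures this: the smallest printable feature is roughly a process-dependent constant times the wavelength, divided by the numerical aperture of the optics. The constants below are assumptions, and X-ray systems print by proximity rather than through lenses, but the sketch conveys why shorter wavelengths print finer lines.

```python
# Rayleigh-style estimate of minimum feature size: k1 * wavelength / NA.
# The numerical aperture and k1 below are assumed, illustrative values.

def min_feature_um(wavelength_nm, numerical_aperture=0.5, k1=0.7):
    return k1 * (wavelength_nm / 1000.0) / numerical_aperture  # microns

sources = {
    "mercury g-line (436 nm)": 436,
    "deep ultraviolet (248 nm)": 248,
    "soft X-ray (~1 nm)": 1,
}
for name, wavelength in sources.items():
    print(f"{name:28s} -> ~{min_feature_um(wavelength):6.3f}-micron features")
# Shorter wavelengths diffract less, so they can define finer lines -- the
# basic argument for moving from visible light toward X-rays.
```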

An integrated circuit is built layer by layer, so it is crucial that subsequent layers be accurately aligned with those beneath. Overlay accuracy is a measure of this alignment, and it is no less critical than resolution.

Historically, optical lithography has been the mainstay of semiconductor manufacturing. Refinements in optical components and mask-making, the use of shorter-wavelength (ultraviolet) light, and the development of improved photoresists and etching techniques have repeatedly extended the life of optical lithography. The process has evolved from the 25-micron line widths of the early 1960s to the 0.7-micron lines in the latest four-megabit chips.

As in all questions of limits, the exact feature size at which optical lithography will become impractical is not universally agreed upon, but 0.25 micron is probably a reasonable number.

The finest lines to date have been created with electron-beam tools, which expose the resist with a finely focused beam of electrons scanned across the wafer, thus obviating the need for a mask. In 1988, experimental circuits with 0.1-micron lines were successfully fabricated using e-beam lithography. The devices were able to switch in 13 picoseconds (trillionths of a second), making them not only the smallest but also the fastest silicon transistors ever made.

E-beam lithography, however, for all its virtues, is slow compared to full-wafer projection. It traces each line separately rather than projecting an entire pattern at once, and hence is unsuitable for large-volume production of, for example, memory chips. (A related technique that focuses beams of ions -- atoms that have lost or gained electrons -- suffers from the same problem.) Nonetheless, in situations where small quantities of many different designs must be rapidly processed, e-beam is the tool of choice, and will likely remain unchallenged in the foreseeable future.
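
An illustrative throughput estimate -- with hypothetical pixel counts and writing rates, not the specifications of any real tool -- shows the scale of the problem:

```python
# Compare serial e-beam writing with flashing a whole pattern at once.
# Wafer size, feature size, pixel rate, and exposure time are all assumed.
import math

def ebeam_wafer_hours(wafer_diameter_mm=150.0, feature_um=0.25, pixel_rate_hz=10e6):
    area_um2 = math.pi * (wafer_diameter_mm * 1000.0 / 2.0) ** 2
    pixels = area_um2 / feature_um ** 2       # one exposure spot per feature-sized pixel
    return pixels / pixel_rate_hz / 3600.0    # hours of writing per wafer

def projection_exposure_seconds():
    return 1.0                                # whole pattern flashed at once (assumed ~1 s)

print(f"e-beam, one 150-mm wafer : ~{ebeam_wafer_hours():.1f} hours of writing")
print(f"projection, same wafer   : ~{projection_exposure_seconds():.0f}-second exposure per field")
# Tracing every feature serially costs hours per wafer -- fine for a few
# prototypes, hopeless for printing millions of memory chips.
```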

But the most promising successor to optical lithography, according to many in the industry, is X-ray lithography. (See box.) X-rays are an extremely energetic form of light with shorter wavelengths than visible or even deep ultraviolet light. In practice, this difference means that an X-ray lithographic system bears little resemblance to the tools used for optical lithography.

The major difference is the light source itself. X-rays can be produced in a variety of ways, but lithography requires a relatively intense beam of X-rays to expose the resist rapidly and achieve high throughput. One such source is an electron storage ring, also known as a synchrotron, an instrument originally developed for elementary-particle research. The X-rays are the serendipitous products of the electrons accelerating around the ring. Since 1979, IBM scientists have been carrying out experiments at the Brookhaven National Laboratory to prove the practicality of X-ray lithography for mass-producing future generations of chips. And within a couple of years, IBM will have completed construction of a production synchrotron at its East Fishkill, N.Y., facility.

This is the only production X-ray lithography facility in the United States, with an estimated cost of $1 billion by the time it reaches full production. In contrast, Japan has some 20 such projects underway, relying on nearly a dozen synchrotrons. The Japanese efforts are cooperative ones involving major Japanese semiconductor manufacturers and the government. Europe, through its Joint European Submicron Silicon Initiative (JESSI), is likewise working cooperatively and investing heavily in this vital technology.

Meanwhile, the tools-and-materials sector of the U.S. semiconductor industry has declined dramatically over the past decade. Indeed, Japan's annual investment in semiconductor R&D now substantially outpaces that of the United States.

The United States needs to institute a national X-ray lithography program; to that end, IBM has offered to share its East Fishkill center with government and industry.

Competition in the computer industry remains inextricably linked with miniaturization. The companies and the countries that achieve the breakthroughs are the ones that will prosper. Both American industry and the U.S. government have their work cut out for them if we are to stay in the race.

John Armstrong is IBM's vice president for science and technology.