A Reader in the Translation of Scientific and Technical Literature

Approved by the institute's editorial and publishing board as methodological guidelines

Moscow 2006


Digital Products May Drive Recovery

By Junichi Miura

Yomiuri Shimbun Staff writer

 

Thin televisions, digital versatile disc recorders and digital cameras - referred to as the three most important digital products - are expected to play the role of savior of the electronics industry as domestic home appliance makers face tough foreign competition.

Sales of products based on state-of-the-art digital technology, including television sets with liquid crystal or plasma displays, have been increasing rapidly.

Shipments of thin televisions are expected to reach 2.5 million units this year, marking a more than twofold increase over last year's 1.2 million units. Sales of DVD recorders, which not only play back but also record on digital discs, are also expected to rise. Sales of the recorders are forecast to rise to 1.25 million units this year, up from 620,000 in the previous 12 months.

Demand for both products jumped during last year's soccer World Cup finals.

Demand for the devices is expected to get a further boost with the start of terrestrial digital broadcasting in Tokyo, Nagoya and Osaka in December.

Meanwhile, global sales of digital cameras rose to 24 million units last year, compared with sales of 23 million units for conventional cameras. It was the first time that digital cameras had outsold film-based models.

The market for digital cameras has been boosted by the large number of models now available.

One factor common to all these digital products is that they are based on technological standards that are exclusive to Japan and tougher than those used anywhere else in the world.

For the manufacture of plasma displays, for example, highly sophisticated fluorescent screens with a tolerance of less than one millimeter per three square kilometers are required.

The country's expertise in traditional cameras and lenses also gave it a head start in developing digital cameras. And in the market for DVD recorders, Japanese firms enjoy a strong position in the production of the optical pickups essential for the reading and recording of digitally stored data.

These factors have helped domestic firms to play a leading role in the market for such products. It also means digital services are seen as the industry's rising stars, replacing conventional products such as refrigerators and washing machines, imports of which are increasing.

The three digital durables also have been contributing to a sharp recovery in profits. Matsushita Electric Industrial Co. and Sharp Corp., which both began making thin TVs and other products earlier than most firms, posted double-digit increases in profits for the April-June period this year.

But there are still some causes for concern. Thin televisions with 50-inch screens used to cost nearly ¥1 million. But the price of the sets has been falling rapidly, with the cost of some sets dropping to as little as ¥10,000 per inch. There are also single-lens-reflex digital cameras priced at less than ¥200,000. Lower prices mean sales of these products are rising, but margins are falling. For many companies this is becoming a high-volume, low-profit business.

In addition, companies such as Toshiba Corp. and Hitachi Ltd., which held off from entering the market for many digital products, are expected to join the competition this autumn. New products from these companies mean competition is certain to become more intense.

Profits from the export of digital products may also not be as high as was hoped. This is due to differences in the speed of digitalization among export markets.

Digital technology of this kind may not transform everyday life, even if it does open up new possibilities for the manipulation of visual images. Its impact may not be as dramatic as was the advent of television itself.

That makes it all the more important for manufacturers to appeal to customers in terms of the higher quality offered by digital products over their conventional counterparts.


Cell Phones: Physical

 

The camera cell phone tops many teens' wish lists. That's because it's handy and discreet for snapping and transmitting images to the Internet or other cell phones. Unfortunately, this gadget is generating worldwide concerns about privacy loss. Camera phones have caused outcries from unsuspecting people who found their images - many of them intrusive - posted on the Web.

To make a snapshot, the phone houses a tiny digital camera. Instead of film, a digital camera uses a computer chip called a charge-coupled device (CCD). A matrix sits on the CCD. "It's like a chessboard with many rows and columns of tiny dots," explains Saswato Das of Bell Laboratories. On each dot, or pixel, is a photosensitive (light-sensitive) element. When the lens zeros in on, say, your dog, the pixels react to light bouncing off "Spot" by generating an electrical signal. "Basically, each pixel records one part of the object, and each dot generates a different intensity of charge," says Das. "But everything adds up to form a picture." To send the image, the bundles of electrical charges are converted into a digital signal, or a series of on-off pulses. The information is then transmitted via radio waves.
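
To make the conversion step above concrete, here is a minimal sketch (in Python, with invented pixel values; it is not actual camera-phone firmware) of how a grid of pixel charge intensities can be quantized and strung together into the kind of on-off pulse sequence the article describes.

```python
# Illustrative sketch only: turning a grid of pixel charge readings into
# a stream of on-off pulses, as described in the passage above.

def pixels_to_bitstream(charges, levels=256):
    """Quantize each pixel's charge (0.0-1.0) to an 8-bit value and
    concatenate the bits into a single on-off pulse sequence."""
    bits = []
    for row in charges:
        for charge in row:
            value = min(int(charge * levels), levels - 1)   # analog charge -> digital level
            bits.extend((value >> i) & 1 for i in reversed(range(8)))  # level -> 8 pulses
    return bits

# A hypothetical 2x2 patch of the CCD: each number stands for the charge
# intensity generated by light falling on that pixel.
patch = [[0.10, 0.55],
         [0.80, 0.99]]

pulses = pixels_to_bitstream(patch)
print(len(pulses), "pulses:", "".join(str(b) for b in pulses))
```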

To stop sneaky shutterbugs, many health clubs have banned cell phones from their changing rooms. The South Korean government's tactic: New-model phones must emit a signal when a picture is snapped.

Do you think camera cell phones should be banned in some public places? If so, why?

 

HOW DO CELL PHONES WORK?

Digital cell phones are small two-way radios. When you speak into the phone, sound waves cause a disk inside the phone's microphone to vibrate. Electrical components "read" the disk's vibrations and turn words into an electrical signal. A microchip then converts the electrical signal into a digital signal - a series of on-off pulses. Then …

SEND

The cell phone's antenna transmits the digital signal via radio waves - invisible light waves - that move through the air. Cell phones use a specific range of radio frequencies. (Frequency measures the number of waves per second.)

CONNECT

The radio waves travel to a base station, or radio tower, which serves one geographical region called a cell. Each radio tower is "tuned in" to the frequencies cell phones use, so it "hears" calls as they come in.

RELAY

The base station forwards the signal to the closest switching office, where powerful computers route calls. The area code tells computers where to direct the call. If it's going to a wired phone, the switching office sends the call to the local phone network over phone lines. If the call is going to another cell phone, the switching office forwards the call to another base station or to the satellite via radio waves.
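
As a toy illustration of the routing decision described above (not a real switching-office system; the function and values are invented), the following sketch forwards a call either to the local phone network or toward another base station or satellite, depending on the destination type.

```python
# Toy sketch of the switching-office routing decision described above.

def route_call(area_code: str, destination_type: str) -> str:
    """Decide where the switching office forwards a call."""
    if destination_type == "landline":
        # Wired phone: hand the call to the local phone network over phone lines.
        return f"send via local phone network for area code {area_code}"
    elif destination_type == "cell":
        # Another cell phone: forward to a base station (or a satellite) by radio.
        return f"forward via base station/satellite toward area code {area_code}"
    else:
        raise ValueError("unknown destination type")

print(route_call("212", "landline"))
print(route_call("310", "cell"))
```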

BEAM

Communications satellites in low orbit around Earth serve as a "constellation" of base stations and switching offices in the sky. The satellites pass calls to other satellites, to ground stations, or directly to high-powered cell phones.

RING, RING

The satellite passes the call back to a switching office on Earth. Computers locate the receiving cell phone by sending out a signal to each cell using a specific frequency. When the phone picks up the signal, it rings.

(Science World)


Microelectronics Grows Up

One-step wonders

The most straightforward approach is that of Shoji Maruo and Koji Ikuta, at Nagoya University in Japan. They have taken an existing technique called stereolithography, and miniaturised it. Stereolithography works by steering laser beams through a photosensitive liquid that solidifies when it is exposed to enough light. One way to control this process is to use two beams that, when they cross, double the amount of light in a spot, and thus cause the liquid to solidify faster. When this scanning is complete, the 3D structure is released by pouring out the remaining liquid.

The problem with stereolithography for small things is that the liquid tends to solidify even when it is not at the most intense part of the beam. To get round this, Dr Maruo and Dr Ikuta picked a material whose molecules solidify only when hit simultaneously by two photons from different lasers.

Using this technique, the two researchers have managed to build 3D structures with features as small as a wavelength of light (a few hundred nanometres). Electronic and optical applications are restricted by the properties of the material used, but there should be jobs to do in micromechanics. At a microelectromechanical systems conference that is taking place at Interlaken, Switzerland, from January 21st-25th, the researchers will describe their latest creations, which include moving microparts that are driven by light.

At Oxford University, meanwhile, Andrew Turberfield and Bob Denning are using crossed laser beams to create so-called photonic crystals. These are employed to trap photons so that they can be exploited efficiently in, for example, low-power lasers. They consist of two interlaced materials that have different optical properties—in particular, different refractive indices, a measure of how much a substance bends light. The boundaries between the materials create reflective surfaces within the crystal. These can be used to shepherd photons around and keep them under control. Such crystals might serve as components of optical switches which would, among other things, fulfill the role played by transistors in conventional electronics.

In order to make a photonic crystal, though, the materials have to be organised into structures no more than a wavelength of light apart. And layer upon layer of such organisation is required. The technique that Dr Turberfield and Dr Denning are using to do this has several stages. First, a block of material whose chemistry can be altered by light is made. Four laser beams are shone into this block from different directions. Where the beams overlap, they interfere with one another. This is because, in the wacky world of quantum physics, light is wavy as well as particulate. Where the crests or troughs of the waves of two or more beams coincide, bright spots are created; where a crest falls on a trough, a dark spot results.
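
A small numerical sketch of this interference idea, assuming four equal-amplitude plane waves and an arbitrary wavelength (the numbers below are illustrative only), shows how coinciding crests produce bright spots while a crest falling on a trough produces darkness.

```python
# Illustrative sketch: superpose four equal-amplitude plane waves and see
# where the total intensity is bright (crests coincide) or dark.
import numpy as np

wavelength = 0.5                      # micrometres, arbitrary illustrative value
k = 2 * np.pi / wavelength

# Four beam directions (unit vectors in the x-y plane, for simplicity).
directions = [(1, 0), (-1, 0), (0, 1), (0, -1)]

# Sample a small patch of the photosensitive block.
x, y = np.meshgrid(np.linspace(0, 1, 200), np.linspace(0, 1, 200))

# Superpose the four waves (equal amplitude, zero relative phase).
field = sum(np.cos(k * (dx * x + dy * y)) for dx, dy in directions)
intensity = field ** 2

bright = intensity > 0.8 * intensity.max()   # spots that would be chemically altered
print("fraction of the patch exposed as 'bright' spots:", bright.mean())
```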

The material responds to the pattern of spots, undergoing chemical changes at the bright ones. The block is then bathed in a liquid that dissolves the chemically altered part of the material in a way reminiscent of conventional microlithography, while leaving the rest intact. If the second material of the crystal is intended to be air, that is enough. Otherwise, it can be bathed in a liquefied form of the second material until all of the voids created by the laser are filled in.

A third approach to 3D nanofabrication is being pioneered by a collaboration between Junichi Fujita at NEC's Fundamental Research Laboratories and Fujio Shimizu of the University of Tokyo. In this case, too, interference patterns are the tool of choice. However, it is not waves of light that are made to interfere with one another, but waves of matter.

Just as light can be thought of as both particles and waves, so can atoms. That means that holograms - 3D "photographs" usually made with light - can, in principle, be made with atoms. And making a 3D structure at the atomic scale is what nanofabrication is about.

In conventional holography, two beams of light interfere with one another. One beam is reflected from the object that is having its picture taken; the other is a "clean" reference beam. When the two meet, the interference pattern produced, if captured on photographic film, has the ability to change a new reference beam into a new object beam. Thus an image of the object can be reconstructed in light.

Since holography was invented in 1948, there have been two major developments in the field. The first was the realisation that a hologram could be synthesized mathematically in a similar way to the process used to generate computer graphics. It does not, in other words, need to be the image of an actual object. The second was the realization that holograms do not have to be recorded on film. In the right circumstances, they can be created by manipulating the refractive indices of materials in the way that Dr Turberfield and Dr Denning are doing. Or, in the case of beams of atoms, by manipulating electric fields to achieve the same effect.

Dr Fujita and Dr Shimizu have combined these two advances to create a holographic projection system for beams of atoms. By passing such beams through a screen of tiny electrodes that act as holographic "pixels" (the equivalents of the grains in photographic film) a 3D image, in atoms, of the desired object can be built up. If the structure is too complicated to create using a single hologram, then the pattern of charge on the electrodes can be changed to add extra features.

This system, which is still in its earliest stages, has several drawbacks: not least, that not all atoms can easily be patterned in this way, and that the whole process has to take place at very low temperatures. But if it could eventually be industrialised, it would be a triumphant demonstration that even the weirdest scientific arcana can sometimes have practical applications.

(The Economist)


Abstract

The success of the Internet has caused a significant change in the way literature is supplied. Not only have traditional libraries, booksellers and publishing houses begun to enter the new digital world and offer their services world-wide; completely new information providers have also emerged, such as digital libraries, delivery services, citation services, and bibliographic databases. The amount of knowledge and information available world-wide grows every day. However, very few people will be able to access all the information that is available, since the information is distributed among a large number of computers and databases. People need assistance to gain the full benefit of the new situation. The idea presented in this paper is to integrate library and information services worldwide, so that they build a large virtual library with different services, offering the user uniform access to the system.

Introduction

In our modern society, information has become one of the most important and valuable goods. Many professions depend on a steady supply of up-to-date information. At the same time, the success of the Internet has caused a significant change in the way literature is supplied. Now computers all over the world can be linked together, and information can be distributed and traded over the Internet.

Many traditional libraries, booksellers and publishing houses took up the challenge to "go online" and now offer their services world-wide over the Internet. Completely new kinds of information providers have also emerged in recent years, such as digital libraries, delivery services, citation services, and bibliographic databases. The amount of knowledge and information available world-wide grows every day. However, very few people will be able to access all the information that is available all over the world, because this information is distributed among a large number of computers and databases. The sheer number of different libraries and information services is so large that it is nearly impossible to survey. In general, a person looking for information will not be able to find the most appropriate information sources for his or her needs.

Even if a person happens to know some information sources, there is no easy way to evaluate and compare them. They may differ in content, quality of service, accessibility and, not least, price. Even a single information source may be difficult to use to the user's full advantage. Not only the services themselves but also the user interfaces differ, requiring a learning process before the user can exploit the source completely. Different conditions of use, media formats, and language barriers add further complications. Moreover, in a typical scenario a person looking for some information needs to access several different information sources sequentially. As a result, an information search consumes at least a large amount of time; with the commercialization of the Internet, it may also consume a large amount of money. People need assistance to gain the full benefit of the new situation. However, every approach to this problem must cope with the distribution and heterogeneity of the existing libraries and information services. Any plan to gather together the knowledge of the world will fail for technical, economic, legal, and political reasons. Nor is the creation of new information services an alternative, as long as these new services cannot be used together with the existing ones.

Search and meta-search engines provide a way to find services and send queries to several information sources in parallel. However, they do not assist in the evaluation and combination of services or in the interpretation of results. Most activities in the process of literature search and delivery still have to be carried out by the user manually. The idea presented in this paper is to integrate different literature services worldwide, so that they build a large virtual library with a wide range of different, value-added services. This system will provide a uniform interface to the user, which can be adapted to his or her needs. Searching for literature world-wide will then be no more difficult than entering the local library. The work presented in this paper is supported by the German Research Foundation (DFG) as a part of the national German research initiative "Distributed Processing and Delivery of Digital Documents (V3D2)". We proceed as follows: in section 2, we introduce some approaches related to our own idea of an integration environment; in section 3, we point out the challenges of our approach; in section 4, we describe the realization of our system and show how we have addressed the challenges.
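
The parallel querying that search and meta-search engines perform, as mentioned above, can be sketched roughly as follows; the sources and the search function are invented stand-ins, not the integration system described in the paper.

```python
# Toy sketch of meta-search: query several sources in parallel and merge results.
from concurrent.futures import ThreadPoolExecutor

def search_source(source_name, query):
    # In reality this would call a library catalogue, delivery service,
    # citation service, or bibliographic database over the network.
    return [f"{source_name}: result for '{query}'"]

def meta_search(query, sources):
    with ThreadPoolExecutor(max_workers=len(sources)) as pool:
        result_lists = pool.map(lambda s: search_source(s, query), sources)
    # Evaluation, ranking and de-duplication are still left to the user.
    return [hit for hits in result_lists for hit in hits]

print(meta_search("digital libraries", ["OPAC", "citation service", "document delivery"]))
```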


Conclusion

In the initial stage of 300mm manufacturing in China, it is more advantageous for foundries to follow an IDM model by offering few products in large volumes. However, as 300mm manufacturing systems mature, foundries in China should become more flexible in processing large-diameter wafers, and that will lead to greater diversity in technology offered in 300mm fabs. Cost pressures and the need for greater capacity will influence a growing number of advanced IC players to consider manufacturing investments in China. Thus, China's IC industry will move even closer to the forefront of advanced technology and manufacturing capability by decade's end.

(Solid State Technology)


Semiconductor Fabrication

Semiconductor devices have long been used in electronics. The first solid-state rectifiers were developed in the late nineteenth century. The galena crystal detector, invented in 1907, was widely used to construct crystal radio sets. By 1947, the physics of semiconductors was sufficiently understood to allow Bardeen and Brattain to construct the first bipolar junction transistor. In 1959, Kilby constructed the first integrated circuit, ushering in the era of modern semiconductor manufacture.

The impediments to manufacturing large quantities of reliable semiconductor devices were essentially technological, not scientific. The need for extraordinarily pure materials and precise dimensional control prevented early transistors and integrated circuits from reaching their full potential. The first devices were little more than laboratory curiosities. An entire new technology was required to mass produce them, and this technology is still rapidly evolving.

This chapter provides a brief overview of the process technologies currently used to manufacture integrated circuits.

SILICON MANUFACTURE

Integrated circuits are usually fabricated from silicon, a very common and widely distributed element. The mineral quartz consists entirely of silicon dioxide, also known as silica. Ordinary sand is chiefly composed of tiny grains of quartz and is therefore also mostly silica.

Despite the abundance of its compounds, elemental silicon does not occur naturally. The element can be artificially produced by heating silica and carbon in an electric furnace. The carbon unites with the oxygen contained in the silica, leaving more-or-less pure molten silicon. As this cools, numerous minute crystals form and grow together into a fine-grained gray solid. This form of silicon is said to be polycrystalline because it contains a multitude of crystals. Impurities and a disordered crystal structure make this metallurgical-grade polysilicon unsuited for semiconductor manufacture.

Metallurgical-grade silicon can be further refined to produce an extremely pure semiconductor-grade material. Purification begins with the conversion of the crude silicon into a volatile compound, usually trichlorosilane. After repeated distillation, the extremely pure trichlorosilane is reduced to elemental silicon using hydrogen gas. The final product is exceptionally pure, but still polycrystalline. Practical integrated circuits can only be fabricated from single-crystal material, so the next step consists of growing a suitable crystal.

Crystal Growth

The principles of crystal growing are both simple and familiar. Suppose a few crystals of sugar are added to a saturated solution that subsequently evaporates. The sugar crystals serve as seeds for the deposition of additional sugar molecules. Eventually the crystals grow to be very large. Crystal growth would occur even in the absence of a seed, but the product would consist of a welter of small intergrown crystals. The use of a seed allows the growth of larger, more perfect crystals by suppressing undesired nucleation sites.

In principle, silicon crystals can be grown in much the same manner as sugar crystals. In practice, no suitable solvent exists for silicon, and the crystals must be grown from the molten element at temperatures in excess of 1400°C. The resulting crystals are at least a meter in length and ten centimeters in diameter, and they must have a nearly perfect crystal structure if they are to be useful to the semiconductor industry. These requirements make the process technically challenging.

The usual method for growing semiconductor-grade silicon crystals is called the Czochralski process. This process, illustrated in Figure 2.1, uses a silica crucible charged with pieces of semiconductor-grade polycrystalline silicon. An electric furnace raises the temperature of the crucible until all of the silicon melts. The temperature is then reduced slightly, and a small seed crystal is lowered into the crucible. Controlled cooling of the melt causes layers of silicon atoms to deposit upon the seed crystal. The rod holding the seed slowly rises so that only the lower portion of the growing crystal remains in contact with the molten silicon. In this manner, a large silicon crystal can be pulled centimeter-by-centimeter from the melt. The shaft holding the crystal rotates slowly to ensure uniform growth. The high surface tension of molten silicon distorts the crystal into a cylindrical rod rather than the expected faceted prism.

Fig.2.1. Czochralski process for growing silicon crystals.

 

The Czochralski process requires careful control to provide crystals of the desired purity and dimensions. Automated systems regulate the temperature of the melt and the rate of crystal growth. A small amount of doped polysilicon added to the melt sets the doping concentration in the crystal. In addition to the deliberately introduced impurities, oxygen from the silica crucible and carbon from the heating elements dissolve in the molten silicon and become incorporated into the growing crystal. These impurities subtly influence the electrical properties of the resulting silicon. Once the crystal has reached its final dimensions, it is lifted from the melt and is allowed to slowly cool to room temperature. The resulting cylinder of monocrystalline silicon is called an ingot.

Since integrated circuits are formed upon the surface of a silicon crystal and penetrate this surface to no great depth, the ingot is customarily sliced into numerous thin circular sections called wafers. Each wafer yields hundreds or even thousands of integrated circuits. The larger the wafer, the more integrated circuits it holds and the greater the resulting economies of scale. Most modern processes employ either 150mm (6") or 200mm (8") wafers. A typical ingot measures between one and two meters in length and can provide hundreds of wafers.
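
The economy-of-scale point can be illustrated with a common back-of-envelope approximation for the number of gross dies on a round wafer; the die size below is an arbitrary example, and edge exclusion and yield are ignored.

```python
# Rough illustration of the economy-of-scale point above, using a common
# approximation for gross dies per wafer (not an exact layout calculation).
import math

def gross_dies_per_wafer(wafer_diameter_mm, die_area_mm2):
    """Approximate usable dies on a round wafer (ignores edge exclusion and yield)."""
    d, s = wafer_diameter_mm, die_area_mm2
    return int(math.pi * (d / 2) ** 2 / s - math.pi * d / math.sqrt(2 * s))

die_area = 100.0  # a hypothetical 10 mm x 10 mm chip
for diameter in (150, 200, 300):
    print(f"{diameter} mm wafer: ~{gross_dies_per_wafer(diameter, die_area)} dies")
```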

 

Wafer Manufacturing

The manufacture of wafers consists of a series of mechanical processes. The two tapered ends of the ingot are sliced off and discarded. The remainder is then ground into a cylinder, the diameter of which determines the size of the resulting wafers. No visible indication of crystal orientation remains after grinding. The crystal orientation is experimentally determined and a flat stripe is ground along one side of the ingot. Each wafer cut from it will retain a facet, or flat, which unambiguously identifies its crystal orientation.

After grinding the flat, the manufacturer cuts the ingot into individual wafers using a diamond-tipped saw. In the process, about one-third of the precious silicon crystal is reduced to worthless dust. The surfaces of the resulting wafers bear scratches and pockmarks caused by the sawing process. Since the tiny dimensions of integrated circuits require extremely smooth surfaces, one side of each wafer must be polished. This process begins with mechanical abrasives and finishes with chemical milling. The resulting mirror-bright surface displays the dark gray color and characteristic near-metallic luster of silicon.


A Million Points of Light

 

The electronics industry has traditionally relied exclusively on electrons transmitted through metal wires to process signals on-chip, and for most off-chip applications as well. The primary exception has been in the communications industry, where lightwaves (i.e. photons) are often used to transmit data through fiber optic cables.

All that is rapidly changing. Not only are photons being used in inventive new devices - so much so that the old term "optoelectronics" no longer seems adequate - but they may soon be used to communicate signals on the chip itself, through tiny optical waveguides.

Optoelectronics, of course, has been around for decades, consisting primarily of relatively simple III-V-based devices, such as photodetectors and solid-state lasers.

Although some integrated optoelectronic devices have been produced - where digital signal processing and optics coexist on the same chip - these have not been overly successful because it's difficult to marry III-Vs and silicon, and III-V digital processors are comparatively expensive to produce.

What's new are devices that manipulate the photons while they're still in a light format. These include microelectromechanical systems (MEMS) that switch light with mirrors, and planar lightwave circuits that multiplex/demultiplex light signals through optics.

According to market researcher Cahners In-Stat, leading applications for MEMS have traditionally included pressure sensors, accelerometers, inkjet printer nozzles, and read/write heads for hard disk drives. But what's leading the charge now? MEMS-based photonic switching! In 1999, there was no such thing, but by 2006 MEMS switches are expected to become the first MEMS device to surpass the $1B mark.

Presenting even more potential are planar lightwave circuits, which are based on optical waveguides created using manufacturing processes similar to those used to produce semiconductor devices (i.e. fine-line lithography, etching, doping, thin-film deposition, etc.).

The best example of this is DWDM (dense wavelength division multiplexing) devices, but planar technology can also be used to create variable optical attenuators, optical switches, and possibly complete optical add/drop multiplexers - all on one chip. What's especially interesting is that these circuits can be active in the sense that signals can be applied through electrodes to change their optical properties - a far cry from the optoelectronics of yesterday.

Photons are also poised to make their way onto the chip. At first, it's likely that they will be used primarily for clocking on high-speed microprocessors. Here, the light will be generated off the chip and then "piped" on and distributed throughout the chip with optical waveguides made of silicon and silicon dioxide.

Beyond that, it's possible that photons will be used for all on-chip communication - what some in the industry have referred to as "a million points of light." If that sounds implausible, given silicon's historical inability to generate light, consider some new research out of the University of Surrey, in which researchers demonstrated luminescence from silicon at room temperatures with standard manufacturing techniques (see Semiconductor International, May 2001, p. 36).

Even further down the road, it's possible that optical computing - long touted but as yet unrealized - might well become a reality.

So what does all this mean? For one, you can expect to see more coverage of these kinds of topics within the pages of Semiconductor International. Expect a bunch of new equipment introductions aimed at this new and rapidly growing market. And, most important of all, expect your colleagues to be boning up on optics technology. If you want to join them, keep on reading SI, of course. We've seen the light!

(Semiconductor International)


Managing Complexity

During the past 10 years, the industry has been moving toward distributed computing. At first, mission-critical corporate applications stayed primarily on mainframes while fast-turnaround applications made their mark on departmental systems. Lately however, mission-critical, bet-your-business applications are being written and rewritten as distributed applications. While these business applications may continue to connect to mainframe data, they are now designed to work across an organization. But companies are now discovering that these critical applications are more difficult to manage in their distributed form than in their centralized form. Because the transition to distributed computing has not always been well-planned, organizations may begin to regress to centralized computing.

Why are distributed applications presenting so many challenges? Part of the problem is that organizations typically focus on how to design and develop applications. They do not plan for application deployment and management, which are difficult parts of the process but vital to the success of distributed computing. Successful deployment and management of distributed applications depends on a solid strategy that includes the following components: software distribution, performance management, troubleshooting, and code management.

Software Distribution. If an application is to be distributed, an organization must understand how code will physically get to each user or group that needs it. Is there staff that can manage this process? Are systems in place to manage the load? Does software exist that will allow applications to be distributed from a central site? Will the new code conflict with code that is already installed? These issues may seem insignificant when developers are hard at work writing code, but they can kill a project if you don't address them early.

Performance Management. Most complex systems begin as pilot or proof-of-concept projects. While such initial projects may demonstrate or highlight the capabilities of developers to write code, they often do not deal with scalability issues. Distributed systems applications must be architected so that they will perform well as the code base and the number of users grow.

Troubleshooting. I do not mean debugging here. Complex problems can occur after debugging has been completed and an application is deployed. More and more organizations are creating applications that are intended to work in tandem with other applications. This involves using a messaging or remote procedure call mechanism to move information among applications - not a simple task. And, as the importance of the Internet in creating interoperability among applications grows, this problem will become more complex. How can you track what is happening when one part of an application interacts with another? My guess is that most development organizations have no plan in place to deal with the inevitable problems that can and probably will arise in such an environment.

Code Management. As development organizations begin to use component libraries to extend the life of systems and expand functionality, problems will surely arise. Developers will have to be able to trace dependencies among libraries. They will also have to understand the performance implications of these changes, which is especially important if code is distributed across several different systems. Issues such as the synchronization of code across multiple systems will also be a challenge.
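
As a rough sketch of the dependency tracing mentioned under Code Management (the library names and graph are invented for illustration), a simple traversal can list everything that must be re-examined when one shared library changes.

```python
# Invented example: given a map of which component library uses which,
# find everything that must be re-tested when one library changes.

def affected_by(changed, depends_on):
    """Return all libraries that directly or indirectly depend on `changed`."""
    affected = set()
    frontier = [changed]
    while frontier:
        current = frontier.pop()
        for lib, deps in depends_on.items():
            if current in deps and lib not in affected:
                affected.add(lib)
                frontier.append(lib)
    return affected

depends_on = {
    "billing_app":   {"ui_widgets", "rpc_layer"},
    "ui_widgets":    {"core_utils"},
    "rpc_layer":     {"core_utils"},
    "reporting_app": {"rpc_layer"},
}

print(affected_by("core_utils", depends_on))
# e.g. {'ui_widgets', 'rpc_layer', 'billing_app', 'reporting_app'} (set order may vary)
```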

Concepts of ESD Control

 

With product performance, reliability and quality at stake, controlling electrostatic discharge (ESD) in the electronics environment can seem a formidable challenge.

However, in designing and implementing ESD control programs, the task becomes somewhat simpler and more focused if it is approached with just four basic concepts of control. When approaching this task, we also need to keep in mind the ESD corollary to Murphy's Law, "No matter what we do, static charge will try to find a way to discharge."

The first control concept is to design products and assemblies to be as immune as is reasonable to the effects of electrostatic discharge. This involves such steps as using less-static-sensitive devices or providing appropriate input protection on devices, boards, assemblies and equipment. For engineers and designers, the paradox is that advancing product technology requires smaller and more complex geometries that often are more susceptible to ESD.

Knowing that product design isn't the whole answer, the second concept of control is to eliminate or reduce the generation and accumulation of electrostatic charge in the first place. It's fairly basic: If there is no charge, there will be no discharge.

We begin by removing as many static-generating processes and materials as possible from the work environment. Some examples are friction and common plastics. Keeping other processes and materials at the same electrostatic potential is also an important factor. Electrostatic discharge does not occur between materials that are kept at the same potential or are kept at zero potential. In addition, by providing a ground path, charge generation and accumulation can be reduced. Wrist straps, flooring and work surfaces are all examples of effective methods that can be used to get rid of electrostatic charges by grounding.

We simply can't eliminate all generation of static in the environment. Thus, the third concept of control is to safely dissipate or neutralize those electrostatic charges that do exist. Again, proper grounding plays a major role.

For example, workers who "carry" a charge into the work environment can rid themselves of that charge when they attach a wrist strap or when they step on an ESD floor mat while wearing ESD control footwear. The charge goes to ground rather than being discharged into a sensitive part. To prevent damaging a charged device, the rate of discharge can be controlled with static-dissipative materials.

However, for some objects, such as common plastics and other insulators, grounding does not remove an electrostatic charge because there are no conduction pathways. To neutralize charges on these types of materials, ionization, either for localized sections or across the whole area, may prove to be the answer. The ionization process generates negative or positive ions that are attracted to the surface of a charged object, thereby effectively neutralizing the electrostatic charge.

The final ESD control concept is to prevent discharges that do occur from reaching susceptible parts and assemblies. One way to do this is to provide parts and assemblies with proper grounding or shunting that will "carry" any discharge away from the product. A second method is to package and transport susceptible devices in appropriate packaging and materials-handling products. These materials effectively shield the product from electrostatic charge, as well as reduce the generation of charge caused by any movement of product within the container.

While these four concepts may seem rather basic, they can aid in the selection of appropriate materials and procedures to use in effectively controlling electrostatic discharge. In most circumstances, effective programs will involve all four of these concepts. No single procedure or product will be able to do the whole job. In developing electrostatic discharge control programs, we need to identify the devices that are susceptible and to what level. Then, we must determine which of the four concepts will protect these devices. Finally, we can select the combination of procedures and materials that will fulfill the concepts of control.

In future columns, these concepts will be discussed in greater detail, focusing on various materials and procedures that meet protection goals. Other related ESD topics, such as auditing, training and failure mechanisms, will also be covered.

(Circuits Assemblies)


Needed: An admiral of the "Nano Sea"?

To accelerate the exploitation of structures with dimensions <100nm, the governments of the US, Japan, and the European Union have established Nanotechnology Initiatives. The claim is that matter has radically different properties when its characteristic dimensions are between 1 and 100nm and that this new realm offers all sorts of opportunities. Taxpayer funding, of course, will be needed for precompetitive R&D to realize this potential, and nanotechnology promoters have been busily courting government bureaucrats.

At a nanotechnology conference last summer, ethicist George Khushf of the U. of South Carolina pointed something out: If the nano realm really is so different, we cannot rationally evaluate either the opportunities or the dangers of exploring it. So how can we decide whether a society should support such an effort or not? If nanotechnology were just an extension of some known trend, then we could extrapolate the known costs, benefits, and risks. But the funding needed for such incremental progress should come from those who have reasonable expectation of benefit, and dramatic new government initiatives wouldn't be needed. On the other hand, if nanotechnology is radically different, then the risks are unknowable and the precautionary principle militates against any exploration at all.

According to Prof. Khushf, all discussions of exploring the radically new devolve into this paradox: Claiming that something really is different implies that its dangers are unknown and necessarily scares someone into bitter opposition. Claiming that the dangers are acceptable implies that there is nothing really new to find. A comfortable, risk-averse and consensus-seeking culture just cannot rationally decide to explore!

Listening to Prof. Khushf, I thought of the prototypical government-funded exploration program, one that has produced immense benefits for some, but which remains controversial 500 years later: the funding of Christopher Columbus by the new government of Spain, supposedly to find an alternate route to the Orient. How could that have been rationally justified in 1492?

Of course it wasn't. The entire project was a gamble based on misrepresentation, both by Columbus and by his sponsors. Most Americans know the legend of Columbus, but have not thought through its context and inconsistencies. Much of what actually went on is lost in the mist of time. However, it is clear that Columbus thought the world was round, and was navigator enough to estimate its circumference (in sailing time) using a sextant and the method demonstrated by Eratosthenes of Alexandria in the 3rd century B.C. (For a sphere, the circumference pole-to-pole is the same as around the equator.)

Marco Polo and the Arab traders had a fair idea how far east they had traveled to get to Asia. Subtracting that from Eratosthenes' estimate left leagues and leagues more ocean than could be traversed by Columbus's poorly provisioned little fleet. In order to get funded, he had dramatically underestimated the distance west to Asia! Why? Because he had confidence there was something valuable out there, even if he didn't know quite what! Columbus had very carefully gotten himself named viceroy of any lands discovered as well as admiral of the Ocean Sea. He hoped to return with wealth and honor, no matter what, but he had to claim he was sailing into the known, not the unknown!

The sponsors had their own reasons to invest in the enterprise. For one thing, they wanted the government to control it, rather than let restive New Christian bankers claim lucrative colonies and trade routes. The Portuguese had done better than Spain in the exploration business; funding Columbus to go west seemed an imaginative way to propitiate Spain's exploration lobby. Then there was the potential for converts to Christianity….

What happened? Well, there were islands out there all right, with Indians, but they knew nothing of India or China or Japan! Columbus found new foods and a smokable herb, but no gold or nutmeg. The flagship sank. At least forty sailors were left behind, to be found dead on the second expedition. The returning sailors probably brought syphilis to Europe. Columbus lived in denial and controversy for the rest of his life. Still, Spain found itself in possession of a great empire. Eventually, gold was found, as well as chocolate, chili peppers, and a really long route to the Spice Islands.

Would the world have been better off if Columbus had been denied funding? It depends; the Aztec and Inca elites certainly would have been happier. Could the outcome have been improved by rational ethical discussion beforehand? No way!


Atomic Force Microscopy

1. INTRODUCTION

The new technology of scanning probe microscopy has created a revolution in microscopy, with applications ranging from condensed matter physics to biology. This issue of ScienceWeek presents only a glimpse of the many and varied applications of atomic force microscopy in the sciences.

The first scanning probe microscope, the scanning tunneling microscope, was invented by G. Binnig and H. Rohrer in the 1980s (they received the Nobel Prize in Physics in 1986), and the invention has been the catalyst of a technological revolution. Scanning probe microscopes have no lenses. Instead, a "probe" tip is brought very close to the specimen surface, and the interaction of the tip with the region of the specimen immediately below it is measured. The type of interaction measured essentially defines the type of scanning probe microscopy. When the interaction measured is the force between atoms at the end of the tip and atoms in the specimen, the technique is called "atomic force microscopy". When the quantum mechanical tunneling current is measured, the technique is called "scanning tunneling microscopy". These two techniques, atomic force microscopy (AFM) and scanning tunneling microscopy (STM), have been the parents of a variety of scanning probe microscopy techniques investigating a number of physical properties.

[Note: In general, "quantum mechanical tunneling" is a quantum mechanical phenomenon involving an effective penetration of an energy barrier by a particle, resulting from the width of the barrier being less than the wavelength of the particle. If the particle is charged, the effective particle translocation determines an electric current. In this context, "wavelength" refers to the de Broglie wavelength of the particle, which is given by L = h/mv, with (L) the wavelength of the moving particle, (h) the Planck constant, (m) the mass of the particle, and (v) the velocity of the particle.]

ATOMIC FORCE MICROSCOPY IN BIOLOGY

C. Wright-Smith and C.M. Smith (San Diego State College, US) present a review of the use of atomic force microscopy in biology, the authors making the following points:

Since its introduction in the 1980s, atomic force microscopy (AFM) has gained acceptance in biological research, where it has been used to study a broad range of biological questions, including protein and DNA structure, protein folding and unfolding, protein-protein and protein-DNA interactions, enzyme catalysis, and protein crystal growth. Atomic force microscopy has been used to literally dissect specific segments of DNA for the generation of genetic probes, and to monitor the development of new gene therapy delivery particles.

Atomic force microscopy is just one of a number of novel microscopy techniques collectively known as "scanning probe microscopy" (SPM). In principle, all SPM technologies are based on the interaction between a submicroscopic probe and the surface of some material. What differentiates SPM technologies is the nature of the interaction and the means by which the interaction is monitored.

Atomic force microscopy produces a topographic map of the sample as the probe moves over the sample surface. Unlike most other SPM technologies, atomic force microscopy is not dependent on the electrical conductivity of the sample being scanned, and AFM can therefore be used in ambient air or in a liquid environment, a critical feature for biological research.

The basic atomic force microscope is composed of a stylus-cantilever probe attached to the probe stage, a laser focused on the cantilever, a photodiode sensor (recording light reflected from the cantilever), a digital translator recorder, and a data processor and monitor. Atomic force microscopy is unlike other SPM technologies in that the probe makes physical (albeit gentle) contact with the sample. The cornerstone of this technology is the probe, which is composed of a surface-contacting stylus attached to an elastic cantilever mounted on a probe stage. As the probe is dragged across the sample, the stylus moves up and down in response to surface features. This vertical movement is reflected in the bending of the cantilever, and the movement is measured as changes in the light intensity from a laser beam bouncing off the cantilever and recorded by a photodiode sensor. The data from the photodiode is translated into digital form, processed by specialized software on a computer, and then visualized as a three-dimensional topographic image.

ON SCANNING PROBE MICROSCOPY

The invention and development of scanning probe microscopy has taken the ability to image matter to the atomic scale and opened fresh perspectives on everything from semiconductors to biomolecules, and new methods are being devised to modify and measure the microscopic landscape in order to explore its physical, chemical, and biological features.

In scanning tunneling microscopy, electrons quantum mechanically "tunnel" between the tip and the surface of the sample. This tunneling process is sensitive to any overlap between the electronic wave functions of the tip and sample, and depends exponentially on their separation. The scanning tunneling microscope makes use of this extreme sensitivity to distance. In practice, the tip is scanned across the surface, while a feedback circuit continuously adjusts the height of the tip above the sample to maintain a constant tunneling current. The recorded trajectory of the tip creates an image that maps the electronic wave functions at the surface, revealing the atomic landscape in fine detail.

The most widely used scanning probe microscopy technique, one which can operate in air and liquids, is atomic force microscopy. In this technique, a tip is mounted at the end of a soft cantilever that bends when the sample exerts a force on the tip. By optically monitoring the cantilever motion it is possible to detect extremely small chemical, electrostatic, or magnetic forces which are only a fraction of those required to break a single chemical bond or to change the direction of magnetization of a small magnetic grain. Applications of atomic force microscopy have included in vitro imaging of biological processes.

In general, the various techniques of scanning probe microscopy have now been applied to high-resolution spectroscopy, the probing of nanostructures, measurements of forces in chemistry and biology, the production of deliberate movements of small numbers of atoms, and the use of precision lithography as a tool for making nanometric-sized electronic devices. The authors conclude: "The scanning probe microscope has evolved from a passive imaging tool into a sophisticated probe of the nanometer scale. These advances point to exciting opportunities in many areas of physics and biology, where scanning probe microscopes can complement macroscopically averaged measurement techniques and enable more direct investigations. More importantly, these tools should inspire new approaches to experiments in which controlled measurements of individual molecules, molecular assemblies, and nanostructures are possible."

(The Scientist)
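
As a worked illustration of the de Broglie relation L = h/mv quoted in the introductory note above, the short calculation below evaluates it for an electron at an arbitrarily chosen, non-relativistic speed.

```python
# Worked example of the de Broglie relation L = h/mv from the note above,
# for an electron at an illustrative (non-relativistic) speed.
h = 6.626e-34           # Planck constant, J*s
m_electron = 9.109e-31  # electron mass, kg
v = 1.0e6               # chosen illustrative speed, m/s

wavelength = h / (m_electron * v)
print(f"de Broglie wavelength: {wavelength:.2e} m (~{wavelength * 1e9:.2f} nm)")
```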

SiGe Milestones

The history of germanium is as long as the history of integrated circuits. In 1948, Bill Shockley, one of the inventors of the transistor, suggested Ge heterojunctions in his original patent.

Just a few years later Kroemer tried to build a SiGe HBT. Shortly thereafter, IBM began its work in SiGe. Its first breakthrough came in 1987. The upside of the discovery was that the cost structure and economies of scale are similar to those of silicon-wafer processing; the downside was an increase in manufacturing complexity, and it took almost another ten years to overcome that hurdle.

In 1996, IBM qualified the world's first manufacturable SiGe HBT process. Since then its goal has been to show that silicon with germanium can achieve the high-frequency performance of any III-V chip and more.

IBM promotes its processes, tooling, and SiGe know-how, including the Unaxis UHV-CVD epitaxy system. "It is the only UHV-CVD non-selective technique which enables the growth of device-quality layers at low temperature to allow almost arbitrary composition," says Harame.

The first commercial applications of SiGe were power amplifiers and RF analog applications in the nineties. Today it is showing up in optical networking chips, measurement tools, wireless, high-speed local area networks, and global positioning chips.

SiGe is compatible with CMOS processes and intellectual property, enabling more leverage of existing know-how. This compatibility with CMOS enables SiGe to really break the cost barriers.

SiGe BiCMOS sales totaled $320 million in 2001. Of that total, 80% was made by IBM and used Unaxis tooling. In addition, IBM holds 80% of the $600 million-a-year global market for early SiGe transistors.

The SiGe market is projected to grow to $2.7 billion by 2006, according to the "2002 McClean Report", published by research firm IC Insights.

Performance

The fastest circuit in any technology is a SiGe HBT Ring Oscillator. It can even beat InP with 55% less power and 15% lower swing.

There is no further reason to use III-V now that SiGe has matured. It is no longer true to say that III-V has the fastest circuits - SiGe is easily as fast as III-V. A SiGe HBT also has the fastest dynamic frequency divider in any technology. SiGe frequency output continues to improve. Alternatively, frequency can be traded off for power gains.

The potential for integration in SiGe is tremendous. Harame shows a reference design belonging to an anonymous IBM customer who is using the IBM 0.18 micron SiGe BiCMOS process for a wireless chip that includes 6,000 HBTs, 7 million CMOS transistors, a noise isolation technology and a host of capacitors. "It is our most complicated chip with an RF analog system-on-chip that operates at 2 GHz", asserts Harame.

IBM has selected the Unaxis SiGe epitaxy because it is a mature technology which has been in manufacturing since 1996. Its UHV-CVD non-selective technique grows device-quality layers. Defect levels are very low, with few failures. "In fact, it doesn't get any better!" exclaims Harame.

Now IBM is making a "smart investment" in bandgap engineering as a result of using Unaxis process tools. IBM has selected the Unaxis SIRIUS® as the "tool of choice" because the equipment has been tested and is fully functional.

CMOS roadmap in trouble

There are a couple of ways to achieve the continuously improving processing power and cost advantages that Moore's Law demands. Scaling is one of them. Scaling means continuously shrinking the size of integrated circuits, increasing the number of transistors in a sliver of silicon while increasing the size of the wafers they are processed on. This has worked well in the past, but scaling is leading to an almost unmanageable level of complexity.

Not only is complexity increasing, but the industry is reaching its physical limits as oxide thicknesses shrink to only a few atomic layers.

The leveling off of scaling's power also has to do with costs. The capital expenditures required to achieve these increments are becoming prohibitive. A new fab carries a price tag of $2 billion to $3 billion these days.

An alternative to scaling to achieve performance gains and fulfill the demands of Moore's Law is to use advanced materials processes, such as compound semiconductors which boost performance because of their physical properties (e.g. SiGe's molecular structure enables electrons to pass across the circuits faster, plus the signals gain energy better than Si alone).

Implicit in Moore's Law is that it delivers performance benefits and that innovations do not cost more than the market is willing to bear. An economy of scale must be in place. For the past 30 years, transistor scaling has enabled the leap every eighteen months without significant changes in the CMOS manufacturing process, as demanded by the industry roadmap.
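
As a back-of-envelope illustration of what a doubling every eighteen months implies (the time span is chosen to match the thirty years mentioned above), consider:

```python
# Back-of-envelope arithmetic for the "leap every eighteen months" above:
# how many doublings fit into 30 years, and the growth factor that implies.
years = 30
doubling_period_years = 1.5
doublings = years / doubling_period_years
growth_factor = 2 ** doublings
print(f"{doublings:.0f} doublings in {years} years -> roughly {growth_factor:,.0f}x more transistors")
```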

However, the economics of CMOS scaling are no longer valid. Gross margins are narrowing at the same time as the capital expenditures required to build next-generation fabs are climbing dramatically.

Strained silicon provides a path. "I suggest that strained silicon CMOS is the answer to the limits described," says Harame. It has been shown that pulling silicon crystals apart or straining the silicon enables electrons to move throughout the circuits much faster. It improves electron flow by 70% and chip performance by 35%.

A few issues remain to be overcome for the use of strained silicon - once wafer costs are addressed, the next consideration for this technology is the ease of fabrication of strained Si devices and circuits.

Such chips are manufacturable today; however, the straining process does introduce defects. There is still much to be understood about relaxation mechanisms and about controlling dislocation densities and surface morphology.

(Chip)


With SiGe:C and Poly-SiGe

 

This article describes the successful use of poly-SiGe as the extrinsic base layer in a self-aligned process with non-selective epitaxy of SiGe:C for the intrinsic base.

 

The performance of silicon-based high-speed bipolar transistors has greatly improved over the last few years. Recently, a transistor with a record cut-off frequency of 210 GHz was presented by IBM [1]. The basis of this technology is an epitaxially grown SiGe base, making it possible to engineer the band gap and achieve a narrower base than ever before. The inevitable boron out-diffusion from the base layer can be minimized by the addition of carbon.

Because of its relative simplicity, non-selective epitaxy is commonly used for this type of device. One drawback is the subsequent non-self-aligned patterning of the layers necessary to build up the emitter and the connections to the base [1, 2]. So far, only a rather complicated process flow has been demonstrated, which includes conversion of poly-Si to oxide, for the manufacture of self-aligned transistors from a non-selectively grown epitaxial base [3].

Another approach starts with a selectively grown base layer in the emitter window [4]. However, selective epitaxy is known to suffer from severe loading effects. This means the epitaxial parameters will need to be tuned for each layout with a different device density. Moreover, the selective process is very difficult to control, which easily leads to voids and poor base contacts.

A common problem in all self-aligned double-poly processes is related to the subsequent removal of the silicon used for the extrinsic base inside the emitter opening without etching down into the underlying monocrystalline silicon. This becomes more severe for a process involving an epitaxial base, since the base layer is formed prior to the emitter window etch, as opposed to a process where the base is formed by ion implantation through the etched emitter opening.

Many solutions have been suggested in the literature. In the case of SiGe-based non-selective epitaxy, the etching problem has recently been addressed [5]. Here, a boron silicate glass (BSG) layer was used both as an etch stop and as a diffusion source for the electrical link-up between the external and internal parts of the base.

This article is based on a recently presented conference paper [6]. All SiGe-depositions have been performed in a Unaxis SIRIUS® UHV-CVD system. The modular concept used for the extrinsic base can also be applied to a more conventional double-poly bipolar process flow which uses an implanted base.

Device manufacture

The fabrication of the device follows an earlier process scheme up to the formation of the collector contact [7]. A nitride and silicon seed layer are then deposited and patterned prior to a non-selective SiGe:C epitaxy of the intrinsic base. This is followed by the deposition of a bi-layer of poly-SiGe and poly-Si for the extrinsic base layer. Before the deposition of an oxide, the extrinsic base region is implanted with a high dose of boron. The implanted boron will later be out-diffused, thereby forming the extrinsic base connection. The stack is then patterned and etched to form the emitter window. Subsequent processing follows a conventional double-poly bipolar process flow.

(Chip)


Seeking a Comprehensive Automated Wafer Inspection Solution for 300mm

The growth rate in the number of 300mm fabs is expected to continue at a feverish pace in the coming years as more and more manufacturers realize the significant cost savings over 200mm wafer manufacturing. With so many 300mm fabs sprouting up around the world, new problems have arisen. Very few equipment suppliers made the decision to "bite the bullet" at the beginning of the 300mm race by investing in the development necessary to enter the market. They are only now starting to see a return on their money. In fact the 300mm fab race today would be even more intense if it were not for the high startup costs and the relative scarcity of good, solid 300mm process, test and inspection equipment.

This challenge leads to another: 300mm fab automation requires a lot of floor space. When floor space costs are so astronomical for a 300mm fab, the use of multiple tools to accomplish related tasks is a luxury most fabs cannot afford. This is especially true of wafer inspection equipment.

The growing trend toward flipchip packaging requires new inspection technology for the 300mm fab. For this process a complete inspection of the bump is required. This has required one tool to measure the bump height and one tool to measure the bump diameter, position and damage. When a manufacturer is producing devices that will be assembled in both the traditional wire bond package and the flipchip package, additional inspection systems would be required. This is a waste of precious budget dollars for floor space, facilitization, engineering support and spare parts stock. In short, it presents a larger Cost of Ownership.

The higher costs of 300mm tools make maximized throughput an imperative. In the past, a test engineer was resigned to the fact that it was necessary to gather different but related inspection data on different tools, often from different vendors. This is not acceptable for 300mm wafer manufacturing. Throughput should not be sacrificed to run wafers through additional process steps.

It is widely recognized that operator-based manual inspection is greatly inefficient for 300mm wafer inspection. No 300mm fab can afford to lose hours per lot inspecting on even a sampled basis. If a lot should require a 100% inspection of every wafer, the loss of time increases from hours to days. There is also a loss of yield from inaccurate and non-repeatable inspections that result from human variance from operator to operator, shift to shift. Clearly a reliable comprehensive automated alternative is needed.

There is now available a single system that can meet all of these needs. The WAV 1000 from Semiconductor Technologies and Instruments, Inc. (STI) successfully combines 100% 2D & 3D bump, probe mark, active die and ink dot inspection, at high inspection rates, into a single tool. Unlike other systems available today, it accomplishes these inspections on a wafer lot in a single pass. STI's patented Genius™ Self Teach software makes recipe setup a simple process. The end result is an increase in good, solid inspection data, an increase in wafer throughput, and a decrease in costly engineering setup time. With the WAV 1000, STI provides a comprehensive automated inspection solution to the 300mm manufacturing community. STI has the largest installed base for 300mm post-fab inspection.

(Semiconductor Technologies & Instruments)






