From Data to Decisions

Yield Management Solutions
Volume 8, Issue 1, Spring 2006, $5.00 US
Yield Acceleration Strategies for the Semiconductor Industry
SPECIAL FOCUS:
Data to Decisions: Moving up the Knowledge
Hierarchy for Enhanced Metrology Decision-making
COVER STORY
15 From Data to Decisions
6 Optimizing FinFET Structures with Design-based Metrology
20 When to Raise the Red Flag
25 When to Raise the Red Flag (Japanese-language version)
49 Automating Investigation of Line Width Roughness
CONTENTS
6 Optimizing FinFET Structures with
Design-based Metrology
60 In-chip Overlay Metrology in 90-nm
Production
20
When to Raise the Red Flag
64 Reliable, Repeatable Wafer and Tool
Dispositioning in 300 mm Fabs
For model-based biasing of a FinFET structure, design-based metrology (DBM) can be your methodology.
Disposition progressive mask defects before they
impact the process window.
25 When to Raise the Red Flag (Japanese-language version)
Disposition progressive mask defects before they impact the process window.
In-die target insertion is an insurance policy against
process issues, enabling in-die troubleshooting and
potentially improving lot dispositioning.
Scrap or rework? An automated wafer and tool dispositioning system can deliver the fastest, most accurate go/no-go decisions.
33 Bridging the Gap Between
Design and Mask
Reducing design respins through full-chip process
window verification.
42 Opening the Window to Higher
Parametric Yield at 32 nm
Strategies using design for manufacturability (DFM)
and multi-variate advanced process control (APC)
can be a powerful weapon against process window limitations.
49 Automating Investigation of
Line Width Roughness
Full spectral analysis trumps the standard deviation in measuring and monitoring line width roughness.

Cover image by Carlos Hueso and Inga Talmantiene, KLA-Tencor, and www.picturequest.com
56 Focusing on the Drifts
Compared with top-down CD SEM tools, spectroscopic ellipsometry provides a more effective way to generate insightful data for litho and etch process control.

Cover Story
15 From Data to Decisions
It's all about the data. The right data leads you to decisions that drive process improvement in your fab.
Product News
76 SpectraCD-XT
Cost-effective Optical CD Metrology
76 TeraScan STARlight-2
Cost-effective Reticle Contamination Inspection
76 Viper 2435
Automated Wafer and Tool Dispositioning System
77 2367
UV Line Monitor for Rapid, Low Cost of Ownership Yield Ramp
77 eS32
e-Beam Inspection for Faster Decision-making
77 DesignScan
Lithography-aware Design Inspection
78 Candela CS20
High Brightness LED Production Monitor
78 P-16 and P-16OF
Contact Stylus Profilers
79 MRW3 Quasi-static Tester
Measurement System for MRAM and HDD Industries
79 KT Analyzer
Parametric Analysis Solution

WEB Exclusives (www.kla-tencor.com/magazine)
Predicting Line Edge Roughness through a Mechanistic Model
Because the molecular nature of photoresist materials can give rise to yield-limiting issues, considering this nature can help in the prediction of certain excursions, such as line edge roughness.

Leveraging Scatterometry to Enhance STI Etch Process Development
Detecting across-chip variations is crucial to rapid STI etch process development. Scatterometry enables a robust, uniform, high-yielding STI process through multiple profile parameter measurements.

Yield Management Solutions is published by KLA-Tencor Corporation.
To receive Yield Management Solutions, subscribe online at: www.kla-tencor.com/magazine
For literature requests, visit: www.kla-tencor.com/company/inquiry
For information, visit: www.kla-tencor.com
MG-YMSSPR-02/06

Sections
4 Editorial: And the Show Goes On
30 KLA-Tencor Rings the NASDAQ Closing Bell
46 Award: HongSun So, Samsung Best Engineer Honoree

©2006 KLA-Tencor Corporation. All rights reserved. Material may not be reproduced without permission from KLA-Tencor Corporation. Products in this document are identified by trademarks of their respective companies or organizations.
Editorial
And the Show Goes On
Well, the show was an astounding success. The
curtains rose on time, the players were brilliant,
and each scene had us holding our breath. No,
of course I’m not referring to the latest Broadway
sellout, but to the recent Consumer Electronics
Show. Dual-core processors are here to stay—
turning us all into road warriors with on-the-go
access to entertainment. It seems that every gizmo
and gadget is all about power, performance, and
price. Creative Zen Vision: M, touted as having
the goods to give the iPod a run for its money,
walked away with CNET’s Best in Show award.
This 30GB, 2.5-inch screen media powerhouse—
which also supports album art, simultaneous photo
viewing, and music playback, with at least four
hours of battery life—costs a paltry $330. Power,
performance, price. All in one little package.
And so my thoughts turn to SPIE Microlithography
2006. If CES is pure theatre, the SPIE community
sits in the director’s chair. Showcasing both innovation and strategy, SPIE drives important decisions
for our industry that are ultimately behind many
of the products showcased at CES. Decisions that
address questions like, which lithography process
should I plan for on my roadmap: 193-nm immersion, EUV, e-beam, optical maskless, imprint or
something entirely different? What XRET is best
for a particular design? Is the design manufacturable? How forgiving is the process window?
What’s the best strategy to control line edge
roughness? What data is required to ascertain if
the device will achieve performance specs?
Making knowledgeable decisions squarely rests on
the quality of data and information one has. With
this in mind, the focus of this issue is on enabling
the most effective decision-making for patterning
process control. “From Data to Decisions,” our
cover story by litho guru Chris Mack, describes
a conceptual framework for using metrology to
systematically improve the progression from data
to decision.
Progressive mask defects are an industry-wide
reliability problem, particularly when the defects
approach the critical state where the mask needs to
be pulled out of production and sent for cleaning
or repair. ProMOS Technologies’ “When to Raise
the Red Flag” puts forth a new methodology for
effective dispositioning of defective masks.
The concern over leakage current at 65 nm is driving
the adoption of FinFET structures. The decreasing
size of these structures makes it particularly important to obtain good 2D and 3D pattern fidelity in
lithography and etching. In “Optimizing FinFET
Structures with Design-based Metrology,” IMEC
examines the characterization of a detailed 2D layout and creation of a complete model of the lithographic process using design-based metrology.
Yield Management Solutions

Editor-in-Chief
Uma Subramaniam
Managing Editor
Christine Young
Contributing Editors
David Moreno, Lisa Garcia
Advanced fabs today require accurate and rapid disposition decision-making
during manufacturing, as well as a quick assessment of tool and process module
output. KLA-Tencor’s “Reliable, Repeatable Wafer and Tool Dispositioning
in 300 mm Fabs” argues the benefits of automated disposition versus manual
disposition as a way to accelerate yield learning and improve fab productivity.
The extreme reticle enhancement technologies used at 65-nm and beyond
increase the risk of design-related reticle defects. OPC verification before the
mask-making step can help ensure design-intended device integrity. UMC’s
“Bridging the Gap between Design and Mask” proposes a new methodology
for full-chip process window monitoring.
I encourage you to carefully examine each article in this issue of Yield
Management Solutions. Collectively, they address issues that are of grave
concern to the lithography community.
Production Editors
Siiri Hage, Vidya Kumaravel
Art Director and Production Manager
Harry Wichmann, Inga Talmantiene
Design Consultants
Carlos Hueso, Jovita Rinkunaite,
Inga Talmantiene
Circulation Editor
Nancy Williams
KLA-Tencor
Worldwide
Corporate Headquarters
KLA-Tencor Corporation
160 Rio Robles
San Jose, California 95134
408.875.3000
International Offices
KLA-Tencor France SARL
Evry Cedex, France
33 16 936 6969
See you at SPIE Microlithography 2006.
KLA-Tencor GmbH
Munich, Germany
49 89 8902 170
KLA-Tencor (Israel) Corporation
Migdal Ha’Emek, Israel
972 6 6449449
KLA-Tencor Japan Ltd.
Yokohama, Japan
81 45 335 8200
KLA-Tencor Korea Inc.
Seoul, Korea
822 41 50552
Uma Subramaniam
Editor-in-Chief
KLA-Tencor (Malaysia) Sdn. Bhd.
Johor Bahru, Malaysia
607 557 1946
KLA-Tencor (Singapore) Pte. Ltd.
Singapore
65 6367 6788
KLA-Tencor Taiwan Branch
Hsinchu, Taiwan
886 3 552 6128
KLA-Tencor Limited
Wokingham, United Kingdom
44 118 936 5700
Lithography
Metrology
Optimizing FinFET Structures
with Design-based Metrology
Tom Vandeweyer, Christie Delvaux, Johan De Backer, and Monique Ercken, IMEC
Gian Lorusso, Radhika Jandhyala, Amir Azordegan, Gordon Abbott, and Zeev Kaliblotzky, KLA-Tencor Corporation
Considering the engineering challenges in developing a reliable high-k gate stack that limits leakage current for planar
transistors, fin field effect transistor (FinFET) structures may actually be needed at the 65-nm node. The decreasing sizes
of FinFETs make it particularly important to obtain good 2D and 3D pattern fidelity in lithography and etching. This
article examines characterization of a detailed 2D layout and creation of a complete model of the lithographic process using
design-based metrology (DBM). This model can be used for model-based biasing of the FinFET structure.
Introduction
The characterization of fin field effect
transistor (FinFET) structures, or other
two-dimensional (2D) designs, becomes
important with the decreasing sizes in
future technologies. Robust measuring
methods are therefore needed to characterize
the changing shape of the structure during
the different process steps. A good metrology
approach is also important for the creation
of robust simulation models. These models
predict how a design will be patterned in resist.
Some typical concerns for FinFETs that need
characterization are: the rounded corner
(top-down view); fin width variation through
pitch and as function of fin length; line edge
roughness (LER); and sidewall roughness.
They all have an impact on the performance
of the FinFET device. The magnitude of the
rounded corner decreases the final length (or,
source-drain distance) of the FinFET (Figure
1A). One of the problems stemming from
this rounding phenomenon is the significant
increase in fin width W when the fin length
L is decreasing (Figure 1B). Variation in fin
width, due to the rounding of the fin opening, will impact short channel effects. This
effect increases for shorter fin lengths.1
As for most structures, critical dimension
(CD) variations through pitch (Figure 1C)
and as a function of fin length are undesirable, because
they render the devices unreliable.1
Since LER and sidewall roughness have an influence on
electrical behavior, it’s also important to control them
and keep them as low as possible.2,3 (LER and sidewall
roughness will not be addressed in this paper).
Different methods can be used to reduce some of these
effects.4 For example, adding serifs in combination with
a conventional illumination, or applying strong off-axis
illumination settings, like annular, will reduce the
rounding of the corners. But, what will happen with
the proximity behavior? The annular exposure setting
will deteriorate the fin width variation through pitch,
while the effect of using conventional illumination on the
through pitch behavior will be smaller. Many variables
play a role in the optimization of a 2D pattern, some
with a larger effect than others. Two simple exposure
settings will be tested in this first case: a conventional
one and an annular one (the latter in combination with
some basic serif introduction).
Since a full characterization of the 2D structure is wanted,
design-based metrology (DBM) is introduced to decrease
the effort that the creation of the measurement job takes.
DBM creates an automatic CD scanning electron microscope (SEM) job with hundreds of sites, starting from the
design in GDS format. This development takes approximately one hour, whereas an engineer will spend several
hours behind the SEM to create the job manually. DBM
is used here in combination with an off-line measurement tool, enabling image-based measurements on SEM images, to further simplify the process. For two exposure settings, the 2D behavior of the FinFETs is studied intensively to build a resist model that will be used to optimize future reticle designs at IMEC.
Figure 1: The effect of the rounded corner: a) on the length of the FinFET, illustrated with a 0.63NA, conventional 0.89σ exposure (upper left) and a 0.75NA, annular 0.89 outer-σ / 0.65 inner-σ exposure (upper right); b) as a function of the decreasing length of the FinFET; c) due to proximity effects, isolated features are printed smaller than the dense. In the FinFET structure, this proximity effect is seen between the inner (dense) and outer (semi-isolated) fins.
Experimental setup
All exposures are performed on an ASML PAS5500/1100
step-and-scan system, interfaced with a TEL Clean Track
Act8. Maximum numerical aperture (NA) is 0.75. The
total system is charcoal-filtered to prevent airborne base
contamination. Top-down CD SEM metrology is done
using a KLA-Tencor eCD-2 CD SEM tool.

For the baseline technology integration work (front-end of line (FEOL)), a 193 nm resist from JSR, AR237J at 230 nm film thickness (FT), is used on Brewer Science ARC29a organic bottom anti-reflective coating (BARC), FT = 77 nm. The stack for FinFET patterning (or, the active layer) is 65 nm silicon on 150 nm buried oxide (or, silicon-on-insulator (SOI) stack). A 60 nm TEOS oxide hard-mask (HM) is used during the patterning process for two reasons: to provide etch resistance for the silicon etching and to enable CD (HM) trimming. A binary mask (BIM) is used to print an active pitch of 350 nm; the CD at mask level is 120 nm. The litho target is set at 100 nm. This target is chosen to have acceptable process latitudes (CD control) in litho.
Two different modules are used in the experiments.
On the one hand, different actual FinFET devices are
explored for a full characterization. On the other hand,
regular Mentor Graphics test pattern structures are used
to build the resist model. Two parameters in the FinFET
device are fixed: the width of the fins and the pitch,
120 nm and 350 nm, respectively (both 1X on reticle).
Three other parameters in the FinFET device are varied
throughout the experiment. The first one is the length
of the fin: the shortest is 180 nm and the longest is
1.45 µm (see Figure 6). The second variable is the biasing of the width of the outer fin in a multiple fin structure (see Figure 7). The outer fin width is varied from
90 nm to 150 nm in steps of 10 nm. The last variable
is the placement of the serifs (See Table 1 and Figure 8).
The size of the serif is in all cases 90 nm by 90 nm. Two
kinds of placements are present. In the first, the serif is placed symmetrically with respect to the corner (called OPC2), i.e., the overlap in the x- and y-directions is the same. The overlap is 75 nm, meaning that 15 nm
of the original design is removed in both directions.
In the non-symmetric case (called OPC1), the overlap
in y-direction is decreased; 40 nm is removed from the
original design.
OPC0: no serifs
OPC1: 90 nm x 90 nm serif, non-symmetric (with respect to the corner)
OPC2: 90 nm x 90 nm serif, symmetric (with respect to the corner)

Table 1: Sizes and placement of serifs on the FinFET design.
Two exposure conditions are studied in more detail:
a 0.63NA conventional 0.89σ and a 0.75NA annular 0.89 outer-σ and 0.65 inner-σ (0.89/0.65 σo/σi).
Methodology
DBM methodology
The DBM tool used here was developed by KLA-Tencor.5
As input, it requires the GDS of the design and the
coordinates of the measurement sites. With the coordinates of the sites of interest in the design, the DBM
tool creates the pattern recognition templates for each
site (Figure 2). The 2D pattern on the clip is compared
with the 2D pattern on the wafer until an overlap
between the two is found (the pattern recognition).
Figure 3: a) minimum gap; b) maximum gap; c) rounded corner.
At this point, the DBM tool has all it needs to create the
automatic CD SEM measurement: coordinates of the
positions and pattern recognition templates. If a measurement is needed, the tool defines by itself which measurment algorithm is used. There is also the possibility to
acquire SEM images at each site for further analysis, as
an off-line measurement on SEM images is available.
A useful feature in this off-line measurement tool is the
ability to measure several similar sites in batch mode.
Creation of the resist model
The software package used for the model building in this
paper is Calibre WorkBENCH from Mentor Graphics.
The 216 sites in the Mentor Graphics line test module
are automatically measured by combining the two tools
described previously. For every change in exposure setting, resist and/or substrate stack, new measurements
are needed for the calibration of the model.
Figure 2: Diagram of DBM tool creating a CD SEM job. As input, only a design
in GDS format and coordinates of the measurement sites are needed.
The off-line measurement tool
KLA-Tencor has developed an off-line measurement tool
that enables indirect measurements based on the saved
SEM image. Possible measurement algorithms are:
minimum or maximum gap width, the rounded corner
algorithm, a contact-hole algorithm, and a line-width
algorithm. All algorithms can measure multiple structures on an image, as shown in Figure 3. The minimum
gap width algorithm is used later in the work to determine the smallest fin width. This approach is preferred
because the standard line-width algorithm gives an average value of the line-width in the chosen measurement
box. This means that any rounding along the length of
the fin is not taken into account. The maximum gap
width algorithm is applied to measure the fin length.
The rounded corner algorithm determines the magnitude of the rounding (top-down) of the corners in the
FinFET device. Moreover, the goal is to use all these results to build a first resist model to optimize future FinFET designs.
First a setup file is created, including the information
of exposure setting and substrate. Then a default resist
model is used to simulate the 216 sites of interest to
compare them with the measured data. Immediately,
the Model Flow Tool of Calibre creates a new resist
model. A few iterations are needed to define the best
agreement. Between two iterations, it is useful to check
which sites have a good or bad correlation by using the
Model Center of Calibre. The bad ones can be removed
if there is doubt about the measured value. Finally, a best-fit resist model is defined.
Results and discussion
In total, 364 sites of interest are defined in the chip design
to characterize the 2D behavior of the FinFET and to
build the resist model. The CD SEM measurement was
made by using the DBM tool since, as indicated before,
it takes a lot of time to create the CD SEM measurement
parameters manually.
Characterization of the FinFET structure
In this section, the FinFET will be described through
discussion of: the rounded corner (top-down view),
the fin width versus length, and fin width versus pitch.
Rounded corner
One of the main concerns with decreasing the size of
the fins is the magnitude of the rounded corner, since it
has an impact on the effective length and width of the
shorter fins. The rounding of the fin is characterized by
the difference in area between the edge of the FinFET
and the surrounding rectangle. The result for the four
corners is added up. This can be done automatically
by using the rounded corner algorithm on the off-line
measurement tool.
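To make this metric concrete, the following minimal Python sketch (an illustration only, not KLA-Tencor's actual algorithm) computes such a rounding value, assuming the printed fin edge has already been extracted from the SEM image as a list of (x, y) contour points in nanometers:

import numpy as np

def rounding_magnitude(contour_xy):
    # Corner-rounding metric: area of the fin's bounding rectangle minus the
    # area enclosed by the printed contour. A perfectly square fin gives zero;
    # rounding at the four corners increases the value.
    pts = np.asarray(contour_xy, dtype=float)
    x, y = pts[:, 0], pts[:, 1]
    enclosed = 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))  # shoelace formula
    bounding = (x.max() - x.min()) * (y.max() - y.min())
    return bounding - enclosed  # nm^2, all four corners combined

# Toy contour: a 100 nm x 500 nm fin with 20 nm chamfered corners.
fin = [(20, 0), (80, 0), (100, 20), (100, 480), (80, 500), (20, 500), (0, 480), (0, 20)]
print(rounding_magnitude(fin))  # prints 800.0; larger values mean more rounding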
By moving towards off-axis illumination settings such
as annular or quasar, the corners become more squared
(Figure 1A).4 As noted previously, two exposure conditions are used: a 0.63NA, conventional 0.89σ and a 0.75NA, annular 0.89/0.65 σo/σi. The magnitude of
the rounded corners is smaller for the fins shorter than
750 nm when an annular exposure setting is used. This
is not the case for the longer fins; both illumination
settings give a similar result. The magnitude of the
rounding increases with increasing fin length, independent of the exposure setting (Figure 4). The reason why
a larger rounding is observed for the longer fin is that
these corners are smoother. They have a longer tail
compared to the shorter fins (Figure 5).
Figure 4: The magnitude of the rounding versus the fin length L for OPC0 (no serifs present). Two exposure settings are compared: a 0.63NA, conventional 0.89σ and a 0.75NA, annular 0.89/0.65 σo/σi.

Figure 5: Rounding is more pronounced for the longer fins due to a longer tail from one edge center to the other center.

Figure 6: The FinFETs (OPC0, OPC1, OPC2) with and without serifs on design, on reticle, and printed in resist with a 0.75NA, annular 0.89/0.65 σo/σi illumination setting.

Figure 7: The magnitude of the rounded corner versus the fin length L for the different OPC versions (a 0.75NA, annular 0.89/0.65 σo/σi).

Figure 8: The magnitude of the rounded corner versus the fin length L for OPC1. Two exposure settings are compared (0.63NA, conventional 0.89σ and 0.75NA, annular 0.89/0.65 σo/σi). The same effect is observed for OPC2.
Introduction of serifs decreases the magnitude of rounded corner. This can be seen immediately and visually in
Top Down SEM (TD-SEM) (Figure 6). Analysis with the
rounded corner algorithm results in the same conclusion
(Figure 7). As before, the magnitude of the rounded
corners is smaller for the fins shorter than 750 nm when
an annular exposure setting is used (Figure 8). But for the
longer fins, both exposure settings give the same result
when serifs are used, as has been seen when there are no
serifs used. The use of off-axis illumination settings, like
annular, helps in filtering out the part of the light falling
in the NA pupil that is not relevant to the imaging of the
densest structures. This improves the contrast, and thus
the imaging, of smaller pattern details, like corners.
Width versus length of the fin
Another concern is the width variation as a function of
varying fin length induced by optical proximity effects.
The length and width of the fin can be measured with a
standard line/space width CD measurement algorithm
or with the newly developed minimum/ maximum gap
width algorithm.
Fins with a length below 530 nm are chosen to be
checked for fin width variation caused by varying
length. The same region will be used to characterize the
width variation caused by the difference in pitch, seen
in FinFET devices as the width variation between inner
(dense) and outer (semi-isolated) fin.
First, the influence of varying length on the width of the fins is checked. The range in width variation in the region of interest (180-530 nm length) is 60 nm for the conventional exposure and 71 nm for the annular exposure (Figure 9). Part of this difference is due to the impact of the rounded corner on the width for shorter fins. The larger range of the annular exposure setting (10 nm) is probably due to the larger proximity effect when an off-axis exposure setting is used.

Comparing the lengths of the fins for the two different exposures shows that for the annular setting the difference with the designed length is smaller.

As previously shown, introduction of serifs has a
positive effect on the rounded corners. But how will it
influence the width variation induced by the length of
the fin? The length of the fin increases when serifs are
used in the design. The increase is most pronounced for
OPC1 (or, the non-symmetrically placed serifs).
The influence of the serifs on the width variation
induced by the length of the fin is large. For example,
together with conventional exposure, the width variation caused by the length has completely disappeared
in the region of interest (Figure 10). So, this confirms
what was stated before, “A part of this width difference
is due to the impact of the rounded corner on the width
for shorter fins”. In the case of OPC2 (or, symmetrical
serifs) in combination with conventional exposure, the
shorter fins become even smaller than the longer ones.
Also, when annular exposure is used, a large improvement is seen (Figure 10), but a width variation of 20 nm
is still observed.
Figure 9: Fin width as a function of the designed length (OPC0). The change is larger for 0.75NA, annular 0.89/0.65 σo/σi than for 0.63NA, conventional 0.89σ.

Figure 10: Smaller proximity effect for both OPC1 and OPC2, with: a) 0.63NA, conventional 0.89σ; b) 0.75NA, annular 0.89/0.65 σo/σi.
Figure 11: Ratio of inner and outer fin width versus the designed width of the outer fin: a) 0.63NA, conventional 0.89σ; b) 0.75NA, annular 0.89/0.65 σo/σi.
Width versus pitch
Known proximity effects for standard line and space
patterns will also play a role here and will become larger
when off-axis illumination settings are used. In this
paper, the pitch of the FinFET device is fixed at 350
nm, but the proximity effect is seen as a difference
between inner and outer fin (Figure 1C). This result
can be presented in two ways. The first approach is to
plot the fin widths versus length. If the same width
on design is used for inner and outer fin, the inner fin
is printed larger than the outer fin. This is more pronounced for the shorter fins. A second representation
is to plot the ratio between inner and outer fin width
(Figure 11). In this way, it’s possible to define the best
bias needed for the outer fin to print all fins on target
(ratio = 1). When using conventional exposure, a bias
of 10 nm on the outer fin (an outer fin of 130 nm) is
enough to compensate for the optical proximity. When
the annular setting is used, a bias of 30 nm is needed
to compensate for this effect.
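To illustrate how such a bias can be read off a ratio plot, the short Python sketch below interpolates hypothetical inner/outer ratio data versus the designed outer-fin width and finds the design width where the ratio crosses 1. The numbers are invented for illustration and merely echo the 10 nm result quoted above for conventional exposure; they are not the measured IMEC data.

import numpy as np

# Hypothetical data for one exposure setting: designed outer-fin width (nm)
# versus printed inner/outer width ratio (illustrative values only).
design_width = np.array([90.0, 100.0, 110.0, 120.0, 130.0, 140.0, 150.0])
ratio = np.array([1.45, 1.32, 1.20, 1.10, 1.00, 0.93, 0.87])

# The ratio falls as the outer fin is drawn wider, so sort by ratio and
# interpolate the width at which all fins print on target (ratio = 1).
order = np.argsort(ratio)
width_on_target = np.interp(1.0, ratio[order], design_width[order])
bias = width_on_target - 120.0  # 120 nm is the nominal fin width on the reticle
print(f"outer fin drawn at ~{width_on_target:.0f} nm, i.e. a bias of {bias:.0f} nm")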
For both exposure settings, the necessary bias is decreased when serifs are placed. For the conventional exposure, no bias is needed. Also, for the annular setting, the bias is decreased to 20 nm (an outer fin of 140 nm). The effect for the non-symmetric and symmetric serifs is approximately the same.

Building a resist model

The combination of DBM with the off-line measurement tool is powerful for characterizing 2D patterns. It is also useful in retrieving the needed measurements for the creation of a resist model. On the chip design used to pattern the FinFET devices, there is also a Mentor Graphics line test pattern available. This pattern consists of 216 different sites: isolated lines, dense line patterns, and end-of-line structures (Figure 12). These structures have also been used to build a resist model.

Figure 12: Example of the different patterns in the Mentor Graphics line test module: a) dense lines; b) dense line ends; c) an isolated line end; and d) an inverse line end.
As mentioned before, it is not really user-friendly to
define the CD SEM measurement needed for this type of
work manually. Thus, the DBM tool is used to automate
it. The sites are grouped per similar design, because for
most similar sites it is possible to do the analysis with
the off-line measurement tool in batch mode. The CD
data together with the coordinates and specifications
of the sites are put in a data file format, readable as a
sample file within Calibre.
Unfortunately, for every change in exposure setting,
resist, and substrate stack, new measurements are
needed for the recalibration of the model. The fitting
is done using the Model Flow Tool and the Model
Center Tool of Calibre WorkBENCH, as explained
previously. Once a final model is defined, it is verified
with the experimental data via two methods: top-down
comparison of the model with the SEM image (a visual
comparison) and a comparison of the actual measurements on wafer with measurement results retrieved
from the simulated clips.
As an example, the results for 0.63NA, conventional
0.89σ are shown (Figure 13). The correlation between
the measured data and the model is 0.976.
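A correlation figure of merit such as the 0.976 quoted here can be computed directly from the paired wafer measurements and simulated values. A minimal sketch, with short placeholder arrays standing in for the 216-site data:

import numpy as np

measured = np.array([96.2, 101.5, 88.7, 110.3, 95.0, 99.8])   # CDs on wafer (nm)
simulated = np.array([95.8, 102.1, 90.0, 109.5, 96.1, 99.0])  # resist-model prediction (nm)

r = np.corrcoef(measured, simulated)[0, 1]   # Pearson correlation between model and wafer
err = simulated - measured                    # signed model error per site
print(f"correlation = {r:.3f}, mean error = {err.mean():+.2f} nm, max |error| = {abs(err).max():.2f} nm")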
The simulated 2D profiles overlap very well with the
SEM images. A slight necking is observed on the simulated edge contour plots for the longer fins, but it is not
present on the SEM images. This overlap illustrates the
usefulness of this approach to check the accuracy of a
resist model.
As a second check, actual top-down views on wafer are
compared with simulated clips. The magnitude of the
rounded corner gives similar results for both cases. For
the longer fin lengths alone, the magnitude of the rounded
corner is slightly smaller on the simulated clips than on
the actual CD SEM images. This occurs since the tail
(seen before on longer fins, Figure 5) on the simulated
clips is not as long as on the real pattern in resist.
The model correlates very well with the actual data and
will now be used to optimize future FinFET designs to
decrease optical proximity effects.
Conclusions
The combination of design-based metrology with
off-line image-based measurements is a useful tool
to describe different parameters of a 2D pattern. The
capability for 2D characterization is shown using a
FinFET pattern. Rounded corner, width variation,
introduction of serifs, biasing, and different exposure
settings are the parameters that have been varied to
gain an understanding of the patterning behavior of
the 2D structure.
A conventional illumination combined with serifs seems to be the best choice for the chosen design (120/350 nm width/pitch). This exposure condition is beneficial for the decrease in width variation and to provide a less rounded corner. When smaller FinFETs are needed in the future (not only in width, but also in density), an annular setting will give better resolution and a bias will be needed to tackle the proximity effect.

The combination of DBM and off-line image-based measurements is also useful to retrieve the input needed for the creation of a resist model. It is shown that a resist model indeed can predict the patterning in resist (etch is not included in this paper).

Acknowledgments
The authors would like to thank Nadine Collaert (IMEC
integration) and Ivan Pollentier for their useful discussions, as well as the algorithm group at KLA-Tencor for
their support.
References
1. S. Xiong and J. Bokor, "Sensitivity of double-gate and FinFET devices to process variations," IEEE Trans. Electron Devices 50(11), pp. 2255-2261, 2003.
2. J. Croon et al., "Line Edge Roughness: Characterization, Modeling and Impact on Device Behavior," IEDM Technical Digest, pp. 307-310, 2002.
3. L.H.A. Leunissen et al., "Full Spectral Analysis of Line Width Roughness," Proc. SPIE, Vol. 5752, pp. 499-509, 2005.
4. M. Ercken et al., "Challenges in Patterning 45 nm Node Multiple-Gate Devices and SRAM Cells," Interface 2004.
5. C. Bevis, "Design driven inspection or measurement for semiconductor using recipe," US Patent 6,886,153, 2005.
This paper was originally published at INTERFACE
2005, the FUJI FILM Electronic Materials Microlithography Symposium.
May 22–24, 2006
Sheraton Hotel
Boston, Massachusetts, USA
www.semi.org/asmc
The 17th Annual IEEE/SEMI® ASMC 2006
Advanced Semiconductor Manufacturing Conference
People
Processes
Controls
ASMC 2006 continues the rich and established tradition of
this premier conference dedicated to unveiling breakthroughs
in semiconductor manufacturing from wafer fab productivity
and profitability to advanced process controls and device yield.
With more than 90 peer-reviewed technical papers and poster sessions, ASMC 2006 attracts engineers and managers from around the world. To register, visit us at www.semi.org/asmc.
Media Sponsors:
Sponsored by:
Cover Story
From Data to Decisions
Chris A. Mack, Lithography Consultant
John Robinson, KLA-Tencor Corporation
The value of metrology data is explored conceptually by describing the systematic
progression from the data to a decision made in the fab. The Knowledge Hierarchy, a conceptual framework for understanding the increasing value of data as it
becomes information, then knowledge, then a decision, is introduced. Carefully
spelling out every step in the decision making process allows for an understanding
of where the weak links in the chain are located, and which improvements will
have the greatest impact on overall decision quality. This framework then allows
one to properly assess the relationship between data quality and decision quality,
and work towards systematically improving decision quality.
Introduction
How does one quantify the value of a metrology tool? Obviously this is a
commonly asked question for both producers and purchasers of metrology
equipment. While there are many answers and approaches used for specific
applications of metrology data, there are some common themes that apply to
all metrology value statements. Thinking about these commonalities over
the last several years, we have developed a framework for understanding the
value of metrology that we call “from data to decisions.”
A carpenter friend of ours is fond of saying, “Nobody wants a 1/4" drill. They
want a 1/4" hole.” The drill is the most effective tool for getting the hole
they really want, but the value comes from the hole. Likewise, nobody wants
a metrology tool. What they really want is:
• A process that’s in control and is predictable
• Lower rework rates
• Better bin sort (device performance)
• Faster ramp to high yield
• Sustained higher yields
• Quick detection and elimination of yield crashes or potential yield crashes
Metrology tools are just an effective means to achieve these primary goals of
profitable semiconductor manufacturing. Obviously, if one is to express, and
hopefully quantify, the value that a metrology tool adds to a fab, then one
must clearly link the immediate use of the tool (collecting data of some sort)
to the final goals of that use (improved fab profitability).
The Knowledge Hierarchy
Before tackling the problem of how
to understand the value of metrology
in a wafer fab, a few preliminary concepts and terms should be clarified.
What is the difference between data
and information? Between information and knowledge? How are these
concepts related? Data, information,
knowledge, and finally acting on
that knowledge to make a decision,
form a chain of increasing value we
call the Knowledge Hierarchy.
[Figure: the Knowledge Hierarchy: data, information, knowledge, decision.]
To understand this hierarchy let’s
define and give examples for each
step. Data are, of course, the raw
numbers or images provided by the
measurement tool (from the Latin, it
is “the thing given”). It is a collection of numbers with units and with
known uncertainty (that is, known
precision and accuracy). Information,
on the other hand, is data in context. Information includes sufficient
details of what was measured where
and when so that it can be easily discerned from other similar collections
of data. It is organized and accessible. Information may also include
the filtering out of extraneous bits
of the data or distilling the numbers
down as much as possible (reporting
a mean and standard deviation, for
example). Knowledge is an interpretation of the information based on an
understanding (that is, a model) of
cause and effect. Whereas information answers the question of “what”,
knowledge answers the question
“why”. Finally, decision means acting
on the knowledge obtained. In our
fab context, this means acting with
the intention of improving the fab’s
bottom line (improving yield, bin
sort, etc.).
The table below provides a CD metrology example that clarifies the distinctions between data, information,
knowledge, and decisions. Suppose
a product lot uses a standard sampling plan where some number of
CD measurements are made. The resulting collection of numbers, with
their associated uncertainties (either
measured or assumed), is the data.
As you might imagine, a collection
of numbers out of context is next to
useless. Thus, when the context is
added (the targets were nominally
90 nm isolated
lines in resist arranged along the
scanner slit) and
statistically interpreted (determining that the variation was statistically different from
most other lots), the data becomes
information. Already, the transformation of the data to information
can be immensely useful.
But information alone is not enough.
What is causing the systematic
variation in CDs across the slit? In
Table 1, the targets have been specifically designed for sensitivity to focus
errors. Adding other information
(a separate measurement to monitor
dose errors) and a previously calibrated model of how these targets
vary with focus and exposure enables
us to assign a cause to the variation
seen in the data: there was a -80 nm
focus tilt across the slit. Further, the
model also allows us to estimate how
much a process change might reduce
the CD variation along the slit. This
is knowledge. And it is powerful
knowledge, because it allows us to
understand what actions will cause
what benefits. It allows us to make
a decision.
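The sketch below illustrates that inference step under stated assumptions: a focus-sensitive target whose CD-versus-focus sensitivity has been calibrated beforehand, and a linear CD trend measured across the slit. The sensitivity and CD values are invented so that the answer lands on the -80 nm tilt used in this example; they are not data from the article.

import numpy as np

# CDs (nm) of the focus-sensitive target measured at positions across the slit (mm).
slit_pos = np.array([-12.0, -6.0, 0.0, 6.0, 12.0])
cd = np.array([99.4, 97.8, 96.2, 94.6, 93.0])

# Previously calibrated sensitivity of this target: CD change per nm of defocus
# (hypothetical value taken from the focus-exposure model).
dcd_per_nm_focus = 0.08  # nm of CD per nm of defocus

slope_cd = np.polyfit(slit_pos, cd, 1)[0]   # nm of CD per mm of slit position
focus_slope = slope_cd / dcd_per_nm_focus   # nm of defocus per mm of slit position
tilt = focus_slope * (slit_pos.max() - slit_pos.min())
print(f"estimated focus tilt across the slit: {tilt:.0f} nm")  # about -80 nm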
From this example we can see that
data adds value to the fab only when it
moves up the Knowledge Hierarchy
and enables a decision to be made.
The value to the fab is in the decision – or, more properly, in making
the correct decision. Thus, the best
way to judge the value of the data is
to judge the value of the decision that
the data enables. But we’re getting
ahead of ourselves. First we must
understand how
to systematically
move up the Knowledge Hierarchy
from data to decisions.
DATA
Definition: A collection of numbers, calibrated and with known repeatability.
Example (from CD metrology): The measured CDs are 96.2 ± 0.9 nm, 94.4 ± 0.8 nm, etc.

INFORMATION
Definition: The right data, at the right time, in the right context, organized for access.
Example: Isolated 90 nm lines vary systematically by 6.5 nm across the slit for this lot.

KNOWLEDGE
Definition: Interpretation of information to assign cause and effect.
Example: A -80 nm focal tilt adjustment would reduce the systematic CD variation across the slit from 6.5 nm to 2.1 nm.

DECISION
Definition: Acting on the knowledge with an expectation for benefit.
Example: Let's make the focus tilt adjustment before the next lot is run because we believe it will have a positive, noticeable impact on yield.

Table 1: Knowing how to reduce CD variation.
Moving up the Knowledge
Hierarchy
The details of the Knowledge Hierarchy are very use-case dependent,
with the most generic elements
near the bottom and the most use-case specific elements at the decision stage. Data is characterized by
accuracy and precision specifications
that, in many cases, apply to a wide
range of applications. Once the context is added to make information,
however, we already have one or
more decisions in mind. Targets to
be measured are optimized for maximum sensitivity to some variables
and minimum sensitivity to others,
in order to be most useful in moving up the Knowledge Hierarchy.
Knowledge is extremely use-case
dependent, working towards a specific decision. Thus, it seems that
a systematic approach for defining a
Knowledge Hierarchy must be top
down: start with the question that
you want to answer, the decision
that you want to make.
Let’s consider an example. One of
the most common and important use
cases for parametric metrology data
is lot dispositioning at photolithography: measure representative wafers
from a lot after lithography but before
etch to see if the lot should be
reworked. Thus, the driving question (decision) is, “Should this lot be
reworked?” Important and related
questions are, “If this lot is reworked,
what should be done differently?”
and “Can we feed any information
forward to the etch step that will
improve the final results?” Let’s pick
the first question, and see what is
involved in making the decision. To
do this, we must methodically list
every step in the sequence of steps
that go into that decision. On the
right is an attempt at a fairly exhaustive listing of activities that go on,
from start to finish, when making
the lot go/no-go decision for the case
of overlay.
I. Preparation
   A. Define the metrology tool to be used
      • Required precision, accuracy and throughput
   B. Design the measurement target
   C. Define the within-field sampling plan
   D. Put measurement targets in the chip design
      • Scribe kerf, interdie streets, within die
   E. Define the full sampling plan
      • Fields per wafer, wafers per lot, lot frequency
   F. Create measurement recipe
   G. Create analysis recipe
      • May include reticle data, lens distortion map
   H. Create overlay spec for lot pass/fail
      • Spec is intended to reflect device yield/performance
      • Spec may depend/influence sample plan, tool specs, analysis approach
   I. Define the process (action plan) that applies the spec in production

II. Measurement
   A. Print wafers
   B. Transport wafers to overlay tool and load
   C. Select wafers to measure
      • May be manual or from host or recipe
   D. Make measurements
   E. Perform analysis (usually automatically)
   F. Upload measurement and analysis results to host or 3rd party system

III. Analysis Method
   A. Method 1: compare raw data statistics to spec
   B. Method 2: compare modeled results (model coefficients, modeled max error, overlay limited yield) to specs
   C. Method 3: SPC-like analysis (check for out-of-control condition)
   D. Apply some combination of above methods
   E. Assess the quality of the data

IV. Decision Regimes
   A. Obvious pass – send the lot on
   B. Obvious fail – rework the lot
   C. Gray Area Options (see the sketch after this list)
      • Consider gray area as failure – rework the lot
      • Shrink the gray area
        – Make more measurements (repeat on same points, increase sample), possibly on a different tool
        – Change measurement algorithm for greater precision
      • Apply human judgement (last resort)

V. Decision Post-Mortem
   A. For reworked lots, how have things improved?
      • Measure reworked lots
      • Compare new measurements to old
      • Did corrections work as expected?
   B. For reworked lots, what is the root cause of the problem?
      • What process changes would reduce rework rate?
      • Are the processes and tools in control?
   C. For passed lots, are things OK downstream?
      • Correlation of overlay results to yield
      • Is the expected failure rate obtained?
   D. Can the overall dispositioning process be improved?
      • Relate results to fab metrics (yield, cycle time, throughput, CoO)
      • Time to results
      • Measurement costs
      • Cost of a bad decision
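As a concrete, purely hypothetical sketch of the decision-regime logic in section IV, the Python snippet below treats the band around the overlay spec set by the measurement uncertainty as the gray area; the thresholds and numbers are placeholders, not a real fab recipe.

def disposition(modeled_max_overlay_nm, spec_nm, measurement_sigma_nm, k=3.0):
    # Go/no-go lot dispositioning with an explicit gray area. The gray area is
    # the band of width k*sigma around the spec where neither a pass nor a fail
    # call is trustworthy given the measurement uncertainty.
    margin = k * measurement_sigma_nm
    if modeled_max_overlay_nm < spec_nm - margin:
        return "pass: send the lot on"
    if modeled_max_overlay_nm > spec_nm + margin:
        return "fail: rework the lot"
    # Shrink the gray area (more sites, better algorithm, another tool)
    # before resorting to human judgement.
    return "gray area: add measurements or re-measure with higher precision"

# Hypothetical lot: modeled worst-case overlay of 31 nm against a 30 nm spec,
# with 1.5 nm measurement uncertainty.
print(disposition(31.0, spec_nm=30.0, measurement_sigma_nm=1.5))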
Generally, there are five basic steps in
the making of a standard production
decision: preparation, measurement,
analysis, decision, and post-mortem.
Preparation (or planning) is often
underestimated in terms of its importance to the overall quality of the
decision to be made. In particular,
the sampling plan and the design of
the measurement target will have a
profound impact on the quality of
the decision, often far in excess of
the measurement uncertainty itself.
Proper preparation allows data to be
seamlessly turned into information.
Of course, the measurement itself is
important – no amount of planning
or analysis can make up for uncertainty or inaccuracy in the raw data.
But a focus on measurement tool
specifications out of context of the
decision to be made has little value.
Analysis of the data (turning information into knowledge) assigns a
probable cause to what is happening
on the wafer. In overlay, it assigns
correctables: if the lot had been run
with these different settings, this is
how much better things would have
turned out. The decision is made
by relating the knowledge of what
could be done better with what is effectively a cost analysis: is it worth
it to make a process change (e.g., to
rework the lot)? The final step, after the decision has been made and
implemented, is the post-mortem.
Have we learned anything from this
experience that we can use to do
things better next time?
As the example in the next section
illustrates, there is a direct correlation between the basic way in which
fab decisions are made and the process of moving up the Knowledge
Hierarchy.
[Figure: the five basic steps mapped onto the Knowledge Hierarchy. PREPARATION: system to turn data into information. MEASUREMENT: create data. ANALYSIS: turn information into knowledge. DECISION: turn knowledge into action. POST-MORTEM: improve the data to decision process.]
Assessing the value of data
Given a thorough understanding of
how data moves up the Knowledge
Hierarchy to become a decision, we
now have a framework for how to
assess the value of metrology. Data
is valuable only in so much as it affects the quality of a decision. (Note
that deciding to make no changes
to the process is still an important
decision to make, one that should
be made actively, not passively.) If
the process of making a decision is
systematized (as is often the case in
our increasingly automated fabs),
it will be possible to make a quantitative correlation between data
quality and decision quality.
For example, for the case of a go/
no-go lot dispositioning decision
we break decision errors into alpha
and beta errors. An alpha error (also
called a type I error) is when we
decide to rework a good lot. A beta
(or type II) error is when a bad lot is
not reworked. Each error has its own
unique costs in the fab. The cost of
making a bad decision is the cost of an
alpha error multiplied by the probability that an alpha error will occur,
plus the cost of a beta error times its
probability of occurrence. Lowest
overall cost occurs when the ratio
of the probabilities of alpha errors
to beta errors is equal to the ratio
of the costs of beta errors to alpha
errors. It is always easy to decrease
one type of error at the expense
of the other, so good metrology and
analysis will strive to both lower and
balance the two types of errors. Given
a systematic analysis and decision
making method, one can relate the
quality of the data and the actual
problem occurrence probability to
the probability of making alpha and
beta errors. In that way, real dollar
values can be assigned to measurements and to measurement specifications.
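A small numeric sketch of that costing logic, with probabilities and costs invented purely for illustration:

# Hypothetical per-lot figures: cost of reworking a good lot (alpha error) versus
# cost of letting a bad lot continue downstream (beta error), and how often each occurs.
cost_alpha, p_alpha = 5_000.0, 0.020   # rework a good lot
cost_beta, p_beta = 50_000.0, 0.002    # fail to rework a bad lot

expected_cost = cost_alpha * p_alpha + cost_beta * p_beta
print(f"expected cost of bad decisions per lot: ${expected_cost:,.0f}")

# The two error types contribute equally (the balance described above) when
# p_alpha / p_beta equals cost_beta / cost_alpha.
print("alpha contribution:", cost_alpha * p_alpha)  # 100.0
print("beta contribution: ", cost_beta * p_beta)    # 100.0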
When determining correctables, data
uncertainty can be related directly
to uncertainty in the correctables
(which then can be related to the
value of the reworked lots, as well
as the changing problem occurrence
probability). Again, it is possible
to use the systematic decision making process to quantify the impact
of different measurement specifications, target quality, sampling
plans, and data analysis models on
the quality of the correctable decisions being made.
Quantifying the relationship between data quality and decision
quality provides a tool for assessing
the decision process as well. Given a
certain data quality, what sampling,
target design, and analysis method
provides the needed decision quality?
Often metrologists focus on improving measurement precision when
decision quality can more easily be
improved with better preparation or
analysis.
Conclusions
Metrology is valuable, which is why
we make measurements. The value,
however, can only be quantified by
first describing the systematic progression from the data to a decision, then assessing the relationship
between data quality and decision
quality. The Knowledge Hierarchy
is a conceptual framework for understanding the increasing value of
data as it becomes information, then
knowledge, then a decision. It maps
directly to the actual steps used in a
fab when metrology data drives decisions. Carefully spelling out every
step in the decision making process
allows for an understanding of where
the weak links in the chain are located, and which improvements will
have the greatest impact on overall
decision quality. Metrology data
only “adds value” to the wafers when
it moves up the hierarchy to become
a valuable decision.
Author’s note
Anybody who works for a metrology company has no doubt heard this
refrain from a customer at some point or another: unlike process tools,
metrology tools do not “add value” to the wafer. I’ve never quite understood what this comment means. Certainly it can’t mean that
metrology tools aren’t valuable; otherwise, why would people buy them?
I think it means that the way in which metrology “adds value” to the
wafer is more indirect, and thus harder to express in simple, short
sentences. During my tenure at KLA-Tencor, I thought about this problem,
and with my colleague John Robinson, developed a framework for understanding the value of metrology that we call “from data to decisions.”
- Chris Mack
Biographies
Chris Mack was Vice President of Lithography Technology for KLA-Tencor from
2000 – 2005. He currently writes and consults in Austin, Texas.
John Robinson, Director of Marketing and Applications Development at KLA-Tencor, received his Ph.D. in physics at the University of Texas at Austin in 1995. Based in Austin, Texas, Robinson joined KLA-Tencor in 1997, and has been responsible for
overlay, optical CD, and CD-SEM analysis products.
Lithography
Reticle Inspection
When to Raise the Red Flag
Effective Dispositioning of Defective Masks
Jerry Huang, Lan-Hsin Peng, and Chih-Wei Chu, ProMOS Technologies
Kaustuve Bhattacharyya, Ben Eynon, Farzin Mirzaagha, Tony Dibiase, Kong Son, Jackie Cheng, Ellison Chen, and
Den Wang, KLA-Tencor Corporation
Progressive mask defects are an industry-wide mask reliability problem, particularly when the defects approach the critical
state where the mask either needs to be pulled out of production or sent for cleaning (or repair). This problem is especially
troublesome with expensive high-end masks running deep ultraviolet (DUV) lithography. In these cases, the fab will want
to sustain the problematic masks in production as long as possible, until just before the masks begin impacting the process
window. This study found that while a small, growing defect may not print at the best focus exposure condition, it can
still influence the process window, shrinking it significantly. Direct, high resolution reticle inspection enables early detection
of these defects; however, fabs still need an effective means to disposition defective masks. A lithographic detector has been
evaluated to see if it can predict the criticality of such progressive mask defects.
Examining the nature of mask
defect growth
In a typical fab, many masks remain problem-free (clean) even after a large number
of exposures. On average, about 1% of
binary masks (at 365 nm lithography) and
6% to 15% of embedded phase shift masks
(EPSMs) (using DUV lithography) show a
defect growth problem through the duration of their usage in the fabs1,2. A direct,
high resolution mask inspection can detect
these defective masks effectively. But the
nature of this defect growth can be severe on
some masks, which means thousands of real
crystal growth-type defects on the pattern
side of masks. As one can imagine, this can
Figure 1. Progressive mask defect growth in the fab.
render the defect review session of these problematic
masks quite complicated. However, a mask inspection tool such as KLA-Tencor's TeraScan STARlight has
the capability of binning the defects by size as well as
by type (such as “on-chrome”, “on-clear”, “on-half-tone”,
etc.). This helps to disposition masks that have a reasonable number of defects. When the total defect count
grows on certain masks, however, traditional review
techniques may result in a very lengthy review session.
Hence, it was decided that the new TeraScan STARlight-2 (SL2) should be used to evaluate a run-time
mask error enhancement factor (MEEF) based detector
that may isolate the defects of interest from those
thousands of total defects caught by SL2.
In Figure 1 below, a fab’s incoming inspection indicates
that a mask arrived clean, but after only 20 days of
production usage, the mask showed catastrophic defect
growth. While defects on the mask image to the left
can be easily reviewed individually (manually), the total
count of the defects on the mask image to the right is so
large that a manual review is difficult. A more automated way to review defects is needed.
Impact on the process window
Understanding the impact of small progressive defects
on the process window is absolutely critical. Some critical mask defects will print on the wafer (collapse process
window), and some will have no impact at all. But there
will be many defects on these highly defective masks
that will fall in between these parameters. They may not
completely collapse the process window, but may reduce
it. Criticality of mask defects has to be examined from
this point-of-view, as any reduction in process window
can potentially cause problems.
It can be seen from the simulation images in Figure 2 that the half pitch of the mask and the defect location play a big role. If the defect location and size are kept the same and the half pitch of the mask is reduced, the probability of the defect causing bridging on the wafer will increase. This means a mask defect (in Figure 2, consider the 120 nm defect) that caused no problem in the past on older design nodes (like the 135 nm half-pitch process) may now impact the process window on a new design node (90 nm half-pitch) due to denser geometry and higher MEEF. When the defect size becomes 200 nm, it completely collapses the process window.
Figures 3 and 4 show the process window effect of a 120 nm defect on mask. With the tighter design rules, the volume of defects of concern continues to increase.
It is also important not to judge a defect by its size alone, as contamination defects are not completely opaque and can behave as phase defects. A combination of defect size, transmission loss, and location should
provide a good indication of the defect’s criticality. This
makes MEEF-based binning necessary.
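As a hedged illustration of what MEEF-based binning amounts to (this is not KLA-Tencor's actual Litho3 implementation, and every name and threshold below is invented), a defect can be scored from its size, transmission loss, and the MEEF of the geometry it sits in, and routed to a critical bin when its projected wafer-level impact exceeds a user-set limit:

from dataclasses import dataclass

@dataclass
class MaskDefect:
    size_nm: float            # defect size on mask
    transmission_loss: float  # 0.0 = fully transparent, 1.0 = fully opaque
    local_meef: float         # mask error enhancement factor at the defect location

def is_critical(d, magnification=4.0, wafer_impact_limit_nm=20.0):
    # Crude criticality score: defect size scaled to wafer level, weighted by
    # how opaque it is and amplified by the local MEEF.
    wafer_impact = (d.size_nm / magnification) * d.transmission_loss * d.local_meef
    return wafer_impact > wafer_impact_limit_nm

defects = [
    MaskDefect(size_nm=120, transmission_loss=0.4, local_meef=1.2),  # semi-isolated area
    MaskDefect(size_nm=120, transmission_loss=0.4, local_meef=3.5),  # dense, high-MEEF area
    MaskDefect(size_nm=200, transmission_loss=0.8, local_meef=3.5),
]
for d in defects:
    print(d, "->", "critical bin" if is_critical(d) else "non-critical")

With these placeholder numbers, the same 120 nm defect lands in different bins depending on the MEEF of its surroundings, which is the behavior described for the run-time detector later in this article.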
Figure 2. Mask defects simulated on wafer at 193 nm exposure and 0.75 NA: MEEF and defect size impact. For each combination of half pitch, defect size (on mask) and K1 factor, the mask defect and the resulting wafer image are shown:
- half pitch 135 nm, defect size 120 nm, K1 = 0.52
- half pitch 135 nm, defect size 200 nm, K1 = 0.52
- half pitch 135 nm, defect size 320 nm, K1 = 0.52
- half pitch 90 nm, defect size 120 nm, K1 = 0.35
- half pitch 90 nm, defect size 200 nm, K1 = 0.35

Figure 3. A 120 nm mask defect did not impact the process window (exposure versus focus) for the 135 nm half-pitch process.

Figure 4. A 120 nm mask defect reduced the process window significantly at the 90 nm half-pitch process.
It can be seen that if the mask defect increases in size to 200 nm, there is no process window left for this 90 nm half-pitch process (Figure 5).
Figure 5. A 200 nm mask defect collapses the process window at the 90 nm half-pitch process.

Printed mask defect and the process window

A 110 nm node 193 nm EPSM was inspected and then exposed using a dose matrix. Dose was varied +/- 20%. The results are as follows:

Figure 6. Mask defect printed on wafer.

Figure 7. Mask defect impacting the process window: wafer images from a CD SEM tool (the 110 nm node EPSM printed through the dose matrix, showing the mask defect, the defect printed on wafer, and the best exposure condition at nominal dose).

A dose matrix exposure of this mask showed that even a small defect can impact the process window. The defect was visible at slight under-dose conditions. Even a -5% dose variation from the best exposure condition enhanced the printability of the defect.

Substantial ongoing investigation is required to understand the printability of contamination defects. However, in this current work, the focus was kept on the development of an effective disposition method for mask defects (go/no-go criteria). For simplicity, all of the work here is based on an optical image from the mask (via the TeraScan inspection system), with the goal being to develop a run-time detector to provide speed.

Developing the detector
Litho3 and ReviewSmart on KLA-Tencor’s TeraScan
mask inspection tool are the two detectors being
characterized. This paper provides preliminary results
from a few inspections performed with the new TeraScan
STARlight-2 (SL2). A complete characterization was
beyond the scope of this work. Further characterizations
will be required in the near future to understand the
implication of these detectors on various mask layers
and nodes.
Litho3
A combination of defect size, transmission loss, and
location should give a good indication of the defect’s
criticality. So a MEEF-driven litho type detector was
developed to use at run-time. With the proper setup,
this detector is capable of binning critical defects under
a single bin. This detector uses the following concepts:
a. It contains a group of specialized detectors that operate
run-time based on geometry and lithographic context
Figure 6. Mask defect printed on wafer.
A dose matrix exposure of this mask showed that even a
small defect can impact the process window. The defect
was visible at slight under-dose conditions. Even -5%
dose variation from the best exposure condition enhanced the printability of the defect.
Substantial ongoing investigation is required to understand the printability of contamination defects. However, in this current work, the focus was kept on the
development of an effective disposition method for mask
defects (go / no-go criteria). For simplicity, all of the
work here is based on an optical image from the mask
(via the TeraScan inspection system), with the goal being to develop a run-time detector to provide speed.
b. Defects of the same size and intensity will be binned
differently based on the defect location MEEF; i.e.,
a certain defect located in a high MEEF area will go
in a particular litho bin while a similar defect (in size
and intensity) that is in a low MEEF area will remain
excluded from this special bin.
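A minimal sketch of the two concepts above might combine defect size, transmission loss, and the MEEF of the surrounding geometry into a single run-time decision, so that only defects that are both likely to print and sitting in high-MEEF geometry land in the special bin. The thresholds, field names, and scoring logic here are assumptions for illustration, not the actual Litho3 algorithm.

    # Illustrative MEEF-aware run-time binning (thresholds and logic are assumed,
    # not the Litho3 implementation).
    def bin_defect(size_nm, transmission_loss, local_meef,
                   size_thresh=100.0, loss_thresh=0.3, meef_thresh=2.0):
        """Return 'critical' only when size, transmission loss, and local MEEF all
        point to likely process window impact; otherwise 'non-critical'."""
        likely_to_print = size_nm >= size_thresh and transmission_loss >= loss_thresh
        if likely_to_print and local_meef >= meef_thresh:
            return "critical"
        return "non-critical"

    # Two defects of identical size and intensity fall into different bins purely
    # because of the MEEF of the geometry they sit on.
    print(bin_defect(120, 0.4, local_meef=3.2))   # dense, high-MEEF area -> critical
    print(bin_defect(120, 0.4, local_meef=1.2))   # isolated, low-MEEF area -> non-critical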
If the litho detector is set up correctly, it will bin all the
critical defects of interest in the special bin. The thresholds that control this bin are user definable from the
setup. An initial evaluation showed encouraging results.
The mask below had roughly 700 defects. When the
litho detector was used, some of these defects were
binned in a special bin (run-time). All of the defects in
Figure 8. Litho3 detector binning the critical defects (in red) on a TeraScan SL2
system.
this special bin were critical in nature and can be seen
below. These are all highlighted in red for visibility.
Non-critical defects are shaded in yellow and green. The
user can tighten these special bin criteria so that fewer
defects are shaded in red. But care must be taken to set
these criteria correctly.
ReviewSmart
The ReviewSmart detector was developed with the goal of more effective binning of similar defects3. ReviewSmart identifies defects which are lithographically similar using a set of operator-specified thresholds. These similar defects are then binned into a group. A large number of defects can be binned into only a few groups, and within each group, defects are also ranked by severity. All of this happens at run-time from the main GUI (no extra simulations to run). The operator then only needs to classify one defect from each of these groups, rather than reviewing all defects; the rest of the defects are auto-classified. The following is an example from a highly defective mask where ReviewSmart was able to bin small crystal growth-type defects on isolated (non-critical) areas effectively, placing this type of defect in a single bin while keeping defects on dense geometry in a separate bin.
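The grouping idea can be sketched as follows: any two defects whose attributes agree within operator-specified thresholds are treated as lithographically similar and collected into one group, members of each group are ranked by a severity proxy, and the operator classifies a single representative per group. The attributes, thresholds, and severity measure below are illustrative assumptions only, not the ReviewSmart implementation.

    # Schematic grouping of lithographically similar defects (illustrative only).
    def similar(d1, d2, size_tol=20.0, loss_tol=0.1, density_tol=0.15):
        return (abs(d1["size_nm"] - d2["size_nm"]) <= size_tol and
                abs(d1["trans_loss"] - d2["trans_loss"]) <= loss_tol and
                abs(d1["pattern_density"] - d2["pattern_density"]) <= density_tol)

    def group_defects(defects):
        groups = []
        for d in defects:
            for g in groups:
                if similar(d, g[0]):          # compare against the group's seed defect
                    g.append(d)
                    break
            else:
                groups.append([d])            # no match: start a new group
        for g in groups:                      # rank each group by a simple severity proxy
            g.sort(key=lambda x: x["size_nm"] * x["trans_loss"], reverse=True)
        return groups

    defects = [{"size_nm": 90, "trans_loss": 0.2, "pattern_density": 0.1},
               {"size_nm": 95, "trans_loss": 0.25, "pattern_density": 0.12},
               {"size_nm": 180, "trans_loss": 0.5, "pattern_density": 0.6}]
    print(f"{len(defects)} defects -> {len(group_defects(defects))} groups to review")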
Conclusions
Progressive mask defects such as crystal growth and
haze continue to threaten the industry. Resolution
requirements have driven the IC industry to implement very low k1 lithography processes, which elevate
the impact of mask errors as a result. Simulation data
and then a real print-study on a 193 EPSM showed that
some of the critical mask defects will print on wafer
(collapsing process window) and some will have no
impact at all. But there will be many defects on these
highly defective masks that will fall in between these,
impacting the process window if they print at slightly
under-dosed conditions. Such defects
may not completely collapse the
process window, but they will reduce
it. Criticality of mask defects has to
be examined from this point-of-view,
as any process window reduction can
potentially cause a problem.
A certain percentage of masks (1%
to 15%) show this progressive defect
problem. High-resolution mask
inspection will detect this defect
problem. But initially, many of these
defects are just forming and not so
opaque in nature. During defect
review on a mask inspection tool,
sorting out critical defects from this
ocean of nascent defects is no simple
task. Disposition of such mask defects
becomes a lengthy task when a large
number of defects (mainly progressive
Figure 9. ReviewSmart binning of defects on TeraScan systems.
defects) are present on the mask. Effective disposition of a highly defective mask needs carefully set, production-worthy go / no-go criteria.
The new lithographic run-time detectors developed and
tested during this work targeted only contamination
defects. Some initial characterization will be required
to fine-tune this detector in the fab. This study showed
promise towards creating a helpful tool for mask disposition in the fabs. Efforts will continue over the coming
months to perfect these detectors and their usage.
Acknowledgements
The authors would like to thank the following individuals for their contribution:
William Volk, Qiang Li, Steven Labovitz, Ching Yun
Hsiang, Paul Yu, Amir Azordegan, and Zhian Guo of
KLA-Tencor Corp.
Jerry Huang, Lan-Hsin Peng, and Chih-Wei Chu, Kaustuve Bhattacharyya, Ben Eynon, Farzin Mirzaagha, Tony Dibiase, Kong Son, Jackie Cheng, Ellison Chen, and Den Wang, Mask Defect Dispositioning, in 25th Annual BACUS Symposium on Photomask Technology, edited by J. Tracy Weed, Patrick M. Martin, Proc. of SPIE Vol. 5992, 59921X (2005), CID# 599206

References
1. K. Bhattacharyya, M. Eickhoff, Mark Ma, Sylvia Pas, A Reticle Quality Management Strategy in Wafer Fabs Addressing Progressive Mask Defect Growth Problem at low k1 Lithography, Photomask Japan, 2005.
2. K. Bhattacharyya, K. Son, B. Eynon, D. Gudmundsson, C. Jaehnert, D. Uhlig, A Reticle Quality Management Strategy in Wafer Fabs Addressing Progressive Mask Defect Growth Problem at low k1 Lithography, BACUS Symposium on Photomask Technology, 2004.
3. P. Yu, V. Hsu, E. Chen, R. Lai, K. Son, W. Ma, P. Chang, J. Chen, Implementation of an Efficient Defect Classification Methodology for Advanced Reticle Inspection, Photomask Japan, 2005.
2006
KLA-Tencor Trade Show Calendar
February 21-22
SPIE Microlithography, San Jose, California
March 6-7
IC China, Shanghai, China
March 21-23
SEMICON China, Shanghai, China
June 6
IITC Hospitality Reception, Burlingame, California
July 12-14
SEMICON West, San Francisco, California
July 12
YMS West, San Francisco, California
警告を出すタイミングを見極める
危険マスクを適切に判別するには
Jerry Huang, Lan-Hsin Peng, and Chih-Wei Chu, ProMOS Technologies
Kaustuve Bhattacharyya, Ben Eynon, Farzin Mirzaagha, Tony Dibiase, Kong Son, Jackie Cheng, Ellison Chen, Den
Wang, KLA-Tencor 社
マスクの進行性欠陥は、マスクの信頼性に関わる業界全体の深刻な問題です。このような問題は、遠紫外線
(DUV) リソグラフィを実施している高コストのハイエンドマスクでは特に深刻です。そのような場合でも、工場
では、問題のマスクがプロセスウィンドウに影響を及ぼし始める直前まで使い続けることを望んでいます。この
研究によって、微小な進行性欠陥はフォーカス/露光条件が適切であればウェーハに転写されることはないが、
それでもプロセスウィンドウに影響を与え、プロセスウィンドウを大幅に狭めるということが明らかになりまし
た。高解像度でのレチクルの直接検査により、これらの欠陥を早期に発見することはできますが、いまだに欠陥
マスクの効果的な判定方法を模索している工場が多いことも事実です。本文では、あるリソグラフィ欠陥検出ツ
ールを評価して、このような進行性のマスク欠陥の致命度の予測可能性を考察した結果を報告します。
マスク欠陥の増大要因を調査する
一般的なウエハファブでは、多くのマス
クは継続した使用後も異物問題と無縁で
す。つまり、クリーンな状態を保ってい
ます。平均では、波長365 nmリソグラフ
ィによるバイナリマスクの約1%、また
DUVリソグラフィによるハーフトーン位
相シフトマスク (EPSM) の約6~15%で、
製造工程でマスクを使用している間に欠
陥が増大する問題が発生しています1,2。
マスクを高解像度で直接検査すれば、こ
のような欠陥マスクを適切に検出できま
す。しかし、この欠陥増大の度合いは、
マスクによっては深刻なものになり、マ
スクのパターン面に数千もの結晶成長型
の実欠陥が発生します。その結果、この
ような問題マスクの欠陥レビューセッションは非常
に難しくなります。KLA-TencorのSTARlight マスク
検査ツールは、欠陥のサイズやマスクのタイプ (遮
光部、透過部、ハーフトーンなど) 別に欠陥をビニ
ングする機能があります。欠陥数がある程度であれ
ば、この機能を使って効果的にマスクの良否判定を
行うことができます。しかし、総欠陥数が多い場
合、従来のレビュー技術によるレビューセッション
は非常に長い時間を要します。今回、新登場のTeraScan STARlight (以下、“SL2”) のマスク誤差増大要
因 (MEEF) をベースとした検出ツールを評価して、
SL2で捕捉された数千単位の総欠陥数から重要な対
象欠陥のみを抽出できるかどうかを検証することに
しました。
次の図1は、入荷時の検査ではクリーンな状態のマスク
が、量産環境でマスクを20日間使用した後で、深刻な欠
陥増大が発生したことを示しています。左側にあるマ
スク画像の欠陥は容易にひとつひとつレビューできます
が、右側のマスク画像の総欠陥数は膨大で、人による個
々のレビューはほぼ不可能です。したがって、より自動
的に欠陥をレビューする方法が必要です。
プロセスウィンドウへの影響
プロセスウィンドウ内の微小な進行性欠陥の影響を
把握することは特別重要です。ウェーハに転写される
(プロセスウィンドウを消滅させる) 重大なマスク欠陥
もあれば、まったく影響を与えない欠陥もあります。
しかし、このように欠陥の多いマスクでは、その多く
が中間に属しています。このようなどちらともつかな
い欠陥はプロセスウィンドウを完全に消滅させないま
でも、狭める可能性があります。少しのプロセスウィ
ンドウの縮小も潜在的な問題を抱えるため、この視点
からマスク欠陥の致命度を検証する必要があります。
ことがわかります。欠陥の発生場所とサイズが同じで
もマスクのハーフピッチが小さいと、欠陥がウェーハ
上にブリッジを引き起こす確率は高くなります。これ
は、旧デザインノード (135 nmハーフピッチプロセスな
ど) では問題を発生しなかったマスク欠陥 (図2の欠陥
サイズ120 nmに注目) が、プロセスパターンの微細化と
MEEF値の増加によって、先端デザインノード (90 nmハ
ーフピッチ) のプロセスウィンドウに影響を与えるかも
しれないことを意味します。欠陥サイズが200 nmになる
と、プロセスウィンドウは完全に消滅してしまいます。
波長193nm露光とNA0.75によるシミュレーション結果
図3および4は、マスク上の120 nm欠陥がプロセスウィン
ドウに与える影響を示しています。デザインルールの高
集積化によって、欠陥の致命度は増大します。
また、異物欠陥の中には透明なものがあり、この場合は
位相欠陥と解釈されるため、純粋にサイズの点から欠
陥を検証するだけでは不十分です。欠陥サイズ、透過
損失、および欠陥の発生場所を考慮して初めて欠陥の致
命度を正確に把握できることを念頭に置く必要がありま
す。それが、MEEF測定ベースのビニング機能が必須で
ある理由です。
図2のシミュレーション画像を見ると、マスクおよび欠
陥の発生場所のハーフピッチが大きな影響を与えている
図3:ハーフピッチ135 nmプロセスでは、120 nmマスク欠陥はプロセスウィンドウに影響を与えない

図4:ハーフピッチ90 nmのプロセスでは、120 nmマスク欠陥がプロセスウィンドウを大幅に縮小

図2:ウェーハ上でシミュレーションしたマスク欠陥 – MEEF値と欠陥サイズの影響(マスク上の欠陥サイズ 120 nm/200 nm/320 nm と、ハーフピッチおよびK1係数(135 nm・0.52、90 nm・0.35)の組み合わせごとにウェーハ画像を比較)
次の図では、マスク欠陥のサイズが200 nmまで大
きくなると、ハーフピッチ90 nmのプロセスではプ
ロセスウィンドウが完全に消滅することがわかり
ます。
図7:プロセスウィンドウに影響を与えるマスク欠陥 – CD SEMツールによる
ウェーハ画像
図5:90 nmハーフピッチプロセスでは、200 nmマスク欠陥がプロセスウィンドウを完全に消滅させる
マスク欠陥の転写とプロセスウィンドウ
110 nmノードによる波長193 nmのEPSMを検査し、露光
量マトリックスに従ってマスクを露光しました。露光
量は±20%変化させました。結果は次のとおりです。
欠陥検出ディテクターを開発する
今回の研究で評価対象としたのはKLA-TencorのTeraScanマスク検査ツールの、Litho3およびReviewSmartとい
う機能です。本文では、新しいTeraScan STARlight (SL2) を
使用した暫定的な結果を報告します。本格的な評価は
本研究の範囲外であり、多様なマスクレイヤおよびノ
ードに対する同検出ツールの効果を把握するには、今
後も広範囲な評価が必要になると思われます。
Litho3
欠陥サイズ、透過損失、および欠陥の発生場所とい
う情報が揃わないと、欠陥の致命度を正確に予測す
ることはできません。そのため、検査時に自動的に
MEEF値に連動して実行するリソグラフィタイプ検
出ディテクター(Litho3) を開発しました。Litho3は、
適切にセットアップすることによって、重大欠陥を
1つのビンに集約することができます。Litho3は、次
のコンセプトに基づいて開発されています。
図6:ウェーハにマスク欠陥が転写
露光量マトリックスに従ってこのマスクを露光し
た結果、微細な欠陥でもプロセスウィンドウに影
響を及ぼすことが判明しました。露光量が少し
でも不足すると、マスク欠陥の転写が認められま
す。つまり、最適な露光条件の露光量からわずか
5%減少しても、欠陥の転写性が増加しました。
異物欠陥の転写性を予測するには、今後も継続的な
調査が必要です。本研究において筆者らが目的とし
ているのは、マスク欠陥の適切な判断方法を確立す
る、つまり、マスクが量産に適しているかどうかの
判断の目安となる基準を設定することでした。本研
究では、TeraScan検査装置から得られたマスクの光
学画像を使用し、処理能力の高い欠陥検出およびレ
ビュー機能を開発することを目的としました。
a. マスクパターンおよびリソグラフィのコンテキ
ストに連動して検査実行時に動作する専用ディ
テクターを有すること。
b. 同じサイズおよび強度の欠陥でも、欠陥の発生
場所のMEEF値に基づいてそれぞれ異なるビンに
分類すること。つまり、MEEF値が高い領域にあ
る欠陥を専用リソグラフィビンに分類し、サイ
ズと信号強度は同じでもMEEF値が低い領域にあ
る欠陥はこの専用ビンには分類しない。
Litho3を正しくセットアップすれば、重要な欠陥は
この専用ビンに分類されます。このビンを制御する
しきい値はセットアップ時にユーザが定義できま
す。初期評価では期待の持てる結果が出ました。
次の図に示すマスクには約700の欠陥があります。
Litho3を実行することにより、これらの欠陥の一部を専
ReviewSmart
ReviewSmart検出ツールは、類似する欠陥をさらに
効果的にビニングする目的で開発されました3。ReviewSmartは、オペレータが指定したしきい値を使用す
ることによって、リソグラフィの観点からどの欠陥
が類似しているかを特定します。さらに、これらの
類似欠陥は同グループにビニングされます。多数の
欠陥でもわずか数個のグループにビニングされ、各
グループに分類された欠陥は重大度別にランク付け
されます。これらはすべて、検査実行時並行して実
行され、しかも検査時間にほとんど影響がありませ
ん。検査終了後、オペレータはグループ分けされた
欠陥をグループごとに分類していきます。以下は欠
陥の多いマスクのサンプルです。このサンプルでReviewSmartは、重要度の低いオープンエリア上の微小な
結晶成長型欠陥と、高集積度パターン上の欠陥を効
果的にビニングし、それぞれの欠陥を別個のグルー
プに分類しています。
結論
図 8:TeraScan SL2システムのLitho3ディテクターによる重大欠陥のビニン
グ (赤色)
用ビンに分類することができました。この専用ビンに
分類された欠陥は現実に重大であり、次の図のように
表出しました。重大欠陥はわかりやすいように赤で示
してあります。重大欠陥以外の欠陥は黄色と緑色で示
しています。専用ビン基準を厳しく設定して赤いビン
に分類する欠陥を少なくすることもできますが、その
際は設定を慎重に行う必要があります。
結晶成長やヘイズなどの進行性のマスク欠陥は引き
続き業界にとって脅威となっています。解像度要求
が厳しくなり、IC業界は非常に低いK1係数を持つリ
ソグラフィプロセスを導入しようとしています。そ
の結果、マスク上の誤差がウエハ上のパターンに与
える影響が大きくなっています(MEEF値の増大)。波
長193 nmのEPSMに関するシミュレーションデータや
実際の転写テストによって、マスク上の欠陥によっ
てはウェーハに転写されてプロセスウィンドウを消
滅させるものと、まったく影響を及ぼさないものが
あることが判明しました。しかし、
これらの中間に属する欠陥でも、露
光量が少しでも不足するとプロセス
ウィンドウへの影響があることも明
らかになっています。このような欠
陥はプロセスウィンドウを完全に消
滅させないまでも縮小させます。プ
ロセスウィンドウの縮小が問題を引
き起こすという可能性を考慮し、こ
の観点からマスク欠陥の致命度を検
証する必要があります。
進行性欠陥の問題は、マスクの特
定の割合 (1~15%) で認められます。
高解像度によるマスク検査を実施
すれば、このような欠陥問題を検
出することは可能ですが、初期段
階では欠陥が形成途上にあり、光
を通してしまいます。マスク検査
ツールによる欠陥レビューにおい
て、このような初期段階にある大
量の欠陥から重大欠陥を識別することは容易ではあ
りません。マスク上に多数の欠陥(基本的に進行性欠
陥)が発生している場合、マスク欠陥の分類作業は非
常に長い時間を要します。欠陥の多いマスクを適切
に判断するには、マスクが量産に適しているかどう
かの判断の目安となる基準を慎重に設定する必要が
あります。
本研究で開発して試験を実施した新しい“Litho3”リソ
グラフィ欠陥検出ディテクターは、異物欠陥のみを
検査対象としています。量産環境にこのツールを導
入するには、同ツールの微調整のために初期評価が
必要となるでしょう。本研究により、製造工場での
マスクの良否判断に役立つツールの開発に明るい前
途を見出すことができました。ただし、理想的なデ
ィテクターセッティングを開発して用途を体系化す
るために、今後数ヶ月で取り組む予定です。
謝辞
本研究に取り組むにあたり、次の方々から支援、協力を
いただきました。この場を借りて感謝の意を表します。
KLA-TencorのWilliam Volk氏、Qiang Li氏、Steven
Labovitz氏、Ching Yun Hsiang氏、Paul Yu氏、Amir
Azordegan氏、Zhian Guo氏。
Farzin Mirzaagha氏、Tony Dibiase氏、 Kong Son氏、
Jackie Cheng氏、Ellison Chen氏 およびDen Wang氏、
第25回フォトマスク技術に関する年次BACUSシ
ンポジウムにおける、写真・光化学計測技術者
協会事務弁護士Patrick M. Martin氏、J. Tracy Weed氏
編纂危険マスクの判別 Vol. 5992, 59921X, (2005)
CID# 599206
参考文献
1. K. Bhattacharyya, M. Eickhoff, Mark Ma, Sylvia Pas,
A Reticle Quality Management Strategy in Wafer Fabs
Addressing Progressive Mask Defect Growth Problem at low
k1 Lithography, Photomask Japan, 2005
2. K. Bhattacharyya, K. Son, B. Eynon, D. Gudmundsson,
C. Jaehnert, D. Uhlig, A Reticle Quality Management Strategy
in Wafer Fabs Addressing Progressive Mask Defect Growth
Problem at low k1 Lithography, BACUS Symposium on Photomask Technology, 2004
3. P. Yu, V. Hsu, E. Chen, R. Lai, K. Son, W. Ma, P. Chang,
J. Chen, Implementation of an Efficient Defect Classification
Methodology for Advanced Reticle Inspection, Photomask
Japan, 2005
Jerry Huang氏、Lan-Hsin Peng氏 および Chih-Wei
Chu氏、Kaustuve Bhattacharyya氏、 Ben Eynon氏、
KLA-Tencor Rings the NASDAQ Closing Bell
On Nov. 18, 2005, KLA-Tencor executives gathered at the
NASDAQ offices in Times Square to ring the bell that closed
the day’s trading activity. The New York City
event marked a passing of the torch from
retiring CEO Ken Schroeder to newly
appointed CEO Rick Wallace.
Ken Schroeder bids
NASDAQ a
fond farewell.
(L-R) John Kispert, Ken Levy, Rick Wallace, and Ken Schroeder team up to ring
in a new era for KLA- Tencor.
In NASDAQ tradition, signatures are gathered for display on the
Market Site Tower — the epicenter of financial news and events.
This modern day icon is a seven-story high-tech electronic
display that illuminates Times Square 24 hours a day.
Lithography
D F M
Bridging the Gap Between
Design and Mask
Physics-based Model for OPC Verification
Yung Feng Cheng, Yueh lin Chou, Chuen Huei Yang and CL Lin, UMC Corporation
Bo Su, Gaurav Verma, William Volk, Mohsen Ahmadian, Hong Du, Abhishek Vikram, Scott Andrews, KLA-Tencor Corporation
Disparate inspection strategies have given way to various approaches for verifying post-OPC designs for manufacturing.
However, a new paradigm has emerged in design verification that moves OPC verification from the design plane to the
wafer plane, where it really matters. KLA-Tencor’s DesignScan system inspects the OPC decorated design by simulating
how the design will be transferred to the reticle layer and how that reticle will be imaged into resist across the full
focus-exposure calibration window. Building on this paradigm is a new methodology for process window monitoring of OPC databases using DesignScan, with results reported for a full chip. This methodology will be explored in this article, along with new applications in areas such as reticle target CD specification.
Introduction
As technology progresses towards 65 nm
and beyond, optical lithography faces increased difficulties, and resolution enhancement techniques (RET) become imperative for
multiple process layers. With RET implementation, especially the optical proximity
correction (OPC) technique, the mask layout
deviates further from the design-intended
layout. Thus, the need for linking design
space with process space to ensure design-intended device integrity is even greater.
For the OPC decorated design database, it
is important to verify the OPC before the
mask-making step. This verification step
can help ensure that there are no design-related defects, and it can provide a reasonable
process window for a given process. Design
rule or optical rule checkers have typically
been used successfully to find design-related
defects at best focus and exposure conditions. However, for full chip process window
verification, there was no such system available until now. The main requirements for
process window monitoring are good resist
modeling and inspection speed.
We previously reported an integrated approach using DesignScan. In the article, we
detail DesignScan’s defect detection capabil-
ity, model accuracy by comparison to wafer scanning
electron microscopy (SEM) images, and simulation
speed1. We also propose new use cases for DesignScan
for design-based process monitoring and control, as well
as process window impact-based mask specifications.
Uncovering pattern-dependent
systematic defects
DesignScan 290 is a new inspection system from KLA-Tencor that detects pattern-dependent (feature-based)
systematic defects in the lithography process window of
the post-RET design. The inspection is accomplished by
simulating the transfer of the design to the reticle plane
and subsequently projecting the reticle image onto the
photoresist. The simulations are conducted at nominal
condition and at user-defined off focus-exposure conditions
through the process window and beyond. The normal
DesignScan inspection is a two-step process. First comes
the best focus and exposure (F0E0) inspection (the simulated
image to database inspection), in which the simulated
resist images at the best focus and exposure condition are
compared to the pre-OPC design database (design intent)
to detect possible defects due to OPC decoration. Second
is the process window inspection (the simulated image
to simulated image inspection), which uses the best
focus and exposure condition resist images as the reference. In this phase, each simulation within the process
window is compared to the one at nominal conditions to
detect any unacceptable variation in pattern fidelity.
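In outline form, the two inspection steps reduce to two comparisons: the nominal-condition simulation against the pre-OPC design intent, and every off-nominal simulation against that nominal reference. The sketch below is only a schematic of this flow; simulate_resist and contours_differ are hypothetical placeholder functions, not DesignScan interfaces.

    # Schematic of the two-step inspection flow (placeholder functions, not DesignScan APIs).
    def inspect_patch(patch, design_intent, fe_conditions, simulate_resist, contours_differ):
        defects = []
        # Step 1: F0E0 inspection -- simulated resist image vs. pre-OPC design intent
        nominal = simulate_resist(patch, focus=0.0, dose=1.0)
        if contours_differ(nominal, design_intent):
            defects.append(("F0E0", 0.0, 1.0))
        # Step 2: process window inspection -- each FE point vs. the nominal reference
        for focus, dose in fe_conditions:
            image = simulate_resist(patch, focus=focus, dose=dose)
            if contours_differ(image, nominal):
                defects.append(("PW", focus, dose))
        return defects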
The DesignScan system consists of hardware and software
components, many of which are based on the KLA-Tencor
TeraScan die-to-database reticle inspection system and
the KLA-Tencor PROLITH simulation software. The
system is configurable for a selected throughput target.
Prior to the DesignScan inspection, the post-RET/OPC
designs are converted to a proprietary KLA-Tencor format
optimized for inspection throughput. During inspection,
the converted design is segmented into patches and fed to
the proprietary image computer, which is used for both
the simulation of the design patches and the detection
of defects. The image computer is a multiple-processor-based supercomputer capable of simulating multiple
jobs simultaneously. Design patches that fail within the
process window are noted as defective not only within
the process window, but anywhere in the FE space which
is simulated. Defect reports are sent to the operator console, which is also used for inspection setup and review.
The simulation models three distinct steps in the lithography process: the transfer of the pattern to the reticle,
the formation of the aerial image, and the formation
of the resist image. The models for the reticle manufacturing process and aerial image formation, which uses
vector imaging, are based on mature TeraScan technology used for die-to-database reticle inspection. The
resist image formation and development are based on a
proprietary fast resist model and PROLITH technology,
also well developed and tested.
Each of the three models is calibrated or specified separately and could be decoupled, due to the physics-based
nature of those models. Physics-based models are used
to simulate reticle images from the design. The aerial
image model is specified by providing the following
basic illumination parameters: illumination wavelength,
parametric or measured source shape, and objective lens
NA. The resist model is calibrated by matching physical parameters to tuning data. The tuning data consists
of focus-exposure matrix (FEM) critical dimension (CD)
data from one-dimensional lines and spaces at various
pitches and sizes. Due to physics-based resist modeling,
any FE condition can be selected for simulation within
the calibration space (tested up to 2x process window).
By contrast, empirical models have to be calibrated at
each FE condition at which the simulation has to be
done. Also, physics-based models require recalibration only if the physics of the process changes; the polygons can be changed arbitrarily, and no recalibration is required. For empirical models used by other OPC verification tools, a recalibration is required if the physics or
the polygons are changed. This leads to a significantly
reduced calibration burden for physics-based models
as compared to the burden required for maintaining
empirical models.
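Because the model is physics-based and calibrated once, the calibration step can be framed as an ordinary least-squares fit of a few physical parameters to the measured FEM CDs across focus, dose, and pitch. The quadratic toy model below is purely an illustration of that fitting idea; it is not the PROLITH resist model, and the parameter names and data values are invented.

    # Toy least-squares calibration of a resist model to FEM CD data (illustrative only).
    from scipy.optimize import least_squares

    def toy_cd_model(params, focus, dose, target_cd):
        cd0, dose_sens, focus_curv = params
        # Bossung-like response: CD shrinks with dose and curves with defocus
        return cd0 * target_cd / 100.0 - dose_sens * (dose - 1.0) + focus_curv * focus ** 2

    def residuals(params, data):
        return [cd - toy_cd_model(params, f, d, t) for f, d, t, cd in data]

    # rows: (defocus in um, relative dose, target CD in nm, measured CD in nm)
    fem_data = [(-0.1, 1.0, 100, 104.0), (0.0, 1.0, 100, 100.5), (0.1, 1.0, 100, 103.8),
                (0.0, 1.1, 100, 93.0), (0.0, 0.9, 100, 108.0)]
    fit = least_squares(residuals, x0=[100.0, 50.0, 300.0], args=(fem_data,))
    print("fitted resist parameters:", fit.x)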
Creating the right recipe
At runtime, the user creates an inspection recipe
through a menu-driven graphical user interface (GUI)
shown in Figure 1. It is at this point that the design
database to be inspected and the previously calibrated
or specified models are recalled. The user also selects the
range of focus and exposure conditions to be used in the
inspection during this step. Each focus and exposure
condition to be simulated can be defined individually
and need not be a regular array of points. Any segment
of the design that has been converted to the DesignScan
format can be inspected. The system supports the definition of multiple inspection areas, each of which can have
independent defect sensitivity settings. The sensitivity
setting can be changed for each detector and each area
independently. The runtime setup process typically
takes less than 10 minutes.
Once the recipe is defined, the inspection process begins
and each patch of the design is processed through the
simulation and defect detection models. It is at this
stage that the various simulations for each design patch
are compared to the simulation at nominal conditions.
The defect detection algorithm checks for the following types of defects: bridges, breaks, extra and missing
printed features, minimum resist width and minimum
space width (CD variation defects), line-end shortening
Figure 1: The DesignScan setup GUI. The recipe is created in a series of steps
that begin by selecting the design to be inspected.
(LES), and interlayer overlap (ILO) defects. The defect detection is based on any topographical change between a test resist profile and the reference resist profile.

A proprietary defect binning model is used to group all lithographically identical defects. This greatly improves review efficiency. During inspection, DesignScan applies lithography hierarchy to perform binning; i.e., patterns are compared within the lithographically significant distance for matching and binning. The efficiency gained by the defect binning system will depend on how the pattern repeats in the design, among other factors. The binning is particularly useful when a certain defective pattern repeats, as in memory devices. In the example shown below, using the SRAM design database, a reduction in the number of unique defects by roughly one and a half orders of magnitude was observed.

Review can occur after or concurrent with the inspection. The defect review application shown in Figure 2 can be used to display a wide range of defect-related information, such as database images, mask images, aerial images, and resist images. The display is user-configurable, and a configuration can be saved as a template and recalled later. The defects can be sorted by process window impact, as well as defect type or defect ID. For process window monitoring, it is very useful to sort defects based on process window impact. Such a capability is only possible with DesignScan because one can select any FE grid for simulation within the calibration window. By knowing where a defect appears in FE space and its location relative to the nominal condition, DesignScan can easily obtain the process window impact and sort defects accordingly. Such sorting provides a powerful guide for prioritizing the structures for corrections. It also enables correction of those defects which have the biggest process window impact first, since they are the process window limiters. Without such a guide, one may correct defects which are not the process window limiters; as a result, no process window improvement will be seen.

Figure 2: The DesignScan review GUI. This interface is configurable and multiple configurations can be saved as templates.

Experiments: SRAM test database and Metal 1 layer

An SRAM test database, called SMP, was chosen for the test. This SMP database is designed using the 65 nm design rule, with a minimum half pitch of 90 nm. The database is decorated using model-based OPC, and the Metal 1 layer serves as the test layer. Two FEM wafers were printed using UMC's Metal 1 process with resist calibration patterns and the SMP test database patterns. The focus and exposure range extends beyond the process window. Four line/space patterns are used for model calibration. While the model accuracy was 4.1 nm when this work was originally completed, we have made significant improvements on model accuracy and recently demonstrated 2.5 nm for a poly-silicon layer. We expect to achieve better than 2 nm model accuracy. A Bossung plot of the measured CDs from the FEM wafers and the DesignScan model-predicted CDs for a dense line is shown in Figure 3.

Figure 3: The Bossung plot of the measured CD data and DesignScan simulated CD data from the model calibration for a dense line.

The SMP database is 3.9 x 8.6 mm2 in size on a wafer scale. A typical DesignScan inspection with 9 FE conditions takes about 62 minutes for this database. The
system specifies that, for an 8 mm x 8 mm database, simulation with 9 FE conditions finishes
within 2.5 hours. Nine FE conditions are a minimum requirement to adequately monitor the
design’s performance across the process window
by varying focus, dose, and both (diagonally)
around the nominal condition.
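Such a grid can be generated by stepping focus and dose one increment to either side of the nominal condition, so that focus-only, dose-only, and combined (diagonal) excursions are all covered. The sketch below shows the idea; the nominal dose and the step sizes are placeholders, not settings used in this work.

    # Build a 3x3 focus/exposure grid around nominal (values are placeholders).
    NOMINAL_FOCUS_NM = 0.0
    NOMINAL_DOSE = 28.0        # mJ/cm2, example only
    FOCUS_STEP_NM = 90.0
    DOSE_STEP = 1.0            # mJ/cm2

    fe_conditions = [(NOMINAL_FOCUS_NM + i * FOCUS_STEP_NM, NOMINAL_DOSE + j * DOSE_STEP)
                     for i in (-1, 0, 1) for j in (-1, 0, 1)]

    for focus, dose in fe_conditions:
        print(f"simulate at defocus {focus:+.0f} nm, dose {dose:.1f} mJ/cm2")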
Figure 4: The process flow chart in design process window monitoring for DesignScan. (The flow runs from design through RET application to a DesignScan 9 FE inline inspection; any defect found is fixed; the clean design then goes through a DesignScan PW inspection, and hot spots found are either locally fixed to extend the process window or passed on as hot spot information to wafer process monitoring before mask tape out.)

In OPC database process window inspection, the following process flow is proposed (see Figure 4): after OPC decoration, perform a DesignScan inline inspection at 9 FE conditions (3x3). If there are defects found within the process window, then either the RET has to be modified to fix the defects or the process has to be re-centered for the design, until no defects are present in the process window. Subsequent to the design being clean in the process window, DesignScan process monitoring inspection—pushing FE conditions for simulation beyond the edge of the process window and to find hot spots (weak spots) in the chip—should be run. Depending on the FE conditions (distance from the process window) and the number and nature of hot spots found, one of the following actions can be selected: send to mask tape out since it passed inline inspection, or perform local fixes to a few hot spots to extend the process window further (optimization); at the end, pass the hot spot information downstream to monitor those hot spots in mask making and wafer processing (helping wafer process control), since these patterns are the most sensitive and first to fail in the design due to process window drift or change.

Results and analysis

1. Process window monitoring

The SMP test database was already well debugged using wafers before it was inspected on DesignScan. However, the process could be improved and completed faster with a simulation-based system like DesignScan. DesignScan inline inspection found no defects within the process window, as shown in Figure 5 (a). The FE conditions to be simulated are selected at the edge of the process window and to match the wafer FE conditions (for verification purpose only). DesignScan color codes the defect map based on defect types (green means no defects found), and the size of the circle is proportional to the number of defects found. Extending defocus 100 nm and dose 2 mJ/cm2 further out uncovered no defects [see Figure 5 (b)].

Figure 5: The standard DesignScan inline inspection on SMP. No defects were found (represented by green dots) within the process window (a). Pushing FE points out further in focus and dose still uncovered no defects (b). In order to find weak spots, the FE points need to be extended by nearly 2x the process window (c).

In order to find weak spots, the FE conditions are extended even further outside the process window. Gap defects start to appear on the high dose side, and bridge defects appear on the low dose side [see Figure 5 (c)]. 1332 thin line and gap defects were found at a dose 14.3% higher than the nominal dose and at 180 nm defocus. The proprietary advanced defect binning (ADB) capability reduced the total defects to 21 unique groups, due to the repeating pattern nature of the memory device. Similarly, 2374 thin trench and bridge defects were found at a dose 21.4% lower than the nominal dose and at 180 nm defocus. DesignScan ADB reduces the total defects to 27 unique groups. Since gap defects appear closer (14.3% vs. 21.4% relative to the nominal
dose) to the optimal condition, they are the weakest structures and also the process window limiter. Figure 6 shows an example comparing a group of thin line defects predicted by DesignScan and the corresponding wafer SEM images. In Figure 6 (a) and (b), the resist images from DesignScan at nominal condition (a) and defective condition (b) are shown, where the white areas indicate areas without photoresist (spaces) and the dark areas represent the areas covered with photoresist (lines). At the high dose defective condition, DesignScan found multiple defects (indicated by red dots) due to the thinning of certain resist lines, as shown in Figure 6 (b). To compare DesignScan simulation results, the corresponding wafer SEM images are shown in Figure 6 (c) and (d) for the nominal condition and the defective condition, respectively. By pushing the dose away from the nominal dose a little further, the thin line defects become gaps as shown in Figure 6 (e). The gap defects appear at the exact locations and FE conditions as predicted by DesignScan. The SEM results confirm the weakness in those defective locations, and the simulated resist images match the wafer SEM images as well. DesignScan predicts the gap defect onset at 180 nm defocus, while the SEM image shows that the gap defects occur between 180 nm and 230 nm defocus (granularity limited due to the focus stepping distance for the FEM wafers).

Figure 6: DesignScan simulated resist images at nominal condition (a) and at defective condition (b). Corresponding SEM images from a FEM wafer at nominal condition (c) and at defective condition (d). Increasing the dose further (e), the predicted thin line defects become gaps, confirming the weakness of those locations.

Figure 7 shows another example of a thin line/gap defect, indicated by a red dot in the DesignScan simulated resist image. The corresponding wafer SEM image confirms the defect. DesignScan predicts the gap defect onset at 180 nm defocus, while the SEM image shows the gap defects occurring between 180 nm and 230 nm defocus.

Figure 7: Another example of a thin line/gap defect (left) and its corresponding wafer image (right). The thin line breaks at the exact place that DesignScan predicted would be a weak point.

Those thin line defects become gap defects when the FE point is pushed further away from the nominal focus and exposure (F0E0). Given the fact that CD curves in a Bossung plot for a given feature do not cross, the weakest features will have the largest CD changes and become breaks/bridges first when moving FE away from F0E0. The onset of breaks/bridges is relatively easy to determine from SEM images. Thus, the predicted FE condition in which a break/bridge defect occurs can be directly compared to the FE condition where such a defect occurs on a FEM wafer through SEM review (subject to the incremental changes of focus and dose of the FEM). Due to systematic and random errors in mask making, in optical imaging systems, and in resist image formation, there is an FE offset between the simulated resist images from the design database and the actual resist images at defective locations. We have established the DesignScan prediction offset specification on FE conditions where a defect will occur based on internal tests: within the process window, the dose offset should be within 2.5% and the focus offset should be within 30 nm; outside the process window, the dose offset should be within 5% and the focus offset should be within 50 nm, due to increased line/space changes as a function of dose and/or focus change. The confirmed defects found by DesignScan on the SMP database are all within the offset specifications.
Since DesignScan can simulate the whole or part of the SMP database anywhere within the resist model calibration conditions (FE range), it is quite useful in certain cases to separate defects by their weakness (where in FE space a defect occurs relative to the nominal condition). Figure 8 shows an example of a group of bridge defects near the onset at a much finer FE grid. Again, the fast simulation speed allows us to do such a fine FE grid (18 FE points) in 67 minutes for the SMP test database. The fine grid FE simulation pinpoints where a defect starts to appear; thus, in the case of Figure 8, the relative weakness of several defects as a function of focus and dose changes is indicated. There are four defects within the field of view (FOV), labeled from the top to the bottom as d1 to d4 (see the lower left corner resist image). The upper right corner image is the closest to the nominal conditions, and the lower left image is the farthest from the nominal conditions. It is clear from Figure 8 (a) that d2 is the weakest and d3 is the strongest among the four. By knowing the relative weakness of those defects, a designer can then prioritize the structures for corrections. Fixing the weakest structures will help extend the process window until the next weakest becomes the weakest. Fixing the second weakest structures first will not extend the process window, since the process window is dictated by the weakest structures.

Figure 8: In order to better match the simulation result and the wafer SEM image, DesignScan simulated near the onset of a group of defects in a finer 3x4 FE grid: every 0.5 mJ/cm2 in dose in the vertical direction and every 10 nm in defocus in the horizontal direction, as shown in (a). A corresponding SEM image matches the bottom simulated image in the 180 nm defocus column, which is enlarged in (c).

Figure 9: Another bridge defect example. DesignScan correctly predicted the weak points. Fine FE grid simulation tells the relative weakness of two defective locations.
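Because every defect has an onset point in focus-exposure space, its weakness can be summarized as how close that onset lies to the nominal condition: the closer the onset, the weaker the structure and the higher its priority for correction. The normalized distance below is a simple illustrative metric with made-up onset values, not the DesignScan ranking function.

    # Rank defects by how close their onset FE condition sits to nominal (illustrative).
    def weakness(onset_defocus_nm, onset_dose_pct, focus_scale=180.0, dose_scale=21.4):
        """Smaller value = onset closer to nominal = weaker structure."""
        return ((onset_defocus_nm / focus_scale) ** 2 +
                (onset_dose_pct / dose_scale) ** 2) ** 0.5

    onsets = {"d1": (180, 14.3), "d2": (120, 10.0), "d3": (180, 21.4), "d4": (150, 14.3)}
    for name, (df, dd) in sorted(onsets.items(), key=lambda kv: weakness(*kv[1])):
        print(name, round(weakness(df, dd), 2))   # fix the weakest (smallest score) first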
The fine FE grid inspection can also help system developers to gauge the accuracy of the model prediction.
By matching the catastrophic defective condition of
predicted defect onset to that of a FEM wafer, the defect
prediction offset can be established. For this particular
group of defects, the dose offset is 3.6% of nominal dose
and the focus offset is 10 nm [see Figure 8 (b) and (c)].
Figure 9 shows another example for bridge defects. The
fine FE grid simulation clearly distinguishes the relative
weakness of two defects within the FOV, and helps to
match the simulated resist image to the wafer SEM image [see Figure 9 (b) and (c)].
Table 1 summarizes the confirmation results for all gap
defects and bridge defects. 20 out of 21 groups of gap
defects and 27 out of 27 groups of bridge defects are
confirmed by the wafer SEM images.
Table 1: The defect confirmation summary (columns: defect type, ADB group count, number confirmed, FE range, and offset, for the gap and bridge defect groups).
One group of gap defects (31 total instances) predicted by DesignScan cannot be conclusively confirmed by SEM images, as shown in Figure 10. Thus, 3675 out of 3706 (99.16%) defects have been confirmed by wafer SEM images. In the simulation model calibration process, we intentionally tune the data so that we over-predict defects. By doing so, we increase the alpha risk (false alarms, detecting more nuisance defects), but minimize the beta risk (failing to detect a real defect), since missing a real defect is more costly than a false alarm.

Figure 10: DesignScan predicted a defect (left) and its corresponding SEM image (right). The SEM image cannot conclusively confirm the defect in this case.

All of the defect information can be used in a subsequent mask-making process (pattern-dependent CD control) and in wafer processing monitoring (pattern-dependent yield loss). Since we know that the gap defects are the weakest and are the process window limiters, it is critical to achieve tight CD control in the mask-making process in the regions that contain those gap defects. The CD control in other regions is less critical; thus, the CD control specifications can be loosened. In wafer processing (lithography and etch), these gap defectivity locations (structures) will be the first to fail if the process condition deviates away from the nominal condition, which can cause yield loss. Such information will greatly increase the efficiency of wafer processing monitoring.

2. New DesignScan detectors: LES and ILO

We tested the newly developed line-end shortening (LES) detector on the SMP database. The detector can be enabled and disabled in recipe setup. DesignScan found no LES defects within the process window for SMP. 490 LES defects were detected outside the process window at 21.4% lower than nominal dose and 180 nm defocus, using a detection threshold of 60 nm—an adjustable parameter. When a line-end shortening exceeds 60 nm relative to the reference line end (the simulated resist image at nominal condition, or, for the nominal condition, the drawn/decorated database), the line end will be flagged as defective. More LES defects can be detected by lowering the LES detection threshold. ADB reduces the defects into 21 groups. SEM review images have confirmed all 21 groups of LES defects, and Figure 11 shows an example of an LES defect in a zoom-in view. DesignScan predicted images match the wafer SEM images well. In all detected LES defect cases, missing or inadequate OPC at the line ends was identified. Based on the DesignScan LES findings, the OPC at those line ends can be significantly improved to avoid severe LES.

Figure 11: An example of an LES defect detected by DesignScan. The DesignScan simulated resist images at defective (left) and reference (right) conditions are shown on top. The green lines represent the design feature edges and the purple lines indicate resist edges. The corresponding wafer SEM images are shown below at the defective (left) and the reference (right) conditions.
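Per line end, the LES check reduces to a threshold test: measure how far the simulated line end pulls back from the reference line end and flag it when the pullback exceeds the chosen threshold (60 nm in this work). The sketch below is a simplified one-dimensional version of that test; the identifiers and values are invented.

    # Simplified line-end shortening (LES) test on 1-D line-end positions (illustrative).
    def les_defects(test_ends, ref_ends, threshold_nm=60.0):
        flagged = []
        for end_id, ref_pos in ref_ends.items():
            pullback = ref_pos - test_ends.get(end_id, ref_pos)
            if pullback > threshold_nm:
                flagged.append((end_id, pullback))
        return flagged

    ref = {"M1_end_017": 0.0, "M1_end_042": 0.0}
    test = {"M1_end_017": -75.0, "M1_end_042": -20.0}   # pullback in nm, example values
    print(les_defects(test, ref))   # only M1_end_017 exceeds the 60 nm threshold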
Another newly developed 2D DesignScan defect detection capability is the interlayer overlap (ILO) detector. The
interlayer overlap detector can be activated and the detection parameter adjusted independently for the layer
below and the layer above.
For the SMP database, we will focus on contact to Metal
1 layer overlap testing. Metal 1 is the layer of interest
and the overlap error between the contact and Metal 1
is caused by line-end shortening of metal lines. DesignScan found 60 ILO defects with the default parameter
setting at 17.8% lower than nominal dose and 180
nm defocus, and 343 ILO defects at 21.4% lower than
nominal dose and 180 nm defocus, along with other defects. Figure 12 shows an example of an ILO defect (red
circle at right) detected by DesignScan with SMP. The
simulated resist image at nominal condition is at left
and the resist image at defective condition is at right.
The green lines represent the metal database and the
blue squares are contacts.
Figure 12: An example of an interlayer overlap defect (indicated by the red circle) of SMP between the contact layer (blue squares) and the Metal 1 layer (green lines). The resist edges are enhanced with purple lines for the nominal condition (left) and defective condition (right).

3. Mask CD target specification

One of the new applications for which DesignScan can be implemented is determination of the reticle target CD specification through process window simulation. This can be achieved by simply biasing the polygons of the post-OPC database in both directions (+/-) during the data conversion phase and re-inspecting the database. By evaluating the process window impact of the biased database, the CD tolerance specifications for different regions of the database can be determined.

The SMP test database in our study was biased +/- 5 nm, +/- 10 nm, and +/- 15 nm. Inline 9 FE condition DesignScan inspection still yields zero catastrophic defects. However, the gap defects (Figure 6 shows some of the examples) appear early on the high dose side for positive biases (white space increases), as shown in Figure 13. No bridge defects appear in the FE range inspected on the low dose side for negative biases (white space decreases) up to 15 nm. For concept demonstration purposes, we stopped at 15 nm bias, since higher bias may require re-adjustment of the nominal condition and resist model recalibration.

Figure 13: Defect map of SMP with 5 nm, 10 nm, and 15 nm biases (increased white area). Thin line (blue) and gap (orange) defects appear closer to the nominal condition as the bias increases, which means the process window gets smaller.
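The bias experiment amounts to re-inspecting copies of the post-OPC database whose polygons have been uniformly grown or shrunk, and recording how much margin to the nearest defect onset remains for each bias. A schematic of deriving an asymmetric CD tolerance from such results is shown below; inspect_with_bias is a hypothetical placeholder and the example numbers only mimic the asymmetry reported here.

    # Schematic derivation of a reticle CD tolerance from biased-database inspections.
    def cd_tolerance(inspect_with_bias, biases_nm=(-15, -10, -5, 0, 5, 10, 15),
                     min_dose_margin_pct=10.0):
        """Largest +/- bias whose nearest defect onset keeps at least the required
        dose margin from the nominal condition."""
        ok = [b for b in biases_nm
              if inspect_with_bias(b)["nearest_onset_dose_pct"] >= min_dose_margin_pct]
        return (min(ok), max(ok)) if ok else (0, 0)

    # Example margins (percent dose to first defect onset) per bias, invented values:
    margins = {-15: 21.4, -10: 21.4, -5: 21.4, 0: 14.3, 5: 12.0, 10: 9.0, 15: 6.0}
    print(cd_tolerance(lambda b: {"nearest_onset_dose_pct": margins[b]}))   # (-15, 5)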
Not only are those gap defects the weakest in the
SMP database and the process window limiters, they
also dictate the reticle CD specifications. Maintaining
good CD control in the gap defective locations is much
more critical than in other regions; this is the foundation for
pattern-dependent reticle CD specifications. We can
also conclude that for the SMP database, the reticle CD
control is asymmetric: for the tested metal layer case,
maintaining tighter CD control is required on the
positive bias side rather than on the negative bias side.
To avoid such CD control asymmetry and also extend
the process window, re-centering the process is recommended. For the SMP test database, lowering the
nominal dose from 28 mJ/cm2 to 27 mJ/cm2 will help
to reduce defect-related yield loss, assuming no other
negative effect to the device functionality. Typically,
the process window center, which is the center of the
process window used for all designs, is determined using
test structures. This is inadequate because the test structures do not always adequately represent the full database and also because the behavior of all designs is not
identical through a process window. Using DesignScan,
the process center can be determined and optimized for
each individual design, providing the largest possible
process window and, hence, yield.
In this illustration we have discussed the mask CD specification use case through open and bridge defects. Similar
analysis can be performed for CD variations through the
use of DesignScan CD simulation capability.
Summary
We have demonstrated a methodology for monitoring
the performance of a post-OPC design database across
and beyond the process window using DesignScan. The
ability of simulating resist images at any arbitrary FE
location (within model calibration range) using one calibrated model and fast inspection speed allows DesignScan to perform full process window monitoring and
optimization. The methodology in this paper has been
demonstrated using a 90 nm half pitch SRAM Metal 1
database. In this database, no defects were found within
the process window. Gap defects were found and confirmed on the wafer 14.3% away from the nominal dose
and at 180 nm defocus. Bridge defects were confirmed
on the wafer 21.4% away from the nominal dose and at
180 nm defocus. These patterns are the weakest across
the FE conditions. If there is a change in the process,
these locations will fail first; hence, these locations
should be used for inline monitoring of the lithography
process for systematic changes. This database has a fairly
robust process window; however, the process window
can also be further improved by fixing these potentially
yield-limiting locations that have been identified by
DesignScan.
In this article, we have demonstrated additional use
cases for DesignScan. The solution can be used for
design-based mask CD specification determination and
design-based process centering. Users can fit the process
to the design or vice versa. We have also demonstrated
the methodology for using DesignScan for design-based
process control and monitoring.
Acknowledgements
The authors would like to thank the UMC lithography
and OPC team and KLA-Tencor DesignScan engineering team for their support.
Yung Feng Cheng, Yueh lin Chou, Chuen Huei Yang
and CL Lin, Bo Su, Gaurav Verma, William Volk,
Mohsen Ahmadian, Hong Du, Abhishek Vikram, Scott
Andrews, Reducing Design Respins, in 25th Annual
BACUS Symposium on Photomask Technology, edited
by J. Tracy Weed, Patrick M. Martin, Proc. of SPIE Vol.
5992, 59921X, (2005) CID# 599244
Reference
1. W. Howard, et al, “Inspection of integrated circuit databases through reticle and wafer simulation: An integrated
approach to Design for Manufacturing (DFM)”, Proc. SPIE
vol. 5756, pp 61-72 (2005).
Lithography
Opening the Window to
Higher Parametric Yield at 32 nm
DFM and APC Strategies Tackle Process Window Limitations
Kevin M. Monahan, KLA-Tencor Corporation
© 2005 IEEE. Kevin M. Monahan, Enabling DFM and APC Strategies at the 32nm Technology Node. Reprinted, with permission,
from International Symposium on Semiconductor Manufacturing (ISSM) 2005 Conference.
Most semiconductor manufacturers expect 193-nm immersion lithography to remain the dominant patterning technology
through the 32-nm technology node. Even now, the interaction of more complex designs with shrinking process windows
is severely limiting parametric yield. The industry is responding with strategies based upon design for manufacturability
(DFM) and multi-variate advanced process control (APC). The primary goal of DFM is to enlarge the process yield window, while the primary goal of APC is to keep the manufacturing process in that yield window. This article discusses new
and innovative process metrics, including virtual metrology, that will be needed for yield at the 32-nm technology node.
Introduction
Enabling DFM and APC strategies with
metrology depends on innovation1. As a
minimum, DFM will require feeding forward
design intent, simulator output, layout clips,
and design rule-check (DRC) hot spots to
expedite setup of measurement tools. Current DRC and aerial image modeling at best
focus and exposure conditions is increasingly
unreliable. In the future, process window-aware approaches will require powerful
full-chip simulators that can accurately
predict and measure developed patterns in
resist, along with accurate measurement feedback to calibrate the printability simulator.
To control development costs, the conversion of data to information, knowledge, and
decisions must be taken as far upstream as
possible.
On the process control side, implementing an APC strategy requires feeding forward
both process context and measurement data.
In the future, we know that process context
and measurement data must increase dramatically to support multi-variate control at
the lot, wafer, field, die, and intra-die levels.
In addition, yield and performance losses
are often caused by process integration issues or combinations of profile, shape, roughness2, thickness, and
pattern placement errors. For these applications, new
measurement types are required, creating a concomitant need to decrease the cost and increase the yield-relevance of each measurement. In addition, combined
dispositioning and parametric yield analysis will
require data from multiple metrology tools.
New metrology requirements
The case for linking design, layout, mask, and wafer
processes with metrology is compelling. Greater complexity is offset by the advantage of greater access to adjustment; an array of strategies generates higher return
than just one. The increasing metrology needs of DFM
and APC can be met by innovations in the measurement
of pattern shape, profile, overlay, thickness, composition,
and electrical properties. As an example, some of the key
dependencies for device performance are given by the
transistor drive current equation below:
\[ I_d \;=\; \frac{1}{2}\,\frac{W}{L \cdot T}\,(\varepsilon \cdot \mu)\,(V - V_t)^2 \]
Drive current at saturation depends on physical dimensions such as gate width W, gate length L, and gate
oxide thickness T. It can limit the speed and, therefore,
the average selling price of a device. Drive current also
varies with electrical properties such as channel electron
mobility µ, gate oxide dielectric constant ε, and threshold
voltage Vt. These, in turn, are affected by such factors
as strain, composition, and transistor architecture. The
following sections will explore innovations in physical
dimension and pattern placement metrology.
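To see how directly dimensional control feeds parametric yield, the drive-current expression above can be evaluated for small excursions in gate length. The nominal numbers below are arbitrary examples chosen only to show the relative sensitivity, not 32 nm device data.

    # Relative drive-current sensitivity to a gate-length (CD) error (illustrative values).
    def drive_current(W, L, T, eps_mu=1.0, overdrive=1.0):
        return 0.5 * (W / (L * T)) * eps_mu * overdrive ** 2

    nominal = drive_current(W=100.0, L=32.0, T=1.2)
    for dL in (-0.03, 0.03):                       # +/- 3% gate length error
        shifted = drive_current(100.0, 32.0 * (1 + dL), 1.2)
        print(f"L {dL:+.0%}: Id changes {shifted / nominal - 1:+.1%}")
    # Because Id scales as 1/(L*T), a 3% CD error appears as roughly a 3% drive-current shift.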
Figure 1: (a) Design-intent layout prior to simulation. (b) Noise-free SEM image
simulation. (c) SEM image for comparison with layout.
From CD to Shape Metrology
Scanning electron microscopy (SEM)-based CD metrology will evolve into more yield-relevant “shape metrology”. Measurements such as line-end shortening, minimum gap, line roughness, and feature rounding will be performed as commonly as standard CDs are today. To decrease cost, multiple measurements will be made within the same image field to assess local pattern variation and provide robust averages for APC. OPC printability verification will be an increasingly critical DFM application for the SEM. Design-based metrology (DBM) will enable both APC and DFM strategies. Briefly, DBM is the use of design-intent and hot-spot information to define performance-relevant measurement locations within a semiconductor device. A particularly powerful implementation3 is shown in Figures 1a-1c. It solves one of the most vexing DBM problems by simulating a noise-free SEM image from design input. The simulated image is then used for robust pattern recognition and precise location of measurement sites.

From CD to Profile Metrology

Scatterometry-based CD (SCD) metrology will evolve into more yield-relevant “profile metrology” and may become the reference tool for calibrating the CD SEM. SCD is able to accurately reproduce cross-section profiles imaged in a transmission electron microscope (TEM) (Figure 2a). The ability of SCD based on spectroscopic ellipsometry (SE) to accurately measure footing and notching at the base of gate structures has led to twofold improvements in correlation to electrical L-poly and drive current (Figure 2b). For this reason, SCD tools are currently displacing other metrology tools in feed-forward APC applications from lithography to etch. In control applications for shallow-trench isolation (STI), significant cost savings have been realized by metrology convergence. SCD tools are currently displacing CD SEM, AFM profile, and SE film thickness tools for the control and monitoring of isolation (Figure 2c). The benefits are lower cost, shorter cycle time, and greatly reduced temporal, spatial, and technology de-correlation for the more performance-relevant compound measurements such as aspect ratio.

Figure 2: (a) SCD profiles on TEM cross-sections. (b) SCD and SEM correlation to L-poly. (c) Simultaneous SCD measurements on STI.

From Scribe Line to In-chip

Figure 3: (a) Traditional box overlay target. (b) Robust grating overlay target. (c) Small in-chip overlay target.

Traditional box-in-box (BiB) overlay metrology will evolve into more yield-relevant, grating-based overlay metrology (AIM). This will take measurement of pattern
placement error to new levels of accuracy and enable
combined CD and overlay dispositioning. At the 32-nm
node, BiB overlay metrology (Figure 3a) will suffer from
extreme process sensitivity, particularly with respect
to reticle fabrication error, asymmetric deposition and
etching, and chemical mechanical planarization (CMP).
Grating-based overlay technology (Figure 3b) can
decrease process-induced measurement error by a factor
of two. Remaining pattern placement error, including
unmodeled intra-chip error, will be addressed with tiny
in-chip grating targets4 (Figure 3c). These enable significant reduction of model residuals, the largest remaining
source of overlay measurement error. In some cases, such
small overlay targets may be combined with lineend-shortening (LES) targets that are used to monitor
focus and exposure excursions in lithography cells. The
benefits are lower cost per yield-relevant measurement
and higher temporal, spatial, and
technology correlation for root-cause analysis.
From Actual to Virtual Metrology

Traditional process modeling and simulation will evolve into yield-predictive “virtual metrology”. Even now, the measurement technologies discussed (SEM, SCD, and AIM) rely to some extent on simulation. Simulated SEM images assist with design-based pattern shape metrology. Rigorous coupled wave (RCW) algorithms generate libraries of ellipsometric spectra for comparison with actual SCD data. Overlay simulators5 predict the optical signatures of innovative overlay targets in order to maximize sensitivity and minimize response to process noise. Finally, robust printability of SEM, SCD, and AIM measurement targets is critical; therefore, calibrated lithography models, such as PROLITH, will be employed to assist in the initial target optimization. These models must use realistic mask data and comprehend the most aggressive resolution enhancement technologies (Figure 4a). Second, they must provide accurate results for immersion lithography at 193 nm (Figure 4b). Third, they must enable rigorous virtual metrology to supplement actual physical measurement (Figure 4c). The benefits are lower cost per measurement, inline validation of physical metrology, and upstream pattern analysis to reduce design, mask, and wafer-level yield loss.

Conclusions

This article identified four trends that address the metrology needs of semiconductor manufacturing at the 32-nm technology node. In particular, the focus has been on the physical parametric measurements that will enable future APC and DFM strategies. The key trends are:

• The transition of the SEM from CD metrology to a more yield-relevant and cost-effective shape metrology utilizing critical design data.

• The transition of SCD from CD metrology to more yield-relevant profile metrology for gates, STI, sidewall spacers, and contact holes.
Spring 2006
B
A
C
Figure 4: (a) Chrome-less phase mask structure. (b) Polarization simulation for I-193nm. (c) Calibrated virtual metrology simulation. Yield Management Solutions
• The transition of overlay metrology from box-in-box
targets in the scribe line to more yield-relevant AIM
grating targets inside the chip.
• The transition from actual to virtual metrology using calibrated process simulators, such as PROLITH and its derivatives.

References
1. K. Monahan and B. Trafas, SPIE Vol. 5756, 2005.
2. P. Leunissen, et al., SPIE Vol. 5752, 2005.
3. L. Capodieci, et al., KLA-Tencor YMS West 2005.
4. P. Leray, et al., SPIE Vol. 5752, 2005.
5. L. Seligson, et al., SPIE Vol. 5752, 2005.

Acknowledgements
The author thanks Brian Trafas, Murali Narasimhan,
Umar Whitney, John Robinson, Amir Azordegan,
Gian Lorusso, Matt Hankinson, Ted Dziura, David
Tien, Mike Adel, Chris Sallee, and Dan Wack for
valuable inputs.
© 2005 IEEE. Kevin M. Monahan, Enabling DFM
and APC Strategies at the 32nm Technology Node.
Reprinted, with permission, from International
Symposium on Semiconductor Manufacturing (ISSM)
2005 Conference.
Reticle Yield Management Seminar
A valuable venue for innovative ideas
KLA-Tencor’s Yield Management Seminars (YMS) focus on the latest solutions and strategies
for accelerating yield through critical technology transitions. Participants have the unique
opportunity to learn and gather information from several leading experts in the field.
Date: Monday, September 18, 2006
Time: 12:30-5:30
Location: Monterey, California
Call for future papers
Papers should focus on using KLA-Tencor tools and solutions to enhance yield through increased
productivity and performance. If you are interested in presenting a paper at one of our upcoming
Yield Management Seminars, please submit a one-page abstract to: Nancy Williams by fax at
(408) 875-4144 or email at nancy.williams@kla-tencor.com.
YMS at a Glance
Date         Location
July 12      San Francisco, California
August       Singapore; Taiwan
Awards
CUSTOMER COMMITMENT COMMENDED WITH SAMSUNG BEST ENGINEER HONOR
Applications Engineer Helps Optimize Puma 9000
Performance at Memory Production Line
For HongSun So, being a successful senior product
applications engineer at KLA-Tencor Korea involves more
than technical expertise. Success is also driven by a deep
understanding of the customer’s real problems. Assigned to
work with the Samsung Electronics Semiconductor Memory
Division on its Puma 9000 implementation, So has clearly made
an impression. For his dedication, the Samsung MI group has
honored So with its Samsung Best Engineer award.
“Receiving Samsung’s award is an honor to the role of the
applications engineer,” said So, a KLA-Tencor employee for the
past five years. “And it means that I need to continue to work
hard and do good work to keep this honor.”
So considers helping Samsung speed its production ramps to be a key responsibility. When a tool encounters an
issue, he’ll typically stand by long after a service engineer has
resolved the problem, just to be sure performance is maintained.
He’ll perform applications tests promptly, and he’s committed to
writing the optimal recipes to help the customer monitor unique
killer defects in production. For the most part, he considers himself to be a voice of the customer, gathering and delivering customer feedback to KLA-Tencor engineering and marketing teams
to help enhance the tools.
“Every day, I meet or call Samsung to maintain our good communication flow as well as our relationship,” explained So. “When
the customer successfully applies some of the Puma performance-related tips I provide, they express their gratitude.
In the fourth quarter of Fiscal Year 2005, So received a KLA-Tencor General Manager Award for his “unequivocal devotion to
work.” Exhibiting strong leadership within the local applications
group, So has mentored and trained local applications engineers
as well as customers, helping them make optimal use of their
Puma 9000 tools.
Prior to being assigned to Samsung, So was the owner of successful Puma evaluation projects for R&D at Hynix. There, he completed 10 layer evaluations within a three-week period—which
was considered by the customer to be unprecedented.
For the Samsung honor, So was chosen from a pool of 370
applications and customer service engineers from more than 20
vendors. Notes So, “Having ownership of one’s job and product is very important in the eyes of a customer. I always try to
understand the customer’s situation and requests, and work hard
alongside the customer to resolve any issues or develop ways to
more effectively find unique defects.”
Accelerating Yield
K-T Certified™
Refurbished price. Winning performance.
The one-stop, worry-free source for fully refurbished KLA-Tencor tools
➤ Guaranteed to meet or exceed original product performance specifications
➤ Short lead times and KLA-Tencor expertise meets your manufacturing requirements with no surprises
➤ Includes performance and reliability-enhancing software and hardware upgrades
➤ Largest supply of KLA-Tencor refurbished tools
➤ Service and applications packages designed to fit your needs
➤ Full asset management program for when you need help managing your surplus assets
➤ For more information, visit: www.kla-tencor.com/certified
©2004 KLA-Tencor Corporation.
Accelerating Yield®
Particle on-epi:
• bright scatter
• dark reflected
• same optical size

Particle in-epi:
• bright scatter
• dark reflected
• smaller scatter signature (film thicker over particle)

©2005 KLA-Tencor Corporation.
Do you know the three W’s of epi-layer
inspection? Only Candela finds where it is, what it is, and when it occurred.
™
Differentiating between subtle optical characteristics can provide critical information on defects. A particle
under the epi layer is a very different problem than a particle on the surface. Our Optical Surface Analyzers
(OSA) are unique surface inspection systems that employ a combination of measurement technologies
to automatically detect and classify a variety of defects. Defects are binned by size into user-defined
categories, and displayed on a defect map. The OSA images remain linked to the report, for quick and
effective review.
• Automatically classifies particles and scratches as "on" versus "in or under" the epi layer
• User-defined defect classifications allow automated detection and reporting of unusual defect types
• Crystal defects such as dislocations and polytype changes are automatically detected and counted
• Manual or automated cassette-to-cassette operation
• Accommodates wafer sizes from 50 to 300 mm
• For more product information, go to: www.kla-tencor.com/candela
Lithography
Metrology
Automating Investigation of
Line Width Roughness
Full Spectral Analysis Enables Benchmarking of New Resists
L.H.A. Leunissen, M. Ercken and J. A. Croon, IMEC
G.F. Lorusso, H. Yang, A. Azordegan and T. DiBiase, KLA-Tencor Corporation
One of the most commonly used estimators of line width roughness (LWR) is the standard deviation. However, this
approach is incomplete and ignores a substantial amount of information. As an alternative, full spectral analysis can
be used to investigate and monitor LWR. A variety of estimators—including standard deviation, peak-to-valley,
average, correlation length, and Fourier analysis—have been implemented online on CD SEM. The algorithms were
successfully tested against e-beam written LWR patterns, both deterministic and random. This methodology, which
allows a fully automated investigation of LWR, was used to monitor LWR over a long period of time, benchmark new
resists, and to investigate the effect of LWR on device performance and yield.
Introduction
The importance of LWR for future technology nodes has been demonstrated in various
experimental and theoretical investigations.
However, although it is clear that LWR
will influence device performances and
production yield, not to mention metrology
strategies, the quantitative details are still
controversial. This information is crucial in
order to define specific guidelines and monitoring criteria. Furthermore, it has been
recently pointed out that a measurement
of LWR alone does not guarantee the full
characterization of the physical phenomenon
under investigation. A full spectral analysis
is needed. In the case of self-affine edges, for
example, it has been demonstrated that a set
of three parameters is required to define the
physical system: the LWR standard deviation σ, the correlation length ξ, containing spectral information on the edges, and the fractal exponent α, related to the kind of
diffusion process involved in the physical
creation of the edges.
In this study, we use full spectral analysis of LWR to
investigate the effect of LWR on yield, precision, and
resist characterization. All of the algorithms used in
this investigation are now implemented online on the
eCD series of KLA-Tencor CD SEMs. The availability
of these algorithms on the monitoring tool is essential
in order to allow LWR characterization in a production
environment. A purposely designed LWR standard,
created by means of direct e-beam writing, has tested
the performance of the measurement algorithms in
terms of accuracy and precision.
In order to experimentally evaluate the effect of LWR on
yield, direct e-beam writing was used to create devices
with varying degrees of LWR. This allowed us to estimate
the experimental parameters in the proposed LWR model,
so we could specify the requirements needed to set
up a monitoring procedure. This article reports LWR
monitoring data over a period of about four months.
The experimental results of the monitoring confirmed
various assumptions of the model, such as the normal
distribution of the LWR population and the validity of
the self-affine hypothesis. The procedure reported here is
intended to describe the various steps needed to design
LWR monitoring in a production environment.
Although the influence of LWR on yield attracts much
attention, the relationship between line edge roughness
(LER) and precision is critical and often underestimated.
This investigation compares the functional dependence
of critical dimension (CD) precision on LWR, correlation
length and gate placement error, and compares the theoretical predictions with experimental measurements.
Finally, an online LWR algorithm is used to quantify
the difference in performances of resists and topcoats for
immersion lithography. These results indicate that LWR
measurement algorithms are a critical metrology tool in
a variety of applications not strictly related to process
monitoring.
Experimental conditions
The measurement algorithms used to characterize LWR
in this work are now fully available on the KLA-Tencor
CD SEMs in the eCD family. They are capable of measuring various LWR estimators, such as sigma, peak-to-valley, and average delta, as well as correlation length
and power spectrum. They allow exporting of all of the
edge information for further analysis. The accuracy and
reliability of the algorithms were tested by means of
e-beam written LER standards. Figure 1 shows an
image of a deterministic LER standard. The use of
these standards allowed fine-tuning and characterization of the performance of the algorithms. The LWR
precision for two consecutive measurements in the
same site was estimated to be 0.2 nm.
Figure 1: SEM image of a LWR standard written by e-beam lithography (Leica VB6-HR). These standards were used to tune and characterize the online algorithms, as well as to create devices with various degrees of LWR.
The resist line patterns were generated with additional
programmed LER using e-beam lithography. On both
sides of the original line, small rectangles were taken,
of which the dose was altered. The dose was randomly
varied between all of the small rectangles, thus resulting in a varying linewidth. The nominal linewidth was
close to 62 nm. The rectangles presented here have a
length of 60 or 120 nm, resulting in a range of correlation lengths. The dose range was chosen to have various
LWR. After the e-beam lithography step, the processing
was continued with etching of the poly-Si, resist strip,
and SiON removal.
In order to determine the effect of gate placement error
on precision for a sample with known roughness characteristics, we simulated the effect of gate misplacement
on a set of experimental results on a sample with well
characterized LWR and correlation. The experimental
results were then compared with the analytical prediction.
For the experimental LWR monitoring, IMEC’s current
standard C013 gate process monitor (130 nm node
design rules) was chosen. Litho target CD was 110 nm.
Targeted gate etch CD was 70 nm. To achieve this 40
nm CD loss during etch, resist and hardmask trim
was applied during the etch process. This monitor was
run bi-weekly. In each monitor batch, two wafers were
selected for measuring the LWR (only after etch). Per
wafer, five dies were measured (center, north, east, south,
and west) and for each die, the LWR measurement was
averaged over 20 sites along a long line. This procedure
was applied seven times.
Currently, IMEC is screening several dedicated ‘wet’
resists and topcoats for the start-up of the immersion
program on their ASML XT:1250Di. Results shown
in this article were still performed on the ASML
PAS5500/1100 ‘dry’ system. All resists were exposed
on ARC29A (Brewer/Nissan Chemical) as organic
bottom anti-reflective coating. Resist thickness applied
was 150 nm. Vendor-recommended process conditions
were applied. All resist coating was done manually
(pipets). Target pitch resolution (150 nm) was achieved
using dipole illumination, 0.89/0.65 σ at 0.75 NA.
The topcoats investigated also needed to be applied
manually. An optimal thickness was chosen for
reflection control.
LWR description
In order to test the yield model, and estimate its parameters, transistors with additional LWR were created.
Many factors contribute to LWR, including resist composition1, aerial image contrast1,2,3,4, development1,4, and process conditions3,4,5. The LWR is usually characterized by the 3σ value, where the standard deviation (σ)
is defined as:
σ = [ (1/N) Σi δW(zi)² ]^(1/2)    (1)

where δW(zi) is the deviation from the average linewidth and N is the number of measurement points. It is not sufficient to measure only the 3σ variation5,6,7.
A more complete description of the LWR can be obtained by measuring the full spatial frequency dependence of the roughness. All this makes the comparison
and quantification of line edge roughness more difficult.
Besides σ, the spatial frequency components can be
resolved using a power spectral density (PSD) function,
height-height correlation function, or by assuming a
first-order autoregressive process. The spatial frequency
dependence can be described by two additional parameters that quantify the spatial aspects of LWR: the
roughness exponent (α) and the correlation length (ξ). The roughness exponent is associated with the fractal dimension D (α = 2 − D)8 and its physical meaning is that it gives the relative contribution of high frequency fluctuations to LWR. Large values of α signify less high
frequency fluctuations. On the other hand, the correlation length denotes the distance after which the edge
points can be considered uncorrelated.
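As an illustration of how these descriptors can be estimated from a sampled width profile, the sketch below computes the LWR sigma, a correlation length (taken here, as one common convention, as the lag where the autocorrelation first falls below 1/e), and the power spectral density. The array of width samples and the pixel size are hypothetical; this is not the on-tool algorithm.

import numpy as np

def lwr_descriptors(width_nm, pixel_nm):
    # Deviation from the average linewidth and its standard deviation (sigma)
    dw = width_nm - width_nm.mean()
    sigma = np.sqrt(np.mean(dw ** 2))

    # Autocorrelation; correlation length = first lag below 1/e
    ac = np.correlate(dw, dw, mode="full")[dw.size - 1:]
    ac = ac / ac[0]
    below = np.where(ac < 1.0 / np.e)[0]
    xi = below[0] * pixel_nm if below.size else float("nan")

    # One-sided power spectral density of the width fluctuations
    psd = np.abs(np.fft.rfft(dw)) ** 2 * pixel_nm / dw.size
    freqs = np.fft.rfftfreq(dw.size, d=pixel_nm)   # cycles per nm
    return sigma, xi, freqs, psd

# Hypothetical profile: smoothed noise standing in for a measured 62-nm line
rng = np.random.default_rng(1)
w = 62.0 + np.convolve(rng.normal(0.0, 1.0, 2000), np.ones(20) / 20, mode="same")
sigma, xi, _, _ = lwr_descriptors(w, pixel_nm=2.0)
print(round(sigma, 2), "nm sigma,", round(xi, 1), "nm correlation length")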
Impact of LWR on yield
Poly gates of devices exhibit a certain linewidth roughness. Generally, parameters that describe the operation
of the MOS device are a function of its gate length (L)
(Note that the gate length of a MOSFET is equal to the
width of the poly line). Roughness causes this length
to locally vary over the width (W) of the device. This
roughness causes an increase in parameter fluctuations
and off-state current. It can also give rise to yield loss.
In this section we will calculate the impact of LWR on
yield for devices. The CD distribution as found experimentally in Figure 4 and Figure 8 shows that the
population is normally distributed, which results in the
following probability density function
p(L) = [ 1/(σLWR √(2π)) ] exp[ −(L − ⟨L⟩)² / (2 σLWR²) ]    (2)
with ⟨L⟩ the average linewidth and σLWR the local CD variation. We take the variable ΔL = (L − ⟨L⟩)/σLWR, having dΔL = dL/σLWR. We assume that a segment causes
the whole device to fail if its length is smaller than a
certain critical length (Lcrit). Therefore, we need the
cumulative distribution function, which gives the probability that L<Lcrit. The chance that this happens for one
segment (pseg) is:

pseg = (1/√(2π)) ∫ from −∞ to ΔLcrit of exp(−ΔL²/2) d(ΔL)    (3)

where ΔLcrit = (Lcrit − ⟨L⟩)/σLWR.
The device contains W/ξ segments, depending on the width. When W < ξ, the chance that one device fails is equal to pseg (i.e., max{1, W/ξ} = 1). To calculate the chance that a device is not working (pdev), the device is divided into segments of width ξ. Each segment is considered to have a constant gate length that varies with σLWR. This yields:

pdev = 1 − (1 − pseg)^max{1, W/ξ}    (4)
Finally, the yield of a circuit with Ndev identical devices
can be calculated to be:
Y = (1 − pdev)^Ndev    (5)
It is assumed that the circuit only functions when all of
its devices are working. Transistors with different line
edge roughness were used to test this yield model. A
rather arbitrary failure criterion was chosen, namely the
appearance of punch-through, which will demonstrate
the model nicely. Punch-through reduces the control of
the gate on the off-state current, which causes it to bend
upwards. Mathematically, this translates into: the device
fails when d²(ln ID)/dVGS² > 0 at VGS = 0 V10. Figure 2
shows the fraction of working transistors as a function of
the gate width for smooth transistors and the two cases
where extra roughness was introduced. The correlation
length x and LWR s are indicated in the plot, as determined by SEM pictures. The induced correlation length
is increased with respect to the initial values because
they are convoluted with the intrinsic correlation length
of the resist (about 40 nm). It is seen that Equation (5)
gives a reasonably good fit to the experimental data with
Lcrit ≈ 50 nm (the accuracy is somewhat limited due to
a sample size of only ~60 devices per point). It follows
that Lcrit= Lnominal - 16.5 nm.
LWR monitoring
Figure 2: Relative amount of 62-nm gate length transistors that do not suffer from punch-through, as a function of the number of devices, for three gate patterning processes with intrinsically different LER characteristics.
In Figure 4, we show the LWR distribution obtained
during our monitoring for about 600 measurements.
The population is clearly normal, as evidenced by the
Gaussian fit. We estimated an average LWR of 2.3 nm,
with a standard deviation for the individual measurement of 0.34 nm. As a consequence, the precision of the
estimated LWR is 0.04 nm. These results confirm the
assumption on normality of the LWR stochastic process,
used earlier to calculate the effect on the yield. Besides
LWR, we also monitored correlation length and fractal
exponent. The correlation length resulted in 34.5 nm,
with a precision of 1.2 nm on this estimate, while the
fractal a was 0.55, with a precision of 0.009.
Technology Node   Transistors per Chip   3σLWR (ITRS) [nm]   (⟨L⟩ − Lcrit)/3 (calc.) [nm]   Gate Length [nm]
22 nm             8.85E+09               0.70                0.76                           9
35 nm             4.42E+09               1.00                1.10                           13
45 nm             2.21E+09               1.40                1.52                           18
65 nm             1.11E+09               2.00                2.12                           25
90 nm             5.53E+08               3.00                3.13                           37
130 nm            2.76E+08               4.60                5.50                           65

Table 1: Predicted parameters for future technology nodes.
Using the simple model, we want to determine the
impact on a large-scale design. Therefore, we consider
a digital circuit with a large number of devices and
assume W = ξ = L using the numbers as given in Table 1.
Figure 4: Experimentally found distribution of the LWR (solid line) and a fitted
Gaussian (dashed line).
Inserting the numbers in Equations (3)-(5), we calculate
the yield for a given sLWR. Figure 3 displays the yield
for the applied parameters for the 130 nm node down to
the 22 nm node, respectively. We consider here a device
failed when more than 0.1% of its components do not
satisfy the requirement for critical length.
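A short worked example of Equations (2)-(5), using the normal cumulative distribution for the per-segment failure probability and W = ξ = L as assumed above. The numerical inputs below (a 62-nm mean gate, Lcrit = Lnominal − 16.5 nm, a 2.3-nm LWR sigma, and 1E8 devices) are illustrative values assembled from elsewhere in this article, not a reproduction of Table 1 or Figure 3.

import math

def segment_fail_prob(mean_L, L_crit, sigma_lwr):
    # P(L < Lcrit) for a normally distributed local gate length (Eqs. 2-3)
    z = (L_crit - mean_L) / sigma_lwr
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def device_fail_prob(p_seg, W, xi):
    # The device is split into max(1, W/xi) independent segments (Eq. 4)
    return 1.0 - (1.0 - p_seg) ** max(1.0, W / xi)

def circuit_yield(p_dev, n_dev):
    # The circuit works only if all n_dev identical devices work (Eq. 5)
    return (1.0 - p_dev) ** n_dev

p_seg = segment_fail_prob(mean_L=62.0, L_crit=62.0 - 16.5, sigma_lwr=2.3)
p_dev = device_fail_prob(p_seg, W=62.0, xi=62.0)
print("yield =", circuit_yield(p_dev, n_dev=1e8))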
Figure 5 shows the combined results for the first four
different monitoring days, 100 measurements each.
The information on the LWR can be extracted by using
three different characterization functions: (a) structure
function, (b) power spectrum, and (c) dependence of the LWR on gate length.
Figure 3: Yield as a function of different technology nodes as calculated using
Eq.(5). The parameters as defined in Table 1 are used.
Figure 5: Typical behavior of the three different characterization functions: (a)
structure function, (b) power spectrum, and (c) dependence of the LWR on gate length.
The plot shows the average functions measured in each monitoring. The repeatability of
the four estimators is evident: the different monitoring
yields essentially the same curve, since the stochastic
process is essentially the same. There are, however, differences between the three approaches. The structure
function is definitively a better estimator for the correlation length and the fractal dimension as compared to
the power spectrum or the s(L), and it contains information on each of the three parameters.
The results in Figure 5 demonstrate experimentally that
the LWR is indeed a self-affine process, as previously assumed. The three curves are essentially identical to the
simulations by Constantoudis et al. in13, thus confirming
that the full description is achieved with a three-parameter model, where the descriptors are s, a, and x. The
results also validate the hypothesis that the knee in the
power spectrum is indeed related to the correlation length.
By using our estimate of σ and ξ, we define the yield
criteria from an earlier section of this article for a
CD=75.8 nm (Figure 6).
Figure 6: The dots indicate the monitoring data with error bars. The different dashed lines represent the yield criteria for 50%, 95%, and 99%, respectively.

LWR and precision

It is demonstrated that the measured LWR (σmeas) changes when the box length is increased. Investigation of points along the edge shows that for larger distance (x) between two measurements, the data is independent. However,
for distances smaller than the correlation length (ξ) there exists a dependency of the data. We assume a first-order autoregressive process R(x) as defined by

(6)

with σinf representing the sigma at the low frequency limit (long box length). Then, the following two-parameter equation can be derived that fits the sigma variation with increasing box length L12,14:
(7)
Therefore, if the σmeas(L) measured for a given length is smaller than σinf, the portion of the LWR that is not measured is the CD variation (σCDU) contribution of the resist that can be calculated using
We notice also that the estimate of the sigma from the edges appears to be slightly smaller than the one from the three estimator functions (2.3 nm versus 2.5 nm). This is accounted for by the autoregressive model of σ(L). The monitored σinf = 2.5 nm after correcting for the limited field of view.
σCDU(L) = [ σinf² − σmeas²(L) ]^(1/2)    (8)
In the next section of this article, we discuss the effect of LWR, correlation length, box length, and positioning error on CD precision. Suppose that the box length of the measured line is large, such that σ(L) ≈ σinf. The consequence is that repetitive measurements yield the same results. Repeating measurements for shorter lengths results in a spread of the determined σmeas.

For example, we re-measure the CD of a line, but the line is shifted with respect to the original measurement position. The error with respect to the original measurement position has the same form as Equation (7), but L is replaced by the positioning error (Δxpos) and σinf is replaced by σCDU. The correlation length is related to the resist and is unaltered.
(9)
If Δxpos = 0, an additional CD measurement yields identical results. For Δxpos >> ξ, the measurements are independent and the spread is σCDU.
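Since the closed forms of Equations (6)-(9) are not reproduced here, the following Monte Carlo sketch illustrates the same behavior numerically under assumed parameters: linewidth fluctuations are generated as a first-order autoregressive process with correlation length ξ, and the sigma measured over a finite box is both smaller than σinf and spread from repeat to repeat, shrinking toward σinf as the box length grows. The parameter values are illustrative only.

import numpy as np

def ar1_profile(n, pixel_nm, sigma_inf, xi_nm, rng):
    # First-order autoregressive linewidth fluctuation with correlation length xi
    rho = np.exp(-pixel_nm / xi_nm)
    w = np.empty(n)
    w[0] = rng.normal(0.0, sigma_inf)
    eps = rng.normal(0.0, sigma_inf * np.sqrt(1.0 - rho ** 2), n)
    for i in range(1, n):
        w[i] = rho * w[i - 1] + eps[i]
    return w

# Assumed parameters, chosen only for illustration
rng = np.random.default_rng(2)
pixel, sigma_inf, xi = 2.0, 2.5, 35.0                 # nm
profile = ar1_profile(200_000, pixel, sigma_inf, xi, rng)

for box_nm in (100, 200, 500, 2000):
    n_box = int(box_nm / pixel)
    boxes = profile[: (profile.size // n_box) * n_box].reshape(-1, n_box)
    s = boxes.std(axis=1)                             # sigma measured per box
    print(box_nm, "nm box: sigma =", round(s.mean(), 2), "+/-", round(s.std(), 2))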
We take a measurement box of L=200 nm and measure
the LWR to verify the model experimentally. Figure 7
shows the experimental findings and the theoretical results from Equation (9). The parameters in Equation (9)
are ξ = 44.3 nm and σinf = 5.5 nm as found from earlier
experiments using this resist.
The theoretical results match the experimental data. These findings show that for a given box length the precision of a CD measurement is limited due to the positioning accuracy of the CD-SEM. If better LWR reproducibility is required, then the box length should be increased or the repositioning at the exact same position needs to be improved.

Figure 7: CD repeatability as a function of the positioning error along the edge of the line.
Resist characterization

The algorithms used to characterize roughness are being applied to a variety of use cases, in addition to process control. In particular, we reported recently9,15 on LWR measurement characterizing LER transfer to etch and roughness variation caused by 193 nm shrinkage. We discuss here the use of the LER algorithm applied to benchmarking of resist and topcoats. The results in Figure 8 show the LWR measured on five different resists, being screened to define the candidate of choice
for immersion lithography on the ASML XT 1250i at
IMEC (exposures are performed on the ASML /1100
‘dry’). The Gaussian behavior is again evident. Each
measurement is performed over 340 sites, and the precision of the estimated average is less than 0.1 nm, thus
allowing clear, rigorous discrimination between resist
type. The measurements were fully automated, and they
took less than one hour per resist. The mean LWR for
the five resists is given in Table 2:
Resist    σLWR [nm]
LS2       4.68
LS3       3.30
LS6       3.61
LS7       3.90
LS8       2.90

Table 2: Mean σLWR for five resists as evaluated on the ASML /1100 'dry'.
Besides average information on the roughness across
the wafer, it is possible to obtain wafer mapping of the
LWR. In Figure 9, radial LWR maps for different resists
are given as an example. The data clearly indicate a
spike in LWR of the resist L4 close to the center of the
wafer, most probably related to problems in the coating
procedure.
Figure 9: Radial 3σLWR maps for different resists.
Figure 8: Measured distribution of the σLWR for five different resists.

Conclusions
The effects of line edge roughness on the intrinsic MOS
transistor performance and yield have been investigated
in a 130 nm CMOS technology with gate lengths ranging down to 50 nm. From the experiment, it follows
that Lcrit ~ Lnominal - 16.5 nm. This margin can be
increased by reducing σLWR or increasing ξ. However, the latter 'solution' will also give rise to increased parameter fluctuations. A better approach might be to actually try to decrease ξ so far that further processing more efficiently smooths out the roughness, which, in turn, leads to a lower value of σLWR. The experiment for a large-scale digital design shows that the devices have to be able to cope with acceptable device deviations for a given σLWR. It also has been demonstrated that
the online metrology can be applied with high accuracy
for the screening of new resist materials.
Acknowledgements
The authors would like to thank Christie Delvaux,
Nadia Vandenbroeck, and Frieda Van Roey for all
the practical work and assistance in the many measurements. The authors are indebted to the European
Commission and the Medea+ organization, for the
funding of the European project IST-1-507754-IP
(More Moore). The authors would also like to thank
the IMEC lithography department, including the
industrial affiliates based at IMEC and the IIAP
partners of the 193-immersion program.
L.H.A. Leunissen, G.F. Lorusso, M. Ercken,
J. A. Croon, H. Yang, A. Azordegan, T. DiBiase,
Full Spectral Analysis of Line Width Roughness in
SPIE 2005 Metrology, Inspection, and Process Control
for Microlithography XIX, Proceedings of SPIE
Vol. 5752, pgs. 578-590 (2005)
References
1. W.G. Lawrence, “Spatial Frequency Analysis of Line Edge
Roughness in Nine Chemically Related Photoresists”, Proc.
SPIE 5039, 713 (2003)
2. J. Shin, G. Han, Y. Ma, K. Moloni and F. Cerrina, “Resist
Line Edge Roughness and Aerial Image Contrast”, J. Vac. Sci.
Technol. B19, 2890 (2001)
3. H.P. Koh, Q.Y. Lin, X. Hu and L. Chan, “Effect of Process
Parameters on Edge Roughness of Chemically Amplified
Resists”, Proc. SPIE 3999, 240 (2000)
4. S. Masuda, X. Ma, G. Noya and G. Pawlowski, “Lithography
and Line Edge Roughness of High Activation Energy Resists”,
Proc. SPIE 3999, 252 (2000)
5.M. Ercken, G. Storms, C. Delvaux, N. Vandenbroeck,
P. Leunissen and I. Pollentier, “Line Edge Roughness and its
Increasing Importance”, Proceedings of Interface (2002)
6. G.P. Patsis, V. Constantoudis, A. Tserepi and E. Gogolides,
“Quantification of Line Edge Roughness of Photoresists. Part I: A
Comparison Between Off-line and On-line Analysis of Top-down
SEM Images”, J. Vac. Sci. Technol. B21, 1008 (2003)
7. G. P. Patsis, V. Constantoudis, A. Tserepi, and E. Gogolides,
Grozdan Grozev, and T. Hoffmann, “Roughness Analysis of
Lithographically Produced Nanostructures: Off-line Measurement and Scaling Analysis”, Microelectronic Engineering 678: 319-25 (2003)
8. A.-L. Barabasi and H.E. Stanley, “Fractal Concepts in Surface
Growth”, Cambridge University Press, Cambridge, England,
1995
9. L.H.A. Leunissen, G. F. Lorusso, T. DiBiase, H. Yang, A. Azordegan, “On-line Spectral Analysis of Line Edge Roughness:
Algorithms Qualification and Transfer to Etch”, Semiconductor
Fabtech (to be published)
10. J.A. Croon, L.H.A. Leunissen, M. Jurczak, M. Benndorf, R.
Rooyackers, K. Ronse, S. Decoutere, W. Sansen and H.E.
Maes, “Experimental Investigation of the Impact of Line-edge
Roughness on MOSFET Performance and Yield,” Proc. ESSDERC, 227 (2003)
11. V. Constantoudis, G. P. Patsis, L. H. A. Leunissen, and E. Gogolides, “Line Edge Roughness and Critical Dimension Variation: Fractal Characterization and Comparison using Model
Functions”, J. Vac. Sci. Technol. B 22, 1974 (2004)
12. L.H.A. Leunissen, W.G. Lawrence and M. Ercken, “Line
edge roughness: Experimental Results Related to a Two-parameter Model” Microelectron. Eng. 73-74, 265 (2004).
13. V. Constantoudis, G. P. Patsis, L. H. A. Leunissen, and E.
Gogolides, “Toward a Complete Description of Line Width
Roughness: a Comparison of Different Methods for Vertical
and Spatial LER and LWR Analysis and CD Variation”, Proc.
SPIE 5375, 967 (2004)
14. J.A. Croon, W. Sansen, and H.E. Maes, “Matching Properties of Deep Sub-micron MOS Transistor” Springer (to be
published)
15. L.H.A. Leunissen, M. Ercken, M. Goethals, S. Locorotondo,
K. Ronse, G.B. Derksen, D. Nijkerk, G.F. Lorusso, “Transfer
of Line Edge Roughness During Gate Patterning Process” Proceedings of International Symposium on Dry Process (ISBN
4-9900915-7-4), 1 (2004)
Critical Dimension
Metrology
Focusing on the Drifts
Spectroscopic Ellipsometry-based APC for Consistent
Device Performance
W. Lin, S. Liao, R. Tsai, M. Yeh, C. Hsieh, Y. Yu, and B.S. Lin, United Microelectronics Corporation
S. Fu and T.G. Dziura, KLA-Tencor Corporation
Lot-to-lot after-develop inspection (ADI) critical dimension (CD) data are generally used to tighten the variation of
exposure energy of an exposure tool through an automated process control (APC) feedback system. With decreasing device
size, the process window of an exposure tool becomes smaller. Therefore, whether the ADI CD can actually reveal the
real behavior of a scanner becomes a more critical question, especially for the polysilicon gate layer. CD SEM has generally been chosen as the metrology tool for this purpose, but top-down CD SEMs do have their limitations. Spectroscopic
ellipsometry-based scatterometry technology, commonly referred to as SpectraCD, provides an alternative. In this study,
SpectraCD, in contrast with CD SEM, improved the linearity of the correlation between ADI and after-etch inspection
(AEI) CDs from 0.4 to 0.8. The resulting data provided sufficient motivation to switch the APC feedback system from
CD SEM to SpectraCD.
Introduction
APC has been in place for some time in
semiconductor fabs and has proven to be
crucial for achieving the required manufacturing tolerances for advanced technology
nodes. The technique has been applied in
CD control using CD SEM data as input,
which the APC system then uses to adjust
the scanner dose for subsequent lots. This
algorithm works well provided that the
focus-exposure window is periodically monitored1. As design rules continue to shrink,
the demands on metrology performance are
increasing, motivating process engineers to
evaluate alternative CD measurement technologies such as SpectraCD (SCD), which is
a model-based scatterometry tool utilizing
spectroscopic ellipsometry. SCD has demonstrated good precision and throughput performance, and is being evaluated for inline
process control2. The tool reports multiple
profile features (CD, height, sidewall angle
(SWA)) which can in principle be used to control multiple aspects of the process. This article describes the
results of employing SCD in a 130-nm APC system.
CD measurement with SpectraCD
SpectraCD is a model-based metrology tool for measuring CD and profile of structures. It uses a broadband
light source to collect spectroscopic ellipsometry data,
reporting this as spectral variation in ‘alpha’ and ‘beta’
(analogous to the ellipsometric quantities tan Y and cos
D). Data is collected from grating targets that contain
the device structure of interest. The gratings may be
one-dimensional (line) or two-dimensional (contact
arrays). Measuring the device profile consists of computing a diffraction spectrum and fitting the resulting
alpha-beta spectrum to the data collected, then reporting the profile parameters that give the best fit to the
data. The model is computed either offline (library
mode) or in real-time regression mode (CDExpress).
The tool is also a complete films characterization and
measurement platform.
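A minimal sketch of the library-mode idea, with made-up arrays standing in for real alpha-beta spectra and library entries (the actual SpectraCD model computation and fitting engine are not shown): each candidate profile's modeled spectrum is compared with the measured spectrum, and the parameters of the best-fitting entry are reported.

import numpy as np

def best_library_match(measured, library_spectra, library_params):
    # Sum-of-squares residual between the measured spectrum and each library
    # entry; report the profile parameters of the closest match.
    residuals = ((library_spectra - measured) ** 2).sum(axis=1)
    best = int(np.argmin(residuals))
    return library_params[best], float(residuals[best])

# Hypothetical library: stand-in "spectra" for a range of candidate mid-CD values
rng = np.random.default_rng(3)
wavelengths = np.linspace(250.0, 750.0, 100)                  # nm
cds = np.arange(100.0, 121.0)                                 # nm
library = np.array([np.sin(wavelengths / cd) for cd in cds])  # toy model spectra
params = [{"mcd_nm": float(cd)} for cd in cds]

measured = np.sin(wavelengths / 112.0) + rng.normal(0.0, 0.01, wavelengths.size)
print(best_library_match(measured, library, params)[0])       # {'mcd_nm': 112.0}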
Monitoring ADI and ACI processes
Figure 3. Correlation between SCD MCD and exposure energy, measured over the time period shown in Figs. 1 and 2.
Data were collected from a 130-nm polysilicon process
from 211 product lots; two scanners, six etch chambers,
three CD SEMs, and one SCD tool were involved in this
study. In the initial APC algorithm, CD SEM CD data
was fed back to the scanner to control exposure; no feedback was employed to control scanner focus. SCD measurements were made in parallel with the data collected
by the APC system. It was observed that the ADI MCD
(CD at 50 percent of height) measured by SCD exhibited a systematic trend over a certain group of lots (Fig.
1), implying that one or more of the scanner parameters
was drifting (while APC was enabled). Another possibility was that SCD was measuring the lots incorrectly
(for whatever reason). Several tests were then performed
to isolate the root cause. Figure 2 shows the trend in
scanner exposure energy during the same time period.
The change in MCD is clearly anti-correlated with the
change in scanner dose, indicating that the SpectraCD
had correctly flagged a dose drift. In fact, if the change
in MCD is normalized to the change in dose (Figure 3),
Figure 4. Lot-to-lot trends in photo and etch MCD compared over the same time
period, as measured by SCD. The results indicate that the changes flagged by
SCD in photo were replicated in etch.
the result (8.6 nm/mJ) agrees quite well with the calibration obtained independently with a single matrix
wafer (7.1 nm/mJ).
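To make the feedback idea concrete, here is a hedged sketch of a simple proportional dose correction of the kind an APC loop can apply once a CD-to-dose sensitivity (such as the ~8.6 nm/mJ value above) is known. The function, the gain, and the numbers are assumptions for illustration, not the controller actually used in this study.

def next_dose(current_dose_mj, measured_mcd_nm, target_mcd_nm,
              sensitivity_nm_per_mj=8.6, gain=0.7):
    # MCD falls as dose rises for this process, so a CD above target calls for
    # proportionally more dose; the gain damps the correction.
    cd_error = measured_mcd_nm - target_mcd_nm
    return current_dose_mj + gain * cd_error / sensitivity_nm_per_mj

# Hypothetical lot: ADI MCD came in 2 nm above target at a 26.0 mJ exposure
print(round(next_dose(26.0, measured_mcd_nm=112.0, target_mcd_nm=110.0), 3))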
Figure 1. ADI MCD trend over 135 lots (data and 5-lot moving average) as measured by SCD, while under CD SEM-based APC control.
Figure 2. Trend of scanner dose and ADI MCD from SCD measured over 135 lots. There is a clear correlation between the MCD and dose over time.
A comparison of the lot-to-lot trends at ADI and ACI
gave additional confidence that SCD measured the lots
correctly. Figure 4 shows the trend in ADI MCD and
ACI MCD from the same lots, processed by one scanner
and three different etch chambers. The changes in CD
track very well between photo and etch; R2 correlation
values ranged from 0.8 to 0.87. It is expected that any
CD changes occurring in photo would be transferred to
etch in the absence of any feed forward correction. Finally
the SCD SWA trend (Fig. 5) indicated good stability of
the scanner focus, eliminating that as a possible cause
of the MCD drift. The measured SWA 3s ~ 0.7 deg;
combining this with a known 0.8 deg SWA drift per
0.1 µm of focus drift gives a focus stability < 0.1 µm.
Figure 5. Lot trend of the ADI SWA measured by SCD, indicating good stability of scanner focus.
can be drawn: (1) the CD SEM is capturing real photo
and etch CD variation, and the SCD measurement is
less sensitive to these changes, or (2) SCD is measuring
the lots correctly. We rejected conclusion (1) because
it is unlikely that the SCD measurement is insensitive
for both ADI and ACI in a way that gives good correlation between photo and etch through the entire range
of CD variation. Also, the CD SEM CD measurement
is known to be affected by any SWA variation; for poly
lines ~ 2 nm/deg of CD shift has been measured3, and
this can increase the measured lot-to-lot variation. The
possibility that there was an issue with one particular
CD SEM was considered; therefore, a performance comparison was conducted with that CD SEM on several
other tools. The correlation between etch and photo
was consistently better on the SCD tool; R2 varied from
0.1-0.38 for the CD SEMs, while the 0.8-0.87 range for SCD was already mentioned. Testing was then conducted using the
SCD tool with the APC system.
APC control with SCD
Figure 6. ADI and ACI lot trends measured by CD SEM. The ACI CD value has
been shifted by the mean etch bias.
The greater measured correlation coefficient between
dose and MCD (0.785) compared to SWA and MCD
(0.177) lends more weight to the conclusion that real
changes in dose drove drift in MCD, which were
flagged by the SCD measurement.
The performance of the APC system with SCD data
input was evaluated from ADI control to Lcap.
Figure 7 shows the ADI MCD lot trend before and
after switching to SCD APC. A dramatic improvement
in the lot-to-lot 3s of 53 percent is observed. This was
achieved with no change in etch bias and an improvement in etch bias variation as well. Figure 8 shows
the etch bias with and without SCD APC. The etch
bias is essentially unchanged, and the etch bias 3s is
reduced by almost 24 percent. With these improvements in CD control, one would expect that significant
improvements at electrical test would be realized as
well. This was confirmed by Lcap data taken under the
A close examination of the CD SEM data from the same
lots proves instructive. Figure 6 shows ADI and ACI
lot trends, with the mean value of the ACI CD shifted
by the mean etch bias in order to overlay the two curves
for comparison. For the most part, the data indicates
that the CD at etch follows that at photo, but there are
some clear excursions (highlighted in the figure), and
the post-etch CD variation is larger. In fact, the etch
bias lot-to-lot 3s is more than 70 percent greater on the
CD SEM. There are several possible conclusions that
Comparing SCD and CD SEM performance
Figure 7. Comparison of APC system performance when the metrology input was
changed to SCD.
Figure 8. Etch bias for the case of CD SEM-based APC and SCD-based APC.
Figure 10. Correlation over several lots of Lcap with ADI MCD measured by SCD.
Acknowledgment
W. Lin, S. Liao, R. Tsai, M. Yeh, C. Hsieh, Y. Yu,
B.S. Lin, S. Fu, T.G. Dziura, Feasibility of Improving
CD SEM-based APC System for Exposure Tool by
Spectroscopic Ellipsometry-based APC System in SPIE
2005 Data Analysis and Modeling for Process Control
II, Proceedings of SPIE Vol. 5755, pgs. 138-144 (2005)
References
1.K. Monahan, “Microeconomics of Accelerated Shrinks in
Demand-Limited Markets,” ISSM 2001.
Figure 9. Lcap lot trend under CD SEM-based APC and SCD-based APC.
two APC systems. Figure 9 shows that a 45 percent
reduction in Lcap variation was achieved by converting
to a SCD-based APC system. The correlation between
ADI MCD, measured on a grating target, and Lcap,
measured in a test key, is good, confirming that good
front-end control using SCD results in consistent device
performance.
Conclusions
The data presented in this study demonstrates the
applicability of SpectraCD for litho and etch process
control, especially when used as the data source for the
fab APC system. SCD is capable of flagging process
drift with good precision. Dose calibrations agree with
established metrology, making transfer of process control to SCD seamless. The high measurement stability
at both photo and etch results in stable front-end process
control and good final electrical device performance.
2.R. M. Peters, R. H. Chiao, T. Eckert, R. Labra, D. Nappa,
S. Tang, and J. Washington, “Production Control of Shallow
Trench Isolation (STI) at the 130nm Node Using Spectroscopic Ellipsometry Based Profile Metrology”, Proc. SPIE,
vol. 5375, pp. 798-806 (2004).
3.V. Ukraintsev, “Effect of bias variation on total uncertainty of
CD measurements”, Proc. SPIE, vol. 5038, pp. 644-650
(2003).
4. W. Lin, S. Liao, R. Tsai, M. Yeh, C. Hsieh, Y. Yu, B.S. Lin,
S. Fu, T.G. Dziura, Feasibility of Improving CD SEM-based
APC System for Exposure Tool by Spectroscopic Ellipsometrybased APC System in SPIE 2005 Data Analysis and Modeling for Process Control II, Proceedings of SPIE Vol. 5755, pgs. 138-144 (2005)
Overlay
Metrology
In-chip Overlay Metrology
in 90-nm Production
Bernd Schulz, Rolf Seltmann, and Joerg Paufler, AMD
Philippe Leray, IMEC
Aviv Frommer, Pavel Izikson, Elyakim Kassel, and Mike Adel, KLA-Tencor Corporation
© 2005 IEEE. Bernd Schulz, Rolf Seltmann , Joerg Paufler, Philippe Leray, Aviv Frommer, Pavel Izikson, Elyakim Kassel & Mike Adel,
In-chip overlay metrology in 90nm production. Reprinted, with permission, from International Symposium on Semiconductor Manufacturing
(ISSM) 2005 Conference.
While scanner aberration-induced pattern placement errors (PPE) can be measured, simulated, and validated by scanning
electron microscope (SEM), the magnitude of the effect on late-generation scanners is small — of the order of ~1.5 nm in
one line peak to peak across the slit. In-die overlay data contains additional sources of variation beyond PPE, and the
results have been verified by SEM. Current practices based on linear models do not capture in-die variations as large as
5 nm, which significantly impacts model residuals. Currently, in-die target insertion is an insurance policy which enables
in-die troubleshooting when process issues are suspected, and will potentially improve lot dispositioning in the future.
Introduction
Overlay is traditionally measured with
metrology structures located in the four
corners of a reticle field, consistent with the
assumption of an overlay model with linear
field dependence. This assumption dictates
that the four corner measurements represent the extremes of overlay of all structures
inside the reticle field. A number of recent
studies have indicated that the discrepancies from field linearity may no longer be
negligible when considering overlay control
requirements at 65 nm. In order to be able
to characterize and monitor overlay at multiple field locations, it has become necessary
to insert metrology structures into locations other than the field edge scribe lines.
Standard size overlay structures are difficult
to insert in-chip and represent a barrier to
verification of the above assumption. This
study reports on results of in-chip overlay
metrology on product wafers using micro-AIM targets of 13x13 µm2.
on real product reticles. The size of the mark under test
is about 13x13 µm2, compared with the standard size
of 27x27 µm2. The design of the reference layer has a
NS (non-segmented) outer and segmented inner. As
will be described later, this mark can be used as a PPE
monitor prior to the subsequent processing and lithographic steps. The current layer design consists of a
NS inner, so the complete mark behaves as a NS AIM
overlay mark. This is shown in the SEM images in
Figures 3 and 4. Insertion locations were selected in
areas of the device without electrical functionality.
These areas are typically tiled with dummy areas. Some
of the dummy tiles are removed to place the overlay
Metrology target insertion into
product dies
This section describes the methods used for
designing and laying out micro-AIM marks
Figure 1: Alternative micro-AIM layout strategies: in-die method on the left and
internal scribe method on the right.
Figure 2: Optical image of micro-AIM target inserted in-die as shown on the left
in Figure 1.
Figure 5: 3sigma precision performance of standard AIM, scribe line micro-AIM,
and in-die micro-AIM targets compared with scribe line box in box.
Figure 3: SEM images of micro-AIM target in photoresist.
Figure 4: SEM image of micro-AIM target after second lithography and subsequent etch.
Metrology performance on in-die targets
In order to enable metrology target insertion in multiple field locations, target size needs to be substantially reduced. In this study the actual target area was
reduced by a factor of four compared with the standard
scribe line targets. One of the potential risks associated
with this drastic reduction is the possible degradation
in overlay mark fidelity (OMF) and precision due to
reduced spatial and temporal information content.
The data in Figure 5 indicates that although some
degradation is observed, the performance is still deep
within the specifications of the Archer AIM tool. The
in-die optical metrology results were also validated by
a correlation study with CD SEM-based overlay results
after etch, as shown in Figure 6. A Mandel regression
marks (See optical image in Figure 2). An alternative
insertion strategy also tested was based on internal
scribe lines within a six-die field. These two insertion
strategies are shown diagrammatically in Figure 1.
Figure 6: CD SEM overlay versus imaging overlay results from in-die targets.
was performed on the data sets to evaluate linearity and
biases, and the results in Table 1 indicate a good linearity with a small but significant bias in the x direction.
This is potentially attributed to CD SEM bias as no tool
induced shift (TIS) correction is applied, as opposed
to the case in optical metrology where TIS is routinely
measured and corrected.
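For reference, a simplified stand-in for this kind of errors-in-both-variables fit is a total least squares (orthogonal) regression, sketched below on synthetic data; the Mandel procedure used in the study additionally accounts for the ratio of the two error variances, which is not reproduced here. The data and function names are assumptions for illustration.

import numpy as np

def orthogonal_regression(x, y):
    # Total least squares line fit y = slope*x + offset, treating both
    # variables as noisy (a simplified stand-in for a Mandel-style regression).
    X = np.column_stack([x - x.mean(), y - y.mean()])
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    a, b = vt[-1]                       # normal vector of the best-fit line
    slope = -a / b
    return slope, y.mean() - slope * x.mean()

# Synthetic data: noise on both axes, true slope ~0.99 and offset ~-1.4 nm
rng = np.random.default_rng(5)
truth = rng.uniform(-10.0, 10.0, 60)
optical = truth + rng.normal(0.0, 0.7, truth.size)
sem = 0.99 * truth - 1.4 + rng.normal(0.0, 0.7, truth.size)
print(orthogonal_regression(optical, sem))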
              Slope          Offset [nm]   R2
X overlay     0.99 ± 0.07    -1.4          0.88
Y overlay     0.94 ± 0.11     0.2          0.77

Table 1: Orthogonal regression parameters for correlation data in Figure 6.
Aberration-induced pattern placement
errors across the field
Exposure tool optical aberrations have been demonstrated in the past to be a key contributor to in-die
PPE. It is also known that the magnitude of PPE is
dependent on pitch and feature size1,2. Furthermore,
Kye et al. have shown that such errors are also dependent on illumination conditions, such as coherency and illumination shape. Ueno et al. have published
a methodology for characterizing with high precision
the field dependence of PPE using grating-based
simultaneous overlay marks. In the current work,
such simultaneous marks have been reduced in size
to 13x13 microns in order to fit into the available
space at in-die locations while maintaining within-specification metrology performance. We have also
performed full electromagnetic simulations utilizing
aberration measurements (taking into account Zernikes
Z5 to Z15). The PPE sensitivity of the segmented
and non-segmented parts of the grating to individual
aberrations was calculated. As expected a near linear
response to Zernikes was found with negligible crosstalk
between individual aberrations. Figure 7 and Figure 8
display the results of these simulations compared with
on-product measurements from wafers from two different generation exposure tools (PAS/1100 & XT1250).
Figure 7: Simulated and measured pattern placement error dependence (X direction) across the slit for two subsequent generations of scanners.

Figure 8: Simulated and measured pattern placement error dependence (Y direction) across the slit for two subsequent generations of scanners.

Figure 9: CD SEM validation data for PPE in the x direction, measured on the same lot as the data in Figure 6, after etch.
These results demonstrate that this effect can be measured and correlated well to simulations. In three out
of four cases, the mean discrepancy was less than 0.5 nm
and below 1 nm in the fourth case. More significantly,
it is also observed that X-PPE peak-to-peak variation
across the slit was reduced from 4 nm to 1.5 nm from
Yield Management Solutions
one exposure tool generation to the next. Further
validation of the PPE data was also achieved by
performing CD SEM measurements. In this case,
optical and CD SEM metrology was performed
after etch, since photoresist coverage prohibits
SEM-based overlay metrology immediately subsequent to lithography. The results are shown in
Figure 9, where a mean offset of ~0.5 nm is
observed between the SEM-based and optical
metrology. A bias is also observed between the
optically detected PPE before and after etch of
about a nanometer. Such etch-induced PPE have
been observed in our previous work and will be the
subject of further endeavor. As we shall see in the
next section, despite the impressive improvement
in aberration-induced placement errors, significant
high order structure is still observed in the overlay
behavior across the scanner field, indicating that
we need to continue the search for the dominant
in-field contributors.
Full-field modeling
In this phase, the in-die overlay measurements were
compared with the predicted overlay based on a model
generated using a standard four-corner sampling plan as
follows. In a first step, the in-die high density sampled
data set was used to generate an interfield (wafer-level)
model. This model was removed from the raw data and
the remaining intra-field data was averaged over all fields
to generate an “in-die map”. This map was then compared
with (i.e. subtracted from) the linear intrafield model
generated using a standard four-corner sampling plan.
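A hedged sketch of that flow on synthetic numbers: a linear intrafield model is fitted to four corner samples and subtracted from densely sampled in-die data, and whatever survives is the high-order "in-die map". The coordinates, the model form, and the bow term below are assumptions for illustration only, not the production modeling code.

import numpy as np

def linear_intrafield_model(xy, overlay):
    # Fit overlay = a + b*x + c*y by least squares (four-corner style model)
    A = np.column_stack([np.ones(len(xy)), xy[:, 0], xy[:, 1]])
    coef, *_ = np.linalg.lstsq(A, overlay, rcond=None)
    return coef

def in_die_residuals(xy, overlay, coef):
    # Subtract the linear model from dense in-die data to expose high-order terms
    A = np.column_stack([np.ones(len(xy)), xy[:, 0], xy[:, 1]])
    return overlay - A @ coef

# Synthetic field: a linear term plus a bow the four-corner model cannot capture
rng = np.random.default_rng(4)
xy = rng.uniform(-12.0, 12.0, (200, 2))                      # field positions, mm
overlay = 1.0 + 0.2 * xy[:, 0] - 0.1 * xy[:, 1] + 0.5 * np.sin(xy[:, 0] / 4.0)
corners = np.array([[-12, -12], [-12, 12], [12, -12], [12, 12]], dtype=float)
corner_ovl = (1.0 + 0.2 * corners[:, 0] - 0.1 * corners[:, 1]
              + 0.5 * np.sin(corners[:, 0] / 4.0))
coef = linear_intrafield_model(corners, corner_ovl)
print(np.abs(in_die_residuals(xy, overlay, coef)).max())     # residual from the bow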
This methodology was used on several data sets from
production lots on different products and in each case, a
high order fingerprint was observed for the in-die map.
In some cases, this map displayed systematic deviations
from the linear model of up to 5 nm. It was also observed
that this signature varied in the scan direction, further
supporting the conjecture that aberration-induced PPE
is not the primary cause of high order intra-field overlay.
In particular, a correlation was observed between the
in-die map and the position within the individual chip.
Figure 10: Systematic in-die residuals over scanner field on product, remaining
after removal of standard four-corner linear model.
Conclusions
Aberration-induced pattern placement errors have been
characterized on product wafers for two different scanner
generations. We observe their magnitude to be reduced
to the extent that the systematic residuals of in-die
overlay measurements cannot be explained by aberration-induced PPE alone. Other error sources are still a
significant contribution to the total residual error and
need to be further characterized.
In-die micro overlay targets can be used to refine the
current overlay models. Sampling plans for overlay
measurements with a combination of conventional and
in-die overlay structures will become an important
compromise between throughput in production and
improved feedback and lot dispositioning.
Acknowledgements
This work was performed with the help of an R&D
grant from the European Community under contract
# IST-001854 Project OCSLI.
References
1.C. Progler, S. Bukofsky, and D. Wheeler, “Method to budget
and optimize total device overlay,” in Optical Microlithography XII, Luc Van den hove, Editor, Proceedings of SPIE Vol.
3679, 193-207 (1999).
2.Atsushi Ueno, Kouichirou Tsujita, Hiroyuki Kurita, Yasuhisa
Iwata, Mark Ghinovker, Jorge M. Poplawski, Elyakim Kassel,
and Mike E. Adel, “Improved overlay metrology device correlation on 90-nm logic processes”, Proc. SPIE vol. 5375,
222 (2004)
3. Jongwook Kye, Mircea Dusa, Harry J. Levinson “Linewidth
asymmetry study to predict aberration in lithographic lenses”,
Proceedings of, SPIE vol. 4346-134 (2001).
Process
Inspection
Reliable, Repeatable Wafer and
Tool Dispositioning in 300 mm Fabs
Bruce Johnson, Rebecca Pinto, Ph.D., and Stephen Hiebert, KLA-Tencor Corporation
Advances in wafer fabrication along with rising economic pressures on chipmakers have created greater challenges in
the dispositioning of wafers and process tools. Such a climate has rendered it nearly impossible for manual disposition
inspection to deliver even adequate results in a manufacturing environment or for process tool requalification. Manual
inspectors often miss gross process problems, passing wafers downstream, where they will later be scrapped or create yield
loss. Automated disposition, on the other hand, can integrate into a fab's defect analysis infrastructure to enable better
yield learning. These advantages, plus an automated system’s capability for high sampling, make it suitable for a low
cost of ownership inspection strategy.
Introduction
Advanced fabs require accurate and rapid
disposition decision-making during manufacturing, as well as a quick assessment of
tool and process module output. Operators
at manual or semi-automated inspection
stations have historically done much of this,
but these methods have been ineffective for
quite some time. Manual inspections are
expensive, and the results are well known
to be unreliable. This is especially true for
advanced 300 mm manufacturing, where
vanishingly small device features, factory automation, and large wafer surfaces challenge
the ability of the operator to assess the wafer
and lot; these conditions place large quantities of valuable wafers at risk.
There are cases in most fabs where an operator has missed process or tool errors which
have resulted in litho hot spots, CMP underpolish, scratches, underetch, splashback,
coating failures, and many other types of
gross errors. These can happen randomly on
one wafer in a lot, on some pattern of wafers
within a lot (such as every other wafer due
to process tool chamber/stage configuration), or on a whole production lot. Most
fabs have had significant yield hits from lots
which were, for example, not coated with resist, but
which were not recognized or sampled by the inspector
and ultimately had to be scrapped. The cost of a small
inspection error – missing a significant, but challenging-to-detect process error – can be very high.
In fabs with large product mixes, such as a foundry or
a development line, each lot may represent all of the
material for a specific customer. The loss of that lot,
especially if it happens late in the device manufacturing process, can be devastating to both the fab and its
relationship with the fab’s customer. For some smaller
fabless semiconductor customers, it can almost be fatal
because of the cycle time hit on a key part. Yet, these
failures do happen with surprising regularity when
fabs are not able to sample at the level and sensitivity
required to capture critical excursions consistently and
early on. Manual inspection has resulted in many cases
of missed problems and resultant loss because of its
inability to reliably find important defects. Automated
inspection, on the other hand, is well suited to performing the disposition job. Its major strengths are:
• Good sensitivity to detect defects of all types
• Consistent results from tool to tool, day to day,
and fab to fab
• High throughput to adequately sample every lot
Figure 1. Examples of product wafer frontside litho errors targeted by disposition inspection: repeat exposure, splashback, scratch, defocus, film/PR problem, shot defocus, develop NG, un-exposure, hot spot, poor coating, notch discolor, and particle. (Problems routinely missed by inspection operators are highlighted in red.)
• Low cost of operation
• Factory automation to work within a fully
automated fab
• Detailed, documented results compatible with
fab defect analysis systems
This paper examines these aspects of disposition inspection for both manual and automated approaches.
The job of disposition
The vast majority of production lots are good, and should be passed on to the next processing step. However, every once in a while, there may be a processing problem which impacts any of:
• A large portion of a wafer
• One or several wafers within a lot
• An entire lot
• Multiple lots
These problems may come from the challenging process windows of today’s advanced technology, material problems, process recipe errors, process drift, operator error, or production tool errors.

Low-overhead automatic recipe creation
Recipe creation with an automated wafer and tool disposition system can offer many economic advantages for a fab:
• Simple, reality-based recipe design and user interface
• Derivative recipes (layout, layer, disposition rules)
• Low skill level required, especially for derivative recipes
• Automated recipe completion at first run
• Consistent, fast results between recipes
Dispositioning vs. defect line monitoring
The disposition inspection’s goal is to quickly find these larger problems and to identify whether there is an overall problem with the process that requires attention. It is not the goal of the inspection to find the smaller, subtle problems which are ongoing yield detractors; that is the job of the defect line monitor, or of film thickness, overlay error, and CD measurements. Those inspections and measurements are typically done on a small statistical sample (two or three wafers per lot) and fed directly into a process control scheme. Most fabs perform automated disposition inspection on every lot, since the problems can happen on a lot-to-lot basis and result in expensive loss. Because it is done on (almost) every lot, it is important that the inspection also be fast, to keep production moving.

Most material will pass. The rejected material may go into one of several paths, depending on the process module and the type and severity of the problem:
• Rework – often handled by the operator
• Scrap
• Waive – for captures not meeting rework or scrap criteria
• Engineering disposition – for issues which the operator might not be positioned to handle
In addition, the disposition result may flag the need to stop the process or production tool, either by a qualified operator or the responsible engineer.

Integrating with the automated factory
Even the best manual inspector does not run on factory automation, and s/he needs to be in proximity to the wafers, which violates the goals of a highly automated fab. Automated inspection, on the other hand, is fully compatible:
• Automatic material handling
• Remote review
• Status reporting
• Remote host control
• Alarms for wafer and lot results
• Alarms for tool status
Disposition inspection criteria
The disposition needs to be accurate and consistent; it
should neither reject good material (often called the
Alpha risk), nor should it pass bad material (Beta risk).
Both errors can be costly. The limitations of manual
inspection are known to give it a high Beta risk, and
this shows up in higher downstream scrap and yield
loss. Automated inspection’s higher capture rate of all
defect types improves this, but it should not do this
at the cost of higher Alpha risk, which may impact
fab productivity. It is important that the disposition
inspection be immune to normal process variation
which could result in false rejects. Along these lines,
it is also useful for the inspection to be able to bin
defects according to their level of criticality to yield
(i.e., nuisance vs. killer defects). Such a capability can
further speed up disposition decision-making. The
sample size is also important to both of these risks;
the sampling should be statistically valid for the
nature of the problems that the inspections target.
The inspection needs to be fast, so that it does not hold
up production material, and it must carry a low operational cost to be feasible in a production environment.
Yet, enough material must be sampled to have good
discrimination and accurate lot disposition.
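To see why the within-lot sample size drives the Beta risk, a rough back-of-the-envelope model (not from the article) can help: if an excursion affects only a few wafers in a lot, a small manual sample may simply fail to contain an affected wafer, before any capture-rate losses are even considered. The sketch below uses a simple hypergeometric calculation; the lot size, bad-wafer count, and sample sizes are illustrative assumptions only, and perfect capture of a bad wafer (once it is in the sample) is assumed.

from math import comb

def p_sample_contains_bad(lot_size: int, bad_wafers: int, sampled: int) -> float:
    """Probability that a random sample of `sampled` wafers includes at least
    one affected wafer (hypergeometric model, perfect capture assumed)."""
    if sampled >= lot_size:
        return 1.0
    p_miss = comb(lot_size - bad_wafers, sampled) / comb(lot_size, sampled)
    return 1.0 - p_miss

# Illustrative only: a 25-wafer lot with 2 misprocessed wafers.
for n in (3, 6, 12, 25):
    p = p_sample_contains_bad(25, 2, n)
    print(f"sample {n:>2} wafers -> {p:.0%} chance the sample even includes a bad wafer")

Even before the capture-rate differences shown later in Figure 4 are taken into account, a three-wafer manual sample can miss the affected wafers most of the time; inspecting every wafer removes that term from the Beta risk entirely.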
Figure 2. Manual and automated Viper 243X inspection time comparison.
BACKSIDE DEFECT INSPECTION
Numerous process steps can leave particles on wafer backsides, due to the process, maintenance or cleaning problems, or handling. When these are carried on to the next step, the backside particles can result in process problems at those subsequent steps. The most common example is in lithography, where a particle causes a bump on the wafer frontside as the wafer is pulled onto the scanner chuck. The scanner can then have a focus error at that point, resulting in a CD error or pattern failure.
In some cases, the particle will stay with the wafer, but in other cases, it can transfer to the chuck. The high force of the vacuum pull-down often causes it to almost fuse to the scanner chuck. This results in a hot spot at the same location on each wafer. Once the problem is identified, the scanner must be taken down to clean off the particle or replace the chuck, and a requalification is always required. The scanner can incur significant downtime.
Wafer backside particle (right side) caused a defocus hot spot on the wafer frontside. The captured image of the particle is seen at the far right.
A backside product disposition inspection is typically done at process steps prior to litho (and other sensitive process steps) to avoid the rework, yield loss, and hit to scanner productivity. Because the litho process itself can also create backside defects (through splashback, for example), a backside inspection is also typically done as part of the develop inspect disposition.
Based on these issues and customer inputs, optional backside inspection has been added to the KLA-Tencor Viper 2435 automated disposition system. An additional backside stage is joined with the primary scanning stage; the scanning motion of the frontside inspection also scans the backside stage. After a wafer is inspected on the first stage, it is transferred to the second stage, where it is scanned simultaneously with the following frontside wafer scan. The backside stage has its own darkfield illumination and camera to perform this inspection. Because the backside inspection adds a step to the sequence, the system throughput is slightly reduced.
Defect results are shown as an additional channel which may be reviewed just like any other defect, and may be transferred to KLA-Tencor Klarity Defect software. Fabs have used this new capability to identify the cause of hot spots. The above graphic shows an example of a hot spot which was caused by a confirmed backside particle.
This backside inspection is a convenient capability to add to an existing frontside disposition inspection. An alternative to consider where no frontside inspection is being done is to use a KLA-Tencor Surfscan SP1 unpatterned wafer inspection tool with a backside inspection module.3

Finally, the disposition inspection needs to be compatible with the level of factory automation, especially for advanced 300 mm fabs. This is a challenge for manual inspection, since fabs minimize the number of operators on the production floor.1 Material movement to and from the tool, host control, and results integration are all important considerations.
In addition to product disposition, fabs use similar inspection strategies to requalify production tools and cells. After a production tool has been down for maintenance or to correct a problem, it must be requalified before being released back into production. This requalification, depending on the specific process step, typically includes measurement of the appropriate parameters (overlay and CD for litho, film thickness for films and CMP, etc.), microdefect qualification, and gross process qualification. The requirements described for disposition inspection apply to this tool qualification inspection as well.
These requirements will now be explored in greater detail for manual and automated disposition inspection.

The manufacturing challenge
Device manufacture requires fast, accurate decisions to keep good product moving. Integral to this is high productivity from process tools, so rapid decisions are needed to requalify them periodically or after maintenance. In most cases, fabs want to verify that
every production lot is good, paying particular attention to problems which could result in significant yield hits. Because the vast majority of lots are good, the inspection is actually very tedious. The operator expects product to be good and becomes complacent (or bored), often missing the improperly processed wafers or lot. While a manual inspector must hunt (often unsuccessfully) for defects, an automated system can spend more productive time actually viewing the problems found, and making clear and documented disposition decisions.

Figure 3 compares the complete cycle time for a manual inspector and an automated disposition tool for a representative case. The result is that a problem can be identified more quickly with automated disposition, putting less production material at risk.

Figure 3. A comparison of manual and automated (Viper 243X) lot disposition time with 300-mm wafers inspected at a frequency of six wafers per lot. Source: Powerchip, from YMS Europa poster paper.

Figure 4. Manual and automated (Viper 243X) defect capture for all defect types (frontside inspection only). In every case, manual inspection missed defects, and in some cases, complete defect types. Note that dispositions based on the low manual capture rates would result in downstream scrap and yield loss.
One operator can run and
manage multiple automated
disposition tools at the same
time, which s/he obviously
cannot do for manual inspection. Consequently, from a cost
of ownership (CoO) standpoint,
operation costs with an automated
inspector are approximately 1/3
of those of a manual inspection
operator. In addition, the disposition results are much more
consistent than those produced
by multiple inspection operators.
Any automated inspection system
requires recipes to run. Fabs state that it is necessary to
minimize any overhead associated with this disposition
activity. Using automated recipe creation (ARC), which
draws on recipes already created, most new devices and
layers should be completed in well under 10 minutes
by an average production operator. Derivative layer
recipes are automatically created by the tool at first run.
Automated inspection operation should not require an
engineer or highly trained technician.
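As a loose illustration of the derivative-recipe idea described above (this is not the Viper recipe format or the ARC interface; all field names and values are invented), a derivative recipe can be thought of as a copy of an existing base recipe with only layer-specific settings overridden, which is why creating one requires little time or skill:

# Hypothetical sketch of a "derivative recipe": a base device recipe reused
# with only layer-specific fields changed. Field names are invented.
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class InspectionRecipe:
    device: str
    layer: str
    layout_map: str          # which dies/areas of the wafer to inspect
    sensitivity: int         # detection threshold setting
    disposition_rules: str   # name of the go/no-go rule set to apply

base = InspectionRecipe(device="device_A", layer="metal1_ADI",
                        layout_map="full_wafer", sensitivity=50,
                        disposition_rules="litho_default")

# A derivative recipe for a new layer reuses everything except what changed.
derivative = replace(base, layer="via1_ADI", disposition_rules="etch_default")

The point of the sketch is only that the incremental information needed per new layer is small, so an average production operator can complete it quickly.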
Limitations of manual inspection
Multiple fab studies have shown that manual inspection
finds only one-tenth to one-quarter of what automated
disposition inspection uncovers. These errors come from
spinners, exposure tools, developers, etch tools, polishers, other process equipment, and handling. They can be
visible to the naked eye or with moderate magnification.
But manual inspection completely misses most defects,
often because the wafer and inspection conditions are
challenging, the operator looks in the wrong place, or
the operator is not paying sufficient attention. In many
cases, inspection operators have missed the fact that
some wafers had no pattern — on the whole lot, even on
multiple lots — or the material is not sampled. As automated inspections at ADI (after develop inspect), AEI
(after-etch inspect), and after polish have become more
routine, it is clear that manual inspection — although
relatively low in initial cost — leaves the door wide
open for revenue and profitability loss due to increased
scrapped wafers and lower yield.
Automated inspection has the advantage of providing real data rather than anecdotal descriptions from operators. The real data can then be used to understand trends and isolate sources, similar to what is done with micro defect data, allowing quantification and prioritization of problem severity using Klarity Defect or alternative analysis systems. In almost all cases, fabs implementing automated inspection of macro defects have been surprised by the extent of defects and process issues, and by their newfound ability to respond quickly to fix them.

While the operational costs of both the manual and the automated disposition inspections are relatively low, they are not the most important cost. The main goal of the disposition inspection is to prevent bad material from continuing down the line, where it will either be scrapped or result in yield loss. Since these types of process errors can be very large in scope, these costs have been known to be very high. When rework is possible, such as in litho or at some CMP steps, the value of the wafers can even be rescued. The other major goal of disposition inspection is to identify a process tool which is unfit for production. This may be a result of the product wafer inspection, or of a periodic cell monitor.

As seen in Figure 4, manual inspection was found to be inferior in both its sensitivity across all defect types and its ability to even see certain defect types. Each missed problem will move downstream, only to be scrapped later or rejected as a yield loss.

Cost and benefit of ownership: manual vs. automated inspection
In summary, the overall cost of manual inspection is higher than that of automated disposition inspection:
• Manual CoO operational cost is higher for a statistically valid sample size.
• Cost due to missed defects is higher for manual inspection, through losses from scrap and decreased yield.
• Risk of zero-yield lots is higher with manual inspection; this can affect a fab’s relationship with its customer.
• The value of the information is greater from the automated disposition inspection.
Table 1 summarizes some additional details of comparison.

Table 1. A comparison of manual and Viper 2435 disposition inspections reveals that automated inspection is more comprehensive and faster in delivering go/no-go decisions. (* Backside inspection requires more time per lot.)

Criterion | Manual Disposition | Viper 243X Disposition
Inspection area | 5 points (<10%), operator dependent | Whole wafer (100%)
Within-lot sampling | 3 wafers/lot (10%) | Whole lot (100%)
Lot sampling | 100% (desired) | 100%
Throughput | 23-26 wph* | 100 wph*
Inspection time/wafer | 1.7 to 2.7 minutes* (3 wafers only) | 0.6 minutes* (25 wafers)
Lot inspection time | 5-8 minutes | 15 minutes
Frontside illumination | Brightfield | Brightfield and darkfield
Device (chip) inspection | Yes | Yes
Scribe line inspection | Limited | Yes
Unpatterned area inspection | Yes (although often ignored) | Yes
Wafer edge exclusion | 0 mm | 0 mm
Edge bead removal inspection | Inconsistent | Yes
Backside inspection* | Optional | Optional
Consistency between operators/tools, and over time | Very poor | Very good
Report detail & archive | Limited operator note | Lot summary, wafer report, gallery report, recipe report
Image save | No | Wafer images, defect clips
Factory automation | No | Yes

Accurate disposition decisions
Ideally, whatever method is used for reaching a disposition, the decision should be as close to the specification as possible. It is sometimes hard to quantify the correctness of decisions. One fab performed an evaluation of two automated inspection tools to see which one best matched the engineer responsible for the process module. The evaluation was done on product wafers with normal process variation.
Figure 5 shows the deviation of the two inspectors from the disposition decision made by the
responsible engineer. In the cases where the other
tool found too many defects, it was found that
these were not real defects, so the tool would
be rejecting a good lot (Alpha risk), or incorrectly placing it on hold. Where the tool found
far fewer defects, it was failing to reject a bad lot
(Beta risk); this would result in downstream scrap
or yield loss. The 243X disposition decisions
closely matched the engineer’s decisions.
"CPWFoUPPNBOZEJFSFKFDUFE
7JQFS9
0UIFSBVUPNBUFEUPPM
%JGGFSFODFOVNCFSPG%JF'BJMFE
Fabs require the capability to fine-tune go/no go
disposition rules based on defect distribution,
both by size and location. Rules can include zones
on the wafer, as well as overall lot defectivity.
This helps to optimize both the alpha and beta
risks of the disposition decisions. All results must
be available for review, and should be able to be
sent to the fab defect analysis system for correlation and historical analysis.
#FMPXoUPPGFXEJFSFKFDUFE
-PUOVNCFSXBGFSOVNCFS
Figure 5. Field comparison of two automated inspectors in a competitive head-to-head, deviation
from engineer decision.
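The paragraph above describes rules built from defect size, location, wafer zones, and overall lot defectivity. A minimal sketch of such a go/no-go rule follows; the zone radius, thresholds, and field names are invented for illustration and are not the Viper 243X rule syntax.

from dataclasses import dataclass

@dataclass
class Defect:
    x_mm: float     # position relative to the wafer center
    y_mm: float
    size_um: float
    killer: bool    # binned as yield-critical rather than nuisance

def wafer_disposition(defects, edge_zone_mm=140.0, max_center_killers=5,
                      max_edge_killers=15, max_total=200):
    """Return 'pass', 'hold', or 'fail' for one 300 mm wafer (illustrative thresholds)."""
    def in_center(d):
        return (d.x_mm ** 2 + d.y_mm ** 2) ** 0.5 < edge_zone_mm
    center_killers = sum(1 for d in defects if d.killer and in_center(d))
    edge_killers = sum(1 for d in defects if d.killer and not in_center(d))
    if center_killers > max_center_killers or edge_killers > max_edge_killers:
        return "fail"    # killer-defect limits exceeded: rework or scrap path
    if len(defects) > max_total:
        return "hold"    # high overall count: send to engineer review
    return "pass"

Separating zone-specific killer limits from an overall count is one simple way to keep the Alpha risk low on noisy but benign wafers while still rejecting genuine excursions.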
Litho defocus disposition – case studies
Figure 6. Defocus detection threshold (probability of detection versus programmed defocus, in µm) – the Viper 2435 is demonstrated to be significantly more sensitive than manual inspection. Programmed defocus settings are indicated on the wafer maps.
Collapsing lithographic process windows mean that there
is increasing likelihood of
areas of the wafer being out of
focus. Defocus can result from
scanner errors for a field or
within a field, and due to particles on the wafer backside or
on the scanner chuck, resulting in a “hot spot.” These
areas of defocus may result in
complete pattern failure or a
significant change in CD. In
most cases, these are very hard
to detect on real product wafers, but they result in yield
loss. In the cases where it is
due to a persistent particle on
a chuck, every wafer may have
yield loss at that location.
An experiment was run to
determine the ability of automated and manual inspectors
to identify defocus. A metal 1
logic product wafer was created with fields with known
amounts of defocus, and then
inspected by both methods. The results are shown in Figure 6.

The automated tool was found to be significantly more sensitive than the inspection operator. The fab’s process window specification for this layer is 0.1 µm defocus; the automated inspector found 50% of the defective fields at this limit. The inspection operator did not find defocused fields until they were well beyond the spec. In this case, the defocus covers the full field. Hot spots may be smaller, and the 2435 has been shown to be effective at finding them. Advanced processes are quite sensitive to even small particles on the backside.

In some cases, the backside event is not a particle on a specific wafer, but one on the scanner chuck. Automated disposition results very quickly indicate the problem, as can be seen in the lot review screen. In this case, a scanner with twin stages produced a hot spot on every second wafer. This can easily be seen in the cassette map, and is highlighted on the individual wafer thumbnail results. Such an inspection result may be used to drive a CD SEM to the specific location and measure the pattern to determine the CD deviation.

While Figure 7 clearly shows the nature of the problem, not all such cases are as clear. A small stage problem may be right on the threshold of the process window, and may not always produce material out of specification. Such a low-level defect may be isolated to its source by stacking the results in the defect analysis system across wafers and even across lots.

Figure 7. The same defocus hot spot detected on the wafers as indicated.2

CMP disposition – case studies
CMP is a process module which is still seeing ongoing challenges and rapid process development. Many fabs have implemented automated disposition inspection of macro defects because of the relatively high rate of such defects and because of process instability. As an example, a fab sampled a portion of a Cu CMP lot and found one wafer with underpolish defects, as shown in Figure 8. The cassette map shows that wafer 22 was rejected according to the disposition rules, and the wafer map clearly shows the problem, as does the saved wafer image.

The CMP polish tool did not flag that there was anything wrong with either the wafer or the lot. The inspection result triggered an inspection of the whole lot, where it was found that four additional wafers were bad.

Figure 8. Cu CMP lot sample with residual Cu detected. Disposition inspection thumbnails are shown above, and the sampled cassette map is to the left.

In Figure 9, it was possible to rework the wafers by sending them for additional polishing. This saved scrapping high-value, near-end-of-line wafers.
Figure 9. Cu CMP lot with residual Cu detected on five wafers. The cassette map clearly shows the rejected wafers.

In another case (see Figure 10), an excursion was highlighted during a disposition inspection at metal CMP. Eight full lots (200 wafers) were found to have a similar type of wafer edge defect. Investigation showed that one ECP tool was causing the problem across all the lots. While the wafers had to be scrapped, the quick recognition of the problem allowed the ECP tool to be shut down, minimizing damage to additional lots.

Figure 10. Post-CMP disposition inspection identified eight lots of wafers with edge damage. Wafer maps of the failed lots are shown above.

Process flow
Automated disposition inspection fits into the normal process flow, taking the place of the manual inspection. Because the system is fully factory automation compliant, it often fits into the flow more easily than the manual step. Figure 11 shows an example of how automated disposition inspection can fit into a litho process flow.

Of course, the disposition inspection may be performed within any sequence, but the above-described flow is the most common BKM (best-known method), following the expected yield and cost cascade. In advanced litho processes, overlay is typically the greatest yield detractor, so it makes sense to do this first. The CD SEM is often the busiest, so any material which can be routed away from it (through rejection at a prior disposition) saves its capacity.

Note that this shows all three inspections and measurements being performed, but there are layers where neither overlay nor CD measurement is done. These non-critical steps can still have process problems which result in gross process errors (such as no resist or no develop), and should still have a disposition inspection performed.

For each wafer or lot which is rejected, an operator will review the result to determine the course of action. In many fabs, for layers where the cause and fix are clear, the operator may directly take the corrective action, particularly when it involves rework. Usually, however, before a lot is scrapped, it will go to an engineer for disposition.
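The routing just described (and drawn in Figure 11) amounts to a short decision cascade. The sketch below is only a schematic of that flow, with invented function and result names; it is not a fab host-software interface.

# Schematic of the litho disposition cascade described above (names invented).
def route_lot(overlay_in_spec: bool, defect_disposition: str, cd_in_spec: bool,
              operator_can_rework: bool) -> str:
    if not overlay_in_spec:
        return "rework or feedback"      # overlay checked first: usually the biggest yield detractor
    if defect_disposition == "fail":
        if operator_can_rework:
            return "rework"              # clear cause and fix: operator handles it directly
        return "engineer disposition"    # final call: waive, scrap, or stop the tool
    if not cd_in_spec:
        return "rework or feedback"      # CD SEM last, preserving its limited capacity
    return "pass to next step"

The ordering reflects the cost cascade described above: the cheapest, highest-leverage checks reject material before it consumes scarce measurement capacity.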
Similar process flows have been established for other process modules, with attention to the most cost-effective sequence.
CAPTURING ALL DEFECT TYPES
The goal of disposition inspections is to capture all of the types of errors which can happen. Several issues make this very challenging:
• Defects may have very different reflectivity or scattering profiles. Missing resist, hot spots, copper residual, particles, scratches, and etch errors all look very different from one another.
• Some defects may be uniform, either covering the whole wafer or consistently appearing at a specific location. Manual and some automated inspection may miss these because there is no fixed reference. Problems to address include:
– No resist on a wafer
– No exposure
– No develop
– No etch
– Process window failure which occurs on a particular pattern (in litho, etch, and CMP)
• A wafer (especially 300 mm) is a very large surface to look at for subtle errors.

KLA-Tencor’s Viper 2435 automated disposition system (and its predecessor) is designed to specifically target a wide range of defect types. This is done through several aspects of the design:
• Hardware – Optical channels are designed to detect defects with different characteristics:
– Frontside brightfield (direct reflection imaging) for defects which change the appearance, such as color due to film errors, spin or develop problems, residual pattern, arcing, striations, and backsplash
– Frontside darkfield (scattered light imaging) for defects which result in changes in scattering, such as scratches, defocus, particles, peeling, pattern collapse, and under etch
– Backside darkfield for backside particles and damage
These channels are inspected concurrently, maximizing throughput. The frontside inspections are done through two channels, and the backside is inspected while the next wafer is on the frontside stage (discussed in the backside defect inspection sidebar).
Concurrent brightfield and darkfield channels feed frontside wafer images to the image processor to capture all defect types.
• Algorithms – High-speed image processing extracts defects from each of the image channels, while suppressing noise from process variations:
– Die-to-die – Detects random defects such as scratches, striations, and others which vary across the wafer. This will also capture defects which repeat from exposure field to exposure field
– Field-to-field – Detects additional random defects, including scribe line problems
– Wafer-to-wafer – A wafer reference identifies missing-film, uncoated, undeveloped, or unetched wafers, where the entire wafer processing is incorrect
– EBR – Detects errors in edge bead removal
– Edge Damage Check – Detects problems with the wafer edge and bevel
– Unpatterned or partially patterned area outside the main device array – Detects random and process errors outside the main patterned area
The specialized hardware and algorithms combine to cover a wide range of defect types, giving good sensitivity to the types of problems which fabs need to capture at disposition inspections at litho, etch, CMP, films, and other process modules.
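As a rough illustration of the die-to-die comparison idea in the sidebar (this is not KLA-Tencor's algorithm; the array handling and threshold are assumptions for a minimal sketch), each die image is differenced against a neighboring die, and pixels with large residuals are flagged as candidate defects:

# Minimal sketch of die-to-die detection: difference each die against its neighbor.
import numpy as np

def die_to_die_masks(die_images: list[np.ndarray], threshold: int = 30) -> list[np.ndarray]:
    """Return a boolean defect mask per die by comparing it with the next die."""
    masks = []
    for ref, test in zip(die_images, die_images[1:]):
        # cast to a signed type so the subtraction does not wrap around
        diff = np.abs(test.astype(np.int16) - ref.astype(np.int16))
        masks.append(diff > threshold)   # large grey-level differences flagged as candidates
    return masks

A defect that appears identically in both dies being compared cancels out in such a difference, which is why the wafer-reference (wafer-to-wafer) channel described above is needed for uniform, whole-wafer problems such as missing resist.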
Figure 11. Litho disposition process flow with the 2435 performing defect disposition. (Stepper/track output proceeds through overlay measurement, Viper 243X defect disposition with operator review and engineer disposition, and CD SEM measurement, with rework, feedback, hold, waive, and scrap paths on out-of-spec or failing results.)
Process tool requalification takes a similar approach, although it may vary depending on whether the qualification is done on product or test wafers. In litho, for example, the requalification is typically done with PCM4 (Photo Cell Monitoring) on resist test wafers; these same wafers may be used for the gross error requalification as well. This quickly and accurately qualifies the resist coat, exposure, and develop process. If PWQ5 (Process Window Qualification) is used for the qualification, then additional test or product wafers are processed for the gross defect portion of the requalification.
Conclusion
Fast, accurate disposition decisions are needed in today’s
300 mm fabs. The disposition decisions require the
ability to consistently detect a broad range of defects
and process failures on the wafer, at the wafer edge,
and even on the backside, at as low a cost as possible.
That low cost should not, however, come at the expense of broad defect-type capture, a low false count rate, or accuracy. Manual inspection approaches have been shown to be inadequate for both product disposition and process tool requalification, due to their limitations in sensitivity to wide-ranging defect types, in repeatability, and in agreement with engineering decisions. Manual inspection has also been shown not to meet the demands of factory automation and good documentation of the results.
Automated gross production disposition and tool qualification have been in use in many wafer fabs, and have
been shown to be capable of meeting the requirements
of advanced 300 mm fabs. Today’s systems benefit from
the learning and development gleaned from prior generations while offering new innovations such as backside
inspection. Automated inspection has been shown to
be a consistent and reliable tool for fast disposition in
litho, CMP, etch, and other process modules. Furthermore, automated disposition inspection in fabs has been
shown to result in cost savings from reduced scrap and
improved yield, and through increased fab productivity.
Acknowledgement
The authors would like to thank Scott Ashkenaz for his
contributions to this article.
References
1. Electronic News, “Automation to Proliferate in Intel Manufacturing,” December 2005.
2. KLA-Tencor, Think Shrink, 2004.
3. KLA-Tencor, Start Yield Enhancement from the Wafer Backside, 2005.
➤ Litho Challenge
“When our CDs went out of control, we didn’t just need to know that something was wrong. We needed to know why and how to fix it.”
• DATA • INFORMATION • KNOWLEDGE • DECISIONS
“Now it’s not a problem.”

➤ Litho Solution: Linewidth Control
In today’s constantly changing manufacturing environment, predictable linewidth control is extremely difficult. Yet litho engineers have found that by using the right measurement tools with the right data analysis, they can significantly reduce across-chip linewidth variations. Increasing transistor speed. Improving leakage current. And maximizing yield and electrical device performance. You too can make rapid, knowledgeable decisions regarding CD errors and how to fix them. Together we can achieve a high-yielding, reliable patterning process.
• eCD – In-line CD SEM metrology for in-device CD control and design verification
• SpectraCD – In-line spectroscopic CD metrology for CD and linewidth profile APC
• KT Analyzer – Automated litho cell correctibles for CD and overlay registration
• PROLITH – Lithography simulation for process optimization
• Applications expertise
• Yield consulting
For more linewidth control strategies, visit www.kla-tencor.com/litho – your online lithography resource center.
© KLA-Tencor Corporation
Product News
SpectraCD-XT
Cost-effective Optical CD Metrology
SpectraCD-XT provides high-performance 2D/3D CD metrology for the 90-nm
and 65-nm nodes at the lowest cost per yield-relevant measurement. Optimized to
meet chipmakers’ increased sampling requirements without compromising sensitivity, the SpectraCD-XT features sub-two-second move-acquire-measure (MAM)
time and a two-fold increase in throughput (to more than 100 wph) compared to
previous-generation platforms. Leveraging spectroscopic ellipsometry technology, the system provides leading measurement precision for advanced applications
where device structures are highly complex and require multiple types of measurements, such as shallow trench isolation (STI), gate, and spacer. Oblique illumination optics enable the identification of smaller structural anomalies—such as
notching and footing of gate profiles—that more accurately correlate to end-of-line
device performance and yield. Improved model setup and analysis tools available
with the SpectraCD-XT speed library generation by 30 to 60x, providing significant savings in time and engineering resources.
SpectraCD-XT
TeraScan STARlight-2
Cost-effective Reticle Contamination Inspection
The TeraScan STARlight-2 provides cost-effective inspection of contamination
and electrostatic discharge on advanced 65-nm node photomasks at high resolution and throughput. Its enhanced capabilities are uniquely suited for inspecting
mainstream XRET (extreme resolution enhancement technique) photomasks,
as well as detecting crystal growth and other progressive defects, which lead to
gradual yield roll-off. Smaller pixel sizes (125 nm and 90 nm) provide the resolution and sensitivity needed to detect extremely small mask contaminants well
before they impact the process window. Improved algorithms enable contamination detection in high-density patterned areas. TeraScan STARlight-2’s full-field
contamination inspection capability allows detection in scribes and borders —
where progressive defects generally emerge—as well as in high density patterned
areas for both single-die and multi-die reticles. After defects are captured,
a ReviewSmart option on the TeraScan STARlight-2 dramatically reduces the
number of defects that require review by grouping similar defects into common
bins to facilitate disposition and speed corrective measures.
TeraScan STARlight-2
Viper 2435
Automated Wafer and Tool Dispositioning System
As KLA-Tencor’s newest automated 300-mm wafer and tool dispositioning system,
the Viper 2435 delivers quick go/no-go decisions to help chipmakers avoid production delays and minimize yield loss. Able to capture the broadest range of defect
types at high throughput, the Viper 2435 can be implemented inline quickly to
monitor the lithography, chemical mechanical planarization (CMP), and etch process
modules. Automated dispositioning with the Viper 2435 delivers a 67 percent lower
cost of ownership than manual inspection. Process engineers are equipped to identify critical defects through the tool’s enhanced signal-to-noise ratio and adaptive
thresholding techniques, which result in stronger nuisance suppression capabilities.
Engineers can then quickly determine which wafers need to be scrapped or sent for
rework, and promptly flag problematic tools or process modules. Extendible and
fully 300-mm factory automation-compatible, the Viper 2435 can help chipmakers
accelerate time to decisions.
Viper 2435
2367
UV Line Monitor for Rapid, Low Cost of Ownership Yield Ramp
Extending the widely adopted 2365, the 2367 is the latest-generation UV
brightfield patterned wafer inspection solution from KLA-Tencor. With increased
sensitivity and a 2X faster data rate, the 2367 is equipped to deliver higher sampling
rates and more effective capture of yield-impacting defects on critical FEOL and
BEOL layers such as gate etch and Cu CMP. The 2367 complements the 2800
DUV brightfield inspector in a mix-and-match inspection strategy that enables
chipmakers to customize the illumination wavelength applied to each unique
layer for more effective defect capture. For faster tool and yield learning as well as
production ramps, the 2367 features a common UI with the 2800 as well as the
Puma 9000 darkfield inspection solution and the eS32 e-beam inspection tool.
2367
eS32
e-Beam Inspection for Faster Decision-making
Providing the widest capture of systematic, yield-impacting electrical and physical
defects in FEOL and BEOL applications, the eS32 is integral in an inspection
strategy facilitating faster decision-making. With its unique ability to find and
fix electrical defects earlier, before production delays or yield loss result, the eS32
enables chipmakers to continue innovating to maximize transistor performance.
Methodologies such as the proprietary, on-board µLoop along with algorithms for
advanced binning facilitate faster time to root cause. Enhancements in sensitivity
and throughput, along with its ease of setup and use, enable the eS32 to provide
a low cost of ownership. Rather than spending time calibrating the tool, for
example, yield engineers can focus on implementing applications that can help
them improve yield.
eS32
DesignScan
Lithography-aware Design Inspection
DesignScan is the industry’s first full-chip process window inspection system for
inline post-RET reticle design layout inspection. Using DesignScan, chipmakers
can catch systematic design defects on critical layer mask designs at the 90 nm
and below nodes, before building the reticles. The system was developed with
KLA-Tencor’s high speed image computing platform, proprietary calibration
process, highly accurate physics-based process models, and defect review and
disposition methodologies.
Across the process window and at best focus and exposure conditions, the system
delivers the fastest time to results for comprehensive design data defect detection.
The tool facilitates communication of design-related information between foundries and the fabless design community. Easy to use and available in different speed
configurations, DesignScan provides the lowest cost per inspection while helping
chipmakers reduce their frequency of mask respins.
DesignScan
Candela CS20
High Brightness LED Production Monitor
The Candela CS20 is the first automated wafer inspection system from KLA-Tencor
designed to address the defect management requirements of the rapidly growing
high brightness light-emitting diode (HB-LED) market. Leveraging a proprietary,
multi-channel detection architecture, the Candela CS20 can inspect transparent
wafers and epi layers for micro-pits and other defects non-destructively at
throughputs of up to 25 wafers per hour—enabling, for the first time, a true
production line monitor for wafers used to produce HB-LED devices. During
HB-LED production, contaminants such as particles and stains can alter film
characteristics or cause adhesion problems for subsequent layers. Surface and
sub-surface defects, crystal dislocations, and excessive roughness can impact
subsequent processes and substantially degrade device performance and yield.
Traditional inspection methods rely on manual review by an operator—making
them extremely slow, unreliable, and often destructive. In addition, these
methods are not easily scaled up to meet increasing production volumes. The
Candela CS20 offers the sensitivity, versatility, and throughput needed for both
process development and epi-growth production control for HB-LED manufacturing. The Candela CS20 is the first new product to come out of KLA-Tencor’s
recent acquisition of Candela Instruments, a leading supplier of inspection systems
for the data storage, compound semiconductor, and digital imaging markets.
Candela CS20
P-16 and P-16OF
Contact Stylus Profilers
The P-16 and P-16OF (Open Frame) are new contact stylus profilers designed for
automated step height, surface contour, and roughness measurements. Providing
detailed 2D and 3D topography analysis for a variety of surfaces and materials,
these programmable surface metrology tools are utilized in a wide range of applications and industries. The profilers’ new sequencing feature, included as a standard
configuration, is key to meeting the needs of customers who want the added convenience of automated wafer mapping.
With their user friendly and powerful Apex software, the P-16 and P-16OF
support the creation of custom reports; apply a variety of ISO standard filtering
methods and unique leveling techniques; set tolerance limits for statistical process
control metrics; and calculate stress, bearing ratio, distance, volume, density,
flatness, peak count distribution as well as an extended list of parameters for
step height, roughness and waviness measurements. Apex software is available
in several languages. Common applications include CMP monitoring and bump
metrology (semiconductor), HDD head and disk tribology characterization (data
storage), etch rates and film stress analysis (MEMS), general thin films and chemical coating analysis as well as surface characterization of electronic components,
opto electronics, flat panels, and biomedical devices.
P-16
P-16OF
MRW3 Quasi-static Tester
Measurement System for MRAM and HDD Industries
The MRW3 quasi-static wafer tester measures the magneto-resistive
characteristics of MRAM (magneto-resistive random access memory),
magnetic recording heads, and other magneto-resistive devices. The MRW3
is used to test the devices while they are still in wafer form — prior to the
wafer being cut into individual magneto-resistive devices or chips.
The MRW3, configurable for both 200 mm and 300 mm wafers, delivers
increased productivity compared with other quasi-static wafer testers, with
industry-leading repeatability (<0.5 Oe) and resistance (1,000 kohms). The tester speeds MRAM stack development and tightens the process control loop by enabling access to MRAM bitcell read/write performance without the need for CMOS integration. The MRW3 applies external magnetic fields in the plane of the wafer and measures the device or thin film electrical response. It obtains wafer-level transfer curves and enables early access to patterned Magnetic Tunnel Junction (MTJ) performance for control of magnetic processes. Improvements in the system software also dramatically reduce set-up time. The system integrates with standard semiconductor probers, and is Class 10 compatible. These features, combined with its superior magneto-resistive
characterization and analysis accuracy, make the system ideal for both inline
monitoring and engineering analysis.
MRW3 Quasi-static Tester
KT Analyzer
Parametric Analysis Solution
KT Analyzer is KLA-Tencor’s new family of parametric analysis tools that help
chip manufacturers make faster and more accurate decisions to achieve a high
yielding, reliable patterning process. The solution combines automated analysis
capabilities embedded on KLA-Tencor’s latest-generation overlay, optical CD,
and CD SEM metrology systems with off-line engineering analysis. KT Analyzer
provides the enhanced information needed to achieve real-time process control,
including lot and tool disposition, run time and preventative maintenance
feed-back and feed-forward, root-cause analysis, and automated fault detection.
KT Analyzer
➤ Litho Challenge
“I had no idea at what point progressive mask defects would affect my yield. Now I know how to find out.”
• DATA • INFORMATION • KNOWLEDGE • DECISIONS

➤ Litho Solution: Reticle Quality Assurance
Finding progressive defects and assessing when they’ll print is a challenge for many fabs today. Yet litho engineers have learned that with the right reticle requalification strategy, they can detect and classify yield-relevant progressive defects in time. Before they print. Before they hurt yield. And before they impact device reliability. You too can make rapid, knowledgeable decisions about your reticle quality. Together we can achieve a high-yielding, reliable patterning process.
• TeraScan STARlight – High sensitivity reticle inspection
• TeraStar – High throughput reticle inspection
• DesignScan – Litho-aware design inspection solution
• Applications expertise
• Yield consulting
For more reticle quality assurance strategies, visit www.kla-tencor.com/litho – your online lithography resource center.
© KLA-Tencor Corporation