
Explain MOORE'S LAW:

Gordon E. Moore
1965-Cramming More Components onto Integrated Circuits, Electronics, vol. 38, no. 8, April 19, 1965.
The complexity for minimum component costs has increased at a rate of roughly a factor of two per year (see
graph on next page). Certainly over the short term this rate can be expected to continue, if not to increase.
Over the longer term, the rate of increase is a bit more uncertain, although there is no reason to believe it will
not remain nearly constant for at least 10 years. That means by 1975, the number of components per
integrated circuit for minimum cost will be 65,000.
I believe that such a large circuit can be built on a single wafer.
1975-Progress in Digital Integrated Electronics, IEDM Tech. Digest, 1975, pp. 11-13.
The rate of increase of complexity can be expected to change slope in the next few years as shown in Figure 5.
The new slope might approximate a doubling every two years, rather than every year, by the end of the
decade.
1995-Lithography and the Future of Moore's Law, Proc. SPIE, vol. 2437, May 1995.
By making things smaller, everything gets better simultaneously. There is little need for trade-offs. The speed
of our products goes up, the power consumption goes down, system reliability, as we put more of the system
on a chip, improves by leaps and bounds, but especially the cost of doing things electronically drops as a result
of the technology.

What is the YIELD?
The fraction (or percentage) of good chips produced in a manufacturing process is called the yield, denoted by the symbol Y.
A manufacturing defect is a finite chip area with electrically malfunctioning circuitry caused by errors in the
fabrication process.
A chip with no manufacturing defect is called a good chip.
Y = Prob(zero defects on a chip) = p(0)
Under the negative binomial defect-clustering model:
Y = (1 + A·d/α)^(−α)
where:
Defect density (d) = average number of defects per unit chip area
Chip area (A)
Clustering parameter (α)
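As a worked example (the numbers are assumed for illustration): with A = 1 cm², d = 0.5 defects/cm², and α = 2, Y = (1 + 0.25)^(−2) ≈ 0.64, i.e., about 64% of the chips are good. In the limit α → ∞ (no clustering) the model reduces to the Poisson yield Y = e^(−A·d).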

Explain IC failure mechanisms?
ICs have many subtle flaws that predispose them towards failure.
Electrostatic Discharge (ESD) is a very general type of IC failure, of which there are 3 primary subtypes: hard damage, degradation, and intermittent upset. Most hard damage to semiconductors occurs at voltages below the threshold of human perception (around 4000 V).
Two primary hard failure types:
- Voltage punch-through: CMOS and MOS devices with very thin oxide dielectric layers
- P-N junction degradation: bipolar circuits, caused by excessive power dissipation
Degradation:
- Increased leakage current
- Lower breakdown voltages of P-N junctions
- Softening of the knee of the V-I curve of a P-N junction
- Decreased dielectric constant; problems may not occur until later, under additional stress
Intermittent upset: no hard damage, but results in data loss or noise.
Electromigration:
The impact of flowing electrons causes a gradual shifting of aluminum atoms away from their normal lattice sites (see picture). Aluminum atoms move away from grain boundaries, causing voids to form between the grains. The reduced cross-sectional area of the wire increases its resistance, which in turn worsens the problem.

Explain ERC and DRC terms?
Design Rule Checking
Design Rule Checking or Check(s) (DRC) is the area of Electronic Design Automation that determines
whether the physical layout of a particular chip satisfies a series of recommended parameters
called Design Rules. Design rule checking is a major step during physical verification signoff on the design,
which also involves LVS (Layout versus schematic) Check, XOR Checks, ERC (Electrical Rule Check) and
Antenna Checks. For advanced processes some fabs also insist upon the use of more restricted rules to
improve yield.

The most basic design rules are shown in the diagram on the right. The first are single layer rules. A width rule
specifies the minimum width of any shape in the design. A spacing rule specifies the minimum distance
between two adjacent objects. These rules will exist for each layer of semiconductor manufacturing process,
with the lowest layers having the smallest rules (typically 100 nm as of 2007) and the highest metal layers
having larger rules (perhaps 400 nm as of 2007).
DRC software usually takes as input a layout in the standard format and a list of rules specific to the
semiconductor process chosen for fabrication. From these it produces a report of design rule violations that the
designer may or may not choose to correct. Carefully "stretching" or waiving certain design rules is often used
to increase performance and component density at the expense of yield.
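To make the width and spacing rules concrete, the following is a minimal sketch of a single-layer checker in C. It is illustrative only: the rectangle coordinates, the 100 nm rule values, and the restriction to axis-aligned rectangles are assumptions for the sketch; real DRC engines operate on full polygon layouts with far richer rule sets.

#include <stdio.h>

typedef struct { double x1, y1, x2, y2; } Rect;  /* axis-aligned, x1<x2, y1<y2 */

/* Width rule: the smaller dimension of a shape must be >= min_width. */
int width_ok(Rect r, double min_width)
{
    double w = r.x2 - r.x1, h = r.y2 - r.y1;
    return ((w < h) ? w : h) >= min_width;
}

/* Spacing rule: edge-to-edge gap between two rectangles (0 if they touch or
   overlap), simplified here to the larger of the x-gap and the y-gap. */
double spacing(Rect a, Rect b)
{
    double dx = (a.x2 < b.x1) ? b.x1 - a.x2 : (b.x2 < a.x1) ? a.x1 - b.x2 : 0.0;
    double dy = (a.y2 < b.y1) ? b.y1 - a.y2 : (b.y2 < a.y1) ? a.y1 - b.y2 : 0.0;
    return (dx > dy) ? dx : dy;
}

int main(void)
{
    /* Two shapes on one metal layer, dimensions in micrometres (hypothetical). */
    Rect layer[] = { {0.00, 0.0, 0.08, 1.0},    /* 80 nm wide wire  */
                     {0.15, 0.0, 0.30, 1.0} };  /* 150 nm wide wire */
    const double MIN_WIDTH = 0.10, MIN_SPACE = 0.10;  /* assumed 100 nm rules */
    int n = sizeof layer / sizeof layer[0], i, j;

    for (i = 0; i < n; i++)
        if (!width_ok(layer[i], MIN_WIDTH))
            printf("width violation on shape %d\n", i);
    for (i = 0; i < n; i++)
        for (j = i + 1; j < n; j++)
            if (spacing(layer[i], layer[j]) < MIN_SPACE)
                printf("spacing violation between shapes %d and %d\n", i, j);
    return 0;
}

Run on the two hypothetical shapes above, this reports one width violation (the 80 nm wire) and one spacing violation (a 70 nm gap).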

ELECTRICAL RULE CHECKING:
Geometrical design rules ensure that the circuit will be manufactured correctly by checking the relative
position, or syntax, of the final layout. However, there is nothing to ensure that this circuit will work. Correct
functionality is left to the simulators and verifiers that manipulate circuit activity and behavior. Nevertheless,
there is a middle ground between simple layout syntax and complex behavioral analysis, and it is the domain
of electrical-rule checkers, or ERC.
Electrical rules are those properties of a circuit that can be determined from the geometry and connectivity
without understanding the behavior. For example, the estimated power consumption of a circuit can be
determined by evaluating the requirements of each device and trying to figure out how many of the devices
will be active at one time. From this information, the power-carrying lines can be checked to see whether they
have adequate capacity. In addition to power estimation, there are electrical rules to detect incorrect transistor
ratios, short-circuits, and isolated or badly connected parts of a circuit. All these checks examine the network
and look for inconsistencies. Thus, whereas design-rule checking does syntax analysis on the layout,
electrical-rule checking does syntax analysis on the network.
There is an unending number of specialized electrical rules. When two output signals connect such that their
values combine, they are called tied outputs. This may be purposeful, but it may also be an error. Microwave
circuit boards require that the wires be tapered to prevent incorrect impedance conditions. Some design
environments limit the number of components that an output signal can drive (fan-out limitation) or the
number of input lines that a component can accept (fan-in limitation).
All the electrical rules mentioned here can be checked at one time as the circuit network is traversed. Thus a
single electrical-rule checking program is often used to check these and other circuit properties [Baker and
Terman]. Such a tool can take a unified approach to checking by reducing everything to constraints on the
circuit network [Karplus]. Since electrical rules tend to be specific to a particular design environment, they
should be easily tailorable. The use of rule-based systems allows electrical rules to be updated easily for any
given design environment and even allows the rule checker to focus its attention on the sensitive parts of a
circuit [De Man et al.].
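As an illustration of "syntax analysis on the network," the following is a minimal C sketch that traverses a pin list once and flags tied outputs, undriven inputs, and fan-out violations. The pin table and the fan-out limit are hypothetical; a real ERC tool would traverse the actual circuit netlist and apply environment-specific rules.

#include <stdio.h>

#define MAX_NETS     8
#define FANOUT_LIMIT 4   /* assumed design-environment limit */

typedef struct { int net; int is_output; } Pin;  /* is_output=1: driving pin */

int main(void)
{
    /* Hypothetical network: net 0 has 1 driver and 5 loads; net 1 has 2 drivers. */
    Pin pins[] = { {0,1},{0,0},{0,0},{0,0},{0,0},{0,0},
                   {1,1},{1,1},{1,0} };
    int npins = sizeof pins / sizeof pins[0];
    int drivers[MAX_NETS] = {0}, loads[MAX_NETS] = {0};
    int i;

    for (i = 0; i < npins; i++)          /* single traversal of the network */
        if (pins[i].is_output)
            drivers[pins[i].net]++;
        else
            loads[pins[i].net]++;

    for (i = 0; i < MAX_NETS; i++) {
        if (drivers[i] > 1)
            printf("net %d: tied outputs (%d drivers)\n", i, drivers[i]);
        if (drivers[i] == 0 && loads[i] > 0)
            printf("net %d: input(s) with no driver\n", i);
        if (loads[i] > FANOUT_LIMIT)
            printf("net %d: fan-out %d exceeds limit %d\n", i, loads[i], FANOUT_LIMIT);
    }
    return 0;
}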

What is the TECHNOLOGY MAPPING?
Technology mapping is the phase of logic synthesis when gates are selected from a technology library to
implement the circuit. Technology mapping is normally done after technology-independent optimization.
Basic Requirements:
1. Provide high quality solutions (circuits).
2. Adapt to different libraries with minimal effort.
3. Library may have irregular logic functions.
4. Support different cost functions (transistor count, level count, detailed models for area, delay, and power, etc.).
5. Be efficient in run time.
Two Approaches:
1. Rule-based techniques
2. Graph covering techniques (DAG)
Why technology mapping?
A straight implementation may not be good. For example, implementing F = abcdef as a single 6-input AND
gate causes a long delay. Gates in the library are pre-designed; they are usually optimized in terms of area, delay, power, etc.
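For illustration (the decomposition below is assumed, not taken from a specific library): with 2-input ANDs as base functions, F = abcdef decomposes into t1 = a·b, t2 = c·d, t3 = e·f, t4 = t1·t2, F = t4·t3, i.e., n − 1 = 5 gates with logic depth ⌈log2 6⌉ = 3. Each stage now has only two series transistors instead of six, and the mapper is then free to cover this subject graph with whatever library patterns (say, 3-input NANDs plus inverters) minimize the chosen cost function.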

1. Base function - A base function set is a universal set of gates used to implement the
gates in the technology library, e.g., 2-input AND, 2-input OR, and NOT.
2. Subject graph - A subject graph is the graph representation of a logic function using only gates from a
given base function set (i.e., the nodes are restricted to base functions).
All distinct subject graphs of the same logic function have to be considered to obtain a globally optimal design.
3. Pattern graph- For any library gate, its logic function can be represented by a graph where each node
is one of the base functions. This graph is called a pattern graph for this library gate.
A pattern graph is a subject graph when the function represents a library gate.
4. Cover- A cover is a collection of pattern graphs so that:
a. every node of the subject graph is contained in one (or more) pattern graphs
b. each input required by a pattern graph is actually an output of some other pattern graph (i.e.
the inputs of one library gate must be outputs from other gates.)
5. Cost of a Cover
a. Area: total area of the library gates used (i.e., gates in the cover).
b. Delay: total delay along the critical path.
c. Power: total power dissipation of the cover.



Why is poly-silicon used in MOS?
One reason for the initial switch to poly-silicon is that fabrication steps after the initial doping
required very high temperature annealing. Metal gates would melt under such conditions, whereas poly-silicon would not. Using poly-silicon allowed a one-step process for etching the gates, compared to the elaborate multi-step metal-gate processes we see today.
The other reason is that the threshold voltage of the MOSFET inversion layer is correlated with the work-function difference between the gate and the channel. Using metal would result in a higher Vt compared to
poly-silicon, since a poly-silicon gate is of the same or similar material composition as the bulk silicon
channel.
Why is the NAND gate preferred over the NOR gate?
NAND is a better gate for design than NOR because, at the transistor level, the mobility of electrons is
normally about three times that of holes; the NAND's series stack is NMOS while the NOR's is PMOS, so the NAND is the faster gate.
Additionally, the gate leakage in NAND structures is much lower. If you consider the tpHL and tpLH delays you will
find that the delay profile is more symmetric for the NAND, but for the NOR one delay is much higher
than the other (obviously tpLH is higher, since the higher-resistance PMOS transistors are in series, which
further increases the pull-up resistance).
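As a rough worked comparison (typical textbook numbers, assumed here): if an NMOS transistor of a given width has on-resistance R, an equal-width PMOS has roughly 2R because of the lower hole mobility. A 2-input NAND's worst-case pull-down is two NMOS in series ≈ 2R, whereas a 2-input NOR's worst-case pull-up is two PMOS in series ≈ 4R, so for equal device widths the NOR's low-to-high transition is about twice as slow.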
Difference between floor planning & placement?
Partitioning leads to blocks with well-defined areas and shapes (fixed blocks), blocks with approximate areas
and no particular shape (flexible blocks), and a netlist specifying the connections between the blocks. The objectives of the floor planning and placement phase are to find the locations of all blocks, the shapes of the flexible blocks, and the pin
locations for all the blocks. Floor planning and placement can have a big effect on the total length of the
wiring used and on the routing.
Explain the procedure to determine Noise Margin?
In digital logic design, inputs and outputs are generally represented as high and low levels (1's and 0's). In
practice, as the input signal transitions, the output switches to its full swing before the input has reached its
own full swing. The minimum input levels that guarantee a full output high or low are called VIH and VIL (for an inverting stage,
the input level that gives a full output low swing and the input level that gives a full output high swing, respectively). For the interoperability
of logic devices, i.e., so that this output can directly feed the next stage without level shifting, we need VIL > VOL and,
similarly, VIH < VOH. Under this condition the low-level noise margin is NML = VIL − VOL, and similarly the noise margin for a signal transitioning high is NMH = VOH − VIH.
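As a worked example (standard TTL data-sheet limits, not taken from the text above): with VOH(min) = 2.4 V, VOL(max) = 0.4 V, VIH(min) = 2.0 V, and VIL(max) = 0.8 V, the margins are NMH = 2.4 − 2.0 = 0.4 V and NML = 0.8 − 0.4 = 0.4 V.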


Explain VLSI design cycle?


Explain Latch up in CMOS?
A byproduct of the bulk CMOS structure is a pair of parasitic bipolar transistors. The collector of each BJT is
connected to the base of the other transistor in a positive feedback structure. A phenomenon called latch-up can
occur when (1) both BJTs conduct, creating a low-resistance path between Vdd and GND, and (2) the product
of the gains of the two transistors in the feedback loop, β1 × β2, is greater than one. The result of latch-up is at
minimum a circuit malfunction and, in the worst case, the destruction of the device.

Cross section of parasitic transistors in Bulk CMOS

Equivalent Circuit
Latch-up may begin when Vout drops below GND due to a noise spike or an improper circuit hookup (Vout is
the base of the lateral NPN Q2). If sufficient current flows through Rsub to turn on Q2 (i.e., I·Rsub > 0.7 V), this
will draw current through Rwell. If the voltage drop across Rwell is high enough, Q1 will also turn on, and a
self-sustaining low-resistance path between the power rails is formed. If the gains are such that β1 × β2 > 1,
latch-up may occur. Once latch-up has begun, the only way to stop it is to reduce the current below a critical
level, usually by removing power from the circuit.
The most likely place for latch up to occur is in pad drivers, where large voltage transients and large currents
are present.
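As a rough worked example (the resistance value is assumed for illustration): with Rsub = 1 kΩ, the injected current needed to forward-bias Q2 is I > 0.7 V / 1 kΩ = 0.7 mA, so even a sub-milliampere transient at an output pad can be enough to trigger latch-up.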
Explain Y-CHART?
The design process, at various levels, is usually evolutionary in nature. It starts with a given set of
requirements. Initial design is developed and tested against the requirements. When requirements are not met,
the design has to be improved. If such improvement is either not possible or too costly, then the revision of
requirements and its impact analysis must be considered. The Y-chart (first introduced by D. Gajski) shown in
Fig. 1.4 illustrates a design flow for most logic chips, using design activities on three different axes (domains)
which resemble the letter Y.



The Y-chart consists of three major domains, namely:
Behavioral- Abstract function
Structural- Interconnection of parts
Physical- Physical objects with size and positions
Synthesis:
Architectural-level synthesis:
Determine the macroscopic structure:
Interconnection of major building blocks.
Logic-level synthesis:
Determine the microscopic structure:
Interconnection of logic gates.
Geometrical-level synthesis (physical design):
Determine positions and connections.


The design flow starts from the algorithm that describes the behavior of the target chip. The corresponding
architecture of the processor is first defined. It is mapped onto the chip surface by floorplanning. The next
design evolution in the behavioral domain defines finite state machines (FSMs) which are structurally
implemented with functional modules such as registers and arithmetic logic units (ALUs). These modules are
then geometrically placed onto the chip surface using CAD tools for automatic module placement followed by
routing, with the goal of minimizing the interconnect area and signal delays. The third evolution starts with a
behavioral module description. Individual modules are then implemented with leaf cells. At this stage the chip
is described in terms of logic gates (leaf cells), which can be placed and interconnected by using a cell
placement & routing program. The last evolution involves a detailed Boolean description of leaf cells
followed by a transistor level implementation of leaf cells and mask generation. In standard-cell based design,
leaf cells are already pre-designed and stored in a library for logic design use.

Explain the channel length modulation effect in MOSFET?
The voltage VGS − VTH is given and constant. If VDS = VDS,sat = VGS − VTH, then the channel pinches
off, but the effective channel length is still L. As VDS becomes larger than VDS,sat, the pinch-off point begins
to move slightly towards the source.

At the pinch-off point x = xpo we have VGS − V(xpo) = VTH, so the current does not stop. The excess VDS
voltage (VDS − VDS,sat) increases the size of the depletion region around the drain region. This excess
voltage VDS − VDS,sat falls across the narrow depletion region between the drain and the channel.
Electrons that are ejected at the end of the channel (near the pinch-off point) are accelerated and swept into the
drain.

In summary, changes in VDS cause changes in the effective channel length L; that is, L is in fact a function of VDS. This effect is called Channel Length Modulation. Writing the shortened length as L' = L − ΔL, with ΔL/L = λ·VDS, the saturation drain current becomes

iD = (1/2)·μn·Cox·(W/L)·(vGS − VTH)²·(1 + λ·vDS)
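A useful consequence (standard small-signal result; the numbers are assumed for illustration): differentiating the expression above with respect to vDS gives a finite output resistance ro ≈ 1/(λ·ID). For example, with λ = 0.1 V⁻¹ and ID = 100 µA, ro ≈ 100 kΩ, so a MOSFET in saturation is not an ideal current source.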



What are Design Constraints in VLSI technology?
Constraints are a type of restriction. If you have N ways to solve a problem, then after applying
constraints only a few solutions may remain. Sometimes no solution remains at all, which means the constraints are too restrictive.
Constraints can be of any type: design related, cost related, resource related, and market related. But from a
technical point of view, as engineers we deal only with the technical constraints within a chip design cycle.
So constraints are the instructions that the designer applies during various steps in VLSI chip implementation,
such as logic synthesis, clock tree synthesis, place and route, and static timing analysis. They define what
the tools can or cannot do with the design, or how the tools behave.
Method of exchanging constraints across different tools: the Synopsys Design Constraints (SDC) format is the standard method of exchanging design timing constraints across different
tools.
There are basically two types of Design constraints:
1. Design Rule Constraints
- Design rule constraints are defined by the ASIC vendor in the technology library file (Liberty file, *.lib); they are implicit constraints.
- You cannot discard or override these rules.
- You can apply more restrictive design rules, but you cannot apply less restrictive ones; this is done
with the help of optimization constraints.
- Design rules constrain the nets of a design but are associated with the pins of cells from a technology
library.
- These constraints can be library specific (common to all the cells defined in that library file) or may
be individual cell specific.
2. Optimization Constraints
- Optimization constraints are explicit constraints (set by the designer).
- They describe the design goals (area, timing, and so on) the designer has set for the design.
- They must be realistic.
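As an illustration, the following is a minimal sketch of an SDC file mixing the two constraint types. The port names, clock period, and limit values are hypothetical placeholders:

# Timing context
create_clock -name clk -period 10 [get_ports clk]   ;# assumed 100 MHz clock
set_input_delay  -clock clk 2.0 [all_inputs]        ;# external arrival time
set_output_delay -clock clk 2.0 [all_outputs]       ;# external required time

# Design rule constraints (may only be tightened relative to the library)
set_max_transition 0.5 [current_design]
set_max_fanout     16  [current_design]

# Optimization constraints (designer-set goals)
set_max_area 0                                      ;# request minimum area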

What is the Substrate Bias Effect?
For simplicity it is usually assumed that the source and the bulk of the transistor are tied together, but in reality
they are not. This is one of the second-order effects in the analysis of a MOSFET.
The voltage difference between the source and the bulk (substrate), VSB, affects the VTH of the transistor. Since the S and
D junctions remain reverse biased, the device continues to operate properly, but certain characteristics may
change. To understand the effect, suppose VS = VD = 0 and VG is somewhat less than VTH, so that a depletion
region is formed under the gate but no inversion layer exists. As VB becomes more negative, more holes are
attracted to the substrate connection, leaving a larger negative charge behind; that is, the depletion region
becomes wider. The threshold voltage is a function of the total charge in the depletion region,
because the gate charge must mirror Qd before an inversion layer is formed. Thus, as VB drops and Qd
increases, VTH also increases. This is called the Body Effect or the Back-Gate Effect.

where
VTH = ΦMS + 2ΦF + Qdep/Cox,  with ΦMS = Φgate − Φsilicon
ΦF = (kT/q)·ln(Nsub/ni)
Qdep = sqrt(4·q·εsi·ΦF·Nsub)
With the substrate bias included:
VTH = VTH0 + γ·(sqrt(2ΦF + VSB) − sqrt(2ΦF)),  where γ = sqrt(2·q·εsi·Nsub)/Cox denotes the body effect coefficient.
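As a worked illustration (parameter values assumed, not from the text): with γ = 0.4 V^1/2, 2ΦF = 0.7 V, and VSB = 1 V, the shift is ΔVTH = 0.4·(√1.7 − √0.7) ≈ 0.4·(1.30 − 0.84) ≈ 0.19 V, so a 1 V source-to-bulk bias raises VTH by roughly 190 mV.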


What is the Electrostatic Discharge?
The phenomenon of electrostatic discharge (ESD) gives rise to images of lightning strikes or the sparks that
leap from one's fingertips when touching a doorknob in dry winter. The sparks are the result of the ionization
of the air gap between the charged human body and the zero-potential surface of the doorknob. Clearly a high
voltage discharge takes place under these circumstances with highly visible (and sometimes tangible) effects.
In the semiconductor industry, the potentially destructive nature of ESD in integrated circuits (IC) became
more apparent as semiconductor devices became smaller and more complex. The high voltages result in large
electric fields and high current densities in the small devices, which can lead to breakdown of insulators and
thermal damage in the IC.
The losses in the IC industry caused by ESD can be substantial if no efforts are made to understand
and solve the problem. Figure 1.1 shows the distribution of failure modes observed in silicon ICs;
ESD is observed to account for close to 10% of all failures. The largest category is that of electrical overstress
(EOS), of which ESD is a subset. In many cases, failures classified as EOS could actually be due to ESD,
which would make this percentage even higher.
The significance of ESD as an IC failure mode has led to concerted efforts by IC manufacturers and
university research workers in the US, Europe, and Japan to study the phenomena. Progress has been made in
understanding the different types of ESD events affecting ICs, which has enabled test methods to be
developed to characterize their ESD sensitivity. ESD prevention programs have been put in place during IC
manufacturing, testing, and handling, which have reduced the buildup of static and the exposure of ICs to
ESD. Studies have been made of the nature of destruction in IC chips and, based on this work, techniques for
designing protection circuits have been implemented, which has made it possible for the present generation of
complex ICs to be robust against ESD.
The introduction of each new generation of silicon technology results in new challenges in terms of
ESD capability and protection circuit design.
Initially the ESD performance improves as the circuit designs mature and problems are solved or
debugged. After a certain time, technology changes (i.e., LDD, silicides) cause the circuit to no longer
function to its original capability, and the introduction of new protection techniques is needed to restore good
ESD performance. CMOS ICs in automotive environments require very high ESD protection levels, which
places an even higher demand on the design of protection circuits. The speed with which new technologies
are introduced has reduced the available time for protection circuit development. In fact, it is becoming more and
more important to design circuits that can be transferred into newer technologies with minimum changes.
Hence, it is necessary to understand the main issues involved in ESD protection circuit design and the
physical mechanisms taking place in order to ensure that the design can be scaled or transferred with
minimum impact to the ESD performance.
The importance of building in reliability demands design approaches that include ESD robustness as
part of the technology roadmap. The design and optimization of circuits with ultra-small transistors (sub-0.25
µm) use a large number of simulation tools prior to committing the circuits to silicon.
Thus, modeling and simulation of ESD effects in the protection circuit is important;
we discuss the main approaches here. The book is aimed at providing an overall
picture of the issues involved in ESD protection circuit design and analysis. It is
intended to provide a basis in this field for circuit design and reliability engineers
as well as process and device design engineers who have to deal with ESD in
integrated circuits.

Write the important commands used in LINUX?
The most commonly used Linux commands are as follows:
- ls [option(s)] [file(s)]: If you run ls without any additional parameters, the program will list the
contents of the current directory in short form.
- cp [option(s)] sourcefile targetfile: Copies sourcefile to targetfile.
- mv [option(s)] sourcefile targetfile: Moves sourcefile to targetfile, i.e., copies it and then deletes the original
source file.
- rm [option(s)] file(s): Removes the specified files from the file system. Directories are not removed by
rm unless the option -r is used.
- cd [option(s)] [directory]: Changes the current directory. cd without any parameters changes to the
user's home directory.
- mkdir [option(s)] directoryname: Creates a new directory.
- rmdir [option(s)] directoryname: Deletes the specified directory, provided it is already empty.
- chmod [options] mode file(s): Changes the access permissions.
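A short illustrative session using these commands (the file and directory names are made up):

$ mkdir project                # create a new directory
$ cd project                   # change into it
$ cp ~/notes.txt backup.txt    # copy a file
$ ls -l                        # list the directory contents in long form
$ chmod 644 backup.txt         # rw for the owner, read-only for everyone else
$ rm backup.txt                # remove the file
$ cd ..                        # go back up
$ rmdir project                # delete the now-empty directory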
What is the BiCMOS technology?
'BiCMOS' is an evolved semiconductor technology that integrates two formerly separate semiconductor
technologies - those of the bipolar junction transistor and the CMOS transistor - in a single integrated
circuit device.
Bipolar junction transistors offer high speed, high gain, and low output resistance, which are excellent
properties for high-frequency analog amplifiers, whereas CMOS technology offers high input resistance and is
excellent for constructing simple, low-power logic gates. For as long as the two types of transistors have
existed in production, designers of circuits utilizing discrete components have realized the advantages of
integrating the two technologies; however, lacking implementation in integrated circuits, the application of
this free-form design was restricted to fairly simple circuits. Discrete circuits of hundreds or thousands of
transistors quickly expand to occupy hundreds or thousands of square centimeters of circuit board area, and
for very high-speed circuits such as those used in modern digital computers, the distance between transistors
(and the minimum capacitance of the connections between them) also makes the desired speeds grossly
unattainable, so that if these designs cannot be built as integrated circuits, then they simply cannot be built.
This technology rapidly found application in amplifiers and analog power management circuits, and
has some advantages in digital logic. BiCMOS circuits use the characteristics of each type of transistor most
appropriately. Generally this means that high-current circuits use metal-oxide-semiconductor field-effect
transistors (MOSFETs) for efficient control, and portions of specialized very high performance circuits use
bipolar devices. Examples of this include radio frequency (RF) oscillators, bandgap-based references, and
low-noise circuits. The Pentium, Pentium Pro, and SuperSPARC microprocessors also used BiCMOS.







Comparison between the BJT and CMOS?

Speed
WB (BJT base width) and L (MOSFET channel length) are the critical dimensions for improving speed performance.
Note that the exponent of 2 in the transit-time equation indicates that the improvement goes as the factor squared.
There are two reasons speed improves:
1) Shorter distance for carriers to travel
2) More "push" (steeper diffusion gradient for the BJT, higher E field for the MOSFET)

Factors in MOS vs. BJT speed performance:
1) Bulk mobility (BJT) is always better than surface mobility (MOSFET)
2) Reducing the critical dimension involves different process considerations
3) Trying to increase MOSFET speed by increasing VGS − Vt has two problems:
- it reduces transconductance for a given bias current
- carrier velocity doesn't increase as much as expected, due to velocity saturation

Why BJT transconductance will always be better for roughly similar bias currents:
The thermal voltage VT is less than (VGS − Vt); trying to reduce VGS − Vt below ~100 mV causes the MOSFET to
enter the subthreshold (weak inversion) region of operation.
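A worked comparison (bias values assumed for illustration): gm(BJT) = IC/VT and gm(MOSFET) = 2·ID/(VGS − Vt). At 1 mA of bias current, gm(BJT) = 1 mA / 26 mV ≈ 38 mS, while a MOSFET with VGS − Vt = 200 mV gives gm = 2·(1 mA)/0.2 V = 10 mS, i.e., roughly a 4x advantage for the BJT.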

Advantages of MOS:
- Near-infinite input resistance looking into the gate vs. base current for the BJT (better buffer on the input side)
- Lower noise for high RS signal sources
- Better analog switch; truly ohmic at origin of VDS-ID plot (sample & hold)
- Compatible with digital CMOS (process cost advantage)
- Comes out of non-active operating region more quickly (BJT slow out of saturation)
- More robust current sources (gentler "crash" than BJT into saturation)
Advantages of BJT
- More speed, transconductance per amount of bias current
- Lower noise for low RS signal sources
- Higher intrinsic gain for actively loaded stage (better Early voltage)
- Lower output resistance at emitter vs. source of MOSFET (better buffer on output side)
- "Closer" to fundamental physics (e.g. bandgap voltage reference)
- Follows exponential model over 5 - 8 orders of magnitude (analog computation; multipliers)
- Higher output resistance current sources
How do you size NMOS and PMOS transistors to increase the threshold voltage?
The threshold voltage in a MOS device follows this expression:
Vth = ΦMS + 2ΦF + Qdep/Cox

where:
ΦMS = difference between the work functions of the poly-silicon gate and the silicon substrate
ΦF = (kT/q)·ln(Nsub/ni), where q is the electron charge and Nsub is the doping concentration of the substrate
Qdep = charge in the depletion region
Cox = gate oxide capacitance per unit area; for typical thin-oxide dimensions, Cox ≈ 6.9 fF/µm²
So you must assess the influence of the MOS device dimensions on the parameters above (mainly
Cox) and size the device accordingly.
NMOS: increasing the length, Vt will decrease; increasing the width, Vt will increase.
PMOS: increasing the width, Vt will decrease; increasing the length, Vt will increase.
In practice, as a designer you can't change the NMOS and PMOS sizes in order to change Vt.
The only way you can change Vt as a designer is by changing the body bias.
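As a worked check of the Cox figure above (the oxide thickness is assumed for illustration): with tox = 5 nm, Cox = εox/tox = (3.9 × 8.85×10⁻¹⁴ F/cm)/(5×10⁻⁷ cm) ≈ 6.9×10⁻⁷ F/cm² = 6.9 fF/µm².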

Write a C program of BFS & DFS.
1. C language program of the BFS algorithm:
#include <stdio.h>

int a[20][20], q[20], visited[20], n, f = 0, r = -1;

void bfs(int start)
{
    int i;
    visited[start] = 1;               /* mark the start vertex */
    q[++r] = start;                   /* and enqueue it */
    while (f <= r) {                  /* run until the queue is empty */
        int v = q[f++];               /* dequeue the next vertex */
        for (i = 1; i <= n; i++)
            if (a[v][i] && !visited[i]) {
                visited[i] = 1;       /* mark on enqueue so a vertex is
                                         never queued twice */
                q[++r] = i;
            }
    }
}

int main(void)
{
    int v, i, j;
    printf("\n Enter the number of vertices:");
    scanf("%d", &n);
    for (i = 1; i <= n; i++)
        visited[i] = 0;
    printf("\n Enter graph data in matrix form:\n");
    for (i = 1; i <= n; i++)
        for (j = 1; j <= n; j++)
            scanf("%d", &a[i][j]);
    printf("\n Enter the starting vertex:");
    scanf("%d", &v);
    bfs(v);
    printf("\n The nodes which are reachable are:\n");
    for (i = 1; i <= n; i++)
        if (visited[i])
            printf("%d\t", i);
    printf("\n");
    return 0;
}

2. C language program of the DFS algorithm:
#include <stdio.h>

int a[20][20], reach[20], n;

void dfs(int v)
{
    int i;
    reach[v] = 1;                     /* mark v as visited */
    for (i = 1; i <= n; i++)
        if (a[v][i] && !reach[i]) {   /* recurse into unvisited neighbours */
            printf("\n %d->%d", v, i);
            dfs(i);
        }
}

int main(void)
{
    int i, j, count = 0;
    printf("\n Enter number of vertices:");
    scanf("%d", &n);
    for (i = 1; i <= n; i++) {
        reach[i] = 0;
        for (j = 1; j <= n; j++)
            a[i][j] = 0;
    }
    printf("\n Enter the adjacency matrix:\n");
    for (i = 1; i <= n; i++)
        for (j = 1; j <= n; j++)
            scanf("%d", &a[i][j]);
    dfs(1);
    printf("\n");
    for (i = 1; i <= n; i++)
        if (reach[i])
            count++;
    if (count == n)
        printf("\n Graph is connected");
    else
        printf("\n Graph is not connected");
    return 0;
}
