Fig. 5.4.2: U1A + U2A – Divide-by-two stage
Fig. 5.4.3: I/O of Divide-by-1000 stage
Fig. 5.4.4: PN generated sequence OCO clocked by CA = 2 MHz
Fig. 5.4.5: Modulo-2 addition spreading stage – note that OCO x DA is the inverse of OCO when DA = 1
Fig. 5.4.6: BPSK output signal at U7D pin 11, when DA = 0
5.4.2 Observed and Measured Results
Refer to Fig. 3.3.5 for an oscilloscope printout comparing the BPSK signal OCO
x DA x CA to the PN sequence OCO when DA = 0. Notice that the phase changes correspond
directly to the OCO chip value.
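The modulo-2 spreading behavior described above can be sketched in a few lines of Python (an illustrative model with assumed names, not part of the hardware itself):

```python
# Illustrative model of the modulo-2 addition spreading stage: each data
# bit DA is added modulo 2 (XOR) to the PN chips OCO, so the chip stream
# passes through unchanged when DA = 0 and is inverted when DA = 1.
def spread(da_bit, oco_chips):
    """Return the spread chip stream DA (+) OCO."""
    return [da_bit ^ chip for chip in oco_chips]

oco = [1, 0, 1, 1, 0, 0, 1]  # example PN chips (not the actual sequence)
print(spread(0, oco))  # DA = 0: [1, 0, 1, 1, 0, 0, 1], chips unchanged
print(spread(1, oco))  # DA = 1: [0, 1, 0, 0, 1, 1, 0], chips inverted
```

Each output chip then selects one of the two carrier phases, which is the phase reversal visible in the BPSK plots.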
Fig. 5.5.2: NAND logic truth table.
If at any time synchronization between OCO and LCO is lost, the SR latch can be
reset and U10 double-speed clocked again with a simple press of the S2 RESET
pushbutton.
2. Tie each of the seven output pins on U9 and U10 to GND or VCC, in the same
order on both ICs, and verify that a low logic level is received at U8D pin 11. If
not, there is an issue with the cascading logic circuit that must be addressed
before moving on.
3. Verify that the SR NAND latch is performing logically: test that after
reset Q’ stays logic high until the LED illuminates and U8D pin 11 goes logic low.
4. Remove the GND and VCC connections placed for testing in step 2, and check that
U9 and U10 are producing a similar-frequency PRBS.
6. Test that bilateral switches U17A and U17B are never closed simultaneously:
when U17A is closed, U16D’s input sees (2 x CA); when U17A is open,
U17B must be closed, and U16D sees CA at its input.
7. Verify that the SR NAND latch holds U17A and U17B open or closed
depending on the value of Q’.
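The SR NAND latch behavior exercised in steps 3 and 7 can be modeled behaviorally (a sketch with assumed signal names, not the board's actual hardware):

```python
# Behavioral model of a cross-coupled NAND SR latch (inputs active-low).
# s_n is the set input, r_n the reset input (pressing S2 RESET pulls r_n
# low); q and q_n are the current outputs, fed back through the coupling.
def nand_sr_latch(s_n, r_n, q, q_n):
    """Return (Q, Q') after the cross-coupled NAND gates settle."""
    for _ in range(2):  # two passes are enough for the feedback to settle
        q = 1 - (s_n & q_n)   # Q  = NAND(S', Q')
        q_n = 1 - (r_n & q)   # Q' = NAND(R', Q)
    return q, q_n

print(nand_sr_latch(0, 1, 0, 1))  # set:   (1, 0)
print(nand_sr_latch(1, 0, 1, 0))  # reset: (0, 1)
print(nand_sr_latch(1, 1, 1, 0))  # hold:  (1, 0)
```

With Q' high after a reset, one bilateral switch is held closed and the other open, matching the behavior checked in step 7.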
5.5.3 Simulation Results
Testing to ensure each stage of the two subsystems was operational was performed at both CA =
2 MHz and CA = 10 kHz. Data from these tests shows the great difference a change in
frequency has on several parts of the circuit. In particular, the multiply-by-two stage of U12D
and U18A,B,C saw great fluctuations in operation between the two CA frequencies. The
original design had provided suggested R and C values for the U12D inputs, but experimental results
showed differently. Fig. 5.5.4 shows the I/O of U12D given the suggested R = 3 kΩ and C = 50 pF.
After several attempts, a better combination of capacitance, C = 22 pF || 10 pF = 33 pF, provided a
more satisfactory output at CA = 2 MHz, shown in Fig. 5.5.5.
Similar tests were run for the U12D stage with the final 10 kHz CA. The results
shown in Fig. 5.5.6 gave the best (2 x CA) output at 10 kHz. The R value remained 3 kΩ, but
the C value changed considerably, to 0.033 uF.
The finished multiply-by-two stage, however, requires some additional waveform shaping
of the signal produced by U12D. This wave shaping is completed by the Schmitt
trigger formed by U18A,B,C; consult Fig. 5.5.7 for its final design I/O. The
values of R used within the Schmitt trigger have been modified as well,
to compensate for the slower CA frequency. The new values are 56 kΩ and 100 kΩ, as shown in
the schematic of Fig. 3.2.7.
Fig. 5.5.4: U12D Multiply-by-two Stage w/ R = 3 kΩ, C = 50 pF, and CA = 2 MHz.
Fig. 5.5.5: U12D Multiply-by-two Stage w/ R = 3 kΩ, C = 33 pF, and CA = 2 MHz.
Fig. 5.5.6: U12D Multiply-by-two Stage w/ R = 3 kΩ, C = 0.033 uF, and CA = 10 kHz.
Fig. 5.5.7: Schmitt Trigger shaping 20 kHz (2 x CA).
Fig. 5.5.8: CA = 10 kHz produced at U10 pin 8 while “S2 RESET” Open.
Fig. 5.5.9: (2 x CA) = 20 kHz produced at U10 pin 8 while “S2 RESET” Closed.
5.6 DSSS Demodulation Subsystem
Team member who designed this subsystem: Matt Elder
Team member who wrote this subsection: Matt Elder
5.6.2 Design Procedure
The design procedure for this subsystem is fairly simple as the main part
involved is biasing the Balanced Modulator U10 correctly for accurate
demodulation and recovery of DA.
3. Ensure proper filtering at the collector output of Q1. Take several pictures of
waveforms as they traverse the low-pass filters and inverter ICs.
4. Match the recovered DA output with that of the input DA from DSSS TX
Subsystem.
Consult Figs. 3.3.5, 3.3.6, and 3.3.7 for oscilloscope and spectrum plots of the
resultant BPSK signal. The demodulated DA from the DSSS Demodulation Subsystem is compared
with the input DA from the DSSS TX Subsystem in Fig. 3.3.8.
5.7 Hamming Channel Code Subsystem
Team member who designed this subsystem: Ryan Ginter
Team member who wrote this subsection: Ryan Ginter
The Hamming (7, 4) Subsystem was chosen to be responsible for the system’s
error detection and correction. This system is composed of two parts. One part is the Hamming
Parity Generator Subsystem, which is used in the Baseband Transmit Data Subsystem. The other
is the Hamming Error Detection and Correction Subsystem, which is used in the Baseband Receive
Data Subsystem. Together these parts work to prevent bit errors from causing false alarms or
resets in the overall system.
    | 1 0 0 0 0 1 1 |
G = | 0 1 0 0 1 0 1 |
    | 0 0 1 0 1 1 0 |
    | 0 0 0 1 1 1 1 |
Multiply the data vector D = [D1 D2 D3 D4] by G. The result of this multiplication will
be a 1x7 vector made up of the original data vector plus 3 additional parity bits, D’ = [D1 D2 D3
D4 P1 P2 P3]. The parity bits are formed via the following equations:
P1 = D2 + D3 + D4 (5.7.1)
P2 = D1 + D3 + D4 (5.7.2)
P3 = D1 + D2 + D4 (5.7.3)
Note that these additions are done modulo 2. Based on these formulas, the following theoretical
mapping is determined to relate each possible set of data bits to its corresponding parity bits.
Table 5.7.1: Theoretical Hamming Parity Bit Generation.
D1 D2 D3 D4   P1 P2 P3
0000 000
0001 111
0010 110
0011 001
0100 101
0101 010
0110 011
0111 100
1000 011
1001 100
1010 101
1011 010
1100 110
1101 001
1110 000
1111 111
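As a cross-check, equations 5.7.1–5.7.3 and Table 5.7.1 can be reproduced with a short Python sketch (illustrative only, not part of the hardware design):

```python
# Compute the Hamming (7,4) parity bits of equations 5.7.1-5.7.3;
# '+' in the text denotes modulo-2 addition, i.e. XOR.
def parity_bits(d1, d2, d3, d4):
    """Return (P1, P2, P3) for the data word [D1 D2 D3 D4]."""
    p1 = d2 ^ d3 ^ d4   # eq. 5.7.1
    p2 = d1 ^ d3 ^ d4   # eq. 5.7.2
    p3 = d1 ^ d2 ^ d4   # eq. 5.7.3
    return p1, p2, p3

# Reproduce Table 5.7.1: all 16 data words and their parity bits.
for n in range(16):
    d = [(n >> (3 - i)) & 1 for i in range(4)]   # [D1, D2, D3, D4]
    print("".join(map(str, d)), "".join(map(str, parity_bits(*d))))
```

The printed mapping matches Table 5.7.1 row for row.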
One characteristic of the parity generator that should be noted is that each 3-bit parity
combination is repeated exactly twice. This characteristic is what maps the 16 possible
data words onto only 8 possible parity combinations. Along with this realization,
it should be noted that for the 3-bit parity combination to accidentally remain correct, there
would need to be bit errors at D1, D2, and D3 simultaneously. In other words, the minimum Hamming distance
is three. To display these characteristics more clearly, the data from Table 5.7.1 has been rearranged and
placed in Table 5.7.2.
Table 5.7.2: Data words of Table 5.7.1 grouped by parity combination.
D1 D2 D3 D4   P1 P2 P3
0000 000
1110 000
0011 001
1101 001
0101 010
1011 010
0110 011
1000 011
0111 100
1001 100
0100 101
1010 101
0010 110
1100 110
0001 111
1111 111
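The two claims above — each parity combination occurring exactly twice, and a minimum Hamming distance of three — can be verified exhaustively (an illustrative check, not part of the design itself):

```python
from itertools import combinations, product

def encode(d1, d2, d3, d4):
    """Codeword D' = [D1 D2 D3 D4 P1 P2 P3] per eqs. 5.7.1-5.7.3."""
    return (d1, d2, d3, d4, d2 ^ d3 ^ d4, d1 ^ d3 ^ d4, d1 ^ d2 ^ d4)

codewords = [encode(*d) for d in product((0, 1), repeat=4)]

# Each 3-bit parity combination (last three bits) occurs exactly twice.
parities = [c[4:] for c in codewords]
assert all(parities.count(p) == 2 for p in parities)

# Minimum pairwise Hamming distance over all 120 codeword pairs.
dmin = min(sum(a != b for a, b in zip(c1, c2))
           for c1, c2 in combinations(codewords, 2))
print(dmin)  # 3
```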
The detecting and correcting component takes advantage of the mapping that the parity
generator provides. When decoding the 7-bit combination of data bits and parity bits, an
appropriately designed parity check matrix, known as H, is required. When this matrix multiplies the
7-bit vector containing the 4 data bits and 3 parity bits, a 3-bit vector known as the syndrome is
created. When every bit of this syndrome is zero, no error was detected. If it is not the
zero vector, then the column of H that it matches indicates which bit is in
error.
In the generator matrix, each parity bit was determined by a specific linear combination of
data bits; the parity check verifies those equations one by one to determine the
syndrome. Thus the Hamming parity check matrix will be:
    | 0 1 1 1 1 0 0 |
H = | 1 0 1 1 0 1 0 |
    | 1 1 0 1 0 0 1 |
Notice now that when computing H times the transpose of D’, the resulting syndrome will be:
      | D2 + D3 + D4 + P1 |
SYN = | D1 + D3 + D4 + P2 |
      | D1 + D2 + D4 + P3 |
Note that the first three addends of each row are what generated the parity bit that is the fourth
addend. The syndrome is zero when there is no error because each parity
bit is essentially an even parity bit over the three data bits that determined it. To clarify: if there is
an odd number of logic 1s among the 3 data bits of a parity equation, the
respective parity bit is logic 1, so the four combined bits (3 data, 1 parity) contain
an even number of logic 1s in total. If there is an even number of logic 1s among the 3
data bits, the respective parity bit is set to logic 0, and again an even number of logic 1s
appears within the combination of the 3 data bits and 1 parity bit. The syndrome simply
checks that this even parity is maintained; if so, the sum modulo 2 is zero.
The syndrome is able to indicate where the error occurred because of
the way the parity bits overlap. Notice that each data bit other than D4 is used in exactly two
parity-generating equations. This means that if exactly two equations fail, the error must
be in the data bit common to both of those equations. If all 3 equations fail,
the error must be in D4, because it is the only bit used in all 3. And if only one
equation fails, the error must be in the parity bit used in that equation.
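The parity-check procedure just described can be sketched as follows (illustrative Python; the correction step follows the overlap argument above):

```python
# Parity check matrix H from above; SYN = H times the transpose of D',
# computed modulo 2.
H = [
    (0, 1, 1, 1, 1, 0, 0),
    (1, 0, 1, 1, 0, 1, 0),
    (1, 1, 0, 1, 0, 0, 1),
]

def syndrome(word):
    """word = [D1 D2 D3 D4 P1 P2 P3]; return the 3-bit syndrome."""
    return tuple(sum(h * w for h, w in zip(row, word)) % 2 for row in H)

def correct(word):
    """Return a copy of word with at most one bit error corrected."""
    syn = syndrome(word)
    fixed = list(word)
    if any(syn):
        # The erroneous bit is at the column of H that equals the syndrome.
        columns = [tuple(row[i] for row in H) for i in range(7)]
        fixed[columns.index(syn)] ^= 1
    return fixed

# Codeword for data 0001 is 0001111; inject an error at D4 (index 3).
received = [0, 0, 0, 0, 1, 1, 1]
print(syndrome(received))  # (1, 1, 1): all three equations fail -> D4
print(correct(received))   # [0, 0, 0, 1, 1, 1, 1]: D4 restored
```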