
Microprocessor Systems (Design) : SESSION 98/99

Overview: Microprocessor Selection

Microprocessors have had a profound impact on the design

of electronic systems. Software is stored in standard memory devices, enabling systems with identical hardware to perform different functions. The hardware design determines the facilities available to the software, such as memory and I/O, while the software determines the functions performed.

Several benefits of microprocessor-based design are common to a wide range of applications. Software control allows easier modification and allows complex control functions to be implemented far more simply than with other implementations. The computational capabilities allow analysis and interpretation of data to be performed by instruments that previously could only display raw data.

The fundamental microprocessor-based system structure is a connection of microprocessor, memory and I/O devices with address, data and control buses. Throughout this course we will examine many of the available devices in each of these fundamental categories and the design of the interconnections between them.

Microprocessor Selection

The selection of the optimal microprocessor for a particular application is a challenging task. The higher the anticipated production volume, the more important it is that the lowest cost device be selected. In low volume applications, it is most important to have a device that has more than enough power and is easy to program and debug.

The selection must begin by narrowing the field. First, the needs of the application must be carefully analysed. Second, the class of the device (i.e. microprocessor or microcontroller) is selected. Third, the word size is selected, and finally, the field is narrowed to a few devices that are compared in detail. Other considerations, such as the experience of the staff, the development tools and peripheral chips available, and the reputation of the vendor, are also important. Completing this selection effectively requires knowledgeable, unbiased personnel and a significant amount of time and energy.
The selection should always take into account the anticipated needs of the application and the fact that these needs are likely to increase during the development. Unless cost is very critical, it is always better to have too much than too little.

Micro Selection 1

AWLect1

Picking the Processor

In most organizations the hardware designers select a processor that meets certain performance criteria and cleanly interfaces to the rest of the system. Software concerns are just as important as hardware issues and should be just as influential. A crucial decision must be made - can you reuse your old code? If 70% of the system can be stolen from current projects, especially if coded in assembly, fight to keep the current architecture! The software is probably the biggest part of the NRE budget - preserve your investment.

Some aspects of CPU selection are entirely unrelated to technical issues. The product's marketability is a function of its total life cycle costs, including hard-to-measure yet critical expenses like inventory handling. If your company used Z80s and Z80 peripherals in all of its products, do you really want to stock another family of components? Inventory is costly, inventory is taxed, and inventory management requires yet more money.

What experience does the design team have? Retraining an army of 8088 programmers in the intricacies of the 68000 is bound to be expensive. They'll be learning while doing, struggling with new hardware, instruction sets and tools; making mistakes while the project's meter is running. The production, repair, and support groups will need additional equipment and training. If, however, the company's long range plans involve a move to the new architecture, then taking the plunge is unavoidable. Try to pick a simple project for starters and budget enough time to overcome the inevitable learning curve.

Intangibles play an important part. Does the chip, and all of its associated peripherals, have reliable second sources? Is it really available, in quantity? If the part is old, will it continue to be available in the future? If you can't buy the part, you can't ship the product. How hard will the CPU be to program?
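One recurring "how hard will it be to program" question is how the part handles a stack. On stack-poor CPUs, compilers often place each function's automatic variables at fixed addresses (a so-called compiled stack), which costs code and quietly breaks re-entrancy. A minimal model of the difference, runnable anywhere (both functions are illustrative, not the output of any real compiler):

```c
/* Re-entrant version: 'local' is a true automatic, so every
 * invocation - including recursive ones - gets its own copy. */
static int depth_auto(int n)
{
    int local = n;              /* lives on the stack */
    if (n > 0)
        depth_auto(n - 1);
    return local;               /* unaffected by the recursion */
}

/* Compiled-stack model: the "local" has been given one fixed
 * address, so all invocations share a single copy. */
static int fixed_local;

static int depth_fixed(int n)
{
    fixed_local = n;
    if (n > 0)
        depth_fixed(n - 1);     /* inner calls clobber our "local" */
    return fixed_local;
}
```

`depth_auto(3)` returns 3, while `depth_fixed(3)` returns 0 - exactly the kind of surprise a stack-poor architecture can spring on ported code.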
While there is a way around every shortcoming, do you have time to fight an inadequate architecture? The old 1802 didn't have call and return instructions, the 6805's index register is only 8 bits long, and the 8008's stack was only 7 levels deep. A high-level language can mask hardware deficiencies, but only at the expense of bigger and slower code. If the processor has limited stack manipulation instructions (like the 8051), then a C compiler will generate a lot of code to handle automatic variables.

Using a new processor is generally a mistake unless there are compelling technical and business reasons to do so. Engineers want to work with cutting edge components - we're all victims of a sort of high tech lust. Subordinate emotional decisions to cold analysis.

Costs


The cost of tools is an important consideration. Even if you ultimately decide to use the same processor employed in all of the company's products, the start of a project is a good time to reevaluate the development environment. Maybe now is the time to move from assembler to C, or to upgrade to an ANSI-compatible compiler. Are the debuggers adequate? Should you try a source code management program? Programmer time is hideously expensive - look for ways to spend a few pounds up front to save lots of time downstream. Hardware engineers should look at their equipment as well. Do you use an emulator to bring up prototypes? Better be sure it handles the CPU and clock rates. Reevaluate the logic analyzer; is it fast enough?

Obviously the cost of the processor itself is important, but don't be lulled into looking at its price alone. Total system cost is the issue - the 50 pence CPU that needs a £20 UART is no bargain.

It is interesting that in the 8-bit and the 16-bit worlds new CPU architectures have generally been failures. Most embedded systems are designed using chips whose ancestors reach almost all the way back to the beginning of recorded microprocessor history. The Z80 was introduced in 1976, the 6800 in 1975, the 8088 in 1978, and the 8048 (the 8051's precursor) in 1977. These processors and their derivatives continue to account for most embedded designs. Perhaps the tremendous demand for some sort of computing power, however little, was largely satisfied by this generation of processor. Most new architectures have failed or have failed to gain significant market share. (Note: most 4-bit systems are dominated by Japanese micros, and new RISC microcontrollers such as Microchip's PIC range are starting to make heavy inroads into the low end 8-bit micro markets.)
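The total-system-cost warning is easy to make concrete with arithmetic. A toy bill-of-materials comparison (every figure here is invented for illustration; prices in pence):

```c
/* Total system cost = CPU plus every peripheral it forces you to add.
 * All figures below are hypothetical, in pence. */
static int total_cost(int cpu, int uart, int timer, int glue)
{
    return cpu + uart + timer + glue;
}

/* The 50p CPU that needs a 2000p (20 pound) external UART,
 * an external timer, and extra glue logic... */
static int bare_cpu_cost(void)   { return total_cost(50, 2000, 150, 75); }

/* ...versus a dearer microcontroller with UART and timer on chip. */
static int integrated_cost(void) { return total_cost(300, 0, 0, 25); }
```

The "cheap" CPU loses by a wide margin once its mandatory companions are counted.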
To harness the industry's fantastic ability to cram ever more transistors on a chip while continuing to sell old (i.e., successful) processor architectures, the vendors have developed a number of high-integration components. The Z80 family is a prime example. The 64180/Z180 is a Z80 at heart with lots of on-board peripherals (counters, DMA, UARTs, etc.). The chip can often replace quite a few system components, yet the software is still Z80 machine code. Another example is the 8051, which early on became a very broad family with a chip for every purpose (also see the Motorola 6805 and 68HC11/05). Dozens of proliferation chips let you select an 8051 core that includes just the right peripheral mix for your application. Similarly, Intel positioned the 8088 family in the embedded controller market with the 80188 and 80186. After years of legal wrangling, NEC emerged as an important provider of other variants of the 8088 (the V series), as did AMD. Even the 68k has entered the high-integration fray: both Motorola and Philips sell single chip versions. In all of these cases new life is being breathed into old chips; using tried and true backbones, more and more peripherals (and in some cases memory) are being moved onto the CPU.


These high-integration parts are excellent choices for embedded designs. Most of the trade-offs are exclusively hardware concerns, like reducing printed circuit board real estate, power consumption, and the like, but important software issues are involved. Carefully analyse the peripheral selection included on the chip. Is it really adequate? In an interrupt driven environment, be sure that the interrupt structure is useful. If the device is a microcontroller, does it include enough RAM and ROM? This may impact language selection.

Fast data transfers are sometimes important. The on-board DMA controllers are frequently used to move memory data around. If this is a requirement, be sure that the chip supports memory-to-memory DMA. Some restrict DMA to memory-to-I/O cycles.

A lot of embedded systems depend on a real-time operating system. Context switching is sequenced by a regular timer interrupt. Does the chip have a spare timer? Don't let the hardware team allocate all spare timers to their more visible needs. Few realise that these types of hardware resources are sometimes needed to make the software run.

Address Space

Whether a microcontroller or microprocessor is used, one of the biggest software issues in processor selection is the CPU's address space. Will your program fit into memory? Microcontrollers have only minuscule amounts of on-board memory, sometimes only a few thousand bytes. Remember: once the chip choice is made, you are committed to making the code work on that CPU. A simple remote data logger might be coded in a high-level language, while complex applications can be a nightmare to shoehorn in. Be very sure about memory needs before casting the processor choice in concrete! The huge address spaces of 16-bit microprocessors are more than adequate for most programs. This is not the case in the 8-bit world, which usually limits addresses to 64k. Once this seemed like an unlimited ocean we could never fill.
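Bank switching, one of the escape routes described next, is easy to model in software. Here a 16-bit address space keeps a fixed 32k common area and maps one of four 32k banks into the upper window (the sizes, names and four-bank count are all hypothetical):

```c
#include <stdint.h>

#define BANK_SIZE  0x8000u      /* 32k window at 0x8000-0xFFFF */
#define NUM_BANKS  4            /* 4 x 32k banked + 32k common = 160k */

static uint8_t common_area[BANK_SIZE];       /* always visible */
static uint8_t banks[NUM_BANKS][BANK_SIZE];  /* one visible at a time */
static int     current_bank = 0;

/* Latch a new bank into the window (on real hardware, an I/O write). */
static void bank_select(int n) { current_bank = n; }

/* Read through the CPU's 16-bit address: the low half always hits the
 * common area; the high half goes through the currently mapped bank. */
static uint8_t read_mem(uint16_t addr)
{
    if (addr < BANK_SIZE)
        return common_area[addr];
    return banks[current_bank][addr - BANK_SIZE];
}
```

The catch is visible in the sketch: the program itself must issue `bank_select` at the right moments - nothing here gives you a flat address space.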
Embedded projects are getting bigger, and less efficient tools are regularly used to reduce NRE costs. 64k might not be enough. Some designs use bank switching schemes or memory management units (MMUs) to let the program disable one section of RAM or ROM and bring in another. While potentially giving access to immense memory arrays, neither of these approaches yields the nice huge linear address space we all yearn for. (The same could be said about segmentation on the 80x86.) A number of trade-offs (discussed later) come into play, not least of which is your compiler - will it support an MMU? Few compilers automatically use the memory manager to squeeze a big program into the

project's virtual address space. In general you'll have to handle the MMU manually, perhaps tediously issuing lots of MMU commands throughout the code. Still, the memory manager does offer a reasonable way out of the memory constraints imposed by a 16-bit address bus.

Performance

In selecting the processor, most companies first look at performance, the industry's elusive but hotly pursued Holy Grail. Semiconductor vendors are happy to ship you crateloads of comparative benchmarks showing how their latest CPU outperforms the competition. Dhrystones, Whetstones, MIPS, and Linpacks - their numbers are legion, baffling and usually meaningless. An embedded CPU needs just enough horsepower to solve its one, specific problem. Only two questions are relevant:

1. Will this processor get the job done within the specified time constraints?

2. Will it satisfy performance needs imposed on the product or its derivatives in the future?

The answers are sometimes not easy to determine. If the project is an incremental upgrade of an existing product, consider instrumenting the current code to measure exactly how much free processor time exists. The results are always interesting and sometimes terrifying. Hardware performance analysers will non-intrusively show the percentage of time spent in each section of code, particularly the idle loop. You can do the same with an oscilloscope. Add code to set an I/O bit high only when the idle loop is running (don't forget to bring it low in the interrupt service routines). The scope will immediately show the bit's state; it's a simple matter to then compute the percentage of CPU utilisation. In a performance-critical application, it's a good idea to build in this code from the beginning. It can be 'IF'ed out (with a compiler directive) prior to shipment but easily re-enabled at any time for maintenance.

Consider making these measurements to close the feedback loop on the design process when you finish a project. Just how accurate were your performance estimations? If, as is often the case, the numbers bear little resemblance to the original goals, then find out where the errors were and use this information to improve your estimating ability. Without feedback, you work forever in the dark. Strive to learn from your successes as well as your failures.

Most studies find that 90% of the processor's time is spent executing 10% of the code. Identify this 10% in the design (before writing code) and focus your energies on this section. Modelling the critical sections is always a good idea. Try writing similar code on a PC, being sure to use the same language as in the



final system. Set up an experiment to run the routine hundreds or thousands of times, so you can get accurate elapsed time measurements. The trick is to port these execution figures to estimates on the target hardware. Well-heeled companies buy a development board for each of the CPUs being evaluated. Smaller firms can use simulators (although these cost as much as the boards!), or you can "guestimate" a conversion factor. Compare instruction sets and timing. Include wait states and DMA activity. You can get quite accurate numbers this way, but a wise designer will then add in another 50% in the interest of conservative design.

Model real-time operations, especially those synchronised to external devices, the same way. The PC has an extensive interrupt structure - use it! Its software interrupts can be used to simulate external hardware events. Use this opportunity to debug the algorithms. A PC or other system with a friendly interface makes it easy to work out conceptual bugs while also estimating the code's performance. It is surprising how often actually making something work will turn up problems that demand much more memory and performance. Hardware engineers prototype hardware; we should prototype software.

Sometimes your code will have to respond to very fast external devices that stretch the processor's capability to the limit. Success depends on confining the fast code to a single small routine that can be studied carefully and accurately before proceeding. You may have little choice but to write assembly code on paper and count instruction execution times. The great peril in this is proceeding under the assumption that the code will work - it seldom does. Have a colleague evaluate the code, no matter how simple it may be. If this critical section has a bug in it then the entire project might collapse like a house of cards. Forget about computing timing with complex processors.
Some, like the H16 and Z280, have on-board cache, prefetchers, and other hardware that make it all but impossible to measure instruction times. The timing tables included in the manufacturers' data books seem to be designed to obfuscate.

When making the trade-offs, be sure to factor in special processor features. A multiply instruction can speed the code up considerably - if it is useful. Sometimes the highly touted multiply or divide runs surprisingly slowly. Check the timing! It's also common to find that the math instructions work with very limited precision - MUL and DIV may take only 8-bit arguments. If you are working in C, does the language support the CPU's special features? Even in the highly standardised environment of the 8088, some C compilers make no provision for the 8087 numeric co-processor.
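The PC-modelling technique described above - run the candidate routine thousands of times and time the whole batch - needs nothing beyond the standard library. A minimal sketch (the checksum routine is just a stand-in for your critical code):

```c
#include <time.h>

/* Stand-in for the time-critical routine being modelled. */
static long checksum(const unsigned char *buf, int len)
{
    long sum = 0;
    for (int i = 0; i < len; i++)
        sum += buf[i];
    return sum;
}

/* Run the routine 'reps' times and return elapsed CPU seconds.
 * One pass is far too short to time, so we time the whole batch. */
static double time_routine(int reps)
{
    static unsigned char buf[1024]; /* fixed test input */
    volatile long sink = 0;         /* keep the optimiser honest */
    clock_t start = clock();
    for (int i = 0; i < reps; i++)
        sink += checksum(buf, sizeof buf);
    (void)sink;
    return (double)(clock() - start) / CLOCKS_PER_SEC;
}
```

Divide the result by `reps` for a per-call figure, scale by your "guestimated" conversion factor for the target CPU, and then, as suggested above, add 50% for conservatism.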


Picking the processor is not easy. Consider using a feature matrix (value engineering) with weighting factors scaled to the project's trade-offs. When there is no clear deciding factor between competing CPUs, it makes sense to just make the best possible decision without agonising over it. (See table below)
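The feature matrix is nothing more than a weighted sum of scores. A sketch with invented weights and criteria (any real project would substitute its own):

```c
#define NUM_CRITERIA 4

/* Project-specific importance weights; order of criteria here is
 * code reuse, tool cost, performance, availability. All numbers
 * are made up for illustration. */
static const int weight[NUM_CRITERIA] = { 5, 3, 4, 2 };

/* Weighted score for one candidate CPU (scores 0-10 per criterion). */
static int weighted_score(const int score[NUM_CRITERIA])
{
    int total = 0;
    for (int i = 0; i < NUM_CRITERIA; i++)
        total += weight[i] * score[i];
    return total;
}
```

With these weights, a candidate with a strong code-reuse story can beat one with better raw performance - exactly the kind of outcome the weighting is meant to surface.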



Processor Selection Trade-Offs

- Inventory costs
- Second sources for the CPU and the peripherals
- Availability
- Can you reuse your old code?
- Retraining costs and time: engineers, production and support crew
- Tools cost
- Total system cost: CPU and all peripherals
- High-integration parts:
  - Adequate I/O and memory?
  - Reasonable on-board peripherals?
  - Timer for context switching?
- Memory address space
- Performance
- Will you need special hardware, like an FPU?


