
1) Application-Specific Instruction-Set Processors (ASIP)

An application-specific instruction-set processor (ASIP) is a component used in system-on-a-chip design. The instruction set of an ASIP is tailored to benefit a specific application. This specialization of the core provides a tradeoff between the flexibility of a general-purpose CPU and the performance of an ASIC. Some ASIPs have a configurable instruction set. Usually, these cores are divided into two parts: static logic, which defines a minimum ISA, and configurable logic, which can be used to design new instructions. The configurable logic can be programmed either in the field, in a similar fashion to an FPGA, or during chip synthesis.
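To make this concrete, here is a minimal C sketch of how a tailored instruction might surface to the programmer. The custom instruction, the intrinsic name asip_mac, and its single-cycle behavior are all hypothetical, invented for illustration; on a real configurable core the intrinsic would map to an instruction implemented in the configurable logic.

#include <stdint.h>

/* Hypothetical intrinsic for a custom multiply-accumulate instruction
 * added to the configurable part of an ASIP's instruction set. It is
 * modeled here as a plain C function so the sketch is self-contained;
 * on the real core, the compiler would emit the custom instruction. */
static inline int32_t asip_mac(int32_t acc, int16_t x, int16_t y)
{
    return acc + (int32_t)x * y;
}

/* Application kernel the ASIP was specialized for: a dot product.
 * A general-purpose CPU needs separate multiply and add instructions
 * per element; the specialized core retires one MAC per element. */
int32_t dot(const int16_t *a, const int16_t *b, int n)
{
    int32_t acc = 0;
    for (int i = 0; i < n; i++)
        acc = asip_mac(acc, a[i], b[i]);
    return acc;
}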

2) Retargetable Compilation
Current-generation microprocessors employ a multitude of features to deliver ever-increasing levels of performance with high efficiency. These include a large number of functional units to exploit ILP, clustered datapaths to provide highly scalable designs, specialized functional units to execute custom instructions, SIMD units to handle data parallelism, and support for multiple threads to exploit task-level parallelism. Storage structures like special-purpose register files, local memories, and stream buffers are distributed throughout the datapath to provide low-latency, high-bandwidth data access. Moreover, to reduce complexity, customized connectivity is provided between the storage structures and the functional units. To handle such complex architectures, compilers are required that can extract the required levels of parallelism from the program and orchestrate the code so as to use the underlying hardware efficiently. The importance of a compiler is accentuated in an automated processor design system like CCCP. In order to explore a large space of architectures for an application, retargetable compilers are required that can be tuned with minimum effort to generate code to evaluate the target architecture. A retargetable compiler is a compiler that has been designed to be relatively easy to modify to generate code for different CPU instruction set architectures. The machine code produced by such compilers is sometimes of lower quality than that produced by a compiler developed specifically for a single processor.

A retargetable compiler is a kind of cross compiler. Often (but not always) a retargetable compiler is portable (the compiler itself can run on several different CPUs) and self-hosting. The goal of easy retargeting conflicts to some degree with the goals of fast execution and small code size: optimizing code for some high-performance processors requires detailed, specific knowledge of the architecture and of how the instructions are executed. Unless developers have taken the large amount of time necessary to write a code generator specifically for an architecture, the optimizations a retargetable compiler performs are only those applicable to any processor. A general-purpose global optimizer followed by machine-specific peephole optimization can work well. Examples of retargetable compilers:

The Small-C compiler was the first retargetable compiler. Others include GCC, ACK, lcc, VBCC, the Portable C Compiler, SDCC, and LLVM.
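As a toy illustration of retargetability (not the internal structure of any compiler listed above), the sketch below drives a single code generator from a small target-description table; supporting a new CPU then means adding a table entry rather than rewriting the generator. Both target syntaxes are invented.

#include <stdio.h>

/* A drastically simplified machine description: instead of hard-coding
 * one ISA's mnemonics, the code generator consults this table. */
struct target_desc {
    const char *name;
    const char *load_fmt;  /* printf format: dst reg, address    */
    const char *add_fmt;   /* printf format: dst reg, src1, src2 */
};

static const struct target_desc targets[] = {
    { "alpha-ish", "ld   r%d, [%d]\n", "add  r%d, r%d, r%d\n"   },
    { "beta-ish",  "LOAD A%d, @%d\n",  "ADD  A%d = A%d + A%d\n" },
};

/* Emit code for: reg3 = mem[addr1] + mem[addr2], on any target. */
static void emit_add_of_loads(const struct target_desc *t,
                              int addr1, int addr2)
{
    printf(t->load_fmt, 1, addr1);   /* load first operand into reg 1  */
    printf(t->load_fmt, 2, addr2);   /* load second operand into reg 2 */
    printf(t->add_fmt, 3, 1, 2);     /* reg3 = reg1 + reg2             */
}

int main(void)
{
    for (int i = 0; i < 2; i++) {
        printf("; target: %s\n", targets[i].name);
        emit_add_of_loads(&targets[i], 100, 104);
    }
    return 0;
}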

3) Simulation and Emulation


Emulator: In computing, an emulator is hardware and/or software that duplicates (or emulates) the functions of a first computer system in a different, second computer system, so that the behavior of the second system closely resembles the behavior of the first. This focus on exact reproduction of external behavior is in contrast to some other forms of computer simulation, in which an abstract model of a system is simulated. For example, a computer simulation of a hurricane or a chemical reaction is not emulation.

Benefits

Emulators maintain the original look, feel, and behavior of the digital object, which is just as important as the digital data itself.

Despite the initial cost of developing an emulator, it may prove to be the more cost-efficient solution over time.

Emulation reduces labor hours: rather than continuing the ongoing task of migrating every digital object again and again, once the library of past and present operating systems and application software is established in an emulator, the same technologies serve every document created on those platforms.

Many emulators have already been developed and released under the GNU General Public License through the open-source community, allowing for wide-scale collaboration.

Emulators allow video games exclusive to one system to be played on another. For example, a PlayStation 2 exclusive video game could (in theory) be played on a PC or Xbox 360 using an emulator.

Emulation vs simulation
The word "emulator" was coined in 1963 at IBM during development of the NPL (IBM 360) product line, using a "new combination of software, microcode, and hardware". IBM discovered that using microcode hardware instead of software simulation to execute programs written for earlier IBM computers dramatically sped up simulation. Earlier, in 1957, IBM had provided the IBM 709 with an interpreter program (software) to execute legacy programs written for the IBM 704 on the IBM 709, and later on the IBM 7090. In 1963, when microcode was first used to speed up this simulation process, IBM engineers coined the term "emulator" to describe the concept. It has recently become common to use the word "emulate" in the context of software. However, before 1980, "emulation" referred only to emulation with a hardware or microcode assist, while "simulation" referred to pure software emulation. For example, a computer specially built for running programs designed for another architecture is an emulator, while a simulator could be a program running on a PC on which old Atari games can be simulated. Purists continue to insist on this distinction, but currently the term "emulation" often means the complete imitation of a machine executing binary code.

Logic simulators
Logic simulation is the use of a computer program to simulate the operation of a digital circuit such as a processor. This is done after a digital circuit has been designed in logic equations, but before the circuit is fabricated in hardware.

Functional simulators
Functional simulation is the use of a computer program to simulate the execution of a second computer program written in symbolic assembly language or a compiler language, rather than in binary machine code. By using a functional simulator, programmers can execute and trace selected sections of source code to search for programming errors (bugs) without generating binary code. This is distinct from simulating execution of binary code, which is software emulation. The first functional simulator was written by Autonetics around 1960 for testing assembly language programs for later execution on the military computer D-17B. This made it possible for flight programs to be written, executed, and tested before the D-17B hardware had been built. Autonetics also programmed a functional simulator for testing flight programs for later execution on the military computer D-37C.

Video game console emulators
Video game console emulators are programs that allow a computer or modern console to emulate a video game console. They are most often used to play older video games on personal computers and modern video game consoles, but they are also used to translate games into other languages, to modify existing games, and in the development of homebrew demos and new games for older systems. The internet has helped in the spread of console emulators, as most, if not all, would be unavailable for sale in retail outlets. Examples of console emulators that have been released in the last two decades are Dolphin, ZSNES, Kega Fusion, DeSmuME, ePSXe, Project64, VisualBoyAdvance, NullDC, and Nestopia.

Terminal emulators
Terminal emulators are software programs that provide modern computers and devices interactive access to applications running on mainframe operating systems or other host systems such as HP-UX or OpenVMS. Terminals such as the IBM 3270 or VT100, among many others, are no longer produced as physical devices. Instead, software running on modern operating systems simulates a "dumb" terminal and is able to render the graphical and text elements of the host application, send keystrokes, and process commands using the appropriate terminal protocol. Some terminal emulation applications include Attachmate Reflection, IBM Personal Communications, Stromasys CHARON-VAX/AXP, and Micro Focus Rumba.
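At its core, a software emulator or instruction-set simulator is a fetch-decode-execute loop over the guest machine's code. The toy machine below, with an invented three-instruction ISA, shows the shape of such a loop; real emulators differ mainly in decode complexity and in how faithfully they model timing.

#include <stdint.h>
#include <stdio.h>

/* A toy guest ISA, invented for illustration:
 *   0x01 rd imm  -> reg[rd] = imm
 *   0x02 rd rs   -> reg[rd] += reg[rs]
 *   0x03 rd      -> print reg[rd]
 *   0x00         -> halt                                     */
int main(void)
{
    uint8_t program[] = {
        0x01, 0, 40,   /* reg0 = 40    */
        0x01, 1, 2,    /* reg1 = 2     */
        0x02, 0, 1,    /* reg0 += reg1 */
        0x03, 0,       /* print reg0   */
        0x00           /* halt         */
    };
    int32_t reg[4] = {0};
    size_t pc = 0;

    for (;;) {                       /* fetch-decode-execute loop */
        uint8_t op = program[pc++];  /* fetch */
        switch (op) {                /* decode, then execute */
        case 0x01: { uint8_t rd = program[pc++]; reg[rd] = program[pc++]; break; }
        case 0x02: { uint8_t rd = program[pc++]; uint8_t rs = program[pc++];
                     reg[rd] += reg[rs]; break; }
        case 0x03: { uint8_t rd = program[pc++];
                     printf("reg%d = %d\n", rd, reg[rd]); break; }
        case 0x00: return 0;         /* halt */
        default:   fprintf(stderr, "illegal opcode 0x%02x\n", op); return 1;
        }
    }
}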

4) Models

System design = specifying functionality + implementing that functionality using a set of physical components. A model is a method for decomposing a complex functionality into the functionality of subsystems (what the pieces should be), together with rules for composing the whole from the parts; in other words, a model is a formal system consisting of objects and rules, and an abstracted view of the system. A language can capture more than one model, and a model can be captured by different languages; these combinations differ in efficiency and expressiveness.

Models of Computation

State-oriented models represent a system as a set of states and a set of transitions between states, e.g. a finite-state machine. They are well suited for control systems, such as protocols and controllers (a minimal code sketch follows this list). Examples: finite-state machines, Petri nets, hierarchical concurrent finite-state machines.

Activity-oriented models describe a set of activities related by data or execution dependencies, such as dataflow graphs. They are well suited for transformational systems, such as modems and codecs. Examples: dataflow graphs, flowcharts.

Structure-oriented models describe the system's physical modules and the interconnections between them. They are well suited to describing a particular architecture, such as a four-processor implementation with shared memory or an eight-processor implementation with crossbar communication.

Data-oriented models represent systems as collections of data related by their attributes, class membership, and so forth. These models are well suited for information systems, such as databases and graphical representations of object class hierarchies. Examples: entity-relationship diagrams, Jackson's diagrams.

Heterogeneous models represent a composition of the models listed previously, since one model is generally insufficient to describe an entire complex system. Models of computation are often not formally composed, but instead combined in an ad hoc manner; improper mixing of models of computation can lead to unpredictable system behavior. Examples: control/dataflow graphs, structure charts, programming languages, the object-oriented model, program-state machines, queueing models.
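As a minimal concrete instance of a state-oriented model, the C sketch below expresses a traffic-light controller as explicit states and a transition table; the states, dwell times, and transitions are invented for illustration. Making states and transitions explicit is exactly what suits such models to protocols and controllers.

#include <stdio.h>

/* A state-oriented model in miniature: explicit states, dwell times,
 * and a transition table for a traffic-light controller. */
enum state { RED, GREEN, YELLOW, NUM_STATES };

static const struct {
    const char *name;
    int ticks;           /* how many ticks to stay in this state  */
    enum state next;     /* transition taken when the time expires */
} fsm[NUM_STATES] = {
    [RED]    = { "RED",    4, GREEN  },
    [GREEN]  = { "GREEN",  3, YELLOW },
    [YELLOW] = { "YELLOW", 1, RED    },
};

int main(void)
{
    enum state s = RED;
    int remaining = 0;
    for (int tick = 0; tick < 16; tick++) {
        if (remaining == 0) {               /* entering a new state */
            printf("t=%2d: %s\n", tick, fsm[s].name);
            remaining = fsm[s].ticks;
        }
        if (--remaining == 0)
            s = fsm[s].next;                /* fire the transition */
    }
    return 0;
}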

5) Embedded Software in Real-Time Signal Processing Systems: Design Technologies


INTRODUCTION

Software is playing an increasingly important role in the design of embedded systems. This is especially true for personal telecommunications and multimedia systems, which form extremely competitive segments of the embedded systems market. In many cases the software runs on a processor core, integrated in a very large scale integrated (VLSI) chip. Recent studies indicate that up to 60% of the development time of an embedded system is spent in software coding. While this figure confirms an ongoing paradigm shift from hardware to software, it is at the same time an indication that the software design phase is becoming a bottleneck in the system design process.

A. A Paradigm Shift from Hardware to Software

By increasing the amount of software in an embedded system, several important advantages can be obtained. First, it becomes possible to include late specification changes in the design cycle. Second, it becomes easier to differentiate an existing design by adding new features to it. Finally, the use of software facilitates the reuse of previously designed functions, independently of the selected implementation platform. The latter requires that functions be described at a processor-independent abstraction level (e.g., C code). There are different types of core processors used in embedded systems:

General-purpose processors. Several vendors of off-the-shelf programmable processors now offer existing processors as core components, available as library elements in their silicon foundry. Both microcontroller cores and digital signal processor (DSP) cores are available. From a system designer's point of view, general-purpose processor cores offer a quick and reliable route to embedded software that is especially amenable to low/medium production volumes.

Application-specific instruction-set processors. For high-volume consumer products, many system companies prefer to design an in-house application-specific instruction-set processor (ASIP). By customizing the core's architecture and instruction set, the system's cost and power dissipation can be reduced significantly. The latter is crucial for portable and network-powered equipment. Furthermore, in-house processors eliminate the dependency on external processor vendors.

Parameterizable processors. An intermediate solution between the previous two is provided by both traditional and new fabless processor vendors, as well as by semiconductor departments within bigger system companies. These groups offer processor cores with a given basic architecture that are available in several versions, e.g., with different register file sizes or bus widths, or with optional functional units. Designers can select the instance that best matches their application.

B. Software, a Bottleneck in System Design?

The increasing use of software in embedded systems results in increased flexibility from a system designer's point of view. However, the different types of processor cores introduced above typically suffer from a lack of supporting tools, such as efficient software compilers or instruction-set simulators. Most general-purpose microcontroller and DSP cores are supported with a compiler and a simulator, available via the processor vendor. However, in the case of fixed-point DSP processors, it is well known that the code quality produced by these compilers is often insufficient (a short illustration closes this subsection). In most cases these tools are based on standard software compiler techniques developed in the 1970s and 1980s, which are not well suited to the peculiar architecture of DSP processors. In the case of ASIPs, compiler support is normally nonexistent. Both for parameterizable processors and for ASIPs, the major problem in developing a compiler is that the target architecture is not fixed beforehand. As a result, present-day design teams using general-purpose DSP or ASIP cores are forced to spend a large amount of time hand-writing machine code (usually assembly code). This situation has some obvious economic drawbacks. Programming DSPs and ASIPs at such a low level of abstraction leads to low designer productivity. Moreover, it results in massive amounts of legacy code that cannot easily be transferred to new processors.
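To illustrate the kind of idiom involved, here is a Q15 fractional (fixed-point) saturating multiply written in portable C; this is a textbook formulation added for illustration, not code from the paper. A fixed-point DSP typically executes all of it as a single fractional-multiply instruction, but a compiler built on generic 1970s-80s code generation sees only a widening multiply, a shift, and a compare, and rarely maps them onto that one instruction.

#include <stdint.h>

/* Q15 fixed point: a 16-bit integer x represents the fraction
 * x / 2^15, covering the range [-1, 1). */
static int16_t q15_mul_sat(int16_t a, int16_t b)
{
    int32_t p = (int32_t)a * b;   /* widening multiply: Q30 product */
    p >>= 15;                     /* renormalize Q30 -> Q15         */
    if (p > INT16_MAX)            /* only (-1) * (-1) overflows,    */
        p = INT16_MAX;            /* so saturate toward +1          */
    return (int16_t)p;
}

Recognizing this C pattern and mapping it onto the DSP's saturating fractional-multiply datapath is precisely where such generic techniques fall short, which is why hand-written assembly remained common.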

This situation is clearly undesirable in an era where the lifetime of a processor is becoming increasingly short and architectural innovation has become key to successful products. All the above factors act as a brake on the expected productivity gain of embedded software. Fortunately, the research community is responding to this situation with a renewed interest in software compilation, focusing on embedded processors. Two main aspects deserve special attention in these developments:

Architectural retargetability. Compilation tools must be easily adaptable to different processor architectures. This is essential to cope with the large degree of architectural variation seen in DSPs and ASIPs. Moreover, market pressure results in increasingly shorter lifetimes of processor architectures; an ASIP, for example, will typically serve for one or two product generations only. In this context, retargetable compilation is the only way to provide system designers with supporting tools.

Code quality. The instruction and cycle count of the compiled machine code must be comparable to solutions designed manually by experienced assembly programmers. In other words, the compiled solution should exploit all the architectural features of the DSP or ASIP architecture. A low cycle count (or high execution speed) may be essential to cope with the real-time constraints imposed on embedded systems. A low instruction count (or high machine code density) is especially required when the machine code program is stored on the chip, in which case it contributes to low silicon area and power dissipation. Note that although cycle count and instruction count are different parameters, compilers usually try to optimize both at the same time.

Summary

As motivated in the companion paper on application and architectural trends, embedded processor cores represent a key component in contemporary and future systems for telecommunication and multimedia. Core processor technology has created a new role for general-purpose DSPs. In addition, there is a clear and important use of ASIPs. For products manufactured in large volumes, ASIPs are clearly more cost efficient, while power dissipation can be reduced significantly. These advantages are obtained without giving up the flexibility of a programmable solution. The lack of suitable design technologies to support the phases of processor development and application programming, however, remains a significant obstacle for system design teams. One of the goals of this paper was to motivate an increased research effort in the area of CAD for embedded system design. This paper focused primarily on the issue of software compilation technologies for embedded processors. The starting point was the observation that many commercially available C compilers, especially for fixed-point DSPs, are unable to take full advantage of the architectural features of the processor. In the case of ASIPs, compiler support is nonexistent due to the lack of retargeting capabilities of the existing tools. Many of these compilers employ traditional code generation techniques, developed in the 1970s and 1980s in the software compiler community. These techniques were primarily developed for general-purpose microprocessors, which have highly regular architectures with homogeneous register structures, without many of the architectural peculiarities that are typical of fixed-point DSPs. In the past five years, however, new research efforts have emerged in the area of software compilation, focusing on embedded DSPs and ASIPs. Many of these research teams are operating on the frontier of software compilation and high-level VLSI synthesis. The synergy between the two disciplines has already resulted in a number of new techniques for modeling (irregular) instruction-set architectures and for higher quality code generation. Besides code quality, the issue of architectural retargetability is gaining a lot of attention. Retargetability is an essential feature of a software compilation environment in the context of embedded processors, due to the increasingly shorter lifetime of a processor and due to the requirement to use ASIPs. This paper outlined the main architectural features of contemporary DSPs and ASIPs that are relevant from a software compilation point of view. A classification of architectures has been presented, based on a number of elementary characteristics. Proper understanding of processor architectures is a prerequisite for successful compiler development. In addition, a survey has been presented of existing software compilation techniques that are considered relevant in the context of DSPs and ASIPs for telecom, multimedia, and consumer applications. This survey also covered recent research in retargetable software compilation for embedded processors. Beyond retargetable software compilation, there are several other important design technology issues that have not been discussed in this paper. The authors believe the following will become increasingly important in the future.

System-level algorithmic optimizations. Many specifications of systems are produced without a precise knowledge of the implications on hardware and software cost. Important savings are possible by carrying out system-level optimizations, such as control-flow transformations to optimize the memory and power cost of data memories.


System partitioning and interface synthesis. Whereas the problems of hardware synthesis and software compilation are reasonably well understood, the design of the glue between these components is still done manually, and is therefore error-prone.

Synthesis of real-time kernels. A kernel takes care of run-time scheduling of tasks, taking into account the interaction with the system's environment. In some cases, general-purpose operating systems are used; however, these solutions are expensive in terms of execution speed and code size. Recent research is therefore focusing on the automatic synthesis of lightweight, application-specific kernels that obey user-specified timing constraints.
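As a rough sketch of what such a lightweight, application-specific kernel can reduce to, the C fragment below implements a static, priority-ordered, run-to-completion scheduler; the task set, the periods, and the tick loop standing in for a timer interrupt are all invented for illustration, and an automatically synthesized kernel would generate a structure like this from the user's timing constraints.

#include <stdio.h>

/* A minimal application-specific "kernel": a static table of tasks,
 * released periodically and run to completion in priority order. */
struct task {
    const char *name;
    int period;              /* release every 'period' ticks */
    void (*run)(void);
};

static void sample_adc(void) { puts("  sample_adc"); }
static void filter(void)     { puts("  filter"); }
static void log_state(void)  { puts("  log_state"); }

/* Ordered by priority: earlier entries run first within a tick. */
static const struct task tasks[] = {
    { "sample_adc", 1, sample_adc },
    { "filter",     2, filter     },
    { "log_state",  4, log_state  },
};

int main(void)
{
    for (int tick = 0; tick < 8; tick++) {   /* stand-in for a timer interrupt */
        printf("tick %d\n", tick);
        for (size_t i = 0; i < sizeof tasks / sizeof tasks[0]; i++)
            if (tick % tasks[i].period == 0)
                tasks[i].run();              /* run-to-completion, no preemption */
    }
    return 0;
}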

