ARINC 664 Frame: An ARINC 664 frame is the data packet that is transmitted across the network, comprising the protocol bit layers as well as the payload.
ARINC 664 Message: An ARINC 664 message is a data item that is packed into the payloads of one or more ARINC 664 frames. If a message is larger than the maximum payload size of a frame, the message data is split across multiple frames before transmission and reassembled into a single message once all frames for that message have been received.
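The split-and-reassemble behaviour described above can be pictured with a minimal sketch. The payload size and function names below are illustrative assumptions, not values from the ARINC 664 specification:

```python
# Minimal sketch of ARINC 664 message fragmentation and reassembly.
# MAX_PAYLOAD is an assumed, illustrative frame payload size.

MAX_PAYLOAD = 1471  # bytes of message data carried per frame (assumed)

def fragment(message: bytes) -> list[bytes]:
    """Split a message into frame-sized payloads before transmission."""
    return [message[i:i + MAX_PAYLOAD]
            for i in range(0, len(message), MAX_PAYLOAD)]

def reassemble(payloads: list[bytes]) -> bytes:
    """Rejoin the payloads into a single message once all frames arrive."""
    return b"".join(payloads)
```

A 3000-byte message would be carried in three frames and rejoined on receipt.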
Bandwidth Allocation Gap: The minimum time interval enforced between consecutive frames on a Virtual Link; a mechanism for controlling the amount of information that an LRM/LRU can transmit.
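The effect of a Bandwidth Allocation Gap (BAG) can be sketched as a simple transmit-side gate: a frame may only be sent once the gap has elapsed. The BAG value and the clock interface here are assumptions for illustration:

```python
class BagShaper:
    """Illustrative traffic shaper: a frame may only be sent if at least
    one Bandwidth Allocation Gap (BAG) has elapsed since the last frame."""

    def __init__(self, bag_ms: float):
        self.bag_ms = bag_ms      # minimum gap between frames (ms)
        self.last_tx_ms = None    # time of the previous transmission

    def try_send(self, now_ms: float) -> bool:
        """Return True (and record the send) if transmission is allowed."""
        if self.last_tx_ms is None or now_ms - self.last_tx_ms >= self.bag_ms:
            self.last_tx_ms = now_ms
            return True
        return False
```

With an 8 ms BAG, an attempt at 4 ms after a send is refused, while one at 8 ms is allowed.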
CCS LRU/LRM: The elements within the system boundary of the CCS. This includes the
CCS LRMs in the CCR, CDN Switches and the RDCs. It does not include Hosted
Functions or LRU/LRMs connected to CDN Switches or RDCs.
This is where the Common Core System (CCS) comes into its own. Consisting of a
Common Data Network (CDN) and ARINC 664 protocol communications, the CCS is
characterised by the following features:
• An integrated, high-integrity avionics platform providing computing, communication and Input/Output (I/O) services
• A network centralised communications environment
• Real-time deterministic system
• Configurable and extensible architecture
• Robust partitioning
• Fault containment
• Fail-passive design
• Asynchronous component clocking
• Compatibility with legacy LRUs
• Single LRU/LRM Part Numbers for basic platform components
• Open system environment
The utilisation of this type of architecture by the CCS supports the three major design goals of the aircraft.
The CCS architecture presents a 'Virtual LRU' concept to replace the systems packaged as physical LRUs in a federated architecture. Figure 2 portrays four (4) 'Virtual Systems' that are equivalent to the four 'physical' systems shown in Figure 1. As shown, a Virtual System consists of the same logical groupings of components as contained in a physical system:
• Application software
• Infrastructure / Operating System (OS)
• Processor
• System bus
• I/O
Therefore, a key difference between the CCS architecture and the federated architecture is
the definition of the logical system. In a federated architecture the logical system is the
physical system. In the CCS architecture, the logical system is different from the physical
system and is thus referred to as a ‘virtual system’.
In a federated architecture system the target computer and the software application are
typically packaged in a ‘physical’ system embodied by an LRU. The application is
typically linked with the OS and other support software and hardware, the resulting
executable software being verified as a single software configuration item. Multiple
‘physical’ systems are then integrated in order to perform a specific set of aircraft
functions.
The architecture utilised by the CCS hosts the software application on a General
Processor Module (GPM) which is a computing resource shared between several software
applications.
The GPM hardware and platform software, along with configuration data developed by
the system integrator, forms the equivalent of a target computer. When a software
application is integrated with the target computer, it forms a ‘Virtual System’. Multiple
'Virtual Systems' are provided by a single GPM (see Figure 2). The distinction between an application 'Virtual System' in the GPM and an application LRU (physical system) in the federated environment is that the 'Virtual System' is a software configuration item only (no hardware).
To provide all the ‘Virtual Systems’ that are required to be part of the CCS, a number of
GPMs are necessary and these are all housed in a single unit called a ‘Common
Computing Resource' (CCR). To ensure a system integrity of 10⁻⁹, there are two (2) CCR cabinets to allow for system redundancy.
The ‘Virtual System’ concept extends to the Common Data Network (CDN). Many
‘Virtual Systems’ share the CDN as a data transport medium, with Virtual Link (VL)
addressing providing network transport partitioning for the application data messages.
Each VL address is allocated network bandwidth (data size and rate), a maximum network delivery latency (i.e. delay), and a maximum jitter; all of these parameters are guaranteed.
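As a rough illustration, the guarantees allocated to one VL could be recorded as a simple configuration entry. The field names and values below are invented for the example, not taken from an actual configuration:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class VirtualLink:
    """Illustrative record of the guarantees allocated to one VL."""
    vl_id: int
    max_frame_bytes: int   # largest frame the VL may carry
    bag_ms: float          # minimum gap between frames (data rate limit)
    max_latency_ms: float  # guaranteed worst-case network delivery delay
    max_jitter_ms: float   # guaranteed bound on delivery-time variation

    def max_bandwidth_bps(self) -> float:
        """Upper bound on the VL's bandwidth: one max-size frame per BAG."""
        return self.max_frame_bytes * 8 * 1000.0 / self.bag_ms

# Hypothetical example: 1518-byte frames at a 16 ms BAG
vl = VirtualLink(vl_id=100, max_frame_bytes=1518, bag_ms=16.0,
                 max_latency_ms=2.0, max_jitter_ms=0.5)
```

Because the frame size and BAG together bound the data rate, the bandwidth figure falls out of the same two parameters the text describes.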
The CDN consists of switches and a CDN harness. The switches are electronic devices that manage the data traffic on the network between the connected Line Replaceable Modules (LRMs), CCRs, and other system 'subscribers'. The switches receive data from any CDN subscriber, or from other switches, analyse it, and route it to one or several appropriate recipients through the CDN harness.
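The route-to-one-or-several behaviour amounts to a static lookup from Virtual Link to output ports. The table contents below are hypothetical, purely to illustrate the idea:

```python
# Illustrative static routing table: each Virtual Link ID maps to the set
# of switch output ports its frames are forwarded to (values are invented).
ROUTING_TABLE: dict[int, set[str]] = {
    100: {"port_3"},              # delivery to a single recipient
    101: {"port_1", "port_5"},    # one frame, several recipients
}

def route(vl_id: int) -> set[str]:
    """Return the output ports for a frame; unknown VLs are dropped."""
    return ROUTING_TABLE.get(vl_id, set())
```

Dropping frames on unknown VLs reflects the switch's policing role: only statically configured traffic is forwarded.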
The CDN harness is a ‘Full Duplex’ physical link between a CDN subscriber and a CDN
switch, and between two (2) CDN switches. The term ‘Full Duplex’ means that the CDN
subscriber can simultaneously transmit and receive on the same link.
For availability reasons, the CCS implements a redundant network. All CDN subscribers
have a connection to both networks A and B thanks to the redundant switches. Moreover,
at the systems level the CCS supports the Side 1/Side 2 segregation principle.
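On the receive side, dual delivery over networks A and B is commonly resolved by a "first valid frame wins" rule keyed on a per-VL sequence number. The sketch below illustrates that idea; it is not the certified implementation:

```python
class RedundancyManager:
    """Illustrative receive-side filter for a dual (A/B) network:
    accept the first copy of each sequence number, discard the duplicate."""

    def __init__(self):
        self.seen: set[int] = set()

    def accept(self, seq: int, network: str) -> bool:
        """Return True for the first arrival of `seq` from either network."""
        if seq in self.seen:
            return False          # duplicate copy from the other network
        self.seen.add(seq)
        return True
```

Either network may deliver first, so the loss of one network is transparent to the subscriber.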
Conventional-type LRUs and systems that cannot communicate directly with the CCS are connected to an RDC. The RDC converts their digital, analogue or discrete data into the correct format for connection to the CDN.
The 'Virtual System' concept also extends to the RDC, which is configured to provide I/O services for multiple 'Virtual Systems'. Through scheduled read/write operations, the RDC employs temporal partitioning mechanisms. The actual partitions vary depending upon specific 'Virtual System' usage: the RDC provides output signals to effectors, or reads input signals from sensors, for a specific 'Virtual System' at a specific point in time.
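The scheduled read/write behaviour can be pictured as a time-indexed table of which 'Virtual System' owns the RDC's I/O in each slot. The slot numbers, system names, and operations below are all invented for the example:

```python
# Illustrative RDC I/O schedule: each time slot is dedicated to one
# 'Virtual System' and one direction (all entries are hypothetical).
SCHEDULE = {
    0: ("brake_control", "read_sensors"),
    1: ("brake_control", "write_effectors"),
    2: ("fuel_quantity", "read_sensors"),
    3: ("door_monitor",  "read_sensors"),
}

def owner_at(slot: int):
    """Return the (virtual_system, operation) scheduled for a slot,
    with the table repeating cyclically."""
    return SCHEDULE[slot % len(SCHEDULE)]
```

Because ownership is fixed by the schedule rather than negotiated at runtime, one 'Virtual System' cannot delay another's I/O, which is the temporal partitioning property described above.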
To aid system integrity, the RDC allows for physical separation between I/O signals
contained within multiple Independent Fault Zones (IFZs) in order to segregate functional
signals. These IFZ boundaries ensure that RDC faults do not affect I/O interfaces outside
of the faulted IFZ.
Each CCR and RDC is interconnected using the CDN, which allows the CCS and/or
conventional avionics to exchange data using the ARINC 664 data protocol. This protocol
is based on technology developed from the commercial Ethernet standard and adapted to
aviation constraints.
CCS Architecture
The CCS is an Integrated Modular Avionics (IMA) solution that provides common computing, communications and interfacing capabilities to support multiple aircraft functions.
The CDN switches and RDCs are distributed throughout locations within the aircraft to
facilitate separation and minimise wiring to subsystems, sensors and effectors.
An ‘open system’ environment is used within the CCS to enable independent suppliers to
design and implement their systems on the CCS by complying with industry standard
interfaces at all levels within the system.
The CCS is an asynchronous system: each component's operation schedule is independent of the other components. Each unit internally controls when data is produced; there is no attempt to order operations between units at the platform level. This helps to prevent an individual unit's behaviour from propagating through the system and affecting the operation of other units. This unit-level independence also emulates the federated system environment, producing the same system-level characteristics.
The CCS is a configurable resource system. Functions are allocated the resources they
require to perform their task, in the form of sufficient processing time, memory, network
I/O communication and interface resources for both analogue signals and other digital bus
types.
These resource allocations are implemented within the CCS through specific configuration tables loaded into each CCS unit. The configuration tables represent the resource allocations that are guaranteed to each function to perform its task. These resource guarantees, along with the system partitioning characteristics, form the cornerstone of hosted system independence and, therefore, of change containment within the system. These properties allow individual functions to change without collateral impact on other functions.
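Conceptually, each unit's configuration table records the resources guaranteed to every hosted function. A toy representation of that idea, with all names and numbers invented:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ResourceAllocation:
    """Illustrative per-function resource guarantee from a config table."""
    function: str
    cpu_window_ms: float    # guaranteed processing time per major frame
    memory_kib: int         # guaranteed memory budget
    network_vl_ids: tuple   # Virtual Links reserved for this function

# A change to one function's entry leaves the others' guarantees
# untouched, which is the change-containment property described above.
CONFIG_TABLE = (
    ResourceAllocation("flight_mgmt",  cpu_window_ms=5.0, memory_kib=4096,
                       network_vl_ids=(100, 101)),
    ResourceAllocation("cabin_lights", cpu_window_ms=0.5, memory_kib=256,
                       network_vl_ids=(200,)),
)
```

Making the entries immutable (frozen) mirrors the fact that the allocations are fixed, loaded data rather than values negotiated at runtime.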
Hosted Function
A Hosted Function (HF) is defined as a system that directly interfaces with the CCS at
one or more of the following CCS communication and/or I/O interfaces:
• CDN
• ARINC 429
• CAN
• Analogue/Digital I/O
The HF is analogous to an LRU in a 'federated system'.
The HF may include LRUs or Application Specific Modules (ASMs) that can utilise the CDN directly, and/or LRUs resident on the A429 buses or Controller Area Network (CAN) subnets that utilise the RDC gateway function for interfacing with the CDN.
Partitioning services are provided for both the CDN and RDC. The VLs configured on the
network provide partitioning services for data communicated between networked devices.
The RDC provides partitioning services for its gateway operations.
Hosted Application
A Hosted Application (HA) is defined as a software application that utilises the computing resources of the platform and can consist of one or more partitions. The HA is an Operational Flight Program (OFP) which resides within one target computer.
The target computer for an HA is defined as the processor and resources that execute a computer program on its intended target hardware. Configuration data and platform software are included as part of the target computer to enable computer programs to execute on the intended target hardware.
[Figure: CCS equipment locations throughout the aircraft: 2 CCR cabinets in the forward E/E bay, 21 RDCs, and 6 remote switches distributed between the forward and aft E/E bays.]
Component Features
CCR Cabinet
• Shown partially populated with GPM, GG, and ACS switch modules
• Forced-air cooled with backup fans (case mounted)
Component Features
Remote Data Concentrator
• The RDC (Remote Data Concentrator) acts as a remote interface unit providing input/output consolidation across an AFDX network.
• Provides a high-speed interface that reduces the amount of aircraft wiring, thereby reducing aircraft weight, cost, and recurring maintenance costs.
• Acts as an interface unit between a multitude of sensor types and the AFDX bus: it collects data from the sensor suites and encodes it into AFDX message format, and also decodes AFDX messages and writes them onto the associated outputs.
• Physically located throughout the aircraft as necessary to provide local interface points for aircraft systems.
• Completely interchangeable.
Component Features
AFDX Switches
[Figure: AFDX switch and RDC hardware]
Functional Overview – Sample Signal Flow
Distance Measuring Equipment (DME)
[Figure: sample DME signal flow, shown in stages]
• The RDC converts the auto-tune data to ARINC 429 format and sends both channels to the DME interrogator.
• The RDC converts the returning ARINC 429 signal (DME distance data) to ARINC 664 data and sends it to the ARS switches via two redundant data channels, carried over optical and electrical ARINC 664 links.
• The ARINC 664 data (electrical) is then delivered through the RDCs to the Head Up Display and the Primary Flight Display.