Data Architecture – 4

Data architecture is composed of models, policies, rules, and standards that govern which data is collected, and how it is stored, arranged, integrated, and put to use in data systems and in organizations.[1][2] Data is usually one of several architecture domains that form the pillars of an enterprise architecture or solution architecture.[3]
• A data architecture should set data standards for all of its data systems as a vision or a model of the eventual interactions between those data systems. Data integration, for example, should depend on data architecture standards, since data integration requires data interactions between two or more data systems. A data architecture, in part, describes the data structures used by a business and its computer application software. Data architectures address data in storage and data in motion; descriptions of data stores, data groups, and data items; and mappings of those data artifacts to data qualities, applications, locations, etc.
• Essential to realizing the target state, the data architecture describes how data is processed, stored, and utilized in an information system. It provides criteria for data processing operations that make it possible to design data flows and to control the flow of data in the system.
• The data architect is typically responsible for defining the target state, aligning development with it, and then following up to ensure enhancements are made in the spirit of the original blueprint.
• During the definition of the target state, the data architecture breaks a subject down to the atomic level and then builds it back up to the desired form. The data architect breaks the subject down by going through three traditional architectural levels, illustrated in the sketch below: conceptual (the business entities involved), logical (how those entities are related), and physical (the data mechanisms that realize them for a specific type of functionality).
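• A minimal sketch of how one subject, a hypothetical Customer/Order pair, might be expressed at each of the three levels; the entity names, fields, and DDL are illustrative assumptions, not part of any particular standard:

    from dataclasses import dataclass

    # Conceptual level: name the business entities and their relationships,
    # with no attributes or technology decisions yet.
    CONCEPTUAL = {
        "entities": ["Customer", "Order"],
        "relationships": [("Customer", "places", "Order")],
    }

    # Logical level: give each entity its attributes and keys,
    # still independent of any specific database product.
    @dataclass
    class Customer:
        customer_id: int   # primary key
        name: str
        email: str

    @dataclass
    class Order:
        order_id: int      # primary key
        customer_id: int   # foreign key -> Customer.customer_id
        total: float

    # Physical level: realize the logical model in a concrete technology,
    # here as DDL for a relational database.
    PHYSICAL_DDL = """
    CREATE TABLE customer (
        customer_id INTEGER PRIMARY KEY,
        name        TEXT NOT NULL,
        email       TEXT
    );
    CREATE TABLE "order" (
        order_id    INTEGER PRIMARY KEY,
        customer_id INTEGER REFERENCES customer (customer_id),
        total       NUMERIC(10, 2)
    );
    """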
Elements of data architecture

• Certain elements must be defined during the design phase of the data architecture schema. For example, the administrative structure that will be established to manage the data resources must be described. The methodologies that will be employed to store the data must also be defined. In addition, a description of the database technology to be employed must be produced, as well as a description of the processes that will manipulate the data. It is also important to design the interfaces to the data used by other systems, as well as the infrastructure that will support common data operations (e.g. emergency procedures, data imports, data backups, external transfers of data). A sketch of such a schema description appears after this list.
• Without the guidance of a properly implemented data architecture design, common data operations might be implemented in different ways, making it difficult to understand and control the flow of data within such systems. This sort of fragmentation is highly undesirable because of the potential increase in cost and the data disconnects involved. These difficulties may be encountered in rapidly growing enterprises and in enterprises that service different lines of business (e.g. insurance products).
• Properly executed, the data architecture phase of information system planning forces an
organization to precisely specify and describe both internal and external information flows. These
are patterns that the organization may not have previously taken the time to conceptualize. It is
therefore possible at this stage to identify costly information shortfalls, disconnects between
departments, and disconnects between organizational systems that may not have been evident
before the data architecture analysis.[5]
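• A minimal sketch of how the elements named above might be captured in a single schema description; every name and value here is an illustrative assumption, not a prescribed standard:

    from dataclasses import dataclass, field

    @dataclass
    class DataArchitectureSchema:
        """Illustrative container for the design-phase elements named above."""
        administrative_structure: str                 # who manages the data resources
        storage_methodology: str                      # how data will be stored
        database_technology: str                      # the chosen database product/family
        data_processes: list[str] = field(default_factory=list)       # processes that manipulate the data
        external_interfaces: list[str] = field(default_factory=list)  # interfaces to data used by other systems
        common_operations: list[str] = field(default_factory=list)    # infrastructure-supported operations

    # A hypothetical filled-in description for a small enterprise.
    schema = DataArchitectureSchema(
        administrative_structure="central data governance board",
        storage_methodology="normalized relational store plus nightly archive",
        database_technology="PostgreSQL",
        data_processes=["order ingestion", "customer deduplication"],
        external_interfaces=["reporting API", "partner data feed"],
        common_operations=["data backups", "data imports", "external transfers", "emergency restore"],
    )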
Master data management

• Master data management (MDM) is a comprehensive method of enabling an enterprise to link all of its critical data to one file, called a master file, that provides a common point of reference. When properly done, master data management streamlines data sharing among personnel and departments. In addition, master data management can facilitate computing in multiple system architectures, platforms, and applications.
• At its core, master data management can be viewed as a "discipline for specialized quality improvement" defined by the policies and procedures put in place by a data governance organization. The ultimate goal is to provide the end-user community with a "trusted single version of the truth" on which to base decisions.
• There are several ways in which master data may be collated and distributed to other systems;[6] these include the following (a sketch of the consolidation approach follows this list):
• Data consolidation – the process of capturing master data from multiple sources and integrating it into a single hub (operational data store) for replication to other destination systems.
• Data federation – the process of providing a single virtual view of master data from one or more sources to one or more destination systems.
• Data propagation – the process of copying master data from one system to another, typically through point-to-point interfaces in legacy systems.
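• A minimal sketch of the data-consolidation approach, assuming two hypothetical source systems and a most-recently-updated survivorship rule; none of these names come from a specific MDM product:

    # Consolidate customer master records from two source systems into a
    # single hub keyed by a shared business identifier. The survivorship
    # rule here (most recent update wins, older record fills gaps) is one
    # common choice; real MDM hubs make this configurable.
    crm_records = [
        {"customer_id": "C001", "name": "Ada Lovelace", "email": None, "updated": "2024-03-01"},
    ]
    billing_records = [
        {"customer_id": "C001", "name": "A. Lovelace", "email": "ada@example.com", "updated": "2024-04-15"},
    ]

    def consolidate(*sources):
        hub = {}
        for source in sources:
            for record in source:
                key = record["customer_id"]
                current = hub.get(key)
                if current is None:
                    hub[key] = dict(record)
                else:
                    # Newer record supplies values; older record fills missing fields.
                    newer, older = sorted((current, record), key=lambda r: r["updated"], reverse=True)
                    hub[key] = {k: (newer[k] if newer[k] is not None else older[k]) for k in newer}
        return hub

    master = consolidate(crm_records, billing_records)
    print(master["C001"])
    # {'customer_id': 'C001', 'name': 'A. Lovelace', 'email': 'ada@example.com', 'updated': '2024-04-15'}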
Data quality
• Data quality refers to the condition of a set of values of qualitative or quantitative variables. There are many definitions of data quality, but data is generally considered high quality if it is "fit for [its] intended uses in operations, decision making and planning".[1] Alternatively, data is deemed of high quality if it correctly represents the real-world construct to which it refers. Furthermore, apart from these definitions, as data volume increases the question of internal data consistency becomes significant, regardless of fitness for use for any particular external purpose. People's views on data quality can often disagree, even when discussing the same set of data used for the same purpose. Data cleansing may be required to ensure data quality.[2]
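• A minimal sketch of the rule-based quality checks that data cleansing relies on; the rules and field names are illustrative assumptions, not a standard set:

    import re

    # Each rule returns True when a record passes; failed rules flag a
    # record for cleansing rather than silently dropping it.
    RULES = {
        "completeness": lambda r: all(r.get(f) not in (None, "") for f in ("customer_id", "name")),
        "validity":     lambda r: r.get("email") is None
                                  or re.match(r"[^@\s]+@[^@\s]+\.[^@\s]+$", r["email"]) is not None,
        "consistency":  lambda r: r.get("total", 0) >= 0,  # a negative order total is internally inconsistent
    }

    def audit(records):
        """Return (clean_records, problems), where problems maps a record's index to its failed rules."""
        problems, clean = {}, []
        for i, record in enumerate(records):
            failed = [name for name, rule in RULES.items() if not rule(record)]
            if failed:
                problems[i] = failed
            else:
                clean.append(record)
        return clean, problems

    records = [
        {"customer_id": "C001", "name": "Ada Lovelace", "email": "ada@example.com", "total": 120.0},
        {"customer_id": "",     "name": "Unknown",      "email": "not-an-email",    "total": -5.0},
    ]
    clean, problems = audit(records)
    print(problems)  # {1: ['completeness', 'validity', 'consistency']}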
