


The Cyber Physical Systems Security (CPSSEC) project addresses security concerns for cyber physical
systems (CPS) and internet of things (IoT) devices. CPS and IoT play an increasingly important role in
critical infrastructure, government and everyday life. Automobiles, medical devices, building controls and
the smart grid are examples of CPS. Each includes smart networked systems with embedded sensors,
processors and actuators that sense and interact with the physical world and support real-time,
guaranteed performance in safety-critical applications. The closely related area of IoT continues to
emerge and expand as costs drop and the confluence of sensors, platforms and networks increases.
Whether referencing the forward-collision prevention capability of a car, a medical device’s ability to
adapt to circumstances in real-time or the latest IoT innovation, these systems are a source of
competitive advantage in today’s innovation economy and provide vast opportunities for DHS and
Homeland Security Enterprise missions. At the same time, CPS and IoT also increase cybersecurity risks
and attack surfaces. The consequences of unintentional faults or malicious attacks could have severe
impact on human lives and the environment. Proactive and coordinated efforts are needed to strengthen
security and resilience for CPS and IoT.


Secure configuration

Secure configuration refers to security measures that are implemented when building and installing
computers and network devices in order to reduce unnecessary cyber vulnerabilities.

Security misconfigurations are one of the most common gaps that criminal hackers look to exploit.
According to a recent report by Rapid7, internal penetration tests encounter a network or service
misconfiguration more than 96% of the time.

Both the SANS Institute and the Council on CyberSecurity recommend that, following an inventory of
your hardware and software, the most important security control is to implement secure configuration.

Why is secure configuration important?

Manufacturers often set the default configurations of new software and devices to be as open and multi-
functional as possible. In the case of a router, for example, this could be a predefined password, or in the
case of an operating system, it could be the applications that come preinstalled.
It’s easier and more convenient to start using new devices or software with their default settings, but doing so is not secure. Accepting the default settings without reviewing them can create serious security issues, and can allow cyber attackers to gain easy, unauthorised access to your data.

Web server and application server configurations play a crucial role in cyber security. Failure to properly
configure your servers can lead to a wide variety of security problems.

Computers and network devices should also be configured to minimise the number of inherent
vulnerabilities and provide only the services required to fulfil their intended function.

For computers and network devices, your organisation should routinely:

Remove and disable unnecessary user accounts;

Change default or guessable account passwords to something non-obvious;

Remove or disable unnecessary software;

Disable any auto-run feature that allows file execution without user authorisation; and

Authenticate users before enabling Internet-based access to commercially or personally sensitive data,
or data critical to the running of the organisation.

For password-based authentication, your organisation should:

Protect against brute-force password guessing by limiting attempts and/or the number of guesses
allowed in a certain period;

Set a minimum password length of at least eight characters (but not a maximum password length);

Change passwords promptly when the user knows or suspects they have been compromised; and

Have a password policy that informs users of best practices.
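The brute-force limiting and minimum-length rules above can be sketched in code. This is a minimal in-memory sketch: the constants, function names, and sliding-window policy are illustrative assumptions, and a real system would persist attempt records and integrate with its authentication backend.

```python
import time

MAX_ATTEMPTS = 5          # guesses allowed per window (illustrative)
WINDOW_SECONDS = 300      # 5-minute sliding window (illustrative)
MIN_PASSWORD_LENGTH = 8   # minimum length; no maximum is imposed

_failed = {}  # username -> timestamps of recent failed attempts

def password_acceptable(password):
    """Enforce the minimum-length rule from the policy above."""
    return len(password) >= MIN_PASSWORD_LENGTH

def attempt_allowed(username, now=None):
    """Allow a login attempt only if the user has fewer than
    MAX_ATTEMPTS failures inside the sliding window."""
    now = time.time() if now is None else now
    recent = [t for t in _failed.get(username, []) if now - t < WINDOW_SECONDS]
    _failed[username] = recent
    return len(recent) < MAX_ATTEMPTS

def record_failure(username, now=None):
    """Record one failed login attempt for the user."""
    now = time.time() if now is None else now
    _failed.setdefault(username, []).append(now)
```

After five failures in the window, further attempts are refused until the window expires, which defeats rapid automated guessing.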

Threats to Information Security

Threats to information security take many forms, including software attacks, theft of intellectual property, identity theft, theft of equipment or information, sabotage, and information extortion.

A threat is anything that can take advantage of a vulnerability to breach security and negatively alter, erase, or harm an object or objects of interest.
Software attacks are attacks by viruses, worms, Trojan horses and similar programs. Many users believe that malware, viruses, worms and bots are all the same thing. They are not: their only similarity is that they are all malicious software, and each behaves differently.

Malware is a combination of two terms: malicious and software. Malware means malicious software: intrusive program code, or anything designed to perform malicious operations on a system. Malware can be classified in two ways:

1. Infection methods

2. Malware actions

Malware classified by infection method includes the following:

Virus – Viruses replicate themselves by attaching to programs or files on the host computer, such as songs and videos, and then travel across the Internet. The Creeper virus was first detected on ARPANET. Examples include file viruses, macro viruses, boot sector viruses and stealth viruses.

Worms – Worms are also self-replicating, but they do not attach themselves to programs on the host computer. The biggest difference between viruses and worms is that worms are network-aware: they travel easily from one computer to another wherever a network is available. On the target machine they often do relatively little direct harm; they may, for example, consume hard disk space and slow the computer down.

Trojan – The concept of a Trojan is completely different from viruses and worms. The name derives from the ‘Trojan Horse’ tale in Greek mythology, which explains how the Greeks were able to enter the fortified city of Troy by hiding their soldiers in a big wooden horse given to the Trojans as a gift. The Trojans were very fond of horses and trusted the gift blindly. In the night, the soldiers emerged and attacked the city from the inside.

Their purpose is to conceal themselves inside software that seems legitimate; when that software is executed, they carry out their task of stealing information or whatever other purpose they were designed for.

They often provide a backdoor for malicious programs or malevolent users to enter your system and steal your valuable data without your knowledge or permission. Examples include FTP Trojans, proxy Trojans and remote access Trojans.

Bots – Bots can be seen as an advanced form of worms. They are automated processes designed to interact over the Internet without the need for human interaction. They can be benign or malicious. A malicious bot infects a host and then connects to a central server, which issues commands to all the infected hosts attached to that network, called a botnet.

Malware classified by its actions includes the following:

Adware – Adware is not strictly malicious, but it does breach users' privacy. It displays ads on the computer's desktop or inside individual programs. Adware typically comes bundled with free-to-use software, and is the main source of revenue for such developers. It monitors your interests and displays relevant ads. An attacker can also embed malicious code inside adware, which can then monitor your system activity and even compromise your machine.

Spyware – Spyware is a program that monitors your activity on a computer and reveals the collected information to an interested party. Spyware is generally dropped by Trojans, viruses or worms; once dropped, it installs itself and sits silently to avoid detection.

One of the most common examples of spyware is the keylogger. A keylogger's basic job is to record the user's keystrokes with timestamps, capturing interesting information such as usernames, passwords and credit card details.

Ransomware – Ransomware either encrypts your files or locks your computer, making it partially or wholly inaccessible. A screen is then displayed demanding money, i.e. a ransom, in exchange for restoring access.

Scareware – Scareware masquerades as a tool to help fix your system, but when the software is executed it infects your system or destroys it completely. The software displays a message to frighten you and force you to take some action, such as paying to have your system fixed.

Rootkits – Rootkits are designed to gain root access, i.e. administrative privileges, on the user's system. Once root access is gained, the exploiter can do anything, from stealing private files to accessing private data.

Zombies – Zombies work similarly to spyware: the infection mechanism is the same, but they do not spy or steal information; instead, they wait for commands from the attacker.

Theft of intellectual property means the violation of intellectual property rights such as copyrights and patents.

Identity theft means impersonating someone else to obtain their personal information or to access the vital information they hold, for example by logging in to a person's computer or social media account using their credentials.

Theft of equipment and information is increasing these days due to the mobile nature of devices and their increasing information capacity.

Sabotage means destroying a company's website to cause its customers to lose confidence. Information extortion means the theft of a company's property or information in order to receive payment in exchange. For example, ransomware may lock a victim's files, making them inaccessible and forcing the victim to pay; only after payment are the victim's files unlocked.

These are the older generation of attacks, which continue today and advance every year. Apart from these there are many other threats. Below is a brief description of these new-generation threats.

Technology with weak security – With the advancement of technology, a new gadget is released in the market with every passing day, but very few are fully secured or follow information security principles. Because the market is highly competitive, security is compromised to make devices more up to date. This leads to the theft of data and information from the devices.

Social media attacks – Here, cyber criminals identify and infect a cluster of websites that members of a particular organisation visit, in order to steal information.

Mobile malware – There is a saying: where there is connectivity to the Internet, there is danger to security. The same goes for mobile phones, where gaming applications are designed to lure customers into downloading a game and unintentionally installing malware or a virus on the device.

Outdated security software – With new threats emerging every day, keeping security software updated is a prerequisite for a fully secured environment.

Corporate data on personal devices – These days many organisations follow a BYOD (bring your own device) policy, under which employees bring their own laptops, tablets and other devices to the workplace. BYOD clearly poses a serious threat to the security of data, but organisations adopt it for its productivity benefits.

Social engineering – Social engineering is the art of manipulating people into giving up their confidential information, such as bank account details and passwords. These criminals can trick you into handing over private and confidential information, or they will gain your trust to get access to your computer and install malicious software that gives them control of it. For example, you might receive an email or message that appears to come from a friend but was not actually sent by them: a criminal who has accessed your friend's device can use the contact list to send infected emails and messages to every contact. Since the message comes from a known person, the recipient will very likely open the link or attachment, unintentionally infecting the computer.


The most common way to identify someone is through their physical appearance, but how do we
identify someone sitting behind a computer screen or at the ATM? Tools for authentication are used to
ensure that the person accessing the information is, indeed, who they present themselves to be.

Authentication can be accomplished by identifying someone through one or more of three factors:
something they know, something they have, or something they are. For example, the most common
form of authentication today is the user ID and password. In this case, the authentication is done by
confirming something that the user knows (their ID and password). But this form of authentication is
easy to compromise (see sidebar) and stronger forms of authentication are sometimes needed.
Identifying someone only by something they have, such as a key or a card, can also be problematic.
When that identifying token is lost or stolen, the identity can be easily stolen. The final factor, something
you are, is much harder to compromise. This factor identifies a user through the use of a physical
characteristic, such as an eye-scan or fingerprint. Identifying someone through their physical
characteristics is called biometrics.

A more secure way to authenticate a user is to do multi-factor authentication. By combining two or more
of the factors listed above, it becomes much more difficult for someone to misrepresent themselves. An
example of this would be the use of an RSA SecurID token. The RSA device is something you have, and
will generate a new access code every sixty seconds. To log in to an information resource using the RSA
device, you combine something you know, a four-digit PIN, with the code generated by the device. The
only way to properly authenticate is by both knowing the code and having the RSA device.
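A time-based one-time code of the kind such tokens generate can be sketched with the standard library. This follows the generic TOTP construction of RFC 6238 with a sixty-second step; it is an illustrative sketch of the idea, not RSA's proprietary SecurID algorithm, and the function and parameter names are assumptions.

```python
import hashlib
import hmac
import struct
import time

def totp_code(secret: bytes, t=None, step=60, digits=6):
    """Time-based one-time code (RFC 6238 TOTP construction) with a
    sixty-second step. Token and server derive the same code from a
    shared secret and the current time window."""
    t = int(time.time() if t is None else t)
    counter = struct.pack(">Q", t // step)          # time-window index
    digest = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation
    value = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(value % 10 ** digits).zfill(digits)

def authenticate(pin, code, stored_pin, secret, t=None):
    """Two factors: something you know (the PIN) and something you
    have (the device that generates the matching code)."""
    return pin == stored_pin and code == totp_code(secret, t)
```

Knowing the PIN alone is not enough; an attacker would also need the secret held by the physical token to produce a valid code for the current window.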


Many times, an organization needs to transmit information over the Internet or transfer it on external
media such as a CD or flash drive. In these cases, even with proper authentication and access control, it
is possible for an unauthorized person to get access to the data. Encryption is a process of encoding data
upon its transmission or storage so that only authorized individuals can read it. This encoding is
accomplished by a computer program, which encodes the plain text that needs to be transmitted; then
the recipient receives the cipher text and decodes it (decryption). In order for this to work, the sender
and receiver need to agree on the method of encoding so that both parties can communicate properly.
Both parties share the encryption key, enabling them to encode and decode each other’s messages. This
is called symmetric key encryption. This type of encryption is problematic because the key is available in
two different places.
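The defining trait of symmetric encryption, that the same shared key both encodes and decodes, can be shown with a toy XOR cipher. This is deliberately insecure and for illustration only; real systems use vetted algorithms such as AES.

```python
def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Toy symmetric cipher: XOR each byte with the repeating key.
    Encryption and decryption are the same operation, so both parties
    must hold the same key. (Insecure toy; real systems use e.g. AES.)"""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

shared_key = b"secret-key"                        # known to both parties
cipher_text = xor_cipher(b"meet at noon", shared_key)
plain_text = xor_cipher(cipher_text, shared_key)  # same key decrypts
```

The weakness described above is visible here: the one key exists in two places, and anyone who obtains it can read every message in both directions.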

An alternative to symmetric key encryption is public key encryption. In public key encryption, two keys
are used: a public key and a private key. To send an encrypted message, you obtain the public key,
encode the message, and send it. The recipient then uses the private key to decode it. The public key can
be given to anyone who wishes to send the recipient a message. Each user simply needs one private key
and one public key in order to secure messages. The private key is necessary in order to decrypt
something sent with the public key.
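The public/private split can be shown with textbook RSA. The primes below are tiny and the scheme as written is completely insecure; it is a sketch of the mathematics only, not a usable implementation.

```python
# Toy RSA with tiny textbook primes -- insecure, illustration only.
p, q = 61, 53
n = p * q                   # modulus, shared by both keys
phi = (p - 1) * (q - 1)
e = 17                      # public exponent
d = pow(e, -1, phi)         # private exponent (modular inverse, Python 3.8+)

public_key, private_key = (e, n), (d, n)

def apply_key(m, key):
    """RSA is modular exponentiation with whichever key you hold."""
    exp, mod = key
    return pow(m, exp, mod)

message = 42                                    # must be < n in this toy
cipher = apply_key(message, public_key)         # anyone can encrypt
recovered = apply_key(cipher, private_key)      # only the private key decrypts
```

The public key (e, n) can be handed to anyone; without d, which is derived from the secret factorisation of n, the ciphertext cannot feasibly be reversed.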


In many organizations, application development is not limited to the programmers and analysts in the
information-technology department. Especially in larger organizations, other departments develop their
own department-specific applications. The people who build these are not necessarily trained in
programming or application development, but they tend to be adept with computers. A person, for
example, who is skilled in a particular software package, such as a spreadsheet or database package, may
be called upon to build smaller applications for use by his or her own department. This phenomenon is
referred to as end-user development, or end-user computing.

End-user computing can have many advantages for an organization. First, it brings the development of
applications closer to those who will use them. Because IT departments are sometimes quite
backlogged, it also provides a means to have software created more quickly. Many organizations
encourage end-user computing to reduce the strain on the IT department.

End-user computing does have its disadvantages as well. If departments within an organization are
developing their own applications, the organization may end up with several applications that perform
similar functions, which is inefficient, since it is a duplication of effort. Sometimes, these different
versions of the same application end up providing different results, bringing confusion when
departments interact. These applications are often developed by someone with little or no formal
training in programming. In these cases, the software developed can have problems that then have to be
resolved by the IT department.

End-user computing can be beneficial to an organization, but it should be managed. The IT department
should set guidelines and provide tools for the departments who want to create their own solutions.
Communication between departments will go a long way towards successful use of end-user computing.


As new systems are brought online and old systems are phased out, it becomes important to manage the
way change is implemented in the organization. Change should never be introduced in a vacuum. The
organization should be sure to communicate proposed changes before they happen and plan to
minimize the impact of the change that will occur after implementation. Change management is a critical
component of IT oversight.


The system development life cycle, known as the SDLC, is the industry-standard approach to managing
phases of an engineering project. Think of it as the equivalent to the scientific method for software
development and other IT initiatives. The common breakdown of the SDLC includes seven phases that
trace a product or project from a planned idea to its final release into operation and maintenance.
There is flexibility within the SDLC. In recent decades a number of different models and methods have gained popularity. Consider one of the following approaches when establishing an SDLC in your organization.

1. Waterfall

The waterfall approach is one of the oldest SDLC models, but it has fallen out of favor in recent years. This model involves a rigid structure that demands all system requirements be defined at the very start of a project. Only then can the design and development stages begin.

Once development is complete, the product is tested against the initial requirements and rework is assigned. Companies in the software industry typically need more flexibility than the waterfall methodology offers, but it remains a strong solution for certain types of projects, especially government contractors.

2. Iterative

The iterative methodology takes the waterfall model and cycles through it several times in small
increments. Rather than stretching the entire project across the phases of the SDLC, each step is turned
into several mini-projects that can add value as the product evolves.

The iterative approach shares many of the same goals as the agile model, except external customers are
less involved and the scope of each increment is normally fixed.

3. DevOps

DevOps is one of the newest SDLC methodologies and is being adopted by many software companies
and IT organizations. As its name suggests, the premise of DevOps is to bring development teams
together with operational teams in order to streamline delivery and support.

The advantages of such an approach are that changes become more fluid, while organizational risk is
reduced. Teams must have flexible resources in order for a DevOps arrangement to succeed.
4. V-Model

An evolution of the classic waterfall methodology, the v-model flips the SDLC process steps upwards after the coding phase. The v-model takes a very strict approach, with each phase beginning only when the previous phase is complete.

This lack of flexibility makes it a higher-risk method that isn't recommended for small projects, but the v-model is easier to manage and control. For projects where requirements are static and clearly stated, and where early testing is desired, this approach can be a good choice.

5. Spiral

The spiral methodology allows teams to adopt multiple SDLC models based on the risk patterns of the given project. A blend of the iterative and waterfall approaches, the challenge with the spiral model is knowing when the right moment is to move on to the next phase.

Businesses that aren't sure about their requirements, or that expect major edits during a mid- to high-risk project, can benefit from the scalability of this methodology.

6. Lean

The agile and lean approaches are closely interconnected: both focus on delivery speed and continuous improvement. The lean model, however, is rooted in manufacturing best practices, where excess waste and effort are seen as the largest risks to an organization.
When it comes to software and projects, the lean SDLC methodology focuses on reducing waste in every
phase, including scheduling, cost, and scope. This approach is most compelling for organizations with
strict hardware requirements and other procurement needs.

7. Agile

The agile methodology is the opposite of the waterfall approach. Rather than treating requirements,
design, and testing as large sequential steps, an agile model makes them all ongoing processes that
require involvement from developers, management, and customers.

Work is typically broken into 2-4 week segments known as “sprints,” in which the responsible teams tackle the major needs of their customers and perform testing as they go. Agile tends to work well in small organizations, especially startups, where speed and flexibility are essential.

8. Prototyping

In the prototyping methodology, the design team's focus is to produce an early model of the new
system, software, or application. This prototype won’t have full functionality or be thoroughly tested, but
it will give external customers a sense of what’s to come. Then, feedback can be gathered and
implemented throughout the rest of the SDLC phases.

The prototyping approach works well for companies in emerging industries or new technologies.


Application development is the process of creating a computer program or a set of programs to perform
the different tasks that a business requires. From calculating monthly expenses to scheduling sales
reports, applications help businesses automate processes and increase efficiency. Every app-building
process follows the same steps: gathering requirements, designing prototypes, testing, implementation,
and integration.


To customize software, business owners turn to service providers, who build apps to their specifications.
However, such solutions are both cost and time-intensive, as they impose a high degree of dependence
on the providers for upgrades and support. And the final product may not be in tune with the actual
requirements when it's built by someone who's unfamiliar with the business.
These problems can be overcome with a procedure that allows business owners to build their own apps,
with minimal programming and investment.

That's where low-code custom application development comes in.


Low-code custom application development empowers novice developers to build and implement apps
without having to acquire deep programming knowledge.

It dramatically simplifies the app development process, masking all the programming that goes into it
and presenting users with ready-to-use, intuitive development tools.

These custom apps come to the rescue where the one-size-fits-all dogma fails.

Benefits of custom application development:


The apps can be tailored to complement ever-changing business needs.


The pay-as-you-need approach effectively cuts costs by eliminating expenditure on redundant features.


It enables users to scale their apps to keep up with growing business demands.

Rapid Development

Since business owners can build apps themselves with minimal programming, they save the time spent
on coding and explaining the processes to third-party developers.


Conventionally, Rapid Application Development (RAD) is a software development model in which individual application modules are developed in parallel and assembled into a finished product.

Low-code platforms represent the latest trend in RAD methodology, wherein these platforms are used to
swiftly create and develop web and mobile apps. It is a relatively new approach to development, and is
characterized by intuitive, easy-to-use user interfaces.

Why do businesses need RAD?

The low-code revolution is responsible for empowering citizen developers by making it possible for them
to quickly create custom apps.
To businesses, this means their IT departments get to focus on more productive projects, rather than
fielding queries from non-technical staff. On a large scale, this saves time and effort, which translates to
an overall boost in productivity for the business as a whole. Commercially, RAD tools allow vendors to
develop, test, and publish apps faster, giving them a much-needed edge over competitors.

Features of the ideal RAD platform:


An intuitive UI operating on a drag-and-drop basis, preferably with a high click-to-code ratio.

Ample Customization

Editing apps using built-in functions and without having to code extensively.

Cross-platform Usability

The ability to access applications across native, web, or hybrid interfaces via mobile.


Implementing the functionality of third-party services within user-created apps.


Easily definable user permissions to regulate data visibility in applications, restricting access based on user roles.


Nearly 75% of all large organizations reported an increase in productivity after adopting the enterprise mobility paradigm. A recent report on workforces revealed that mobile devices saved field staff an average of 240 hours a year as a result of being seamlessly connected to their workplaces.

This statistic drives home the fact that mobile application development has the utmost relevance today.
Simply put, it's the development of software modules for use on mobile platforms, either independently
or as part of a larger ecosystem.

Why go mobile?

Mobile devices permit employees to complete tasks on the go, which is why enterprise mobility is such a
popular concept. In many cases, apps also act as a medium between a business and its customers,
making them indispensable tools for building customer relationships.

However, conventional mobile app development is a rigorous and effort-intensive process, requiring
considerable time and money to be spent on individual apps.

Enter low-code mobile app development platforms.

Low-Code: What's in it for your business?

Efficient Resource Utilization

Employees can build apps to suit their specific requirements, allowing IT teams to focus on core projects.

Rapid Development

Intuitive drag-and-drop interfaces and guided scripting significantly boost app development speeds.

Elimination of Shadow IT

Since solutions can be created quickly and easily, the use of unsanctioned software can be curbed.


Database applications - because with great data comes great responsibility

As a business expands, its operations grow in complexity. Relying on spreadsheets to tackle burgeoning
information and processes could lead to a data pile-up, and ultimately slow the business down.

Database applications come to the rescue when spreadsheets become unmanageable. With the business
expanding, managing structured information with relationships across tables is paramount, and that's
exactly what databases are designed for. With a database app, users can define custom roles, provide user-based authorizations, implement business-specific workflows, and more, by adding their own code to the data modelled in the database.

What do database applications offer that spreadsheets don't?

Data relationships

Data from disparate systems can be related, and searched with dropdowns, checkboxes, etc.


You can access apps on web, mobile, and tablet with equal ease, courtesy of the Cloud.


Databases can be scaled to suit business requirements. No matter how large the data gets, databases can handle it.

Selective sharing

Share just the information users need, instead of giving them access to the entire data set.

The caveat in traditional database applications

Data entry and manipulation in these databases require commands coded as queries. This calls for
programming in SQL (Structured Query Language), which is why most small and medium-scale
businesses resort to paper-based systems to manage their data. While this might work well for small
amounts of data, using paper to handle an increasing flow of information can culminate in chaos.


Enterprise applications are on-premises software or cloud-based apps that serve the needs of large, often global, organizations such as corporations, banks, and government agencies.

Challenges in enterprise app development:

Traditional application development is time-consuming, cost-intensive, and requires a sizeable workforce of developers. Outsourcing app development frequently leads to less-than-desirable results and can cost more to fix than to develop. To make matters more complex, each app has to be designed individually for web browsers, tablets, and smartphones, increasing costs and reducing efficiency. These challenges can be overcome with the low-code approach to enterprise app development.

Enterprise application benchmarks:


Must work across multiple devices, networks, and teams distributed over continents and time zones.


Should allow users with varying levels of permissions to access information they need, while securing
confidential data.


Must be easy to build and update. Should reliably handle large amounts of data and many concurrent users.

Application controls

These are manual or automated procedures that typically operate at a business process level and apply
to the processing of transactions by individual applications. Application controls can be preventative or
detective in nature and are designed to ensure the integrity of the accounting records.

Accordingly, application controls relate to procedures used to initiate, record, process and report
transactions or other financial data. These controls help ensure that transactions occurred, are
authorised and are completely and accurately recorded and processed (ISA 315 (Redrafted)).
Application controls apply to data processing tasks such as sales, purchases and wages procedures and
are normally divided into the following categories:

(i) Input controls

Examples include batch control totals and document counts, as well as manual scrutiny of documents to
ensure they have been authorised. An example of the operation of batch controls using accounting
software would be the checking of a manually produced figure for the total gross value of purchase
invoices against that produced on screen when the batch-processing option is used to input the invoices.
This total could also be printed out to confirm the totals agree.
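The batch control just described amounts to comparing a manually produced total against the total the software computes when the batch is input. The figures below are hypothetical.

```python
# Sketch of a batch control total over a batch of purchase invoices.
invoice_values = [120.00, 83.50, 410.25]   # gross values keyed into the system
manual_batch_total = 613.75                # figure produced by hand beforehand
expected_document_count = 3                # document count for the batch

computed_total = round(sum(invoice_values), 2)
document_count = len(invoice_values)

# The batch is accepted only if both control figures agree.
batch_accepted = (computed_total == manual_batch_total
                  and document_count == expected_document_count)
```

A mismatch in either figure signals a missing, duplicated or mis-keyed invoice before the batch is posted.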

The most common examples of programmed controls over the accuracy and completeness of input are edit (data validation) checks, in which the software checks the data fields included on transactions, for example:
reasonableness check, eg net wage to gross wage

existence check, eg that a supplier account exists

character check, eg that there are no alphabetical characters in a sales invoice number field

range check, eg no employee’s weekly wage is more than $2,000

check digit, eg an extra character added to the account reference field on a purchase invoice to detect
mistakes such as transposition errors during input.

When data is input via a keyboard, the software will often display a screen message if any of the above
checks reveal an anomaly, eg ‘Supplier account number does not exist’.
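The edit checks listed above can be sketched as a single validation routine. The field names, the $2,000 limit's placement, and the check-digit rule (last digit equals the sum of the rest modulo 10) are simplified illustrative assumptions, not any particular package's scheme.

```python
def edit_checks(txn, supplier_accounts):
    """Run illustrative edit (data validation) checks on one input
    transaction and return the list of anomaly messages."""
    errors = []
    if txn["net_wage"] > txn["gross_wage"]:                     # reasonableness
        errors.append("Net wage exceeds gross wage")
    if txn["supplier"] not in supplier_accounts:                # existence
        errors.append("Supplier account number does not exist")
    if not txn["invoice_no"].isdigit():                         # character
        errors.append("Invoice number contains non-numeric characters")
    if not 0 < txn["weekly_wage"] <= 2000:                      # range
        errors.append("Weekly wage outside permitted range")
    body, check = txn["account_ref"][:-1], txn["account_ref"][-1]
    if sum(int(c) for c in body) % 10 != int(check):            # check digit
        errors.append("Account reference fails check-digit test")
    return errors
```

A transposition error in the account reference changes the digit sum's relationship to the check digit and is caught at input rather than after posting.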

(ii) Processing controls

An example of a programmed control over processing is a run-to-run control. The control totals carried
forward from one processing run, adjusted for the input totals of the next run, should equal the control
totals produced by that next run. For instance, the beginning balances on the receivables ledger plus the
sales invoices (processing run 1) less the cheques received (processing run 2) should equal the closing
balances on the receivables ledger.
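The receivables example can be sketched as follows (all figures are illustrative):

```python
def run_to_run_check(opening_total, run_movement, reported_total):
    """Run-to-run control: the control total carried forward from the
    previous run, adjusted for this run's movement, must equal the total
    this run reports; a difference means records were lost, duplicated
    or corrupted between runs."""
    return opening_total + run_movement == reported_total

# Receivables ledger example
opening_balances = 10000.00
sales_invoices = 4200.00        # processing run 1 adds invoices
cheques_received = -3500.00     # processing run 2 deducts receipts

after_run_1 = opening_balances + sales_invoices   # 14200.00
closing_reported = 10700.00                       # total printed by run 2

print(run_to_run_check(after_run_1, cheques_received, closing_reported))  # True
```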
(iii) Output controls

Batch processing matches input to output, and is therefore also a control over processing and output.
Other examples of output controls include the controlled resubmission of rejected transactions, or the
review of exception reports (eg the wages exception report showing employees being paid more than a
predetermined amount).
(iv) Master files and standing data controls

Examples include one-for-one checking of changes to master files, eg customer price changes are
checked against an authorised list. A regular printout of master files, such as the wages master file, could
be forwarded monthly to the personnel department to ensure the employees listed have personnel records.
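A one-for-one check of master-file changes might be sketched like this (the authorised price list and product codes are invented for illustration):

```python
AUTHORISED_PRICE_CHANGES = {   # the authorised list of approved changes
    "WIDGET-A": 12.50,
    "WIDGET-B": 8.99,
}

def one_for_one_check(applied_changes):
    """Compare each change applied to the price master file against the
    authorised list; anything unmatched is reported for investigation."""
    exceptions = []
    for product, new_price in applied_changes.items():
        if AUTHORISED_PRICE_CHANGES.get(product) != new_price:
            exceptions.append((product, new_price))
    return exceptions

# An unauthorised price appears on the exception list
print(one_for_one_check({"WIDGET-A": 12.50, "WIDGET-B": 9.99}))
```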


Information systems development (ISD)

The process (activity) whereby a work activity or a larger organizational setting is facilitated by
introducing a new socio-technical information system or modifying or expanding an existing one. ISD
includes sub-activities of analysis, design, development, implementation, and evaluation. Depending on
the viewpoint, it can be seen as a software engineering process of a software producer, an application
acquisition process of a software user, or a works development process.



In IBM and other corporations, the term "workstation" is sometimes used to mean "any individual
personal computer location hooked up to a mainframe computer." In today's corporate environments,
many workers have such workstations. They're simply personal computers attached to a local area
network (LAN) that in turn shares the resources of one or more large computers. Since they are PCs, they
can also be used independently of the mainframe assuming they have their own applications installed
and their own hard disk storage. This use of the term "workstation" (in IBM, sometimes called a
"programmable workstation") distinguished it from the earlier "terminal" or "display terminal" (or
"dumb terminal"), of which the 3270 Information Display System is an example.


What Can a Database Do?

A database has broad searching functionality. For example, a sales department could quickly search for
and find all sales personnel who had achieved a certain amount of sales over a particular time period.

A database can update records in bulk – even millions of records at a time. This would be useful, for
example, if you wanted to add new columns or apply a data patch of some sort.
If the database is relational, which most databases are, it can cross-reference records in different tables.
This means that you can create relationships between tables. For instance, if you linked a Customers
table with an Orders table, you could find all purchase orders from the Orders table that a single
customer from the Customers table ever placed, or further refine it to return only those orders
placed in a particular time period – or almost any type of combination you could imagine.
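The Customers/Orders cross-reference can be sketched with Python's built-in sqlite3 module; the table layout and data here are invented for illustration:

```python
import sqlite3

# In-memory database with two linked tables
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE Customers (CustomerID INTEGER PRIMARY KEY, Name TEXT);
    CREATE TABLE Orders (
        OrderID INTEGER PRIMARY KEY,
        CustomerID INTEGER REFERENCES Customers(CustomerID),
        OrderDate TEXT,
        Amount REAL
    );
    INSERT INTO Customers VALUES (1, 'Acme Ltd'), (2, 'Globex');
    INSERT INTO Orders VALUES
        (101, 1, '2023-01-15', 250.0),
        (102, 1, '2023-06-02', 400.0),
        (103, 2, '2023-03-09', 125.0);
""")

# All orders ever placed by one customer, refined to a time period
rows = con.execute("""
    SELECT o.OrderID, o.OrderDate, o.Amount
    FROM Orders o JOIN Customers c ON o.CustomerID = c.CustomerID
    WHERE c.Name = 'Acme Ltd'
      AND o.OrderDate BETWEEN '2023-01-01' AND '2023-03-31'
""").fetchall()
print(rows)   # only the January order falls inside the period
```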

A database can perform complex aggregate calculations across multiple tables. For example, you could
list expenses across multiple retail outlets, including all possible sub-totals, and then a final total.
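A minimal sketch of such an aggregate query, again using sqlite3 with invented outlet data, produces a sub-total per outlet and a final total:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE Expenses (Outlet TEXT, Amount REAL);
    INSERT INTO Expenses VALUES
        ('North', 100.0), ('North', 50.0),
        ('South', 75.0),  ('South', 25.0);
""")

# Sub-total per retail outlet
subtotals = con.execute(
    "SELECT Outlet, SUM(Amount) FROM Expenses GROUP BY Outlet ORDER BY Outlet"
).fetchall()

# Final total across all outlets
(grand_total,) = con.execute("SELECT SUM(Amount) FROM Expenses").fetchone()

print(subtotals, grand_total)   # [('North', 150.0), ('South', 100.0)] 250.0
```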

A database can enforce consistency and data integrity, which means that it can avoid duplication and
ensure data accuracy through its design and a series of constraints.

What Is the Structure of a Database?

At its simplest, a database is made up of tables that contain columns and rows. Data is separated by
categories into tables in order to avoid duplication. For example, a business might have a table for
Employees, one for Customers and another for Products.

Each row in a table is called a record, and each cell within a row is a field. Each column can be designed to
hold a specific type of data, such as a number, text or a date. This is enforced by a series of rules to
ensure that your data is accurate and dependable.

The tables in a relational database are linked through a key. This is an ID in each table that uniquely
identifies a row. Each table has a primary key column, and any table that needs to link to that table will
have a foreign key column whose value will match the first table's primary key.
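The primary-key/foreign-key link, and the way constraints enforce data integrity, can be sketched with sqlite3 (note that SQLite only enforces foreign keys when the pragma is switched on; the schema is invented for illustration):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("PRAGMA foreign_keys = ON")   # SQLite enforces FKs only when enabled
con.executescript("""
    CREATE TABLE Customers (
        CustomerID INTEGER PRIMARY KEY,       -- uniquely identifies each row
        Name TEXT NOT NULL
    );
    CREATE TABLE Orders (
        OrderID INTEGER PRIMARY KEY,
        CustomerID INTEGER NOT NULL
            REFERENCES Customers(CustomerID)  -- foreign key back to Customers
    );
    INSERT INTO Customers VALUES (1, 'Acme Ltd');
    INSERT INTO Orders VALUES (101, 1);       -- valid: customer 1 exists
""")

# An order pointing at a non-existent customer is rejected outright
try:
    con.execute("INSERT INTO Orders VALUES (102, 99)")
    rejected = False
except sqlite3.IntegrityError:
    rejected = True
print(rejected)   # True
```

The rejection is the database enforcing consistency by design, rather than relying on every application to remember the rule.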

A database will include forms so that users can input or edit data. In addition, it will have the facility to
generate reports from the data. A report is simply the answer to a question, called a query in database-
speak. For instance, you might query the database to find out a company's gross income over a
particular time period. The database will return to you the report with your requested information.

Common Database Products:

Microsoft Access is one of the most popular database platforms on the market today. It ships with
Microsoft Office and is compatible with all Office products. It features wizards and an easy-to-use
interface that guides you through the development of your database. Other desktop databases are also
available, including FileMaker Pro, LibreOffice Base (which is free) and Brilliant Database.

If you are considering a database for a medium to large business, you may want to consider a server
database based on Structured Query Language (SQL). SQL is the most common database language and is
used by most databases today.

Server databases like MySQL, Microsoft SQL Server, and Oracle are enormously powerful – but also
expensive and can come with a steep learning curve.

COBIT (Control Objectives for Information and Related Technologies)

COBIT is a widely utilized framework containing best practices for both ITGCs and application controls. It
consists of domains and processes. Its basic structure indicates that IT processes satisfy business
requirements, and that this is enabled by specific IT control activities. It also recommends best practices
and methods for evaluating an enterprise's IT controls.


The Committee of Sponsoring Organizations of the Treadway Commission (COSO) identifies five
components of internal control: control environment, risk assessment, control activities, information and
communication, and monitoring. These need to be in place to achieve financial reporting and disclosure
objectives. COBIT provides similar detailed guidance for IT, while the interrelated Val IT concentrates on
higher-level IT governance and value-for-money issues. The five components of COSO can be visualized
as the horizontal layers of a three-dimensional cube, with the COBIT objective domains applying to each
individually and in aggregate. The four major COBIT domains are: plan and organize, acquire and
implement, deliver and support, and monitor and evaluate.


Information technology operations, or IT operations, is the set of all processes and services that an IT
staff provisions to its internal or external clients and uses itself to run the organization as a business. The
term refers to the application of operations management to a business's technology needs. Operations
work can include responding to tickets generated for maintenance work or customer issues. Some
operations teams rely on on-call responses to incidents during off-hours periods.

Every organization that uses computers has at least loosely defined IT operations, based on how it tends
to solve internal and client needs. Elements of IT operations are chosen to deliver effective services at
the required quality and cost. IT operations are usually considered to be separate from IT applications. In
a software development company, for example, IT operations include all IT functions other than software
development and management. However, there is always some overlap between the departments.

IT operations determine the way an organization manages software and hardware and include other IT
support, such as network administration, device management, mobile contracting and help desks of all
kinds. IT operations management (ITOM) and IT operations analytics (ITOA) help an organization refine
the way that IT approaches services, deployment and support, and help to ensure consistency, reliability
and quality of service.

Current IT trends affecting IT operations include cloud computing, machine-to-machine (M2M)
communications and the Internet of Things (IoT). The efficiency of cloud computing typically means that
IT operations for a given organization require fewer administrators. The increasing interconnectivity and
automation of M2M and IoT require adaptations to traditional IT operations skill sets and business
processes.

Different organizations define IT operations in various ways; the term is used to describe both the
department that manages IT operations and the collection of services and processes that the department
delivers as standardized procedures.

The IT Operations Process:

Some methods will choose to prescribe a single approach, such as capturing architectural requirements
in the form of epics or pre-building “architectural runways,” but the Disciplined Agile framework
promotes an adaptive, context-sensitive strategy. DA does this via its goal-driven approach that indicates
the decision points that you need to consider, a range of techniques or strategies for you to address each
decision point, and the advantages and disadvantages of each technique. In this section we present the
goal diagram for the IT Operations process blade and overviews its decision points.