
Module 2: Information technology governance: Organization and planning for IS


Overview
In the previous module, you learned about the importance of information systems (IS) to modern organizations
and of linking IT plans to business plans in order to get the most value out of investments in these tools. You
also learned about some of the ways to think about organizations and their environments (for example, the
value chain and Porter's five forces).
In this module, you focus on how to ensure key decisions about IS (including authority, responsibility,
enablement, compliance, planning, structuring, and organizing) are effectively made to maximize business
value. IT governance is concerned with these issues at the broadest level. The module begins with an overview
of IT governance, outlines the key decisions to be made, and highlights key challenges of IS planning,
technology planning, data planning, IS economics, and developing a strategic plan.
You will develop your ability to advise on issues of corporate governance. You will learn how to identify, assess,
and advise on information required for management decision making, to evaluate the interrelationship of an
issue on different functions of the organization, and to apply concepts and approaches within and across
functional areas to develop integrative solutions, thus enabling you to evaluate implications and assess the
appropriateness of solutions beyond the immediate or short term.
The next three modules focus on the process of implementing a particular system. After identifying the types of
applications you wish to develop through strategic planning, you move into the process of managing IS
projects. Module 3 provides an overview of this process, while Modules 4 and 5 present more specific details
on the different activities involved in systems development.
2.1 IT governance
2.2 Short- and long-range technology planning
2.3 Data and information management issues
2.4 IS economics
2.5 Developing an IS strategic plan
Module summary
Module scenario: Planning is a whole business
Your Wednesday morning managers session is over. You presented some good ideas, and explained IS
positions on a number of topics. Now the managers are all returning to their responsibilities. You hope some of
what you said will stick, and help them see where IS can work for them. But right now you have another
problem: your own department needs updating. The team's technical skills are excellent, but the kind of updating
they need is big picture: a strategic framework. You need them to understand the responsibilities of IS as a
whole, not just their own area of expertise. And you need their help.
Part of budgeting for IS requires input from the department. Strategy is driven from the senior level down to
the departments, and each department must decide how best to align itself. This takes time, planning, and a
lot of discussion. And with the economy in a downturn, both budget and capital dollars have been reduced,
and the competition for them will be fierce. In the past, IS automatically got a budget number equivalent to a
percentage of net profit. That is gone. IS must now understand the economics of its projects the same as
other departments do. To do this successfully, your department must understand the impact of short- and
long-range planning, of the data- and information-management issues within IS, and of developing an IS
strategy. However, surrounding these concepts is the topic of governance, and what your department needs to
do to ensure compliance with industry-recognized procedures.
Back in your office, you send a reminder to everyone that the IS strategy planning session is this Friday at 9:30
a.m.
2.1 IT governance
Learning objectives
Evaluate the critical decisions about IT that organizations make, and the different governance
arrangements for making those decisions. (Level 1)
Identify the roles of the financial manager and other stakeholders in IT governance. (Level 1)
Assess the challenges of taking an IS plan from theory to practice. (Level 1)
Required reading
Reading 2-1: Six IT Decisions Your IT People Shouldn't Make (This reading is included in the MS2
casebook.)
Chapter 5, Section 5.5, Management Issues
Chapter 10, Section 10.3, Selecting Projects
Review Chapter 9, Section 9.1, Systems as Planned Organizational Change
LEVEL 1
IT governance considers the key decisions that must be made in managing and aligning IT with an
organization's objectives, and the different ways in which those decisions can and should be made. Formally,
IT governance refers to the assignment of decision rights and an accountability framework for IT to encourage
desirable behaviour in the use of IT (Weill & Ross, 2004).[1] The structures and processes that are established
for making IT decisions should support the firm's overall goals. Thus, for a firm pursuing a low-cost strategy,
the governance model should encourage behaviours that lead to the lowest reasonable cost for IT. For a firm
pursuing a differentiated strategy, the framework would be designed to promote decisions supporting the basis
of differentiation.
Governance has become a key topic in information systems in recent years, for three significant reasons:
The continuing need to align IT and business and ensure that IT provides appropriate business
value.
The rise of concern over governance in general following the accounting scandals and collapse of
share prices for a number of major U.S. corporations.
Ethical considerations with respect to governance. (For example, Reading 2-1, Six IT Decisions
Your IT People Shouldn't Make, mentions Princeton using applicant data to access the Yale
database on admissions. This raises an interesting issue about the ethics of Princeton admissions
personnel using applicants' data improperly, including the issue of intent and disclosure with
respect to data gathering.)
Legislation such as the Sarbanes-Oxley Act of 2002 and the Basel II Capital Accord requires firms to be much
more accountable for their decision-making processes and for the risks they assume. Sarbanes-Oxley (SOX)
was enacted in the U.S. in response to a number of high profile corporate scandals that led to significant losses
for investors. The legislation applies to all publicly traded companies in the U.S. (including Canadian companies
that are listed on either the NYSE or the NASDAQ, as well as Canadian subsidiaries of U.S. public companies).
One of the key provisions is the requirement for the CEO and CFO to certify that all financial disclosures are
accurate (and holding them personally accountable for any errors or omissions). Basel II is a similar system for
corporate accountability that applies to financial institutions that operate internationally. While neither SOX nor
Basel II imposes specific requirements on IT systems, IT systems provide much of the reporting data for
complying with these laws. Moreover, IT is a key source of operational risk in firms, and as such is subject to
reporting rules as well. As a result, significant IT process changes have been required in many companies to
support the reporting requirements of the legislation. For example, ensuring the security and auditability of
data has been a major task for many companies, often requiring the assistance of best practices or governance
frameworks.
Governance frameworks: COBIT and COSO
COBIT and COSO are two frameworks that assist organizations with the implementation of IT governance. The
two are complementary control frameworks, with COBIT taking more of an IT focus and COSO addressing the
controls needed to ensure sound financial processes. Together, these frameworks help verify the movement of
data throughout an organization, from the initial stages of transaction entry to the financial reports required by
senior executives such as the CEO and CFO. Data moves and transforms throughout organizations: assuring
regulatory agencies, investors, and customers that the information reported is sound and verifiable is an
ongoing concern of today's businesses.
COBIT
COBIT (Control Objectives for Information and Related Technology) is the framework developed by the
Information Systems Audit and Control Association (ISACA). COBIT is a series of components that help
organizations increase regulatory compliance and control over IT, integrate global IT standards, and reduce
business risks. COBIT was first released in 1996; COBIT 4.1 was published in 2007, and COBIT 5.0 was
scheduled for release in 2011. COBIT is composed of five key components:
A framework to organize IT governance practices by process and then link them to the actual
business needs of the organization
Mapping of business process descriptions and templates
Control objectives that support effective security and control over IT processes
Guidelines to maintain focus on objectives, performance measures, and the integration of IT
governance measures with other areas of governance
Maturity models to support process benchmarking and continuous improvement
COSO
COSO (the Committee of Sponsoring Organizations of the Treadway Commission) provides a framework that
gives executive management the ability to assess and enhance internal controls in order to improve risk
management. IT governance is implicit in the monitoring of internal control systems. The COSO framework was
first published in 1992. Since then, COSO has published improved guidance on monitoring internal controls and
on enterprise risk management. COSO is composed of five key framework components:
The control environment that establishes the processes for managing and developing employees
in an organization.
The control activities that are directives to ensure management policies and procedures are
followed.
Information and communications policies developed to ensure governance compliance.
Risk assessments from both internal and external sources, including objectives to manage the
risks.
The continuous monitoring of control systems to assess deficiencies and the need for
improvements.
ISO 38500
In 2008, the ISO (International Organization for Standardization) and the IEC (International Electrotechnical
Commission) created ISO 38500, a corporate IT governance standard that provides guidelines on the effective,
efficient, and acceptable use of an organization's information technologies.
Where COBIT can be categorized as a foundation for IT governance, ISO 38500 acts as a top-down
assessment of the IT governance structure. It does not replace the COBIT framework (which is usually an IS
directive); instead, its goal is to complement COBIT by bringing a senior-executive perspective to IT
governance. The aim of ISO 38500 is to provide the directors of an organization with a standard structure for
the six principles of good IT governance, a standard that can be referred to when evaluating the use of IT in
an organization. The six principles are applicable to most organizations, and reflect the behaviour that should
be adopted to guide better decision making. These principles do not provide a how-to framework, but they do
provide guidance, so that each organization can adopt and apply the principles as it sees fit.
Six principles of IT governance
Responsibility: Groups and individuals within an organization understand and accept their
responsibilities in respect of both supply of, and demand for, IT. Those with responsibility for
actions also have the authority to perform those actions.
Strategy: The organizations business strategy takes into account the current and future
capabilities of IT; the strategic plans for IT satisfy the current and ongoing needs of the
organizations business strategy.
Acquisition: IT acquisitions are made for valid reasons, on the basis of appropriate and ongoing
analysis, with clear and transparent decision-making. There is appropriate balance between
benefits, opportunities, costs, and risks, in both the short term and the long term.
Performance: IT is fit for purpose in supporting the organization, providing the services, levels of
service and service quality required to meet current and future business requirements.
Conformance: IT complies with all mandatory legislation and regulations. Policies and practices
are clearly defined, implemented, and enforced.
Human behaviour: IT policies, practices, and decisions demonstrate respect for human behaviour,
including the current and evolving needs of all the people involved in the process.
Whereas the principles set the behaviour for decision making, the model for ISO 38500 states that directors
should govern IT through a focus on three tasks:
Evaluate the current and future use of IT. Directors should consider the external or internal
pressures acting upon the business, such as technological change, economic and social trends,
and political influences, and should view evaluation as a continual process.
Direct preparation and implementation of plans and policies to ensure that use of IT meets
business objectives. Directors should assign responsibility for the preparation and implementation
of plans and policies. Plans set the direction for investments in IT projects and IT operations, and
policies should establish the acceptable use of IT (which includes the timeliness of information,
and compliance with the six principles of IT governance: responsibility, strategy, acquisition,
performance, conformance, and human behaviour).
Monitor conformance to policies, and performance against the plans. The performance of IT
should be monitored through established and appropriate measurement systems. Directors should
reassure themselves that performance complies with approved plans and business objectives.
Source: www.ISO.org. Reproduced with permission. All rights reserved.
Once the IT governance structure is established, the tactical structuring and requirements for meeting the
goals of the governance framework must be defined. The executive committee, created during the
implementation of ISO 38500, assumes the governance role and exercises input and decision rights across the
gamut of IT domains.
Governance map
In his 2004 article, Peter Weill outlined five broad classes of decisions about IT that are made in
organizations:[2]
IT principles: decisions about the broad principles for applying IT in the firm (such as "we will be
a fast follower" or "we will use only standard technologies")
IT architecture: decisions about the organizing logic for data, applications, and infrastructure
(process and data standardization)
IT infrastructure strategies: decisions about the main shared and enabling services used in the
firm (such as networks and data centres)
Business application needs: decisions that focus on specifying the business need for applications
IT investment: how much to spend and on what sorts of processes
Each of these decisions requires different sorts of information. For example, IT investment focuses on the
cost/benefit trade-offs of different approaches and the overall potential for business value creation, whereas IT
infrastructure strategies might be more concerned with the emergence of new standards and the risks of
obsolescence from aging technology. As such, they require involvement from different members of the
organization.
Weill also outlines different decision-making models that are commonly used in these situations. The six
dominant models are:
Business monarchy: Decisions are made by the executive committee. Organizations that have not
yet adopted a governance framework will rely on senior business leaders such as the CEO.
IT monarchy: Decisions are made by senior IT leaders such as the CIO, Director of IS, or the
systems manager depending on who is the most senior within an organization.
Duopoly: Decisions are made jointly by senior business and IT leaders.
Federal: Decisions are made jointly by central business leaders along with business leaders from
the different areas of the firm, such as strategic business units (SBU) or functional departments.
IT may or may not be represented in this model.
Feudal: Decisions are made locally by each SBU or functional unit.
Anarchy: Decisions are made in multiple locations, typically by individuals with little accountability.
This strategy is rarely seen, and unlikely to ever be considered effective.
Weill and Ross conducted a research study to examine these models in terms of both their prevalence and
effectiveness when used to make a variety of decision types (2004). The results can be expressed in terms of
the governance map shown in Exhibit 2.1-1. The map shows which styles were most commonly used for the
different decision types, and which governance mechanisms had input and influence on those decisions. The
map also indicates which styles were found to be most effective for a particular decision type. Note that the
effectiveness results are not definitive: the firms Weill and Ross studied did not vary sufficiently for the
researchers to evaluate how effective the strategies were, only which strategies were actually used.
As the exhibit illustrates, decisions based on IT principles tend to be made by a combination of senior business
and IT leaders (duopoly). Such a structure ensures that the right information is brought to the table for
decision-making, and that the right considerations are included. The worst model for making IT principles
decisions is the federal model, typically because there are so many stakeholders that it becomes difficult to
make focused decisions.
Exhibit 2.1-1: Governance map
For IT infrastructure and architecture decisions, the most common decision model is the IT monarchy. These
decisions focus on the more technical elements of IT planning, and thus are most usefully located within the IT
domain. Here again, the federal model is the least effective approach.
For business application needs, either a federal or a feudal model is most common, and neither model is
universally more effective. Here it depends on the business needs of individual firms. For related business units
that need to share infrastructure, a more centralized model (such as the federal model) is helpful. But for
unrelated units where pooling of resources is not particularly helpful, a feudal model with local decision-making
can result in greater responsiveness and agility. In this decision-making area, the IT monarchy is the worst
approach, since this area is closest to the business needs and thus requires significant input from business
leaders. Locating the decision-making outside of the business units decreases both involvement and
accountability on the business side.
In terms of IT investment, a great many models were followed. The best approach seems to be a duopoly,
with active involvement from both senior business and IT leaders. As with IT principles, this allows for tightly focused
planning that takes into account the needs of the firm as a whole as well as the capabilities of the technology.
The worst model here was the federal model, because the combination of central decision makers and
decentralized decision makers often results in turf battles and political gamesmanship. The investment decision
is the responsibility of the executive committee and a subgroup of senior officers, including the CIO. It is
important to note that these decision models tend to be pervasive in an organization and drive all
decision making, not just IT decisions. Corporate culture also often drives decision making, at least as much as
decision type does.
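Stated in code, the governance-map findings described above can be summarized as a small lookup table. This is purely an illustrative sketch: the decision types and style ratings below are taken from this section's discussion, not from an official encoding of Weill and Ross's map.

```python
# Illustrative summary of the governance-map findings discussed in this
# section: for each decision type, the style the text identifies as the
# best (or most common) fit, and the style it flags as the worst fit.
GOVERNANCE_MAP = {
    "IT principles":              {"best": "duopoly",           "worst": "federal"},
    "IT architecture":            {"best": "IT monarchy",       "worst": "federal"},
    "IT infrastructure":          {"best": "IT monarchy",       "worst": "federal"},
    "business application needs": {"best": "federal or feudal", "worst": "IT monarchy"},
    "IT investment":              {"best": "duopoly",           "worst": "federal"},
}

def least_effective_style(decision_type: str) -> str:
    """Return the governance style this section flags as the worst fit."""
    return GOVERNANCE_MAP[decision_type]["worst"]

print(least_effective_style("IT investment"))            # federal
print(least_effective_style("business application needs"))  # IT monarchy
```

A structure like this makes the pattern easy to see: the federal model is the weak choice for most decision types, except for business application needs, where keeping decisions out of the IT monarchy matters more.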
It is, however, significant that many IT decisions today (45%, as reported by Gartner[3]) are the responsibility of
the CFO and not the CIO. IT governance principles are as much about leadership as control, and investment
decisions that apply a cost to a corporate direction are typically the domain of the CFO, regardless of whether
the responsibility is under IT or another department. Gartner also reports that 42% of IT organizations report
to the CFO.
Module scenario: Governance with hesitation
The first topic on the Friday meeting agenda with your team is IT governance. The team is receptive to the
idea in areas of strategy alignment, but fears the added work involved in proving accountability on the financial
side of information. One of your developers asks why it isn't accounting's responsibility to prove the financial
statements. He says, "I mean, we use an ERP system with integrated financials, so how could they go wrong?"
You explain: "Yes, the software is tightly integrated, but the point of governance is accountability. Can we
actually say that we know with 100% certainty who has access to which programs, and whose entries
contribute to the final statements? And with our CFO legally responsible for the numbers, would you sign without
knowing how those numbers get generated?"
"That's great," says your network analyst, "but that doesn't help us connect our needs to the business. I need
new cabling in the warehouse and another length of fibre. What business objective does this satisfy? I don't
want some other manager saying what we do and don't need in IT based on some uninformed committee
recommendation."
"So," you say, "let's talk about how we can change our thinking about IT decisions to reflect a more business-
oriented approach, rather than a purely technological one. Some IT decisions are ours, but others should be
decided with input from other departments. Let me show you some slides of how this could work." You then
take your team through the decision-making models and structures of a well-governed company.
Implementing governance arrangements
A variety of different mechanisms are used to implement these different governance arrangements. These
include different decision-making structures, processes for aligning, measuring, and valuing IT decisions with
business needs, and communication strategies for ensuring that structures and policies are respected.
IT decision-making structures
Among the most common decision-making structures are:
executive or senior management committees, which provide ongoing business leadership and
involvement in IT, often referred to as IT steering committees
IT leadership committees composed of senior IT leaders from different areas of IT, such as
infrastructure and development
process teams made up of IT members and business/IT relationship managers
IT councils composed of business and IT executives
architecture committees
capital approval committees
Of these, CIOs rate IT leadership committees and business/IT relationship managers as the most effective
means of ensuring adequate decision-making. In both cases, IT expertise is matched with business knowledge
so that decisions are more comprehensive than departmental.
IT decisions for process effectiveness, alignment, and value
An effective IT department is the first step towards alignment. This means measuring and valuing success
factors on IT projects, such as whether projects are (i) completed on time and (ii) delivered within budget.
When IT is better able to account for its own efficiencies, it is better able to approach alignment, and the
relationship is reciprocal: alignment, in turn, reinforces those efficiencies. Consistent project management
success is the first step for IT towards understanding business priorities and ensuring there is adequate staff to
respond to those needs.
To assist the organization in measuring its IT effectiveness, alignment, and value, these processes help track
IT decisions against business needs:
Various systems for tracking IT projects and resources consumed: From Microsoft Project to
web-based solutions like Redmine, Basecamp, and Huddle, project management software is
mature and specific. Due dates, milestones, Gantt and PERT charts, and resource tracking and
costing are all available. The Val IT framework also provides project tracking.
Service level agreements: SLAs help IT establish guidelines for its users on how quickly IT will
respond to a variety of service requests. Help desk and trouble-ticket systems help IT create
these agreements, which can then be tracked and evaluated. Although SLAs technically measure
the effectiveness of IT in meeting its service commitments, their introduction may signal an
alignment issue with an overall governance commitment to improve customer service.
Formally tracking the business value of IT: Completed projects require an assessment of the
deliverables to confirm that the approved value that drove the project has actually been realized.
Some project software solutions can assist here, but so can project frameworks like Val IT from
the IT Governance Institute, which aims to value IT initiatives through three principles: portfolio,
program, and project.
Chargeback arrangements: Chargeback agreements are difficult to manage; however, they can be
successful in reshaping the behaviour of departments that routinely rely on IT project and
infrastructure support. If IT is allowed to recoup costs by charging back to the departments whose
projects are responsible for large investments, then departments will better understand the
business value proposition before committing IT resources. This usually means the project is
better understood.
Balanced scorecard: The balanced scorecard is a type of performance measurement framework.
It drives strategy by using performance measures and follow-up. It is best designed
simultaneously with the business case of a project, and may include visual devices such as
dashboards. The Balanced Scorecard Institute[4] provides comprehensive tools and information.
Systems for tracking IT projects and resources remain the most important as far as CIOs are concerned.
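To make the chargeback idea concrete, here is a minimal sketch of proportional cost allocation in Python. The department names and dollar figures are hypothetical, and real chargeback schemes typically add tiered rates, fixed infrastructure shares, and negotiated adjustments on top of this simple proportional split.

```python
def charge_back(total_it_cost: float, usage_by_dept: dict) -> dict:
    """Allocate a shared IT cost pool to departments in proportion to usage.

    usage_by_dept maps department name -> recorded usage units (for example,
    support hours, server hours, or tickets). Returns department -> cost.
    """
    total_usage = sum(usage_by_dept.values())
    if total_usage == 0:
        raise ValueError("no recorded usage to allocate against")
    return {
        dept: round(total_it_cost * usage / total_usage, 2)
        for dept, usage in usage_by_dept.items()
    }

# Hypothetical example: a $120,000 cost pool split across three departments.
allocation = charge_back(120_000, {"Sales": 300, "Warehouse": 150, "Finance": 150})
print(allocation)  # {'Sales': 60000.0, 'Warehouse': 30000.0, 'Finance': 30000.0}
```

Even this crude split illustrates the behavioural point made above: once a department sees its own usage driving a real charge, it has a reason to weigh the business value of a request before committing IT resources.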
IT choices for communications vehicles
Any project can be hampered by poor communications. If people can't follow the progress or can't see the
problems and how they are being addressed, then there is little chance of getting the organizational support
that projects need to succeed. There are many ways to post information, which may be shared through the
following key communication vehicles:
senior management announcements
web-based portals, blogs, wikis, and social media
intranets for IT
informal communication
Having a CIO (or equivalent) as a member of senior management is also important in terms of communicating
the role and importance of IT. Without this, the responsibility falls on the IT manager, who is often immersed
in the day-to-day running of the department and is not focused on the forward-thinking CIO tasks outlined in
the six IT governance principles: responsibility, strategy, acquisition, performance, conformance, and human
behaviour. However, as Gartner reported, CFOs are now taking a more active role in IT decision making,
shifting the role and focus of IT.
Specific roles for business leaders
As a non-IS manager, you will be involved in IS decision making in a number of ways. First, you will suggest
needs for information and systems within your department that will drive the development agenda of IS.
Second, you will participate in determining the costs and benefits of the systems that will support your
department. Third, you may be asked to participate in priority-setting committees to determine which of many
opportunities should be pursued. (There are usually more of these than it is possible to undertake, either
financially or operationally.) Finally, depending on the budgeting approach used in your organization, you may
be required to include the costs of IS projects in your departmental budget. To ensure you are making good
decisions for your department and your organization, you need to understand the issues involved in planning
and realizing the IS infrastructure: that is, how IS sees itself within the organization, and its roles,
responsibilities, and interactions with other departments as it tries to enhance its value to the company. Since
more of these tasks may continue to become the responsibility of the finance department, the better
accountants understand the methods employed in IT decision making, the better they will be at assessing the
value of those decisions from both a business and a financial perspective.
The IS plan
In theory, the IS plan begins with a consideration of the business strategic plan and compares this plan to the
current IS infrastructure. This comparison leads to the articulation of new systems development plans and
plans for overall management of the IS function, which in turn results in budget implications for the
organization. This top-down approach is illustrated in Table 10-1 of your textbook, which outlines what should
be included in an IS plan: the purpose of the plan, strategic business rationale for the plan, current system
inventory and documentation, new developments, management strategy, implementation plan, and budget
requirements.
Challenges of putting an IS plan into practice
The first challenge relates to the issue of strategic alignment (Module 1). If IS strategy is to be fully aligned
with business strategy, the implied sequence in the theoretical IS planning process described above (business
strategy first, then IS strategy) does not hold. Business strategy is not always determined first, followed by IS
strategy. Sometimes, particularly in times of great technological change, IS possibilities drive business strategy.
Thus, while the ideal view suggests that business plans come first, the organizational reality is more iterative
than this ideal suggests. As a manager, you need to learn to work in this iterative fashion, in partnership with
IS and IT specialists, to achieve an IS/IT strategy that is aligned with that of the organization as a whole. This
is not to say that business needs are secondary to the technology; business value still is the primary
determinant of the ultimate strategic choices. But from a process perspective, a less linear approach is quite
common and helpful, and for IS, almost a necessity.
The second challenge is in determining the time horizon for IS planning. Historically, organizations have
engaged in long planning cycles. Long-term plans might focus 10 years out, medium-term plans might focus
on three to five years, and short-term plans would focus on the next year. This planning horizon has been
challenged over the past 20 or so years, because the amount of change taking place in the business
environment makes looking this far ahead difficult.
Some argue that long-term planning is not possible, given this pace of change. But long-term plans are
necessary for outlining the broad strategic direction of the firm and the means to get there. Moreover, some
projects that IS undertakes would not be completed within a one-year cycle. Long-term plans simply cannot be
dismissed. Certain IT infrastructure projects continue to realize value years after the completion of the project.
Without a long-term plan, these could be dismissed as failed returns on investment in the short term, without
proper consideration of their longer-term cumulative value.
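The point about long-term value can be made concrete with a simple net present value (NPV) sketch. The figures below are hypothetical: an infrastructure project that looks like a poor investment after one year can show a positive NPV once the benefits over its full useful life are counted.

```python
def npv(rate: float, cash_flows: list) -> float:
    """Net present value of a series of cash flows.

    cash_flows[0] is the initial (year-0) outlay, usually negative;
    later entries are net benefits realized at the end of each year.
    """
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# Hypothetical infrastructure project: $500k outlay, $150k/year net benefit,
# evaluated at an 8% discount rate.
outlay = -500_000
yearly_benefit = 150_000
rate = 0.08

one_year = npv(rate, [outlay, yearly_benefit])           # strongly negative
five_years = npv(rate, [outlay] + [yearly_benefit] * 5)  # positive

print(round(one_year), round(five_years))
```

Judged on a one-year horizon the project looks like a failure, yet over five years the discounted benefits more than recover the outlay, which is exactly why a short planning window can cause infrastructure investments to be dismissed prematurely.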
In practice, few organizations look more than three to five years ahead when defining IS plans. This seems
reasonable, given the rate of both organizational and technological change. The three-to-five-year time horizon
forces a look beyond the immediate priorities of the organization and moves thinking into the sort of broad
strategic issues that must be considered to enable proactive, rather than simply reactive, strategic choices.
The final challenge to the theoretical view of the IS plan relates to its implementation. It is tempting to view
the IS plan as the blueprint for what is to happen in the organization: once the planning document (assuming
there is a document) is complete, simply following the steps as outlined will assure success. But this view is
simplistic. It ignores both the need to adjust the plan as business realities change and the weaknesses inherent
in the assumptions on which these plans are often based. Therefore, you need to think of an IS plan as a
living document, one that will be adjusted over time as needs demand.
1 Weill, P., & Ross, J. (2004). IT Governance: How Top Performers Manage IT Decision Rights for Superior Results. Boston, MA: Harvard Business School Press.
2 Weill, P. (2004, March). Don't Just Lead, Govern: How Top-Performing Firms Govern IT. Massachusetts Institute of Technology, Center for Information Systems Research, Working Paper No. 341.
3 Savvas, A. (2012, June 11). CFOs making the IT decisions in nearly half of businesses: Gartner. Computerworld UK. http://www.computerworlduk.com/news/it-business/3283161/cfos-making-the-it-decisions-in-nearly-half-of-businesses--gartner/
4 Balanced Scorecard Institute, accessed April 18, 2013, http://www.balancedscorecard.org/
2.2 Short- and long-range technology planning
Learning objectives
Assess the principal drivers of hardware and software decisions in organizations. (Level 1)
Identify the specific issues in hardware and software planning. (Level 1)
Evaluate the advantages and disadvantages of centrally determined technology standards.
(Level 1)
No required reading
LEVEL 1
At the heart of IT decision making are decisions about what technologies (hardware, software, networks, and
databases) will be used to support the information requirements of the firm. This topic examines the key
questions that should be asked in making decisions among competing technologies to support a given
requirement.
Why this matters to accountants
Advances in technology have two drivers: one is strategic, the other is cost. New technologies always carry a
premium price tag. Accountants need to know what new technologies are coming so they can assess the
financial and company risk of adopting a technology before it has matured or before market forces bring
the cost down. Again, depending on the decision-making structure of the organization and its strategic
position, certain technologies, although attractive, may represent too big a risk despite IT's willingness to
adopt them.
The drivers of technology decisions
Application demands
Technology decisions are often driven by application requirements. For example, suppose you are working for a
medium-sized public accounting firm. An analysis of business requirements suggests that building and
maintaining client relationships will be of increasing importance in the future. At present, however, the firm
maintains little computerized data on clients and has experienced problems ranging from keeping addresses
current to knowing what kinds of additional services might be useful to clients. To address these needs, the
organization decides to build a single, integrated client database.
The organization can implement this system in a number of ways:
It can purchase an off-the-shelf system for customer tracking.
It can build an in-house system.
It can use the offering of a SaaS provider.
Depending on how the firm decides to proceed, there may be requirements to upgrade existing computers.
Perhaps the preferred off-the-shelf system requires more powerful computers to run, more storage space, or
networks. If a local area network is not already in place, the goal of creating a single repository of client
information will demand networking. Even if a network is in place, it may not have the application-sharing
capabilities that this system would require, and will need improvements over and above the cost of the
software. With a SaaS provider, although upgrades to existing equipment may be minimal, a thorough analysis
of the service level agreement with the provider and a testing of security and privacy settings must be made.
Hardware lifecycles
Sometimes technology decisions are driven not by specific application requirements but rather by the need to
maintain an appropriate basic level of technology as a platform on which to build applications in general. Any
computer has a finite life, and upgrades must be undertaken before reliability and performance are
compromised. Networking equipment must similarly be replaced to avoid failures that will hurt the ability of the
business to communicate. The issue of locking into a particular hardware standard is something that IT must
assess. Desktop vendors provide compatible model numbers for IT departments to ensure processor and
chipset compatibility over a guaranteed timeframe. This way, IT departments can verify the workings of a
specific vendor model with their software, if compatibility is an issue, and then only order those specific
models. Networking hardware is vendor-specific in a similar sense: a particular vendor (Cisco, Dell, and so
on) has its own network OS that connects and controls its devices. Mixing and matching network
equipment may save money, but it can create a management nightmare for IT.
Why this matters to accountants
Understanding that price is not the only factor in IT is the first step toward assessing IT infrastructure costs.
Not all network hardware is interchangeable. IT is responsible for mapping the current and future state of a
network so that decision makers can understand how the future direction will support the future business.
However, purchase decisions must be flexible enough that the lowest-priced equipment is not the only deciding
factor. For example, future business plans may include shop floor automation, and IT has sourced the best
hardware for incorporating RFID. Or remote financial branches may require secured connections with
virtualization, requiring a hardware choice in future plans to accommodate this direction.
Customer and supplier demands and other external forces
Finally, some technology decisions are made in response to the demands of customers or suppliers. For
example, if your business partners make changes in their technology, you may need to make changes to
continue to communicate with them. Also, if technology suppliers stop supporting particular pieces of hardware
or software, you may need to upgrade, not because what you have no longer performs to specification, but
because you cannot obtain replacement parts or additional components or because they no longer interface
with other applications.
These changes often have no particular benefits in terms of improved information or business processes. They
are often frustrating for managers and users in organizations, who do not see the need for change, and who
find themselves forced to change their behaviours without reaping benefits from the extra work. To manage
these projects effectively, you need to communicate the reasons for the change and minimize the disruptions
at the user level.
For example, Walmart requires all of its suppliers to connect to its supply chain network, and compliance has
sometimes meant incurring costs for both hardware and software compatibility.
Software planning
Licenses and upgrading
When you purchase software (as opposed to building it in house), you purchase the right to use the software
under specific conditions, rather than owning the intellectual property of the software itself. In short, you are a
licensed user of the software rather than an owner.
In the past, many software agreements were for indefinite use and often included source code, so that
customers could modify the software to their own particular needs. This is no longer the case. Software
vendors are increasingly moving toward defined license periods. This is sometimes accomplished directly, in
that your license to use the software simply expires and you are no longer able to access it. At other times, it is
accomplished more indirectly. The vendor releases annual versions of the product, and while you are technically
still licensed to use the old version, upgrading to a current version requires going through intervening versions.
If you do not regularly upgrade, you will have to purchase new versions at the full cost rather than the upgrade
price, and the difference between these two costs is substantial. This effectively forces organizations to upgrade
their licenses on a regular basis. From a risk-management perspective, however, the savings over multiple
years could outweigh the cost of eventually purchasing a new version, if the organization can accept the risk of
running an older version for a longer period; this is one of the areas where a strategic view of IT can help the
business make that decision. Of course, the advantage of regular upgrading is that you get the latest
functionality available, both from a user perspective and in terms of performance, reliability, and security. Even
if a software version were complete and error-free for your needs, you would not be advised to remain at that
level indefinitely. It is not practical given the pace of change in IT around your company: compatibility issues
with suppliers and customers would eventually surface, and the cost of then upgrading to a supported level
could be prohibitive.
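The upgrade-versus-skip trade-off described above can be roughed out numerically. The following sketch uses entirely hypothetical prices (the UPGRADE_PRICE and FULL_PRICE figures are illustrative, not real vendor pricing) to compare paying for each annual upgrade against skipping upgrades and eventually buying a full new license:

```python
# Hypothetical price assumptions -- real vendor pricing will differ.
UPGRADE_PRICE = 150   # cost to move up one version
FULL_PRICE = 500      # cost of a full new license after skipping upgrades

def cost_of_upgrading_every_year(years):
    """Total spend if the firm buys each annual upgrade."""
    return UPGRADE_PRICE * years

def cost_of_skipping(years):
    """Total spend if the firm skips upgrades, then buys one full license."""
    return FULL_PRICE

for years in (1, 2, 3, 4, 5):
    annual = cost_of_upgrading_every_year(years)
    skip = cost_of_skipping(years)
    better = "skip" if skip < annual else "upgrade"
    print(f"{years} year(s): upgrade={annual}, skip-then-buy={skip} -> {better}")
```

With these assumed numbers, upgrading annually is cheaper up to year three, after which skipping wins on price alone; in practice, the break-even point shifts with real prices and with the operational risks of running an outdated version.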
This move towards defined license periods reflects a broader trend in the industry toward SaaS. In the software
as a service model, applications are hosted remotely from the firm (typically by the software vendor) and the
firm pays a monthly fee for the use of the software, sometimes in addition to upfront license costs. SaaS models
have a number of benefits and limitations. In terms of benefits, SaaS requires less internal IT infrastructure
since the application is typically housed at an outside firm. It is typically accessed via the Internet. Version
upgrades are made by the vendor on a regular basis, so that users are always working in the most current
release (at least as far as incremental releases are concerned; major changes are a bit more complicated). This
can be a very cost-effective model for the firm. On the other hand, there is less control over the application
and as a result there is a higher degree of dependence on the outside vendor. With a highly qualified vendor
this may not represent a problem. (It may actually be a benefit, since the vendor specializes in the provision of
the software and may therefore be more capable than internal staff.) SaaS typically doesn't allow a lot of
customization, and thus can be somewhat inflexible. Many managers worry about the security risks of hosting
data remotely and accessing it via the public Internet. While not wanting to make light of these issues,
experience suggests that both risks can be adequately managed (as will be discussed in Module 9).
SaaS has been an important model for organizations to judge the effectiveness of storing critical data outside
the confines of the corporate network. It has opened the way for the newest metaphor of hosted processes:
cloud computing. Where SaaS was concerned with hosting specific business application software, cloud
computing extends hosting to include desktop applications and backups, and treats services as licensed
utilities in which the web-services model can be employed much more broadly across all devices; mobility and
flexibility define the key characteristics of a cloud solution. Owned software may become less and less of a
business model, replaced by pay-per-use software services delivered as utilities. On the other hand, in the
cloud-computing model, customization is typically not allowed and software upgrades take place at regular,
predetermined times. Just prior to an upgrade, an organization would review all of its internal processes,
because the organization's systems need to be in sync with the outsourced software. For example, suppose a
small-to-medium business uses CRM in a cloud-computing environment. The CRM hosting service announces a
major upgrade that requires synchronized upgrades to internal financial systems and office-productivity
applications. From this moment on, the business has a limited time to upgrade all additional systems;
otherwise, it will be left out of the cloud.
License considerations in an IS plan
An IS plan must consider the current state of software licenses and the requirements for license renewal, and
must make trade-offs between faster and slower upgrade cycles. To learn more about software licensing rules
for individuals and organizations for one vendor, visit the Volume Licensing page on the Microsoft website.
Many organizations license Microsoft applications through agreements like Open, Select, and Enterprise,
depending on the number of licenses and the size of the organization. In keeping with other cloud-based
offerings like Google Docs, Microsoft offers Office 365, its own online cloud version of MS Office, closely tied
to its SkyDrive cloud storage.
Furthermore, an IS plan must include a review of all software to ensure it is not pirated software. The use of
pirated software could be a very expensive problem for your company, not only in dollars and cents, but also in
reputation. You could end up paying for software you cannot update or software that lacks the security features
of legitimate software. It is good governance to ensure your company uses legitimate software. There are
many choices to assist companies in tracking their software licensing as part of a software audit. Lansweeper
(http://www.lansweeper.com) is one such licensed software application, and Spiceworks
(http://www.spiceworks.com) is a free alternative. Both provide audits of installed software so that IT
managers can verify both the legitimacy of their purchases and the numbers of licenses.
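At their core, such audit tools compare what is installed across the network against what has been purchased. The comparison logic can be sketched as follows (the product names and counts here are invented for illustration; real tools discover the installed software automatically):

```python
# Hypothetical audit data -- in practice an inventory tool collects this.
licenses_owned = {"OfficeSuite": 50, "CADPro": 5, "AVScanner": 60}
installs_found = {"OfficeSuite": 48, "CADPro": 7, "AVScanner": 55, "PhotoTool": 3}

def audit(owned, installed):
    """Flag titles with more installs than licenses (a compliance risk)
    and titles with unused licenses (a potential saving)."""
    findings = {}
    for title in set(owned) | set(installed):
        gap = installed.get(title, 0) - owned.get(title, 0)
        if gap > 0:
            findings[title] = f"OVER-DEPLOYED by {gap}"
        elif gap < 0:
            findings[title] = f"{-gap} unused licenses"
    return findings

for title, finding in sorted(audit(licenses_owned, installs_found).items()):
    print(title, "->", finding)
```

Note that software found with no matching license at all (PhotoTool in this invented data) is exactly the kind of finding that signals a possible pirated or unauthorized install.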
Custom-designed applications
The discussion of licenses applies to the situation where you are purchasing software components. However,
when you are working with custom-designed applications, there are different aspects of planning to be
considered. Essentially, you move from deciding whether or not to adopt an existing upgrade to focusing on
how much to include in an upgrade, how to build it, and how to implement it. An upgrade becomes yet
another systems development project.
You will learn more about these projects in Modules 3 to 5. What is important to learn here is that software
planning includes both plans for new systems ideas and for the maintenance of existing software and that the
need to upgrade exists whether you purchase applications or build your own.
Business process reengineering showed many organizations that their own custom-developed software, over
time, was just as proprietary as purchased software. Upgrade compatibility issues, application lock-in, and poor
documentation and support (all the reasons given for avoiding software vendors) were inherent in custom-built
software solutions too.
Upgrading to support future business requirements
Perhaps the most important question to ask when examining existing applications and making decisions about
what to upgrade is: If we maintain the current versions of our information systems without upgrading, will this
produce a risk to our organization (where the risk may relate to finances, processes, people, security, and so on)?
Suppose you operate a retail business selling craft supplies. Based on your analysis of the business
environment, you want to provide customer ordering on the Internet to expand your potential market. Your
current ordering software may or may not support this requirement. It may not be able to link with the kinds
of tools that would be needed to build the front-end interface, and it may not have sufficient error control to
be used by customers. Designing a system for a small set of highly trained users demands less software error
detection and correction than does designing software to be used by large numbers of global untrained users.
Even if your current ordering software works well today and your license is current, you should consider
upgrading this software now in preparation to support a future business requirement.
The open-source software movement
Most software on the market today follows proprietary standards. Packages are built and sold by a particular
vendor whose value proposition rests on providing capabilities that will be of benefit to adopters at a
reasonable cost. For example, Microsoft Office suites integrate with other Microsoft-based solutions like
SharePoint Portal. In this way, adopting a Microsoft vision of computing ensures compatibility with
other Microsoft products, in a sense reinforcing the proprietary model.
There are those who argue that the proprietary model is inappropriate. This model discourages information
sharing and application integration since different proprietary systems don't generally communicate with each
other. For the same reason, it makes it difficult for users to adopt the best software to meet specific needs.
Consider enterprise systems. These are large, integrated software packages designed to meet most of the
data-processing needs of the organization. They include modules for accounting, inventory, human resource
management (HRM), and so on.
Different vendors produce certain modules better than others. For example, PeopleSoft has long been a leader
on the HRM side of enterprise systems. An organization might want to purchase the PeopleSoft (Oracle) HRM
module along with the inventory and accounting modules of a different vendor (perhaps SAP). The trouble is
that the modules from different vendors (best-of-breed) are not designed to work seamlessly with each other.
Databases can also be incompatible. Because each vendor sells all of the modules, it is not in their interest to
make it easy for users to do this kind of mixing and matching.
Enter the open-source software movement. What if the software source code (the human readable program
instructions) was made available to developers who wanted to build related applications? What if developers all
over the world could look at software source code to find errors and security holes? The expected result would
be software that is cheaper, performs better, and can be tailored more precisely to the needs of individual
users. This does not necessarily make all open-source software free. For example, the Linux operating system
(one of the better-known open-source products) is typically purchased through one of a number of vendors;
Red Hat is one of the biggest. Google offers Google Docs, once free but now at a nominal
cost to organizations. Cloud-based solutions extend the delivery of open-source software. Although cloud-based
storage solutions (such as Dropbox.com) offer a minimum amount of free storage to individuals, the business
community must pay for the service. Many users opt to buy the software rather than download it for free
because they can obtain support and documentation more easily through this route.
The open-source community, however, rejects this criticism of a lack of support. They provide support options
available 24/7, including bug tracking and fixes, how-to questions and answers, blogging, forums, and live
online support. Their view is that there is little the proprietary software vendors can offer that they can't match.
Sustainability of open-source software
Some argue that open-source software cannot be sustained as a business model because you can receive the
software for free. Yet, the open-source movement has gained momentum in the past decade. Linux is now one
of the most commonly used operating systems in IT department servers, and Apache (an open-source web
server) is the most commonly used web server. Linux, however, has yet to displace Microsoft on the desktop,
where Windows still holds the vast majority share, or on corporate servers running financial, ERP,
and file-server applications.
But even Microsoft is releasing more of its source code to developers of applications, and is working toward
involving the developer community in the development of its .NET applications. Hardware companies see open-
source software as a way to increase revenue and market share. IBM and HP have very strong Linux programs,
and Apple has based its operating system on another open-source Unix variant, Darwin. The open-source
software movement may not be making money in the traditional sense, but it affects how companies use and
purchase hardware and software.
What open-source offers organizations today is choice. From operating systems (Ubuntu, openSUSE), to
browsers (Firefox), to business applications (SugarCRM, Compiere, Openbravo), open-source software solutions
are offering organizations more choice than just traditional vendors.
However, a threat to the immediacy of open-source solutions today is downloadable applications (apps).
These apps are portable, mobile, single-solution, easy to install, and immediately accessible. Many are free, and
those that require payment often cost less than $10. The success of the app approach over more traditional
application delivery has been duplicated beyond Apple's original App Store to now include Google Android
Market, BlackBerry App World, and the Windows Store. All of these sites offer apps for their own proprietary
devices (from tablets to smartphones to MP3 players), available instantaneously over WiFi. This change not
only represents a single-solution, single-license approach but also threatens the open-source market, because
proprietary solutions, once the hallmark of what open source fought to eliminate, are now back stronger than
ever in app stores.
Hardware planning
Keeping current
As explained earlier, changes in a firms hardware may be precipitated by application demands, hardware
lifecycles, or by external forces. Whatever the reason for the change, numerous factors have to be considered
in planning.
The issue of evergreening (keeping hardware current with the latest developments) is particularly relevant
to personal computing hardware. Knowledge workers demand up-to-date hardware to support their personal
computing. Often, this is driven by the pace of change in software. For example, upgrading from Office 2003 to
Office 2007 drives a need to upgrade the PC to meet the basic requirements of the application software.
However, cloud computing may offer a solution to the issues and cost of evergreening within IT. With cloud
computing, the push is away from locally installed applications on current hardware to browser-based
operations. If the browser becomes the de facto application for hosted business applications, then the hardware
itself becomes secondary, or at least less important in terms of raw processing power and compatibility. The
compatibility will be in the browser software, potentially removing the need for certain evergreening operation
models in place in IT departments today.
Upgrading When and why?
What do you do when people demand faster, more powerful desktop computers? What do you do when the
business demands higher security or adherence to Green IT initiatives? How often do you upgrade these
machines, and how do you finance them? What do you do with the technology that you replace?
There are more choices today when it comes to information technology. Personal computer (PC) technologies
probably should be replaced on an ongoing basis, depending on the necessity of the applications they run
(CAD, engineering, large data set analysis) and not simply as a matter of course. Large organizations, however,
still replace PCs on a schedule, which makes sense when calculating the total cost of ownership (TCO) of PC
investments. But as SaaS makes gains in the hosting of enterprise software, the need for high-powered
desktop PCs will fade because the primary means of connectivity is an Internet browser. Companies are
increasingly opting for thin-client solutions (desktop virtualization) where the replacement cost is cheaper than
a new PC purchase and software version control is centralized. Plus, web-based cloud OS environments, such
as Jolicloud, offer inexpensive netbook computing and extend into the traditional desktop by providing an OS
complete with common desktop apps. However, in organizations where replacement computers follow an
established schedule, upgrading can be accomplished in two basic ways.
The upgrading plan and cycle
The first basic approach to upgrading is to replace each computer every three years. During the upgrade year,
everyone gets a new computer. The advantage of this approach is having a common standard for machine
types (desktop, laptop, server, etc.), which makes supporting them much cheaper. The downside is that the
upgrading effort and cost is huge in the upgrade year and non-existent in other years. When you consider a
large organization such as Public Works and Government Services Canada, a department of the federal
government that has over 18,000 desktop computers in operation, this approach can become unwieldy.
Large organizations where hundreds of desktop computers may be replaced in a single upgrade often contract
the services of consulting firms to assist in the transition. These firms have vast work areas where multiple
computers can be connected and simultaneously tested and loaded with pre-authorized software and
networking scripts. This way, when the replacement weekend occurs, the transition can be made with
computers already loaded, tested, and configured for their intended user.
The second basic approach to upgrading is to develop a staggered upgrading plan; this is the norm for most
organizations. If your goal is to replace your computers every three years, then on average, you need to
replace one-third of the computers each year. You generate a list of employees and upgrade one-third this
year, one-third next year, and one-third the year after. At any given time, the computers will be one and a half
years old on average, and as computers become truly problematic in their third year, they are ready to be
replaced. This approach makes financial sense, as it evens out the investment over time. It is also operationally
easier, because now you can devote a certain portion of the IS organization to maintaining and upgrading the
PC infrastructure, and this group will have a relatively stable demand for service rather than huge peaks and
valleys.
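The one-third-per-year arithmetic behind a staggered plan can be sketched as follows (the fleet size echoes the 18,000-desktop example above, and the figures are purely illustrative):

```python
# Sketch of a staggered replacement schedule, assuming a three-year cycle.
FLEET_SIZE = 18000   # illustrative fleet, echoing the example above
CYCLE_YEARS = 3

# Replace one cohort (one-third of the fleet) each year.
replacements_per_year = FLEET_SIZE // CYCLE_YEARS

# With cohorts evenly spread, machines are 0, 1, or 2 full years into
# their life; at mid-year, those cohort ages average out to 1.5 years.
ages = [year + 0.5 for year in range(CYCLE_YEARS)]
average_age = sum(ages) / len(ages)

print(f"Replace {replacements_per_year} machines per year")
print(f"Average fleet age: {average_age} years")
```

This evens out both spending and workload: the same 6,000 machines are replaced every year, rather than 18,000 machines every third year and none in between.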
The disadvantage of this approach is that you end up with multiple hardware and operating system
configurations that must be supported. Some computer manufacturers, like Dell, offer business lines that
guarantee a hardware/software lock-in for at least a year to help businesses maintain consistency across their
upgrade cycles. Another issue is that not all users require upgrades quite as often. Users whose tasks demand
greater computer usage, with applications that are more taxing on the hardware (that is, more memory
intensive, requiring more storage or speed, and so on), will need upgrades more often to support their work. A
financial analyst who spends six hours each day working on complex spreadsheets and building financial
models will notice performance degradations from older hardware sooner, for example, than a manager who
uses a computer only for e-mail and basic word processing.
Within the upgrading cycle, then, it is possible, and even desirable, to have different rules for different parts of
the organization, driven by the nature of the job tasks being supported by the technology and their importance
to the organization overall. Again, IS plans today must be flexible and fluid, attuned to the culture of the
organization rather than overly governed by hard rules.
Hardware purchase options
The preceding discussion leads to an interesting side issue. Often, as users of technology, we see the hardware
as a commodity, and in some ways it is. The features provided to us as users are similar (for example, all Intel-
based machines of a particular era can run the same OS and applications). However, from a technical
perspective, there are differences in quality and reliability that make hardware not truly a commodity product.
Brand name or white box systems
One critical choice you (or at least your organization) will make around hardware is whether or not to buy
brand name machines (such as Dell, Toshiba, HP, or Apple). Many smaller vendors build their own systems,
mixing and matching components and producing white box systems. These systems are often cheaper than
brand name computers (by as much as 20%), so there are significant cost advantages to purchasing them as
opposed to brand name machines.
If you decide to purchase a white box system, you need to have someone you trust who understands the
components fairly well and can decide whether a machine with a particular brand of hard drive (such as
Western Digital or Seagate) combined with a particular motherboard (such as Asus, Intel, or IBM) is a good
combination. This person may be an internal employee, the vendor, or a consultant, but the key is whether you
can trust the person's judgment.
Many users have experienced the frustration of buying a new machine and finding it to be unstable: it
crashes often, it runs slowly, or it just doesn't seem quite right. Sometimes this is a configuration issue that can
be resolved with good technical support. Often, the problem goes back to a conflict between elements of the
PC that just don't work and play well together. These problems are slightly more common with white box
systems than with brand names, and are sometimes difficult to resolve. Brand name machines tend to use
fewer different component configurations, and vendors and technical support personnel have worked out the
possible conflicts. That is the trade-off for a lower price.
Hardware technical support
A related issue is what kind of support you will get for your hardware purchases. What warranty is available
with the machine, and how do you access warranty service? Some companies find it easier to simply replace a
poorly functioning machine, while others will try to repair it. Some warranties provide on-site service, while
others require you to take the machine to a dealer. It is important to consider these warranty and service
issues when you are buying hardware, over and above the cost of support.
Users who come to rely on the technology in performing their work need to get problems dealt with quickly
and efficiently, with minimal disruption of their work. If a machine is cheaper at the outset but has limited
warranty coverage or poor service options, it may end up being more expensive in the long term.
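The point that a cheaper machine can cost more over its life can be illustrated with a simple total-cost-of-ownership comparison. All of the figures below are hypothetical assumptions, not market prices:

```python
# Hypothetical total-cost-of-ownership comparison over a three-year life.
def tco(purchase_price, annual_support_cost, downtime_hours_per_year,
        hourly_cost_of_downtime, years=3):
    """Purchase price plus ongoing support and lost-productivity costs."""
    return (purchase_price
            + years * annual_support_cost
            + years * downtime_hours_per_year * hourly_cost_of_downtime)

# Brand name machine: dearer up front, on-site warranty, little downtime.
brand = tco(purchase_price=1200, annual_support_cost=0,
            downtime_hours_per_year=2, hourly_cost_of_downtime=60)

# White box machine: ~20% cheaper, but paid repairs and more downtime.
white_box = tco(purchase_price=960, annual_support_cost=80,
                downtime_hours_per_year=5, hourly_cost_of_downtime=60)

print(f"Brand name TCO: {brand}, white box TCO: {white_box}")
```

Under these assumed figures, the machine that was 20% cheaper at purchase ends up costing considerably more over three years; the real numbers will vary, but the structure of the calculation is the point.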
Depending on your organization's model of IT governance, these decisions may fall to IT alone to decide on
the company's behalf. The user community will quickly inform IT if the decision is an incorrect one and needs
to be adjusted.
The role of standards
Organizations must balance the need for centrally controlled standards for hardware and software
(where all employees are supplied with the same tools) against the need for local managers to make decisions
that accommodate particular requirements. A user might want a different laptop than the standard machine, or a
whole division might want a different PC than the standard. In the advertising industry, for example, the
creative teams prefer to work on Apple computers while the administrative applications are usually on
Windows-based PCs. Whatever the reason, there is a tension between meeting local needs and providing
centralized control. This often comes across as a variation on Henry Ford's marketing message from years ago.
"You can have any colour, as long as it's black" has now become "you can run any word processor you want as
long as it is Microsoft Word," or "you can have any laptop you want as long as it's a Dell," and so on. This is
frustrating for users and managers who feel that their needs are not driving technology decisions. In an
environment directed more through a governance framework, business users express business needs. IT
solves the problems with technology.
Reasons for standards
Standards limit the range of technologies in use in an organization and can result in lower costs to the
organization, better integration of applications, and fewer problems in ensuring adequate performance.
Standards reduce variability and are backed by industry best practices.
Imagine, for example, a help desk that had to support every possible word-processing technology that users
wanted. In a large organization you would have to support everything from Word 2003 to Word 2010 for PC,
Word for Mac, OpenOffice (an open-source application), Google Docs, and probably a few others, because
some users will be reluctant to upgrade no matter what. The cost of training staff to support these applications
would far outweigh the benefits to the local users of being able to match their needs precisely.
Even if you consider the idea of supporting two different word processors (Pages and Word), each of which is
better for some tasks than the other, the costs are likely to be high. Also imagine that users need to exchange
files and work on documents together. It is problematic to try to go back and forth between different
applications, with different formats and features. Even if both software packages have the capability to read
files from the other, the formatting may not translate well.
Business considerations
With regard to business requirements, it is important to consider not just the information or processing
requirements of individuals or groups, but the overall cost and performance trade-offs at the firm level. At the
end of the day, organizations typically find that having the same applications or hardware technologies across
the board (or at least a very limited number of choices) is better than having the best applications in each
area. Keep in mind, however, that there could be valid reasons for supporting several different technologies.
For example, does the software allow for efficiencies or support a competitive advantage? This must be a
conscious decision based on business needs, though. If the benefits outweigh the costs, then the decision is
sound.
However, business reasons for limiting application support may outweigh cost alone. Many organizations
standardize on a single browser version, such as Microsoft's Internet Explorer. Users want choice, with
Firefox or Google's Chrome also available to them. But IS, in securing the network with anti-virus and intrusion
software, is aware of Internet Explorer's particular vulnerabilities, and constantly scans for breach activity
before it becomes widespread. Introducing a new browser with its own security concerns doubles the work of
IS, and makes the organization more vulnerable to browser security issues because IS now has to support
instances from different vendors. Users may see the request for more choice as an innocent one, when it is
really much more complex.
Technology scanning
This topic has presented some of the current issues and challenges in planning for technology, but any such
snapshot quickly becomes out of date. New issues will arise that have not been considered or discussed here.
While understanding the issues involved in technology planning in a broad sense is important, you must also
have a plan in place to engage in continuous technology scanning: looking at emerging technologies and
trends that are not relevant today but may be important later on.
Why organizations need constant technology scanning
Organizations easily become too focused on what they are doing today and ignore emerging technologies and
trends that, although not particularly relevant today, may someday be critical. Today, organizations are
wrestling with how to use a variety of cloud-based Web 2.0 technologies, in particular the social networking
sites such as Facebook and Twitter, and mobility applications, such as Foursquare and Skype, available through
different vendor app stores. Many managers argue such technologies have no place in the world of business,
but they may be too quick in their judgment. One of the keys to technology scanning is not to be always
correct, but to quickly recognize when you are wrong and to take the appropriate steps to remedy the situation.
Technology scanning as part of the process
Technology scanning is sometimes done by a separate group within the IS organization, but often is not
managed formally. The reason is that the direct benefits of technology scanning are hard to measure and may
not be realized for years. However, increasing the formality of this process may make it easier to achieve good
results, by making someone or some group directly responsible for the activity. Without accountability, it is too
easy for such a future-focused activity to be pushed to the bottom of everyone's to-do list, as current
problems exert more immediate pressure for action.
Caution in technology scanning
The need to remain competitive in IS through assessing the impact of emerging technologies comes with a
caution. One danger of technology scanning is that vendors often publicize new technologies well before they
are able to deliver the products. Similarly, when a new technology is released, there are often multiple
standards for how it can work. For example, HD DVD and Blu-ray Disc emerged as competing standards for
high-definition DVDs, and yet only one format (Blu-ray) survived. Also, there are different standards today for
cellular telephones, wireless networking, and a host of other technologies. It is unlikely that all will survive.
Technology planners need to be careful that they are not jumping too quickly onto a new technology. There is
a fine line between being on the "leading edge" and being on the "bleeding edge" of technology. One may
provide a temporary competitive advantage, while the other may put your company at a disadvantage,
similar to the risk in using beta software before a stable release is available.
2.3 Data and information management issues
Learning objective
Relate the issues of data integrity, security, and integration as they apply to IS planning.
(Level 2)
Required reading
Chapter 6, Databases and Information Management
Module scenario: Tell me what I want to hear
Tuesday afternoon, when you return from lunch, the controller is waiting in your office. You exchange
pleasantries. "Look," she says. "I need your help understanding something. If we were to integrate a
competitive division, could we not just hook our systems together or pull their data into ours? There is not a lot
of margin in this purchase, so we can't afford to waste everything on making systems work." "Well," you say
after removing your coat, "do you know what software they're using?" "I don't know," replies the controller,
"but they have a server room that looks like ours." You don't like your department being portrayed as the bad
guy when there is no information available for you to make an assessment. More time and study are needed.
You need to know what software they use, what database management system, the structure of the data, how
data are used, updated, and maintained, and many other details. And you need this information to help the
finance department understand how complex their simple question may be. "Jenny," you say after a moment's
quiet, "I can't answer your question right now, but what I will do is write down the things you need to find out
for me so I can better understand what they have. Then we can talk about the tasks I will have to do to
ensure data compatibility, and how we may be able to converge our systems, if at all. Fair?"
LEVEL 2
So far, you have learned about the hardware and software components of an information system and the
issues involved in planning for these components. This topic turns to the data and information within the
system and considers the planning issues with respect to data management.
Data
Chapter 6 in the textbook reviews the nature of data and the advantages of database management systems
(DBMSs) for providing good access to data. These notes focus on the management of information and data
beyond the basics.
Information is the lifeblood of organizations. Without information, managers are unable to review the
performance of their organizations or make decisions about what to do in the future. Data give value to our
inventory, and meaning to our decisions. Of course, information does not have to be computerized, and
managers may have enough information in their memories or in paper records to make these decisions without
using computer-based tools; however, the size of most organizations soon outstrips the human capacity to
integrate and manage data.
IS planning is not just about what systems to build or buy and which hardware configuration to install. At the
heart of an information system is data. IS plans that do not recognize the specific challenges of data
management (data integrity, data security, and data integration) are likely to omit important aspects of the
IS management task.
Data integrity
To be useful to organizations, the information in computer-based systems must be:
Presented in a meaningful format
Available to the people who will need it
At the appropriate level of detail for the decision at hand
On whichever device they have available
Correct (most importantly)
This is what is meant by data integrity. When data can be used in many formats, and many functions can be
performed with the data in such a way that the characteristics (type, size, usage, metadata, ownership) are
maintained, then you have data integrity. It's not just about information being correct; for IS, it is also a
forensic challenge to prove the soundness of the collection, storage, and relatedness of the data that become
information.
Challenges in data integrity
Hundreds of thousands of transactions are processed every day by companies using computer systems. Even if
people are accurate 99.9% of the time, 100 errors are introduced into the system in each 100,000
transactions. Most of us can recall incidents where data in computer systems was incorrect. Perhaps it was a
grade recorded incorrectly in a professor's grading system. Perhaps it was a company who spelled your name
wrong or had the wrong address. All of these errors cause problems for organizations. They are costly to find
and correct, and more costly (in terms of customer goodwill) to ignore. An extra zero in an inventory count
could have dramatic effects on value, scheduling, and purchasing, and could be material to the financial
statements: 100 errors could be anywhere.
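The arithmetic above generalizes to any transaction volume and accuracy rate. A minimal sketch (the function name is ours, for illustration only):

```python
# Back-of-the-envelope check of the error arithmetic above: at a given
# accuracy rate, how many erroneous transactions should we expect?
def expected_errors(transactions, accuracy):
    """Expected number of erroneous transactions for a given accuracy rate."""
    return transactions * (1 - accuracy)

# 100,000 transactions at 99.9% accuracy -> about 100 expected errors.
print(round(expected_errors(100_000, 0.999)))
```

Even a seemingly impressive accuracy rate produces a steady stream of errors at transaction-processing scale.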
The problem has gotten worse with the advent of Internet commerce. Can you guess the most common name
in website registration databases? John Smith, perhaps? Actually, the most common name is Mickey Mouse.
Web users who don't want to provide personal information frequently make up information to fill in required
fields. This kind of data can cause significant problems. For example, suppose you want to sell advertising
space on your website. One of the things you would need to tell prospective advertisers is the number of
regular users of your site. If you use data that have not been properly cleaned up, you are overselling your
product. While you may want to report a larger number of users, you do not want to be accused of providing
fraudulent information, and you want to have enough credibility with buyers that they will want to do business
with you.
Even when giving accurate information, customers may give different variations at different times. Pat McGinnis
may be P. McGinnis, P Mc Ginnis (with a space), Patricia McGinnis, P.R. McGinnis, and so on. If you cannot
address these issues, you will have incomplete and conflicting information about your customers that may
result in poor customer service and, ultimately, loss of business.
It is tempting to think about technological solutions to this problem. For example, you can build software to
look at similar names and merge those listed at the same address, reasoning that they are likely to be the
same person. However, what do you do about people who are named for their parent and still live at home?
The answer is you look for other characteristics to distinguish them (for example, age information), but this
only works if customers are willing to provide this information.
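One simple technological aid for the name-variant problem is to reduce each name to a crude matching key before comparing records. The heuristic and function name below are illustrative only; real record-linkage software uses many more signals (address, age, and so on), as the paragraph above notes:

```python
import re

def match_key(name):
    """Reduce a customer name to a crude matching key:
    (first initial, surname with punctuation and spaces stripped).
    A heuristic sketch only -- not a production record-linkage rule."""
    tokens = re.split(r"\s+", name.strip())
    first = re.sub(r"[^a-z]", "", tokens[0].lower())[:1]
    surname = re.sub(r"[^a-z]", "", "".join(tokens[1:]).lower())
    return (first, surname)

variants = ["Pat McGinnis", "P. McGinnis", "P Mc Ginnis",
            "Patricia McGinnis", "P.R. McGinnis"]
# All five variants collapse to the single key ('p', 'mcginnis').
print({match_key(v) for v in variants})
```

A key like this flags *candidate* duplicates; a human (or richer data, such as age) still has to confirm that two records really are the same person.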
Even with internal corporate information, such as inventory, there are errors. A shipment may be ordered but
never picked up. If the inventory system records the reduction in inventory at the time of order, it will appear
as if the inventory on-hand is less than it actually is. Or inventory may be lost or damaged, but not removed
from the database, resulting in an overstatement of available inventory.
Maintaining data integrity is challenging and costly. Technological solutions (such as the program to locate
duplicate records) are not sufficient to address the problem. The more we rely on computer-based systems to
support our organizations, the greater the problem of data integrity becomes, both because there is more
data to control and because the cost of incorrect data is greater.
Data security
While data integrity ensures that the data we record and use are accurate, data security is concerned with
protecting the confidentiality of the data. Organizations that store detailed information about their clients
(especially sensitive information such as salary, debt, and medical information) have a responsibility to protect
the privacy of these clients by ensuring that the data are not released outside of the authority given by the
customer.
For example, if you provide your bank with information about your current debt, the bank needs to ensure that
this information is not released to other customers, other organizations (other than credit bureaus or other
organizations to which you authorize the bank to grant information), or to employees whose job does not
require the information. Similarly, organizations need to protect the information they maintain about their
employees, their suppliers, and any other stakeholder about whom they maintain information.
Cloud computing, and the storage of sensitive company information, will be seen as a test for both Internet
and browser security. In 2009, Gartner predicted an increase in security spending for IS that includes security
information and event management (SIEM), e-mail security, URL filtering, and user provisioning.[1] Although
these areas are not directly cloud related, they do show a trend towards data protection. Gartner predicts
that by 2015, 10% of overall IT security enterprise product capabilities will be delivered in the cloud: "Growth
rates for cloud-based security services are set to overtake those of traditional on-premises."[2]
Customer privacy
Customer privacy issues have become increasingly important in recent years, as more people transact business
over the web and more customer information is stored electronically. Many times a year, the media reports the
inadvertent disclosure of customer information from some organization. In 2007, TJX Corporation announced a
security breach that had involved more than 3 million customers' credit card information. Lawsuits from
financial institutions were unresolved as of March 2008. Also, in December 2010, the popular technology group
Gawker announced that the source code to its website had been compromised, and that passwords were
obtained. In total, 1.3 million commenter accounts were taken, and over 50% of them were cracked. Gawker
Media issued apologies, suggesting that users change any web accounts they have using the same passwords.
On February 1, 2013, Wired.com reported that social media giant Twitter had been hacked, affecting as many
as 250,000 users.
Such reports are damaging to the trust that organizations seek to build with customers and can be harmful to
the individuals whose information is released. This harm may be to the individual's reputation, livelihood, or
even physical safety.
For example, suppose a pharmacy inadvertently releases the names of individuals who take prescription
medication for depression or another mental illness. Such information will, at a minimum, be a source of
embarrassment to the individual, but might also result in the loss of employment. How would you feel to learn
your child's kindergarten teacher or your physician was taking medication for severe depression? How would
others react?
In Canada, privacy of personal information (including customer information) is governed by two statutes the
Privacy Act, which regulates government agencies, and the Personal Information Protection and Electronic
Documents Act (PIPEDA), which governs the collection and use of personal information by organizations.
PIPEDA sets the standard by which the protection of customer information will be assessed. Effective January
2004, all companies in Canada are governed by PIPEDA.
At the heart of PIPEDA are 10 fair information principles that govern the rules with which organizations must
comply. Broadly, the Act requires that "Organizations covered by the Act must obtain an individual's consent
when they collect, use or disclose the individual's personal information. The individual has a right to access
personal information held by an organization and to challenge its accuracy, if need be. Personal information
can only be used for the purposes for which it was collected. If an organization is going to use it for another
purpose, consent must be obtained again. Individuals should also be assured that their information will be
protected by specific safeguards, including measures such as locked cabinets, computer passwords or
encryption."[3]
As Web users, the guidelines of PIPEDA apply every time we enter personal information into a Web form for
the purpose of obtaining information, trial software, documents, and so on. In daily use, this means that a
Canada-based website must comply, as described in the scrolling agreement window and the box that we all
check as "read" (without reading), so that we can get the goods that the site is offering in exchange for our
personal information.
In addition to PIPEDA, it is worth noting that many provinces have privacy legislation that applies to
provincial government agencies. Some examples of provincial privacy legislation are outlined on the
PrivacyInfo.ca website.
Activity 2.3-1: Privacy quiz
Review the 10 fair information principles on the Privacy Commissioner of Canada website, and take the privacy
quiz.
Since the collection, storage, and use of personal information involves information systems in most
organizations, the introduction of PIPEDA has important implications for information systems. Organizations
need to ensure that the data collected and maintained in their systems are consistent with the fair information
principles and that organizational processes comply with the Act. This applies to both existing systems, which
may require modification, and to new systems, for which the design will have to reflect the new rules. Also,
some organizations have internal policies to collect information about their employees, which raises issues
around privacy and data security. Topic 9.5 deals further with the challenges of privacy and other ethical
implications of information systems.
Discussion Topic
A local automotive parts supply company has decided to go global with its products. As a result it will be
dealing with suppliers and customers in many different countries and languages. With a culturally diverse
workforce already within the company, senior management asks HR to create an employee survey to collect
which languages its employees speak fluently. Management intends to use this information to help the
company branch into global markets. Once collected, the data will be entered in each employee's profile, and will
be accessible only to senior managers.
1. Is this a legitimate request by senior management? Why or why not?
2. Should employees be allowed to opt-in or opt-out of the collection process?
3. Under PIPEDA guidelines, is the company breaching employee privacy?
Data integration
It is often argued that the power of computers and networks derives at least in part from the ability to pool
data from different sources, thus building more complex views of organizations or people. For example,
enterprise systems are highly valued, in part, for their ability to quickly produce financial statements for large
organizations by combining data from disparate divisions far more quickly than was possible before their
adoption. Customer data warehouses are most valuable when they combine basic customer demographic
information with purchase behaviour, or Web surfing behaviour. Many companies track your click patterns
through their websites (and to and from their websites) and link these patterns with your registration. This
allows them to customize the site to your tastes.
For example, if you frequent the online bookstore Amazon.com and always go looking for information on
mystery novels, the site will begin to put new release information about mysteries on the home page whenever
you log in. This is, in theory, useful to you because it provides you with what you want most. But it is also
beneficial to the organization, which hopes to sell more books that way.
Challenges in data integration
In many organizations, data integration is the number one challenge for information systems. Legacy software,
still in use and still valuable, needs to be integrated with newer enterprise systems, but offers no easy
programming means to do this. Often the database, structure, and format of the data are incompatible with
more current systems. Middleware applications can be purchased or built to solve the issue; however, this
increases the complexity of the architecture and creates another potential point of failure. Furthermore, there is
the issue of integrating front-end systems, like web servers, with back-end enterprise systems, which in
today's environment often means systems that are decentralized, dispersed, and housed outside the direct
control of traditional IS departments.
The idea behind data integration, then, is simple and powerful. Yet the reality remains difficult and costly to
achieve. In large organizations, especially those that are geographically dispersed or those where growth has
been through acquisition, the biggest challenge is the lack of commonly accepted definitions of data elements.
For example, a data field such as "sales" could mean either gross or net sales, and, if neither is specified, you
simply would not know. If two applications were developed at different points in time, in different languages,
by different developers, they could easily have adopted different meanings for the term sales, and integration
would not be so simple.
Consider also the famous CIO Magazine piece on Nestlé's aborted first attempt to leverage SAP ("Nestlé's ERP
Odyssey," CIO Magazine, May 15, 2002), which revealed that the multinational conglomerate had 29 different
price quotes for vanilla, all from the same vendor but referenced by different codes in different systems at
different locations.
Similarly, a part number might be a five-digit alphanumeric code in one plant, but a 27-digit numeric code in
another. Here the integration problem is not that the same field refers to different things, because in both
cases it refers to a part number. But in any programs that use these codes, error-checking routines are
necessary to protect the data integrity. The rules for error-checking will be quite different though, and
integrating the data from these two systems will require much more complex programming to provide adequate
error correction.
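The differing error-checking rules can be illustrated with two validation patterns. The exact formats below are assumptions for illustration; real rules would come from each plant's specifications:

```python
import re

# Hypothetical format rules for the two plants described above.
PLANT_A = re.compile(r"^[A-Z0-9]{5}$")  # five-character alphanumeric code
PLANT_B = re.compile(r"^[0-9]{27}$")    # 27-digit numeric code

def valid_part_number(code, plant):
    """Validate a part number against its plant's format before
    loading it into an integrated system."""
    pattern = PLANT_A if plant == "A" else PLANT_B
    return bool(pattern.match(code))

print(valid_part_number("AB123", "A"))  # valid under plant A's rule
print(valid_part_number("AB123", "B"))  # rejected by plant B's numeric rule
```

An integration program would need both rules (and a mapping between the two coding schemes) before records from the two plants could safely share one database.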
The main challenge here is one of scale. It is conceptually simple to look at these examples and see solutions
for fixing them. But in most organizations, there are thousands (or even hundreds of thousands) of data
elements and hundreds of thousands of records described using these elements. Dealing with the problems of
integration is much harder than it looks. For example, some organizations simply do not bother to bring their
legacy data forward when they make major changes to enterprise systems. The costs of converting and
integrating all of the existing data and then of ensuring its integrity outweighs the benefits of having it.
So, minimal data are brought forward, and programs are written to provide access to the old data on an as-
needed basis. In these cases, data integration is something that happens after the fact, often in supporting
applications such as Excel, Access, or Crystal Reports.
Ethics and data integration
Information contained in separate databases may be useful but non-identifying. Database integration can raise
privacy concerns. This is an issue frequently encountered in research contexts. For instance, a hospital might
provide non-identifying patient information to health researchers, but if one of the researchers is a clinician
with access to identifying information in a patient database, the clinician can link the information, thereby
raising privacy concerns.
Similarly, customers might provide seemingly non-identifying information to a survey company, not realizing
third parties can easily link this to identifying information through publicly available databases. Should the
company have warned customers of the potential breach of privacy? An interesting example is Statistics
Canada, which takes very seriously its role as a reliable data custodian.
Databases are no longer centrally located. And their content has changed from clearly identifiable, single
content data elements (such as name, address, and phone) to more open text-based repositories. Big data
offers a view of internal and external data sets connected to solve problems, but the underlying issue of
integration remains, no matter how big the data is. From an ethical perspective, issues around who owns your
online data are beginning to come to the forefront. On February 7, 2013, a U.S. court declared that photos of
the aftermath of the 2010 Haitian earthquake that were posted on Twitter remain the property of the
photographer and not Twitter.[4]
Improving data integrity, security, and integration
If data integrity, data security, and data integration are important, what can you do about these issues? How
do you plan for them? Are new resources available to address these recurring problems?
A number of approaches, both technological and human, can be used to improve data quality and usefulness.
Technological solutions
IT security, which includes data security, is covered in detail in Module 9. As with IT security generally, a
combination of technological (passwords and system privileges) and human (training and education)
approaches is needed to ensure that private data are not accessed by unauthorized individuals.
IS design has an enormous impact on data integrity and integration. The textbook reviews the file-oriented
approach to IS, where data are not integrated across applications. This approach is one of the chief culprits of
poor data integrity because the lack of integration across applications means that the same information is
stored in multiple places (uncontrolled data redundancy) and needs to be updated in multiple places. Moving
toward more integrated views of organizational data and using a database management system to create more
integrated systems will help immensely.
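As a sketch of that idea, the following uses Python's built-in sqlite3 module with an invented two-table schema. Because customer details are stored once and shared through a database management system, a correction made in one place is immediately visible to every application that uses it, with no redundant copies to chase down:

```python
import sqlite3

# Illustrative schema: one shared customers table instead of each
# application keeping its own copy of customer details in its own files.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE customers (
        id      INTEGER PRIMARY KEY,
        name    TEXT NOT NULL,
        address TEXT NOT NULL
    );
    CREATE TABLE orders (
        id          INTEGER PRIMARY KEY,
        customer_id INTEGER NOT NULL REFERENCES customers(id),
        total       REAL NOT NULL
    );
""")
db.execute("INSERT INTO customers VALUES (1, 'Pat McGinnis', '12 Elm St')")
db.execute("INSERT INTO orders VALUES (1, 1, 99.50)")
db.execute("INSERT INTO orders VALUES (2, 1, 14.25)")

# The address is corrected exactly once...
db.execute("UPDATE customers SET address = '40 Oak Ave' WHERE id = 1")

# ...and every order immediately reflects the change through the join.
rows = db.execute("""
    SELECT o.id, c.address
    FROM orders o JOIN customers c ON c.id = o.customer_id
    ORDER BY o.id
""").fetchall()
print(rows)
```

Contrast this with the file-oriented approach, where the address would be stored in both an orders file and a customers file and the two could silently drift apart.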
Chapter 5, section 5.4 in the textbook deals with contemporary software platform trends. An important trend
for dealing with data inconsistencies is XML, which is discussed under the subheading Web Services and
Service-Oriented Architecture. As a method of data integration, XML shows great promise but requires a more
novel approach to process integration through SOA (service-oriented architecture), which uses Web services to
build solutions across platforms.
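A small illustration of why XML helps with data inconsistencies: its tags and attributes travel with the data, so a receiving system can see what a field means rather than guessing. The element and attribute names below are invented for illustration, parsed here with Python's standard xml.etree module:

```python
import xml.etree.ElementTree as ET

# The sender states explicitly that this "sales" figure is net, removing
# the gross-versus-net ambiguity discussed earlier. Names are illustrative.
message = """
<salesReport currency="CAD">
    <division>Western</division>
    <sales basis="net">125000.00</sales>
</salesReport>
"""
report = ET.fromstring(message)
sales = report.find("sales")
print(sales.get("basis"), float(sales.text))
```

Of course, the two systems still have to agree on the vocabulary (what "basis" or "net" means); XML provides the self-describing container, not the shared business definitions.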
Another technological approach has to do with the design of user-input programs. Consider the example of
names that are entered differently at different points in time. If data-entry programs are designed to look up
existing records for matches to the newly entered record (perhaps matching on surname, street address, and
gender, or some similar combination of elements), the user can be prompted to confirm whether the new
record truly is new or whether it is a duplicate.
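That lookup can be sketched as a simple filter over existing records; the field names and matching rule here are illustrative only:

```python
# Sketch of the duplicate check described above: before saving a new
# customer record, find existing records that match on surname, street
# address, and gender, and ask the data-entry user to confirm.
def possible_duplicates(new_record, existing_records):
    key = (new_record["surname"].lower(),
           new_record["address"].lower(),
           new_record["gender"])
    return [r for r in existing_records
            if (r["surname"].lower(), r["address"].lower(), r["gender"]) == key]

existing = [
    {"first": "Patricia", "surname": "McGinnis",
     "address": "12 Elm St", "gender": "F"},
]
entry = {"first": "P.", "surname": "McGinnis",
         "address": "12 Elm St", "gender": "F"}

matches = possible_duplicates(entry, existing)
if matches:
    # A real data-entry screen would prompt the user at this point.
    print("Possible duplicate found -- confirm this is a new customer.")
```

The program only surfaces candidates; the final judgment, as the text notes, still rests with a person who can check distinguishing details such as age.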
Human solutions
Technological solutions, while helpful, are only part of the picture. It is impossible, for example, to anticipate all
of the possible mistakes that users can make in data entry or all of the ways that customer information can be
presented. Human solutions, mostly involving training and education, are an essential part of any plan to
improve data quality. Educating users about the importance of data quality, and the meaning of data quality, is
an essential first step. Getting users, especially knowledge workers, to recognize the potential of integrated
information and to cease hoarding information is also important to improving data integration. Finally, training
in how to use applications and how to maintain the integrity of data is also critical. Many organizations
neglect the training and education components of a well-designed IS plan. Large multi-million-dollar
implementations have failed in part due to the lack of technical training of staff, resulting in their inability to
adapt to a new system. One such public example is Lumber Liquidators. You can read an analysis of its story.
[1] "Gartner Survey Shows Security Software and Services Budgets to Increase 4 Per Cent in 2010," accessed
May 29, 2013. http://www.gartner.com/newsroom/id/1167612
[2] "Gartner Predicts Cloud as a Delivery Model to Shape Buying and Prioritization of Security," accessed
February 13, 2013. http://www.gartner.com/newsroom/id/2311015
[3] Privacy Commissioner of Canada, Your Privacy Responsibilities: A Guide for Businesses and Organizations
to Canada's Personal Information Protection and Electronic Documents Act, accessed December 29, 2010.
[4] Lexology.com, "It's my moment, not yours: U.S. Court decision clarifies who owns photos on Twitter,"
accessed February 13, 2013. http://www.lexology.com/library/detail.aspx?g=99b227f9-b432-43a1-8c73-ba15083c52d7
2.4 IS economics
Learning objectives
Assess the business value of an information system from a financial perspective. (Level 1)
Identify the four elements that make up the total cost of ownership of IS, and defend the
importance of considering non-capital costs when evaluating IS acquisitions. (Level 2)
Relate the relevance of Moore's Law and Metcalfe's Law to IS management. (Level 2)
Required reading
Chapter 10, Section 10.4, Establishing the Business Value of Information Systems
Chapter 5, Section 5.1, IT Infrastructure (subsection on Technology Drivers of Infrastructure
Evolution)
LEVEL 1
Information systems add value to organizations in multiple ways. As you learned in Module 1, organizations
can use information systems to reduce costs, differentiate their products or services, lock in suppliers or
customers, create new products or services, and create and sustain alliances with other organizations. IS
investments can support the various elements of an organization's value chain, and the links between value
chain activities.
IS investments are by their nature different from other business investments, and sometimes more difficult to
quantify. One criticism is that IS investments are open-ended. Also, many IS investments are not readily
noticeable by the users of IS. An investment in data security offers no tangible benefit to the user community,
but helps ensure data integrity both in structure and reporting. And many users have a limited knowledge of
what IS is and what it can do for them, adding to the difficulty of delivering meaningful solutions when users
themselves struggle to articulate their needs.
Regardless, investment in IS must be categorized in terms of cost reduction, increased income, benefits
directly to the structure of IS, or quality improvements such as customer satisfaction. This is the first step in
assessing value.
Business value and valuation
Although it is clear that information systems add value to organizations through their many applications, this
course has not yet looked at how much the systems are worth. It is challenging to assess the business value of
IS from a financial perspective. However, given the rates of spending that are necessary to sustain a sound
and flexible IS infrastructure and the difficulties firms have in realizing the benefits of their IS investments, it is
essential to consider this perspective.
Section 10.4 outlines the different financial approaches to evaluating IS investments. All are variants of a cost-
benefit analysis, where you begin by determining the costs associated with a particular IS and then determine
the benefits against which these costs are compared. Payback methods, return-on-investment (ROI) and net-
present-value (NPV) calculations, cost-benefit analysis, profitability indexes, and internal rates of return are all
techniques that can be used to compare the benefits realized from an IS investment to its costs, and to
compare an IS investment against the benefits projected from other kinds of investments. You may be familiar
with these approaches from previous finance and accounting courses.
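As a rough illustration, the three most common techniques can be sketched in a few lines of Python. The dollar figures, system life, and discount rate below are invented for the example, not drawn from the textbook:

```python
# Sketch: payback, ROI, and NPV for a hypothetical IS investment.
# All figures (costs, benefits, discount rate) are illustrative assumptions.

initial_cost = 100_000.0        # up-front investment in the system
annual_net_benefit = 30_000.0   # yearly benefits minus operating costs
years = 5                       # assumed useful life of the system
discount_rate = 0.08            # hurdle rate used for discounting

# Payback period: years needed for cumulative benefits to repay the cost.
payback_years = initial_cost / annual_net_benefit

# Simple ROI: total net benefit over the life, relative to the investment.
roi = (annual_net_benefit * years - initial_cost) / initial_cost

# NPV: discount each year's net benefit back to the present.
npv = -initial_cost + sum(
    annual_net_benefit / (1 + discount_rate) ** t
    for t in range(1, years + 1)
)

print(f"Payback: {payback_years:.1f} years")
print(f"ROI over {years} years: {roi:.0%}")
print(f"NPV at {discount_rate:.0%}: ${npv:,.0f}")
```

Under these assumptions the project pays back in a little over three years and has a positive NPV, so it would clear a typical capital budgeting screen; the point of the sketch is that each technique compresses the same cost and benefit estimates in a different way.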
Value of intangible benefits
For the sake of historical comparison, Table 10-3 (page 326) provides a useful list of the different cost and
benefit items to consider in making an IS investment. The challenge comes in considering the value of the
intangible benefits. It is one thing to evaluate a system that is designed to reduce direct costs. For example,
after valuing the benefits of a new employee skill tracking system, Dirt Bikes Canada management believed
there would be a savings of two hours of HR staff time per week, which could translate into headcount
reduction or hiring avoidance, and that recruiting costs would decrease by $11,000 annually. But not all
systems have cost reduction or avoidance as their goal. What about an organization whose fundamental
problem is the lack of timely information for management decision-making?
In many organizations, there is a patchwork of unconnected information systems that have proliferated over
time, making the creation of integrated financial statements tedious and time consuming, and making real-time
progress tracking impossible. What is the value of more timely information? Given a need to react to changing
markets and respond to customer demands, the value of timeliness might be seen in higher sales or in lower
costs of servicing unhappy customers (or both). But by how much would sales increase? How confident do you
think you would be in predicting such a figure with the kind of precision that capital budgeting requires?
Although the assessment of intangible benefits is difficult, there are a variety of tools that you can use to
facilitate this activity. Often such tools involve analyzing specific decisions and scenarios to see how the
intangible benefit might be connected to other, more tangible outcomes. For example, employee satisfaction
(which might be improved by introducing a new system to replace an unpleasant task or a cumbersome
process) has been shown to be related to outcomes such as absenteeism, turnover, and productivity. The costs
of absenteeism can typically be estimated by a firm, as can the costs of turnover and productivity. Thus, to the
extent that improved employee satisfaction results in lower absenteeism and turnover, or higher productivity, it
will result in a reduction of costs to the firm. However, converting intangible to tangible benefits for the sake of
calculating value is not as simple as it may seem.
Two challenges remain: The first is measuring the amount of improvement in satisfaction that will occur.
Gathering data from employees about their current level of satisfaction and how they perceive it would change
with a new system would be one way to get at least some sense of the likely influence. The second challenge
is measuring the changes in absenteeism, turnover, and productivity that are likely to result from the expected
change in satisfaction. Again, data will be needed to support this estimate. Some companies conduct exit
interviews with employees who quit, and such interviews might give a clue as to the amount of turnover that is
related to the process in question. Absenteeism rates (or turnover) could be compared between employee
groups that are affected by the unpleasant task and those that are not; if the task is the driver of absenteeism
or turnover, then you might expect rates to become more similar after the change.
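This chain of reasoning, from an assumed change in satisfaction to estimated changes in turnover and absenteeism costs, can be sketched as a simple calculation. Every figure and assumed relationship below is hypothetical:

```python
# Sketch: converting an intangible benefit (improved satisfaction) into
# tangible savings through turnover and absenteeism. Every figure and
# assumed relationship here is hypothetical.

staff = 200                    # employees affected by the new system
replacement_cost = 20_000.0    # hiring and training cost per departure
expected_turnover_drop = 0.02  # assumed drop in annual turnover rate

turnover_savings = staff * expected_turnover_drop * replacement_cost

day_cost = 300.0               # cost of one absent employee-day
expected_absence_drop = 0.5    # assumed reduction in sick days per employee

absence_savings = staff * expected_absence_drop * day_cost

print(f"Turnover savings:     ${turnover_savings:,.0f}")
print(f"Absenteeism savings:  ${absence_savings:,.0f}")
print(f"Total tangible value: ${turnover_savings + absence_savings:,.0f}")
```

The fragile links are the two "expected" assumptions, which is exactly where the measurement challenges described above arise.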
Example 2.4-1: Job satisfaction and productivity
How would you estimate the influence of this improved satisfaction on productivity? What kinds of data could
you use to draw your conclusions? What challenges would you foresee?
If the organization, like some, collects information on employee satisfaction, then such information could be
related to organizational performance data. However, it is unlikely that this depth of information would exist in
most companies. Other things you could do include the following:
Do a comparison between groups who are affected and groups who are unaffected by the
unpleasant task. You would need to find groups where the work tasks are sufficiently similar in
order to make a comparison of productivity meaningful.
Compare the current situation to some prior point in history when the situation was different (if
the data exist to do so).
Guess. This is, of course, what many organizations do. The secret to effective guessing is to have
multiple people involved in determining whether the guess looks reasonable, and to conduct
sufficient sensitivity analysis to see how differences in the estimate would influence the outcome
of the system. For example, if the only way a system is valuable is if it triples productivity due to
increased employee satisfaction, then it is unlikely to be valuable. But if a 5 or 10% increase in
productivity would make the system worthwhile, then that seems more reasonable. In organizations, you can
also draw on the special talents of people to help solve such problems; Six Sigma black belts, for example,
are trained in statistical analysis and could be a valuable resource in testing the sensitivity of various options.
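A minimal sketch of the sensitivity analysis described above might look like this, with all figures invented for illustration:

```python
# Sketch: sensitivity analysis on a guessed productivity gain.
# All figures are invented for a hypothetical system.

staff = 40                     # employees affected by the system
avg_salary = 60_000.0          # fully loaded annual cost per employee
annual_system_cost = 90_000.0  # amortized yearly cost of the system

# Test the guess across a range of estimates instead of relying on one.
for gain in (0.01, 0.05, 0.10, 0.20):
    benefit = staff * avg_salary * gain  # value of recovered productivity
    verdict = "worthwhile" if benefit > annual_system_cost else "not worthwhile"
    print(f"{gain:>4.0%} gain -> ${benefit:>9,.0f} benefit ({verdict})")
```

Here the system clears its cost at a 5% gain but not at a 1% gain, so the decision hinges on whether a gain of at least roughly 4% is believable, which is a much easier question to debate than a single point estimate.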
Influence of IS
The second challenge stems from the role of IS as an enabler of organizational objectives. As noted in Module
1, IT alone is unlikely to be a source of sustainable competitive advantage, since it is not typically rare or
inimitable. This is particularly true as we move toward more purchased (rather than custom-developed) IS
solutions and hosted solutions, which are even less customizable. The advantage comes from what you do with
the systems and technology that you have. As such, the influence of IS on benefits is somewhat indirect, and
simply part of the day-to-day functioning of a business.
Consider, for example, benefits such as improved decision making and increased organizational learning. By
providing timely access to relevant information for decision-making, IS can provide stronger support for making
decisions and provide the opportunity to learn from past activities. But whether this opportunity actually results
in better decision-making (which, like all intangibles, needs to be made tangible for valuing) or learning
depends on the ability of the people involved to use the information in these ways. This is why the training and
education of employees is so important. Employees bring their own particular skills to the organization, which
must be combined and tempered with the goals of the business to maximize opportunities. Asking for the right
information, and being able to interpret it, are issues where IS can assist an organization and its people.
As such, it is important to recognize the limits of financial models for assessing IS value and to work with these
models within their limits to build at least partial pictures that will better inform IS decision making. Otherwise,
IS managers end up making large financial requests with little more than "trust me" as the justification. As you
will see in Module 3, past experience with IS projects does not exactly engender this level of trust.
LEVEL 2
IS costs
Table 10-3 (page 326) identifies IS costs, which consist of hardware, telecommunications, software, services,
and personnel. The first three are relatively straightforward, but the last two are more problematic and often
underestimated. The Gartner Group, a world-renowned research firm focusing on the management of IS, was
among the first to recognize the more hidden costs of IS. In 1987, Bill Kirwin, a Gartner analyst, developed the
"total cost of ownership" (TCO) model to assess the true costs of computers and to provide a measure of that
cost that could be tracked and reduced over time. In 1996, Gartner computed that the total cost of owning a
personal computer was about $13,000 per year, or about five times the cost of purchasing the equipment! In
2008, and again in 2011, Gartner updated its TCO model to current specifications, revising the model to
separate managed and unmanaged installations.
The cost breakdown from the 2011 update is shown in Exhibit 2.4-1.
Exhibit 2.4-1:
Source: Desktop Total Cost of Ownership: 2011 Update, November 16, 2010. For a more detailed discussion of
Gartner's TCO strategy, follow the source link above.
Providing technical support to users, training them, and administering systems (for example, assigning
passwords, tracking inventory, and so on) were also significant costs that had to be considered. But the most
surprising finding was that nearly half the cost of PC ownership in unmanaged desktop installations was hidden
in end-user costs, which included items such as training, fixing, and downtime. Downtime costs could be
further broken down into secondary elements such as end-user salaries.
The total cost of ownership of a PC today is lower than it was in both 1996 and 2008. Locked and well-
managed systems (those where IT prevents users from tampering with the setup) provide the lowest TCO.
Gartner reports that TCO continues to decline gradually, by 0.7% to 3% over 2008 numbers, depending on the
scenario. The basic idea behind the TCO model remains important: hardware and software are not the main
costs associated with owning and operating IT.
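To see how non-capital costs can dominate, here is a hypothetical annual TCO breakdown for a single desktop. The dollar figures are invented for illustration and are not Gartner's numbers:

```python
# Sketch: a hypothetical annual TCO breakdown for one desktop, showing
# how support and end-user costs can dwarf the visible capital costs.
# The dollar figures are invented and are not Gartner's numbers.

annual_costs = {
    "hardware (amortized)": 400.0,
    "software licences":    250.0,
    "IT support and admin": 700.0,
    "end-user costs":       900.0,  # peer support, self-training, downtime
}

total = sum(annual_costs.values())
for item, cost in annual_costs.items():
    print(f"{item:<22} ${cost:>6,.0f}  ({cost / total:.0%} of TCO)")

hidden_share = (annual_costs["IT support and admin"]
                + annual_costs["end-user costs"]) / total
print(f"Non-capital share of TCO: {hidden_share:.0%}")
```

Even with these made-up numbers, roughly 70% of the total falls outside the hardware and software purchase price, which is why TCO, not sticker price, should drive acquisition decisions.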
End-user costs
The end-user costs of IT are a source of great concern to management. In particular, the notion of the non-
work time spent by users with their PCs is something most managers wish they could better control. If a typical
user spends, for example, 10% of the day playing on the computer, then theoretically, if we can control that
time, we could improve productivity by 10% across the board. The assumption behind this thinking is that the
unproductive time would be turned into work-related activities and would not be replaced by equally
unproductive activities such as chatting by the water cooler or making personal telephone calls. While some
time may be re-channelled into productive work, it seems unlikely that employees will ever spend 100% of
their time at work focused solely on work (and they are unlikely to spend 100% of their time at home focused
solely on non-work activities).
As a manager, you can address this problem by developing policies on what you consider to be a reasonable
amount of personal use and communicating your expectations to employees. But remember, the notion of what
is reasonable is likely to be perceived differently by different people, and trying to enforce standards that are
perceived as unreasonable will create a backlash. You can also create technological solutions, like removing
game software from machines and limiting access to websites with games (although many employees will be
able to circumvent such mechanisms). Clearly communicating expectations, and the reasons for those
expectations, will still be necessary.
Situational ethics
The logistics manager has approached you about three of his employees. They work the night shift, from 11
p.m. to 7 a.m., and he has noticed a steady decrease in the amount of work they produce. He has shown up
unannounced twice, just to see that everything is running fine. He has not witnessed anything out of the
ordinary. He knows you have software that restricts viewing certain websites, and that the software also can
be used to track the browsing habits of users (spyware). He wants you to track the Web usage of his three
employees so that he can better see what they are doing through the night when they are not working. He will
use this information to discipline or replace them if necessary.
As the IS manager, and through the governance of HR, how should you ethically approach this request? Do
you foresee any negative consequences to his request? Is there another way to address his concern?
Moore's Law
You were introduced to the concept of Moore's Law in Module 1. Moore's Law shows how rapidly prices decline
in computer hardware, in comparison to the steady increase in power and function. Thus, it relates to the
economics of IT. If the power of a computer doubles every 18 months for the same price, then what does this
mean for the cost of technology?
Moore's Law is one of the key drivers behind the problem of evergreening PCs, and one of the key drivers of
IT change in general. With more computing power available for lower cost every day, the possibilities for what
can be automated are virtually limitless, at least from a purely technological perspective. Moreover, it drives
the demand for regular upgrades of hardware and software, as users see ever more powerful equipment
available and as new software versions demand more of the equipment.
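The arithmetic behind this can be sketched directly: if computing power doubles every 18 months at a constant price, then the price of a fixed amount of computing halves on the same schedule. The starting price below is arbitrary:

```python
# Sketch: Moore's Law as a cost curve. If computing power doubles every
# 18 months at a constant price, the price of a fixed amount of
# computing halves on the same schedule. The starting price is arbitrary.

doubling_months = 18
start_price = 1000.0  # price today for one "unit" of computing

for years in (0, 3, 6, 9):
    halvings = (years * 12) / doubling_months
    price = start_price / 2 ** halvings
    print(f"after {years:>2} years: ${price:,.2f} per unit of computing")
```

On this schedule, the same computing power that costs $1,000 today would cost under $16 nine years from now, which is why hardware budgets buy capability increases rather than savings.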
However, the desktop as the standard of web-based computing is quickly giving way to mobility. Gartner
predicted that by 2013, mobile phones would overtake desktops as the most common web access device.1
Though this prediction has yet to be borne out, significant growth in this new channel affects not only IS
planning but also how Moore's Law plays out in practice. If consumer connectivity through mobile devices
surpasses that through PC-based computing, then the need for evergreening may diminish, or simply switch
platforms.
Impact of Moore's Law on computing prices
Two graphs, developed by the National Science Board, show the impact of Moore's Law on computing prices.
Exhibit 2.4-2 shows the cost per gigabyte of stored information, which has decreased by a factor of
approximately 2,000 between 1988 and 2002.
Exhibit 2.4-2
In a blog post, Matthew Komorowski, a mathematician and software engineer, details the history of storage
prices, showing the steady decline through 2010 predicted by Moore's Law. Exhibit 2.4-3 shows that the
greatest improvement in storage size relative to price occurs from the mid-1990s to the early 2000s, in line
with the rise of the Internet and e-commerce.
Exhibit 2.4-3
Source: mkomo.com, personal blog, A History of Storage Cost, accessed on May 29, 2013.
http://www.mkomo.com/cost-per-gigabyte
Exhibit 2.4-4 shows the cost of computers themselves, with a similar rate of decline. The graphs use a
logarithmic scale and show the decline as a straight line, when in fact it is an exponential decrease.
Exhibit 2.4-4
However, in a Wall Street Journal study conducted in 2010,2 a similar decline in PC prices was charted, except
for an increase in 2010. The increase does not debunk Moore's Law; instead, the article attributes the increase
to the willingness of consumers to pay more money for higher-end models.
These examples of declining computing prices are meant to highlight the impact of advancing technology on
pricing. To fully understand the economics of IT requires an understanding of Moore's Law and its impacts on
the technology environment.
Metcalfe's Law
One final law relates to the economics of IT: Metcalfe's Law. Robert Metcalfe, founder of 3Com Corporation,
stated that the usefulness, or utility, of a network equals the square of the number of users.
The idea here is a simple one. Consider the prospect of having the first and only telephone. What value would
it have to you? Perhaps there would be some symbolic value associated with having something nobody else
had, but in terms of functional utility (being able to do anything with it), having the only telephone would
be pretty useless. Having one of two telephones is more useful. At least you have one other person with whom
you can communicate. But having one of one million telephones is even more useful, as it increases the
chances that you can communicate with anyone using the device, including telemarketers during the dinner
hour. By now you should realize that nothing is simple within IS.
This mass adoption is the essence of Metcalfe's Law. Networks in business organizations are meant to
interconnect people and computers, to enable communication and the integration of value-producing activities.
The more people who are part of the network, the higher the potential for integration and communication and
the greater the value of the network. For example, the value of a Facebook, Twitter, or LinkedIn network is
directly attributable to the number of users registered and using the service. The more users, the more
benefits on two fronts: first, easier communication with friends because they are all on a shared, networked
service (Facebook), and second, greater potential advertising revenue for Facebook because of the number of
active users.
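The n-squared relationship is easy to verify by counting possible user-to-user links; the sketch below uses the exact pairwise count n(n-1)/2, which grows on the order of n squared:

```python
# Sketch: Metcalfe's Law. The number of distinct user-to-user links in a
# network of n users is n*(n-1)/2, which grows on the order of n squared,
# so the network's potential value is usually stated as proportional to n^2.

def potential_connections(n: int) -> int:
    """Distinct pairwise links among n users."""
    return n * (n - 1) // 2

for users in (1, 2, 10, 1_000, 1_000_000):
    print(f"{users:>9,} users -> {potential_connections(users):>15,} possible links")
```

One telephone yields zero links, and doubling a network from 1,000 to 2,000 users roughly quadruples the possible links, which is why races to build a user base matter so much.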
Metcalfe's Law also demonstrates why the most popular technology can be more beneficial than the best
technology. Throughout the history of modern technology there are instances where a technologically superior
alternative was beaten out by its inferior rival: for example, Betamax lost to VHS video cassette recorders,
and Macintosh computers gained only a fraction of the market share of Windows-based PCs. In most cases,
the winning alternative was able to capture market share more quickly and establish a critical mass of users.
Having created these networks of users, the options for related products (in the two examples, videotapes
and software) are greater with the more widely adopted technology, and this becomes a self-reinforcing
phenomenon. A contrary example is Blu-ray versus HD DVD, where the superior technology did win out, but
once again the battle was ultimately won or lost on the basis of market support rather than technology.
Network effects in IS decision making
There are several other reasons why network effects are so powerful in IS decision making:
You don't want to be out of step with competitors.
There is a risk associated with being out there on your own, and it is often safer to stick with
what everybody else is doing (even though that goes against principles of differentiation).
Think about the prospect of a bank deciding to deploy a system to allow customers to access
their account information. Even if the bank could do a better job of adding value to the system
by developing a proprietary network, perhaps using the electrical power grid as the data
transmission medium, the fact that everybody else can support Internet transactions is a pretty
compelling reason to use that approach. How would the bank convince its customers to use this
new network? How would consumers hook into the network? Would they require different
network connections for their online banking and their online bill presentment?
Similarly, you don't want a proprietary technology to lock you out of markets or opportunities.
Being stuck with a technology developed only for you, and one that is not compatible with other
systems (for example, that of your customers and suppliers), creates the risk that you will not be
able to adapt to new opportunities as quickly as those who have adopted more widely known and
used systems.
Finally, if you have a different technology, you may find it difficult to get support.
IT professionals tend to gravitate toward the most widely accepted (and thus marketable)
technologies when developing their skills. So if you adopt a niche technology it may be difficult
(and expensive) to find the qualified people you need to support your operations.
Metcalfe's Law is one of the reasons why Netscape gave away copies of its browser and why Microsoft fought
to have its browser included in the Windows operating system. By getting the browser onto more desktops, the
vendors hoped to create networks of committed users and to become the de facto standard for products in the
browser class. This would then drive sales of server-side products and generate substantial revenues. Microsoft
was right.
1 Gartner Highlights Key Predictions for IT Organizations and Users in 2010 and Beyond, Gartner Newsroom,
accessed 15 July 2011. http://www.gartner.com/it/page.jsp?id=1278413.
2 Ben Worthen, Rising Computer Prices Buck the Trend, The Wall Street Journal, December 13, 2010,
accessed April 9, 2013: http://online.wsj.com/article/SB10001424052748704681804576017883787191962.html
2.5 Developing an IS strategic plan
Learning objectives
Evaluate the issues associated with IS planning and developing an IS strategic plan. (Level 1)
Compare the scenario planning approach to more traditional approaches of structuring the
planning process. (Level 1)
Required reading
Reading 2-2: Strategic Planning Don'ts (and Dos)
Review Chapter 10, Section 10.3, Selecting Projects
LEVEL 1
The issues associated with IS planning can be broadly grouped as five questions:
Why do it?
What is the content of the plan?
Who is involved?
How is planning done?
When is planning done?
Module 1 presented the strategic motivations for IS planning and the business context in which IS planning
must take place, which answers the first question. The content of the plan was discussed in Topic 2.1, which
answers the second question. Here you will examine the planning process, which addresses the final three
questions.
Who needs to be involved?
IS planning needs to involve a range of stakeholders from around the firm, including IS staff, users, and
managers. In some cases, IS planning may also include consultation with customers, suppliers, and other
external stakeholders. Depending on the governance model, different levels of involvement will occur for
different units regarding different decisions. Departmental needs can best be articulated by people from within
those departments. Users and managers from finance to production to marketing need to be consulted, and
their input must be included in the planning process. This consultation may be formal or informal and focused
at one point in time or ongoing. A combination is likely the most effective. Informal, ongoing relationships allow
for easier communication, but formal and more intense consultations may bring to the surface issues that
would otherwise get lost in the minutiae of day-to-day conversation.
Senior management
Senior management plays a role in the planning process by providing broad strategic direction and support to
the planning effort. Planning for the future takes time away from the pressures of current work. Therefore,
without senior management commitment to the planning process, it is likely that insufficient attention will be
given to the process, often by the most important stakeholders. A primary tenet of project management
success is to obtain senior management support. It is no different with IS planning.
Financial management
Financial management plays at least two roles in this process. First, the finance team is an important consumer
of IS products and services. Financial managers also play a role in determining the costs and likely benefits of
an IS opportunity, and identifying where the funds to undertake different projects will be found.
Senior IS management
Senior IS management, along with senior business leadership, must take the lead in this planning process.
Other IS staff may be involved as facilitators of the process, gathering information (working with departmental
management and users to determine priorities), researching new technologies and their potential applications,
and preparing cost-benefit analyses (with help from the financial team and the departmental representatives)
for different proposals. It is important that all IS staff have a role in planning. This helps with department buy-
in when the approved plan is presented to the staff, because the whole department feels included in the
process.
How is planning done?
Various tools and methodologies are used to structure the planning process. Two techniques are outlined in the
textbook: portfolio analysis and strategic analysis (or critical success factors). Both approaches are useful in
organizations. The combination of a top-down (strategic analysis) and risk/benefit evaluation of IS projects
results in complementary information that may yield superior plans by highlighting any strategic disconnects
within the company. To focus on one or the other risks missing important opportunities.
Strategic analysis
Strategic analysis, using critical success factors, takes a top-down approach. Using tools such as Porter's five
forces model or the value chain, managers determine the most important processes in the organization and the
opportunities to use IS to support these processes. They then become the priorities for systems development.
Strategic analysis focuses directly on achieving alignment between IS and the business by bringing IS in line
with both the market and the organization.
Limitation of approaches
One limitation to both approaches is that they do not do well at addressing the potential for substantial change.
Both are focused on the present, particularly portfolio analysis where all that can be assessed are the current
projects and current costs. While the strategic analysis perspective aims to look forward, most managers find it
less than ideal when trying to foresee the importance of new developments and their implications for IS plans.
Flexibility and agility are expected from many plans today, and IS cannot afford to be too locked into any one
strategy at the expense of change. Although no organization wants a plan that changes every month, changes
to a business plan can happen quickly, and that speed must also be adopted by the IS plan.
The conflicting forces are consistency and flexibility. IS uses standards to mitigate risk, to streamline costs, and
to avoid variation, all to provide a consistent user experience and a supportable IT environment. Standards
work, but often at the cost of flexibility. To remain flexible to change, the IT department must use less
standard approaches, which can improve its ability to react to change but create a riskier environment with
more variability.
Scenario planning
An alternative approach, called scenario planning, involves looking at a small number of future possibilities and
examining their implications for the current IS plan. An organization called IDEO provides a website that
describes design thinking, an innovative scenario-planning approach applied across numerous industry sectors.
Scenario planning makes use of open creative thinking around the possibilities of future business, by
challenging assumptions that restrict thinking to the present. This is not an easy thing to do.
When is planning done?
As you learned in the first part of this module, timing in IS planning is not easy. Long-range planning is difficult
in times of great change. It is important to keep plans flexible and open to change as business and technology
realities shift.
IS planning occurs at multiple points in time, driven by the needs of strategic planning, budgeting, and specific
project requests. A clear IS strategy, articulated to define the broad goals of IS within the organization and
how they relate to business strategy, facilitates both regular annual planning and ad hoc planning by providing
a framework within which to make choices and trade-offs. This is a key concept to address with management.
The IS plan is a boat that has a restriction on how many passengers it can hold. Remove some, and you can
add others. You cannot keep adding passengers without sinking the boat. Through governance, management
must decide who stays on the boat now, and who has to wait until it returns to dock.
Long-range planning
Long-range planning would likely have cycles of three to five years, probably closer to three in most
organizations. In the long-range plan, the broad strategic emphasis is developed and large projects are
considered. For example, the Li & Fung Trading Company, a global sourcing company that supplies such retail
customers as Abercrombie and Fitch, The Limited, and American Eagle, maintains a rolling three-year strategic
plan that includes IT plans as a key part. This forces the company to focus on the broad strategic challenges
and to have a set of coherent goals that drive its operational activities.
Budgeting cycle
In most organizations, much of the tactical IS planning is conducted as part of the annual budgeting cycle. As
with other departments, the IS department must specify its financial needs for the coming year. This budgeting
cycle serves as a catalyst to focus on possible projects and milestones to achieve them. It also helps to
prioritize projects and spending over a relatively manageable cycle.
Further planning takes place on an ad hoc basis, as projects are proposed for consideration by various
stakeholders. New opportunities may emerge outside of the normal budgeting cycle. Mechanisms are put in
place to consider these opportunities as they emerge. This is often part of the role of an IS steering committee.
Not only does the steering committee protect the company by maintaining the focus on key projects, but it also
helps protect IS from appearing inflexible.
Module 2 self-test
1. Define a database and a database management system and describe how it solves the problems
of a traditional file environment.
Source: Kenneth C. Laudon, Jane P. Laudon, and Mary Elizabeth Brabston, Management
Information Systems: Managing the Digital Firm, Fifth Canadian Edition (Toronto: Pearson
Canada, 2011), page 195. Reproduced with permission from Pearson Canada.
Solution
2. Identify and describe the elements of the hierarchy of data.
Solution
3. Using Table 5.4, Total Cost of Ownership (TCO) Cost Components, from the textbook, calculate
your TCO for any computers you have at home. Use Excel and create a spreadsheet that you can
fill out.
Solution
4. You are working as a business risk analyst for a mid-sized payroll services company. Over the
past week you have attended several seminars on information systems planning portfolio
analysis, strategic analysis (or CSFs), and scenario analysis. The director of your department was
unable to attend these sessions and has asked you to prepare a memo summarizing what you
learned.
Required
Write a memo to Melodie Anderchuk, Director, Operations Management, in which you compare
and contrast the three approaches for information systems planning. Include in your memo a
description of how they are conducted, their strengths and weaknesses, and when you would
advocate using each.
Solution
5. Using the table below, identify which governance archetype exists in your organization. Is it a
good fit? Why or why not?
Solution
6. Explain the concept of data integration and the challenges involved in ensuring it is achieved for
an organization.
Solution
7. Explain how legislation, industry self-regulation, and technology tools help protect the individual
privacy of Internet users.
Source: Kenneth C. Laudon, Jane P. Laudon, and Mary Elizabeth Brabston, Management
Information Systems: Managing the Digital Firm, Fifth Canadian Edition (Toronto: Pearson
Canada, 2011), page 124. Reproduced with permission from Pearson Canada.
Solution
Module 2 self-test solution
Question 1 solution
A database management system (DBMS) is software that permits an organization to create and store data
centrally, manage it efficiently, and provide access to the stored data by application programs.
Some of the benefits of a DBMS are:
The value of an information system can be increased through the ability to link data files, and
provide real-time information.
Data redundancy and inconsistency can be reduced, increasing data integrity.
Data confusion can be eliminated through the commitment to one database.
Program-development and maintenance costs can be radically reduced.
Flexibility of information systems can be greatly enhanced.
Access and availability of information can be increased.
Management of data, their use, and their security can be centralized.
Source: Adapted from Dale Foster, Instructor's Manual to accompany Management Information Systems:
Managing the Digital Firm, Fifth Canadian edition, Pearson Canada, 2011, Chapter 6, pages 202-203.
Reproduced with the permission of Pearson Canada.
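The linking and centralization benefits listed above can be sketched with Python's built-in sqlite3 module. This is a minimal, illustrative example only; the table names and data are invented.

```python
import sqlite3

# One in-memory database shared by all applications: customer data is
# stored once (reducing redundancy) and linked to related records.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

cur.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")
cur.execute("""CREATE TABLE orders (id INTEGER PRIMARY KEY,
               customer_id INTEGER REFERENCES customers(id),
               amount REAL)""")

cur.execute("INSERT INTO customers VALUES (1, 'Acme Ltd.')")
cur.execute("INSERT INTO orders VALUES (101, 1, 2500.00)")

# Linking data files: any application program can join orders to
# customers through the DBMS instead of keeping its own copy.
cur.execute("""SELECT c.name, o.amount
               FROM orders o JOIN customers c ON o.customer_id = c.id""")
rows = cur.fetchall()
print(rows)  # [('Acme Ltd.', 2500.0)]
conn.close()
```

Because every program reads the same customer table, a correction made once is seen everywhere, which is the point of eliminating data confusion through commitment to one database.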
Module 2 self-test solution
Question 2 solution
Module 2 self-test solution
Question 3 solution
This question requires you to think about all of the elements that make up TCO. The table shows an example
for a Mac user.
Components/items Cost
Hardware acquisition iMac $1,246.00
Software acquisition MS Office for Mac $199.95
Installation Nerds on site $100.00
Training N/A ???
Support Apple support $139.00
Maintenance ???
Infrastructure Back-up/network $400.00
Downtime ???
Space and Energy In a year $10.00
Total $2,094.95 + ???
The hardware, software, and Apple support costs are relatively easy to estimate. They are likely
to be fairly accessible on company websites. But other costs are harder to estimate. How much is installation?
You may be able to find this directly from a company like "Nerds on Site" but you may have to guess. And
what about training, maintenance, and downtime? Macs are supposedly easy to use, but does that make their
costs zero? Likely not.
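The worksheet above translates directly into a short calculation. This sketch uses the hypothetical figures from the table and records unknown components as None so they are reported separately rather than silently treated as zero.

```python
# TCO components from the example table; None marks a cost that still
# needs to be estimated (training, maintenance, downtime).
components = {
    "Hardware acquisition (iMac)": 1246.00,
    "Software acquisition (MS Office for Mac)": 199.95,
    "Installation (Nerds on Site)": 100.00,
    "Training": None,
    "Support (Apple support)": 139.00,
    "Maintenance": None,
    "Infrastructure (back-up/network)": 400.00,
    "Downtime": None,
    "Space and energy (per year)": 10.00,
}

known_total = sum(cost for cost in components.values() if cost is not None)
unknowns = [name for name, cost in components.items() if cost is None]

print(f"Known costs: ${known_total:,.2f}")          # Known costs: $2,094.95
print("Still to estimate:", ", ".join(unknowns))
```

Keeping the unknowns visible mirrors the "$2,094.95 + ???" total in the table: the point of TCO analysis is that the hard-to-estimate items are still real costs.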
Module 2 self-test solution
Question 4 solution
MEMORANDUM
Date: May 6, 20XX
To: Melodie Anderchuk
Director, Operations Management
From: Pat Student
Business Risk Analyst
Subject: Summary of information systems planning seminars
As per our discussion, I have summarized the highlights of the three information systems planning seminars
that I attended last week. I have included a brief description of how each is conducted, their strengths and
weaknesses, and some thoughts on when each might be used.
Portfolio analysis
Portfolio analysis begins with a consideration of the organization in terms of the investment in IT projects, and
their risks and benefits to the organization. It is a cumulative planning method that scores each
existing IT investment (licenses, contracts, projects, consulting agreements, etc.) on its cost and
potential risk/benefit. The key strength of this approach is that it results in better use of existing systems and
in more feasible solutions since it is rooted in closer alignment between IT and company goals. The main
weakness of this approach is that it requires constant attention to maintain the portfolio. Missing or out-of-date
information will weaken its effectiveness as both a planning and alignment tool.
Strategic analysis
Strategic analysis begins with a consideration of the external environment and the critical activities that the firm
must succeed at to survive in that environment (critical success factors, CSFs). It is a top-down approach that
involves a small number of senior managers who articulate the CSFs as they see them. These are then
aggregated and used as a basis to set IS priorities. Strategic analysis is very good at producing systems that
are tied to business strategy. However, its limitations include involving multiple stakeholders with different
views of the firm and producing ideas that are difficult to implement. This method is thus most useful when
there is a consistent strategy that needs to be leveraged/supported by technology and when there are not
multiple competing views.
Scenario planning
Scenario planning is the process of looking into the future at widely different scenarios that anticipate possible
business realities. The goal is to identify the fundamental assumptions or factors that will influence how the
business would operate under different conditions. Unlike the previous two methods, scenario planning
addresses the needs of planning in times of rapid change. Its weakness is that the specific scenarios identified
are unlikely to occur exactly as described. If those involved do not understand the purpose of developing the scenarios (to
surface assumptions, constraints, and so on), this approach can create problems in implementation. Scenario
planning is most useful in situations of rapid and extensive change.
Module 2 self-test solution
Question 5 solution
Your answer will be partly dictated by the type of organization you work in. The key issue is whether it works
or not. For the most part, good governance is associated with whether or not people understand the process
for decision-making and get the support that is required to do their jobs.
Your answer should focus on one of the following governance styles and should describe it correctly:
1. Business monarchy: Decisions are made by senior business leaders, such as the CEO.
2. IT monarchy: Decisions are made by senior IT leaders, such as the CIO.
3. Duopoly: Decisions are made jointly by senior business and IT leaders.
4. Federal: Decisions are made jointly by central business leaders along with business leaders from
the different areas of the firm, such as strategic business units or functional departments. IT may
or may not be represented in this model.
5. Feudal: Decisions are made locally by each SBU or functional unit.
6. Anarchy: Decisions are made in multiple locations, typically by individuals with little
accountability. This style is rarely seen, and unlikely ever to be considered effective.
Module 2 self-test solution
Question 6 solution
Data integration refers to the ability of systems within an organization to share data and to aggregate it
without a lot of manual intervention. Enterprise systems have a high degree of data integration because they
are built on enterprise-wide data models that show the linkages between functions in terms of the data they
use. They are often modular, and integrate through a core set of rules and functions.
Because IS developments often arise from departmental requests, it is not unusual to see a firm with
unintegrated systems. This is also the case in firms that have grown by acquisition, where the acquired
companies have systems that have been maintained even after the merger.
Two examples of integration problems are given in the module: one relating to different definitions of data,
the other relating to different formats of data. Both lead to problems in combining information from different
sources.
The main problem in data integration is one of scale. It is easy to see conceptually what needs to be done, but
the rules for combining information must be extremely precise (in order to be automated), and it is difficult to
determine all of the exceptions and variants on a rule that exist within a large system.
Two current methods being explored to help with data integration are cloud computing, where systems and
their associated data are hosted by third parties, and SOA (service-oriented architecture), where Web-based
service models are adopted by organizations to take advantage of the Web's cross-platform
architecture.
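The two integration problems mentioned above (different names and different formats for the same data) can be illustrated in a few lines of Python. The source-system field names and date formats here are hypothetical.

```python
from datetime import datetime

# Two source systems describe the same customer with different field
# names and date formats. Integration requires explicit mapping rules;
# at scale, the hard part is finding every variant of these rules.
SALES_RECORD = {"cust_name": "Acme Ltd.", "signup": "06/05/2011"}        # DD/MM/YYYY
SUPPORT_RECORD = {"customer": "Acme Ltd.", "start_date": "2011-05-06"}   # ISO 8601

def normalize_sales(rec):
    return {"name": rec["cust_name"],
            "since": datetime.strptime(rec["signup"], "%d/%m/%Y").date()}

def normalize_support(rec):
    return {"name": rec["customer"],
            "since": datetime.strptime(rec["start_date"], "%Y-%m-%d").date()}

a = normalize_sales(SALES_RECORD)
b = normalize_support(SUPPORT_RECORD)
print(a == b)  # True: both records now describe the same fact
```

Without the explicit date-format rules, "06/05/2011" could be read as May 6 or June 5, which is exactly the kind of exception that must be pinned down before integration can be automated.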
Module 2 self-test solution
Question 7 solution
In Canada, as legislated by PIPEDA, individuals must opt in to allow their information to be shared by
organizations. The federal legislation that applies to private-sector organizations is mirrored by public-sector
legislation at most provincial levels, and at the federal level. Industry must comply with PIPEDA legislation that
dictates the level of responsibility for privacy and protection of data. Professional organizations in some
industries, such as those for accountants, engineers, and information systems professionals, have adopted codes of
ethics to help regulate what their professionals do. Businesses have taken some steps, including publishing
statements about how this information will be used.
Technology tools can be used on individual computers to protect against viruses and spyware, and to block certain
sites. Technical solutions also enable e-mail encryption, anonymous emailing and surfing, and cookie rejection.
Of particular interest is the P3P standard that allows the user to have more control over personal information
that is gathered on the Web sites visited, although the adoption of P3P throughout the Web server community
has been slow and questionable.
Source: Adapted from Dale Foster, Instructor's Manual to accompany Management Information Systems:
Managing the Digital Firm, Fifth Canadian edition, Pearson Canada, 2011, Chapter 4, pages 126-127.
Reproduced with the permission of Pearson Canada.
Module 2 summary
Information technology governance: Organization and planning for IS
Evaluate the critical decisions about IT that organizations make, and the different governance
arrangements for making those decisions.
IT governance is the assignment of decision rights and the development of an accountability framework for
IS/IT decision making.
Different forms of governance are recommended for different IT decisions, and for firms pursuing
different strategic objectives.
Organizations make five key decisions, covering
IT infrastructure
IT principles
IT architecture
business application needs
IT investment
Governance frameworks
COBIT:
A framework to organize IT governance practices by
process and then link them to the actual business needs
of the organization.
Mapping of business process descriptions and templates.
Control objectives that support effective security and
control over IT processes.
Guidelines for management to focus on objectives,
performance measures, and the integration of IT
governance measures with other areas of governance.
Maturity models support process benchmarking and
continuous improvement.
COSO:
The control environment that establishes the processes
for managing and developing employees in an
organization.
The control activities that are directives to ensure
management policies and procedures are followed.
Information and communications policies developed to
ensure governance compliance.
Risk assessments from both internal and external
sources, including objectives to manage those risks.
The continuous monitoring of control systems to assess
deficiencies and the need for improvements.
Six decision-making frameworks are commonly used in these situations:
business monarchy
IT monarchy
duopoly
federal
feudal
anarchy
Effective governance involves both identifying appropriate decision-making models and designing
the structures and processes to support those models.
Identify the roles of the financial manager and other stakeholders in IT governance.
The financial manager's role will be to identify what kinds of information and broad systems are needed to help
meet the company's objectives. In today's business environment, no strategy will succeed if it ignores the
challenges and rewards that IT offers.
The financial manager may also be required to
identify multiple opportunities
prioritize needs or opportunities
include the cost of systems in the budget
Assess the challenges of taking an IS plan from theory to practice.
You need to compare the business strategic plan with the current IS infrastructure and look for
alignment and mismatches.
Business strategy is not always determined before IS strategy, especially during times of large-
scale technological change.
Sometimes technology drives business strategy.
Traditional time horizons (perhaps 5-10 years) don't translate well to technology planning.
Implementation of any plan is difficult, given that the technology landscape is constantly
changing. IS plans should undergo constant revision and updating.
Assess the principal drivers of hardware and software decisions in organizations.
The type of business that you have and the kinds of processes that support that business will
dictate the types of software applications you need.
You will then decide how you will implement that application (off-the-shelf or custom
programming).
The kinds of applications that you require will in turn determine the type of hardware and
network infrastructure that you need.
The relatively short lifespan of hardware and software will also drive change in organizations.
Some computers have a much shorter lifespan than more traditional capital equipment, and as a
rule, hardware becomes obsolete before it is worn out.
Customers or suppliers may demand newer or different technologies, especially ones that make
transactions more efficient or that contribute to lower costs.
Identify the specific issues in hardware and software planning.
When you purchase software, you don't own it. You just have a license to use it, subject to a
large number of conditions. The same applies when you subscribe to hosted SaaS.
Most licenses are based upon the number of users of the software. If you are planning to
increase the number of people that work in your organization, you need to plan to spend more
money just to maintain your existing infrastructure.
Software companies constantly release newer versions of existing software, requiring customers
to upgrade to the newer versions, for example by ending support for older versions.
Sometimes the current software in use will not support the goals of the future IS plan, so you
may need to upgrade the software sooner to meet future demands.
Open-source software changes the dynamic of forced upgrades. Since improvements and patches
are done by the user community, you can always upgrade the software without purchasing a new
license. There are no additional fees if you increase the number of users.
The upgrading cycle must be planned. You need to make trade-offs between types of users, the
risks of supporting multiple systems, and the costs involved in upgrading computers.
When you are making hardware purchases you must make decisions about the type of vendor,
cost, and the kind of technical support that you need and are willing to pay for.
Evaluate the advantages and disadvantages of centrally determined technology standards.
Standards limit the range of technologies that can be used in an organization, making it easier to
support and control.
Standards can allow for lower costs and better utilization of resources.
Standards limit the choices that business units have in terms of technology. The options set out
by the central standards may not be the best for an individual business unit.
Relate the issues of data integrity, security, and integration as they apply to IS planning.
Data are some of the most important elements in any information system.
In order for data to be useful to an organization, they must
be presented in a meaningful format
be available to the people who need it
have the right amount of detail required
be correct
Maintaining data integrity is challenging and costly.
There are a number of issues that need to be addressed around how information is recorded and
how it is used in the various business processes.
Data must be secured.
Security not only protects confidential information about clients, but also ensures the
confidentiality of an organization's information.
Security must also ensure that data are incorruptible.
Security protects customer privacy.
Data integration allows organizations to combine the data from different sources in order to
create a more complete picture about a customer or product.
There are issues around scale; the more sources you are using, the more complex the job.
Different databases may use different terms to describe the same data or use the same term to
describe different data.
You can improve data integrity and security with a combination of technology (passwords and
data entry rules) and training for the users and developers of systems.
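The "data entry rules" mentioned above can be as simple as validation checks applied before a record is accepted. This is an illustrative sketch; the field names and the postal-code rule are assumptions.

```python
import re

# Simple data-entry rules: reject a record at the point of capture
# rather than cleaning bad data later. The checks here are examples.
def validate_customer(record):
    errors = []
    if not record.get("name", "").strip():
        errors.append("name is required")
    postal = record.get("postal_code", "")
    # Canadian postal code pattern, e.g. "K1A 0B1" (assumed format).
    if not re.fullmatch(r"[A-Za-z]\d[A-Za-z] ?\d[A-Za-z]\d", postal):
        errors.append("postal code is not in a recognized format")
    return errors

ok = validate_customer({"name": "Acme Ltd.", "postal_code": "K1A 0B1"})
bad = validate_customer({"name": "", "postal_code": "12345"})
print(ok)   # []
print(bad)  # two error messages
```

Rules like these improve integrity only for the cases they anticipate, which is why the module pairs them with training for users and developers.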
Assess the business value of an information system from a financial perspective.
You can use standard cost-benefit analysis to establish the business value of an IS.
ROI, NPV calculations, and profitability indexes can all define a tangible cost benefit for an IS,
especially in systems that improve or streamline business processes.
The intangible benefits from IS are harder to quantify.
IS enables the organization to conduct business.
Some information systems are not designed to reduce head count or reduce cost, but to improve
"decision making" or manage "organizational knowledge."
Financial models have their place and should be used, but will only provide part of the picture.
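The standard cost-benefit calculations mentioned above can be sketched briefly. The investment figures and discount rate here are hypothetical.

```python
# NPV of an IS investment: an upfront cost followed by projected
# annual savings, discounted at an assumed 8% rate, plus a simple ROI.
def npv(rate, cash_flows):
    """Cash flow at index 0 occurs now; later flows are discounted."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

initial_cost = 100_000
annual_savings = [30_000] * 5            # five years of projected savings
flows = [-initial_cost] + annual_savings

project_npv = npv(0.08, flows)
roi = (sum(annual_savings) - initial_cost) / initial_cost

print(f"NPV at 8%: ${project_npv:,.2f}")  # positive, so financially attractive
print(f"Simple ROI: {roi:.0%}")           # 50%
```

A positive NPV supports the investment on financial grounds, but as the summary notes, intangible benefits such as better decision making sit outside this calculation.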
Identify the four elements that make up the total cost of ownership of IS, and defend the
importance of considering non-capital costs when evaluating IS acquisitions.
Total cost of ownership (TCO) of IS includes not only the cost of hardware, software, and
telecommunications, but also the cost of service and personnel.
Service and personnel costs include
the cost of technical support and administration
the cost of consumables
end-user operations (formal and informal learning, down-time, non-productive
activities)
Organizations try to reduce the total cost of ownership by developing and implementing hardware
and software standards.
Moving towards a more centralized model of computing puts less responsibility on the end user of
computing.
Relate the relevance of Moore's Law and Metcalfe's Law to IS management.
Because computing power doubles every 18 months, the demand for new computers is constant
and unending.
In order to maximize your investment in hardware and software, it is important that you increase
the size and scope of your network and increase the number of networks with which you
connect.
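Both laws can be illustrated numerically. These are stylized rules of thumb, as in the summary above, not exact physical claims.

```python
# Moore's Law (as stated in the module): computing power roughly
# doubles every 18 months. Metcalfe's Law: a network's value grows
# with the square of its number of users.
def moores_law(power_now, months):
    return power_now * 2 ** (months / 18)

def metcalfe_value(users):
    return users ** 2   # proportional value, arbitrary units

print(moores_law(1.0, 36))                        # 4.0: four times the power in 3 years
print(metcalfe_value(200) / metcalfe_value(100))  # 4.0: double the users, quadruple the value
```

For IS management the implications run in opposite directions: Moore's Law pushes hardware toward rapid obsolescence, while Metcalfe's Law rewards growing and interconnecting networks.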
Evaluate the issues associated with IS planning and developing an IS strategic plan.
IS planning involves a wide range of groups and stakeholders: senior management, financial
management, senior IS management, customers, suppliers, and other external stakeholders.
IS planning can occur at various times, driven by events such as project requests or strategic
planning.
While long-range planning can be problematic, some organizations will review long-range plans
on a regular basis (every 3 to 4 years).
IS planning also occurs during the annual budget cycle.
Compare the scenario planning approach to more traditional approaches of structuring the
planning process.
Scenario planning
involves looking at a future situation and then examining the possible effects on the current IS
plan
doesn't predict the future but helps to anticipate the possible outcomes
is more of a big picture approach
Portfolio analysis
looks at the current projects and recurring IS costs in the organization
looks for strategic alignment with the stated business goals
is a risk/benefit analysis of the dollar and resource investments within IS
attempts to maximize the value of existing projects/expenditures through low risk/high return IS
investments.
Strategic analysis
uses critical success factors, value chain, or Porter's five forces to determine the key processes
and then identifies how IS can support these processes
uses a top-down approach
focuses on achieving IS and business alignment
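The risk/benefit scoring at the heart of portfolio analysis can be sketched as follows. The projects, rating scale, and net-score rule are all illustrative assumptions.

```python
# Hypothetical portfolio: each IT investment is rated for benefit and
# risk on a 1-10 scale, and low-risk/high-benefit items rank first.
portfolio = [
    {"name": "Payroll upgrade",         "benefit": 8, "risk": 2},
    {"name": "CRM pilot",               "benefit": 6, "risk": 7},
    {"name": "Legacy license renewal",  "benefit": 3, "risk": 1},
]

for item in portfolio:
    item["score"] = item["benefit"] - item["risk"]   # simple net score

priorities = sorted(portfolio, key=lambda i: i["score"], reverse=True)
print([i["name"] for i in priorities])
# ['Payroll upgrade', 'Legacy license renewal', 'CRM pilot']
```

As the memo earlier in the module notes, the value of such a portfolio depends on keeping these scores current; stale entries weaken the analysis.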