
TECHNOLOGY PAPERS

Bechtel Technology Journal


December 2009

Volume 2, No. 1

Contents
Foreword . . . . . v
Editorial . . . . . vii

CIVIL
Managing Technological Complexity in Major Rail Projects . . . . . 3
Siv Bhamra, PhD; Michael Hann; and Aissa Medjber
Measuring Carbon Footprint in an Operational Underground Rail Environment . . . . . 11
Elisabeth Culbard, PhD

COMMUNICATIONS
Intermodulation Products of LTE and 2G Signals in Multitechnology RF Paths . . . . . 21
Ray Butler, Andrew Solutions; Aleksey A. Kurochkin; and Hugh Nudd, Andrew Solutions
Cloud Computing: Overview, Advantages, and Challenges for Enterprise Deployment . . . . . 33
Brian Coombe
Performance Engineering Advances to Installation . . . . . 45
Aleksey A. Kurochkin

MINING & METALS (M&M)
Environmental Engineering in the Design of Mining Projects . . . . . 57
Mónica Villafañe Hormazábal and James A. Murray
Simulation-Based Validation of Lean Plant Configurations . . . . . 67
Robert Baxter; Trevor Bouk; Laszlo Tikasz, PhD; and Robert I. McCulloch
Improving the Hydraulic Design for Base Metal Concentrator Plants . . . . . 81
José M. Adriasola; Robert H. Janssen, PhD; Fred A. Locher, PhD; Jon M. Berkoe; and Sergio A. Zamorano Ulloa

OIL, GAS & CHEMICALS (OG&C)
Plot Layout and Design for Air Recirculation in LNG Plants . . . . . 99
Philip Diwakar; Zhengcai Ye, PhD; Ramachandra Tekumalla; David Messersmith; and Satish Gandhi, PhD, ConocoPhillips Company
Wastewater Treatment: A Process Overview and the Role of Chemicals . . . . . 109
Kanchan Ganguly and Asim De
Electrical System Studies for Large Projects Executed at Multiple Engineering Centres . . . . . 119
Rajesh Narayan Athiyarath

POWER
Options for Hybrid Solar and Conventional Fossil Plants . . . . . 133
David Ugolini; Justin Zachary, PhD; and Joon Park
Managing the Quality of Structural Steel Building Information Modeling . . . . . 145
Martin Reifschneider and Kristin Santamont
Nuclear Uprates Add Critical Capacity . . . . . 157
Eugene W. Thomas
Interoperable Deployment Strategies for Enterprise Spatial Data in a Global Engineering Environment . . . . . 165
Tracy J. McLane; Yongmin Yan, PhD; and Robin Benjamins

SYSTEMS & INFRASTRUCTURE
Site Characterization Philosophy and Liquefaction Evaluation of Aged Sands . . . . . 177
Michael R. Lewis; Ignacio Arango, PhD; and Michael D. McHood
Evaluation of Plant Throughput for a Chemical Weapons Destruction Facility . . . . . 193
Christine Statton; August D. Benz; Craig A. Myler, PhD; Wilson Tang; and Paul Dent
Investigation of Erosion from High-Level Waste Slurries at the Hanford Waste Treatment and Immobilization Plant . . . . . 205
Ivan G. Papp and Garth M. Duncan

TECHNICAL NOTES
Effective Corrective Actions for Errors Related to Human-System Interfaces in Nuclear Power Plant Control Rooms . . . . . 215
Jo-Ling J. Chang and Huafei Liao, PhD
Estimating the Pressure Drop of Fluids Across Reducer Tees . . . . . 219
Krishnan Palaniappan and Vipul Khosla

The BTJ is also available on the Web at www.bechtel.com. (Click on Services > Engineering & Technology > Technical Papers)

© 2009 Bechtel Corporation. All rights reserved.


Bechtel Corporation welcomes inquiries concerning the BTJ. For further information or for permission to reproduce any paper included in this publication in whole or in part, please e-mail us at btj_edit@bechtel.com. Although reasonable efforts have been made to check the papers included in the BTJ, this publication should not be interpreted as a representation or warranty by Bechtel Corporation of the accuracy of the information contained in any paper, and readers should not rely on any paper for any particular application of any technology without professional consultation as to the circumstances of that application. Similarly, the authors and Bechtel Corporation disclaim any intent to endorse or disparage any particular vendors of any technology.


Bechtel Technology Journal


Volume 2, Number 1

ADVISORY BOARD
Thomas Patterson . . . . . Principal Vice President and Corporate Manager of Engineering
Benjamin Fultz . . . . . Chief, Materials Engineering Technology, Oil, Gas & Chemicals; Chair, Bechtel Fellows
Jake MacLeod . . . . . Principal Vice President, Bechtel Corporation; Chief Technology Officer, Communications; Bechtel Fellow
Justin Zachary, PhD . . . . . Assistant Manager of Technology, Power; Bechtel Fellow

TRADEMARK ACKNOWLEDGMENTS
All brand, product, service, and feature names and trademarks mentioned in this Bechtel Technology Journal are the property of their respective owners. Specifically:
Amazon Web Services, Amazon Elastic Compute Cloud, Amazon EC2, Amazon Simple Storage Service, Amazon S3, and Amazon SimpleDB are trademarks of Amazon Web Services LLC in the US and/or other countries. Apache and Apache Hadoop are trademarks of The Apache Software Foundation. AutoDesk and AutoCAD are registered trademarks of AutoDesk, Inc., and/or its subsidiaries and/or affiliates in the USA and/or other countries. Bentley, gINT, and MicroStation are registered trademarks and Bentley Map is a trademark of Bentley Systems, Incorporated, or one of its direct or indirect wholly owned subsidiaries. Corel, iGrafx, and iGrafx Process are trademarks or registered trademarks of Corel Corporation and/or its subsidiaries in Canada, the United States, and/or other countries. Dell Cloud Computing Solutions is a trademark of Dell Inc. ESRI and ArcGIS are registered trademarks of ESRI in the United States, the European Union, or certain other jurisdictions. ETAP is a registered trademark of Operation Technology, Inc. Flexsim is a trademark of Flexsim Software Products Inc. Google is a trademark of Google Inc. Hewlett-Packard, HP, and Flexible Computing Services are trademarks of Hewlett-Packard Development Company, L.P. IBM is a registered trademark of International Business Machines Corporation in the United States. IEEE is a registered trademark of The Institute of Electrical and Electronics Engineers, Incorporated. Linux is a registered trademark of Linus Torvalds. Mac OS is a trademark of Apple, Inc., registered in the United States and other countries. MapInfo Professional is a registered trademark of Pitney Bowes Business Insight, a division of Pitney Bowes Software and/or its affiliates. Mentum Planet is a registered trademark owned by Mentum S.A. Merox is a trademark owned by UOP LLC, a Honeywell Company. Microsoft, Excel, and Windows are registered trademarks of Microsoft Corporation in the United States and/or other countries. Oracle is a registered trademark of Oracle Corporation and/or its affiliates.

EDITORIAL BOARD
Justin Zachary, PhD . . . . . Editor-in-Chief
Siv Bhamra, PhD . . . . . Civil Editor
Jake MacLeod . . . . . Communications Editor
William Imrie . . . . . Mining & Metals Editor
Cyrus B. Meher-Homji . . . . . Oil, Gas & Chemicals Editor
Sanj Malushte, PhD . . . . . Power Editor
Farhang Ostadan, PhD . . . . . Systems & Infrastructure Editor

EDITORIAL TEAM
Barbara Oldroyd . . . . . Coordinating Technical Editor
Richard Peters . . . . . Senior Technical Editor
Teresa Baines . . . . . Senior Technical Editor
Ruthanne Evans . . . . . Technical Editor
Brenda Thompson . . . . . Technical Editor
Ann Miller . . . . . Technical Editor
Angelia Slifer . . . . . Technical Editor
Bruce Curley . . . . . Technical Editor

GRAPHICS/DESIGN TEAM
Keith Schools . . . . . Graphic Design
Matthew Long . . . . . Graphic Design
Mary L. Savannah . . . . . Graphic Design
John Connors . . . . . Graphic Design
Diane Cole . . . . . Desktop Publishing

Salesforce.com is a registered trademark of salesforce.com, inc. Simulink is a registered trademark of The MathWorks, Inc. Sun Microsystems is a registered trademark of Sun Microsystems, Inc., in the United States and other countries. TEAMWorks is a trademark of Bechtel Corporation. Tekla is either a registered trademark or a trademark of Tekla Corporation in the European Union, the United States, and other countries. ULTIMET is a registered trademark owned by Haynes International, Inc.


Foreword

Welcome to the Bechtel Technology Journal! The papers contained in this annual compendium highlight the broad spectrum of Bechtel's innovation and some of the technical specialists who represent Bechtel as experts in our business. As can be seen from the variety of topics, Bechtel's expertise is truly diverse and represents numerous industries, disciplines, and specialties. The objective of the BTJ is to share with our clients, fellow employees, and select industry and university experts a sampling of our technical and operational experiences from the various industries that Bechtel serves. The papers included have been written by individuals from all of the major business units within our company. In some cases, our customers have also made significant contributions as co-authors, and we thank them for that! The authors, advisory board, editorial board, editorial team, and graphics/design team who have made this publication possible can be truly proud of the outcome. To each go my personal thanks. As you will see when reading the selected papers, Bechtel does represent innovation in our approaches to both solving engineering challenges and managing technical complexity. I, too, am proud to be a small part of this effort and am confident that this Bechtel Technology Journal provides a better understanding of how Bechtel applies our best practices to our work.

Sincerely,

Benjamin Fultz
Chief, Materials Engineering Technology, Bechtel Oil, Gas & Chemicals
Chair, Bechtel Fellows


Editorial

Following our successful publication of the inaugural issue of the Bechtel Technology Journal in 2008, and with much appreciation for the interest it generated in various industry sectors, we are pleased to offer our second annual issue.

The BTJ provides a window into the innovative responses of Bechtel's leading specialists to the diverse technical, operational, regulatory, and policy issues important to our business. We are confident that this collection of papers, selected from a substantial number of worthy submissions, offers useful and interesting information and presents solutions to real problems.

The editorial staff invites your comments or questions germane to the BTJ's content. Please send them to me at jzachary@bechtel.com.

We wish you enjoyable reading!

Dr. Justin Zachary
Assistant Manager of Technology, Bechtel Power
Editor-in-Chief
Bechtel Fellow


Civil
Technology Papers

Managing Technological Complexity in Major Rail Projects
Siv Bhamra, PhD; Michael Hann; and Aissa Medjber . . . . . 3

Measuring Carbon Footprint in an Operational Underground Rail Environment
Elisabeth Culbard, PhD . . . . . 11

JNP Secondment
A pair of Piccadilly line trains rest in Acton Town station, part of the project to renovate three historic lines of the London Underground: Jubilee, Northern, and Piccadilly.

MANAGING TECHNOLOGICAL COMPLEXITY IN MAJOR RAIL PROJECTS


Issue Date: December 2009

Abstract: This paper discusses the common technical issues that may arise during the execution of large projects and presents a structured approach to managing the technological complexity of delivering major rail projects that comply with customer requirements. The major Crossrail Project in London is used as a case study for the application of the approach.

Keywords: integration, rail projects, systems engineering, technology, validation

INTRODUCTION

Recent decades have seen a continued increase in the demand for railway passenger and freight services in most regions of the world. This is particularly true in Europe, the Middle East, South Asia, and the Far East, where major investments are now underway to improve railway service performance, safety, and reliability in response to a demand for higher capacity and performance. Advanced technologies and shrinking design times are increasingly being seen as a means to assist in responding more quickly to growing customer requirements in a commercially competitive environment. As the demand grows for safer, more efficient, operationally flexible, and higher performance railway systems that are well integrated with other forms of transport, customer requirements can be met only through the carefully controlled application of emerging technology. Additional industry challenges arise from the fact that rail projects are often spread over long geographic distances, crossing different communities and even countries, leading to cultural and behavioural issues that prevent technology alone from delivering solutions. The globalisation of system solutions has given rise to a wide-ranging number of reference sites within the rail industry. This paper discusses some of the recent trends in the growth of technological complexity and examines root causes for the risks that can interfere with meeting customer requirements and expectations. Using a case study example, the paper then sets out means to control and

reduce the project risks involved in the design, implementation, and final handover of a major rail project. These risk reduction means are accomplished via the structured provision of assurance evidence combined with continuous validation against requirements. This approach ensures close and continuous adherence to customer requirements, builds confidence in the end product, and counteracts the risk of not meeting final delivery for commercial operation.

BACKGROUND

Increasing the Performance of Railways
Railways have grown and expanded throughout many parts of the world since their invention in the UK two centuries ago. In many areas, trains have become faster and more frequent in response to growth in demand. In fact, since the 1950s this demand for rail service in some countries has exceeded the performance levels provided by traditional mechanical interlockings to maintain safe distances between trains and has driven the need to develop more sophisticated technology without compromising safety standards. At the same time, as trackforms have improved and rolling stock has become more resilient, the ability to run at ever-higher speeds has become a viable commercial proposition. Advanced computer-based signalling and train control technologies are increasingly being specified by customers around the world who seek to gain maximum performance from both existing and future infrastructure. In addition, alternative systems for traction power, ventilation, communications, passenger information, automatic fare collection, stations, and railway operational control centres are becoming critical requirements on new projects. The commercial, political, and environmental restrictions encountered in providing new rail corridors, particularly in heavily populated urban environments, have contributed to making more intense use of existing routes the most beneficial way of delivering improved performance.

Siv Bhamra, PhD


sbhamra@bechtel.com

Michael Hann
mchann@bechtel.com

Aissa Medjber
amedjber@bechtel.com

© 2009 Bechtel Corporation. All rights reserved.

ABBREVIATIONS, ACRONYMS, AND TERMS
CPFR    Crossrail Project Functional Requirements
IM      infrastructure manager
LU      London Underground
NR      Network Rail
PDP     Project Delivery Partner
RAMS    reliability, availability, maintainability, safety
SMS     safety management system

Business Drivers for Technology Application
The use of ever-more-sophisticated and complex technology in rail applications has arisen principally as a result of the need to:
• Enhance service capacity by increasing the number and speed of trains to accommodate the growing passenger and freight usage demand
• Meet increasing passenger expectations regarding service quality and punctuality
• Comply with commercial targets for improving operational and maintenance efficiencies and environmental performance and keeping railways affordable for passengers whilst minimising the burden for state subsidies
• Improve safety, reliability, and availability by minimising the frequency and impact of equipment failures
• Integrate new security provisions to protect passengers, staff, and assets against the threat of malicious activity

The increasing complexity of rail technologies now requires a systems engineering approach for successful integration.

DEFINING THE PROCESS

Managing Complexity
The increasing complexity of rail systems and the need to ensure their integration into the surrounding infrastructure have created a need for a systems engineering approach. This emerging discipline has become essential to projects around the globe. Failure to manage the integration of a complex system results in significant problems not only at its handover to the operator but also during its operational life. In this context, complexity encompasses not only engineering technology, but also the human organisation and the wider business and environmental fields within which major rail projects are now delivered. A simplified diagram of systems engineering activities is shown in Figure 1. The figure illustrates how systems engineering management needs to sit at the heart of a project. During a project's development phase, it is necessary for the parties involved to come to agreement regarding the life-cycle planning and baselines that eventually lead to project acceptance. This agreement is particularly important when dealing with customers that are inexperienced or are new to the delivery organisation. Reaching agreement is the first, and perhaps the most important, step in building the confidence that is so vital when seeking final handover. In dealing with complex systems, it has become necessary to develop a systems engineering process that uses a robust suite of tools capable

Figure 1. Systems Engineering Activities (systems engineering management and the systems engineering process sit at the centre of the development phase, linking baselines, life-cycle planning, integrated teaming, and life-cycle integration)


Figure 2. Simplified Four-Stage Process (customer/stakeholder input feeds Stage 1, Leadership and Integration [key personnel appointment, team integration]; Stage 2, Requirements Management [requirements identification, requirements validation]; Stage 3, Design [engineered designs, life-cycle risks definition]; and Stage 4, Verification and Validation)

Rail projects require earlier and continuous engagement of the customer.

of capturing project requirements and managing organisational interfaces. Baselines establish the agreed-upon project development stages, and life-cycle planning for system integration starts at the development phase. At the heart of all activities is proactive, highly competent systems engineering management.

Process Overview
The systems engineering approach was originally developed and successfully applied in the United States defence and space industries. It has been progressively applied in other industries, and the basic approach is now being increasingly adopted on complex rail projects, with emphasis on four broad stages:
• Stage 1: Leadership and Integration
• Stage 2: Requirements Management
• Stage 3: Design
• Stage 4: Verification and Validation

Stage 1: Leadership and Integration
Stage 1 manages the concurrent input from all participating customer functions (railway operations, maintenance, regulators, finance, legal, etc.) to optimise the railway project's definition and capital investment objectives. Therefore, the appointment of key personnel to the leadership team who are both managerially and technically competent in the task at hand is critical. The management team must have the ability to define clear work processes and understand and integrate inputs of the key people. Successful rail projects require early and continuous involvement of the customer, railway

operators, maintainers, and other stakeholders in project development. Systems engineering is the systematic process that includes reviews and decision points intended to provide visibility into the process and encourage early and regular stakeholder involvement. Their participation provides stakeholders the opportunity to contribute to the steps in the process where their input is needed.

Stage 2: Requirements Management
Stage 2 comprises two sub-functions:
• Requirements Identification: Definition, documentation, modelling, and optimisation of the proposed new railway as it is expected to operate after construction and handover
• Requirements Validation: Robust analytic support to the requirements, design, and verification functions

Stage 3: Design
Stage 3 generates the engineered designs from the customer requirements. These designs are used for follow-on procurement and construction activities and provide a detailed definition of all project life-cycle risks, along with proposed measures for their control.

Stage 4: Verification and Validation
Stage 4 interactively validates the outputs from the other functions throughout the design and execution phases of the project to ensure that customer requirements have been achieved and that risk to project execution and future users of the operational railway has been managed to the extent practicable. Figure 2 shows a schematic representation of the four stages.


CASE STUDY: APPLICATION OF SYSTEMS ENGINEERING APPROACH FOR CROSSRAIL

The four-stage systems engineering model is being applied in the UK to the Crossrail Project. As shown in Figure 3, the project will provide new railway tunnels and stations under London and connect the existing rail routes to the east and west. The Crossrail Project has an estimated total installed cost of £15.9 billion ($24 billion) and will provide passenger services along a 118 km (73 mile) route from Maidenhead and Heathrow in the west through new twin-bore 21 km (13 mile) tunnels under central London to Shenfield and Abbey Wood in the east. Crossrail will serve an additional 1.5 million people within a 60-minute commuting distance of London's key business districts. When it opens for passenger service, Crossrail will increase London's public transport network capacity by 10%, supporting regeneration across the capital and helping to maintain London's position as a world-leading financial centre for decades to come.
The complexity of the Crossrail Project is a result of many factors, including:
• Constraints over the scope, funding, and programme to deliver the project
• The geographical location of Crossrail
• The involvement of multiple sponsors, stakeholders, and local communities
• The requirement to integrate major complicated systems
• The myriad of technological challenges and opportunities
• The need to co-ordinate project delivery with design consultants, contractors, and local industry participants

Stage 1: Leadership and Integration
The Crossrail Project has the full support of the UK Government and all main political parties. The Crossrail Act 2008 provides the Crossrail Sponsors the powers they need to deliver this major project within the
The defined process will help deliver the Crossrail Project to customer expectations.

Figure 3. Crossrail Location and Route (map of the route from Maidenhead and Heathrow in the west through central London to Shenfield and Abbey Wood in the east, with the legend distinguishing surface line, tunnel portals, and stations)


agreed-to constraints (such as controls over the environmental impact). A dedicated Crossrail Project client team, along with a Project Delivery Partner (PDP) and Industry Partners, is managing delivery of all works. To the extent possible, key parties involved in designing and delivering the Crossrail Project are located together. This juxtaposition assists in providing consistent leadership, improves communication, and maximises teaming.

Stage 2: Requirements Management
The Crossrail Project requirements are captured at the highest level in the Crossrail Act 2008. Below this sits the Project Development Agreement amongst the Crossrail Sponsors, which also sets out the funding arrangements. Then a series of operational, performance, and delivery outputs are set out in a comprehensive list of the Crossrail Project Functional Requirements (CPFR). Delivery of the Crossrail requirements is planned via the industry-standard V-cycle shown in Figure 4. By providing control throughout the life cycle, the V-cycle tracks each department's understanding of its scope of work and ensures that the integrated sum of the many scopes ultimately delivers the project requirements.
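The "checking satisfaction via traceability" idea that underpins the V-cycle can be illustrated with a short script. This is a minimal sketch under assumed conventions, not Crossrail's requirements management toolset: the requirement identifiers, field names, and records below are invented for illustration only.

```python
# Illustrative traceability check: every requirement should trace forward to at
# least one design output and at least one verification test before acceptance.
# Identifiers and records are invented examples, not actual Crossrail data.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Requirement:
    req_id: str                                        # e.g., a CPFR-style identifier
    text: str
    design_outputs: List[str] = field(default_factory=list)
    verification_tests: List[str] = field(default_factory=list)

def untraced(requirements: List[Requirement]) -> List[str]:
    """Return the IDs of requirements lacking design or verification coverage."""
    return [r.req_id for r in requirements
            if not r.design_outputs or not r.verification_tests]

if __name__ == "__main__":
    reqs = [
        Requirement("REQ-001", "Provide the specified peak train service frequency",
                    design_outputs=["DES-SIG-014"],
                    verification_tests=["VT-120"]),
        Requirement("REQ-002", "Provide step-free access at new central stations",
                    design_outputs=["DES-STN-031"]),   # no test yet, so it is flagged
    ]
    print("Requirements lacking full traceability:", untraced(reqs))
```

In practice, a check of this kind would be run against the project requirements database at each baseline so that coverage gaps are surfaced long before the verification and validation stage.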

Stage 3: Design
Once Crossrail Project requirements have been defined and understood, they are flowed down into the design process. Each detail design consultant, under the supervision of the PDP, is awarded a competitively tendered contract that includes a set of General Obligations (i.e., contract terms and conditions, financial controls, and reporting arrangements) and a detailed list of Design Outputs (i.e., Scope of Design deliverables, the programme for submittals, and the dependencies and interfaces with the PDP and other design consultants). The PDP is responsible for co-ordinating the detailed design in accordance with the overall project programme in order to initiate the start of procurement and construction activity. The PDP supervises and co-ordinates the work of the design consultants to make sure that the requirements of the Crossrail Sponsors are achieved at maximum value for money.

Stage 4: Verification and Validation
The Crossrail Project systems engineering process produces the evidence needed to support Engineering Safety Cases, which, in turn, support the Operational Safety Case. The Operational Safety Case is used to obtain

Systems engineering produces the evidence to support Engineering Safety Cases.

Figure 4. V-Cycle Diagram (the project remit from the feasibility and concept phase, represented by stakeholder and operations requirements, the Hybrid Bill, etc., flows down through the project requirements and project design specification, the systems design specification, and detail design to procurement/construction; delivery is then verified back up the cycle through subsystem testing and subsystem acceptance tests, integration, systems testing, installation and commissioning, system integration tests, validation tests via acceptance criteria, operational trials, and acceptance, with clarification/elicitation, test specifications, and checking satisfaction via traceability against existing requirements linking the two legs)


Crossrail involves a complex relationship amongst civil infrastructure and railway systems.

Figure 5. Systems Engineering Assurance Process for the Crossrail Project (sponsors' requirements flow through the Crossrail Project Functional Requirements and operating functional requirements into the requirements management system; the normal, degraded, and emergency operations plans and the maintenance plan; and the Operational Safety Case with its RAMS and hazard allocation process, which is supported by Engineering Safety Cases, agreed-to hazard mitigation, hazard logs, and system design specifications under the engineering safety management system; acceptance rests with the stakeholder safety management systems of the NR and LU infrastructure managers and the train operator, and the legend distinguishes the delivery team, client team, and stakeholders)

authorisation to put the new railway into passenger service (see Figure 5). Key systems engineering inputs to the safety cases are the hazard management process and the requirements identification and management processes. The evidence of adequate systems engineering provides design assurance, hence timely acceptance by the future infrastructure managers (IMs) of the Crossrail assets. The assurance process starts in the top right-hand corner of the diagram. Moving clockwise,

the four stages of systems engineering become apparent. Underpinning each stage is the body of assurance evidence that is generated; in particular, the safety cases that form the project's backbone. It is the certified documentation, combined with a trust in the competence of the team, that will eventually lead to a smooth handover of the railway into commercial service. Crossrail systems engineering is being delivered through a structured assurance regime that sets out the hierarchical design, execution, and commissioning plans. Early involvement of the future operator and maintainer (Stage 1 of the process) has already counteracted project risks and allowed the assurance regime to be developed and planned. In common with most railway projects, Crossrail involves complex relationships amongst civil infrastructure and railway systems. An integration process and assurance regime has been established that will allow a progressive build-up of evidence, both geographically and system-wide. This regime will provide the required assurance evidence throughout the V-cycle, as shown in Figure 6.

Figure 6. Progressive Assurance of the Crossrail Project (assurance evidence builds progressively from concept design, detail design, and construction, through testing of elements, integrated station tests, and new station testing by station, and railway subsystem and railway system testing and commissioning, to dynamic testing, end-to-end operations and performance, trial operations, performance measurement, and affirmation against the Crossrail Project objectives and sponsors' requirements)


AVOIDING THE CONSEQUENCES

Minimising Uncertainty
In the early stages of a complex project, there is invariably some cost and schedule uncertainty. The less-experienced project teams in the industry face greater uncertainty. As time progresses and the scope of the work becomes better understood, the uncertainty diminishes. This can be depicted by the cone of cost certainty shown in Figure 7. The systems engineering approach focuses on resolving uncertainty in a project's early stages by providing a better understanding of the requirements and their dependencies. The process of incremental design and verification also reduces the risk of uncertainty in estimates developed early in the project.

Avoiding Late Changes and Project Cost Increases
Whilst projects are rarely developed entirely without errors, change orders are often issued during construction. Depending on the nature of the change, it can have a disproportionate impact on costs. If base project requirements are significantly changed during

implementation, the rectification is generally very disruptive and therefore costly. On the other hand, design changes during implementation typically have a lesser impact because they are usually driven by value engineering in the same way as construction changes. Figure 8 graphically depicts how a change in base project requirements can result in the most significant impacts. Software development projects have also shown that the legacy cost of early design defects can significantly increase the overall project cost if they are identified late. Figure 8 further shows how a poorly defined or missed requirement can be less costly to fix early, rather than during later stages when the cost of rework can be compounded. The structured systems engineering process defines the requirements and validates the design documents early and continues to do so throughout the project life cycle, thus maximising the chances of identifying and resolving defects early.
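The compounding effect described above can be made concrete with a simple calculation. The escalation factor used below is an assumption introduced purely for illustration; the paper quotes no specific multipliers, and real values vary widely by project and industry.

```python
# Illustrative only: relative cost of correcting a defect, given the phase in
# which it is created and the phase in which it is corrected. The per-phase
# escalation factor is an assumed value, not a figure from this paper.

PHASES = ["requirements", "design", "construction", "service operations"]
ESCALATION_PER_PHASE = 4.0   # assumed cost growth for each phase of delay

def relative_correction_cost(created: str, corrected: str) -> float:
    delay = PHASES.index(corrected) - PHASES.index(created)
    if delay < 0:
        raise ValueError("a defect cannot be corrected before it is created")
    return ESCALATION_PER_PHASE ** delay

for phase in PHASES:
    multiple = relative_correction_cost("requirements", phase)
    print(f"requirements defect corrected during {phase}: {multiple:.0f}x base cost")
```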

As technology changes more quickly, so too does the challenge to more effectively manage the resulting complexity.

CONCLUSIONS

Figure 7. Cone of Cost Certainty (the estimated project cost ranges from roughly 0.7x to 1.3x early in the project life cycle and narrows over the following years)

echnological advancements in railway systems are being introduced with increasing rapidity in the effort to meet the business demands of freight customers and to expand and improve passenger services. Strategic planning for major rail projects is typically 20 to 30 years ahead. Advanced technology applications for rail projects being pursued today typically have been in development for 3 to 6 years and are intended to support passenger operations for 25 to 40 years. As technology changes ever more quickly, so too does the challenge to better plan and more effectively manage the resulting complexity in the design and execution of major rail projects. This has driven the need for a structured systems engineering approach that fosters the competency and leadership skills needed to deliver complex projects. Delivery has to be achieved whilst providing a high level of confidence to operators and maintainers that the end product is safe, is reliable, and meets requirements. The four key stages of the systems engineering process set out in this paper, and illustrated by a case study example of their application on the UKs Crossrail Project, provide a means for handling complexity from the wide range of sources typically faced during the design and execution of major rail projects. A strong

Figure 8. Impact of Late Changes on Costs (the cost to correct a defect rises with the separation between the phase in which the defect is created [requirements, design, or construction] and the phase in which it is corrected, up to service operations)


emphasis on using a structured approach to define and document project requirements and then to rigorously validate them through each stage of design and execution is critical to minimise delays, cost increases, or the loss of operational service functionality sought by the customer. Whilst the application of this paper is to rail projects, the four structured systems engineering stages set out are also applicable to managing complexity in any industry. Recent evidence from many parts of the world suggests that there is a need to deploy a more rigorous approach to specifying project requirements and their means of verification throughout the project life cycle as a means of avoiding cost increases, delays, and disappointments during later stages of execution.

Siv is a guest lecturer to several universities and is also a respected transportation security specialist and advisor. He has presented at several conferences and has written numerous papers on management and technical disciplines. Siv has a PhD in Railway Systems Engineering from the University of Sheffield, South Yorkshire; an MBA in Project Management from the University of Westminster, London; and an MSc in Engineering Design from the University of Loughborough, all in the UK. Michael Hann, Engineering Manager for Crossrail Central, has over 30 years of experience in the rail industry and has an impressive record of delivering major works. His early career was spent delivering integrated transport projects for the London Underground, and it was there that he developed an interest in all engineering disciplines. For a short time in the late 1990s, he was the senior project manager responsible for the delivery of Hung Hom Station in Hong Kong. Mike returned to London to be the systems acceptance manager for one of the first fully computer-based signal interlocking systems introduced into the UK. More recently, from 2002, his wide-ranging skills have been ideally suited to the role of manager of Engineering for Tube Lines, the 30-year contract to renew three of Londons busiest lines. In this role, Mike has functional oversight of 400 engineering personnel deployed on capital projects and maintenance activities. Mike is a Fellow of the Institution of Civil Engineers (UK) and a member of the Institution of Railway Signalling Engineers. Mike gained a BSc in Civil Engineering at the University of Greenwich, UK. Aissa Medjber has a 25-year career in project engineering, management, and construction on rail and petrochemical projects. He made a major contribution to the safe delivery, on time and budget, of High Speed 1 in the UK. On this major project, Aissa was responsible for all aspects for the delivery of some 350 million worth of system-wide contracts for track, power supplies, communications and control, signaling, tunnel systems, and mechanical and electrical equipment. This experience has allowed him to take up the role of system-wide manager on the Crossrail Project. Aissas prior experience involved a stint as control systems manager for the Onshore Development 1 & 2 projects in Abu Dhabi. He also the project engineering manager for construction of the worlds largest refinery petrochemicals complex in Jamnagar, India. the Gas was the and

BIOGRAPHIES
Siv Bhamra, PhD, has 28 years of experience in the project management and engineering of major rail projects. He has worked on the full spectrum of rail projects, from light rail and urban metros to high-speed lines, engaging in activities ranging from conducting feasibility studies to implementing full schemes. Siv has delivered rail projects in Europe and the Far East and has performed studies for rail operators in the US, Middle East, and South Asia. His numerous technical achievements encompass the development of solid-state traction inverting substations to save energy, the implementation of advanced train control technologies to improve the performance of existing and future railways, and the performance of research into state-of-the-art security management systems. Siv joined Bechtel in 1999 while on the Jubilee Line Extension Project in London. Before that, he had worked for the London Underground and a number of other railway companies. For 2 years, Siv was also a senior transportation advisor to the European Bank for Reconstruction and Development. Currently, as the delivery director for Bechtel on Crossrail, he manages technical functions and oversees the delivery of systems works on this major project. Elected a Bechtel Fellow in 2004, Siv is a member of five professional institutions and three technical societies. He won the Enterprise Project of the Year Award in 2006, the London Transport Award in 2004, and a Safety Management Award in 1998, and has twice been accredited with further awards of technical excellence (1984 and 1986). Siv was recognized for his efforts in restoring the Piccadilly Line to passenger service following the terrorist attacks in London in 2005, commended for helping to recover operational service on the Northern Line following a derailment in 2003, and commended for courage after a major fire at Kennington Station in 1990.

Aissa received a Post-Graduate Diploma in Systems and Control at the University of Manchester and a BSc in Electrical Engineering at the University of Salford, both in the UK.


MEASURING CARBON FOOTPRINT IN AN OPERATIONAL UNDERGROUND RAIL ENVIRONMENT


Issue Date: December 2009

Abstract: As part of a commitment to meet local and national CO2 emission targets, Tube Lines calculated the carbon emissions that arise from the life cycle of all of its operations and projects. The 2006 data (2006 baseline) represents the carbon footprint used to make business and investment decisions and drive change to ensure that carbon management is central to the way Tube Lines undertakes its work. In 2008, the baseline was updated to account for changes in Tube Lines' projects and operations. Tube Lines had reduced its carbon footprint by 5,277 metric tons (5,817 tons) by the end of 2008 based on its 2006 baseline. In 2009, a further 1,000 metric ton (1,102 ton) reduction target has been set based on the 2008 baseline and will be met.

Keywords: carbon, carbon footprint, emissions, fuel

Elisabeth Culbard, PhD
eculbard@bechtel.com

ABBREVIATIONS, ACRONYMS, AND TERMS
AEA     AEA Energy and Environment
DSM     distribution services management
ERU     emergency response unit
GPS     global positioning system
GSM     global system for mobile communication
L&E     lifts and elevators
LED     light-emitting diode
LU      London Underground
NAEI    National Atmosphere Emissions Inventory
PPP     public private partnership
P-Way   permanent way
ZWTL    zero waste to landfill

INTRODUCTION

Tube Lines Overview
Tube Lines¹ has a 30-year public private partnership (PPP) contract with London Underground (LU) to maintain and upgrade all infrastructure on the Jubilee, Northern, and Piccadilly underground metro lines. This work encompasses upgrading the signalling on all three lines to increase capacity and reliability and reduce journey times; upgrading 100 stations with an emphasis on improving security, information flow, and the general environment for passengers; introducing a new fleet of trains on the Piccadilly line in 2014 and refurbishing the fleets on the other two lines; and replacing and refurbishing hundreds of kilometres of track and numerous lifts and escalators and improving the general travelling environment for passengers.

¹ Tube Lines is indirectly owned by Bechtel Enterprises (one-third) and Ferrovial (two-thirds).

CO2 Emission Reduction Targets
In the UK, the following CO2 emission reduction targets have been set:
• UK government national target of 1.2% per year until 2050
• London target of 1.7% per year until 2025

To meet these targets, Tube Lines calculated its carbon footprint (impact of human activities on the environment based on the amount of greenhouse gasses produced, measured in units of CO2) and instituted measures that achieved the targeted reductions. Working with the Carbon Trust (an organisation created by the UK government to help businesses accelerate the move to a low carbon economy) and AEA Energy and Environment (AEA), which maintains the UK National Atmosphere Emissions Inventory (NAEI) (the official air emissions inventory for the UK), Tube Lines calculated its carbon footprint, separating it into the following two components:
• Corporate (direct) footprint: energy and utilities consumption
• Process (indirect) footprint: embedded/indirect CO2 emissions resulting from materials use, waste generation, and transport

Tube Lines Carbon Footprint
The corporate and process components of Tube Lines' carbon footprint (see Figure 1) consist of the following emissions:
• Corporate footprint: emissions resulting from Tube Lines' direct operation of its premises, its paper consumption, and its employees' commutes. To calculate its corporate footprint, Tube Lines measured full-year 2006 energy and utilities (gas and water) consumption from utility bills, paper consumption from ordering records, and employee commute emission levels from a study conducted in 2006, then converted the values into equivalent metric tons of CO2.
• Process footprint: emissions resulting from 34 processes that Tube Lines chose to measure. These processes involve materials use and waste generation and the transportation of materials and people during the course of Tube Lines' work, e.g., track replacement, station modernisation, and fleet maintenance.

Figure 1. Elements Constituting Tube Lines' Carbon Footprint (LU power: 205,000 metric tons CO2 [226,000 tons] for traction, depots, and stations; Tube Lines 2006 baseline: 78,000 metric tons CO2 [85,980 tons], comprising a corporate footprint of 6,000 metric tons CO2 [6,614 tons] for 15 Westferry Circus and Trackside House and a process footprint of 72,000 metric tons CO2 [79,366 tons] composed of 34 processes)

A process champion was identified and a workshop was held to identify the carbon footprint for 34 typical work processes, e.g., escalator refurbishment.

To calculate the process carbon footprint, Tube Lines used a complex set of calculations and AEA's carbon impact tool to perform a life-cycle assessment of each process, e.g., escalator refurbishment (see Figure 2). This assessment evaluated environmental impacts and identified inputs and outputs in terms of resources, waste, materials, and fuel. The impacts in terms of CO2 emissions were also determined, with the amount of CO2 tied to the control mechanism required to achieve best practicable means. For each process, a process champion was identified and a workshop was held to identify and quantify material types and volumes, waste types and volumes, and transportation. Data was obtained from bottom-up forecasts, method statements, and bills of quantities. The following formula was developed:

(Materials used + waste generated + materials transport + staff transport) x AEA conversion factors = metric tons of CO2

This data was then processed and converted into equivalent metric tons of CO2 emissions for an average process, e.g., carbon footprint for an average station modernisation. The process carbon footprint was then factored up by multiplying the individual carbon footprints by the number of times the process was carried out, e.g.: (carbon footprint for an average station modernisation x number of station modernisations completed) + (carbon footprint for an average metre of track replacement x number of metres of track replacement completed), etc. In identifying the CO2 impact of an activity, Tube Lines has also been able to identify and target efficiency improvements to reduce the CO2 impact of that activity. However, standardising the process footprints has posed a challenge. Whilst track replacement can be standardised for a metre of track, the amount of standardisation that can be achieved in station modernisation is less obvious and involved considerable thought and analysis by Tube Lines and AEA.
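Expressed as a calculation, the approach multiplies each quantified input by a conversion factor to obtain a per-process footprint, then scales that footprint by the number of times the process is carried out. The sketch below is a simplified illustration of that arithmetic only; the quantities, conversion factors, and process counts shown are invented placeholders, not the AEA factors or Tube Lines data.

```python
# Simplified sketch of the process footprint calculation described above:
# (materials used + waste generated + materials transport + staff transport)
# x conversion factors = metric tons of CO2, then factored up by the number of
# times each process is carried out. All numbers below are illustrative only.

def process_footprint_tco2(quantities, factors):
    """Metric tons of CO2 for one execution of a process."""
    return sum(qty * factors[item] for item, qty in quantities.items())

# Invented quantities for one "average" process (e.g., a station modernisation)
quantities = {
    "materials_t": 40.0,             # tonnes of materials used
    "waste_t": 6.0,                  # tonnes of waste generated
    "materials_transport_km": 900.0, # delivery distance
    "staff_transport_km": 1200.0,    # staff travel
}
# Invented conversion factors (tCO2 per unit), standing in for the AEA factors
factors = {
    "materials_t": 0.9,
    "waste_t": 0.4,
    "materials_transport_km": 0.001,
    "staff_transport_km": 0.0002,
}

per_station = process_footprint_tco2(quantities, factors)

# Factor up: per-process footprint x number of times the process was carried out
completed = {"station modernisation": 8, "metre of track replacement": 3000}
average_footprints = {"station modernisation": per_station,
                      "metre of track replacement": 0.04}   # invented value
total_tco2 = sum(average_footprints[p] * n for p, n in completed.items())
print(f"Per station: {per_station:.1f} tCO2; factored-up total: {total_tco2:.0f} tCO2")
```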


Figure 2. Escalator Cross-Section (materials, energy, waste, deliveries, and travel inputs assessed against escalator components such as the handrail, comb-plate, steps, balustrade, newel wheels, trailer wheel tracks, step chain wheel tracks, main drive shaft assembly, main drive chain, gearbox, drive machine, truss, and tension carriage shaft assembly)

Energy consumption was reduced by 20% in 2007 and 5% in 2008 against 2% annual targets.

ENVIRONMENTAL BUSINESS OBJECTIVES

Go Green is the name of Tube Lines' environmental management system. Environmental business objectives are set by Tube Lines' Executive Committee and are tied into the employee bonus scheme.

2006-2008 Objectives
Over the past few years, the following objectives were set:

• 2006 Energy Consumption (kWh): 5% reduction at Head Office by the end of 2006
A 3.5% reduction was achieved!

• 2007 Energy Consumption (kWh): 2% reduction at Head Office, Trackside House, Stratford Training Centre, and Piccadilly line depots by the end of 2007
This target was set based on the technical potential for reducing energy consumption, achievements in reducing energy consumption during the preceding year, expectations of potential reductions in energy consumption at each location, and forecasted weather. During 2007, air-conditioning thresholds were adjusted at office locations, lighting banks and controls were re-set, movement detectors were fitted, lift use was restricted during off hours, and employee awareness was raised. Tube Lines also converted to a green energy tariff so that all Tube Lines-sourced electricity (Head Office, Trackside House, and Stratford Training Centre) comes from renewable energy.
A 20% reduction was achieved!

• 2008 Energy Consumption (kWh): 2% reduction at Head Office, Trackside House, Stratford Training Centre, and Piccadilly line depots by the end of 2008
During 2008, new personal computer equipment was provided to all Tube Lines sites. This new equipment automatically goes into a sleep state when left unattended, which saves energy. Blade-type computer servers were also employed. Tube Lines increased its energy savings by giving up occupancy of two floors at its Westferry Circus offices and sharing other areas with other tenants (79/21 split based on floor space occupied).
A 5% reduction was achieved!

• 2007 Paper Consumption: 15% reduction in white A4 paper usage
A 44% reduction was achieved!

• 2008 Paper Consumption: 5% reduction in white A4 paper usage
This target was established after measuring the number of A4 and A3 reams of paper purchased in 2007. The data was collected for 15 Westferry Circus, Stratford Training Centre, and other smaller sites. The target


for 2008 was to reduce paper consumption by 5% across the business based on the total amount ordered in 2007.
A 22% reduction was achieved!

• 2007 Fuel Efficiency: 5% improvement in fuel efficiency of the commercial road fleet over 7.5 metric tons (8.3 tons)
A 14% reduction was achieved!

• 2008 Fuel Efficiency: Maintain achievement of the 2007 target for the commercial road fleet over 7.5 metric tons (8.3 tons)
This level was maintained! (To be third-party verified in 2009.)

Table 1. Tube Lines' CO2 Reductions (activity that reduces CO2; CO2 reduction in metric tons, with tons in parentheses)
Reduction in electricity consumption: 1,189.3 (1,311.0)
Reduction in paper consumption: 23.0 (25.4)
More fuel-efficient DSM road fleet: 41.6 (45.6)
Dedicated paper recycling by DSM: 2,804.9 (3,091.9)
ERU fleet: 2.0 (2.2)
P-Way sleeper popping: 24.0 (26.5)
ZWTL Kingsbury Embankment: 47.0 (51.8)
GSM: 9.7 (10.7)
Installation of GPS on ERU fleet: 2.4 (2.6)
Reduced waste seat covers: 0.8 (0.9)
In situ wheel turning: 32.5 (35.8)
Hose nozzle to reduce water: 0.5 (0.6)
Platform resurfacing infrared method, 2007: 77.8 (85.7)
DSM fleet improvements: 21.7 (23.9)
Using Acton for storage: 46.0 (50.7)
ZWTL P-Way civils embankment works: 121.2 (133.6)
Platform resurfacing infrared method, 2008: 29.2 (32.2)
Information technology computer refresh: 1.6 (1.8)
Procurement of fire doors: 137.0 (151.0)
Gas consumption at Westferry Circus and Trackside House: 72.8 (80.2)
Water consumption at Westferry Circus and Trackside House: 8.3 (9.1)
Overtiling on stations: 4.0 (4.4)
L&E metal savings: 169.0 (186.3)
ZWTL P-Way civils embankment works: 95.9 (105.7)
Borough lifts: 23.7 (26.1)
Truss escalator replacement at Heathrow Airport: 19.7 (21.7)
TOTAL: 5,006 (5,518)
DSM = distribution services management; ERU = emergency response unit; GPS = global positioning system; GSM = global system for mobile communication; L&E = lifts and elevators; P-Way = permanent way; ZWTL = zero waste to landfill

Current or Recently Completed Objectives
The following activities have also been completed or are under way:

Corporate
• Inclusion of an energy assessment in the investment application form
• Updates to the environmental training courses to include energy management
• Quantification of the financial implications of climate change on Tube Lines

Stations
• Installation of long-life lamps and light-emitting diodes (LEDs) during station modernisations
• Trial use of 360-degree cameras
• Installation of waterless urinals
• Identification of 22 energy-saving initiatives, with estimated savings of 2,199,000 kWh of electricity and 908 metric tons (1,001 tons) of CO2. LU has provided funding to review all available low-carbon technologies for use in stations. The purpose of this report is to prioritise those technologies for use in practical trials in the underground energy-efficient model station proposed to LU.

LU has provided funding to review low-carbon technologies for use in an underground energy-efficient model station.

P-Way
• Extension of zero-waste-to-landfill (ZWTL) pilot to the entire permanent way (P-Way)

Jubilee and Northern Lines Upgrade Project
• Investigation of power supply upgrades
• Investigation of reconfiguring distribution network


• Installation of 20 kilometres (12.4 miles) of composite conductor rail
• Reorganisation of track power segments
• Discussions with LU regarding coasting
• Modelling of temperature and humidity on Jubilee and Northern lines
• Sustainable design of supporting infrastructure
• Northern line control centre: green roof/intelligent lighting
• Stratford train crew accommodation: zero-maintenance cladding/optimal use of natural light/intelligent lighting/green roof/flexible floor design

Piccadilly Line Upgrade Project
• Consideration of environmental innovation, energy efficiency, and CO2 impacts embedded in prequalification process for new rolling stock
• Investigation of energy storage
• Investigation of composite materials

CARBON CONSUMPTION

By the end of December 2007, the CO2 impact of all processes was determined and the 2007 reduction was estimated to be 1,085 metric tons (1,196 tons) against the 2006 baseline data. The 2008 reduction target of 5,000 metric tons (5,512 tons) was then agreed to and planned to be achieved through potential reductions in electricity, gas, water, paper, and fuel consumption and waste production, as well as increases in recycling and transportation efficiencies, as depicted in Table 1. This 2008 target was equivalent to a 6% reduction of Tube Lines' 2006 (baseline) footprint. The target was agreed to by the Executive Committee. Figure 3 and Table 1 show that by the end of Period 13, 2008, the target had been met: a reduction of 5,006 metric tons (5,518 tons) of CO2 was achieved. This was subsequently increased to 5,277 metric tons (5,817 tons) during the verification process. Tube Lines is not able to provide monthly figures for plant and equipment fuel usage since the use of diesel/petrol plant and equipment is discouraged and restrictions exist for its use in Section 12 stations (fire regulated). However, fuel usage has formed part of the CO2 management process assessments.
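A reduction target of this kind is typically tracked period by period against a planned profile, with warning and action thresholds of the sort plotted in Figure 3. The following sketch assumes, purely for illustration, a straight-line profile toward a 5,000 metric ton target over 13 periods; the threshold fractions and achieved figures are invented, not Tube Lines' actual control levels.

```python
# Illustrative tracking of cumulative CO2 savings against a period-by-period
# target profile with warning/action thresholds (cf. Figure 3). The linear
# profile, thresholds, and achieved figures below are assumptions for the example.

ANNUAL_TARGET_T = 5000.0
PERIODS = 13
WARNING_FRACTION = 0.9   # assumed: warn below 90% of the profile
ACTION_FRACTION = 0.8    # assumed: act below 80% of the profile

def status(period: int, cumulative_saved_t: float) -> str:
    profile = ANNUAL_TARGET_T * period / PERIODS
    if cumulative_saved_t >= WARNING_FRACTION * profile:
        return "on track"
    if cumulative_saved_t >= ACTION_FRACTION * profile:
        return "warning"
    return "action required"

achieved = [400, 700, 1100, 1400, 1900, 2400]   # invented cumulative savings
for period, saved in enumerate(achieved, start=1):
    print(f"P{period}: {saved} t saved -> {status(period, saved)}")
```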

The original 2008 achieved reduction of 5,006 metric tons was subsequently increased to 5,277 metric tons during the verification process.

Figure 3. 2008 CO2 Reductions by Period (cumulative CO2 saved, in metric tons, for periods P1-P13, plotted against the target and the warning and action levels)


In 2008, a 22% reduction in paper consumption led to cost savings in paper purchased.

Figure 4. 2008 Target and Average Energy Usage (monthly electricity consumption in MWh against the target and the 12-month rolling average, plotted alongside the monthly average temperature in °C; the year-end position shows a 5% reduction)

Figure 5. 2008 Paper Usage (reams of paper purchased per period P1-P13 against the target and the 13-period rolling average; a 22% reduction)

Figure 6. 2008 Fleet Fuel Consumption (monthly fuel consumption in mpg against the target and the 12-month rolling average)


As shown in Figure 4, energy consumption at the end of 2008 was 5% below 2007 performance levels and surpassed the 2% reduction target. By the end of 2008, the yearly paper consumption was 22% below 2007 levels and surpassed the 5% reduction target (see Figure 5). As shown in Figure 6, by the end of 2008, fleet fuel consumption was on target to maintain or slightly improve upon its 2007 performance, achieving 13 miles (21 km) per UK gallon (11 miles [18 km] per US gallon). Through Tube Lines' business objective targets, corporate and process footprint activities and improvements that target a reduction against the LU power footprint can also be tracked (see Table 2).
Table 2. LU CO2 Reductions
Activity That Reduces LU CO2 Level (CO2 reduction in metric tons [tons]):
• Installation of green roofs: 6 (6.6)
• Creation of model station (proposal submitted to LU):
  - Automatic/controlled lighting: 188 (207)
  - Use of waste heat in tunnels for heating: 52 (57)
  - Reduced use of escalators during off hours: 19 (21)
• Installation of wind turbines for auxiliary supply: 1,050 (1,157)

BIOGRAPHY
Elisabeth Culbard, PhD, is a technical expert in the field of sustainability, environmental and social impact assessment, and construction management. She has more than 25 years of experience on transport and infrastructure projects in London, the UK, and internationally, providing problem-solving technical solutions, hands-on construction management expertise, and strategic environmental and sustainability policy advice. Elisabeths responsibilities include developing sustainability and responsible procurement programs across a range of Bechtel Civils business portfolio of projects; in particular, Crossrail, Tube Lines, and Autostrada Transylvania. She is also experienced in finding workaround solutions to problems that can cause project delays or budget overruns. Elisabeth was team leader on Bechtels Strategy Working Group on Climate Change and a Steering Group Member on the UK Construction Industry Research Information Association project on How to Deliver Socially Responsible Construction Projects. She is also an Expert Member on a Joint Institute of Civil Engineering/Engineers Against Poverty Panel on Promoting Social Development in International Procurement. Elisabeth was involved in route optioneering, contaminated land remediation, and construction supervision and sustainability planning for the High Speed 1 Temple Mills Depot. She also worked on the Channel Tunnel as environment manager, leading the project through the Hybrid Bill; project stakeholder engagement; and 4 years of environmental planning, design integration, and construction execution. Before joining Bechtel, Elisabeth was responsible for the day-to-day environmental and social performance of the International Finance Corporations global portfolio of infrastructure construction projects. This work won her the prestigious James Wolfensohn Excellence Award for due diligence on global railway projects. Elisabeth received her DIC (Diplomate of Imperial College of Science and Technology) from the University of London; her PhD in Environmental Engineering from the Royal School of Mines, Imperial College of Science and Technology, University of London; and her BSc with Combined Honours in Geology and Environmental Science from the University of Aston in Birmingham; all in the UK.


CONCLUSIONS
Tube Lines has developed a mechanism for evaluating the carbon footprint of its day-to-day operations and created a baseline against which it can make focussed operational and investment decisions to reduce CO2. The aim of this programme is to introduce CO2 management into the business case process.

ACKNOWLEDGMENTS The author would like to thank Charlotte Simmonds and the rest of the Tube Lines Go Green Environment team for their valuable contributions to this paper. Since Charlotte has now left Tube Lines, all enquiries on this topic and associated environmental management issues should be forwarded to Steven Judd at steven.judd@tubelines.com.


Communications

INTERMODULATION PRODUCTS OF LTE AND 2G SIGNALS IN MULTITECHNOLOGY RF PATHS

Issue Date: December 2009 AbstractCommunication signal distortion in wireless network radio frequency (RF) paths has been predicted theoretically and studied experimentally by numerous authors for various signals. Research and testing have demonstrated that, while most vital signal parameters remain within the limits specified by applicable standards, an increasing level of spurious emissions caused by cross- and intermodulation (IM) can result when the composite signal power approaches the specified maximum power. To avoid degrading both existing and new systems, it is important to understand the sources of IM and to minimize the levels of IM generated. This and other signal quality issues must be considered when designing and deploying multitechnology systems. To illustrate these points, this paper examines the reaction of a multiband antenna system to a wideband long-term evolution (LTE) signal and a narrowband 2G signal. In doing so, the paper discusses the causes of passive IM, its harmful effects at the system level, methods of measuring it, and design principles to reduce its generation to acceptable levels. Keywordsaccessibility, antenna system, antenna testing, component selection, connector, field testing, intermodulation (IM), IM products, key performance indicator (KPI), long-term evolution (LTE), multitechnology wireless system, nonlinearity, passive intermodulation (PIM), passive RF component, receiver desensitization, retainability, signal distortion, spurious emissions, voice quality
Ray Butler, Ray.Butler@andrew.com, Andrew Solutions
Aleksey A. Kurochkin, aakuroch@bechtel.com
Hugh Nudd, Hugh.Nudd@andrew.com, Andrew Solutions

ABBREVIATIONS, ACRONYMS, AND TERMS
2G: second generation digital mobile phone service
ARQ: automatic repeat request
BCCH: broadcast control channel
BS: base station
BTS: base transceiver station
CDMA: code division multiple access
DIN: Deutsches Institut für Normung (German Institute for Standardization)
Eb/No: ratio of signal energy to additive noise
EDGE: enhanced data rates for global evolution
GPRS: general packet radio service
GSM: global system for mobile communication
IM: intermodulation
KPI: key performance indicator
LTE: long-term evolution
MIMO: multiple input, multiple output
MU: multiple user
PIM: passive IM
RF: radio frequency
RSSI: received signal strength indication
Rx: receive/receiver
SNR: signal-to-noise ratio
SU: single user
Tx: transmit/transmitter
UMTS: universal mobile telecommunications system

INTRODUCTION
Multichannel radio communication systems have been deployed for many years, and a common requirement in all such systems has been to suitably manage spurious signals produced by intermodulation (IM). [1, 2, 3] IM is the interaction (or mixing) of the fundamental signal frequencies in a nonlinear circuit, resulting in the generation of additional, unwanted, signals. Signals at different frequencies in any nonlinear circuit create a large number of additional signals at other frequencies. These other frequencies are the harmonics (integer multiples) of the fundamental frequencies and the sum and difference frequencies of any combination of these fundamentals and harmonics. Depending on the particular frequency plan, some of these signals can appear in the desired communication channels and cause interference. These interfering signals can be generated by nonlinear behavior in either the active circuits (amplifiers and signal processing circuits) or the passive circuit elements (antennas, cables, connectors, filters, etc.).

Consider two fairly close frequencies f1 and f2. When carried in a circuit together, they can be caused to generate a number of related frequencies, such as 2f1 − f2 (third order), 3f1 − 2f2 (fifth order), or 4f1 − 3f2 (seventh order). (The harmonic numbers in the two terms of these difference frequencies differ by 1.) Introducing a third nearby frequency f3 can cause products such as f1 + f2 − f3 (also third order) to be generated. These various products are close to, and can even be the same as, the system's receive channel frequencies. If they are the same, they cause additional noise in the system or, at high enough magnitudes, occupy one or more channels and make them unavailable for traffic. Even in more recent mobile phone systems whose wideband modulation schemes (e.g., code division multiple access [CDMA]) do not have discrete channels in the frequency domain, IM can still produce unwanted noise that reduces system efficiency.

In radio communication systems such as terrestrial microwave links, which have been deployed for many decades, and mobile phone systems, which have been deployed for about 25 years, the necessity to minimize generated IM levels has long been recognized. The main sources of nonlinearity are usually in the active circuits, but appropriate system design typically limits the number of frequencies handled by, say, a single amplifier. However, some of the passive circuit elements such as antennas are required to carry many frequencies, making it necessary to also control passive intermodulation (PIM). In long-term evolution (LTE) systems, PIM affects one or several 180 kHz blocks; this reduces cell and neighbor capacity. PIM also increases LTE intercell interference on the affected band and overall in the system. Furthermore, PIM can cause the system to operate at maximum power instead of under power control, causing undesirable increased power dissipation in the components. This paper describes the causes of PIM, its harmful effects at the system level, methods of measuring PIM, and design principles to reduce its generation to acceptable levels.

CAUSES OF PASSIVE INTERMODULATION
PIM is generated in a circuit carrying more than one frequency whenever a nonlinearity occurs, i.e., whenever the voltage is not exactly proportional to the current or the output power is not exactly proportional to the input power. The greater the degree of nonlinearity (i.e., the greater the curvature of the voltage/current or output power/input power characteristic), then the greater the level of the PIM signal generated. In the passive radio frequency (RF) circuits of typical components, the two fundamental causes of nonlinearity and PIM generation are (a) some degree of current-rectifying action at the conductor joints and (b) a varying magnetic permeability because of the presence of ferromagnetic materials in or near the current path. [2]

Current-Rectifying Action
In a typical passive RF component, multiple metallic parts form the conduction path. Because there are usually many such components in the system, the complete transmission path has many metallic junctions that the RF currents must cross. The metals used in transmission paths usually have very thin oxide layers on the surface. The oxide itself may be semiconducting (e.g., copper oxide), but even if it is insulating (e.g., aluminum oxide), electrons still cross the very thin surface layer by tunneling. This nonlinear process produces a small diode effect. [2] The result is some degree of rectifying action and hence a nonlinear mechanism for generating PIM products. Note also that when a direct current exists in the transmission path along with the RF signal (to power a tower-mounted amplifier, for instance), the RF signal may be moved to a more nonlinear portion of the voltage/current characteristic, causing an increase in the magnitudes of the PIM products generated.

The severity of PIM generation depends on the degree of nonlinear-to-linear current flow, which, in turn, depends on how well true metal-to-metal contact is created at the junction. The best contact is achieved by welding, soldering, or brazing the two metal parts. If doing this is impractical, then there should be some means of creating high pressure across the contacting surfaces. Low contact pressure increases the proportion of nonlinear current flow and, correspondingly, the magnitudes of the PIM products generated. Nonlinearity at conductor joints can also be produced by the presence of corrosion products. These may form over time, especially at junctions of dissimilar metals in the presence of moisture.


Presence of Ferromagnetic Materials
The second mechanism of PIM generation arises from the nonlinear permeability that occurs when RF currents flow in ferromagnetic materials (e.g., iron or nickel). In their nominally unmagnetized state, these materials consist of small regions (domains) that are highly magnetized but oriented in different directions so that there is no net magnetism. An RF field causes the domains to oscillate, and the domains experience internal forces resisting the motion (hysteresis). The result is a magnetic permeability that depends strongly on the field strength. [2] Because the skin depth in a conductor (the effective thickness of the surface layer in which the RF current flows) depends on permeability, and thus field strength, so does the RF resistance. In general, ferromagnetic materials show a high degree of nonlinearity to RF currents, and the levels of PIM generation are correspondingly high.

Additional Considerations
Because PIM products are caused by nonlinear processes, their amplitudes depend in a nonlinear way on the amplitudes of the generating signals. Basic considerations suggest that third-order products increase by 3 dB for every 1 dB increase in signal level. In practice, the increase varies somewhat from case to case, but the measured increase is usually somewhat less than the predicted figure; a rate of about 2.5 dB per 1 dB is typical. At the typical signal levels found in communications systems, the amplitudes of higher order PIM products (fifth order, seventh order) are lower by 10 or 15 dB between successive orders, but the rate of change based on the driving signal amplitude is greater. In all cases, then, the relative level of the PIM products with respect to that of the driving signals depends on the signal level, which must be specified. The mechanisms for PIM generation are considered in more detail in the following descriptions of design principles and methods for minimizing PIM generation.

PIM can also be generated in the tower, especially in outdoor device connectors, by a phenomenon called rusty bolt noise, which is caused by corrosion of the devices in harsh environments. Laboratory tests show that PIM also increases as device temperature rises. The differences between PIM products are in their magnitudes and in their effects on system performance. Some RF path failure modes have been shown to increase insertion loss in the path and also often result in PIM generation. These issues manifest themselves mostly in the transmit path, where the power is high. While tolerable for systems with reverse-link-limited link budgets, such issues reduce the RF coverage of forward-link-limited systems. Fortunately, PIM testing often catches these issues. In severe cases, when a component becomes overheated, the systems stop working partially or completely. For example, if the transmission path passes both global system for mobile communication (GSM)/general packet radio service (GPRS) and LTE signals (as shown in Figure 1), either or both may be affected to the point that the transmit signal cannot be detected by the mobile device. Naturally, all major key performance indicators (KPIs) are worsened in the area of the affected cell, as discussed below.


Figure 1. Schematic of Cell Site (a GSM BTS and a UMTS Node B, each with duplexed Tx/Rx paths, combined through diplexers onto two RF coax cables that feed a dual-band or single-band antenna; the cables carry GSM Tx1/Rx1 with UMTS Tx3/Rx3 and GSM Tx2/Rx2 with UMTS Tx4/Rx4)
EFFECTS OF PIM IN WIRELESS SYSTEMS

PIM Generation and Levels of Tolerance
As described in the previous section, passive RF components usually experience some level of PIM. When the input signal to a passive device includes more than one frequency, various products of the input harmonics are generated. Product signals falling within the receive signal band pass through the transceiver filter and could easily desensitize the receiver. [4]



The maximum tolerable PIM performance for an RF network depends on the signals that traverse the individual devices, the locations of the devices in the RF system, and the signal power that is transmitted. Antennas usually have very stringent PIM requirements, because they carry almost the full power of the base station (BS) and carry signals with various frequencies. On the other hand, the PIM tolerance of the receiver filter located on the other side of the diplexer is not as stringent because it receives only a very low signal power in a single frequency band. Thus, RF engineers should indicate the maximum allowable PIM level for each component in the RF path based on a complete analysis of the feeder system. For example, the specifications for an antenna might be indicated as −107 to −110 dBm IM3 power, measured with two 43 dBm carrier tone inputs.

A receiver becomes desensitized when PIM power is comparable to thermal noise or other interferences. Such power causes receiver performance to deteriorate. The degree of desensitization varies depending on the power of the generated harmonics. Typically, this power is measured in decibels and is added to the noise floor of the BS. This type of PIM affects most networks, because most wireless systems are designed with balanced or uplink-limited link budgets. By increasing the BS noise floor, PIM decreases the maximum allowable path loss for the mobile.

Effects on Legacy 2G Systems
Legacy 2G systems suffer when PIM reduces the signal-to-noise ratio (SNR), which affects all of the major KPIs.

Accessibility
Mobiles in narrowband legacy systems such as GSM experience receiver desensitization as reduced cell radius and increased interference areas. The GSM mobile may not be able to read the broadcast control channel (BCCH) and access the BS when the mobile is located on the planned edge of the cell. Effectively, the cell shrinks, while still demanding full power from the mobile and transmitting the full power from the BS, which in turn increases the PIM levels and exacerbates the problem, further decreasing accessibility.

Retainability
PIM causes decreased retainability in a GSM/GPRS/enhanced data rates for global evolution (EDGE) cell. To illustrate, assume that a GSM mobile originates a call near a BS in an area with a relatively good SNR. Then, the mobile moves

to the cell edge, where the BS, affected by PIM, stops recognizing the mobile. However, the mobile continues to read a stronger received signal strength indication (RSSI) from the originating cell than from the neighbor and does not initiate the handoff. Nonetheless, the BS drops the call, thus reducing retainability and worsening the statistics.

Quality
Quality is also affected. Again assume that a GSM mobile originates a call near a BS in an area with a relatively good SNR. The mobile again moves closer to the cell edge, without crossing it. There, the BS, affected by PIM, barely recognizes the mobile, although the mobile still reads a stronger RSSI from the originating cell than from the neighbor. In this example, the mobile neither initiates the handoff nor drops the call. Instead, it experiences voice clipping, slow data rates, and other poor quality effects. For all KPIs, uplink power control may try to combat some level of PIM in the BS by increasing mobile power, but this results in decreased battery life and an increased level of uplink interference.

Effects on LTE
An LTE mobile experiences PIM problems similar to those of a legacy mobile despite designed resiliency to interference, because PIM affects the BS receiver by also increasing the overall noise floor. For example, suppose an LTE receiver works over the 10 MHz band with 90% utilization. If receiver desensitization is the typical 0.8 dB and the noise figure is 5 dB, the receiver has a noise floor of −99.5 dBm and can still detect signals with an interference level at the equipment antenna port of −106.5 dBm. [5] (This arithmetic is reproduced in the short sketch at the end of this section.) However, PIM further increases the receiver noise floor by 1 to 3 dB, thereby reducing receiver sensitivity and the ratio of signal energy to additive noise (Eb/No) in the system. Thus, a cell affected by PIM also shrinks from its planned radius, dropping the mobile connection before it reaches another cell. In severe cases, the receiver is overloaded and blocked [5] and the LTE BS loses its reference signal and resets. The PIM level drops during the reload and the cell starts working normally, but as soon as it starts picking up more users, the power increases, PIM levels grow, and the cycle is repeated. The LTE downlink, which consists of 180 kHz frequency blocks, can generate PIM even without the presence of 2G signals. When the LTE downlink experiences path nonlinearity,

the tones interact with each other and their products distort the downlink signal and leak into the adjacent channels. In extreme cases, the mobile may stop recognizing even an otherwise strong downlink channel.

Scheduler and ARQ
In addition, PIM may affect the whole scheduler algorithm. The scheduler may not be able to fight this increased noise and the resulting SNR reduction because the scheduler is better designed to combat temporary RF signal fading. [6] Instead, usage of the automatic repeat request (ARQ) mechanism may increase because more transmission errors are being caused by PIM-generated noise in the channel. It can be predicted that the second layer of the ARQ mechanism would be used more often than intended. More data frames would need to be retransmitted, which may decrease channel throughput.

Power Control and Inter-Cell Interference Coordination
An LTE mobile's power control pushes up the mobile's power in an attempt to combat PIM in the BS receive path. When overall power in the band of the affected cell increases, LTE spectrum efficiency decreases and inter-cell interference increases. The affected cell may get a high interference indicator over a large portion of the band. The neighboring cells start losing capacity in an attempt to accommodate the interference issue of the affected cell.

Multiple Antenna Element Transmission
Multiple antenna element transmission is designed to create diversity gain in LTE. [6] However, if both antennas are connected to the same receiver, PIM from one antenna receive path would desensitize the whole receiver. If the antennas are connected to different receivers, the PIM levels will be different in each. In this case, the diversity would cause the path less affected by PIM to be chosen. However, this decision would also be locked in due to PIM, instead of working against fading and capitalizing on multipath. Transmission to/from two or more antennas in both multiple-user multiple-input, multiple-output (MU-MIMO) and single-user (SU-MIMO) is affected, albeit to various extents, resulting in slower data rates than would be expected with multilayer transmission. For example, if SU-MIMO transmission is established with four antenna paths (layers), four times the data transmission speed of the single connection can be expected. [6] However, if one or more of these paths is dropped or affected by PIM, the top speed could be significantly less.
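As a concrete illustration of the receiver arithmetic quoted under Effects on LTE above, the short sketch below reproduces the −99.5 dBm noise floor and the roughly 0.8 dB desensitization figure from the stated bandwidth, utilization, and noise figure. It is illustrative only; the −174 dBm/Hz thermal noise density is an assumed standard value, not a number taken from the paper.

```python
import math

# Reproduce the "Effects on LTE" example: 10 MHz channel, 90% utilization, 5 dB noise figure.
THERMAL_NOISE_DBM_HZ = -174.0          # thermal noise density at room temperature (assumed)
bandwidth_hz = 0.90 * 10e6             # 90% utilization of a 10 MHz band
noise_figure_db = 5.0

noise_floor_dbm = THERMAL_NOISE_DBM_HZ + 10 * math.log10(bandwidth_hz) + noise_figure_db
print(f"Receiver noise floor ~ {noise_floor_dbm:.1f} dBm")        # ~ -99.5 dBm, as quoted

# Desensitization caused by an interferer (e.g., PIM) at -106.5 dBm at the antenna port:
interference_dbm = -106.5
rise_db = 10 * math.log10(1 + 10 ** ((interference_dbm - noise_floor_dbm) / 10))
print(f"Noise floor rise (desensitization) ~ {rise_db:.1f} dB")   # ~ 0.8 dB, as quoted
```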

MINIMIZING PIM GENERATION

As stated, the main mechanisms that generate IM products in passive components are poor (low pressure) contacts between conductors, the presence of corrosion products, and the presence of ferromagnetic materials in or near the conducting path. At the same time, the greater the number of frequencies passing through those components, the greater the possibility that harmful frequency combinations may occur. Therefore, the steps to minimize PIM are twofold: through proper planning and through better physical conditions.

Proper Planning

Frequency Planning
The RF engineer should conduct IM analyses to identify possible frequency combinations that could produce harmful products. Several IM study software packages are available that can quickly run through combinations of frequencies and point out which ones can affect the receivers. The RF engineer can then try to avoid these combinations by using advanced options in 2G frequency hopping or the LTE scheduler. (A minimal sketch of such a frequency scan is given at the end of this section.)

Spatial Planning
If the harmful frequency combinations overlap the carrier frequencies, site selection and antenna placement should be considered. The site engineers should be advised of the appropriate minimum vertical and horizontal separation between carrier antennas. The same analysis may need to be repeated at each site in question to determine site-specific antenna placement.

Cable Utilization
Cable utilization to help avoid PIM generation involves a systemwide decision. When there is a need to transmit two or more transmit signals (Figure 1), the RF/systems engineer should be encouraged to distribute these signals over different cables. This decreases the maximum transmit power in a path and decreases the PIM.

Better Physical Conditions
PIM products are minimized by improving the physical connections, by paying attention to the detailed design of all conductor junctions so that they remain solid and reliable for all service conditions, by eliminating the possible formation of corrosion products over time, and by eliminating ferromagnetic materials from any region where significant RF currents flow.

Contacts
A low-pressure conductor junction actually has true metal-to-metal contact at only a relatively low number of contact points, which means that some current flows across the nonlinear oxide layers on the conductor surfaces and generates IM products. The degree of nonlinearity and the generation of IM are reduced, then, by improving the contact mechanism. The best methods for doing this are welding, soldering, and brazing. These methods establish a continuous metallic path for the current flow, but they are often impractical. Some junctions must be able to be disassembled (e.g., connector interfaces), and the configuration of others may be determined by product assembly or field assembly requirements. In these cases, the mechanical arrangements for the contact design must be capable of generating high contact pressures (up to 1,000 psi). Such pressures force a higher proportion of the contact area to be true metal to metal, thereby reducing the degree of nonlinear current flow. Also, the design of the contact and the arrangement of the associated mechanical support must be such that high contact pressure is maintained throughout the fluctuations in ambient temperature and vibration that occur under various operating conditions.

Methods of making RF electrical contacts, in approximately descending order of quality in terms of minimizing IM generation, include:
• Soldering, brazing, or welding
• Clamped joints (with suitable support arrangements)
• Butt joints
• Spring fingers
• Crimped joints

All of these find application in typical passive components except possibly the last (crimped

joints), which can loosen with repeated thermal expansion and contraction.

Corrosion Products
Corrosion products, typically metal oxides, hydroxides, carbonates, and other salts, form on metal surfaces in the presence of water. Their formation can be especially severe at the junctions of dissimilar metals. For this reason, it is most important to prevent the ingress of even small amounts of water into an RF transmission path (where it would also cause immediate electrical performance degradation). It is also good design practice to use compatible metals in the electrochemical series at all junctions.

Ferromagnetic Materials
As noted earlier, ferromagnetism gives rise to a field-dependent permeability and generation of IM products. To minimize IM products, only nonferromagnetic materials (such as copper, aluminum, silver, gold, brass, and phosphor bronze) can be used for conductors or as plating or under-plating materials for conductor surfaces. Before the deployment of modern cellular phone systems, nickel (which is ferromagnetic) was often used as a plating material on conductor surfaces (e.g., on RF connector bodies) because of its excellent environmental stability and reasonably low cost. This practice was soon found to cause IM generation, however, and nickel has generally not been used on RF connectors in these systems for some time. Nickel was sometimes used, too, as under-plating for gold, and doing so was also found to give rise to IM products.

Figure 2 shows the variation of current density with distance into the conductor for 100 microinches of gold (which is a typical plating thickness) on nickel at three different frequencies. At 1 GHz (close to cellular operating frequencies), the current density at the top of the under-plating is about 40% of that at the outer current-carrying surface; hence, the nonlinearity and the generation of IM products would be high. The situation improves somewhat at higher frequencies because of the reduction in skin depth, but even at 10 GHz (much higher than present operating frequencies), the current density at the nickel under-plating surface is still 5% of that at the outer conductor surface. It may be necessary to use a ferromagnetic material for strength or other reasons; thus, very small diameter coaxial cables may use a copper-clad steel wire for the inner conductor. In these cases, the current-carrying surfaces must have a thickness of several skin depths over the ferromagnetic material.
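To connect the skin-depth argument above to the numbers quoted for Figure 2, the sketch below estimates the skin depth of a gold plating layer and the relative current density remaining at the gold/nickel interface. It is a simplified illustration: the resistivity assumed for gold and the single-exponential decay model are assumptions, so the results only approximate the Figure 2 curves.

```python
import math

# Approximate the gold-over-nickel plating example discussed above (cf. Figure 2).
RHO_GOLD = 2.44e-8          # assumed resistivity of gold, ohm*m (not from the paper)
MU0 = 4 * math.pi * 1e-7    # permeability of free space, H/m

def skin_depth_m(freq_hz: float, rho: float = RHO_GOLD) -> float:
    """Classical skin depth for a non-magnetic conductor."""
    return math.sqrt(rho / (math.pi * freq_hz * MU0))

plating_m = 100e-6 * 0.0254   # 100 microinches of gold, in metres (~2.54 um)

for f in (0.1e9, 1.0e9, 10.0e9):
    delta = skin_depth_m(f)
    ratio = math.exp(-plating_m / delta)   # simple exponential current-density decay
    print(f"{f/1e9:>4.1f} GHz: skin depth ~ {delta*1e6:.2f} um, "
          f"current density at the nickel interface ~ {ratio:.0%}")
# At 1 GHz this gives roughly 35-40% (the paper quotes "about 40%"),
# and at 10 GHz only a few percent (the paper quotes about 5%).
```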

Figure 2. Current Distribution for Composite Plating [7] (fraction of total current versus distance from the outer surface, 0 to 200 microinches, for 0.1 GHz, 1.0 GHz, and 10.0 GHz)
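As promised under Frequency Planning above, the sketch below shows the kind of two-tone IM scan that such frequency-planning tools perform. The 1805 MHz and 1880 MHz carriers (the same band used in the swept tests of Figures 6 through 8) and the 1710–1785 MHz receive band are assumed values chosen for illustration, not data from a specific site survey.

```python
# Minimal two-tone intermodulation scan, as a frequency-planning illustration.
def im_products(f1_mhz: float, f2_mhz: float, max_order: int = 7):
    """Yield (order, frequency) for difference-type IM products of two tones."""
    for n in range(2, (max_order + 1) // 2 + 1):     # n=2 -> 3rd order, n=3 -> 5th, n=4 -> 7th
        order = 2 * n - 1
        for f in (n * f1_mhz - (n - 1) * f2_mhz,
                  n * f2_mhz - (n - 1) * f1_mhz):
            yield order, f

RX_BAND_MHZ = (1710.0, 1785.0)   # assumed receive (uplink) band for this example
f1, f2 = 1805.0, 1880.0          # example downlink carrier frequencies, MHz

for order, f in im_products(f1, f2):
    in_rx = RX_BAND_MHZ[0] <= f <= RX_BAND_MHZ[1]
    flag = "  <-- falls in Rx band" if in_rx else ""
    print(f"IM{order}: {f:7.1f} MHz{flag}")
# The IM3 product 2*1805 - 1880 = 1730 MHz lands inside the 1710-1785 MHz receive band,
# which is exactly the kind of reverse-IM condition the swept tests in Figures 6-8 look for.
```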


PIM MEASUREMENT CONSIDERATIONS

As indicated earlier, it is important to measure PIM, not only to validate the integrity of the components in the RF path, but also to verify that the connectors have been properly attached and connections properly torqued. A PIM test can be used to identify loose connectors as well as poorly assembled or designed RF components and modules that might be missed by a swept return loss or other test. At the same time, a PIM test can also be more difficult to perform. This section explores the best practices for testing PIM and provides recommendations on how best to proceed.

Measurement Sensitivity
The successful measurement of PIM products in the factory or in a field setting is difficult because of the sensitivity of the measurement to factors in the environment as well as in the test setup and instrumentation itself. Antenna manufacturers, for example, typically use a PIM test chamber with RF-absorptive material to avoid the effects of the environment. As another example, in most cases, the 50 ohm dummy loads used to calibrate the test set for a swept return loss test generate a high PIM level. A load supplied by the test equipment provider must instead be used. Typically, such a load consists of a length of small diameter coaxial cable.

One of the first things to understand about PIM measurements is how the difference in the transmitter power and receiver sensitivity of the PIM tester can affect the accuracy and sensitivity of the measurements. Note that the specified power level applies to each of the two transmitter signals that the instrument generates. So a PIM analyzer specified to output 43 dBm signals actually outputs two signals at 20 W (43 dBm) each, for a total output power of 40 W (46 dBm).

Test Tone Power Level
As indicated earlier, power level is important because the effect of transmitter power on

the level of the PIM products is nonlinear. Theoretically, if the transmitter carrier changes by 1 dB, the corresponding change in PIM power level is 3.0 dB. In practice, this change ranges between 2.4 dB and 2.8 dB for every 1 dB change in transmitter signal power level. In practical terms, this means that care must be taken in specifying and executing the tests. If an antenna, for example, is measured to have PIM products of −150 dBc with two 20 W test tones, then the performance of that same antenna would be −164 dBc when measured with two 2 W test tones. This is because reducing the input signal by 10 dB reduces the PIM from the same antenna by 24 dB. The measured products are then 24 dB − 10 dB = 14 dB lower relative to the carriers. Figure 3 helps to explain this concept.

In Figure 3, the upper bar represents the high power measurement, where 43 dBm of signal power per tone is used, as shown on the right. The required signal level is −107 dBm. The difference between these two is 150 dB, i.e., −150 dBc. Two test tones, each +43 dBm (20 W), is the industry standard measurement. In the lower bar, test tones with 33 dBm of signal power are used. In this case, the measured PIM signal from the same antenna is 24 dB lower, which is −131 dBm. The difference between −131 dBm and +33 dBm is 164 dB, i.e., −164 dBc. The reason the absolute PIM level is 24 dB lower is because the fundamental test tones are reduced in signal strength by 10 dB. In this case, the PIM signals change 2.4 dB for every 1 dB of change in the test tones, which is typical for a third-order product.

A PIM analyzer with at least 20 W per tone is recommended. The low power tester shown in Figure 3 is incapable of accurately measuring a device with performance of −150 dBc (as measured at 20 W) and should not be used for PIM measurements any better than −140 dBc (at 20 W).

Figure 3. Equivalent PIM Tests for Different Power Levels (two +43 dBm [20 W] tones yield PIM at −107 dBm, i.e., −150 dBc; two +33 dBm [2 W] tones yield PIM at −131 dBm, i.e., −164 dBc; for every 1 dB of carrier reduction, PIM reduces by about 2.4 dB; the typical residual PIM and noise floor of the instrument are also indicated)
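The 20 W versus 2 W equivalence described above can be expressed as a small conversion helper. The sketch below assumes the roughly 2.4 dB-per-1 dB slope quoted in the text; it is an illustration only and is not part of any analyzer's firmware or test procedure.

```python
# Convert a PIM measurement taken at one test-tone power to the level expected at
# another test-tone power, using the ~2.4 dB change per 1 dB of carrier change
# quoted above for third-order products.
SLOPE_DB_PER_DB = 2.4

def equivalent_pim(pim_dbm: float, tone_dbm: float, new_tone_dbm: float,
                   slope: float = SLOPE_DB_PER_DB):
    """Return (absolute PIM in dBm, relative PIM in dBc) at the new tone power."""
    new_pim_dbm = pim_dbm + slope * (new_tone_dbm - tone_dbm)
    return new_pim_dbm, new_pim_dbm - new_tone_dbm

# The example from the text: -107 dBm PIM (-150 dBc) with two +43 dBm (20 W) tones,
# re-measured with two +33 dBm (2 W) tones:
pim_dbm, pim_dbc = equivalent_pim(-107.0, 43.0, 33.0)
print(f"Expected PIM at +33 dBm tones: {pim_dbm:.0f} dBm ({pim_dbc:.0f} dBc)")
# -> about -131 dBm, i.e., roughly -164 dBc, as in Figure 3.
```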

PIM signals change phase as they travel down the cable, and they add or subtract depending on their relative phases at all points along the cable. The in-phase and anti-phase levels depend on the relative phases of the individual PIM sources, their frequencies, and the frequency being observed. For this reason, a swept test is recommended. Because the frequencies required to perform a swept test are in commercial use, the test must be performed in a factory setting. The only way to get an accurate measurement is to sweep one of the tones across the band of interest; otherwise, an error almost always results. A practical alternative is to step one of the test tones across the band in four or five steps when using analyzers that do not allow swept tests. Thus, a key factor in selecting a PIM analyzer is its ability to perform swept measurements.


Residual PIM of Analyzer
Another significant measure of the performance of a PIM analyzer is its own internal residual PIM level. It is recommended that the measured PIM value of the unit under test be at least 10 dB above the residual PIM level of the analyzer. Figure 4 shows the measurement error due to residual PIM as a function of PIM signal proximity to the residual PIM noise floor. The chart is derived by vector addition, in the best and worst phase cases, of the PIM signal to be measured and the residual PIM level of the test equipment.
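A minimal sketch of that vector addition is shown below. It assumes the device PIM and the analyzer's residual PIM combine as voltage phasors in the best and worst phase cases, which is the basis the text gives for Figure 4; it is illustrative only.

```python
import math

def uncertainty_bounds_db(margin_db: float):
    """Best/worst-case measurement error when the true PIM is margin_db above
    the residual PIM of the test set, assuming in-phase and anti-phase voltage addition."""
    ratio = 10 ** (-margin_db / 20)            # residual relative to true PIM (voltage ratio)
    in_phase = 20 * math.log10(1 + ratio)      # superposition (reads high)
    anti_phase = 20 * math.log10(1 - ratio)    # cancellation (reads low)
    return in_phase, anti_phase

for margin in (5, 10, 15, 20):
    hi, lo = uncertainty_bounds_db(margin)
    print(f"margin {margin:>2} dB: error between {hi:+.1f} dB and {lo:+.1f} dB")
# At a 10 dB margin this gives roughly +2.4/-3.3 dB, close to the
# +2.2/-3.3 dB figure quoted below for Figure 4.
```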

As seen in Figure 4, the error resulting from measuring within 10 dB of the residual PIM floor is +2.2/−3.3 dB. For signal levels closer to the instrument's noise floor, significantly worse errors can result. This is because each point in the signal path where there is a connector or contact creates some small level of PIM.

Figure 4. PIM Measurement Minimum Signal Level (interference effect of two PIM sources: IM1 is the true PIM of the device under test, not known but estimated by the resultant; IM2 is another PIM source, such as the residual IM of the test equipment; the resultant is the measured PIM, and measurement uncertainty equals the resultant minus IM1, bounded by in-phase superposition and anti-phase cancellation and plotted against the IM1-to-IM2 margin in dB)

Insertion Loss
It is important to clearly define the reference point of the test tone power level because there is insertion loss in the test cables. Typically, on the order of 0.5 dB to 1.5 dB of insertion loss exists in cables, depending on their length and quality. If, as is usually the case, the specification calls for testing at the input terminal of the unit under test, then the loss of the cables must be compensated for by increasing the output power of the instrument. This compensation is typically accomplished via a built-in calibration routine that automatically accounts for the loss in the test cables and increases the PIM testing output power exactly enough to meet the requirements at the output of the test cables.

Mechanical Shock
A common practice among top antenna vendors is to induce mechanical shock to an antenna while it is undergoing testing. Doing this in a controlled, repeated manner has been demonstrated to catch poorly performing units that would have otherwise passed the test.


Test Equipment Setup
The test equipment setup, shown in Figure 5, must first be verified before it is used. Fundamentals that must be followed include:
• Use low IM jumper cables. High performance jumper cables are a must to correctly measure PIM. Otherwise, the cable PIM dominates the measurement and masks the actual performance of the system being tested.
• Use a low IM 50 ohm load. Low PIM loads typically contain lengths of coiled cable and have excellent PIM performance. Resistive loads, as used for swept tests, typically generate high levels of PIM.
• Use low IM adapters and minimize the number of adapters. However, the use of a connector saver at the test equipment connection is encouraged.
• Inspect connector faces for damage and cleanliness. Clean or replace them as needed.
• Torque connectors per specification to keep them tight.
• To prevent mechanical strain on the connectors, use low PIM jumpers to connect heavy or stiff items (antennas and large diameter coaxial cables).
• Perform swept tests as noted above.
• Use 7/16 DIN connectors as the high performance PIM connectors of choice.
• Do not use braided cables for test applications. These generally have poor PIM performance. Even if they are good initially, they can degrade with repeated flexing.

Figures 6, 7, and 8 illustrate inappropriate setup configurations and the poor results they lead to. In Figure 6, braided test cables are used. In Figure 7, the problem is too many adapters and old, worn connectors.
Figure 5. Test Equipment Setup


Figure 6. Use of Braided Test Cables (reverse IM3 response of about −66 dBm near 2056–2060 MHz with UMTS-band test tones at 2110–2170 MHz, measured on a Summitek Instruments SI-2000E analyzer)

Figure 7. Use of Too Many Adapters and Old, Worn Connectors (reverse IM3 response of about −87 to −88 dBm at 2056 MHz under the same test conditions as Figure 6)


Figure 8. Proper Torque for Connections (swept reverse IM3 responses, with 1805–1880 MHz test tones at +43 dBm, for a properly torqued connection and an improperly torqued one; the measured IM3 levels differ by roughly 20 to 25 dB, approximately −121 to −124 dBm versus −95 to −98 dBm)

Figure 9. Environmental Impacts on Antenna PIM Testing Results (the same antenna measured under clear sky: −123 dBm [−166 dBc]; pointed toward a forklift: −84 dBm [−127 dBc]; next to a person with phone, keys, adapters, and badge: −94 dBm [−137 dBc]; near a shelter: −102 dBm [−145 dBc]; pointed at a fence: −102 dBm [−145 dBc]; near a cabinet and test equipment: −96 dBm [−139 dBc])

In Figure 8, the antenna on the left is properly torqued, while the antenna on the right is loosely torqued. The performance of the loose connector is roughly 20 dB to 25 dB worse than that of the properly torqued connector.

Testing Standalone Antennas
When unmounted standalone antennas are to be tested, the best, most repeatable measurements are obtained in an RF anechoic chamber. If such a chamber is unavailable, special care must

be taken to guarantee a free sky view for the antenna. This means that the antenna must be tested in an area clear of metal objects and obstructions and within the antenna's main beam pattern. A 90-degree antenna requires a wider clear area than a 33- or 65-degree antenna. The antenna shown in Figure 9 meets its PIM specification, has a good test setup, and was validated. As the figure depicts, it was leaned on its side and pointed in different directions. The measured results indicated that extreme


variations of as much as 40 dB or more can be observed, depending on where the antenna is pointing. These results illustrate how an antenna that is better than the specification by 16 dB can be shown to apparently fail by more than 20 dB, when, in fact, the poor readings are caused by the environment. Clearly, the environment in which an antenna is tested must be free of objects and people to avoid this type of problem.

Summary
To summarize, PIM testing is sensitive and can identify faulty components and poor workmanship during installation. However, making accurate measurements requires good test equipment, appropriate procedures, and care throughout. The recommended steps, if followed, produce valid test results and sound RF paths and components.

CONCLUSIONS
In modern wireless communication systems, minimizing PIM generation is critical to achieving optimum system performance. In taking steps to minimize PIM generation, particular attention must be paid to the quality of the system components and interconnections. At the same time, measuring PIM should be added to the standard site testing and troubleshooting procedures. PIM measurement must be undertaken using suitably reliable and accurate test equipment, and the measurements must be conducted with care.

ACKNOWLEDGMENTS
The authors gratefully acknowledge the managements of Bechtel Corporation and of Andrew Solutions (a CommScope Company) for permission to publish this paper, as well as Igor A. Chugunov (Bechtel) for his thorough paper review and thoughtful comments.

ADDITIONAL READING
Additional information sources used to develop this paper include:
• I.A. Chugunov, A.A. Kurochkin, and A.M. Smirnov, "Experimental Study of 3G Signal Interaction in Nonlinear Downlink RAN Transmitters," Bechtel Telecommunications Technical Journal, Vol. 4, No. 2, June 2006, pp. 91–98, http://www.bechtel.com/communications/assets/files/TechnicalJournals/June2006/Article10.pdf.
• J.D. Paynter and R. Smith, "Coaxial Connectors: 7/16 DIN and Type N," Mobile Radio Technology, April 1995 and May 1995, Intertech Publishing Corp. Reprinted as Andrew Corporation Bulletin No. 3652, May 1995.

REFERENCES
[1] M. Bani Amin and F.A. Benson, "Coaxial Cables as Sources of Intermodulation Interference at Microwave Frequencies," IEEE Transactions on Electromagnetic Compatibility, Vol. EMC-20, No. 3, August 1978, pp. 376–384, access via http://www.ieeexplore.ieee.org/xpl/tocresult.jsp?isYear=1978&isnumber=4091185&Submit32=View+Contents.
[2] G.H. Stauss, Ed., "Studies on the Reduction of Intermodulation Generation in Communication Systems," NRL Memorandum Report 4233, Naval Research Laboratory, Washington, DC, July 1980.
[3] J.A. Woody and T.G. Shands, "Investigation of Intermodulation Products Generated in Coaxial Cables and Connectors," Final Technical Report, RADC-TR-82-240, Rome Air Development Center, Georgia Institute of Technology, Atlanta, GA, September 1982, see http://www.stormingmedia.us/43/4362/A436221.html.
[4] A.A. Kurochkin and E. Dinan, "Importance of Antenna and Feeder System Testing in Wireless Network Sites," Bechtel Communications Technical Journal, Vol. 5, No. 2, September 2007, pp. 61–68, http://www.bechtel.com/communications/assets/files/TechnicalJournals/September2007/BTTJv5n2.pdf.
[5] M.H. Ng, S. Lin, J. Li, and S. Tatesh, "Coexistence Studies for 3GPP LTE with Other Mobile Systems," IEEE Communications, Vol. 47, No. 4, April 2009, access via http://dl.comsoc.org/comsocdl/#.
[6] D. Astely, E. Dahlman, A. Furuskär, Y. Jading, M. Lindström, and S. Parkvall, "LTE: The Evolution of Mobile Broadband," IEEE Communications, Vol. 47, No. 4, April 2009, access via http://dl.comsoc.org/comsocdl/#.
[7] B. Carlson, "RF/Microwave Connector Design for Low Intermodulation Generation," Proceedings of IICIT CONN-CEPT 92: 25th Annual Connector and Interconnection Technology Symposium, San Jose, CA, September 30–October 2, 1992, pp. 153–166.

BIOGRAPHIES
Ray Butler is vice president of Engineering, Base Station Antenna Systems, at Andrew Solutions, the wireless business unit of CommScope, Inc. As such, he is responsible for overall base station antenna design. Previously at Andrew, Ray was vice president of Systems Engineering and Solutions Marketing, responsible for worldwide technical sales engineering and marketing solutions for wireless operators.


Ray has over 25 years of RF engineering experience. Earlier in his career, he was director of National RF Engineering at AT&T Wireless and vice president of Engineering, Research and Development, and of International Operations at Metawave Communications, a smart antenna company. Ray was also technical manager of Systems Engineering for Lucent Technologies Bell Laboratories and has held other management positions with responsibility for the design of RF circuits, filters, and amplifiers. Ray holds an MS in Electrical Engineering from Polytechnic University and a BS in Electrical Engineering from Brigham Young University, Provo, Utah. Aleksey A. Kurochkin, project manager for Bechtel Communications, is currently responsible for the product testing, system design, and entire site implementation cycle of EVDO and LTE technology in one of the most important regions for a US cellular operator. He is also a member of Bechtels Global Technology Team and was a member of the Bechtel Telecommunications Technical Journal Advisory Board. Formerly, as executive director of Site Development and Engineering for Bechtel Telecommunications, Aleksey managed the Site Acquisition and Network Planning Departments and oversaw the functional operations of more than 300 telecommunications engineers, specialists, and managers. In addition, he originated the RF Engineering and Network Planning Department in Bechtels Telecommunications Technology Group. As a member of Bechtels Chief Engineering Committee, Aleksey introduced the Six Sigma continuous improvement program to this group. He is experienced in international telecommunications business development and network implementation, and his engineering and marketing background gives him both theoretical and hands-on knowledge of most wireless technologies. Before joining Bechtel, Aleksey established an efficient multiproduct team at Hughes Network Systems, focused on RF planning and system engineering. In addition to his North American experience, he has worked in Russia and the Commonwealth of Independent States. Aleksey has an MSEE/CS in Telecommunications from Moscow Technical University of Communications and Informatics, Russia.

Hugh Nudd is cable product development manager with Andrew Solutions. He joined the firm at its Lochgelly, Scotland, plant in 1978, transferring to the United States in 1982. Since 1983, Hugh has managed groups responsible for product design and development for air and foam dielectric coaxial cables, fiber cables, semiflexible elliptical waveguides, rigid waveguides, and their connectors and other components. Hugh has worked for more than 40 years on RF and microwave components and transmission lines. Before joining Andrew, he worked at Marconi Space and Defence Systems on transmission components and subsystems for satellite communications. His initial job was at the General Electric Company (UK), where he worked on the design and development of microwave components, mainly for trunk radio systems. Hugh graduated with Honors in Physics from the University of Oxford, England.


CLOUD COMPUTINGOVERVIEW, ADVANTAGES, AND CHALLENGES FOR ENTERPRISE DEPLOYMENT

Issue Date: December 2009 AbstractCloud computing is a paradigm shift that enables scalable processing and storage over distributed, networked commodity machines. Enterprises that want to reap the benefits of cloud computing must realize that the decision to migrate is neither quick nor easy. Key enterprise personnel must fully understand the cloud services providers offering and be ready to discuss the challenges and obstacles both organizations will face together as the enterprise migrates to the cloud. This paper provides a basic overview of cloud technology and reviews several deployment options that can be described as instantiations of the cloud. Potential advantages of using cloud computingincluding scalability, flexibility, and reduced capital and operating expensesare reviewed, as are hurdles to successful deployment. The latter include regulatory, performance, security, and availability issues. A brief economic analysis of cloud computing and an overview of key players offering cloud services are also provided. Keywordscloud computing, grid computing, scalability, security, software as a service (SaaS), virtualization

INTRODUCTION
The extensibility and flexibility of software architectures and the promise of distributed computing have created a concept known as cloud computing. The cloud shifts the centralized, owned-and-operated computing infrastructure model to a fully distributed, decentralized paradigm. To enable the cloud, data centers leverage commodity hardware, virtualization techniques, open frameworks, and ubiquitous network access. Cloud computing builds on the grid computing concept that created a virtual supercomputer through distributed, parallel processing. Grid computing was generally used to run a few processor-intensive tasks that would normally be run on a high-performance machine. Cloud computing extends this concept to perform multiple tasks for numerous users in a distributed fashion. The network (intranet or Internet) is employed to interconnect commodity machinery and to deliver services to disparate users. [1] Figure 1 illustrates the cloud infrastructure.


Brian Coombe
bcoombe@bechtel.com

Figure 1. The Cloud Infrastructure



The cloud computing concept is not new. As early as 1961, John McCarthy, a Massachusetts Institute of Technology professor, proposed a time-sharing computing model in which hardware and services were:
• Centrally hosted and managed
• Sold and billed like utilities such as electricity and water
However, the limited telecommunications services bandwidth prevented wide-scale adoption of this model. Furthermore, the steady decrease in size and cost of general-purpose computing hardware pushed widespread adoption of the traditional enterprise-owned-and-operated hardware and software paradigm. Almost five decades later, the development of new management capabilities, availability of open-source and low-cost software, commoditization of computing, and increase in telecommunications services bandwidth at reduced costs allow the cloud computing model to make technical and financial sense for some enterprises. [2]

ABBREVIATIONS, ACRONYMS, AND TERMS
API: application programming interface
Cloud: computing architecture with distributed, scalable machines providing services via a network
CPU: central processing unit
FISMA: Federal Information Security Management Act
HIPAA: Health Information Portability and Accountability Act
IT: information technology
OS: operating system
OSI: Open Systems Interconnection (International Organization for Standardization Standard 35.100)
SaaS: software as a service

CLOUD ARCHITECTURE
Several fundamental components make up the cloud architecture:
• Computing resources are located off site in a data center that is not owned or managed by the enterprise using the cloud services.
• Resources often leverage virtualization for ease of management and interoperability.
• Resources are available on demand.
• Infrastructure is often shared. Virtualization can enable multiple customers and applications to share the same physical machines.
• Services are generally provisioned on demand and scaled up or down as required.
• Services are usually subscription-based, with a variety of tiered service offerings as well as flat-rate and per-use pricing models.

These components, fundamentally tied together into an architecture, produce a cloud services offering. [3]

The architecture of cloud computing can be described using a layered model, in a manner similar to that of the Open Systems Interconnection (OSI) seven-layer model developed to provide an abstract description of layered communications and computer network protocol design. At the top of the cloud model is the client layer, which interfaces directly with cloud environment end users. Below the client layer is the cloud applications layer. Applications that run on the cloud reside here and are generally accessed by application developers. Next is the software infrastructure layer, where basic infrastructure services, including storage, computing, and communications, are performed. Below these three layers are the actual cloud environment software and hardware layers. At the software layer resides the kernel that translates and executes the cloud applications' instructions on the cloud hardware. In many architectures, this cloud software kernel can include a hypervisor for executing virtualized applications. Finally, underpinning all of the cloud layers is the hardware layer, which includes processor, memory, storage, and communications hardware. Figure 2 depicts the relevant layers. [4]

Figure 2. Cloud Layer Model (client; cloud applications; software infrastructure, comprising computing, storage, and communications; software kernel, e.g., Microsoft Windows OS, Linux OS, or Mac OS; hardware)

CLOUD OFFERINGS
Cloud computing is really about two fundamental concepts: leveraging economies of scale and improving hardware use. In its current buzzword status, many cloud offerings are quick to associate a service or solution with the cloud architecture. While taking many forms, cloud offerings can generally be sorted into several categories, including:
• Software as a service (SaaS)
• Utility computing
• Disaster recovery

[Figure 2 layers, from top to bottom: Client; Cloud Applications; Software Infrastructure (Computing, Storage, Communications); Software Kernel (e.g., Microsoft Windows OS, Linux OS, Mac OS); Hardware.]


Figure 2. Cloud Layer Model

Software as a Service
SaaS allows an application to be delivered through a Web browser. The application is generally software and hardware agnostic and relies on server components outside the user's network. The server hardware can be owned and managed by the firm selling the SaaS application, or it can be further removed (in other words, hosted and managed by another party). SaaS is generally characterized in four maturity levels, with the most mature allowing the greatest flexibility, scalability, and reliability. Traditional software firms are beginning to provide SaaS offerings to small- and medium-sized enterprises that often do not have the infrastructure and resources to run larger-scale enterprise applications. Other SaaS offerings include typical productivity and office suites, which can reduce an enterprise's licensing, maintenance, and infrastructure requirements. [5]

Utility Computing
The term utility computing predates widespread use of the term cloud, but is a classic example of the benefits of a cloud infrastructure. Providers offer solutions that enable a virtual data center, that is, an Internet-enabled commodity processing and storage hardware environment. Enterprises can outsource some or all data center needs to the utility computing provider, often starting with lower-value, routine processing and storage. Current offerings range from on-demand processing and storage up to entire remotely hosted and managed data centers. The cloud paradigm allows these services to be offered via a distributed, flexible, connected topology, rather than through a single data center. [6]

Disaster Recovery
Along with offering traditional server replacement and utility computing options, the cloud also enables a new way to deliver disaster recovery services. Disaster recovery often requires dedicated, specific hardware for data storage and remote applications operation. The cloud paradigm can allow enterprises to

use distributed networked commodity devices to replace dedicated data centers for disaster recovery, thereby reducing the costs to provide this service and, potentially, making it available to a wider range of clients. [7]

Application Programming Interfaces
APIs are becoming a popular way to provide new service offerings while leveraging the cloud infrastructure. APIs allow unique applications to be written and offered via the Web using existing software and services. These applications are software and hardware agnostic and can be run via a Web browser. An application provider can use another provider's Web-enabled services and software and either manage and host the application itself, or allow the existing provider to host the application on its own network. [7]

Managed Services
Managed services, such as remote monitoring and administration, security services, anti-virus scanning, and other back-end offerings, have been prominent for over a decade. However, recent advances allow services offerors to distribute the deployment and management of their offerings, resulting in flexibility, increased reliability, and reduced costs for both the provider and the user.
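To make the API-based model described above concrete, the following minimal sketch shows a thin application calling a provider's Web-exposed storage service over HTTP. The endpoint URL, resource names, and authentication scheme are hypothetical, not those of any particular provider; only the general pattern of stateless HTTP calls to a provider-hosted service is the point.

```python
import requests  # widely used HTTP client; any HTTP library would serve

# Hypothetical provider endpoint and credentials -- illustrative only.
BASE_URL = "https://api.example-cloud.com/v1"
API_KEY = "replace-with-issued-key"

def store_document(doc_id: str, payload: dict) -> dict:
    """Store a document through the provider's Web-exposed storage service."""
    resp = requests.put(
        f"{BASE_URL}/documents/{doc_id}",
        json=payload,
        headers={"Authorization": f"Bearer {API_KEY}"},
        timeout=10,
    )
    resp.raise_for_status()   # surface HTTP errors to the caller
    return resp.json()        # provider returns JSON metadata

def fetch_document(doc_id: str) -> dict:
    """Retrieve the same document; the calling application holds no state."""
    resp = requests.get(
        f"{BASE_URL}/documents/{doc_id}",
        headers={"Authorization": f"Bearer {API_KEY}"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()
```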


Large Dataset Processing
Cloud architectures have enabled significantly improved processing of large datasets. The commercially developed Google MapReduce, a programming model and associated implementation for processing and generating large datasets, is particularly efficient and allows distribution, independent processing, and consolidation of analysis, as shown in Figure 3. Apache Hadoop is a software product that provides a distributed computing platform for sharing and processing large amounts of data. The Hadoop project develops open-source software for reliable, scalable, distributed computing; one of its subprojects is a free, open-source version of MapReduce, available for general use. In a recent example of its application, an enterprise needed to process terabytes of data and analyze server use logs to provide better troubleshooting and optimization. When the datasets became too large for a single, high-performance machine to handle efficiently, the enterprise used Hadoop (including MapReduce) and other Apache products to deploy and process the data across 10 commodity nodes. Performance was greatly improved, processing times were reduced, and the system demonstrated significant scalability. [8]
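As an illustration of the programming model that Hadoop implements, the following is a minimal, single-process sketch of the map/shuffle/reduce pattern; a real Hadoop job expresses comparable map and reduce functions, but the framework distributes the work across many commodity nodes. The word-count logic and sample log lines are illustrative only.

```python
from collections import defaultdict
from itertools import chain

def map_phase(record: str):
    """Map step: emit (key, value) pairs -- here, one pair per token."""
    for token in record.split():
        yield (token.lower(), 1)

def reduce_phase(key, values):
    """Reduce step: combine all values observed for one key."""
    return key, sum(values)

def mapreduce(records):
    # Shuffle: group intermediate pairs by key (a distributed framework
    # performs this grouping across nodes).
    groups = defaultdict(list)
    for key, value in chain.from_iterable(map_phase(r) for r in records):
        groups[key].append(value)
    # Each group is reduced independently -- that independence is what
    # lets the work be spread over commodity hardware.
    return dict(reduce_phase(k, v) for k, v in groups.items())

if __name__ == "__main__":
    logs = ["GET /home 200", "GET /home 500", "POST /login 200"]
    print(mapreduce(logs))   # e.g., {'get': 2, '/home': 2, '200': 2, ...}
```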

[Figure 3 depicts the MapReduce flow: input data is split from Start across parallel map tasks (Map1 through MapN), whose outputs are consolidated in a Reduce step before Finish.]

Figure 3. The MapReduce Process

Cloud Integration
While a variety of cloud services scenarios and deployments exist, many cloud offerings are still deployed as islands today. Enterprises may have to integrate data and existing applications with applications deployed on a cloud or integrate multiple cloud applications. A new cloud integration market is emerging, with established and upstart players offering software and services that promise to stitch together disparate applications and services in a cloud environment. Table 1 summarizes the benefits to be derived from taking advantage of the seven cloud offerings just discussed.

Table 1. Benefits of Cloud Offerings
Cloud Offering: Benefits over Traditional Model
• SaaS: flexible, lightweight client support
• Utility computing: on-demand storage and service with lower overall costs
• Disaster recovery: increased flexibility and reduced costs
• APIs: easier deployment of new services using the existing cloud
• Managed services: flexibility, increased reliability, and reduced costs
• Large dataset processing: increased performance and scalability
• Cloud integration: linking of disparate services using cloud infrastructure

CLOUD ADVANTAGES
The overarching benefit of the cloud is simple in theory: it offers computing that is better distributed and managed as a result of applying economies of scale. These economies of scale are realized by moving the computing hardware from a myriad of enterprise data centers to centrally managed, distributed data centers run by firms that exist for the purpose of operating such data centers. The distributed hardware is less expensive for enterprise users because they share purchasing, operating, and maintenance costs, while the cloud operators leverage their expertise, purchasing relationships, and management abilities. This model results in advantages, discussed in the following paragraphs, that potentially benefit enterprises using cloud services as part of their information infrastructure.

Desktop Support
Cloud infrastructure can flexibly support and reduce costs for both desktop virtualization and traditional desktop services. Enterprises do not have to maintain a software baseline, end-user licenses, drivers, version control, and multiple images for either type of deployment. Furthermore, cloud computing can enable an enterprise to detach end-user hardware from its managed services, allowing individual users, departments, and groups to own and manage this hardware according to their specific needs.

Mobility and Flexibility
Cloud infrastructure can potentially support better mobility and flexibility, giving end users a consistent look and feel while providing them access to the same set of services and resources from disparate locations. A user's applications, documents, and services thus look the same in the office, at home, or anywhere else an Internet-connected computer is accessed.

Cost and Operational Advantages
For large enterprises, on-demand computing offers significant cost and operational advantages. In a recent example, a large enterprise set up, executed, and tore down a computationally intensive process on a virtual server in only 20 minutes, at a cost of $6.40. To achieve the same results with dedicated physical hardware, the process would have taken 12 weeks and cost tens of thousands of dollars. When the process was complete, the enterprise no longer had to manage or disposition the hardware. [9]

Scalable Services
An enterprise's computing requirements are never static and often peak around a specific time or event. As a result, the enterprise plans and designs its data center hardware and network to handle the maximum computing requirement, which results in idle capacity. This approach not only is inefficient from a capital and operational perspective, but can also prove to be an engineering challenge. When peak demand is greater than forecasted, capacity cannot easily be added to the infrastructure to immediately address that demand. This can result in lost revenues; unhappy customers; and, potentially, a strategic competitive disadvantage. The cloud utility computing model can make scalable services available on demand. The enterprise pays only for services used, while knowing that the scalability to offer additional resources is available.

In a recent US presidential town hall meeting, the White House used a commercial provider to augment network and server capacity to support the additional demand on its network. [10]

Private and Hybrid Clouds
Large enterprises that own and manage their own infrastructure have the ability to leverage private clouds, where services are provided via a cloud implementation with some or all hardware and resources managed by the infrastructure. Software and interfaces developed to allow cloud resources to perform in a shared environment can be obtained, licensed, and deployed on an enterprise's hardware and network, allowing scalable deployment of a cloud infrastructure without relying on an outside service provider. Private clouds can also leverage desktop virtualization, SaaS, and other flexible information technology (IT) offerings. Processing advantages using both proprietary and open-source frameworks (such as Hadoop) can be realized by deploying private clouds. However, the private cloud still forces an enterprise to retain ownership of and maintain server hardware, which removes the single largest advantage of cloud computing. [11]

A potential new approach may be a hybrid of a private and public cloud. In this hybrid, hardware manufacturers or cloud offerors construct a private hardware and software network, either collocated with the customer or in a separate managed environment, that is purpose-built and maintains the integrity of the customer's data in an air-gap environment. This model can leverage the provider's hardware and software pricing economies of scale, as well as the provider's expertise in managing such services, while providing the customer the advantages of a private cloud.

Cloud Software and Applications
Cloud benefits will be further realized as software is designed around the cloud. Today, many applications have been designed and written to run as a single instance serving one set of users from a single server. New applications can be written to perform better in a cloud environment and purpose-built to work in that distributed architecture. These applications can be designed to scale so that additional servers, users, or capacity can be added without modifying a single line of code. New applications can also be written to balance the load so that multiple, identical instances of the application exist on multiple servers, with varying users accessing

these servers without noticing a difference. These identical instances allow efficient and elegant recovery from failure or disruptions.
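A minimal sketch of that design point follows: the request handler keeps no session state of its own, so identical copies of the process can be started on additional servers without code changes. The in-memory dictionary here merely stands in for whatever shared external store (database or distributed cache) a real deployment would use; it is a placeholder, not a recommendation.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Stand-in for a shared, external state store reachable by every instance.
# In a real deployment this would be a network service, not a local dict.
SHARED_STORE = {}

class StatelessHandler(BaseHTTPRequestHandler):
    """Holds no per-session state, so any identical instance on any
    server can answer any request routed to it by a load balancer."""

    def do_GET(self):
        key = self.path.strip("/") or "default"
        body = json.dumps({"key": key, "value": SHARED_STORE.get(key)}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Start as many copies of this process as demand requires; no instance
    # is special, which also simplifies recovery from a failed server.
    HTTPServer(("0.0.0.0", 8080), StatelessHandler).serve_forever()
```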

CLOUD CHALLENGES
With the advantages that cloud computing offers, enterprises should be lining up to adopt the new model and to begin migrating infrastructure to the cloud or purchasing new cloud services. While there has been an uptick in demand, significant challenges still need to be addressed before an enterprise can effectively leverage the advantages of the cloud.

Security
A key concern for any enterprise deploying cloud computing is security. Is data protected now that it is no longer within the confines of the data center, but is instead distributed and traversing the Internet? In a recent poll, IT executives cited security as the number one hurdle to deploying cloud services, as seen in Figure 4. Enterprises interested in deploying cloud services must consider the security of both the service provider and any hosting services the provider uses. An independent analysis of the hosting provider may afford better insight into the end-to-end security of the implementation. [12]

The jurisdiction where the data is held poses another security issue. An enterprise may be operating in Europe, where privacy laws are strong. It may also use a provider that hosts services in the United States, a jurisdiction more favorable to lawful data interception. A potential solution is to have the cloud services provider act as the data custodian only, not as the owner. End-to-end data encryption further removes the cloud provider from data ownership.

While some contend that cloud computing services can lead to less security, several logical arguments point, instead, to enhanced security. Human error is the single largest cause of security breaches. To remove humans from the equation, large-scale cloud operators automate as many processes as possible. Employee access to data is another significant security issue that can be managed by removing large amounts of readily accessible data. This may increase the level of security. [13]

To improve security, cloud proponents advocate moving data that is at rest off of personal machines because documents, databases, and other files are often the target of enterprising thieves.

[Figure 4 data, from an IDC Enterprise Panel survey (August 2008, n=244). Respondents rated the challenges of the cloud/on-demand model on a 1-to-5 significance scale; the percentage rating each challenge 3, 4, or 5 was: Security, 88.5%; Performance, 88.1%; Availability, 84.8%; Hard to Integrate With In-House IT, 84.5%; Not Enough Ability to Customize, 83.3%; Worried That Cloud Will Cost More, 81.1%; Bringing Back In House May Be Difficult, 80.3%; Not Enough Major Suppliers Yet, 74.6%.]

Figure 4. Poll [12]

Moving this data to the cloud, where it can be accessed only by legitimate, authenticated users, increases the security level. Thus, a stolen laptop is simply a piece of hardware that needs to be replaced, not a data breach that costs thousands of hours and dollars to mitigate and potentially results in lost revenue and customers. [14]

Another argument put forth by cloud proponents is that the economies of scale leveraged by the cloud extend to security. Large-scale providers of cloud infrastructure have significant expertise, resources, and capital to address security issues. Purpose-built, secure architectures maintained by security experts have the potential to outperform those developed by firms whose core competencies lie outside of IT.

Availability and Reliability
Along with security, IT executives cite availability and reliability as key concerns when migrating to cloud services. Large providers of cloud services have suffered from outages and performance bottlenecks. While service-level agreements may include penalties for downtime, enterprises are not reimbursed for lost revenues and lost customers. The choice for enterprises comes down to whether they feel they can operate their infrastructure more reliably than an outside party, as well as what components and services, in terms of risk and availability, they want to control in house versus extending to a cloud infrastructure. [12]

Regulatory Hurdles
Regulatory hurdles may also prove challenging for cloud providers. Legislation may be enacted that can affect location, use, and disclosure requirements for personal and financial data. For example, the Health Insurance Portability and Accountability Act (HIPAA) of 1996 requires personal health and identification information to be protected and encompasses protecting it from unencrypted transmission over open networks or from downloading to public or remote computers. HIPAA also requires security controls, including access control and audit policies. One cloud storage provider claims to offer a solution that meets all HIPAA requirements for privacy and security. However, enterprises dealing with this type of data must thoroughly understand the regulatory requirements and ensure that the cloud provider meets or exceeds them. [15]

A regulatory hurdle for government agencies seeking to take advantage of the cloud paradigm is information security legislation such as the Federal Information Security Management Act

(FISMA). Compliance with FISMA requires an agency to explicitly define the security controls of any new IT system prior to its authorization for use. This extends to any services provided by any other agency, contractor, or other source. When an IT system is outsourced to a contractor, the new system must have security processes and procedures identical to those of the system being replaced. This hurdle can be overcome by close collaboration between the agency and the provider and the application of clear, thorough requirements. [16]
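One practical counterpart to the end-to-end encryption point raised in the security discussion is client-side encryption applied before data ever reaches the provider. The sketch below uses the Fernet recipe from the open-source cryptography package purely as an illustration; the key-management approach and the storage upload call are placeholders, and any real deployment would have to satisfy the applicable HIPAA or FISMA controls.

```python
from cryptography.fernet import Fernet  # pip install cryptography

# The key stays with the enterprise; the cloud provider only sees ciphertext.
key = Fernet.generate_key()       # in practice, held in an enterprise key store
cipher = Fernet(key)

record = b"patient-id=1234; diagnosis=..."   # illustrative sensitive data at rest
ciphertext = cipher.encrypt(record)          # encrypt before upload

# upload_to_cloud(ciphertext)  -- placeholder for the provider's storage API call

restored = cipher.decrypt(ciphertext)        # only key holders can read the data
assert restored == record
```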

Table 2. Traditional Infrastructure Versus Cloud Infrastructure
[Table 2 rates the traditional and cloud infrastructures against seven attributes: performance, scalability, cost, security, flexibility, manageability, and availability.]

Interoperability and Portability
Finally, data and information interoperability and portability can pose a challenge for those wanting to use cloud services. Enterprises will want the option to:
• Change cloud providers without reconfiguration
• Migrate between self-managed and cloud-managed services
• Deploy new applications and services, either on their own hardware or on cloud hardware, that will have to interface with the existing services
This challenge also extends to applications and data management with respect to the need to use different management tools and techniques to interface with data on and off the cloud. Standard, well-defined, and documented interfaces, both for application and management traffic, can reduce potential interoperability issues. [17]

Addressing the Challenges
The potential pitfalls associated with cloud service deployment have given pause to some major cloud services providers. As a result, several of these providers, along with enterprises wishing to benefit from cloud computing, have joined together to publish the Open Cloud Manifesto. This document is intended to encourage a dialogue among the providers and users of cloud computing services about the infrastructure requirements and the need for open, interoperable standards, including appropriate existing and adopted standards, as well as new standards when warranted. [17]

ECONOMIC ANALYSIS OF CLOUD MODELS
Table 2 compares and contrasts the traditional and cloud infrastructures with respect to some of the advantages and challenges of cloud computing just discussed. Any economic analysis of cloud models must be performed against this backdrop. To determine whether migration to the cloud is economically viable, an enterprise must compare the costs of the cloud's wide-area network bandwidth, storage, and central processing unit (CPU) with the costs of its own network, hardware, and software. Complicating this process is the amortization of power, space, and cooling costs across the enterprise equipment, as well as the cost of operations with this equipment. Calculating and allocating both of these costs is challenging. One estimate is that computing, storage, and bandwidth costs actually double when the facilities costs are amortized across them. [18]

In this context, a strong supporting argument for migration can be made that the economic benefits of the cloud architecture come from its elasticity: the ability to add or remove resources quickly and easily as required. Since data centers are often provisioned for peak load, actual server use can average as low as 5%. In a model cited by the University of California at Berkeley, a data center with peak use of 500 servers, trough use of 100 servers, and average use of 300 servers would spend 1.7 times as much on server hardware as is actually needed to meet the average demand over a 3-year period. [19]

Along with its financial analysis, the study by the University of California at Berkeley cited above recommends that a detailed analysis be conducted of the performance, bandwidth, and processing time requirements of any application to be executed via a cloud. While the cloud model may make sense financially, other performance parameters may prevent deployment in the cloud. [19]
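The arithmetic behind the elasticity argument can be sketched as follows, using the peak/trough/average figures from the Berkeley model cited above. The hourly demand curve and the assumption that owned servers are paid for around the clock are illustrative simplifications, not part of the cited study.

```python
def provisioning_ratio(peak_servers: int, average_servers: int) -> float:
    """Hardware bought for peak load versus hardware actually used on average."""
    return peak_servers / average_servers

def ownership_vs_elastic(hourly_demand, owned_servers: int) -> float:
    """Server-hours paid for under ownership versus under per-use pricing."""
    used_hours = sum(hourly_demand)                # elastic model: pay only for use
    owned_hours = owned_servers * len(hourly_demand)
    return owned_hours / used_hours

# Figures from the Berkeley example: peak 500, trough 100, average 300 servers.
print(round(provisioning_ratio(500, 300), 2))      # ~1.67x more hardware than needed

# A toy daily demand curve with the same trough/peak/average shape.
demand = [100] * 8 + [500] * 8 + [300] * 8
print(round(ownership_vs_elastic(demand, 500), 2))  # ownership pays ~1.67x the used hours
```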


SUMMARY OF CURRENT CLOUD PROVIDERS

A variety of cloud computing services providers exists today, with new entrants and modifications in services offerings common. Current key players include:
• Amazon Web Services LLC (AWS)
• Dell Inc.
• Salesforce.com
• Hewlett-Packard (HP)
• International Business Machines (IBM)
• Sun Microsystems

Amazon, famous for the infrastructure that powers its online retail service, has leveraged that infrastructure to provide a portfolio of services known as Amazon Web Services. AWS's offerings include:
• Amazon Elastic Compute Cloud (Amazon EC2), which provides configurable, on-demand computing capacity in the cloud
• Amazon Simple Storage Service (Amazon S3), which is scalable, distributed storage
• Amazon SimpleDB, which leverages the cloud to provide database services
AWS has stated that its cloud services have now used capacity beyond the excess needed for internal operations. [20] Oracle has also partnered with AWS to deploy Oracle software and back up Oracle databases in the AWS cloud computing environment. [21]

Dell, the personal computer manufacturer, recently established its Data Center Solutions (DCS) division to offer design-to-order cloud computing hardware and services under the rubric Dell Cloud Computing Solutions. One DCS offering includes hosted rental of CPU cycles per hour, which is essential to providing on-demand capacity. [22]

Salesforce.com, the online customer relationship management firm, is offering its Force.com platform, which provides a programming model and cloud-based run-time environment where developers can pay on a per-login basis to build, test, and execute code. Underpinning this platform is the already widely used Salesforce.com infrastructure. [23]

HP has offered cloud computing services since 2005. Dubbed by HP as Flexible Computing Services, this offering provides computing power as a service in a utility model. [24]

Hardware and software giant IBM has introduced a suite of services known as IBM Smart Business. IBM also offers hardware, software, and configuration assistance for enterprises interested in deploying private clouds. [25]

Sun has announced its forthcoming Sun Cloud, a public compute and store infrastructure targeted at developers, students, and startups. Sun, like other companies, also provides hardware and software for private clouds, as well as technical assistance. [26]

This review of current providers is by no means all inclusive. In addition to these established firms, many small- and medium-sized startups are providing a variety of competitive cloud offerings by leveraging their infrastructure or building services on top of the infrastructures provided by the key players. For up-to-date information on vendors and services, refer to reference materials or contact providers directly.

CONCLUSIONS

As enterprises realize the benefits of the cloud, it is possible that cloud computing will become as ubiquitous as client-server computing, virtualization, and other architectures that brought about a technological paradigm shift. It is also possible that cloud computing is overhyped and oversold, much like previous concepts that claimed to be the next big thing. However, enterprises are experiencing real benefits from using cloud computing.

Enterprises that want to reap the benefits of cloud computing must realize that the decision to migrate is neither quick nor easy. Key enterprise personnel must fully understand the provider's offering and be ready to discuss the challenges and obstacles they will face together in migrating to the cloud. Through close, open collaboration with cloud providers, enterprises can focus on delivering the right technologies and services to their personnel and customers while using new technologies and architectures and realizing cost and operational benefits.

TRADEMARKS
Amazon Web Services, Amazon Elastic Compute Cloud, Amazon EC2, Amazon Simple Storage Service, Amazon S3, and Amazon SimpleDB are trademarks of Amazon Web Services LLC in the US and/or other countries. Apache and Apache Hadoop are trademarks of The Apache Software Foundation. Dell Cloud Computing Solutions is a trademark of Dell Inc. Google is a trademark of Google Inc. Hewlett-Packard, HP, and Flexible Computing Services are trademarks of Hewlett-Packard Development Company, L.P. IBM is a registered trademark of International Business Machines Corporation in the United States. Linux is a registered trademark of Linus Torvalds. Mac OS is a trademark of Apple, Inc., registered in the United States and other countries. Microsoft and Windows are registered trademarks of Microsoft Corporation in the United States and/or other countries. Oracle is a registered trademark of Oracle Corporation and/or its affiliates. Salesforce.com is a registered trademark of salesforce.com, inc. Sun Microsystems is a registered trademark of Sun Microsystems, Inc., in the United States and other countries.

REFERENCES
[1] D. Chappell, "A Short Introduction to Cloud Platforms: An Enterprise-Oriented View," Chappell & Associates, August 2008, http://www.davidchappell.com/CloudPlatforms--Chappell.pdf.
[2] A. Mohamed, "A History of Cloud Computing," ComputerWeekly.com, March 27, 2009, http://www.computerweekly.com/Articles/2009/03/27/235429/a-history-ofcloud-computing.htm, accessed June 15, 2009.
[3] F. Chong and G. Carraro, "Architecture Strategies for Catching the Long Tail," Microsoft Corporation paper, April 2006, http://msdn.microsoft.com/en-us/library/aa479069.aspx, accessed June 16, 2009.
[4] J. Foley, "A Definition of Cloud Computing," InformationWeek, September 26, 2008, http://www.informationweek.com/cloud-computing/blog/archives/2008/09/a_definition_of.html, accessed July 11, 2009.
[5] BlueLock, Inc., http://blog.bluelock.com/blog/bluelock/0/0/cloud-computing-a-five-layer-model.
[6] R. Martin and J.N. Hoover, "Guide to Cloud Computing," InformationWeek, June 21, 2008, http://www.informationweek.com/news/services/hosted_apps/showArticle.jhtml?articleID=208700713, accessed June 16, 2009.
[7] E. Knorr and G. Gruman, "What Cloud Computing Really Means," InfoWorld, April 7, 2008, http://www.infoworld.com/print/34031, accessed June 17, 2009.
[8] S. Hood, "MapReduce at Rackspace," Mailtrust, January 23, 2008, http://blog.racklabs.com/?p=66, accessed June 25, 2009.
[9] R. Mullin, "The New Computing Pioneers," Chemical & Engineering News, Vol. 87, No. 21, May 25, 2009, pp. 10-14, http://pubs.acs.org/cen/coverstory/87/8721cover.html.
[10] M. Arrington, "White House Using Google Moderator for Town Hall Meeting. And AppEngine. And YouTube.," TechCrunch, March 24, 2009, http://www.techcrunch.com/2009/03/24/white-house-using-googlemoderator-for-town-hall-meeting/, accessed June 25, 2009.
[11] B. Gourley, "Cloud Computing and Cyber Defense," white paper provided to the National Security Council and Homeland Security Council as input to the White House Review of Communications and Information Infrastructure, March 21, 2009, http://www.whitehouse.gov/files/documents/cyber/Gourley_Cloud_Computing_and_Cyber_Defense_21_Mar_2009.pdf.
[12] D. Yachin, "Cloud Computing: It Is Time for Stormy Weather," IDC Emerging Technologies PowerPoint presentation, slide 10, http://www.grid.org.il/_Uploads/dbsAttachedFiles/IDC_Cloud_Computing_IGT_final.ppt.
[13] A. Croll, "Cloud Security: The Sky is Falling!," The GigaOM Network, December 11, 2008, http://gigaom.com/2008/12/11/cloud-security-the-sky-is-falling/, accessed June 25, 2009.
[14] R. Preston, "Down To Business: Customers Fire a Few Shots at Cloud Computing," InformationWeek, June 14, 2008, http://www.informationweek.com/news/services/data/showArticle.jhtml;jsessionid=AY130JZGGFXPCQSNDLPCKHSCJUNN2JVN?articleID=208403766&pgno=1&queryText=&isPrev, accessed June 25, 2009.
[15] "Creating HIPAA-Compliant Medical Data Applications with Amazon Web Services," AWS white paper, April 2009, http://awsmedia.s3.amazonaws.com/AWS_HIPAA_Whitepaper_Final.pdf, accessed June 25, 2009.
[16] J. Curran, "A Federal Cloud Computing Roadmap," ServerVault Corp presentation, March 22, 2009, http://www.scribd.com/doc/13565106/A-Federal-CloudComputing-Roadmap.
[17] Open Cloud Manifesto, http://www.opencloudmanifesto.org, accessed June 25, 2009.
[18] J. Hamilton, "Cost of Power in Large-Scale Data Centers," Perspectives blog, November 28, 2008, http://perspectives.mvdirona.com/2008/11/28/CostOfPowerInLargeScaleDataCenters.aspx, accessed July 19, 2009.
[19] M. Armbrust et al., "Above the Clouds: A Berkeley View of Cloud Computing," Electrical Engineering and Computer Sciences, University of California at Berkeley, http://www.eecs.berkeley.edu/Pubs/TechRpts/2009/EECS-2009-28.pdf, accessed July 15, 2009.
[20] Amazon Web Services LLC, http://aws.amazon.com, accessed July 19, 2009.
[21] Oracle Technology Network, http://www.oracle.com/technology/tech/cloud/index.html, accessed July 19, 2009.
[22] Dell Cloud Computing Solutions, http://www.dell.com/content/topics/global.aspx/sitelets/solutions/cluster_grid/project_management?c=us&l=en&cs=555, accessed July 19, 2009.
[23] R.B. Ferguson, "Salesforce.com Unveils Force.com Cloud Computing Architecture," eWeek, January 17, 2008, http://www.eweek.com/c/a/EnterpriseApplications/Salesforcecom-UnveilsForcecom-Cloud-Computing-Architecture/, accessed July 19, 2009.
[24] Hewlett-Packard Utility Computing Services, http://h20338.www2.hp.com/services/cache/284425-0-0-0-121.html, accessed July 19, 2009.
[25] IBM Cloud Computing, http://www.ibm.com/ibm/cloud/smart_business/, accessed July 19, 2009.
[26] Sun Microsystems Sun Cloud Computing, http://www.sun.com/solutions/cloudcomputing/offerings.jsp, accessed July 19, 2009.

BIOGRAPHY
Brian Coombe, deputy project manager for the Ground-Based Midcourse Defense project based in Huntsville, Alabama, oversees the execution of Bechtel's scope, which includes the design and construction of facilities and test platforms designed to protect the US from attack by long-range ballistic missiles.

Previously, as a program manager in the Bechtel Federal Telecoms organization, Brian supported a major facility construction, information technology integration, and mission transition effort. Prior to holding this position, he was the program manager of the Strategic Infrastructure Group, which included overseeing work involving telecommunications systems and critical infrastructure modeling, simulation, analysis, and testing. As Bechtel's technical lead for all optical networking issues, Brian draws on his extensive knowledge of wireless and fiber-optic networks. He was the Bechtel Communications Training, Demonstration, and Research (TDR) Laboratory's resident expert for optical network planning, evaluation, and modeling, and supported planning and design efforts for AT&T's wireless network and Verizon's Fiber-to-the-Premises architecture. Before joining Bechtel in 2003, Brian was a systems engineer at Tellabs.

Brian is a member of the IEEE; the Project Management Institute; SAME; ASQ; NSPE; MSPE; INSA; AFCEA; the Order of the Engineers; and Eta Kappa Nu, the national electrical engineering honor society. He authored six papers and co-authored one in the Bechtel Communications Technical Journal (formerly, Bechtel Telecommunications Technical Journal) between August 2005 and September 2008. His most recent paper, Desktop Virtualization and Thin Client Options, appeared in the December 2008 (inaugural) issue of the Bechtel Technology Journal, a compilation of papers from each of Bechtel's six global business units. Brian's tutorial on Micro-Electromechanical Systems and Optical Networking was presented by the International Engineering Consortium. In 2009, he was selected to attend the National Academy of Engineering's Frontiers of Engineering conference, an annual 3-day meeting that brings together 100 of the nation's outstanding young engineers from industry, academia, and government to discuss pioneering technical and leading-edge research in various engineering fields and industry sectors.

Brian earned an MS in Telecommunications Engineering and is completing an MS in Civil Engineering, both at the University of Maryland. He holds a BS with honors in Electrical Engineering from The Pennsylvania State University. Brian is a licensed Professional Engineer in Maryland and a certified Project Management Professional. He is a Six Sigma Yellow Belt.

PERFORMANCE ENGINEERING ADVANCES TO INSTALLATION


Aleksey A. Kurochkin
aakuroch@bechtel.com

Issue Date: December 2009

Abstract: Site integration and wireless system launches have always been intense and demanding activities for both the operator and the general contractor performing the work. The new network is typically overlaid on the existing network, which must remain fully operational during the overlay process. Ideally, the upgrade should not noticeably affect customer experience in any part of the network. Inevitably, however, some of the existing sites must be altered to accommodate the new technology's equipment. These changes, in turn, may cause existing coverage patterns to degrade. Such degradation can arise from high-level network design problems, poor workmanship in installation, or unexpected hardware defects in the new components. It is imperative to quickly identify the source of the degradation and move toward a resolution. Failure to do so may result in significant loss of revenue and a reduced level of customer satisfaction due to the amount of time taken to resolve the problem. Much attention is being given to using performance engineering to quickly and accurately isolate and classify degradation problems based on key performance indicator (KPI) statistics for the existing network. Also, processes exist that can reduce the time needed to rectify problems, thus improving overall customer experience during the network upgrade.

Keywords: component failure, degradation, global system for mobile communications (GSM), key performance indicator (KPI), legacy network, performance engineering, sector verification testing, site failure, system launch, tower-mounted amplifier (TMA), universal mobile telecommunications system (UMTS) overlay, workmanship

INTRODUCTION
Today's competitive cellular environment forces operators to upgrade their networks more often than they did in the past. Moreover, the new technology being implemented is typically only a preliminary version based on a new standard and will eventually be upgraded. As a result of delays in the full-service launch of new systems, operators must rely on the legacy system to generate revenue. Meanwhile, upgrades cause the legacy network to be in an almost constant state of change. These changes can produce service degradation and make it challenging for the operator to maintain consistently high levels of service quality. Some statistics even show dangerously high failure rates in networks being upgraded. Since the legacy system generates revenue and is the base for any future upgrades, it must be protected.

This paper explores service degradation issues and provides some useful key performance indicator (KPI) changes. If compared correctly, these KPI changes can help the operator to identify and mitigate the causes of degradation in global system for mobile communications (GSM)/general packet radio service (GPRS)/enhanced data rates for GSM evolution (EDGE) networks. To the extent that degradation can be detected and resolved during the installation process, as proposed in this paper, the legacy system may be protected from the adverse effects introduced by the new technology installation.

SOURCES OF QUALITY DEGRADATION IN LEGACY SYSTEMS
A cellular system upgrade typically occurs on a market level, when hundreds, or even thousands, of cell sites start to operate cohesively under a new technology to serve a new type of subscriber or provide the existing subscribers with a new type of service. When state-of-the-art technology requires coaxial cable and antenna sharing [1] on many sites, the launch becomes not only critical to the new system, but also very intrusive to the revenue-generating legacy system.

ABBREVIATIONS, ACRONYMS, AND TERMS


BEAST: Basic Engineering Analysis and Statistical Tool
BSS: base station subsystem
EDGE: enhanced data rates for GSM evolution
GPRS: general packet radio service
GSM: global system for mobile communications
HML: human-machine language
ISDN: integrated services digital network
KPI: key performance indicator
LAPD: Link Access Procedure-D
NMS: network management system
NOC: network operation center
PDU: protocol data unit
RF: radio frequency
RFDS: RF data sheet
RXAIT: receive antenna interface tray
TCH: traffic channel
TMA: tower-mounted amplifier
TRx: transceiver
UMTS: universal mobile telecommunications system

For example, a universal mobile telecommunications system (UMTS) technology overlay is intrusive to the existing GSM network when both technologies share the antenna systems. Wireless network operators measure the impact of this sharing by monitoring changes in the legacy system KPIs such as traffic, retainability, dropped calls, and accessibility. These KPIs can be measured on various levels (network, cluster, and individual cell site) to provide operators with enough detail to troubleshoot problems occurring during the launch. Since cell sites are typically upgraded one at a time before the launch, there is a small window of opportunity to resolve an issue developing at a legacy site due to the upgrade before it negatively affects subscribers in a wide geographic area. The problem should be corrected during a period of low traffic (typically lasting only a few hours). When the site goes back on line, wireless customers should not notice any difference in service during their daily commute.

The wireless network operators employ performance engineers to check critical alarms and the KPIs immediately after a site returns to service after the upgrade. If all alarms are clear, the site is accepted back into the network. Otherwise, troubleshooting begins as soon as enough statistics are gathered or as soon as practical. Various data-processing tools are available for use in interpreting the raw data from the base station subsystem (BSS). Because a service provider cannot afford to have its service degraded even for a short time, onsite testing and human-machine language (HML) terminals are used to immediately evaluate service after changes are implemented at a site. Hourly reports generated by software such as the Basic Engineering Analysis and Statistical Tool (BEAST) are used to monitor short-term BSS statistics, and operator- and industry-developed 24-hour and longer-term average trending tools are used to support more in-depth analysis. These tools provide better trending capabilities and alert the performance engineer to the changes at adjacent sites, which can signal abnormal site performance in the highly interdependent cellular system.

The performance engineer's primary challenge is to differentiate between degradation due to changed design parameters and degradation due to a fault introduced in the radio frequency (RF) path during the installation process. Marginal degradation across all KPIs indicates the first scenario, while significant deviation from the pre-upgrade performance requires a field diagnosis of the RF paths of the underperforming sector. However, the following questions arise: How much degradation requires troubleshooting in the field, and what issues need to be diagnosed to fix the problem? The decision regarding which problems require field troubleshooting has to be made judiciously, since such troubleshooting is costly and may divert resources needed for the new network rollout. The performance engineer's role is to assess the problem quickly and make recommendations based on the initial classification of the source of the problem. The following sections describe the classifications of the common causes of legacy service degradations.
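The decision rule just described can be sketched as a simple comparison of pre- and post-upgrade KPI values. The KPI names, units, and thresholds below are illustrative assumptions; an operator would substitute its own baselines and acceptance criteria.

```python
# Illustrative thresholds separating "marginal" from "significant" degradation.
SIGNIFICANT = {
    "dropped_call_rate": 0.5,   # percentage-point increase
    "accessibility": 2.0,       # percentage-point decrease
    "traffic_erlangs": 15.0,    # percent decrease
}

def degradation(before: dict, after: dict) -> dict:
    """Per-KPI degradation, expressed so that larger numbers are worse."""
    return {
        "dropped_call_rate": after["dropped_call_rate"] - before["dropped_call_rate"],
        "accessibility": before["accessibility"] - after["accessibility"],
        "traffic_erlangs": 100.0 * (before["traffic_erlangs"] - after["traffic_erlangs"])
                           / before["traffic_erlangs"],
    }

def recommend(before: dict, after: dict) -> str:
    """Loose paraphrase of the rule above: marginal degradation across the board
    points at design parameters; a significant deviation calls for field diagnosis."""
    deg = degradation(before, after)
    significant = [k for k, v in deg.items() if v > SIGNIFICANT[k]]
    if significant:
        return "field diagnosis of RF paths: significant deviation in " + ", ".join(significant)
    if any(v > 0 for v in deg.values()):
        return "review design changes (downtilt, azimuth, added insertion loss)"
    return "accept sector back into the network"

before = {"dropped_call_rate": 1.1, "accessibility": 97.5, "traffic_erlangs": 42.0}
after  = {"dropped_call_rate": 1.2, "accessibility": 92.0, "traffic_erlangs": 41.0}
print(recommend(before, after))   # the accessibility drop triggers a field check
```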

SERVICE DEGRADATION CLASSIFIED BY SOURCE

Service degradation needs to be classified by source so that different resources can be allocated to rectify problems quickly and efficiently.

Design-Related Degradation
Although every RF design engineer and systems engineer intends to make an overlay of new technology non-intrusive to the legacy system, this goal is not achievable in real networks. Both the new and legacy technologies have to share antenna systems to leverage the real estate at the site. To share the same coaxial cable, various diplexers, duplexers, and combiners need to be introduced on existing sites (as shown schematically in Figure 1). On average, each of these new RF components introduces a 0.1 to 0.6 dB loss to the legacy system path. While the impact of a 1 to 2 dB loss on the individual sector link budget may seem insignificant, the current legacy system was simply not optimized to support this extra loss across all sites. Even if the performance of most of the legacy sites is not hindered by this loss, some sites with stretched coverage will develop a traffic issue due to the cumulative effect of this loss and shrinking coverage. Therefore, during the technology upgrade, the legacy system must also be altered to maintain service levels. Some legacy sites may receive new azimuth and/or downtilt designs. However, the legacy system alterations can cause other problems. Sometimes, the changes fail to preserve the subscriber level of service, and the legacy cluster itself needs to be optimized. Ultimately, either more sites are required to support the legacy system, or some level of quality degradation has to be accepted.

Component Failure
As discussed earlier, the new RF components introduced

into the legacy network path share antenna systems at a site. Throughout the industry, there are many examples of intermodulation issues caused by various active or even passive components [2] that have been used in combination but have not been tested to the site-specific conditions prior to rollout. While this paper does not cover this source of legacy system service degradation, the reader is encouraged to check references [2] and [3], as well as the paper Intermodulation Products of LTE and 2G Signals in Multitechnology RF Paths (pages 21-32 of this issue of the Bechtel Technology Journal), which explain the processes and the ways to avoid component failure or non-performance issues during the network launch.

Workmanship
Figure 1 does not adequately illustrate the complexity of the state-of-the-art cellular site.

[Figure 1 shows a dual-band or single-band antenna fed by two RF coax cables; diplexers combine the GSM BTS and UMTS Node B transmit/receive paths (GSM Tx1/Rx1 with UMTS Tx3/Rx3, and GSM Tx2/Rx2 with UMTS Tx4/Rx4), with duplexers on each equipment port.]

Figure 1. Feeder Configuration for UMTS and GSM Antenna Sharing


As a result of many network consolidations and technology upgrades, a cell site is a conglomerate of RF components from various manufacturers connected by coaxial cables marked with color schemes from previous owners, such as shown in Figure 2. Poor workmanship can affect performance significantly. One connector tightened loosely can introduce as much service loss as 200 feet (60 meters) of cable. One crossed jumper can cut a site's traffic at least in half. If one connector has substandard weatherproofing, moisture entering the connector will degrade performance in that sector within a couple of months. Therefore, much depends on the skill of the technicians and the quality of their workmanship on each connector.
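The cumulative effect of these design- and workmanship-related losses on a sector's link budget can be sketched with simple addition. The per-component values and the 2 dB design margin below are assumptions drawn loosely from the ranges discussed above, not measured data.

```python
# Representative added losses, in dB (assumed illustrative values).
NEW_COMPONENTS = {"diplexer": 0.3, "duplexer": 0.4, "extra jumper/connectors": 0.25}
LOOSE_CONNECTOR_PENALTY = 0.5   # assumed value for one poorly tightened connector

def added_path_loss(components: dict, loose_connectors: int = 0) -> float:
    """Total extra loss inserted into a legacy sector's RF path."""
    return sum(components.values()) + loose_connectors * LOOSE_CONNECTOR_PENALTY

def margin_remaining(link_budget_margin_db: float, extra_loss_db: float) -> float:
    """How much of the sector's original design margin survives the overlay."""
    return link_budget_margin_db - extra_loss_db

extra = added_path_loss(NEW_COMPONENTS, loose_connectors=1)
print(f"extra loss: {extra:.2f} dB")                          # ~1.45 dB in this example
print(f"margin left: {margin_remaining(2.0, extra):.2f} dB")  # against an assumed 2 dB margin
```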

TIMELY DETECTION AND TROUBLESHOOTING
If degradation problems are discovered while the original crew is still on site, these problems can usually be corrected within a couple of hours. However, after the crew has left the site, it may take days or even weeks and several site visits to troubleshoot the same problems because the same crew may not be available and another crew needs time to learn the site details. Meanwhile, the operator is losing revenue from the legacy system and the implementation team is losing money on troubleshooting. Table 1 lists major KPIs, typical causes, reasons for degradation, and troubleshooting methods.

The reports generated by the data-processing tools discussed previously, along with a record of changes at the site, can provide insight into reasons for the performance degradation at that site. Changes in the KPIs shown by these reports have to be interpreted in conjunction with the impact of the changes carried out at the site as per the RF data sheet (RFDS). For example, an RFDS may call for new tower-mounted amplifiers (TMAs) to be installed on the existing coaxial lines. Installation of TMAs should increase the traffic in the sector because the TMAs compensate for the transmission line losses and effectively increase the site coverage. However, if a TMA is malfunctioning or not powering up, accessibility failures and handover failures to the sector from neighboring sectors increase. These failures not only affect the performance of the sector under consideration, but also that of neighboring sectors. If a performance engineer has discovered compromised performance instead of the expected increase in traffic, TMA issues should be considered along with other component failures or workmanship issues. Typically, this type of analysis has been carried out after the site has been brought back on line and has experienced performance issues for some time. As discussed earlier, this approach entails considerable expense for the operator in terms of both revenue and service quality. Therefore, it would be interesting to explore whether steps in the analysis could be introduced earlier in the installation process before the site resumes serving customers and capital expenditures are increased.

SECTOR VERIFICATION TESTING

The transmit-path-testing troubleshooting method listed in Table 1 has a protective effect on network quality if it is used during the final hours of installation instead of during troubleshooting. In this scenario, the performance engineer checks the reported sector for critical alarms after physical work at the site is finished but the installation crew is still on site. If the sector is clear of alarms, an implementation team member travels to a designated spot in the middle of the sector and makes several test calls to the network operation center (NOC). At the NOC, the performance engineer shuts down and starts up the sector transceivers (TRxs) in sequence to test both RF portions of the call path. If any of the calls fail or alarms are discovered, the crew checks the suspected connection in a specific path and rectifies the issue immediately. Thus, most service-related issues are eliminated before the site comes on line and starts interacting with other cell sites and serving customers.

Figure 2. Coaxial Cable Runs on an Existing Tower


Table 1. KPI Degradation, Typical Causes, and Troubleshooting Methods


Dropped Calls or Retainability (increase in percentage of dropped calls or decrease in percentage of calls successfully completed; due to drop in signal level or inability to hand over)
• Design-related degradation. Reasons: change in coverage due to downtilt, change in antenna pattern, or change in azimuth; change in relation of the serving cell to its neighboring cells. Troubleshooting: compare new RF coverage with existing coverage to check for deficiencies; compare the RFDS for this site and its neighbors.
• Component failure. Reasons: failures in the backhaul network, such as T1 malfunction due to transmission equipment fault or microwave fading. Troubleshooting: check alarms at the site for radio failure, T1 failure (LAPD¹ failure), or transmission equipment failure.
• Workmanship. Reasons: higher insertion loss of the RF transmission cable system. Troubleshooting: check cable sweep results for return loss and insertion values; compare the RFDS and link budget.

Accessibility (ability to successfully assign a TCH to a mobile phone)
• Design-related degradation. Reasons: link imbalance between transmit and receive signal (receive signal is weaker than transmit signal).
• Component failure. Reasons: not all radios functioning; one of two transceivers not working; TMA faulty; diplexer/RXAIT faulty. Troubleshooting: check alarms at the site for radio failure; check TMA functionality.
• Workmanship. Reasons: TMA not powered up; loose or faulty connectors on radio ports, duplexers, RXAIT, or TMA. Troubleshooting: verify that the PDU for the TMA is working; check for loose or faulty connectors.

Traffic (measure of channel usage; usually, a change in traffic is measured)
• Design-related degradation. Reasons: reduced or changed coverage footprint due to downtilt, change in azimuths, or insertion loss from additional components introduced in the transmit path; another band added to the sector, e.g., GSM 1900 added to GSM 850, or vice versa. Troubleshooting: compare the before and after RFDS values for change in downtilts; account for additional insertion losses in the link budget; adjust the transmit power level or document the reduced coverage footprint; determine if new bands were added to accept some of the traffic.
• Component failure. Reasons: not all radios functioning, which results in blocking of TCHs and fewer Erlangs being picked up by the sector. Troubleshooting: check for a locked or blocked radio.
• Workmanship. Reasons: receive or transmit jumpers crossed with another sector. Troubleshooting: test the RF transmit paths by making test calls from the middle of the sectors while locking and unlocking TRxs in the NMS.

Abbreviations: GSM, global system for mobile communications; KPI, key performance indicator; NMS, network management system; PDU, protocol data unit; RF, radio frequency; RFDS, RF data sheet; RXAIT, receive antenna interface tray; TCH, traffic channel; TMA, tower-mounted amplifier; TRx, transceiver.

(Acknowledgment: Table 1 was originally created by Harbir Singh, formerly associated with Bechtel.)

¹ LAPD (Link Access Procedure-D), a part of the integrated services digital network (ISDN) Layer 2 protocol.
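A mapping such as Table 1 lends itself to a simple rule lookup that pairs an observed KPI symptom with the change recorded in the RFDS and returns a suspected source and a first check. The abridged rule set below is illustrative only and covers just a few of the table's combinations.

```python
# Abridged, illustrative encoding of the Table 1 mapping from KPI symptom
# plus site change to likely source and first check. Not an exhaustive rule set.
RULES = {
    ("dropped_calls", "coverage_changed"): ("design", "compare new and old RF coverage and the RFDSs of neighbors"),
    ("dropped_calls", "alarms_present"):   ("component", "check radio, T1/LAPD, and transmission equipment alarms"),
    ("accessibility", "tma_added"):        ("workmanship", "verify TMA power (PDU) and connectors on radio ports/RXAIT"),
    ("traffic", "components_added"):       ("design", "account for added insertion loss; adjust power or document footprint"),
    ("traffic", "no_design_change"):       ("workmanship", "test Tx paths with mid-sector calls while locking/unlocking TRxs"),
}

def suggest(kpi_symptom: str, site_change: str) -> dict:
    source, action = RULES.get((kpi_symptom, site_change),
                               ("unclassified", "escalate for manual review"))
    return {"suspected_source": source, "first_check": action}

# A sector whose accessibility fell right after a TMA was added per the RFDS.
print(suggest("accessibility", "tma_added"))
```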


Figure 3. Sector Verification Test Locations with Latitude/Longitude and Broadcast Control Channel Information


Though some issues may not be addressed immediately due to weather changes or nighttime hours, the crew is already aware of what needs to be checked or fixed as soon as practical. Moreover, the same crew is still assigned to the site and maintains its momentum.

In general, the sector verification test consists of the following steps:
1. Performance engineer prepares for on-air site integration by reviewing the RFDS and comparing it with the actual system configuration in the network management system (NMS).

2. Performance engineer checks for pre-existing alarms to alert the installation crew of any pre-existing conditions.
3. When sector construction is complete, the installation crew brings the sector on line in private mode.
4. Performance engineer checks for the sector alarms again and directs the installation crew according to the findings.
5. A member of the installation team travels to a planned location (see Figure 3) and verifies that the test phone is locked on this sector.

[The sample data sheet captures the site name, CONST ID, location ID, site address, and latitude/longitude and, for each of three sectors, the Tx path test results (GSM 850 and GSM 1900 paths 1 and 2, and UMTS 1900, each in idle and dead modes) with cell ID, BCCH, RSSI, PSC, and RSCP values, plus per-sector comments and the names of the performance engineer or war room coordinator, the tester, and the date.]

Figure 4. Sample Site Verification Data Sheet


6. The installation team member makes 850 MHz and 1,900 MHz band calls from the test phone to the performance engineer at the NOC.
7. The performance engineer shuts down the sector TRxs to test all bands and RF paths.

8. The test results and any issues are documented (see Figure 4).
9. Issues are qualified using the methodology discussed in the previous section.
10. If any issue affecting service qualifies as a workmanship issue, the performance engineer discusses the plan for correction with the crew foreman.
11. Depending on the situation, some testing steps may need to be repeated.

As depicted in Figure 3, the planned location should be easily identifiable and accessible by car. Alternative locations (marked with a green cross) may have to be provided as well.

Sector Verification Test Results
This section discusses statistics analyzed for 919 completed sector verification tests. These tests were performed after physical work was completed and the sector(s) was deemed ready to resume service. In some cases, more than one sector and both 850 MHz and 1,900 MHz bands with GSM technology were tested together.

Figure 5 illustrates the initial sector failure rate. It is important to note how many issues would adversely affect network and customer experience if performance engineers, together with the installation team, were not involved in testing during the early stages of the implementation process. The 18% failure rate depicted in Figure 5 is not acceptable for the legacy network, even if it is experienced over several months. For example, if the new technology upgrade rate is 10 sites (30 sectors) per day, the legacy system will lose five to six sectors each morning, stretching existing resources and, most likely, creating a false need for additional temporary manpower (with all its associated inefficiencies). In this case, most failures were rectified immediately or by the next day. For those failures that took longer to rectify, the right resources (performance engineers and troubleshooting crews) were assigned to the issues immediately and were used effectively.
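Results such as those in Figures 5 and 6 can be produced by tallying the individual test records captured on the data sheet. The record fields below are illustrative, and the closing arithmetic simply restates the impact estimate given above (30 sectors per day at an 18% failure rate).

```python
from collections import Counter

# Each record mirrors one row of the site verification data sheet (illustrative fields).
records = [
    {"sector": "A1-1", "passed": True,  "issue_source": None},
    {"sector": "A1-2", "passed": False, "issue_source": "workmanship"},
    {"sector": "B7-3", "passed": False, "issue_source": "component"},
    {"sector": "C2-1", "passed": True,  "issue_source": None},
]

def summarize(tests):
    total = len(tests)
    failed = [t for t in tests if not t["passed"]]
    failure_rate = 100.0 * len(failed) / total
    by_source = Counter(t["issue_source"] for t in failed)
    return failure_rate, by_source

rate, sources = summarize(records)
print(f"initial failure rate: {rate:.0f}%")   # analogous to Figure 5
print(dict(sources))                          # analogous to Figure 6

# At an upgrade pace of 10 sites (30 sectors) per day, an 18% failure rate
# implies roughly 5 to 6 degraded legacy sectors every morning.
print(round(30 * 0.18, 1))
```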

Figure 6 illustrates the initial classification of issues by source. This data has a flaw: when the cause of an issue is not immediately apparent, it is classified as workmanship related; typically, kinked cable, loose connectors, and swapped jumper cables are suspected as being at fault. As the troubleshooting process proceeds, the team identifies more instances of failed components and design issues where workmanship had initially been blamed. It can be especially difficult to pinpoint a design issue, since it often takes time to compare the performance indicators for the site and its neighbor before and after the change. Although the source ratio changes as troubleshooting progresses, poor workmanship remains a significant source of the issues encountered during the launch and, as such, a focus for improvement. Special attention should be given to coaxial cable connector and weatherproofing workmanship. It is believed that conducting immediate testing while the crew is still on site instills an increased sense of ownership among the crew members, which in turn improves overall workmanship quality. To explain further, a crew member may exercise more care in performing tasks when he/she knows that his/her work will be put to the test immediately, with any defects in workmanship likely to be exposed in front of the supervisor and co-workers.


Figure 5. Initial Sector Failure Rate During Verification Test (test passed: 82%; test failed: 18%)

Figure 6. Preliminary Issue Source Classification (design: 2%; component: 22%; workmanship: 76%)


Also, an installation crew that turns over consistently alarm-free sites gets immediate recognition, accolades (possibly), and personal satisfaction from the work it does, all of which sustain willingness to take ownership of quality. The foregoing assertions notwithstanding, no statistics have been compiled that prove an increased level of workmanship quality inevitably results from an increased sense of ownership among installation crew members. This could be a good topic to explore in another paper.

Site Alarm Check Statistics
Alarm checks can be performed jointly or independently during verification testing. Figure 7 shows the ratio between pre-existing and post-installation alarms for a 125-site sample. All alarms were checked at the 125 sites, not just those affecting service. Therefore, the percentage of issues increases dramatically in comparison with verification test issues, because the sector verification process inherently detects only major issues affecting service. Also, tri-sector sites have higher alarm rates than any of the individual sectors. Nevertheless, no operator would want to place hundreds of legacy sites with a 63% alarm ratio back into service. Therefore, alarm checks are common in the industry. In this example, however, the performance engineer could have helped to rectify these alarms by directing the crew to the most likely causes before the sites were returned to service. When any alarm persisted, the crew left the site with some troubleshooting directions and often was assigned to future troubleshooting efforts.

Figure 7. Results of Site Alarm Check (alarms found: post-installation 56%; pre-existing 7%; none 37%)

CONCLUSIONS

A new technology overlay is intrusive to the existing legacy system. The business plan necessity of leveraging existing network infrastructure and sharing the same coaxial lines between the new and the legacy systems introduces new components into the optimized legacy network. Introduction of the new technology decreases the performance levels of the legacy sites (to varying degrees). The test results presented in this paper demonstrate that, under a traditional launch scheme, an unacceptable percentage of sectors or sites brought on line is likely to have issues affecting service. Typically, subscribers in the area experience service degradation from these issues immediately (the next morning) after a site is returned to service. These problems lead to customer churn and decreased revenue from the legacy system. This paper proposed a more integrated approach to solving these problems, whereby sector verification testing and performance engineering are introduced into the installation process to identify performance issues early. Other discussions centered on the methodology used to quickly classify issues by source and on how to address the issues with the right resources, with the goal of reducing troubleshooting and rectification time so that the customer can enjoy the same level of service as before the upgrade, since such upgrades are an inseparable part of life for any state-of-the-art network.


BIOGRAPHY
Aleksey A. Kurochkin, project manager for Bechtel Communications, is currently responsible for the product testing, system design, and entire site implementation cycle of EVDO and LTE technology in one of the most important regions for a US cellular operator. He is also a member of Bechtel's Global Technology Team and was a member of the Bechtel Telecommunications Technical Journal Advisory Board. Formerly, as executive director of Site Development and Engineering for Bechtel Telecommunications, Aleksey managed the Site Acquisition and Network Planning Departments and oversaw the functional operations of more than 300 telecommunications engineers, specialists, and managers. In addition, he originated the RF Engineering and Network Planning Department in Bechtel's Telecommunications Technology Group. As a member of Bechtel's Chief Engineering Committee, Aleksey introduced the Six Sigma continuous improvement program to this group. He is experienced in international telecommunications business development and network implementation, and his engineering and marketing background gives him both theoretical and hands-on knowledge of most wireless technologies. Before joining Bechtel, Aleksey established an efficient multiproduct team at Hughes Network Systems, focused on RF planning and system engineering. In addition to his North American experience, he has worked in Russia and the Commonwealth of Independent States. Aleksey has an MSEE/CS in Telecommunications from Moscow Technical University of Communications and Informatics, Russia.


Mining & Metals

Photo: Los Pelambres Repower 2. The copper concentrator process area includes process water ponds (foreground) and tailings impoundment (far background). Cerro Nocal mountain is in the distance. The strong safety record at Los Pelambres helped Bechtel win new work in Latin America.

ENVIRONMENTAL ENGINEERING IN THE DESIGN OF MINING PROJECTS

Mónica Villafañe Hormazábal
mvillafa@bechtel.com

James A. Murray
jamurray@bechtel.com

Issue Date: December 2009

Abstract: The application of environmental engineering (including pollution control) from the inception of a project's study phase through final completion has a significant effect on the project outcome. Good environmental practices, federal and local regulations and laws, international agreements, owner policies, and requirements of financial institutions are applied from the conceptualization of a project through its subsequent design phases. In practice, this means that the environmental engineering discipline works closely with the other engineering disciplines in preparing engineering designs that mitigate environmental impacts. In their team roles, the environmental engineers exchange project information during the owner's environmental impact assessment (EIA) process, wherein the environmental impacts are assessed, evaluated, and submitted to the local authorities. However, common misunderstandings and confusion over the distinctions between an EIA and environmental engineering design can have a detrimental effect on the development of mining projects. While some universities have been developing environmental engineering programs within the last two decades, the training is mostly aimed toward EIA and environmental management and not toward the actual practice of environmental engineering design. The uncertainty regarding the differences between these functions can confuse the engineers responsible for delivering a successful project outcome.

Keywords: engineering design, environmental engineering, environmental impact, environmental impact assessment (EIA), mining projects

INTRODUCTION

General
Rachel Carson's book, Silent Spring (Houghton Mifflin, 1962), was widely credited with launching the environmental movement in the US. Before 1962, pollution control regulations were enforced under a number of programs administered by various agencies, and most regulations covered worker industrial hygiene or a few special pollution control districts (boards created to enforce local, state, and federal regulations); otherwise, citizen complaints were handled under general nuisance control regulations, i.e., pollution control complaints fell under the same regulations as complaints about a neighbor's barking dog! The US Environmental Protection Agency (EPA) was created in 1970 to establish a national environmental policy, replacing the smaller programs. At approximately the same time, many other countries or jurisdictions within countries were developing pollution control regulations, with different agencies or departments within agencies independently

responsible for regulations pertaining to air, water, solid, or hazardous wastes. In the US, environmental impact assessments (EIAs) originated under the National Environmental Policy Act (NEPA) of 1969 to predict the combined effect of a project on the environment. (Different jurisdictions require similar documents, such as environmental impact statements, environmental impact reports, or estudios de impacto ambiental. These documents are noted here for environmental specialists, who often concentrate on the differences among the documents instead of the considerable similarities.) By the mid-1980s, some multinational mining companies and/or financial institutions were preparing EIAs for projects even if such documentation was not required by the host country. However, following the United Nations Conference on the Environment and Development (UNCED) (known as the Earth Summit), held in Rio de Janeiro, Brazil, in 1992, and with growing environmental awareness on the part of the World Bank, EIA requirements and regulations have been adopted in most countries.



ABBREVIATIONS, ACRONYMS, AND TERMS

EIA    environmental impact assessment
EPA    (US) Environmental Protection Agency
EPC    engineering, procurement, and construction
GBU    (Bechtel) global business unit
M&M    (Bechtel) Mining & Metals (GBU)
NEPA   National Environmental Policy Act
NGO    nongovernmental organization
O&M    operation and maintenance
TIC    total installed cost
UNCED  United Nations Conference on the Environment and Development

In 1994, Chile established the General Law for the Environment, which was further institutionalized with the promulgation of the Environmental Impact Bylaw in 1997. Argentina's environmental policy developed in a similar manner, starting from the UNCED Treaty of 1992 and culminating in the finalization of the current regulatory framework in 2002. In Peru, the Environmental and Natural Resources Code (1990) established the types of activities that must be performed in an EIA; these are governed by the General Law for the Environment (2005).

Environmental Regulations and the Environmental Engineer
For the purposes of the discussions in this paper, the term Environmental Engineer (as indicated by capitalization) is used to refer to a Bechtel environmental engineer engaged in the Mining & Metals (M&M) Global Business Unit's (GBU's) engineering design activities as described herein. It is likely that the other Bechtel GBU environmental engineers have similar experience. Environmental regulations that govern the Environmental Engineer's work are somewhat similar to the codes applicable to other disciplines, such as mechanical (boiler), electrical, and seismic. While the environmental requirements contained in other engineering codes incorporated only by reference are equally binding and enforceable, environmental regulations are quite different from traditional engineering codes. The most obvious difference is that the environmental regulations often are written by legal professionals in a manner that can be difficult for individuals outside the legal profession to understand. In addition, the overall environmental requirements applicable to various projects can differ based on unique site-specific objectives. Even requirements for projects separated by only a few kilometers can vary. Furthermore, compared with other engineering disciplines, environmental engineering is a new field that is changing rapidly, and the applicable regulations can change rapidly at the discretion of elected or appointed political bodies. On the other hand, engineering discipline codes are written by engineers for engineers and may be applied uniformly throughout a country or in a region that consists of multiple countries. These codes may be more than 100 years old and typically can be changed only after a thorough technical peer review.

For most engineering disciplines, a building permit with an associated plan check from the local government department may be required. While this requirement may be waived for major heavy industrial capital projects, the application or checking fees usually are not. However, typically more than 100 separate environmental permits or other documents must be approved by various governmental agencies before construction and/or operations can begin. Public hearings must be held before the EIA is approved or permits are acquired. For most projects, some permits are also on the critical path for the release of funding by the financial institution and/or the owner's board of directors. Depending on the type of project, the pollution control and other mitigation capital costs can range from 3% to more than 50% of the total installed cost (TIC). An Environmental Engineer on a project has critical schedule and budget duties, only some of which are similar to those of any other engineering discipline.

The seamless integration of environmental engineering into the execution of a major capital project is important to the owner's financial and operational success and to the efficiency of an engineering, procurement, and construction (EPC) project's planning and works, as well as to the oversight functions of the regulatory agencies. This paper uses the observations, lessons learned, and approaches of the M&M GBU to describe the role of the Environmental Engineer on major projects and to contrast that role with the important work performed by other engineers and scientists with environmental training. This paper also describes the role of formal education and the need for on-the-job training.

BACKGROUND

Before discussing the role of environmental engineering on major EPC projects, it would be useful to briefly describe how M&M Environmental Engineers view an owner's project. As seen in Figure 1, a typical project undergoes a series of phases: study (conceptual through feasibility) through execution (construction and operation). After the end of the operational phase, the facilities are removed or other closure activities are implemented. Following the operational phase, the property is in the post-closure phase, often with a new beneficial use of

the land. Several items are noted in reference to Figure 1:
• Although the study phase is quite similar from company to company, various organizations define it differently. In Figure 1, the study phase is essentially defined by the type of capital cost estimate being developed for the study. The cost engineer is the internal target customer for the engineering efforts and provides input to the owner's go or no-go decision(s) about advancing to the next phase, expanding the current phase, or withdrawing from the investment prospect.

Figure 1. Environmental Activities by Project Phase (condensed from the original graphic)

Owner's perspective:
  Phase                          Duration, years   Nominal Cash Flow*   Environmental Level of Effort
  Conceptual                     1 to 10           2%                   $10K to $30K
  Pre-Feasibility                1 to 10           5%                   $10K to $400K
  Feasibility and Financing      1 to 10           15%                  $100K to $5M
  Engineering and Construction   1 to 4            200%                 3% to 60% TIC
  Operation                      5 to 50           1,000%               2% to 10% TIC
  Closure                        100 to 10,000     2%                   0.1% to 5% TIC

Owner environmental needs progress from a go/no-go judgment call (conceptual); to generic pollution control requirements and costs and baseline studies (pre-feasibility); to major equipment and facility specifications for permit input, capital and operating costs, mitigation plans, baseline documentation, EIA approvals, and major permits (feasibility and financing); to pollution control equipment and facilities, monitoring and reporting, federal and local permits, and engineering and construction compliance (engineering and construction); to monitoring, reporting, and compliance (operation); and to monitoring, reporting, compliance, and agency approval for abandonment (closure).

Bechtel's perspective: environmental support for execution spans business development, engineering, procurement, project controls, and construction, ranging from bid/no-bid input and identification of major pollution control requirements during the study phases, through environmental design criteria, pollution control material requisitions, construction contracts, and O&M input during EPC, to a construction environmental compliance plan in the field. The Bechtel environmental level of effort grows from roughly $1K to $50K during the studies to 1% to 5% of TIC during project execution.

Third-party perspective: environmental consultants (and/or unbundled Bechtel services) expend no effort at the conceptual phase, $20K to $400K at pre-feasibility, $100K to $5M at feasibility, and more than 2% of TIC thereafter.

* 100% = TIC for typical Bechtel engineering and construction scope, excluding Owner's costs (e.g., exploration, property acquisition, process royalties, early staffing, and startup costs)



• On M&M projects, the owner's involvement in the project life cycle could last from several decades to more than a century. In contrast, Bechtel's involvement may last only a few years when Bechtel's role is limited to the EPC phase. When Bechtel's role includes supporting the owner's project development studies, the total job might continue for a decade. Bechtel has also participated in other types of project scopes that entail facility operation and maintenance (O&M) or the remediation of US Department of Energy sites that are a legacy of the Manhattan (atomic weapons) Project; our involvement in some of these projects has continued for several decades.
• As a consequence of the adoption of more advanced environmental requirements, smaller companies that cannot afford to pay for the baseline studies and environmental approval processes no longer take on these projects. As can be seen from the first gold-colored row in Figure 1, the cost of each successive phase increases significantly. A project might go through several different owner companies because smaller companies cannot afford the cost of a subsequent phase. The smaller companies may need to engage larger companies to provide the financial resources for more extensive drilling efforts to define an ore body, characterize the associated waste rock, and/or initiate the environmental baseline studies that are part of the pre-feasibility study phase. In some cases, only the major, multinational mining houses have the internal resources necessary to assume the financial risks through the feasibility study and reporting phase, until the project can attract financial backing from the international banking community. In addition, the international banking community can require that the selected companies have demonstrated, large-project operating experience that is often not found outside the multinational mining companies.
• The owner's environmental requirements and efforts are shown, by phase, in the green row. These are complemented by the Bechtel Environmental Engineering support shown in the blue row. The red row shows the requirements to perform construction activities while maintaining environmental compliance.
• The owner's typical environmental costs as shown in the first gold-colored row are given as a factor of the TIC. During the pre-feasibility and feasibility study phases, most

of the costs are for the EIA work. During the engineering and construction phase, most costs are for installation of the pollution control equipment and facilities, with a relatively small cost for permit, monitoring, and reporting requirements associated with the construction. Bechtel's typical services costs for environmental support are shown in the second gold-colored row. Finally, the third gold-colored row shows typical services by outside (third-party) consultants that prepare the EIA baseline study and impact evaluation documents; most of the environment-related services costs expended on a project are shown in this row.

The main point to be taken from Figure 1 is that the Environmental Engineer's role starts at the earliest phases of a project, affects the critical path, and defines a significant portion of the capital expenditures.

THE ENVIRONMENTAL ENGINEER'S ROLE

Conceptual Design Phase
During the conceptual design phase, the overall engineering role includes providing practical definitions for the mining, production, and waste-handling facilities, as well as practical definitions of the infrastructure requirements. There is little need to optimize the designs because the purpose is to develop the rough capital and operating cost estimates that are the basis from which to start defining the minimum net income needed to pay for the infrastructure and the operating facilities versus the size of the ore body needed to support a production rate necessary to generate the required income. The cost estimate accuracy required to make these initial determinations is low, the contingency is high, and the cost estimates are often factored. Therefore, little engineering detail is required. Since Bechtel has a rather extensive library of cost information for M&M projects, the individual cost estimates needed to develop the overall capital cost estimate to the requisite degree of accuracy for the conceptual design phase can be produced with a relatively small amount of engineering effort. In general, most of the basic production facility capital costs can be factored from previous projects, while the mine, waste rock, and tailings disposal areas; camps; access corridors for road, rail, and utilities; and ports require only a limited engineered definition to obtain the degree of accuracy required for the capital cost estimate.
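As an aside, one common way to factor a cost from a previous project is capacity-based scaling. The sketch below is illustrative only; the exponent, reference cost, and capacities are hypothetical values and are not taken from this paper or from Bechtel's cost library:

# Illustrative capacity-factored estimate (hypothetical values, not project data)
def factored_cost(reference_cost, reference_capacity, new_capacity, exponent=0.6):
    """Scale a known facility cost to a new capacity using a capacity exponent."""
    return reference_cost * (new_capacity / reference_capacity) ** exponent

# Example: a 60,000 t/d concentrator factored from a known 40,000 t/d plant
print(round(factored_cost(800e6, 40_000, 60_000), -6))  # roughly 1.02e9 (USD, illustrative)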


The Environmental Engineer's role can be quite limited in this phase. A primary role can be to assist the owner with the early identification of potential environmental issues and/or fatal flaws. The Environmental Engineer works with the owner's environmental team to suggest EIA and permit acquisition strategies. Internally, the Environmental Engineer details to the cost engineers how the subject plant might differ from the plants in the historical database and how the factored costs can be adjusted accordingly. For example, if several previous concentrator projects were sited in remote areas of Chile's Atacama Desert, a concentrator project located in the Santiago Metropolitan Region or in agricultural areas would require more extensive air and water pollution control facilities. Depending on the experience of the Environmental Engineer, these adjustments might be handled as an additional percentage allowance, or one or more of the critical facilities might require that some preliminary engineering be performed. The same is true for the project infrastructure. The Environmental Engineer prepares the pollution control sections of the conceptual design report. These sections include Bechtel's assumptions about the environmental setting, the extent of pollution control facilities, and the costs of any special control equipment.

Pre-Feasibility Study Phase
During the pre-feasibility study phase, an engineering study is usually prepared to examine alternative approaches for developing the resources and to provide the associated capital and operating costs for these alternatives. Some projects cover more than a hundred alternatives for improving rates of return and containing risks. Some pilot testing is performed to determine mineral recovery efficiencies. Additional exploration drilling is used to acquire detailed information on the ore body and its extent. The objective of this phase is to eliminate several of the alternatives and then define the scope of one or a few of the alternatives to be carried forward to the next study phase. Again, the Environmental Engineer's input to the engineered facilities in this phase of a project might be rather limited, with most efforts consisting of assisting the cost estimating engineers with factoring in information from previous projects and the historical database. Frequently, to prepare the overall project schedule, the long-lead-time activities for the EIA and permit acquisition processes have to

be started in this phase. The Environmental Engineer needs to prepare project and pollution control descriptions so that the EIA scientists can start or advance their baseline studies before all of the project information is known, let alone finalized. These descriptions often have to include several of the alternatives before the final project is defined. The project footprint has to be defined broadly enough to include variations among the alternatives, but narrow enough to preclude excessive baseline study costs and/or perception of impacts. Emission and effluent inventories are estimated based on a preliminary screening of alternatives and partially completed metallurgical process information. Also, the Environmental Engineer continues to work with the owner's environmental team on the EIA and permit acquisition strategies. In most cases, the EIA scientists are employed by a third-party contractor engaged by the owner. The Environmental Engineer prepares the pollution control sections of the pre-feasibility design report(s). These sections include Bechtel's assumptions about the environmental setting, the extent of pollution control facilities, the costs for any special control equipment, the status of EIA and major permit acquisition activities, and recommendations for addressing future environmental items that might affect schedule or capital costs during the next phase.

Feasibility Study Phase
During the feasibility study phase, the engineering study is prepared to assist the cost engineers responsible for developing capital and operating costs for the study report. In addition, the engineering study is specifically directed toward providing information, drawings, and discussions for those individuals tasked with preparing a bankable document to be used by an international financial institution (or other funding source) in determining whether to invest in the project. The bankable document outlines project risks, delineates methods to eliminate those risks, and measures potential economic returns. It includes a certified evaluation of the project ore reserves; evaluation of pertinent commodity market(s) and factors related to the project revenue stream; capital costs; operating costs; pro forma contracts for the reagents, utilities, and transportation costs; information regarding ownership of the land, mineral rights, and process patents; and environmental approvals. In most cases, Bechtel prepares the capital cost estimate and frequently prepares operating cost estimates.



However, Bechtel is seldom responsible for the overall bankable document submitted to the financing institution. The Environmental Engineer issues an environmental engineering design criteria document that, in conjunction with a process design criteria document, constitutes the performance basis for the other disciplines' designs. The Environmental Engineer often prepares the environmental sections of the bankable document. Although the baseline studies and impact evaluations are usually performed by third-party specialty contractors hired by the owner, the Environmental Engineer reviews the EIA documents for consistency among (a) the various baseline studies and the facility footprints, (b) the engineered pollution control and other mitigation measures, and (c) the engineered features and the capital or operating cost sections. The Environmental Engineer also reviews alternative design concepts proposed and/or evaluated by the parties conducting the EIAs, the adequacy of the project's proposed pollution control measures (to verify that they comply with project alternatives suggested by agency personnel or commissioners), and the technical or economic feasibility of alternative concepts proposed by nongovernmental organizations (NGOs). The purpose of the environmental section of the feasibility study is to summarize the environmental aspects of the project; identify risks that could materially and adversely affect the technical, environmental, and financial success of the project; and describe the mitigation provisions that have been included in the estimated cost basis.

Detailed Engineering and Construction Phase
During the detailed engineering and construction phase, engineers prepare criteria, specifications, calculations, drawings, etc., for use in purchasing equipment and bulk material. Construction forces also use these documents to execute the project. These duties constitute the traditional role of engineers on an EPC project. The Environmental Engineer supplements the environmental engineering design criteria issued during the feasibility phase, adding data on new conditions imposed by the EIA and permit approvals, and prepares a project environmental compliance matrix. Both of these documents are technical interpretations of commitments made by the owner during the EIA and associated processes. Written primarily for

internal use by the other engineering disciplines, they are prepared in a manner that precludes these disciplines from having to conduct their own respective investigations into the EIA/regulations or references thereto. The Environmental Engineer also prepares the technical documentation for the owner's submittal to the pollution control agencies. To maintain the schedule, such documents often have to be prepared before the designs are complete; therefore, the Environmental Engineer has to understand the pollution control system and the agency review process well enough to make an approvable application. Special conditions resulting from the approval process have to be incorporated into the design criteria and the compliance matrix.

On M&M projects, the engineering discipline tasked with designing a specific facility is also responsible for designing the pollution control measures for that facility. For example, the mechanical discipline designs a belt conveyor transfer station in accordance with the metallurgical/process engineering discipline's flow requirements and the environmental engineering discipline's pollution control requirements. If the air regulations require a baghouse, the environmental design criteria delineate the grams-per-second limit on dust emitted and the concentration of dust, the specific instrumentation, and the sampling points. Next, the mechanical engineer uses the discipline's guidelines to calculate the air flow and coordinates with the layout/plant design discipline to route the ductwork between the platework at the transfer point and the baghouse. In this manner, the work is divided so that each engineering discipline is responsible for the work that it traditionally performs. In another example, involving the civil engineering discipline's responsibility for sedimentation ponds and site drainage, the environmental design criteria establish the minimum sizing requirements for the ponds. These requirements are usually defined by a storm event and freeboard and by the discharge concentration and turbidity limitations. Using this information, the civil engineer plans the diversion and/or interception channels and determines the locations and number of ponds. In both of these examples, the pollution control designs are integrated into the overall design and are not just a pollution control unit operation added on at the end of the process by the environmental engineering discipline.
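To illustrate how the emission-rate and concentration limits in the environmental design criteria translate into a duty for the mechanical engineer, the following sketch checks a ventilation airflow against hypothetical limits. The numbers are illustrative only and do not come from this paper or from any actual design criteria:

# Illustrative baghouse check (hypothetical limits, not project data)
emission_limit_g_per_s = 1.0     # allowable dust emission rate from the criteria
outlet_conc_mg_per_m3 = 10.0     # guaranteed baghouse outlet dust concentration
design_airflow_m3_per_s = 80.0   # airflow calculated for the transfer-point hooding

emitted_g_per_s = design_airflow_m3_per_s * outlet_conc_mg_per_m3 / 1000.0
print(f"Predicted emission: {emitted_g_per_s:.2f} g/s "
      f"({'within' if emitted_g_per_s <= emission_limit_g_per_s else 'exceeds'} the limit)")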



Of course, the Environmental Engineer has to understand the other disciplines' work practices well enough to be able to clearly inform those disciplines of the engineering requirements without using the legalistic writing style of the EIAs or environmental regulations. Furthermore, an Environmental Engineer has to have sufficient knowledge of the other engineering disciplines' technical design details to be able to assist with preparing the design, technical specification, data sheets, and guarantee clauses, if requested, and to participate in the design coordination and checking to maintain the quality of both processes. Environmental Engineers also assist with startup and commissioning of the pollution control equipment because they know how this equipment works and how the overall plant production system functions and are knowledgeable about the commissioning testing and reporting requirements of the governing agencies.

Operation and Closure Phases
During the operation and closure phases, the Environmental Engineer prepares brief descriptions of the EIA and other documents, including an operating plan for the mine and plant pollution control systems and facilities. These descriptions may also include feasible closure and post-closure plans for the project. The Environmental Engineer also helps the cost engineers estimate the closure costs. Since the project might operate 25 to 100 years into the future, there could be capital or operating cost implications if these costs must be secured through lines of credit or bonding by a third party. These descriptions may not be the actual plans; they could constitute representative plans acceptable to the respective agencies and may remain in effect until the owner's plant environmental personnel come onto the project. At this time, final closure plans can be prepared based on plant environmental personnel staffing levels and more current knowledge of the plant.

DEVELOPING THE ENVIRONMENTAL ENGINEER

Imprecisely Understood Role
The Environmental Engineer's role on a major capital project is often imprecisely understood by the general public as well as by project management. Quite often, the role comes to encompass responsibility for regulations that are not managed by other engineering disciplines or project functions, e.g., impact evaluation, pollution control, sustainability, land use planning, socioeconomic mitigation, plan approvals, code checking, industrial hygiene, operations monitoring and reporting, plant closure, and site remediation.

Differing Educational Philosophies
In educational institutions in North America, Australia, and Europe, an environmental engineering degree is usually awarded by the Civil Engineering department because large public works (water supply and sewerage) programs were traditionally run by civil engineers. A degree in environmental sciences is usually awarded by another university department (often dealing with natural sciences or resources), typically under the leadership or grantsmanship of individual professors within those departments, possibly reflecting the interests of those individuals. Obviously, the scope is not the same from university to university. Whereas individuals graduating with degrees in electrical, mechanical, or civil engineering (as well as fellow graduates in other disciplines or fields) have a general sense about their prospects after graduation, Environmental Engineers tend to be far less certain about their post-graduation plans. In Chile, Argentina, and Peru, Environmental Engineering and Environmental Science are separate, standalone departments established to meet the demand for engineers in these disciplines coming from industry and from environmental regulators and policy makers. These graduates are very knowledgeable about how to evaluate, guide, plan, and control environmental policies and regulations, with the goal of creating a sustainable development program. Needless to say, these two divergent educational philosophies can lead to a culture clash when engineers from the different traditions are brought together in a single, multinational design office.

Divergent Demands
In the Western Hemisphere, Europe, and Australia, the educational programs for Environmental Engineers are very similar. They cover a wide variety of environmental subjects, with emphasis on the basic sciences, technology, natural resources management, environmental laws, and regulations. This foundation prepares

the graduates to work with consultants and agencies to evaluate, guide, plan, and control environmental policies and regulations. However, this training does not cover EPC work processes or the budget and schedule information needed to prepare the studies required to develop and finance projects. This can lead to a different type of culture clash between the reflective, analytical environmental graduates and the intense, budget/schedule-driven EPC team.

The emphasis given by the universities is largely understandable. Compared with the number of engineers engaged in environmental design, a larger number of environmental scientists evaluate project impacts on the flora and fauna, the habitat of endangered species, ground-level air pollutant concentrations and visibility, groundwater and surface water pollutant travel and attenuation, soil remediation, etc. As a result, university curricula emphasize the need for graduates to be able to monitor, analyze, and report impacts to community planners, enforcement agencies, and regulatory policy makers. Also, there is a significant demand for engineers to design, construct, and operate the world's water supply and sewerage systems. Conversely, only a relatively small number of companies throughout the world (including Bechtel M&M and its competitors) undertake large EPC efforts such as mining, beneficiation, and smelting projects. As a consequence, there is little demand for the type of Environmental Engineer described in this paper. Therefore, universities do not emphasize formal curricula for such engineers. This means that on-the-job training is required to prepare graduate Environmental Engineers to perform EPC work processes, or that an EPC engineer's specialty must be modified to ensure that he or she understands the environmental compliance issues.

CONCLUSIONS

Environmental considerations and activities form a substantial portion of the work performed on M&M projects worldwide. More than 100 separate environmental permits and other documents must be approved by various governmental agencies before construction and/or operations can begin. Essentially, all projects must have an approved EIA before release of funds. On most projects, some of the permits are also on the critical path for the release of funding by the financial institution and/or board of directors. Depending on the type of project, the pollution control and other mitigation capital costs can range from 3% to more than 50% of the TIC. While many scientists and engineers who are highly skilled in the environmental arena contribute to a project's environmental documentation, only one or two Environmental Engineers within a project EPC organization work with their counterparts in the owner's organization to define what will actually be constructed.

There are two key conclusions from the discussions in this paper. First, there is a lack of environmental engineering graduates who can perform EPC design work because most of the engineers graduating from universities have been trained to do environmental assessments. Some of these graduates can be trained in the EPC work processes. However, to compensate for the lack of Environmental Engineers, Bechtel M&M's approach has been to train engineers who have expertise in a certain area (dust control, water and air contamination, etc.) in the legal requirements applicable to their field. This approach has been successful but time consuming, and, more often than not, cyclical engineering backlogs make developing and retaining trained personnel problematic. Second, to ensure that an Environmental Engineer is able to work as effectively as possible, proper communication tools need to be in place to inform all the engineering disciplines working on a project of the environmental requirements. The primary tools are the environmental engineering design criteria and the environmental compliance matrix. These documents present the requirements in a manner that can be integrated into each discipline's work process. The benefits of this approach are that fewer Environmental Engineers are needed and, more importantly, that engineers, regardless of discipline, can and will take ownership of the pollution control equipment and facilities instead of passing the problem to an Environmental Engineering group to handle.


BIOGRAPHIES
Mónica Villafañe Hormazábal is the chief representative Environmental Engineer for Bechtel's Mining & Metals business, based in Santiago, Chile. She is functionally responsible for environmental engineering executed from this office and provides environmental expertise to major copper projects. Mónica develops environmental design criteria and verifies that projects being developed in the Santiago office comply with environmental regulations (compliance matrix) and client environmental requirements. Mónica has 26 years of experience, including 3 years with Bechtel Chile. She specializes in environmental regulations and permitting; water treatment; recovery and discharge permitting; hazardous waste treatment, storage, and disposal; solid waste treatment; and mine closure/post-closure management/procedures. She is also very knowledgeable about the environmental legislation of Chile, Peru, and Argentina, and, to a lesser extent, the legal regulations of Mexico. Mónica has presented and published more than 18 technical papers on a wide variety of environmental topics, such as technological alternatives for wastewater management, environmental impact procedure and its use in the mining industry, and solid waste management in northern Chile. She assisted the local authorities in Antofagasta, Chile, after the high-magnitude earthquake that affected the city in 1995 and was recognized by its mayor for her contributions. Mónica holds a degree in engineering sciences and is a Civil Engineer from the University of Concepción, Chile. She has completed specialization courses and internships in environment, safety, and occupational health; audit; and quality.

James A. Murray retired in 2008 from Bechtel's Mining & Metals Global Business Unit after serving as the GBU's chief environmental engineer for approximately 25 years. He continues with M&M as a senior principal engineer in an in-house consulting role supporting pollution control engineering and environmental permit acquisition programs for a range of commodities and technologies. Jim has performed environmental engineering activities in connection with the mining, beneficiation, and smelting of light, heavy, precious, and base metals; tailings dams; cement; coal; coke (metallurgical and petroleum); fossil power; fertilizer; industrial minerals; petroleum and petrochemical; pipelines; ports; and subway and railroad tunnel ventilation. Jim authored Chapter 15, Economic Impact of Current Environmental Regulations on Mining, in Mining Environmental Handbook: Effects of Mining on the Environment and American Environmental Controls on Mining. He also authored or co-authored 16 technical papers. Jim holds four US patents and six related foreign patents. Before joining Bechtel, Jim was manager for air pollution control at Kaiser Engineers. Jim earned his MS and BS, both in Mechanical Engineering, from Stanford University, in California. He is a licensed Professional Mechanical Engineer in California and a Diplomate of the American Academy of Environmental Engineers.


SIMULATION-BASED VALIDATION OF LEAN PLANT CONFIGURATIONS

Robert Baxter
rfbaxter@bechtel.com

Trevor Bouk
tbouk@bechtel.com

Laszlo Tikasz, PhD
ltikasz@bechtel.com

Robert I. McCulloch
rimccull@bechtel.com

Issue Date: December 2009

Abstract: Bechtel's Aluminium Centre of Excellence (part of the company's Mining & Metals Global Business Unit) has developed advanced simulation modeling methods and tools that can be used to validate configurations for aluminium smelter plants. Integral to Bechtel's continuous improvement process, modeling and simulation are used during the basic engineering phase to help designers and engineers visualise the impact of a proposed solution before it is implemented. This paper illustrates a simulation-based approach to validating the capability of a lean configuration for handling, storing, and conveying the green, baked, and rodded anodes produced by a smelter's carbon plant to support the operational needs of the potline, with all the technical specifications and operational strategies implied. The results demonstrate that adequate anode inventories could be maintained under all expected operating, maintenance, and transient conditions for the proposed lean carbon plant configuration. Reduced storage space, a single anode stacker crane, and appropriate anode inventories were targeted and achieved. Compared with the baseline design offered by leading technology suppliers to the aluminium industry, the result is a safer, leaner (particularly in terms of eliminating waste), more efficient plant for storing and handling carbon anodes. The measure, a lower life-cycle cost for the optimised lean design, was achieved.

Keywords: aluminium, aluminum, anode, hydrocarbons, lean design, modeling, pallet storage, paste plant, potline, rodding, simulation, Six Sigma, smelter, smelting

INTRODUCTION

Bechtel's Aluminium Centre of Excellence (ACE) Knowledge Bank is the repository of the company's institutional knowledge, technical capability, historical information, and lessons learnt on the design and construction of smelter projects. [1] ACE applies efficient, highly valued, knowledge-based teams (headquartered in Montreal, Canada, but deployed to projects worldwide) to train, organise, and assign staff that enhance Bechtel's ability to execute world-class primary aluminium industry projects. To achieve excellence, ACE:
• Performs feasibility studies and leads development of project basic engineering for Bechtel primary aluminium projects globally
• Maintains a cadre of primary aluminium technology specialists to provide state-of-the-art knowledge and leadership to studies and technical support to projects
• Evaluates primary aluminium industry technology-based projects and products
• Develops and maintains relationships with primary aluminium industry leaders in technology supply, technical specialty, and technology-based equipment and systems supply

The primary objectives of ACE's mandate to develop a simulation-based approach to validating lean plant configurations were to:
• Deliver certainty of outcome
• Make projects and operating plants lean, reliable, and cost-efficient
• Deliver value by applying simulation knowledge and skills to the configuration aspects of smelter projects

ACE used discrete element modeling of process elements to predict the dynamic response of the system to ensure that the proposed lean configuration can meet customer needs during normal, maximum, and upset operating conditions.


ABBREVIATIONS, ACRONYMS, AND TERMS

ACE    (Bechtel) Aluminium Centre of Excellence
BDD    basic design data
CAD    computer-aided design
FMEA   failure mode and effect analysis
M&M    (Bechtel) Mining & Metals Global Business Unit
MTBF   mean time between failures
MTTR   mean time to repair
R&D    research and development
VSM    value-stream mapping

BACKGROUND

The configuration, inventory, and operational requirements of carbon plants at aluminium smelters have been targeted by Bechtel's continuous improvement process as areas for lean optimisation. A typical aluminium smelter produces approximately 350,000 metric tons (385,800 tons) per year of molten metal and consumes more than 450 metric tons (almost 500 tons) per day of baked carbon anodes. The carbon plant receives shipload quantities of petroleum coke and liquid pitch that are blended and formed into metric-ton-sized green anode blocks. To drive off and burn volatile hydrocarbons and to improve the physical properties of the anode, green anodes are baked in a furnace at temperatures in excess of 1,000 °C (1,800 °F). The baked anodes are then rodded with electrical connections and transferred to the potline for consumption in the reduction process. Blending and creating usable prebaked anodes is a 2-week operation.

The challenges to achieving a lean, cost-efficient carbon plant configuration include:
• Capturing and articulating customer needs (as opposed to wants)
• Identifying and quantifying risks associated with driving a lean configuration, with proper regard for the system's capability to be reliably operated and safely maintained
• Demonstrating the capability of the proposed lean configuration to mitigate the identified risks and communicating the results to clearly address customer needs
• Understanding and respecting industry experience and practices; with appropriate countermeasures, the lean configuration must convince the process owner that system stability, product quality, and ultimately the customer's needs can be achieved over all operating conditions
• Developing and validating the lean plant configuration early, during the project definition phase, and having a high certainty of the outcome; design changes and variations that occur later during the execution, startup, and operation phases destroy value

To mitigate and overcome these challenges, a model-based approach was developed and integrated with the lean improvement process. The resulting enhanced process was used to develop a lean carbon plant configuration (see Figure 1) and to predict and validate the proposed system's capability to meet operational and maintenance needs.

Figure 1. Proposed Lean Carbon Plant Configuration (areas shown: pitch tanks, coke silos, paste plant, green anode cooling, anode handling and storage, rodding, pallet storage area, fume treatment centre, anode bake furnace, and bath treatment and storage)
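As a rough consistency check on the throughput figures quoted above (a sketch only; the 365-day operating year and the one-tonne anode size implied by the metric-ton-sized blocks are simplifications):

# Quick check of the smelter throughput figures quoted in the Background section
metal_t_per_year = 350_000   # approximate annual molten metal production
anodes_t_per_day = 450       # baked anode consumption, metric tons per day

anodes_t_per_year = anodes_t_per_day * 365
print(f"Annual anode consumption: {anodes_t_per_year:,} t "
      f"(about {anodes_t_per_year / metal_t_per_year:.2f} t of anode per t of aluminium)")
# With metric-ton-sized blocks, this also implies on the order of 450 anodes set per day.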

SIMULATION-BASED APPROACH: KEY ELEMENTS FOR SUCCESS

Understanding of Customer Needs
A successful lean design begins by defining customer needs under the following categories and task requirements:
• Specify the Process Data: Process data for the overall smelter and subsystems is captured and presented in the basic design data (BDD) mass balance model (see Figure 2). This model is a standard tool that ACE uses to coherently summarise and communicate key process data to the team. It also forms the basis for model validation. The example shown in Figure 2 is the BDD model used to define and summarise the key process data for the project that is the subject of this paper.


Figure 2. Sample Basic Design Data Model


• Develop and Document the Work Design: A detailed understanding of how a system will be operated, maintained, and staffed, coupled with the desired organisational culture, is an essential input to the lean system design. The process of working with the process owner to develop and document the work design defines the critical customer and supplier interfaces and consumer needs. The flow of the value stream map created during this phase will later help determine where improvements are necessary.
• Define Questions that the Model Must Answer: With input from the process owner, concise questions that the dynamic model needs to answer are developed. The questions must be quantitative or binary so that the system's capability can be determined. Questions should be based on the system's capability to, for example, reliably deliver product, sustain inventory, or recover from a transient event. (A sketch of such a question and its acceptance metric is given after this list.)

• Develop Key Metrics for Success: Simple measures must be developed for each question to determine what output variable is to be measured and what the criteria are for acceptance or failure.
• Construct Process Maps and Flow Charts: These logic visualisation tools are used to capture the inputs resulting from the above activities and to analyze and discuss the process being evaluated (in this case, anode production and system maintenance). Constructing these maps and charts provides a solid foundation for the modeling phase, contributes to continuous process improvement, and is invaluable to the learning process. Ultimately, this step forces alignment between the process owner and the design engineer. It helps to close the information, data, and planning gaps that typically exist early in a project. An example of just one of the subsystems for moving anodes from the carbon plant to the potline is shown in Figure 3. [2, 3]
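As an illustration of how a model question can be paired with a binary acceptance metric, the sketch below uses a hypothetical question and threshold; the 72-hour outage scenario and the zero-inventory criterion are assumptions for illustration, not values from this paper:

# Hypothetical model question: "Can rodded anode inventory sustain the potline
# through a 72-hour bake furnace outage?"  Metric: inventory never falls below zero.
def question_passes(inventory_trace_t):
    """Binary acceptance check on a simulated inventory time series (tonnes)."""
    return min(inventory_trace_t) >= 0.0

simulated_inventory = [900, 760, 610, 455, 300, 150, 10]  # illustrative sampled model output
print("PASS" if question_passes(simulated_inventory) else "FAIL")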


Figure 3. Flow Chart: Pallet Delivery (steps include Read Schedule; Is Full Pallet Available?; Load Closest Butt Pallet; Move to Butt Storage Area via assigned route; Unload Butt Pallet; Load Pallet; Move to Pot Location via assigned route; Second Pallet?; Move to New Anode Storage; Unload Pallet; Execute Bath Bin Delivery Logic; Sequence Completed?; End)



This FMEA activity is performed with the process owner (the operations team in this case). Equipment and system reliability-based risks are entered into the models in the form of probabilities assigned to process units as mean time between failures (MTBF) and mean time to repair (MTTR) parameters. Other risks associated with operator error and external factors are identified and quantified with the process owner and entered into the model as worst-case scenarios. Mitigating actions and countermeasures are then developed and applied to the models, and the results are evaluated with the process owner.

Core Competencies
Key core competencies required to analyze the system performance predicted by the model include:
• In-depth knowledge of a carbon plant's subsystem technologies
• The overall system operational and maintenance requirements
• Knowledge of system break points, sensitivities, and limits
The ACE Knowledge Bank provides codified information captured from an extensive suite of aluminium smelter projects executed by Bechtel (and others). ACE specialists apply any available tacit knowledge and other relevant information to the initial analysis. The development of countermeasures to mitigate the risks identified requires advanced simulation and modeling skills to understand cause-and-effect relationships and to identify a problem's root cause. All of these core competencies are essential to ensure the success of the overall modeling activity.
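As a simple illustration of the risk-ranking step described under Risk Assessment above, the severity, occurrence, and detectability scores from an FMEA are commonly combined into a risk priority number (RPN). The paper does not publish its scoring scheme, so the failure modes, 1–10 scales, and values below are purely illustrative and are not project data:

```python
# Hypothetical failure modes with 1-10 severity/occurrence/detectability scores.
failure_modes = [
    ("Stacker crane drive failure",   7, 4, 3),
    ("Anode conveyor belt tear",      6, 3, 4),
    ("Rodding shop extended restart", 8, 2, 2),
]

def rpn(severity, occurrence, detectability):
    """Risk priority number: a common way to rank FMEA entries."""
    return severity * occurrence * detectability

# Print the entries from highest to lowest risk.
for name, s, o, d in sorted(failure_modes, key=lambda m: -rpn(*m[1:])):
    print(f"{name:32s} RPN = {rpn(s, o, d)}")
```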

ADVANCED PROCESS MODELING AND SIMULATION CAPABILITY

Recently, process modeling and software simulation of systems have become integral parts of smelter studies and projects. As a result, an ever-growing, comprehensive model library has been developed and covers the main process sectors of an aluminium smelter. The models, which range from mass balance spreadsheets to discrete dynamic simulations, support sensitivity analyses and answer key questions regarding a system's capability to meet customer needs. Though extensive, the model library serves as a collection of building blocks with causes and effects that may guide the creator on building future models. The modeling effort for each application must start with project-specific customer needs and inputs.
More importantly, each model must be verified and validated before it is used as a predictive tool. Verification, testing of model input parameters and boundary conditions, and validation of model dynamic outputs are essential steps in model development. A process owner's confidence in model inputs and outputs is paramount for the owner's acceptance of a proposed lean configuration. To achieve this desired outcome, model outputs are extensively tested against the BDD and known baseline performance from similar operating systems before the model is used to predict system performance.
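A minimal sketch of the kind of verification check described above, comparing simulated annual totals against BDD targets within a tolerance. The quantity names, target values, and 2% tolerance are placeholders, not values from the project BDD:

```python
# Placeholder targets and simulated annual totals (not project data).
bdd_targets = {"baked_anodes_per_year": 164_000, "green_anodes_per_year": 166_000}
simulated   = {"baked_anodes_per_year": 162_500, "green_anodes_per_year": 165_200}

def validate(sim, target, rel_tol=0.02):
    """Return a pass/fail flag for each BDD quantity within a relative tolerance."""
    return {k: abs(sim[k] - target[k]) / target[k] <= rel_tol for k in target}

print(validate(simulated, bdd_targets))
```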


INTEGRATED MODEL-BASED LEAN PROCESS

Combining the best of Six Sigma and lean manufacturing methods is an established and widely accepted improvement process. While Six Sigma reduces variation and shifts the mean to improve the output of a process, Lean focuses on the relentless pursuit of identifying and eliminating waste, which, from the end customer's point of view, adds no value to a product or service. Figure 4 lists Lean's eight forms of waste. Simulation tools complement and enhance improvement results by incorporating system reliability, variability, and risk into the design and optimisation process.

Figure 4. Lean's Eight Forms of Waste (over-production, waiting, excess conveyance, over-processing, excess inventory, excess motion, rework, and unused intellect)


Figure 5. Model-Based Lean Approach (After El-Haik and Al-Aomar [4])

Dynamic simulations also increase confidence that the proposed solution will deliver a lean, cost-efficient plant. These tools provide a cost-effective, flexible way to reduce and even eliminate scope changes and design variations in the proposed system beyond the project's early definition phase. There are five steps to these software simulation exercises:
• Define: Characterise project scope, lean measures, structure, and variables
• Measure: Quantify current state, process model, and dynamic value-stream mapping (VSM); identify sources of variation and waste
• Analyze: Examine the plan and design, simulation experiments, and process flow
• Improve: Optimise process parameters, apply lean techniques, validate improvement, and develop future-state VSM
• Control: Develop control strategy, test control plans, implement control plans, and monitor performance over time
Conceptualising, building, and validating the process model are linked to the Define and Measure phases of the Six Sigma process. Applying the simulation tools intended to corroborate the outcomes of proposed improvements is done in the Analyze, Improve, and Control phases (see Figure 5).


Figure 6. Fire Movement Animation: Excel Model (furnace sections with coded status: empty, forced cooling, cooling, full fire, preheating, packed; fire move and elapsed-time display)

An iterative tuning loop refines the system operating parameters that define the optimum design for the required inventories to be carried, the types and extents of countermeasures required, and the robustness of the proposed lean configuration.

MODELING AN IMPROVED CARBON AREA CONFIGURATION

Various simulation modeling tools can be used to validate carbon area configuration and operation. The two we used were:
• Anode baking furnace fire-train model (built and animated in a Microsoft Excel spreadsheet format)
• Carbon area operation, a discrete-event model (built using Flexsim dynamic simulation software from Flexsim Software Products [www.flexsim.com])

Anode Baking Furnace Fire-Train Logic Validation: Excel-Based Model
We dynamically analyzed the operating sequence, fire configuration, cycle times, fire movements, and empty pit locations for an anode baking furnace. The objective was to determine if a simulated group of sufficiently cooled, empty pits could be made available for an extended period so that maintenance and repair activities could be performed safely. To address the issue, we created an Excel spreadsheet model built with simple interfaces and integrated detailed operating logic (see Figure 6). We kept all main process parameters adjustable (for example, definition of a fire train, fire move direction, location of burners and bridges, required baking time, and initial positions). By simulating several months of operation, the model helped us to develop and debug the operational logic. Once the simulated model verification and validation testing was completed, we transferred the operational logic to the more detailed dynamic discrete event model for the anode baking furnace.

Combined Anode Plant Facilities
We built a dynamic simulation model of the anode plant facilities to validate the basic operating capability of our proposed lean configuration (see Figure 7).


Figure 7. Modeling of Proposed Lean Carbon Plant Facilities (paste plant, G&B anode storage, anode baking furnace, rodding shop, and pallet storage)

The objective of the modeling effort was to identify potential fatal flaws, system weak links, and other operational conditions that could interrupt or delay the process of delivering baked anodes to the rodding shop and rodded anodes to the potroom, or that could render the system unable to recover from potential transient events in the paste plant, baking furnace, or rodding shop. We designed the model to answer the following core plant safety, design, and operation questions:
• Could the proposed anode supply system sustain normal potroom operation without interruption?
• Does the proposed storage capability (combined indoor and outdoor) of green and baked anodes support the baking furnace and paste plant operating and maintenance plans as specified in the BDD?
• Could a single automated stacker crane reliably manage the inventory for a common green and baked anode storage facility?
• Could the proposed system recover from a transient event within reasonable time to sustain the potroom demand for carbon without depleting the rodded anode inventory in the pallet storage area?
• Does the rodded anode buffer (pallet storage area) between the reduction area and the rodding shop have sufficient capability to ensure that cooled product was available to sustain the scheduled rodding shop operations?

• Is the system capable of sustaining the carbon supply operation over the long term when equipment reliability and system availability are considered?
We identified what, if any, weak links existed. We designed the model to simulate the production and subsequent handling and storage of green anodes up to the anode baking furnace. The model also simulated the handling and storage of anodes from the baking furnace to the anode rodding shop and then on to the pallet storage area before their removal for use in the potrooms. Since the anode baking furnace operation had been previously modeled and proven using other software tools, we incorporated only the summary logic for green anode consumption and baked anode production into the discrete-event software simulation model.

Metrics for Success
We used the following metrics to determine whether the core plant safety, design, and operation questions had been correctly and accurately answered and whether the proposed optimised configuration was suitable for the project:
• Feed potroom based on pull (i.e., demand); do not interrupt pot-tending activity
• Maintain minimum anode inventory for normal operation, with short, minor dips below 10% during random breakdowns (8 hours for either green or baked anodes) in the anode storage facility


• Do not deplete rodded anode inventory in the pallet storage area during critical events
• Demonstrate the system's capability to recover within a specified period without disrupting production in the potroom or rodding shop

Model Inputs
Model inputs included the following information:
• The work design (working and down periods) applied to the paste plant, anode baking furnace, rodding shop, and potlines as defined in the project BDD
• Preventive maintenance schedules applied to all components of the anode handling system (conveyors, accumulating conveyors, elevators, stacker crane, pushers, anode tilters, anode turners, and turn tables) as defined by the project BDD
• Component breakdowns implemented by MTBF and MTTR, as driven by random functions and based on historical data (a simple illustration follows this list)
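The sketch below shows one way component breakdowns driven by MTBF and MTTR can be represented. Exponentially distributed failure and repair times are a common modeling assumption; the paper states only that random functions based on historical data were used, and the numbers here are hypothetical:

```python
import random

def sample_breakdowns(horizon_h, mtbf_h, mttr_h, seed=0):
    """Return a list of (failure_time, repair_duration) tuples within horizon_h,
    using exponentially distributed times between failures and repair durations
    (an assumption, not the project's calibrated distributions)."""
    rng = random.Random(seed)
    events, t = [], 0.0
    while True:
        t += rng.expovariate(1.0 / mtbf_h)       # time to next failure
        if t >= horizon_h:
            return events
        repair = rng.expovariate(1.0 / mttr_h)   # repair duration
        events.append((t, repair))
        t += repair

# e.g. one simulated year for a conveyor with MTBF = 400 h and MTTR = 6 h
print(len(sample_breakdowns(8760.0, 400.0, 6.0)))
```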

Model Granularity
Addressing the whole carbon area, we set the model granularity in accordance with the particular interest in the sector studied. We also used modeling blocks and complete sector models with different granularities; for example, green and baked anode storage was an area that presented significant optimisation opportunity; thus, all major components (stacker crane, conveyors, elevators, rotating units, anode blocks, and other outputs) were captured and modeled individually (see Figure 8). When the cold butt and pallet inventories were monitored, a simple, pallet-based representation of anodes was applied. Color codes marked the status (red = hot butt, yellow = cold butt) and pallet type (grey = rodded anodes) (see Figure 9).

Figure 8. Green and Baked Anode Storage Area: Detailed

Model Simulation Validation
Before the model was used to run any production scenarios, it was fully validated using data from the BDD.

Figure 9. Pallet Storage Area: Simplified (hot butt, cold butt, and rodded anode pallets; snapshot shows 274 butt pallets stored [203 hot, 71 cold] and 190 anode pallets stored)


To improve the certainty of model predictions, we also parametrically checked predicted outputs against the ACE Knowledge Bank. We tested each section of the model (paste plant and anode cooling, anode storage and handling, anode baking furnace, rodding shop, and pallet storage) individually with inputs from the BDD and constants for availability and reliability. Using these known data inputs, we were able to modify and debug each section until it reliably produced the predicted outputs and mass balance. After all sections were tested, we reintegrated the model and then tested it again under known conditions in order to:
• Test the handshakes between the various sections
• Verify that repeatable results could be obtained against known outputs
Finally, we made a full model run to simulate a year of production, with all data per the BDD. We then compared the outputs over this time period with the predicted 1-year values in the BDD. We introduced reliability and statistical variability into the model runs only after the model was fully tested.

Effect of Transient Events in Rodding Shop
In selected model runs, we introduced transient events into the model. For example, we simulated an extended shutdown of the rodding shop. The normal scheduled rodding shop maintenance shift is 8 hours. To test restart problems, we

increased the restart time by 8 hours, 16 hours, and 24 hours. As a worst-case scenario, we shut down the rodding shop for 32 straight hours (8 scheduled plus 24 unscheduled). Next, we observed the impacts on the pallet storage area and the green and baked anode storage area (see Figures 10 and 11, respectively). During these transient events, the green and baked anode storage area was able to accommodate the storage of baked anodes that could not be sent to the rodding shop, without affecting baking furnace production. From these model runs, we concluded that reasonable transient events within the rodding shop have no effect on potroom production.

Countermeasures
To manage the planned long-term system interruptions that would be needed to perform certain proposed actions (a reduction of the covered storage building area, a reduction of inventory costs, and the elimination of the second stacking crane), we developed countermeasures that included scope changes or actions such as:
• Developing a planned outdoor green anode storage area to accommodate the paste plant's annual shutdown
• Allocating identified critical spare parts on site
• Taking steps to increase equipment reliability and reduce MTTR
• Configuring the equipment so that handling, storage, and conveyance operations could be performed manually
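To illustrate the nature of the transient-event checks described above (this is not the project Flexsim model), the sketch below tracks a rodded anode pallet buffer through a 32-hour rodding shop outage. The supply and demand rates are hypothetical and were simply chosen to echo the roughly two-week recovery visible in Figure 10:

```python
def pallet_buffer(hours, outage_start, outage_len,
                  demand_per_h=4.0, supply_per_h=4.0,
                  recovery_supply_per_h=4.4, start_inventory=190.0):
    """Hour-by-hour rodded anode pallet inventory: supply stops during the
    outage, then runs slightly above demand until the buffer recovers."""
    inv, trace = start_inventory, []
    for h in range(hours):
        in_outage = outage_start <= h < outage_start + outage_len
        if in_outage:
            supply = 0.0
        elif inv < start_inventory:
            supply = recovery_supply_per_h   # run the rodding shop harder
        else:
            supply = supply_per_h
        inv = max(0.0, inv + supply - demand_per_h)
        trace.append(inv)
    return trace

trace = pallet_buffer(hours=24 * 21, outage_start=48, outage_len=32)
print(min(trace), trace[-1])   # lowest buffer level, and level after ~3 weeks
```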


Figure 10. Changes in Pallet Storage Area Capacity (percent of capacity versus time in shifts for hot butt, cold butt, and rodded anode pallets; rodding shop down for 32 hours, full recovery in 2 weeks)


Figure 11. Changes in Green and Baked Anode Storage Area (percent full versus time in shifts; baked anodes accumulate in storage, empty rows indoors, full recovery in 2 weeks)

To investigate the availability of existing countermeasures outside the plant (and possibly the company) for emergencies having a low probability of occurrence but a high severity, we elevated risk to a corporate level. This assessment included considering countermeasures such as supplying anodes from sister plants or from outside suppliers.

Results
Based on the proposed aluminium smelter plant design, the dynamic model results predicted that all defined metrics for success could be met.
Benefits: The benefits of adopting a validated lean carbon plant configuration, compared with using conventional designs, to handle, store, and convey green, baked, and rodded anodes include:
• Significant reductions in green and baked anode inventories
• An anode storage configuration sharing a common area for green and baked anodes and serviced by one stacker crane
• Reduced conveyor lengths and handling operations so that customer and supplier connections are direct and short
• Reduced maintenance costs as a result of simplifying and reducing the handling and conveyance equipment
Projected Cost Savings: Based on the data generated by the study team, the cost savings that would be realised by using the proposed optimised lean anode storage facility, as validated by the discrete event modeling methodology described in this paper, would be approximately

US$2 million in capital cost savings and US$400 thousand in operating cost savings (expressed in terms of present value).
Recommendation: We recommend that the proposed lean carbon plant configuration, along with the identified countermeasures, be implemented for the aluminium smelter project.


CONCLUSIONS

Integrating simulation-based tools with Lean and Six Sigma quality improvement methods is an effective approach to validating and improving lean configurations; indeed, with owners expressing the need for competitive life-cycle costs, simulation may be considered an essential design component. For the lean carbon plant configuration discussed in this paper, simulation submodels of the facilities used for anode fabrication, anode storage, anode baking, rodding, and pallet storage were linked by an overall anode handling, storage, and conveyance system model. Model inputs involved a rigorous approach to defining customer needs, including process design, work flow, and waste elimination (redundant material handling equipment and storage capacity, for example). To implement our simulation-based approach to validating the lean carbon plant configuration, we used the following general methodology:
• Scenarios were performed under projected normal, transient, and extreme operating conditions.
• Reliability and other risks were applied as worst-case scenarios.


• The system dynamic response was recorded.
• Findings were fed back to designers and process owners and then analyzed against the lean design criteria.
Our advanced simulation and 3D modeling methods and tools enabled us to create, develop, test, and validate a sequence of operations that add value to the customer with the least amount of waste. We targeted and achieved reduced storage space; a single anode stacker crane; and appropriate green, baked, and rodded anode inventories. We further demonstrated that adequate inventories could be achieved under all operating, maintenance, and transient operating conditions for the proposed lean carbon plant configuration.
In any simulation effort, it is essential to follow the key elements for success identified near the beginning of this paper, to ensure that:
• Customer needs are fully defined and captured in a manner that can be transferred into the simulation model and measured against defined metrics for success.
• Reliability, operational risks, and worst-case scenarios are identified and quantified for the proposed system.
• The simulation software model is fully verified and validated before it is used as a predictive tool. This is an essential step for acceptance of the proposed lean configuration.
• An in-depth knowledge of subsystem technologies; overall system operational and maintenance requirements; and system break points, sensitivities, and limits is critical to the modeling success. These core competencies must be applied throughout the simulation model development, testing, and output analysis.
A simulation-based approach delivers confidence to the process owners and project team that:
• The proposed solution will meet or exceed expectations.
• Uncertainties have been defined and mitigated.
• Value can be delivered in the form of scope reductions, scope stability during project execution, and reliable startup and operational performance.
• Alignment with the process owner is maintained at each step to ensure that customer needs are understood, captured, and integrated.

A substantial productivity and cost-savings benefit of this early and ongoing collaboration is that it can reveal lessons learnt that may not otherwise be readily evident. Typically, these lessons remain overlooked until they are rediscovered after the plant is built. Such collaboration is a learning process by itself and serves as a solid foundation for the modeling phase. Thereby, it builds confidence in the model results.
Taking the holistic approach presented in this paper to developing and validating a lean, cost-efficient configuration early enough in the project development cycle results in significantly added value, including:
• Reduced waste
• Lower capital and operating costs
• Improved productivity
• Assured reliability
• Certainty of outcome
• Alignment with customer needs


TRADEMARKS
Flexsim is a trademark of Flexsim Software Products Inc. Microsoft and Excel are registered trademarks of Microsoft Corporation in the United States and/or other countries.

REFERENCES
[1] C.M. Read, R.I. McCulloch, and R.F. Baxter, "Global Delivery of Solutions to the Aluminium Industry," Proceedings of the 45th International Conference of Metallurgists (Aluminium 2006), MetSoc of CIM, COM 2006, Montreal, Quebec, Canada, October 1–4, 2006, pp. 31–44, access via http://www.metsoc.org/eStore/info/contents/1-894475-65-8.pdf.
[2] L. Tikasz, R.T. Bui, and J. Horvath, "Process Supervision and Decision Support Performed by Task-Oriented Program Modules," presented at the 4th International Conference on Industrial Automation, Montreal, Quebec, Canada, June 9–11, 2003.
[3] E. Turban and J.E. Aronson, Decision Support Systems and Intelligent Systems, 6th Edition, Prentice Hall, Inc., Upper Saddle River, NJ, 2001, access via http://cwx.prenhall.com/bookbind/pubbooks/turban2/.
[4] B. El-Haik and R. Al-Aomar, Simulation-Based Lean Six-Sigma and Design for Six-Sigma, Wiley-Interscience, John Wiley & Sons, Inc., Hoboken, NJ, 2006, access via http://www.wiley.com/WileyCDA/WileyTitle/productCd-0471694908.html.


BIOGRAPHIES
Robert Baxter is a technology manager and technical specialist in Bechtel's Mining & Metals Aluminium Centre of Excellence in Montreal, Canada. He provides expertise in the development of lean plant designs, materials handling, and environmental air emission control systems for aluminum smelter development projects, as well as in smelter expansion and upgrade studies. Bob is one of Bechtel's technology leads for the Ras Az Zawr, Massena, and Kitimat aluminum smelter studies. Bob has 26 years of experience in the mining and metals industry, including 20 years of experience in aluminum electrolysis. He is a recognized specialist in smelter air emission controls and alumina handling systems. Before joining Bechtel, Bob was senior technical manager for Hoogovens Technical Services, where he was responsible for the technical development and execution of lump-sum, turnkey projects for the carbon and reduction areas of aluminum smelters. Bob holds an MAppSc in Management of Technology from the University of Waterloo and a BS in Mechanical Engineering from Lakehead University, both in Ontario, Canada, and is a licensed Professional Engineer in that province.

Trevor Bouk is a technical specialist in Bechtel's Mining & Metals Aluminium Centre of Excellence in Montreal, Canada, with 17 years of experience in the mining and metals industry. He is currently the carbon area specialist providing technical and process expertise for the development of the carbon facilities required to support the aluminum electrolysis process. Trevor provides support for aluminum smelter development studies and projects. In addition, Trevor has also performed lead roles on multiple projects. Most recently, he was the carbon lead on the Ras Az Zawr aluminum smelter FEED study and the lead carbon area engineer on the Fjarðaál project in Iceland, where he was responsible for the overall design and layout of the carbon facilities as well as involved with the onsite construction and startup. Trevor also provided technical and process troubleshooting services to operating smelters. Before joining Bechtel, Trevor was involved with the design and supply of automated process equipment to the aluminum industry, both in anode rodding shops and casthouses. He also supported the installation, startup, and early operation of many jobs, ranging from single machines at existing plants to multiple systems on large greenfield projects. Trevor has a BE in Mechanical Engineering from McMaster University in Hamilton, Ontario, Canada, and is a licensed Professional Engineer in that province.

Laszlo Tikasz, PhD, is the senior specialist for Bechtel's Mining & Metals Aluminium Centre of Excellence in Montreal, Canada. He has 29 years of experience in advanced aluminum process modeling and is an expert on aluminum production and transformation, process modeling, and simulation. Laszlo has developed flexible process models and studies to provide information needed to support engineering and managerial decisions on aluminum smelter designs, upgrades, and expansions. Before joining Bechtel, Laszlo worked in applied research and industrial relations at the University of Quebec and the Hungarian Aluminium R&D Center. Laszlo holds a PhD in Metallurgical Engineering from the University of Miskolc, Hungary. His Doctor of Technology in Process Control and MSc degrees in Electrical Engineering and Science Teaching are from the Technical University of Budapest, Hungary.

Robert I. McCulloch is manager of Bechtel's Mining & Metals Aluminium Centre of Excellence in Montreal, Canada. He has global responsibility for aluminum smelter technology projects and studies, including reduction technology, carbon plants, casting facilities, and related infrastructure or systems. Bob is also responsible for the execution of aluminum industry projects and studies assigned to Bechtel's Montreal office. Bob has over 40 years of experience in engineering and project management with Bechtel, primarily for projects in the mining and metals industries in Canada. His experience includes projects in the Canadian Arctic and management assignments in Montreal; Toronto; and Santiago, Chile. He recently returned to Canada after several years in Australia, where he had lead project management roles on two major projects. Bob is a member of the Association of Professional Engineers of Ontario and was previously a member of the Canadian Standards Association Committee on Structural Steel and a corporate representative supporting the Center for Cold Oceans Research and Engineering in Newfoundland, Canada. Bob holds a BEng in Civil Engineering from McGill University, Montreal, Quebec, Canada.


IMPROVING THE HYDRAULIC DESIGN FOR BASE METAL CONCENTRATOR PLANTS


José M. Adriasola (jmadrias@bechtel.com); Robert H. Janssen, PhD (rjanssen@bechtel.com); Fred A. Locher, PhD (falocher@bechtel.com); Jon M. Berkoe (jberkoe@bechtel.com); and Sergio A. Zamorano Ulloa (sazamora@bechtel.com)

Issue Date: December 2009

Abstract: Mine owners, particularly copper mine owners, are seeking the economic benefits realized from adopting larger-scale mineral concentrator plants with increased process capacity. One of the consequences of the move toward increased capacity is the need to review design approaches and criteria for slurry handling, with an eye toward accommodating larger flows than have been handled with previous designs. This paper presents a number of hydraulic design approaches adopted by Bechtel's Mining & Metals Global Business Unit as base metal concentrator plants have grown larger and slurry flow volume has increased. The discussion centers on the following key design elements: (1) combining high velocity supercritical flows, (2) bends in launders, (3) minimum launder slope for coarse solids transport, (4) flow conditions approaching sampler boxes and drop pipes, and (5) use of computational fluid dynamics modeling for distribution boxes. Design concepts, example applications, and potential added value of the approaches are addressed, and guideline plots for preliminary design are provided. All examples presented are drawn from copper concentrator design.

Keywords: base metal, computational fluid dynamics (CFD), concentrator, copper, hydraulic design, hydraulic jump, launder, sediment transport, slurry, slurry handling, supercritical flow

INTRODUCTION

Overview: Slurry Handling in a Copper Concentrator
A single line in a modern copper concentrator plant typically receives about 75,000 metric tons (82,673 tons) per day of ore at about 1% copper content to produce a concentrate containing approximately 30% copper. The remaining ore is disposed of as tailings in engineered storage facilities. Since the mineral ore, concentrate, and gangue (commercially worthless mineral matter found with the valuable metallic minerals) passes through different processing steps as a slurry, the hydraulic design requirements for properly handling large slurry flows are critical to the overall plant design. Individual ore-water slurry flows can be up to 15,000 m3/hr (66,000 gpm) and comprise solids concentrations of up to 70% by weight.
To allow the mineral slurry to be transferred from one process step to another by gravity in pipes or open channels (launders), concentrators are designed on a downward slope from the initial grinding process to the final concentrate and tailings thickening. Pumping raises the slurry into tanks at specific stages of the process.


The buildings containing these operations are large (17,000–35,000 m2 [183,000–377,000 ft2]), and significant capital savings in civil-structural works can be achieved by optimizing this slope and the overall elevation difference in the plant.
In performing process design, the responsible engineer must consider a design range that may extend from 160% of nominal design flow down to 75%, depending on conditions. To illustrate this point, a plant with a nominal capacity of 75,000 metric tons (82,673 tons) per day may be required to process more than 120,000 metric tons (132,277 tons) per day of softer ore during the initial several years of mine life. The same plant must also function effectively when turned down to process only 56,250 metric tons (62,000 tons) per day during an outage at the mine or of a mine conveyor.
A key requirement, then, in designing a slurry handling system is to not only design for a maximum flow rate, as is common in most hydraulic engineering applications, but also consider in the design a minimum flow rate less than half that of the maximum. This aspect of design is an important consideration for transporting solids and is discussed later in this paper.



2009 Bechtel Corporation. All rights reserved.


ABBREVIATIONS, ACRONYMS, AND TERMS
2D: two-dimensional
3D: three-dimensional
CAD: computer-aided design
CFD: computational fluid dynamics
SAG: semi-autogenous grinding
USACE: US Army Corps of Engineers
VL: limit velocity

Generally, the hydraulic design of in-plant gravity slurry handling systems is based on Newtonian hydraulic engineering principles regarding turbulence. Not only must a system be able to accommodate maximum flow, it must meet the key requirement of transporting solids particles at all times. High velocities are therefore required in pipelines and launders to avoid deposition and blockage, as described in Discussion 3 below. The required high velocities result in launders for slurry handling that are typically designed for a supercritical flow regime. Unique design considerations come into play with this regime, particularly when combining flow streams (see Discussion 1), at launder bends (see Discussion 2), and at launder interfaces with other equipment (see Discussion 4). Some of the sequential process steps occur in parallel trains, which necessitates combining some slurry flows and splitting others. Some of the splits require control of both gross volume and solids volume, as well as a uniform slurry particle size distribution in the split(s). Designing distributors and slurry transfer boxes for high turbulence is important to ensure that solids remain suspended and uniformly distributed (see Discussion 5).
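As a small illustration of the supercritical-regime sizing implied above, the sketch below uses Manning's equation (mentioned later in this paper for approach-flow calculations) to estimate the normal depth, velocity, and Froude number of a rectangular launder. The flow rate, width, slope, and roughness value are hypothetical, not project data:

```python
import math

def normal_depth(Q, b, S, n, g=9.81):
    """Normal depth of a rectangular launder from Manning's equation (solved by
    bisection), plus the corresponding velocity and Froude number."""
    def q_of(y):                      # discharge carried at depth y
        A = b * y
        R = A / (b + 2.0 * y)         # hydraulic radius
        return A * R ** (2.0 / 3.0) * math.sqrt(S) / n
    lo, hi = 1e-4, 10.0
    for _ in range(80):               # q_of(y) increases with y
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if q_of(mid) < Q else (lo, mid)
    y = 0.5 * (lo + hi)
    V = Q / (b * y)
    Fr = V / math.sqrt(g * y)
    return y, V, Fr

# Hypothetical check: 0.8 m wide launder, 2% slope, n = 0.012, Q = 0.9 m3/s
y, V, Fr = normal_depth(0.9, 0.8, 0.02, 0.012)
print(round(y, 3), round(V, 2), round(Fr, 2))   # Fr > 1 indicates supercritical flow
```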



Trend Toward Larger Concentrator Plants, and Resulting Design Challenges
Figure 1 illustrates the continuing trend of constructing larger copper concentrator plants to benefit from economies of scale. This trend is represented by the increasing power applied to single semi-autogenous grinding (SAG) mills. [1] One consequence of the move toward increased capacity is that slurry-handling equipment must now be able to convey much larger flows than handled with previous designs. With these changes it has been necessary to review the hydraulic design criteria and methods used in the recent past and develop an updated toolkit for hydraulic design reflecting the new process requirements.

Figure 1. Copper Concentrator Size Trend, Represented by Applied SAG Mill Power [1] (MW per SAG mill versus year, 1950–2010)


UPDATING THE HYDRAULIC DESIGN TOOLKIT: KEY DESIGN ELEMENTS

Discussion 1: Combining High Velocity Supercritical Flows

General
Given the supercritical flows in the launders, combining them in slurry systems poses a particularly difficult design challenge. To help visualize the difficulties, an analogy can be made between subcritical and supercritical flows in launders and subsonic and supersonic flows in gas dynamics. The surface waves generated in supercritical open channel flows can be thought of as a visual representation of the shock wave patterns in gas dynamics. Just as the design of supersonic aircraft (recalling the Concorde) mandated a shape significantly different from that of the subsonic Boeing 747 or Airbus 380, the design of bends and junctions for supercritical flow must consider free-surface waves and shapes very different from those of subcritical flows.
Launders must be designed so that a hydraulic jump (a transition from supercritical to subcritical flow with an attendant increase in the flow depth) does not occur and cause overtopping of the sides of the launder. Here, the analogy with gas dynamics is the transition from supersonic to subsonic flow with a shock wave, with the change in pressure across the shock corresponding to the change in flow depth. Waves and splashing in supercritical flows require increases in the height of launder sides and also affect the overall layout of the system.
Prior to today's modern systems with large flows, it was common to join launders converging at 90 degrees (or nearly at right angles) by dropping the combining flow vertically into the main collection launder with a drop box. Ninety-degree changes in launder alignment were also handled with drop boxes because of unsatisfactory experience with bends in launders. Not only did this approach require a difference in elevation that translated into taller buildings, more elevated equipment, and longer structural support columns throughout the facility, but it also created problems with splashing and overtopping of the main launder when the vertical stream joined the supercritical flow in the collection launder. In one Bechtel project, eliminating the vertical changes in elevation by suitable design of launder connections at grade reduced the height of the entire mill building by more

than 600 millimeters (24 inches), resulting in a cost savings of about US$600,000.

Launder Junctions at Grade
Joining launders at the same elevation with a straightforward tee connection simply does not work. If the angle is less than 90 degrees, there are designs that can be used successfully, with guidance from the literature on supercritical junctions for flood control channels by the US Army Corps of Engineers (USACE). [2] (These designs are complex and are not discussed here.) For 90-degree connections, Bechtel has successfully designed 90-degree bends in the joining launder upstream from the junction so that the two flows join as parallel streams. For this to work, the flow depths and velocities of the two streams should be nearly equal. Generally, this condition is satisfied only at the design flow rate. Fortunately, in supercritical launder flows, if the velocity and depth requirement is met at the design flow, for all practical purposes it remains satisfied over a range of flows encompassing the plant operating conditions. In addition, if the flow in the joining launder is less than the design flow rate, the junction downstream from the two joining streams acts as an expansion. Expansions in supercritical flow are generally the least susceptible to unacceptable cross waves, so a properly designed junction operates satisfactorily without launder covers.
In some cases, Bechtel has had to design launder contractions in the combining launder to achieve the correct flow conditions for the two joining streams. The design of supercritical flow expansions and contractions is well documented in the literature. Particular attention needs to be paid to cross-waves in contractions. The full range of flows should be considered in the design, since the cross-waves are very dependent on the approach flow Froude number (a dimensionless parameter that characterizes the importance of gravitational effects such as waves in open channel flow), which often varies significantly with approach flow velocity and depth. Expansions should be gradual, not abrupt, to avoid flow separation from the walls of the transition. A separated flow leads to excessive cross-waves in the launder, poor flow distribution in the launder downstream from the expansion, and zones in which sanding followed by plugging can occur.

Vertical Combining of Flows
In-plant arrangements incorporate multiple slurry launders and pipes. Many in-plant processes are composed of several individual units that deliver separate slurry streams downstream or upstream in the production line.



Figure 2. General Sketch of Vertical Combined Flows Showing Main Variables Involved [3]

For example, grinding area hydrocyclones (devices that classify, separate, and sort particles in a liquid suspension based on particle densities) work in parallel, each one discharging its overflow at discrete points into a launder to be transported to the flotation process. The underflow from the hydrocyclones is returned for further grinding. The challenge for Bechtel is to improve the hydraulic design of this equipment, which traditionally has been designed based on old rules of thumb that result in conservative

over-sizing and/or poor hydraulic performance in large, modern plants. In particular, a poorly designed supercritical combining flow can result in the formation of a hydraulic jump, which can lead to spillage and uncontrolled overflows. A sound hydraulic design leads to improved hydraulic performance under these critical conditions. The vertical combining of high velocity supercritical flows is a typical case of rapidly varied flow; therefore, an approach using the momentum equation is needed. Figure 2 is a graphical depiction of vertical combined flows and the main variables involved.

For a defined control volume, the momentum equation can be stated as follows: [3]

$$\frac{\rho_1 Q_1^2}{A_1} + \rho_1\, g\, \bar{y}_1 A_1 + \frac{\rho_2 Q_2^2}{A_2}\cos\theta_2 + W\sin\theta = \frac{\rho_3 Q_3^2}{A_3} + \rho_3\, g\, \bar{y}_3 A_3 + \tau_0 P_m L \qquad (1)$$

Where:
ρ = fluid density in a section (designated by numerical subscript)
Q = flow in a section (designated by numerical subscript)
A = cross-section area in a section (designated by numerical subscript)
g = gravity constant
ȳ = vertical distance between flow surface and mass center of cross-section area (designated by numerical subscript)
θ2 = slope of incoming flow jet with respect to the horizon
θ = slope of main collection launder
W = weight of fluid contained in control volume
τ0 = mean shear stress in collection launder
Pm = mean wetted perimeter in control volume
L = length of control volume


Typically, Q1, Q2, ρ1, ρ2, θ, and θ2 are known data, as are B1 (launder width in section 1) and B3 (launder width in section 3) for rectangular launders, or D1 (launder diameter in section 1) and D3 (launder diameter in section 3) for U-shaped or circular launders. V2 is calculated separately, considering special arrangements in each case (for this purpose, gradually varied flow calculations are usually required). In solving Equation 1, the y3 value (the flow depth immediately downstream of the confluence) is of interest to ensure that the launder has been designed with enough freeboard and to check that the flow remains supercritical. Especially when dealing with slurry flows, maintaining the supercritical condition is essential to keep the sediments moving, while subcritical flow is associated with the risk of sanding.
Figure 3 illustrates three possible solutions for the vertical combining of flows. Figures 3(a) and 3(b) show combining of flows with enough momentum to ensure that section 1 is not influenced by downstream conditions. This is the preferred hydraulic design condition. In Figure 3(a), section 3 presents decelerated flow conditions, while in Figure 3(b), section 3 presents accelerated flow conditions. These hydraulic conditions can be established by solving Equation 1, and the solution is found without varying y1 (flow depth in section 1, which is equal to y1a, the depth of flow approaching section 1, in these cases).
In Figure 3(a), the confluence of these flows produces the increase of y3 to reach the momentum equilibrium. In cases like this, it is important to have enough freeboard and to maintain the supercritical condition, especially in the presence of slurry flows. Downstream of section 3, the flow tends to accelerate to reach the normal depth. In Figure 3(b), the confluence produces the decrease of y3 to reach the momentum equilibrium. Here, it is important to check the maximum velocity in section 3 to control local abrasion. Downstream of section 3, the flow tends to decelerate to reach the normal depth.

In cases different from those above, the left side of Equation 1 (momentum in the direction of the flow) is not enough to reach the momentum equilibrium, taking into account just the approaching depth y1a. This means that there is a third possible water surface profile, as shown in Figure 3(c). Figure 3(c) shows combining of flows with section 1 influenced by downstream conditions. From section 3 and upstream until the hydraulic jump, decelerated flow conditions are presented. In these cases, it is not possible to reach the momentum equilibrium without changing the depth y1. Hence, y1 increases, generating subcritical conditions, which should be avoided where possible. The subcritical conditions force a hydraulic jump to occur upstream at a distance X that depends mostly on the slope of the collecting launder.
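A rough illustration of how Equation 1 can be solved numerically for y3 in a rectangular collection launder is sketched below. For simplicity it neglects the weight (W sin θ) and wall-shear (τ0 Pm L) terms and assumes equal launder widths (B1 = B3); the input values are hypothetical and are not the data behind Figures 4 and 5:

```python
import math

def momentum(y, rho, Q, B, g=9.81):
    """Momentum function M(y) for a rectangular section of width B."""
    A = B * y
    return rho * Q**2 / A + rho * g * (y / 2.0) * A

def solve_y3(Q1, Q2, y1a, V2, theta2_deg, B, rho1=1300.0, rho2=1300.0, g=9.81):
    """Return the supercritical root y3 of the simplified Equation (1),
    or None if only a subcritical solution exists (the hydraulic-jump
    case of Figure 3(c))."""
    Q3 = Q1 + Q2
    rho3 = (rho1 * Q1 + rho2 * Q2) / Q3            # simple mixing assumption
    A2 = Q2 / V2                                   # incoming jet area
    lhs = (momentum(y1a, rho1, Q1, B, g)
           + rho2 * Q2**2 / A2 * math.cos(math.radians(theta2_deg)))
    yc = (Q3**2 / (g * B**2)) ** (1.0 / 3.0)       # critical depth for Q3
    f = lambda y: momentum(y, rho3, Q3, B, g) - lhs
    if f(yc) > 0.0:
        return None                                # no supercritical solution
    lo, hi = 1e-4 * yc, yc                         # M(y) decreases on (0, yc)
    for _ in range(80):                            # simple bisection
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) > 0.0 else (lo, mid)
    return 0.5 * (lo + hi)

# Hypothetical example: result is roughly 0.4 m for these inputs.
print(solve_y3(Q1=1.0, Q2=0.2, y1a=0.30, V2=3.0, theta2_deg=45.0, B=0.75))
```

Sweeping Q1/Q2 and y1a over a range of values with a routine of this kind is how design curves of the type shown in Figure 4 can be generated.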

Figure 3. Three Possible Solutions for Vertical Combining of Flows [3]: (a) section 1 not influenced by downstream conditions, section 3 decelerated flow conditions; (b) section 1 not influenced by downstream conditions, section 3 accelerated flow conditions; (c) section 1 influenced by downstream conditions, decelerated flow conditions from section 3 upstream to the hydraulic jump


Figure 4. Design Curves for Rectangular Concrete Launder (y3/B versus y1a/B for Q1/Q2 = 0.25 to 5.00; slope = 1%, V2 = 3.0 m/s, θ2 = 45°, ρ1 = ρ2, 0.025 < Q1 and Q2 < 2.000 m3/s, 0.50 < B1 = B3 < 1.25 m)

Besides the detailed approach presented above, Bechtel has developed practical design guideline tools for the plant designer to use during the first steps of studies and design, when design changes are common, to check space availability (three-dimensional [3D] models) and estimate costs. Figures 4 and 5 show design curves for a rectangular concrete launder that can be used as a first approach to evaluate whether flow conditions upstream of the confluence are affected by the incoming flow and to provide a preliminary estimate of flow depths for launder sizing. The data points were generated based on numerical solution of Equation 1, and the curves were fitted to these data points. The example presented is for a receiving launder with a 1% slope and an incoming flow velocity of 3.0 m/s (9.8 ft/s) that enters with an angle θ2 = 45°. Flow rates Q1 and Q2 vary from 0.025 to 2.0 m3/s (0.032 to 2.6 yd3/s), and width B varies from 0.50 to 1.25 m (1.64 to 4.10 ft) (same width for section 1 and section 3).

Discussion 2: Bends in Launders

Introduction
Any change in launder alignment with supercritical flows causes shock waves in

the channel that can lead to splashing and overtopping of the sides of the launder. Launder bends are a case in point. The height of the waves cannot be determined with Manning's equation (an equation used to analyze open channel flow) or any of the methods used for subcritical flow. Failure to recognize the problems with supercritical flow in launder bend design has resulted in extensive use of launder covers and continual operation and maintenance problems that have cost the owner both time and money. Adhering to the methods outlined in this paper makes it possible to design bends that offer assurance that launder covers are not necessary, the bend will not sand up, and the launder sides will not be overtopped by splashing.

General Characteristics of Supercritical Flow in Launders
Figure 6 shows a definition sketch for supercritical flow in a launder bend. The approach flow velocity is V1, the depth is y1, and the approach flow Froude number is F1 = V1/(gy1)^(1/2). When the flow enters the bend, two shock waves form due to the change in direction. A positive wave forms at the outside of the bend, and a negative wave forms at the inside. The angle that the shock wave makes with the tangent to the circular curve is denoted here as β.
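For a quick check of the approach flow, the Froude number F1 and the initial wave angle can be estimated as follows. The relation sin β = 1/F1 comes from classical supercritical-flow theory (after Ippen) and is not quoted in the paper; the velocity and depth used are hypothetical:

```python
import math

def approach_froude(V1, y1, g=9.81):
    """Froude number of the approach flow."""
    return V1 / math.sqrt(g * y1)

def wave_angle_deg(F1):
    """First shock-wave angle from the classical relation sin(beta) = 1/F1;
    only meaningful for supercritical flow (F1 > 1)."""
    return math.degrees(math.asin(1.0 / F1))

# Hypothetical approach flow: 4 m/s at 0.25 m depth
F1 = approach_froude(4.0, 0.25)
print(round(F1, 2), round(wave_angle_deg(F1), 1))
```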

Figure 5. Regions Influenced and Not Influenced by Downstream Conditions (y3/B versus y1/B; slope = 1%, V2 = 3.0 m/s, θ2 = 45°, ρ1 = ρ2, 0.025 < Q1 and Q2 < 2.000 m3/s, 0.50 < B1 = B3 < 1.25 m)


Figure 6. Supercritical Flow Conditions in Bends

As illustrated in Figure 7, the first maximum from the beginning of the bend occurs at C at an angle θ0. The first minimum on the inside of the bend occurs at D, directly across the channel from the maximum located at C. The pattern of successive maximums and minimums then repeats along the length of the bend.

Figure 7. Layout for Compound Curves for Supercritical Flow Conditions in Bends

Analogous to a highway curve that is banked or superelevated, the free surface in a launder bend is superelevated, as shown in Figure 7. In subcritical flow, the rise in liquid surface at the outside of the bend in a launder of width B is given by Equation 2, where g is the


acceleration of gravity, V is the average velocity of the flow approaching the bend, and rc is the radius of curvature of the bend.

$$\Delta y = \frac{V^2 B}{2\, g\, r_c} \qquad (2)$$
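A short numerical illustration of Equation 2, using hypothetical values; the doubling for supercritical flow noted in the following paragraph is included as an option:

```python
def superelevation(V, B, rc, g=9.81, supercritical=False):
    """Rise in liquid level at the outside of a bend per Equation (2); the text
    below notes that the value doubles for supercritical flow."""
    dy = V**2 * B / (2.0 * g * rc)
    return 2.0 * dy if supercritical else dy

# Hypothetical bend: V = 4 m/s, B = 0.8 m, rc = 8 m (rc/B = 10)
print(round(superelevation(4.0, 0.8, 8.0), 3))                       # ~0.08 m
print(round(superelevation(4.0, 0.8, 8.0, supercritical=True), 3))   # ~0.16 m
```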

In supercritical flow, this rise in liquid level is twice the value for subcritical flow in bends. This leads to a higher liquid level on the outside of the bend, with the potential for overtopping the launder sides. Equally important, the decrease in liquid level on the inside of the bend is twice as much as for subcritical flow. This low liquid level can lead to a significant decrease in the ability of the liquid to transport the coarser fraction of the slurry, resulting in sanding at the bend and potential plugging of the launder. The particle size distribution for the slurry is required for a complete analysis, but for preliminary design, the minimum flow depth on the inside of a bend should not be less than 100 millimeters (4 inches).
For rectangular launders, the effects of the cross-waves in a bend can be reduced by providing a compound curve that consists of an inlet transition curve with radius 2rc, a central curve with radius rc, and an exit transition curve of radius rc. With a compound curve, the superelevation in the bend for a rectangular-shaped launder is the same as for the subcritical flow given by Equation 2. The use of a compound curve is required where space limitations restrict the radius of curvature or the flow velocities in the launder are high. Process systems often require the compound curve, whereas tailings launders may be designed with a large radius of curvature. The design of bends in launders using the methods presented here is restricted to bends with a radius-of-curvature to launder-width ratio of rc/B = 10. Too sharp a bend leads to separation of the flow from the inside of the bend and potential hydraulic jumps at the bend. [4]

Discussion 3: Minimum Launder Slope for Coarse Solids Transport

Launder Slope and Limit Velocity
Key design parameters for slurry launders are related to the minimum flow velocity needed to transport solids particles and the minimum longitudinal launder slope needed to ensure that this velocity is maintained across the full range of input conditions. Methods traditionally used to estimate the minimum velocity in launders are generally based on the premise that solids are transported as suspended load. However, when applied to transport of coarse solids, these methods result in excessively high limit velocities. An alternative design approach is presented here that allows finer solids to be transported as suspended load and coarser solids as bed load.

Slurry Transport Mechanisms
Slurry transport mechanisms can be divided into three fractions [5]:
• In the pseudo-homogeneous fraction, the fine particles are uniformly suspended in the liquid to form a pseudo-homogeneous equivalent fluid. As a first approximation, the maximum size of the pseudo-homogeneous fraction can be taken as 0.15 millimeter (0.006 inch).
• In the heterogeneous fraction, the intermediate-size particles are transported in suspension with a vertical concentration gradient. The methods given by Green, Lamb, and Taylor [6] can be used to estimate the limit velocity to maintain the solids particles in suspension.
• In the stratified fraction, the coarse particles are transported as bed load by sliding or bouncing along the bottom of the channel. The limit velocity required to maintain the stratified fraction in motion is discussed next.

Limit Velocity for Stratified Fraction Transport
Incipient motion of non-cohesive sediment particles in alluvial channels is typically evaluated based on the Shields number (a non-dimensional expression of the relative mobility of a sediment particle), discussed extensively in the sediment transport literature. [7] However, transport of solids in launders differs from transport in alluvial channels in two main respects:
• Limit velocity in launders is the velocity required to avoid stationary deposits. In alluvial channels, incipient motion is the start of motion of the deposited bed.
• Bed load transport in launders takes place over a rigid bed. In alluvial channels, bed load transport is over an alluvial bed.
Experimental studies conducted at the University of Newcastle upon Tyne in the UK in the early 1970s [8] investigated the incipient motion and transport of solids in pipes and rigid bed channels. Nalluri and Kithsiri [9] extended these investigations to develop empirical relations for determining the minimum velocity in a rigid bed channel to avoid stationary deposits of


solids. The limit velocity for their methods is established at the limit condition of a stationary bed deposit. It is good design practice to avoid this condition by designing launders with a minimum design velocity at least 10% greater than the limit velocity.

Case Study: Launder Design Based on Foregoing Methodology
The methodology just presented in Discussion 3 is used here to assess the hydraulic conditions in a ball mill feed launder in a copper concentrator. The process design data for the launder is given in Table 1. Based on the fact that 20% of the solids material is finer than 0.15 millimeter (0.006 inch), the relative density of the pseudo-homogeneous
Table 1. Ball Mill Feed Launder Process Design Data
Process Parameter: Design Value
Minimum Discharge: 1,000 m3/hr
Solids Concentration: 70% by weight
Solids Relative Density: 2.8
Particle Size Distribution: d20 = 0.15 mm; d50 = 1.30 mm; dmax = 40.00 mm

fraction is evaluated to be 1.26. The analysis presented is for a rectangular launder with an internal width of 500 millimeters (about 20 inches). Using the equivalent fluid to transport the heterogeneous and stratified fractions, numerical simulations are conducted to generate curves of limit velocities for suspended and bed load transport for the range of solids particle sizes plotted in Figure 8. Also identified in Figure 8 are regions of different sediment transport mechanisms. It can be seen that particles larger than approximately 3 millimeters (0.1 inch) can be transported by either bed load or suspended load, depending on the actual launder flow velocity. However, particles smaller than 3 millimeters (0.1 inch) will always be transported in suspension, since the limit velocity required to suspend the particles is lower than the limit velocity for bed load transport. Again referring to Figure 8, flow velocities required to transport solids particles in suspension are shown to increase rapidly as the particle size increases. For example, 10 millimeter (0.4 inch) solids particles require a velocity of 6 m/s (about 20 ft/s) to be transported in suspension. This is often impractical for launder design, since it causes high wear on the launder lining. Therefore, it is more desirable to transport the larger particles as bed load. The launder design limit velocity (VL) occurs at



Figure 8. Plot of Transport Regimes for Various Particle Sizes (flow velocity, m/s, versus solids particle size, mm; regions of no transport, bed load transport, and suspended transport bounded by the bed load and suspended transport limit velocity curves, with VL marking the launder design limit)


the intersection of the suspended and bed load limit velocity curves, at approximately 3.5 m/s (about 11.5 ft/s). Should the flow velocity be lower than this limit velocity, stationary solids deposits would form, potentially leading to blockage of the launder. Therefore, designing this launder for a minimum velocity of 4 m/s (13 ft/s) would result in particles up to about 4.5 millimeters (0.2 inch) being transported in suspension, while coarser particles would be transported as bed load. Thus, the limit condition of stationary deposits is avoided.

Discussion 4 – Flow Conditions Approaching Sampler Boxes and Drop Pipes

Sampler Boxes
Sampler boxes (see Figure 9) are typically provided by specialized vendors who are not involved in the overall project and therefore may not know the specific hydraulic conditions of the flow approaching the sampler box. Integrating these samplers into the plant layout without integrating the hydraulic design can lead to inadequate sampling performance. As described previously, launders and pipes with free surface flow (in this case, approaching sampler boxes) are usually designed for supercritical conditions in view of solids transportation issues (see Discussion 3). Sampler approach flow conditions are typically calculated using the Manning or Darcy-Weisbach formulas when Newtonian fluids are involved and using gradually varied flow equations when needed.
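To illustrate the approach-flow calculation described above, the following sketch solves the Manning equation for normal depth in a rectangular launder and checks the Froude number for supercritical conditions. It is a minimal illustration only: the flow and width are taken from the Table 1 case study, while the slope and Manning roughness are assumed values, not project data.

```python
import math

def normal_depth(Q, b, S, n, tol=1e-6):
    """Solve Manning's equation Q = (1/n) * A * R^(2/3) * sqrt(S)
    for the normal depth in a rectangular channel by bisection."""
    def flow(y):
        A = b * y                      # flow area
        R = A / (b + 2.0 * y)          # hydraulic radius
        return (1.0 / n) * A * R ** (2.0 / 3.0) * math.sqrt(S)
    lo, hi = 1e-6, 10.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if flow(mid) < Q:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Example values: flow and width from the case study; slope and n assumed
Q = 0.278   # design flow, m^3/s (1,000 m^3/hr)
b = 0.5     # launder width, m
S = 0.02    # longitudinal slope, m/m (assumed)
n = 0.014   # Manning roughness for a lined launder (assumed)

y = normal_depth(Q, b, S, n)
V = Q / (b * y)                        # mean velocity
Fr = V / math.sqrt(9.81 * y)           # Froude number
print(f"normal depth = {y:.3f} m, velocity = {V:.2f} m/s, Fr = {Fr:.2f}")
# Fr > 1 confirms a supercritical approach to the sampler box,
# so a hydraulic jump must form at the downstream control.
```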

When the launder reaches the sampler box, the supercritical flow encounters a hydraulic control characterized by subcritical conditions (the upstream chamber into the sampler box). The flow conditions necessarily change from supercritical to subcritical, producing a hydraulic jump. Depending on the momentum provided by the approaching flow, the launder slope, and the geometric characteristics of the launder, the hydraulic jump is either into the sampler box itself (upstream chamber) or into the launder (closer to or farther from the sampler box inlet). If the hydraulic jump is into the launder, it is necessary to determine the distance from the box inlet to the point upstream where it occurs. To obtain enough turbulence to avoid settling of particles or segregation of particle size distribution, a short distance is best. This is similar to the case involving a hydraulic jump that is presented in the Vertical Combining of Flows subtopic of Discussion 1. Use of the momentum equation can help determine whether the hydraulic jump is located outside or inside the box, as well as the respective dimensional profiles. Engineering design can then ensure enough freeboard to avoid spillages.

Drop Pipes
Drop pipes are typically used for vertical discharges. The hydraulic characteristics of the flow approaching the vertical discharge are calculated the same way as described previously for sampler boxes or by using typical equations for full pipe flow, if needed. In such cases,


Note: Operation level (2) is constant for a given flow; liquid level (1) is equal to level (2) plus head losses caused by the flow through the opening.

Figure 9. Sketch of Typical Sampler Box
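The momentum-equation check mentioned above can be sketched as follows. This is an illustrative calculation, not the authors' design procedure: it treats the slurry as an equivalent clear fluid in a rectangular section and uses the Bélanger relation (the momentum equation applied across a hydraulic jump) with assumed approach conditions.

```python
import math

def sequent_depth(y1, V1, g=9.81):
    """Sequent (conjugate) depth downstream of a hydraulic jump in a
    rectangular channel, from the momentum equation (Belanger relation)."""
    Fr1 = V1 / math.sqrt(g * y1)
    y2 = 0.5 * y1 * (math.sqrt(1.0 + 8.0 * Fr1 ** 2) - 1.0)
    return Fr1, y2

# Assumed supercritical approach conditions (illustrative only)
y1 = 0.15   # approach depth, m
V1 = 3.5    # approach velocity, m/s

Fr1, y2 = sequent_depth(y1, V1)
print(f"Fr1 = {Fr1:.2f}, sequent depth y2 = {y2:.2f} m")
# If the subcritical level imposed by the sampler box exceeds y2, the jump
# is pushed upstream into the launder; if it is lower, the jump forms inside
# the box. Launder freeboard should be checked against y2 in either case.
```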


Figure 10. Sketch of Typical Drop Pipe

the hydraulic control could be the entrance of the vertical pipe, modeled as an orifice (see Figure 10). Considering the orifice diameter and assuming that there is full pipe flow immediately upstream of the vertical discharge, head losses can be calculated for the design flow required. Once again, the momentum equation helps determine whether the hydraulic jump is located right above the orifice or upstream into the pipe. When a hydraulic jump occurs upstream of the orifice, it is necessary to determine its location and to verify if the flow condition affects solids transport.

Discussion 5 – Use of Computational Fluid Dynamics Modeling for Distribution Boxes

The Need for Computational Fluid Dynamics
Computational fluid dynamics (CFD) is used to model fluid dynamics in three dimensions. CFD software employs computer-aided design (CAD) tools to construct a computational grid (i.e., mesh), advanced numerical solution techniques, and state-of-the-art graphic visualization. CFD eliminates the need for many simplifying assumptions because the physical domain is replicated in the form of a computerized prototype. A typical CFD simulation begins with a CAD rendering of the geometry, adds physical and fluid properties, and then prescribes natural system boundary conditions. Distribution boxes are used in many areas of a concentrator to distribute liquid and solids evenly among several process trains without incurring excess deposition (sanding) and successfully passing material of considerable size variation without segregation. Non-uniform

flow distribution of slurry, the solid particle component in particular, downstream of the distribution box can result in less-than-optimum production yields. CFD can be used during detail design to analyze expected performance of a distribution box and to optimize its design. The following CFD analysis was based on a specific case study from an operating plant in Malaysia. Onsite plant engineer reports indicated that equal pulp distribution (volume, solids, and particle size) was not being obtained from the distribution box. A limited amount of data taken at the plant was made available for comparison with the CFD model. Site observations were also reported to provide additional guidance and feedback.

CFD Model Description
The CFD model was set up using design drawings obtained from the plant. The model was composed of an upstream, open-channel launder through which slurry flows with increasing velocity due to a downward slope. In this plant, the launder is also curved, which induces a nonuniform velocity upon entrance into the SAG mill discharge sump, which acts as a distribution box among four ball mills. The distribution box baffling and size were selected to create mixing that, at minimum, would tend to even out the distribution of the coarse fraction to the ball mills. During operation, the slurry flow spills off the launder into the middle of the distribution box and impinges on the back wall of the center column. The flow drops into the midsection of the box, where it continues under the baffle created by the center column and then exits via four openings into launders






Figure 11. Distribution Box Model (FR-L = Front-Left, FR-R = Front-Right, BA-L = Back-Left, BA-R = Back-Right outlets)

feeding the four ball mills. The distribution box model includes a portion of the launder that starts from a plane located sufficiently upstream of the box to prescribe a velocity boundary condition calculated from the launder model. The effect of sand buildup on the bottom of the box was accounted for in a simplified manner by shortening the section below the four outlets. Sand buildup is an important operational issue for the slurry, and it is believed that the manner in which buildup occurs can affect slurry flow distribution. Figure 11 shows the resulting surface geometry of the distribution box model. Each of the four outlets is identified for reference.

The solids passing through the distribution box are not uniform in size. The SAG screen undersize distribution, as provided by plant operators, is shown in Figure 12.

CFD Model Results
Figure 13 illustrates the results of the CFD model of the distribution box. Figure 13(a) shows an iso-surface of fluid volume fraction, which represents the calculated two-dimensional (2D) position of the fluid surface in the box in a 3D view. As mentioned previously, the transient, sloshing behavior of the fluid is realistically captured. The relatively uniform level indicates that the box is performing its function of mixing and distributing flow effectively to all four outlets. However, if the CFD solution is examined closely, the non-uniform aspects of the flow distribution to the outlets can be observed. Figure 13(b) shows flow-stream lines started from the launder exit, which are calculated in the CFD post-processor. The progression of the flow inside the box is shown in this plot. Figure 13(b) also shows how the back wall of the mid-section causes the flow to deflect toward the front region of the box, where most of the turbulent mixing seems to occur, and that the resulting distribution of stream lines is weighted more heavily toward the front outlets (FR-L and FR-R in Figure 11). This effect has been observed in practice as plant operators reportedly have had to throttle the front outlets (using valves) to achieve a more balanced flow rate between the front and back outlets.

Figure 12. Particle Size Distribution for Solids Plot (cumulative distribution, %, versus measured particle size, microns)


Figure 13. CFD Model of Distribution Box Results: (a) Iso-Surface Profile of Slurry in Box; (b) Flow-Stream Lines Inside Distributor Box Started at Launder Outlet


Figure 14. Distribution of Solid Particles from Distributor Box Outlets Based on CFD Predictions

The stochastic method of tracking particles is generally in good agreement with observations made at the site. Lagrangian model 1 particle traces were executed in the CFD program for a range of particle sizes approximating the range of measured particles reported from site data. Size representations of 75, 300, 850, 1,700, and 3,360 microns were used for the calculations. The statistics were computed for the four outlets

as shown in Figure 11, and the corresponding results are shown in Figure 14. These results augment the earlier fluid flow model results showing a higher percentage of flow to the front outlets. The differences between the left and right sides are likely caused by the upstream curvature of the launder. In this case, slightly more solids flow out the left side from both the front and the back of the box. It appears that the trends are consistent between larger and smaller particles, although the distribution for smaller particles is more uniform. This behavior was in agreement with observations at the plant.

1 A mathematical model used to compute trajectories of a large number of particles to describe their transport and dispersion in a medium
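For readers unfamiliar with the Lagrangian approach mentioned above, the sketch below advances a single particle through a prescribed, uniform carrier velocity field using a Stokes-type drag law and explicit time stepping. It is purely illustrative: production CFD codes track many particles in three dimensions through the locally resolved turbulent flow, and every number here is an assumed value.

```python
import math

# Assumed, illustrative properties (SI units)
rho_f, mu = 1260.0, 0.003     # carrier fluid density (kg/m^3) and viscosity (Pa.s)
rho_p, d = 2800.0, 850e-6     # particle density (kg/m^3) and diameter (m)
u_f = (2.0, 0.0)              # uniform carrier velocity (m/s): horizontal only
g = (0.0, -9.81)

def drag_accel(v_p):
    """Particle acceleration from Stokes-type drag plus buoyant weight."""
    tau = rho_p * d**2 / (18.0 * mu)          # particle response time (Stokes)
    ax = (u_f[0] - v_p[0]) / tau + g[0] * (1.0 - rho_f / rho_p)
    ay = (u_f[1] - v_p[1]) / tau + g[1] * (1.0 - rho_f / rho_p)
    return ax, ay

# Explicit Euler integration of the particle trajectory
x, v = [0.0, 0.5], [0.0, 0.0]
dt, t_end, t = 1e-4, 0.5, 0.0
while t < t_end:
    ax, ay = drag_accel(v)
    v = [v[0] + ax * dt, v[1] + ay * dt]
    x = [x[0] + v[0] * dt, x[1] + v[1] * dt]
    t += dt
print(f"particle position after {t_end:.1f} s: x = {x[0]:.2f} m, y = {x[1]:.2f} m")
```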


CONCLUSIONS

With major mining companies continuing to plan for new or expanded mining and ore concentrating projects, Bechtel's engineering specialists have developed and applied detailed methods of hydraulic analysis to meet the challenge of designing ever larger slurry handling systems for the concentrators. The approaches in this updated toolkit for hydraulic design are applicable to all stages of the project design life cycle, from preliminary process plant layout design to system commissioning and operation. Benefits afforded by the effective use of these methods include improved hydraulic performance and reliability, optimized system design, and savings in civil-structural capital costs realized by avoiding the use of overly conservative designs. In one example of how benefits may be derived, the effective design of launder connections in a mill building enabled the use of lower slopes and lower launder side-walls, reducing the length of structural steel columns and the amount of elevated equipment needed. The net result was a 600 millimeter (24 inch) reduction in overall building height at a savings of US$600,000 in total installed cost.

REFERENCES

[1] W.P. Imrie, New Developments in the Production of Non-Ferrous Metals, Invited Plenary Lecture, Proceedings of European Metallurgical Conference (EMC 2005), Dresden, Germany, September 18–21, 2005, p. XVII (abstract), http://www.emc.gdmb.de/2005/proceedings2005cont.pdf.

[2] R.L. Stockstill, Lateral Inflow in Supercritical Flow, Report No. ERDC/CHL TR-07-10, Coastal and Hydraulics Laboratory, US Army Corps of Engineers, September 2007, http://chl.erdc.usace.army.mil/Media/9/5/4/ERDC-CHL%20TR-07-10.pdf.

[3] J.M. Adriasola, Confluencia Vertical de Flujos Supercríticos de Alta Velocidad – Modelación Preliminar y Análisis, Proceedings of the XIX Congreso Chileno de Ingeniería Hidráulica, Viña del Mar, Chile, October 21–24, 2009 (on CD), http://www.congresohidraulica.cl/index2.htm.

[4] A. Valiani and V. Caleffi, Brief Analysis of Shallow Water Equations Suitability to Numerically Simulate Supercritical Flow in Sharp Bends, Journal of Hydraulic Engineering, ASCE, Vol. 131, No. 10, October 2005, pp. 912–916, http://cedb.asce.org/cgi/WWWdisplay.cgi?0527792.

[5] K.C. Wilson, G.R. Addie, A. Sellgren, and R. Clift, Slurry Transport Using Centrifugal Pumps, 3rd Edition, Springer Science+Business Media, Inc., New York, NY, 2006, http://www.amazon.com/Slurry-TransportUsing-Centrifugal-Pumps/dp/0387232621.

[6] H.R. Green, D.M. Lamb, and A.D. Taylor, A New Launder Design Procedure, Mining Engineering, August 1978, access via http://www.onemine.org/search/?fullText=a+new+launder+design+procedure.

[7] D.B. Simons and F. Sentürk, Sediment Transport Technology – Water and Sediment Dynamics, Water Resources Publications, LLC, Fort Collins, CO, 1976, see http://www.wrpllc.com/books/stt.html.

[8] P. Novak and C. Nalluri, Correlation of Sediment Incipient Motion and Deposition in Pipes and Open Channels With Fixed Smooth Beds, Proceedings of the Third International Conference on the Hydraulic Transport of Solids in Pipes (Hydrotransport 3), May 15–17, 1974, Golden, CO, Paper E4, pp. E4-45–56, access via http://nla.gov.au/anbd.bib-an40253556.

[9] C. Nalluri and M.M.A.U. Kithsiri, Extended Data on Sediment Transport in Rigid Bed Rectangular Channels, IAHR Journal of Hydraulic Research, Vol. 30, No. 6, 1992, pp. 851–856, access via http://cat.inist.fr/?aModele=afficheN&cpsidt=4474106.

ADDITIONAL READING

Additional information sources used to develop this paper include:

C.D. Smith, Hydraulic Structures, University of Saskatchewan Printing Services, Saskatoon, Canada, 1985, pp. 117–119.

Fluent, Inc., Fluent 5 Software User's Manual, 1999.

J.S. McNown and P.N. Lin, Sediment Concentration and Fall Velocity, Proceedings of the 2nd Midwestern Conference in Fluid Mechanics, The Ohio State University, Columbus, OH, State University of Iowa Reprints in Engineering, Reprint No. 109/1952, pp. 401–411.

T.W. Sturm, Open Channel Hydraulics, 1st Edition, McGraw-Hill Book Company, Boston, MA, 2001.

T.W. Sturm, Simplified Design of Contractions in Supercritical Flow, Journal of Hydraulic Engineering, ASCE, Vol. 111, No. 5, May 1985, pp. 871–875, access via http://cedb.asce.org/cgi/WWWdisplay.cgi?8501175.

T.W. Sturm, Closure of Simplified Design of Contractions in Supercritical Flow, Journal of Hydraulic Engineering, ASCE, Vol. 113, No. 3, March 1987, pp. 425–427, http://cedb.asce.org/cgi/WWWdisplay.cgi?8700317.


BIOGRAPHIES
José M. Adriasola, a civil engineer with 10 years of experience in hydraulic engineering, joined Bechtel in 2008. He currently serves as a technical specialist with the Mining & Metals Global Business Unit and participates in hydraulic and hydrologic engineering analysis and design on multiple projects. José's technical knowledge and skills have been applied to hydropower and mining projects in Chile, including the Ralco hydropower plant and the Los Pelambres, Escondida, and Los Bronces copper concentrator plants, among others. José has taught courses in fluid mechanics and urban hydrology and hydraulics at Universidad de los Andes (Santiago, Chile) since 2005, where he has also supervised and performed research related to the efficient use of water in urban environments. His professional memberships include the Chilean Society of Hydraulic Engineering, the Colegio de Ingenieros de Chile, and the International Association for Hydro-Environment Engineering and Research. José has an MS in Hydraulic Engineering from Pontificia Universidad Católica de Chile, Santiago.

Fred A. Locher, PhD, is a principal engineer in Bechtels Geotechnical and Hydraulic Engineering Services Group, with over 35 years of experience in the hydraulic design of structures, including spillways, energy dissipators, and flood control channels; analysis of hydraulic transients in process systems for mining, petrochemical, and power industries; evaluation of scour and sediment deposition in structures and conveyance systems; analyses of non-Newtonian flows in pipelines and open channels; and methodology and design guidelines for slurry transport systems. Fred is a member of the American Society of Civil Engineers and the International Association for Hydro-Environment Engineering and Research. He was a member of ASCEs Task Committee on Standards in Hydraulics and received the ASCE Freeman Award, 19671968, and the Karl Emil Hilgard Hydraulic Prize in 1975. Fred served on the Intake Design Committee for development of the ANSI/HI9.8 American National Standard for Pump Intake Design published in 1998 and is the author of more than 30 publications in technical journals and conference proceedings. Fred received a PhD in Hydraulics and Fluid Mechanics and an MS in Mechanics and Hydraulics, both from the University of Iowa, Iowa City; and a BS in Civil Engineering from Michigan Technological University, Houghton. He is a licensed Professional Civil Engineer in California. Jon M. Berkoe is a senior principal engineer and manager for Bechtel Systems & Infrastructure, Inc.s, Advanced Simulation and Analysis Group. He oversees a team of 20 technical specialists in the fields of CFD, finite element structural analysis, virtual reality, and dynamic simulation in support of Bechtel projects across all business sectors. Jon is an innovative team leader with industry-recognized expertise in the fields of CFD and heat transfer. During his 21-year career with Bechtel, he has pioneered the use of advanced engineering simulation on large, complex projects encompassing a wide range of challenging technical issues and complex physical conditions. Jon has presented and published numerous papers for a wide variety of industry meetings and received several prominent industry and company awards. They include the National Academy of Engineerings Gilbreth Lecture Award, the Society of Mining Engineers Henry Krumb Lecturer Award, and three Bechtel Outstanding Technical Paper awards. Jon holds an MS and a BS in Mechanical Engineering from the Massachusetts Institute of Technology, Cambridge, and is a licensed Professional Mechanical Engineer in California.


Sergio A. Zamorano Ulloa joined Bechtel in 2008 and is a mechanical engineer with the Xtrata Mechanical Group. He has 4 years of engineering experience in the field of mining. In his previous assignment, he assisted and prepared mainly hydraulic calculations concerning channel-free surface fluid flow, pumps, and tanks. In 2008, he was selected as principal mechanical engineer on Bechtels Los Bronces project, near Santiago, Chile. Previously, Sergio was a mechanical engineer for Vector Chile Limitada, Innovatec YNC Ltda., Sergio Contreras y Asoc., and Inconsult, all in Santiago, Chile. Sergio co-authored the short paper, On the Use of the Weibull and the Normal Cumulative Probability Models in Structural Design, published online in January 2007 for Elsevier in Materials & Design 28 (2007) 24962499. He is affiliated with the Colegio de Ingenieros de Chile. Sergio received both Mechanical Civil Engineer and Material Civil Engineer degrees from the Universidad de Chile, Santiago.


Oil, Gas & Chemicals


Technology Papers

Sabine Pass LNG Terminal
A tanker with a cargo of liquid energy is moored at the Sabine Pass liqueed natural gas receiving terminal in southern Louisiana.


PLOT LAYOUT AND DESIGN FOR AIR RECIRCULATION IN LNG PLANTS


Issue Date: December 2009

Abstract: The disposition of waste heat in liquefied natural gas (LNG) plants has become increasingly important as train sizes approach 5 million tons per annum (MTPA). A major contributor to this problem is the large number of fin-fan, air-cooled heat exchangers (ACHEs) typically used to cool the gas to liquid phase. Since ACHEs reject heat to the atmosphere, their effect on local ambient temperature and wind conditions can contribute to loss of LNG production, particularly from the impact on the turbine drivers of refrigeration compressors. To develop and optimize plant layouts that minimize the effects of air recirculation, Bechtel uses computational fluid dynamics (CFD) models. This paper discusses typical air recirculation issues and mitigation measures and presents case studies.

Keywords: air flow, air-cooled heat exchanger (ACHE), computational fluid dynamics (CFD), crosswind, data comparison, heat exchanger, liquefied natural gas (LNG), mitigation, multi-train, propane condenser, self-recirculation, simulation methodology, skirts, stacks, temperature contamination, temperature rise, validation, virtual reality, wind rose

INTRODUCTION

Philip Diwakar
pmdiwaka@bechtel.com

Zhengcai Ye, PhD


zye@bechtel.com

Ramachandra Tekumalla
rptekuma@bechtel.com

As the typical train size in liquefied natural gas (LNG) plants has grown from 2 million tons per annum (MTPA) in 1990 to 4.5 MTPA today, the disposition of waste heat has become increasingly important. And solving this problem will only become more critical in the future, with trains of more than 5 MTPA capacity being considered by several projects. Thus, facility design economics will be driven not only by normal equipment and operating costs, but also by the need to optimize design margin with overall facility arrangement and capacity requirements.

Even with state-of-the-art equipment and thermally efficient designs employing combined cycle power and process integration, a large multi-train facility releases a significant amount of heat. Figure 1 illustrates how this waste heat can affect a facilitys ability to produce at relative design capacities for all potential ambient conditions. The production rate at LNG plants can be very sensitive to the inlet temperatures of compressor turbine drivers and plant air-cooling equipment. The inlet temperatures of this equipment depend on the local wind conditions, terrain, and climate,

David Messersmith
dmessers@bechtel.com

Satish Gandhi, PhD


satish.l.gandhi@conocophillips.com, ConocoPhillips Company

Figure 1. Impact of Plant Capacity on Plant Footprint and Heat Release (plant area per MTPA, m2/MTPA, and heat released per MTPA, MW, versus plant capacity, MTPA, with fitted trend lines)


ABBREVIATIONS, ACRONYMS, AND TERMS


ACHE – air-cooled heat exchanger
CFD – computational fluid dynamics
LNG – liquefied natural gas
MTPA – million tons per annum

as well as the air recirculation caused by exhaust from plant equipment. To design a plant layout for optimal production, an engineer must have a good understanding of the phenomenon of air recirculation within the facility. Bechtel uses computational fluid dynamics (CFD) for this purpose. CFD enables air recirculation impacts to be analyzed for greenfield as well as brownfield sites. Bechtel performs CFD analyses at two scales: macro, to evaluate overall siting requirements such as orientation and spacing, and micro, to examine the air velocity profiles around individual pieces of equipment. This approach is useful in evaluating a site's impact on process performance and in developing design margins for equipment. To demonstrate how CFD is applied in designing LNG plant layouts, this paper studies the interaction of a multi-train LNG facility with its environment. Validation of modeling methodology is also described.

THE IMPACT OF AIR-COOLED HEAT EXCHANGERS

LNG plants typically use a large number of fin-fan, air-cooled heat exchangers (ACHEs) to cool natural gas to the liquid phase. These ACHEs use large axial-flow fans to blow air over finned tubes, thereby removing heat and condensing the process gas. Most manufacturers design and test an ACHE based on a single-bay setup. The performance guarantee of each condenser unit is based on the availability of sufficient air at design temperatures at its inlet face. The total number of condenser units required for a given duty, space considerations, and the equipment layout plan govern the actual construction of bays in the field. However, the performance of each bay in a multiple-bay design may differ from that of a single bay because of variations in air flow distribution and hot air recirculation. Since ACHEs reject heat to the atmosphere, they can affect the local ambient temperature and wind conditions. These effects can lead to loss of LNG production, particularly from their impact on the turbine drivers of the refrigeration compressors used to maintain stored LNG in the liquid phase. As train size increases, the potential impact of ACHEs on plant performance becomes even more significant and further establishes the value of performing CFD studies to evaluate ways of mitigating the effects of air recirculation through plant layout and other measures.
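The sensitivity of ACHE performance to inlet temperature can be illustrated with a simple energy balance. All values in the sketch below are assumed for illustration (an actual bay is rated from vendor performance data); it estimates the air-side temperature rise across a bay from its duty and air flow and shows how a recirculated fraction of that exhaust raises the effective inlet temperature and erodes the approach to the process temperature.

```python
# Assumed, illustrative values for a single ACHE bay
duty_kw   = 2500.0    # heat rejected by the bay, kW
air_flow  = 250.0     # air mass flow through the bay, kg/s
cp_air    = 1.006     # specific heat of air, kJ/(kg.K)
t_ambient = 30.0      # true ambient dry-bulb temperature, deg C
t_process = 55.0      # process-side temperature the bay must approach, deg C

dt_air = duty_kw / (air_flow * cp_air)        # air-side temperature rise
t_exhaust = t_ambient + dt_air

for recirc in (0.0, 0.05, 0.10, 0.20):        # recirculated fraction of intake
    t_inlet = (1.0 - recirc) * t_ambient + recirc * t_exhaust
    approach = t_process - t_inlet            # remaining driving temperature
    print(f"recirculation {recirc:4.0%}: inlet {t_inlet:5.2f} C, "
          f"approach {approach:5.2f} C")
```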

Figure 2. CFD Analysis Work Process (stages: data collection/LNG input sheets, including terrain data, wind roses, drawings, and component specifications; pre-processing and grid generation, with the propane condenser, air cooler, and compressor meshed as individual components; solution for individual wind directions; and post-processing of temperature contours and animations; if the predicted temperature rise exceeds 2 C above ambient, mitigation measures such as stacks and skirts are added and the analysis is repeated)


Figure 3. Facility CFD Grid


TYPICAL CFD STUDY OF AIR RECIRCULATION

A process flow chart for an LNG air recirculation study using CFD simulation is provided in Figure 2. The chart depicts how mitigation measures are included in the evaluation process to reduce inlet air temperatures to the compressors and air-cooling equipment.

VALIDATION OF CFD METHODOLOGY

As in most simulation procedures, certain assumptions and simplifications are inherent in CFD models. While CFD has been used extensively in air recirculation studies, a question may be raised as to the validity of the models. The accuracy of CFD predictions may also be questioned. With answers to these questions, projects can better understand the design envelope of an analysis and design the equipment accordingly. Comparing the results obtained from CFD with measurements taken in the field is instrumental in assessing both model validity and prediction accuracy. The following example illustrates this validation process.

Measurements
In one case study at a three-train LNG plant whose CFD grid is shown in Figure 3, a plan was developed to statistically analyze the air-

cooling equipment inlet temperature rises and determine their variation with wind direction, wind speed, and ambient temperature. Wind and ambient temperature data and air-cooling equipment inlet temperatures were to be recorded every 15 minutes for 6 months. To collect the inlet temperature data, 25 sensors were installed on the ethylene and propane condenser racks. The condensers were located 12 to 18 m (39 to 59 ft) above ground. The sensors were mounted about 1.2 to 1.5 m (4 to 5 ft) below the tube bundles. Because of a wind vane problem at the site, wind measurements were only obtained for a 10-day period. Using the data obtained, a plot of the air-cooling equipment inlet temperature rises at this plant versus wind direction, wind speed, and ambient temperature is shown in Figure 4. Because of the scattered nature of the measurement data, no definite trend can be identified.

Data Comparison for East Wind Direction
The 10-day wind data was filtered for the east wind direction (270 degrees ± 10 degrees). The wind speed and ambient temperature were also filtered so that there was less than 10% variation in wind speed and less than 0.5 C (0.9 F) variation in ambient temperature. Twelve measurement conditions from the 10-day measurements fit




these criteria. The corresponding wind speeds and ambient temperatures were averaged, and the results (east wind, 1.9 m/sec [6.2 ft/sec]; ambient temperature, 25.2 C [77.4 F]) were used as inputs to the CFD model. The air-cooling equipment inlet temperature data for these wind conditions was then used for comparison with the CFD model results.

The CFD results were compared with two instantaneous temperature measurements. Figure 5 shows the results of this comparison. The discrepancies between the CFD model results and the temperature measurements for the east wind direction are shown to be within 1 C (1.8 F) in 12 out of 16 locations. The two largest differences occur at locations E11 and P11. At location E11, the two cooling water pipes that circulate water from the compressors to the nearby vessel and air-cooling equipment may have contributed to the higher local temperature rise measurements.
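The filtering and averaging of the measurement records described above can be expressed compactly. The sketch below uses pandas with hypothetical column names (the plant data format is not given here) and one possible reading of the filtering criteria, trimming wind-speed and ambient-temperature scatter about their medians before averaging.

```python
import pandas as pd

def east_wind_conditions(df, direction=270.0, half_window=10.0):
    """Filter 15-minute records for a wind-direction window, then trim
    wind-speed and ambient-temperature scatter before averaging.
    Column names ('wind_dir', 'wind_speed', 'ambient_c') are hypothetical."""
    sel = df[(df["wind_dir"] >= direction - half_window) &
             (df["wind_dir"] <= direction + half_window)]
    ws0 = sel["wind_speed"].median()
    ta0 = sel["ambient_c"].median()
    sel = sel[(abs(sel["wind_speed"] - ws0) <= 0.10 * ws0) &
              (abs(sel["ambient_c"] - ta0) <= 0.5)]
    return sel["wind_speed"].mean(), sel["ambient_c"].mean(), len(sel)

# Example usage with a hypothetical records file:
# records = pd.read_csv("plant_weather_15min.csv")
# speed, ambient, n = east_wind_conditions(records)
# print(f"{n} records -> wind {speed:.1f} m/s at ambient {ambient:.1f} C")
```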

CATEGORIZING AIR RECIRCULATION CONTAMINATION

When a large amount of heat-generating equipment is located in a limited plot space, interactions among that equipment are unavoidable. Many types of equipment at an LNG plant involve air intake and exhaust. When the pieces of equipment are close together, a limited amount of fresh air is available and some pieces may start to draw in exhaust air from other pieces or themselves. Such contamination results in loss of cooling surface area and/or higher inlet temperature above the ambient. The temperature contamination can be classified into two categories:
- Contamination from self-recirculation
- Contamination from other exhausts

Figure 4. Measured Air-Cooling Equipment Inlet Temperature Rise (Measured Inlet Temperature Minus Ambient Temperature) Versus Wind Direction, Wind Speed, and Ambient Temperature



Light blue bars = all measurement points. Navy blue bars = range within which 69% of measurements fall. 12 data points for each measurement location. Wind speed: 1.9 m/sec (6.2 ft/sec) ± 0.2 m/sec (0.7 ft/sec). Ambient temperature: 25.2 C (77.4 F) ± 0.5 C (0.9 F).

Figure 5. Comparison of CFD Results with Inlet Temperature Measurements for East Wind Direction



Figure 6. Temperature Profile Underneath Double-Bank Propane Condensers

Temperature Contamination from Self-Recirculation
Because of site layout or wind conditions, exhaust air can sometimes recirculate to the inlets of the same unit. To illustrate this activity, a CFD model was constructed based on the use of a double bank of air-cooled propane condenser units. Different wind speeds and directions were tested, and the effects on the propane condensers were studied. Figure 6 shows the temperature profile just underneath the propane condensers' fans. The simulations predicted that when wind direction is perpendicular to the length of the propane condensers (crosswind), the largest heated zone is created (Figure 7), resulting in the greatest self-recirculation. But when the wind blows along the length of the propane condensers, much less self-recirculation occurs because the edge facing the wind is shorter. However, under strong wind conditions, the turbulence created from the corners can swirl back into the inlets of the propane condensers, as shown in Figure 7. Based on these simulations, the plant can be reoriented or mitigation measures studied to minimize the self-recirculation during crosswinds. Self-recirculation may be offset by using horizontal or vertical plates bolted to the sides of the propane condenser units. As shown in Figure 8, the vertical skirt causes downward flow outside the skirt, resulting in a wider exhaust plume. The horizontal skirt offsets this flow away from the air-cooling equipment by a distance as wide as the skirt, resulting in a

narrow plume and causing more lift with less possibility of recirculation back into the inlet. While the same amount of fan throughput is predicted in both cases, the horizontal skirt results in a lower inlet recirculation temperature into the air-cooling equipment. In the crosswind case, the vertical skirt offers larger resistance to airflow, accelerating flow below the first one or two rows of fans. As seen in the vector plot, the downward pull of the airflow upstream of the vertical skirt renders the first row of fans nearly dysfunctional. Moreover, since the crosswind



Figure 7. Stream Lines Showing Self-Recirculation in Crosswind and Parallel Wind


Figure 8. 10-Foot Horizontal Versus Vertical Skirt in a Crosswind (the vertical skirt offers a larger obstruction area, causing intense pull-down of flow near the skirt; the horizontal skirt pushes the pull-down farther upstream, preventing recirculation)

approaches the air-cooling equipment from one side, the entire airflow has to enter the coolers from that side, resulting in higher average and peak velocity below the skirts. What, then, should be the size of the skirt? Figure 9 shows a comparison of 1.5, 3, and 4.5 m (5, 10, and 15 ft, respectively) horizontal skirts in the crosswind case. As can be seen in the figure, the widest skirt (4.5 m [15 ft]) results in the least amount of recirculation due to the downward flow being pushed away from the air-cooling equipment, thus preventing it from recirculating back into the inlet. The first rows of fans appear to be severely affected when exhausting flow for the 1.5 m (5 ft) skirt case. The fan

performance gradually improves as the skirt width is increased to 4.5 m (15 ft). Increase in flow and decrease in inlet temperature are also apparent as skirt width increases. However, structural stability, flutter, and fatigue limit skirts to a maximum of about 3 m (10 ft).

Temperature Contamination from Other Exhausts
When multiple large pieces of heat-releasing equipment (such as ACHEs) are located within a limited plot area, temperature contamination from these various sources is also possible in addition to self-recirculation. Downstream equipment


Figure 9. Comparison of Various Size Skirts in a Crosswind


can draw heated exhaust from upstream equipment, a more common scenario in multi-train LNG plants and harder to remedy once a plant is built. Unlike self-recirculation, there is no typical worst-case scenario associated with this type of contamination. For a multi-train LNG plant simulation, the CFD model shows the airflow path and the temperature increase. In Figure 10, the flow pattern shows how some exhaust air is drawn into the downstream equipment.

Figure 10. CFD Simulation – Exhaust Air from One Equipment Item Recirculating to Another's Inlet

Terrain and wind rose (wind speed and direction), apart from neighboring equipment and plants, also have a significant effect on air-cooling equipment air intake. If the plant is located in a valley, downwash from neighboring hills causes an abrupt temperature rise in some units, as shown in Figure 11. Mitigation solutions to avoid cross-contamination among equipment or trains include the use of skirts and the use of fan hoods. Fan hoods help by ensuring that equipment exhausts are released at higher elevations. This approach is effective when horizontal wind speeds are closer to moderate than strong.

Another mitigation approach may include adjusting the height of various equipment items, within reason. Such an approach may work for propane condensers, for example. However, when one piece of equipment is tightly connected to another, as in the case of refrigeration system intercoolers, altering heights without incurring significant costs from piping changes may not prove feasible. The optimal mitigation solution is one that does not require significant mechanical alterations. Depending on the plot constraints, local terrain, and wind conditions, a proper plot orientation


Figure 11. SWW Wind Direction at 4.2 m/sec (13.7 ft/sec) – Stream Lines from All Units Colored by Temperature, Scaled from 20 to 35 C (68 to 95 F)


may minimize the impact of air recirculation under extreme conditions. A simple starting point, where feasible, is to consider orienting the plot so the propane condensers are axial to the prevailing wind direction at high-temperature conditions. However, it should be noted that this is only a starting point that is by no means absolute.


CONCLUSIONS


With multi-train LNG plants becoming more common, the impacts from the surrounding environment are a vital consideration for production. Large vessels, towers, and buildings can block air feeding into air-cooled equipment. Turbulence generated behind these structures can also create local recirculation zones. If air-cooling equipment is located within these zones, the equipment can be starved for fresh air. CFD can be a faster, cheaper way to analyze the fluid dynamics around plants. Using CFD in the design phase can help to minimize recirculation problems. In a simulation, the relationships among, and orientations of, open spaces and buildings can be evaluated and different kinds of weather conditions can be assessed. Different mitigation measures can also be examined, such as changing the orientation of the plant or relocating some of the equipment. Although no complete solutions exist for these problems, analyzing different scenarios can minimize the effects of recirculation on LNG production. In the authors' experience, recirculation effects can be minimized by using hoods or horizontal or vertical skirts. The fluid dynamics of the plume emanating from the air-cooling equipment change significantly with the type of skirt (vertical/horizontal) and wind direction (parallel/crosswind). A horizontal skirt is a more aerodynamic design that helps air flow into the air-cooling equipment while reducing air recirculation and improving air-cooling equipment performance under all wind speeds and directions. However, as skirt width is increased beyond 10 feet, the return on improved performance diminishes while the cost increases. CFD can provide valuable insights to the plant designers or engineers and enable informed decisions to be made about these and similar design factors. As a mitigation tool, CFD enables various parameters to be changed in the virtual space so the most effective solution can be found. Computing technology continues to advance, and increasingly complex problems can be solved in ever shorter amounts of time using CFD simulations. Manipulating the design of a virtual plant is much less expensive than making changes in a real facility. And with CFD, results are usually available in less time than from testing using a physical model. Considering the combined value of its cost and timing benefits, CFD is certain to play an increasingly important role in the design and construction of large LNG plants.


ADDITIONAL READING

Additional information sources used to develop this paper include:

J. Berkoe, Fluid Dynamics Visualization Solves LNG Plant Recirculation Problem, Oil & Gas Journal, Vol. 97, Issue 13, March 29, 1999, access via http://www.ogj.com/index/currentissue/oil-gas-journal/volume-97/issue-13.html.

W.K. Yee, D. Lin, V. Mehrotra, and P. Diwakar, Predicting Environmental Impacts on Multi-Train LNG Facility Using Computation Fluid Dynamics (CFD), AIChE Spring National Meeting, Atlanta, GA, April 10–14, 2005, access via http://www.aiche.org/Publications/pubcat/0816909849.aspx.

D. Lin, W.K. Yee, P. Diwakar, and V. Mehrotra, Validation of the Air Recirculation CFD Simulations on a Multi-Train LNG Plant, AIChE Spring National Meeting, New Orleans, LA, April 25–29, 2004, access via http://www.aiche.org/Publications/pubcat/listings/2004springmeetingcd.aspx.

BIOGRAPHIES
Philip Diwakar is a senior engineering specialist for Bechtel Systems & Infrastructure, Inc.'s, Advanced Simulation and Analysis Group. He employs state-of-the-art technology to resolve a wide range of complex engineering problems on large-scale projects. Philip has more than 15 years of experience in CFD and finite element analysis for structural mechanics. His more recent experience includes work on projects involving fluid-solid interaction and explosion dynamics.


During his 8-year tenure with Bechtel, Philip has received two full technical grants. One was used to determine the effects of blast pressure on structures at LNG plants, with a view toward an advanced technology for designing less costly, safer, and more blast-resistant buildings. The other grant was used to study fluid-structure interaction in building structures and vessels. Philip has also received four Bechtel Outstanding Technical Paper awards, as well as two awards for his exhibit on the applications of fluid-solid interaction technology at the 2006 Engineering Leadership Conference in Frederick, Maryland. Before joining Bechtel, Philip was a project engineer with Caterpillar, Inc., where he was part of a Six Sigma team. He applied his CFD expertise to determine the best approach for solving issues involving the cooling of Caterpillar heavy machinery. Philip holds an MTech in Aerospace Engineering from the Indian Institute of Science, Bengalaru; a BTech in Aeronautics from the Madras Institute of Technology, India; and a BS in Mathematics from Loyola College, Baltimore, Maryland. He is a licensed Professional Mechanical Engineer and is a Six Sigma Yellow Belt. Zhengcai Ye, PhD, is a CFD engineering specialist with more than 15 years of research and industrial experience in chemical engineering and related areas. Much of his work has focused on CFD modeling of chemical reactors and industrial furnaces, and chemical process modeling. He is currently engaged in air recirculation modeling of LNG plants and CFD modeling of chemical equipment. Before joining Bechtel, Zhengcai was a senior project engineer for IGCC projects at Mitsubishi Power Systems Americas, Inc., and a chemical and software engineer at Shanghai Baosteel Group Corporation, China. Zhengcai contributed a book chapter, Mathematical Modeling and Design of Ultraviolet Light Process for Liquid Foods and Beverages, in Mathematical Modeling of Food Processing (Taylor & Francis, 2009) and has authored more than 20 journal papers. He is a senior member of the American Institute of Chemical Engineers (AIChE). Zhengcai holds a PhD in Chemical Engineering from the Georgia Institute of Technology, Atlanta; an MS in Chemical Engineering from Florida State University, Tallahassee; and an MS in Inorganic Materials from East China University of Science & Technology, Shanghai, Peoples Republic of China. Ramachandra Tekumalla is chief engineer for OG&Cs Advanced Simulation Group, located in Houston, Texas. He leads a group of 10 experts in Bechtels Houston and New Delhi offices in developing advanced applications for various simulation technologies, such as APC, CFD, FEA, OTS, dynamic simulation, and virtual reality.

Ram has more than 11 years of experience in applying these technologies, as well as in real-time optimization, to ensure the successful completion of projects worldwide. Prior to joining Bechtel, Ram was an applications engineer with the Global Solutions Group at Invensys Process Systems, where he developed applications for refineries and power plants, including real-time control, performance monitoring, and optimization. Ram holds an MS from the University of Massachusetts, Amherst, and a BE from the Birla Institute of Technology & Science, Pilani, India, both in Chemical Engineering. David Messersmith is deputy manager of Bechtels LNG and Gas Center of Excellence, responsible for LNG Technology Group and Services, for the Oil, Gas & Chemicals Global Business Unit, located in Houston, Texas. He has held various lead roles on LNG projects for 15 of the past 18 years, including work on the Atlantic LNG project conceptual design through startup as well as many other LNG studies, FEED studies, and projects. Daves experience includes various LNG and ethylene assignments during his 18 years with Bechtel and, previously, his 10 years with M.W. Kellogg, Inc. Dave holds a BS in Chemical Engineering from Carnegie Mellon University, Pittsburgh, Pennsylvania, and is a licensed Professional Engineer in Texas. Satish Gandhi, PhD, is LNG Product Development Center (PDC) director and manages the center for the ConocoPhillips-Bechtel Corporation LNG Collaboration. He is responsible for establishing the work direction for the PDC to implement strategies and priorities set by the LNG Collaboration Advisory Group. Satish has more than 35 years of experience in technical computing and process design, as well as troubleshooting of process plants in general and LNG plants in particular. He was previously process director in the Process Technology & Engineering Department at Fluor Daniel with responsibilities for using state-of-the-art simulation software for the process design of gas processing, CNG, LNG, and refinery facilities. Satish also was manager of the dynamic simulation group at M.W. Kellogg, Ltd., responsible for technology development and management and implementation of dynamic simulation projects in support of LNG and other process engineering disciplines. Satish received a PhD from the University of Houston, Texas; an MS from the Indian Institute of Technology, Kanpur; and a BS from Laxminarayan Institute of Technology, Nagpur, India, all in Chemical Engineering.

WASTEWATER TREATMENT – A PROCESS OVERVIEW AND THE ROLE OF CHEMICALS


Issue Date: December 2009

Abstract: Whether in reference to a refinery, a chemical process, or a utility plant, zero discharge or minimum discharge of effluent from the plant boundary is a present-day motto in safeguarding our environment. The enforcement of stringent environmental norms has spurred scientists and process owners to develop comprehensive wastewater treatment programmes to constantly improve effluent discharge quality, promote water savings through recycling, and eventually minimise plant life-cycle costs. This paper provides an overview of the major wastewater treatment processes and the roles different chemicals play in these processes.

Keywords: activated sludge, biochemical oxygen demand (BOD), chemical oxygen demand (COD), clarifier, coagulation, colloidal particle, dissolved air flotation, dissolved oxygen, emulsion, flocculation, neutralisation, oily wastewater, polyelectrolyte, precipitation, redox reaction, suspended solids (SS), turbidity, wastewater treatment

INTRODUCTION

Most industrial processes give rise to polluting effluents from the contact of water with gas, oils, liquids, and solids. The release of effluents to water bodies and soil renders them unsafe for drinking, fishing, agricultural use, and aquatic life. The need for sustained development and industrial continuity calls for a systematic and comprehensive treatment of effluents to reduce all contaminants to acceptable limits, making the effluents environmentally safe before they are discharged outside the plant boundary. These standards are usually governed by legislative bodies and are modified from time to time. An example of typical discharge quality requirements for effluent water is provided in Table 1. Based on the complexity of the process and the process industry, industrial wastewater requires specialised treatment to remove one or more of these pollutants:
- Suspended solids (SS) and/or turbidity
- Oil and grease
- Colour and odour
- Dissolved gases
- Soluble impurities and contaminants
- Heavy metals
- Germs and bacteria

Table 1. Typical Limits for Effluents Discharged into the Environment

Constituent                          Desirable Limit, mg/L    Maximum Limit, mg/L
Ammoniac nitrogen                    5                        10
Arsenic (As)                         0.1                      0.5
Biochemical oxygen demand (BOD)      30                       50
Cadmium (Cd)                         0.1                      0.2
Chlorine (residual)                  1                        2
Chromium, total (Cr)                 0.1                      0.2
Copper (Cu)                          1.5                      3.0
Chemical oxygen demand (COD)         150                      200
Cyanide (CN)                         0.1                      0.2
Oil                                  15                       25
Iron, total (Fe)                     2                        5
Lead (Pb)                            0.1                      0.3
Manganese (Mn)                       2.0                      3.0
Mercury (Hg)                         0.001                    0.05
Nickel (Ni)                          0.2                      1.0
Phenols                              0.2                      0.5
Phosphate, total (P)                 30                       40
Selenium (Se)                        0.05                     0.09
Silver (Ag)                          0.05                     0.1
Sulphide                             0.2                      0.5
Suspended solids                     30                       50
Turbidity                            90                       120
Zinc (Zn)                            0.5                      5
pH                                   6–9 Standard Units (SU)
Kanchan Ganguly
kganguly@bechtel.com

Asim De
akde@bechtel.com


Note: Some countries provide limits on dissolved solids in treated effluent.
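As a simple illustration of how limits such as those in Table 1 are applied, the sketch below screens a hypothetical effluent analysis against a few of the tabulated values and flags exceedances. Only a subset of constituents is included, and the sample concentrations are invented for the example.

```python
# Desirable / maximum discharge limits (mg/L) for a few constituents of Table 1
limits = {
    "BOD":              (30.0, 50.0),
    "COD":              (150.0, 200.0),
    "Oil":              (15.0, 25.0),
    "Suspended solids": (30.0, 50.0),
    "Zinc (Zn)":        (0.5, 5.0),
}

# Hypothetical treated-effluent analysis (mg/L), for illustration only
sample = {"BOD": 22.0, "COD": 180.0, "Oil": 8.0,
          "Suspended solids": 55.0, "Zinc (Zn)": 0.3}

for constituent, value in sample.items():
    desirable, maximum = limits[constituent]
    if value > maximum:
        status = "EXCEEDS maximum limit"
    elif value > desirable:
        status = "above desirable, within maximum"
    else:
        status = "within desirable limit"
    print(f"{constituent:18s} {value:7.1f} mg/L  {status}")
```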


ABBREVIATIONS, ACRONYMS, AND TERMS


API – American Petroleum Institute
BOD – biochemical oxygen demand
COD – chemical oxygen demand
CPI – corrugated plate interceptor
MSDS – material safety data sheet
NTU – nephelometric turbidity unit
redox – oxidation reduction
sp. gr. – specific gravity
SS – suspended solids
SU – standard units
WTP – wastewater treatment plant




Generally, industrial wastewater treatment programmes differ from industry to industry, except for sewage treatment. As a general philosophy, wastewater treatment is performed in three stages:
- Primary treatment, which consists of grit and floating oil removal, pH neutralisation, etc., takes care of most of the pollutants and toxic chemicals that can be easily removed from raw wastewater at this stage. Such pretreatment creates conditions suitable for secondary treatment.
- Secondary treatment, which removes major pollutants to achieve the disposal quality, is designed to substantially diminish the pollutant load. SS, emulsified oil, and dissolved organics are the major pollutants removed at this stage.
- Tertiary treatment, which is carried out for recycle or reuse of the treated effluent, polishes it to bring the biochemical oxygen demand (BOD) and SS levels down to a range of 10–20 milligrams per litre (mg/L).
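To make the staged philosophy concrete, the sketch below propagates an assumed raw-wastewater quality through representative per-stage removal efficiencies. The efficiencies are typical textbook ranges assumed for illustration, not design values; they show how tertiary polishing brings BOD and SS into the 10–20 mg/L range noted above.

```python
# Assumed raw wastewater quality, mg/L (illustrative only)
quality = {"BOD": 350.0, "SS": 400.0}

# Assumed per-stage removal fractions (typical textbook ranges, not design values)
stages = [
    ("Primary treatment",   {"BOD": 0.30, "SS": 0.55}),
    ("Secondary treatment", {"BOD": 0.85, "SS": 0.85}),
    ("Tertiary treatment",  {"BOD": 0.60, "SS": 0.70}),
]

for name, removal in stages:
    quality = {k: v * (1.0 - removal[k]) for k, v in quality.items()}
    print(f"after {name:20s} BOD = {quality['BOD']:6.1f} mg/L, "
          f"SS = {quality['SS']:6.1f} mg/L")
```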

AN OVERVIEW OF EFFLUENT TREATMENT PROCESSES

All effluent treatment involves a few fundamental chemical and physical processes for isolating the impurities/contaminants. A brief description of these processes helps in understanding the overall treatment philosophy.

Neutralisation (pH control)
The removal of excess acidity or alkalinity by treatment with a chemical of the opposite composition is termed neutralisation. In general, all treated wastewaters with excessively low or high pH require neutralisation before they can be disposed of in the environment. Dosing rates are decided based on treated effluent pH level. Primary acidic agents are hydrochloric or sulphuric acids. Primary base agents are caustic soda, sodium bicarbonates, and lime solutions.

Filtration
Filtration is a purely physical process for separating SS in which the effluent is passed through a set of filters with smaller pores than the contaminants, thus physically separating them out. The collected material is removed by backwashing for reuse of the filter element.

Coagulation
Coagulation destroys the emulsifying properties of the surface-active agent or neutralises the charged oil droplets. The free acid of alum breaks the emulsion by lowering pH. Zeta potential [1] is a convenient way to optimise the coagulation dosage in water and wastewater treatment. Figure 1 illustrates the effect of alum dosing on zeta potential versus turbidity. The dosage at which turbidity is lowest determines the target zeta potential. The most difficult SS to remove are the colloids, which, due to their small size, easily escape both sedimentation and filtration. The key to effective colloid removal is to reduce the zeta potential with coagulants such as alum, ferric chloride, and/or cationic polymers.
Figure 1. Example of the Effect of Alum Dosing on Zeta Potential Versus Turbidity

Zeta potential (Smoluchowski's formula) depends on the properties of the SS and is calculated as:

ζ = (4πη/ε) × U × 300 × 300 × 1,000    (1)

where:
ζ = zeta potential (mV)
η = viscosity of the solution
ε = dielectric constant
U = electrophoretic mobility = v/(V/L)
v = speed of the particle
V = applied voltage
L = electrode distance

Once the charge is reduced or eliminated, no repulsive forces exist, and gentle agitation in a flocculation basin causes numerous successful colloid collisions. Microflocs form and grow into visible floc particles that settle rapidly and filter easily.

Flocculation
A flocculant gathers floc particles together in a net and helps bind individual particles into large agglomerates. Aluminium hydroxide [Al(OH)3] produced after hydrolysis of alum [Al2(SO4)3] forms a net in water to capture fine SS.

Oil/Water Separation
Oily wastewater is common in any industry because oil and grease are universally used as lubricants and solvents. Oil is required to be separated before the wastewater is discharged or recycled. In addition, in a refinery or petrochemical plant, recycling the recovered oil adds some value apart from pollution control. Oil remains present in wastewater in two forms, floating and emulsified. Floating oil is separated during primary treatment, and emulsified oil is removed during secondary treatment.

By virtue of oil being immiscible in water and having a density difference, the bulk of oil in wastewater remains in suspended form and can be separated through a settlement and skimming process. Oil is lighter and thus floats on top of the water surface, so it can be skimmed out through a mechanical separation process using mechanised skimmers in American Petroleum Institute (API) or corrugated plate interceptor (CPI) separators. More than one stage may be required to reach the discharge quality. For a stringent requirement, treated water may be passed through activated carbon filter adsorbers, which retain the oil particles within the carbon molecular space and provide for clear water to be discharged.

Emulsified oil needs the addition of cationic or anionic polymer, increased temperature, or coalescing media. These can break the emulsion so that the oil particles can be subsequently removed by normal separation processes.

Metal Precipitation
Wastewater containing dissolved metals needs to be treated to reduce the metal concentration to below the toxicity threshold for organisms potentially exposed to the wastewater. Four main processes are available to accomplish this:

The soluble metal ions can be converted to insoluble metal salts by chemical reaction to allow physical separation. Typical precipitation reactions are described by the following equations:

M2+ + 2OH- → M(OH)2 (solid)    (2)
M2+ + S2- → MS (solid)    (3)

The metal ions can be oxidised to produce insoluble metal oxides. For example, the reactions that occur during the oxidation of iron and manganese by oxygen are:

2Fe2+ + ½O2 + 4OH- → Fe2O3 + 2H2O    (4)

2Mn2+ + O2 + 4OH- → 2MnO2 + 2H2O    (5)

The pH of the effluent can be conditioned. A few toxic and nontoxic metals, such as iron, copper, zinc, nickel, aluminium, mercury, lead, chromium, cadmium, titanium, and beryllium, can be precipitated within a certain pH range. Liquid polymerised aluminium can be used as a coagulant. This has been found to be extremely effective in heavy metal precipitation processes in industrial wastewater. The insoluble compounds resulting from the application of any of the above processes are subsequently removed through the coagulation and clarification process by gravity settling, filtration, centrifugation, or a similar solid/liquid separation technique.

Oxidation
Chemical/biological oxidation processes use (chemical) oxidants to reduce chemical oxygen demand (COD)/BOD levels and remove both organic and oxidisable inorganic components (metals). These processes can completely oxidise organic materials to carbon dioxide and water, although it is often not necessary to operate the processes to this level of treatment. Oxidation via aeration of the effluent significantly reduces the COD of the treated liquid. Aerobic digestion (in the presence of activated sludge) is effected when the BOD in the effluent ranges from 100 to 1,000 mg/L.

Redox Reaction
An oxidation–reduction (redox) process is used to transform and destroy targeted water contaminants. Substances such as chlorine, cyanide, chromium, and nitrogen dioxide can be removed by redox reaction. As an example, sodium hypochlorite solution is used to treat dilute cyanide in wastewater:

NaCN + Cl2 + H2O → NaCNO + 2HCl    (6)
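As a worked illustration of Equation 6, the theoretical chlorine demand for converting cyanide to cyanate can be read directly from the stoichiometry (one mole of Cl2 per mole of CN-). The sketch below uses rounded molar masses; the flow and cyanide concentration are hypothetical assumptions, and an actual dose would be set by testing and would exceed the stoichiometric minimum.

```python
# Stoichiometric chlorine demand for oxidising cyanide to cyanate (Equation 6).
# Molar masses are rounded; the flow and concentration below are hypothetical.
M_CL2 = 70.9   # g/mol
M_CN = 26.0    # g/mol (cyanide ion)

def chlorine_demand_kg_per_day(flow_m3_per_day, cn_mg_per_l):
    """Theoretical Cl2 requirement to convert CN- to CNO- (1 mol Cl2 per mol CN-)."""
    cn_kg_per_day = flow_m3_per_day * cn_mg_per_l / 1000.0   # mg/L x m3 gives g; /1000 gives kg
    return cn_kg_per_day * (M_CL2 / M_CN)

if __name__ == "__main__":
    print(f"Cl2 per mg of CN-: {M_CL2 / M_CN:.2f} mg")                       # about 2.7
    print(f"Daily Cl2 demand: {chlorine_demand_kg_per_day(500, 0.2):.2f} kg/day")
```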

Chemical Conditioning
Chemical conditioning improves sludge dewatering in sludge thickening devices. In this method, chemical coagulants/polymers are dosed to promote agglomeration of floc particles. The choice of chemical conditioners depends on the characteristics of the sludge and the type of dewatering device.

Disinfection
Wastewater before discharge, or particularly for reuse, needs to be disinfected. Chlorination is one of the most commonly used disinfecting methods; the newly developed processes employ ozone and ultraviolet rays. Ozone, a powerful oxidising agent mainly used to oxidise certain industrial wastewaters that cannot be treated effectively by conventional biological oxidation processes, can also simultaneously disinfect the effluent. Ultraviolet radiation, a kind of electromagnetic radiation, is also used to disinfect wastewaters to avoid any material addition.

Odour Control
Odour problems (mainly from gases such as hydrogen sulphide, ammonia, and methane present in a wastewater facility) are a concern for wastewater treatment personnel. The primary treatment for odour control is oxidation, which converts these gases to an odourless compound and inhibits formation of anaerobic bacteria that produce gases. Specially treated activated carbon may be used as an odour control medium to absorb hydrogen sulphide.

ROLE OF CHEMICALS IN TREATMENT

Increasing concern about environmental damage from industrial pollutants poses new challenges daily to the discharge effluent quality requirement for industry. Zero or stringent discharge quality from the industrial unit is often a prerequisite to establishing a new plant, demanding more complex and controlled treatment of wastewater that cannot be achieved by standard treatment processes and chemicals. Researchers are engaged in upgrading the treatment processes and developing new chemicals to meet the stringent environmental norms while ensuring that the treatment cost remains reasonable. As an outcome, there have been significant developments in manufacturing proprietary chemicals, inorganic and organic polymers, and blended chemicals with polymers, among others, for wastewater treatment. These chemicals are designed to work in the specific treatment processes described in the overview section and require significantly lower dosing rates, thereby producing very low amounts of sludge, which is convenient for disposal.

This section includes a general discussion of the types of chemicals and polymers and their applications, advantages, etc., in the treatment processes. It is not the intent of this paper to discuss the types and characteristics of all chemicals and polymers available from different manufacturers. Use of any specific brand or proprietary chemical in a project must be evaluated considering the inlet effluent characteristics, project environmental criteria, and recommendations from the chemical vendor. Table 2 lists typical inorganic treatment chemicals and their feed rates.

Coagulants/Flocculants
Adding coagulants to the wastewater creates a chemical reaction in which the repulsive electrical charges surrounding colloidal SS are neutralised, allowing the free particles to stick together and create lumps or flocs. The aggregation of these particles into larger flocs permits their separation from solution by sedimentation, filtration, or straining.

Table 2. Typical Inorganic Chemicals Used in Wastewater Treatment


Purpose/Chemicals:
• Disinfection: chlorine (primary treatment effluent; activated sludge effluent) and chlorine dioxide (primary treatment effluent; activated sludge effluent)
• Ammonia removal: chlorine
• Oxidation of sulphides: chlorine; hydrogen peroxide; sodium nitrate
• Coagulant feed: aluminium sulphate (alum); ferric chloride; lime; ferrous sulphate; ferric sulphate
• pH control (to maintain alkalinity): CaCO3; lime

Dosing levels, mg/L: 100–500, 200–500, 75–150, 45–90, 200–400, >1.5, 4–7, 10–15, 1.0–1.5, 10–30, 10, 25, 1–3, 5–10, 2–5

Alum [Al2(SO4)3·18H2O]
The role of aluminium sulphate (alum) in water treatment is known historically. It is an inorganic coagulant/flocculant (see Table 2). When alum and lime are added to the treatment process, the chemical reaction produces the following:

Al2(SO4)3·18H2O + 3Ca(HCO3)2 → 2Al(OH)3 + 3CaSO4 + 18H2O + 6CO2    (7)

The dosing rate of alum depends on:
• Concentration (mg/L) of SS
• Nature of SS
• pH of effluent
• Type of flocculating equipment
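Equation 7 also allows a first-pass estimate of the chemical sludge generated by alum dosing: each mole of alum hydrolyses to two moles of aluminium hydroxide, or roughly 0.23 g of Al(OH)3 per gram of alum, before any captured suspended solids are counted. The flow and dose in the following sketch are illustrative assumptions only.

```python
# Rough aluminium hydroxide sludge estimate from alum dosing (Equation 7).
# Molar masses are rounded; the flow and dose below are hypothetical examples.
M_ALUM = 666.4      # g/mol, Al2(SO4)3.18H2O
M_AL_OH_3 = 78.0    # g/mol, Al(OH)3 (2 mol formed per mol of alum)

def aloh3_sludge_kg_per_day(flow_m3_per_day, alum_dose_mg_per_l):
    """Chemical sludge from alum hydrolysis only (excludes captured suspended solids)."""
    alum_kg_per_day = flow_m3_per_day * alum_dose_mg_per_l / 1000.0
    return alum_kg_per_day * (2 * M_AL_OH_3) / M_ALUM

if __name__ == "__main__":
    # Example: 1,000 m3/day dosed at 100 mg/L alum
    print(f"Al(OH)3 produced: {aloh3_sludge_kg_per_day(1000, 100):.1f} kg/day")
```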

Primary disadvantages of alum are:
• High dosing requirement, 70–250 ppm
• Excessive sludge formation (self-sludge) and its treatment, and loss of water with the sludge
• Lowering of the pH, needing lime dosing for pH control

Ferric Chloride
Ferric chloride is used as an inorganic emulsion breaker (described in the next section), especially to remove oil from water. The normal dosing rate works out to be 40–50 ppm.

Polyelectrolytes
Polyelectrolytes are one of the most widely used chemicals serving as coagulants/flocculants in modern water/wastewater treatment. Their primary advantages are their very low dosing requirement and their applicability over a wide range of pH compared with alum or other inorganic coagulants/flocculants.

Polyelectrolytes are categorised based on their product origin. Natural polyelectrolytes include polymers of biological origin derived from starch, cellulose, and alginates. Synthetic polyelectrolytes consist of single monomers polymerised into a high-molecular-weight substance.

The action of polyelectrolytes changes according to their type. Cationic polymers, in which the cations (positive charges) form the polymer, reduce or reverse the negative charges of the precipitate and therefore act as a primary coagulant. Anionic polymers, based on carboxylate ions and polyampholytes, carry primarily negative charges and help in interparticle bridging along the length of the polymer, resulting in three-dimensional particle growth and thereby easy settlement. A third type of polymer, developed from cationic polyelectrolytes of extremely high molecular weight, is capable of offering both coagulation and bridging. Some of the widely used polyelectrolytes are described next.

Polyaluminium chloride is well-suited as a primary coagulant in a wide variety of industrial and domestic wastewater treatment plants (WTPs). Typical applications include removal of organic impurity, metals, domestic and oily waste, and phosphate. Efficient and effective in coagulating particles with a wide range of
pH, the chemical offers very good turbidity removal and leaves no residual colour. Typical properties are:
• Available in 25%–40% concentrate
• pH (neat) = 2.3–2.9 SU
• Freezing point = -5 °C (23 °F)
• Odourless and colourless
• Specific gravity (sp. gr.) = 1.2

Liquid polymerised aluminium coagulants are extremely effective in heavy metal precipitation processes and combined industrial wastewater. These coagulants have a low impact on process water pH.

Aluminium hydroxide chloride/polymer is a technically advanced, high-performance coagulant based on aluminium hydroxide chloride blended with an organic polymer. The active component of the reagent is a highly cationic aluminium polymer present in high concentration, which is represented as [Al13O4(OH)24(H2O)12]7+. Typical properties are:
• Sp. gr. = 1.30
• pH = 1.0–2.0 SU
• Charge = +1,950

High-molecular-weight cationic polymer (liquid) does not contain oil or surfactants and is designed for SS. It is specially recommended for use in non-potable raw water clarification, primary and secondary effluent clarification, and oil wastewater clarification. Dosing rate varies from 1–10 ppm for wastewater treatment. Typical properties are:
• Available in liquid form
• Sp. gr. = 1.21–1.23
• Viscosity = <700 centipoise
• pH = 3.0–4.2 SU
• Dosing concentration = 0.01%–0.1% aqueous solution

Cationic guanidine polymer, a cationic liquid organic polymer based on an aqueous solution of cyanoguanidine, is designed to coagulate colloidal solids and SS and is therefore recommended for use in non-potable raw water clarification, primary and secondary effluent clarification, oil wastewater clarification, and enhanced organics removal. Organic polyamines are used as cationic emulsion breakers.

Apart from the above, there are commercially available proprietary anionic polyelectrolytes. The important ones are polystyrene sulphonic
acids and 2-acrylamido-2-methylpropane sulphonic acids.

Alkyl-substituted benzene sulphonic acids and their salts are used as anionic emulsion breakers. Carbamate solution and liquid thiocarbonate compound are used to precipitate chelated metals. Emulsion Breakers In the coagulation process, chemicals help break the emulsion that keeps oil particles floating in water. The chemicals neutralise the stabilising agents that keep the oil particles floating, allowing them to settle and be removed as sludge. Alum, ferric chloride, sodium aluminates, and acids are common inorganic chemicals used as emulsion breakers. However, they have some disadvantages. The primary ones are: Their effectiveness is restricted to a narrow pH range; therefore, a higher dosing rate is normally required. A large quantity of watery sludge is produced, necessitating elaborate and expensive disposal. Organic polyelectrolytes, mostly available as proprietary chemicals from different manufacturers, are highly efficient as emulsion breakers because of their cationic charges and effectiveness over a wide pH range. These chemicals help produce a lower quantity of sludge for easy and economical disposal and also add a lower level of chemicals in the treated effluent. As discussed in the polyelectrolytes section, a few popular polyelectrolytes are polyaluminium chloride, aluminium hydroxide chloride/polymer, high-molecularweight cationic polymer (liquid), and cationic guanidine polymer. Metal Precipitants Some process wastewaters include complexing and chelating agents that bond to the metal ions, making precipitation difficult, if not impossible, for many precipitating reagents. Commercially available proprietary precipitants are capable of breaking many of these bonding agents, thereby precipitating the metal ions without adding other chemicals. In some instances, a combination of pH adjustment and varying reaction times may be required along with precipitants and flocculants for optimum result. Liquid polymerised aluminium coagulants are extremely effective in heavy metal

precipitation processes and are popular in treatment of combined industrial wastewater. These coagulants have a low impact on process water pH. Organosulphide compounds can be used to precipitate divalent metals in the form of insoluble metal sulphides. Hydrogen peroxide (H2O2), ozone, and oxygen convert metals into oxides, which are insoluble in water and hence separated out through coagulation and the settlement process. Oxidants Typical oxidation chemicals are: Hydrogen peroxide, widely used as a safe, effective, powerful, and versatile oxidant. The main applications are oxidation to aid odour and corrosion control, organic oxidation, metal oxidation, and toxicity oxidation. Ozone, primarily used as a disinfectant but also aids removal of contaminants from water by means of oxidation. Ozone purifies water by breaking up organic contaminants and converting them to inorganic contaminants in insoluble form that can be filtered out. An ozone system can remove up to 25 contaminants, including iron, manganese, nitrite, cyanide, nitrogen oxides, and chlorinated hydrocarbons. Oxygen, which can be applied as an oxidant to realise the oxidation of iron and manganese (see Equations 4 and 5). The method is popular because of oxygens abundant availability in the atmosphere. Chemicals for pH Control and Odour Control For pH control, lime solution, caustic soda, and sulphuric and hydrochloric acids are commonly used. For odour control, hydrogen peroxide is widely used as a safe, effective, powerful, and versatile oxidant. Other chemicals used are ozone, hypochlorite, permanganate, and oxygen. Activated carbon filters are also used to absorb bad odours. Disinfecting Agents Chlorine gas or sodium hypochlorite is used as a primary disinfecting agent because of its easy availability and residual protection. However, because chlorine is reactive to some metals, ozone is also used as an alternative. Ultraviolet radiation is preferred in some cases because it does not add new chemicals to the process.

Other Chemicals Antifoam is primarily used as a process aid. Antifoam blends contain oils combined with small amounts of silica and break down foam based on two of silicones properties: incompatibility with aqueous systems and ease of spreading. Lime, alum, ferric chloride, and polyelectrolytes are commonly used chemical conditioners for effective sludge thickening and dewatering. Organic phosphorous/polysulphonate compounds are used as antiscalant dispersants and corrosion inhibitors. Typical compositions are phosphonates and organophosphorous carboxylic acids and their salts. Organophosphorous carboxylic acid compounds are water soluble; usual dosing rates vary from 1525 mg/L.

DESIGN APPROACH

Designing a WTP for any project is always a unique and challenging process for WTP personnel because:
• Process flow data is inaccurate; normally, most source data is estimated, with a wide variation in minimum and maximum flows and flow durations.
• Input characteristics of the flow are guesswork.
• Effluent disposal criteria are specific to the project and guided by the local norms.

Plant designers should consider the following aspects in conceptualising a WTP (a simple flow-integration sketch follows this list):
• Optimally size the plant by integrating continuous and intermittent flows. Too small a plant does not provide the discharge quality, while too conservative a design requires high capital cost and leads to inaccurate treatment at lean flow conditions.
• Sequence and integrate the treatment processes for maximum effectiveness based on estimated effluent characteristics and their variations at different plant operating regimes.
• Conduct a jar test, described in more detail below, to optimise the selection and dosing rate of chemicals.
• Use vendor information to validate the design.
• Use a material safety data sheet (MSDS), described in more detail below, to build safety into process design, giving due consideration to safe handling, storage, and disposal.
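As a minimal illustration of the first consideration above, the sketch below spreads each intermittent discharge over a 24-hour equalisation period and adds it to the continuous base flow to arrive at an equalised design flow. The stream data, equalisation period, and design margin are hypothetical assumptions, not recommended values.

```python
# Illustrative sizing of an equalised design flow for a wastewater treatment plant,
# combining a continuous base flow with intermittent discharges averaged over a
# 24-hour equalisation period. All stream data below are hypothetical.
CONTINUOUS_M3_PER_H = 40.0

# (volume per event in m3, events per day)
INTERMITTENT_STREAMS = [
    (60.0, 2),    # e.g., filter backwash
    (120.0, 1),   # e.g., tank drain-down
]

EQUALISATION_PERIOD_H = 24.0
DESIGN_MARGIN = 1.2   # allowance for estimating uncertainty (assumed value)

def equalised_design_flow_m3_per_h():
    intermittent = sum(vol * n for vol, n in INTERMITTENT_STREAMS) / EQUALISATION_PERIOD_H
    return (CONTINUOUS_M3_PER_H + intermittent) * DESIGN_MARGIN

if __name__ == "__main__":
    print(f"Equalised design flow: {equalised_design_flow_m3_per_h():.1f} m3/h")
```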

Figure 2, a flow diagram for typical wastewater treatment in a refinery, highlights the processes and dosing chemicals. Jar Test Before a prototype plant is developed, it is customary to conduct a laboratory test to study the behaviour of an effluent, its response to the chemical treatment, and the chemical dosing rate. The jar test is a common laboratory procedure used to determine the optimum operating conditions, especially the dosing rate of chemicals for water or wastewater treatment. This method allows adjustments in pH, variations in coagulant or polymer dose, alternative mixing speeds, and testing of different coagulant or polymer types on a small scale to predict the functioning of a large-scale treatment operation. A jar test simulates the coagulation and flocculation processes that encourage the removal of suspended colloids and organic matter that can lead to turbidity, odour, and taste problems. Material Safety Data Sheet An MSDS provides relevant data regarding the properties of a particular substance/chemical. It is intended to provide designers, operators, and emergency personnel with procedures for

safely handling and working with a substance, and it includes information such as physical data (melting point, boiling point, flash point, etc.), toxicity, health effects, first aid, reactivity, storage, disposal, protective equipment, and spillhandling procedures. The format of an MSDS can vary from source to source and depends on the safety requirement of the country.
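A jar test of the kind described above yields residual turbidity (and, where measured, zeta potential) at a series of coagulant doses, and the dose giving the lowest residual turbidity is normally carried forward, as Figure 1 suggests. The following Python sketch shows only that selection step; the jar test data are hypothetical.

```python
# Selecting a coagulant dose from jar test results: the dose with the lowest
# residual turbidity is taken as the working optimum. Data are hypothetical.
jar_test_results = [
    # (alum dose, mg/L, settled-water turbidity, NTU)
    (10, 4.8),
    (20, 2.1),
    (30, 0.9),
    (40, 0.6),
    (50, 1.4),
    (60, 2.6),
]

def optimum_dose(results):
    """Return the (dose, turbidity) pair with the minimum residual turbidity."""
    return min(results, key=lambda point: point[1])

if __name__ == "__main__":
    dose, ntu = optimum_dose(jar_test_results)
    print(f"Optimum alum dose ~ {dose} mg/L (residual turbidity {ntu} NTU)")
```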

CONCLUSIONS

Wastewater treatment in any plant normally takes a back seat as designers focus primarily on the high-end plant equipment and systems to ensure higher plant efficiency, reliability, and availability. Supporting this conclusion is the fact that reliable design inputs are obtained from the vendor data only after major plant systems and equipment designs are in place and related data becomes available from equipment vendors.

This paper can be helpful at the initial project phase in conceptualising a WTP design in terms of the treatment processes and chemicals needed based on typical industry data and project environmental permits. The initial design can then be subsequently validated using actual equipment data and chemical vendors' recommendations.
[Figure 2 (below) shows a typical refinery effluent treatment train: sour water stripping of water separated from crude, a wastewater collection sump, dissolved air flotation, media and cartridge filtration, treated water storage, and sludge thickening and drying, with chemical dosing points for demulsifier, polymer, antiscale, corrosion inhibitor, and biocide; skimmed and shop oil are returned to the refinery for further treatment.]
Figure 2. Dosing Scheme for a Typical Refinery Effluent Treatment Plant

REFERENCES
[1] E.R. Alley, Water Quality Control Handbook, 2nd Edition, McGraw-Hill, New York, NY/ WEF Press, Alexandria, VA, 2007 (see http://searchworks.stanford.edu/view/ 6796830 and http://www.infibeam.com/ Books/info/E-Roberts-Alley/Water-QualityControl-Handbook/0071467602.html).

BIOGRAPHIES
Kanchan Ganguly is a senior mechanical engineer working on OG&C projects in Bechtels New Delhi Execution Unit. Since joining the company in 2005, he has worked on the Takreer and Scotford refinery upgrades, Texas Utilities standard power plant, and Onshore Gas Development Phase 3 projects, focusing primarily on engineering review of equipment and package systems and documents in both hydrocarbon and process/utility water areas. Kanchan has more than 23 years of experience in the industry. During this time, he has assumed responsibilities as manager and process engineer in various water treatment, process chemical, and fertilizer plants and has become familiar with their design, operation, and maintenance. Kanchan is a Chemical Engineering graduate of Calcutta University, India. Asim De is a mechanical engineering supervisor and has worked on both Power and OG&C projects in Bechtels New Delhi Execution Unit. He is currently working on the Yajva power generation project. During his 4 years with Bechtel, he has also worked on the Jamnagar Export Refinery captive power plant and Takreer FEED projects. Asim has more than 31 years of experience in the industry and has worked in conceptual and detailed design for all power plant systems and equipment. In addition, he is familiar with commissioning power and desalination plants. He has assumed responsibilities as lead engineer, assistant chief engineer, and general manager in different organizations involved in the power generation business. Asim is a Fellow of The Institute of Engineers (India). He is a Mechanical Engineering graduate of the Indian Institute of Technology, Kharagpur, a premier engineering institute; has an ME degree in Project Engineering from the Birla Institute of Technology, Pilani; and took a post-graduate executive course in Management at the Indian Institute of Management, Calcutta; all in India. He is a Six Sigma Yellow Belt.

ADDITIONAL READING Additional information sources used to develop this paper include:
G. Tchobanoglous, F. Burton, and Metcalf & Eddy, Wastewater Engineering: Treatment, Disposal, Reuse, 3rd Edition, Tata McGraw-Hill, New Delhi, India, 1995 (see http://www.amazon.com/WastewaterEngineering-Treatment-Disposal-Reuse/ dp/B000K3A6WY/ref=sr_1_3?ie=UTF8&s= books&qid=1257373138&sr=1-3). F.N. Kemmer, The Nalco Water Handbook, 2nd Edition, McGraw-Hill Book Company, 1988 (see http://www.flipkart.com/ nalco-water-handbook-frank-kemmer/ 0070458723-wmw3f9p83j). G. Degrmont, Water Treatment Handbook, 7th Edition, Volume 1, Lavoisier, France, 2007 (see http://www.wth.lavoisier.net and http://www.lavoisier.fr/gb/livres/index.html). NALCO Chemical Company Engineering Reference Manual: Technical Bulletins on Wastewater Treatment and Treatment Chemicals (various). Manufacturers reference information about various chemicals (see ACCEPTA Environmental Technology [info@accepta. com], Chemco Products Company [info@ chemcoproducts.net], and Lenntech BV [http://www.lenntech.com/watertreatment-chemicals.htm#Oxidants]). Reference information about zeta potential (see Microtec Co. [http://www.nition.com/ en/products/zeecom_s.htm], Wikipedia [http://www.en.wikipedia.org/wiki/ Zeta_potential], and Zeta-Meter, Inc. [http://www.zeta-meter.com/5min.pdf]).

ELECTRICAL SYSTEM STUDIES FOR LARGE PROJECTS EXECUTED AT MULTIPLE ENGINEERING CENTRES
Issue Date: December 2009

Abstract: Electrical system studies are carried out to verify that major electrical equipment is adequately rated, determine the conditions for satisfactory and reliable operation, and highlight any operational restrictions required for safe operation. The system studies for the Jamnagar Export Refinery Project (JERP) presented unique challenges because of the sheer size of the captive power generation (with a new 800 MW power plant operating in parallel with an existing 400 MW power plant), the plant's extensive power distribution network, and the engineering work distributed amongst various Bechtel engineering centres and non-Bechtel engineering contractors around the world. The large number of system study cases (particularly transient stability analysis studies) to be evaluated also made the task challenging. This paper presents an overview of system study execution on the complex JERP electrical network, along with a brief report on the various studies conducted as part of this project.

Keywords: analysis, electrical system studies, Electrical Transient Analysis Program (ETAP), Jamnagar Export Refinery Project (JERP)

INTRODUCTION

Electrical system studies are carried out to verify that major electrical equipment is adequately rated, determine the conditions for satisfactory and reliable operation, and highlight any operational restrictions required for safe operation. The various units within the Jamnagar Export Refinery Project (JERP) were engineered at Bechtel engineering centres (London, Houston, Frederick, Toronto, and New Delhi), the Bechtel/Reliance Industries Limited joint venture (JV) office in Mumbai, and the sites of non-Bechtel engineering contractors. The core Electrical group based in Bechtel's London office (the London core group) was tasked with preparing a combined model of the electrical system and with conducting the system studies. The system studies for this project presented unique challenges because of the sheer size of the captive power generation (with a new 800 MW power plant operating in parallel with an existing 400 MW power plant), the Jamnagar plant's extensive power distribution network, and the engineering work distributed amongst various Bechtel engineering centres and non-Bechtel engineering contractors around the world.

System studies are normally conducted on a selected set of study cases, and their results are used to determine the system behaviour under all operating conditions. For this project, it was difficult to select the cases to simulate and study because of the large number of possible operating configurations for such a complex industrial electrical network. The system studies themselves were a challenge because so many study cases (particularly transient stability analysis studies) had to be evaluated. This paper presents an overview of system study execution on the complex electrical network of the JERP, along with a brief report on the various studies conducted as part of this project.

OVERVIEW OF THE JERP

Rajesh Narayan Athiyarath


rnaraya1@bechtel.com

Reliance Industries operates the Jamnagar Domestic Tariff Area (DTA) oil refinery and petrochemical complex located in Gujarat, India. The complex processes 650,000 barrels per stream day (650 kbpsd) of crude oil and produces liquefied petroleum gas (LPG); naphtha; gasoline; kerosene; diesel; sulphur; coke; polypropylene; and numerous aromatic products, including paraxylene, orthoxylene,

ABBREVIATIONS, ACRONYMS, AND TERMS


AVR – automatic voltage regulator
BSAP – Bechtel standard application program (a software application that Bechtel has determined to be suitable for use to support functional processes corporate-wide)
CPP – captive power plant
DTA – Domestic Tariff Area
DWI – discipline work instruction
EDMS – electrical distribution management system
EMS – energy management system
ETAP – Electrical Transient Analysis Program (a BSAP)
FEED – front-end engineering and design
GTG – gas turbine generator
HVDC – high-voltage direct current
ICT – interconnecting transformer
IEC – International Electrotechnical Commission
IEEE – Institute of Electrical and Electronics Engineers
IHD – individual harmonic distortion
JERP – Jamnagar Export Refinery Project
JV – joint venture
LMS – load management system
LPG – liquefied petroleum gas
LV – low voltage
MRS – main receiving station
MV – medium voltage
OLTC – on-load tap changer
PC – personal computer
RST – refinery service transformer
SEZ – special economic zone
STG – steam turbine generator
THD – total harmonic distortion
VSD – variable-speed drive

and benzene. The original project, which Bechtel designed and constructed, was the worlds largest grassroots single-stream refinery. The complex includes a captive power plant (CPP) designed to produce 400 MW of power (backed up by a 132 kV grid supply) to meet the refinerys power demands. The JERP comprises a new export-oriented refinery located in a special economic zone (SEZ) adjacent to the DTA site. The project aims to almost double the capacity of the Jamnagar refinery to more than 1,200 kbpsd; add crude distillation, associated secondary conversion facilities, and an 800 MW CPP; and modify the existing refinery to ensure the efficient operation of both it and the new refinery. On completion of the JERP, the Jamnagar complex will be the worlds largest refinery, surpassing Venezuelas 940 kbpsd Paraguana refining complex.

ENGINEERING THE JERP

The JERP required approximately 6 million engineering job-hours within a short and challenging project schedule. Hence, project engineering was split up amongst the various Bechtel offices, headed by the London core group (Figure 1). The key task of conducting overall system studies on the JERP and DTA electrical networks was handed over to the London core group.

POWER GENERATION AND DISTRIBUTION

A simplified depiction of the JERP power generation and distribution system is portrayed in Figure 2.

JERP Power System
As the JERP power source, the CPP consists of six 125 MW, 14.5 kV gas turbine generators (GTGs), with space allocated for three future GTGs. The GTGs are connected to the 220 kV switchyard bus via their dedicated 14.5/231 kV, 161 MVA step-up transformers. Eight 220/34.5 kV, 174 MVA refinery service transformers (RSTs) connected to the 220 kV switchyard feed the JERP plant substations through 33 kV switchboards in two main receiving stations (MRS-1 and MRS-2). Two 11 kV, 25 MW steam turbine generators (STGs) are connected to the switchboards in MRS-1 via 11/34.5 kV, 38 MVA step-up transformers. Finally, a pair of 220/132 kV, 107 MVA autotransformers are provided as the interconnecting


Figure 1. Project Execution Locations

transformers (ICTs) between the JERP and DTA electrical systems. The JERP electrical system incorporates an energy management system (EMS) that comprises an electrical distribution management system (EDMS) to control and monitor the electrical network and a load management system (LMS) to carry out load shedding, if required, within the JERP and DTA electrical networks. DTA Power System The DTA CPP consists of nine 28 MW GTGs and six 25 MW STGs that feed the five 33 kV switchboards, from which power is further distributed to the DTA plant substations.

ELECTRICAL SYSTEM STUDIES

System studies is the generic term for a wide range of simulations conducted on

Figure 2. JERP Power System Generation and Distribution

a model of an electrical system under various operating conditions encountered or anticipated during operation of the network. System studies analyse the behaviour of the electrical network components under various steady-state, dynamic, and transient conditions, and the results are used to predict the networks behaviour under actual operating conditions. System studies are conducted at different stages of a project. The results of system studies performed during the front-end engineering and design (FEED) and detailed engineering stages enable proper selection of equipment ratings, identification of the electrical system loading and operational modes for maximum reliability and safety, and selection of the control modes for major equipment. These early system studies can also assess the ability of the electrical network to meet present and future system energy demands. System studies conducted after the power system network is operational generally study the feasibility or effects of system expansion, check conformance with any changes in codes and standards, or analyse system behaviour to identify the underlying causes of a network disturbance or equipment failure. In the case of the JERP, the electrical system is planned to operate in parallel with the existing DTA electrical system and the grid supply from the local electricity utility. The JERP electrical system also has to be adequate for the addition of future units and high-voltage direct current (HVDC) links to the local electricity utility supply. Hence, this combination of large-scale greenfield project/major expansion of an existing network becomes a special case for system studies. The sheer size of the JERP and DTA electrical networks (with a combined power generation of 1.2 GW), the extensive power distribution network within the JERP and DTA plants, and the crucial need to ensure reliability of the power supply under all operating conditions make it important to conduct reliable and accurate system studies. Further, the study results can help in the design of a reliable electrical system suitable for the projects present and future requirements. Three key elements are at the heart of a proper system study: A dependable and versatile system study software program A reliable model of the electrical network Selection of studies to be conducted and study cases to be simulated

ELECTRICAL SYSTEM STUDY SOFTWARE PROGRAMS

ystem studies entail the analysis of the interactions amongst the various components of the electrical network to determine the power flows between elements and the voltage profile at the various buses in the network. Many mathematical computations are required to analyse even a small network, precluding the use of manual calculation techniques to conduct any but the most rudimentary system studies. These circumstances have led to an effort since the late 1920s to devise computational aids for network analysis. From about 1929 to the 1960s, special analogue computers in the form of alternating current network analysers were used for system studies. These network analysers contained scaled-down versions of the network components, such as power sources, cables, transmission lines, and loads, that were interconnected using flexible cords to represent the system being modelled. Although limited in scope and complexity, the network analysers were used to study power flows and voltage profiles under steady-state and transient conditions. The next stage in the evolution of system study software programs was the use from the late 1940s of digital computers to conduct system studies. These programs were initially limited in scope due to the programming methods used (punched-card calculators). However, the availability of large-scale digital computers from the mid-1950s gave a boost to the use of computer programs for system studies. Although these programs originally required mainframe computing power and specialised programming techniques, the growth in the computing power of desktop PCs and laptops has seen these programs become an essential tool for the electrical engineer. Current system study programs offer flexible and easy-to-use techniques for system modelling, analysis, and presentation.

One of the more commonly used system study software programs is Operation Technology, Inc.s (OTIs) Electrical Transient Analysis Program (ETAP), which has been qualified as a Bechtel standard application program (BSAP). The offline simulation modules of ETAP 6.0.0, the most current release at the time of project execution, were used to conduct the JERP power system studies.

MODEL OF THE JERP ELECTRICAL NETWORK

The various Bechtel and third-party engineering centres prepared models of the electrical networks for the individual JERP units. The London core group integrated these various submodels into a composite model of the overall plant electrical network. It was necessary to ensure that all the engineering centres used uniform modelling principles to prepare the individual models, to speed the process of integrating them. The London core group issued specific discipline work instructions (DWIs) to the engineering centres and held a series of conferences to explain the modelling principles to be followed to ensure uniformity. These work instructions covered key points such as model structure, division of responsibility for preparing and using the model, key data required, instructions for dealing with cases of incomplete/missing data related to network or equipment required for the model, and use of library data (accompanied by a common library database to be used to populate the model).

The major items modelled were the GTGs/STGs along with their control systems (governors, exciters, and power system stabilisers), plant loads, and interconnecting power cables. The modelling of certain complex portions of the GTG control system required software such as Simulink, a specialised program used to model and simulate dynamic control systems. OTI constructed the models, which were later integrated with the overall model.

ELECTRICAL SYSTEM STUDIES AND STUDY CASES

A wide range of system studies can be conducted on electrical networks to study the behaviour of the system under steady-state conditions as well as conditions in which it is subjected to disturbances in normal operation (e.g., step loading or load sharing amongst generators) or unplanned events (e.g., electrical fault, generators tripping). Because it is not possible to analyse every expected operating condition, it is very important to select the study cases whose results can be used to predict the system behaviour under all operating conditions. As a result, the studies are usually conducted on the most onerous conditions expected during the refinery operation. The following system studies were carried out to analyse the behaviour of the JERP and DTA electrical networks. In line with the specification requirement for the JERP, the engineering centres used International Electrotechnical Commission (IEC) standards as the basis for evaluating the results of all studies except harmonic analysis.

Load Flow Analysis
Once the refinery is commissioned and fully operational, the electrical system is expected to operate in a stable condition. Load flow analysis is a steady-state analysis that calculates the active and reactive power flows through each element of the network and the voltage profile at the network's various buses. A balanced load flow analysis is adequate because the vast majority of loads in the refinery are inherently balanced (e.g., three-phase motors). Load flow analyses help identify any abnormal system conditions during steady-state operation that can be harmful for the system in the long run. They also provide the initial basis for other detailed analyses such as motor starting and transient stability. It is also to be noted that the results of the load flow analysis affect these other analyses. For example, an electrical system operating under steady-state conditions is more likely to satisfactorily survive a transient event such as the step-loading or tripping of one of the operating generators if its initial operating conditions are favourable (e.g., voltages within limits, sufficient margin in the loading of various network elements).

Some of the main parameters examined in a load flow analysis are presence of overvoltage or undervoltage at any point in the electrical network, overloading of any network element, and very low system power factor. To study system behaviour at the JERP under all expected operating conditions, the London core group carried out load flow analyses under these three sets of conditions:
• Normal system configuration, i.e., with redundant power feeds, where available, to various plant switchboards that simulate the normal operating condition of the electrical network
• Loss of redundant power feed, i.e., with single power feed to the various plant switchboards (This condition of single-ended operation can occur in the electrical network under a contingency like loss of plant transformers.)
• No-load conditions (This study case was selected to assist in identifying any dangerous overvoltage that may occur when the network is operating under no-load or lightly loaded conditions [e.g., plant startup conditions])

The results of these analyses revealed some instances in which the bus voltages exceeded the acceptable limits. The London core group recommended that the tap settings of the upstream transformers associated with these switchboard buses be changed to bring the voltages within the specified limits. The core group also highlighted cases of potential overloading of transformers under loss of redundant power feed (second case) for observation during actual plant operation.
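For illustration, the sketch below performs the kind of steady-state calculation that underlies a load flow study, using a Gauss–Seidel iteration on a deliberately simple two-bus system (slack source plus one load bus). The per-unit network and load data are hypothetical and are not taken from the JERP model, which was analysed in ETAP.

```python
import cmath
import math

# Minimal Gauss-Seidel load flow on a two-bus system (slack source plus one load bus),
# meant only to illustrate the steady-state calculation behind a load flow study.
# All per-unit data below are hypothetical.
V_SLACK = 1.0 + 0j            # slack bus voltage, per unit
Z_LINE = 0.05 + 0.20j         # series impedance of the feeder between the buses, pu
S_LOAD = 0.80 + 0.40j         # load drawn at bus 2 (P + jQ), pu

y = 1 / Z_LINE
Y21, Y22 = -y, y              # relevant admittance terms (no shunt elements modelled)
S_INJ = -S_LOAD               # a load is a negative power injection

v2 = 1.0 + 0j                 # flat start
for _ in range(200):
    v2_new = (S_INJ.conjugate() / v2.conjugate() - Y21 * V_SLACK) / Y22
    if abs(v2_new - v2) < 1e-9:
        v2 = v2_new
        break
    v2 = v2_new

i_line = (V_SLACK - v2) * y
s_slack = V_SLACK * i_line.conjugate()     # power supplied by the slack bus
print(f"Bus 2 voltage: {abs(v2):.4f} pu at {math.degrees(cmath.phase(v2)):.2f} deg")
print(f"Slack bus supplies: {s_slack.real:.3f} + j{s_slack.imag:.3f} pu")
```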

Short Circuit Analysis A short circuit condition imposes the most onerous short-time duty on the system electrical equipment. This fault condition arises as a result of insulation failure in the equipment or wrong operation of the equipment (e.g., closing onto an existing fault or closing circuit breakers when the associated earth switch is closed), leading to the flow of uncontrolled high currents and severely unbalanced conditions in the electrical system. The four main types of short circuits are: Three-phase short circuit with or without earthing (This is usually the most severe short circuit condition.) Line-to-earth (single-phase-to-earth) fault (In certain circumstances, the short circuit current for a line-to-earth fault can exceed the three-phase short circuit current.) Line-to-line (phase-to-phase) fault Double line-to-earth fault The electrical equipment has to be rated for the short circuit level of the system, which basically requires all of the following conditions to be met: The electrical equipment must be able to withstand the short circuit current until the protective equipment (relays) detects the fault and it is cleared by opening circuit breakers (i.e., thermal withstand short circuit current). The IEC standards specify a standard withstand duration of 1 second or 3 seconds. The JERP used switchgear rated for 1-second withstand time. The circuit breakers must be suitable to interrupt the flow of the short circuit current (i.e., breaking duty). The circuit breakers must be suitable to close onto an existing fault (i.e., making duty). Additionally, the protective system of the network has to be set to enable reliable detection of any short circuit condition (minimum and maximum short circuit conditions).
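The short-time duties referred to above are normally quantified from the initial symmetrical short circuit current together with the network R/X ratio. The sketch below applies simplified IEC 60909-style expressions for the peak current factor and the decaying d.c. component; the 33 kV source impedance, voltage factor, and breaker opening time are hypothetical example values, not JERP data.

```python
import math

# Simplified IEC 60909-style estimate of the initial symmetrical short circuit
# current, the peak current ip, and the d.c. component at contact separation.
# The source impedance and opening time below are hypothetical.
U_N = 33_000.0                 # nominal system voltage, V
C_MAX = 1.10                   # voltage factor for maximum fault level (>1 kV systems)
R, X = 0.4, 4.0                # equivalent source resistance/reactance at the fault, ohms
FREQ = 50.0                    # Hz
T_MIN = 0.05                   # assumed minimum time to contact separation, s

Z = math.hypot(R, X)
ik_sym = C_MAX * U_N / (math.sqrt(3) * Z)                 # initial symmetrical rms current, A
kappa = 1.02 + 0.98 * math.exp(-3.0 * R / X)              # peak factor
i_peak = kappa * math.sqrt(2) * ik_sym                    # peak current in the first cycle, A
i_dc = math.sqrt(2) * ik_sym * math.exp(-2 * math.pi * FREQ * T_MIN * R / X)

print(f'I"k = {ik_sym / 1000:.1f} kA (rms)')
print(f'ip  = {i_peak / 1000:.1f} kA (peak, kappa = {kappa:.2f})')
print(f'Idc = {i_dc / 1000:.1f} kA at t = {T_MIN * 1000:.0f} ms')
```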

The calculation of the short circuit current for these conditions is made more complex by the behaviour of the short circuit current immediately after the fault. Depending on the network characteristics, behaviour of the generators in the network, and the exact instant of the fault, the short circuit current may contain significant amounts of transient alternating and direct current components, which decay to zero over time, depending on the characteristics of the network and the rotating machines. It is very difficult to account for the effects of these phenomena through manual calculation methods. This is particularly true because the presence of a large direct current component in the short circuit current imposes a very stringent breaking duty on the circuit breakers, since a natural current zero may not be achieved. The results of the short circuit analysis calculated the following various components of the short circuit current at each bus: ipPeak current in the first cycle after the short circuit IdcDirect current component at the instant the circuit breaker opened Ib sym and Ib asymSymmetrical and asymmetrical root mean square currents at the instant the circuit breaker opened IthThermal withstand short circuit current for 1-second rating These results were cross-checked with the equipment ratings to verify that the equipment short-time ratings were suitable for the short level of the system. Stability Analysis It is relevant to note the concept of stability as defined in standards such as Standard 399 of The Institute of Electrical and Electronics Engineers, Incorporated (IEEE). [1] IEEE 399 states that a system (containing two or more synchronous machines) is stable, under a specified set of conditions, if, when subjected to one or more bounded disturbances (less than infinite magnitude), the resulting system response(s) are bounded. System stability requirements can be generally categorised into steady-state stability, dynamic stability, and transient stability. [2] Steady-State Stability Analysis Steady-state stability is the ability of the system to remain stable under slow changes in system loading. The power transfer between two synchronous machines (generator G and motor

the first swing of the machines, generally within 1 second of the event). Also, the traditional transient stability analysis ignored the action of the machine governor, exciter, and automatic voltage regulator (AVR) because they were slow-acting compared with the duration of the analysis. This approach to transient stability analysis has been modified in recent times since the advent of governors, exciters, and AVRs based on fast-acting control systems. It has also been seen that different sections of an interconnected network may respond at different times to a transient event that sometimes may be outside the traditional 1 second window for transient analysis. Also, the behaviour of different sections of the network may be different for the same transient event. Hence, to verify whether system stability is retained, the transient stability analysis needs to be carried out for a longer duration (preferably over a range of transient events having varying severities and durations). This kind of analysis was not possible in earlier years due to the high complexity of modelling and limited computing power, but today such an analysis can be performed because of the availability of practically unlimited computing power on desktop and laptop computers, coupled with specialised computer programs such as ETAP. Hence, the dividing line between dynamic stability analysis and transient stability analysis has been virtually eliminated. A range of stability analyses was carried out on the JERP refinery system. They covered the operation of the JERP electrical network while in a standalone condition as well as in parallel operation with the DTA electrical network. The stability analyses can be broadly classified into the following categories. Transient and Extended Dynamic Stability Analysis Fault withstand study: This study entailed simulation of single-phase and three-phase faults at various locations in the electrical network. It analysed the behaviour of the power system in the pre-fault stage, during the fault, and after the fault was cleared by the systems protective devices. Load throw-off study: A load throw-off condition can cause the machines to over speed. Temporary overvoltage conditions can also occur in the system. Hence, the behaviour of the electrical system was studied for all probable cases of load throwoff in which a substantial portion of the operating load was suddenly tripped.

Figure 3. Power Transfer Between Machines

M; refer to Figure 3) with internal voltages of EG and EM, respectively, and a phase angle of δ between them, is represented in Equation 1: [1]

P = (EG × EM × sin δ) / X    (1)

The maximum power that can be transferred occurs when δ = 90 degrees, per Equation 2:

Pmax = (EG × EM) / X    (2)
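Equations 1 and 2 can be evaluated directly to show how the steady-state limit arises; the per-unit internal voltages and reactance in the sketch below are arbitrary illustrative values, not JERP data.

```python
import math

# Worked illustration of Equations 1 and 2 for power transfer between two machines.
# The per-unit internal voltages and reactance below are arbitrary example values.
E_G, E_M = 1.05, 1.00          # internal voltages, per unit
X = 0.5                        # total reactance between the machines, per unit

def power_transfer(delta_deg):
    """Equation 1: P = (EG * EM / X) * sin(delta)."""
    return E_G * E_M / X * math.sin(math.radians(delta_deg))

p_max = E_G * E_M / X          # Equation 2: transfer at delta = 90 degrees
for delta in (10, 30, 60, 90):
    print(f"delta = {delta:3d} deg -> P = {power_transfer(delta):.3f} pu")
print(f"Steady-state limit Pmax = {p_max:.2f} pu")
```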

For particular values of EG and E M, the machines lose synchronism with each other if the steady-state limit of Pmax is exceeded. The steady-state stability study determines the maximum value of machine loading that is possible without losing synchronism as the system loading is increased gradually. Dynamic Stability Analysis A steady-state scenario never exists in actual operation, however. Rather, the state of the electrical system can be considered dynamic, whereby small, random changes in the system load constantly occur, followed by actions of the generator governor, exciter, and power system stabiliser to adjust the output of the machine and match the load requirement. The system can be considered stable if the responses to these small, random disturbances are bounded and damped to an acceptable limit within a reasonable time. The dynamic stability analysis of any system is only practical through specialised computer programs such as ETAP. Transient Stability Analysis Transient stability is the ability of the system to withstand sudden changes in generation, load, or system characteristics (e.g., short circuits, tripping of generators, switching in large bulk loads) without a prolonged loss of synchronism. [1] Traditionally, transient stability analysis focused on the ability of the system to remain in synchronism immediately after the occurrence of the transient event (i.e.,

Load sharing on tripping of tie circuit breaker between JERP and DTA electrical systems: The behaviours of the JERP and DTA electrical systems were studied on disconnection of the JERP-DTA tie line. This study was carried out for various combinations of operating GTGs/STGs in the JERP and DTA electrical systems (i.e., various power flow scenarios between the JERP and DTA systems). When system stability could not be achieved by load sharing amongst the operating GTGs/STGs in each individual network, load shedding was simulated to try to achieve a stable system.

Contingency Analysis
Load sharing on tripping of JERP/DTA GTG: The behaviour of the power system was studied when one or more of the operating GTGs/STGs tripped, causing a loss of generation. This study was carried out for various combinations of operating GTGs/STGs in the JERP and DTA electrical systems. When system stability could not be achieved by load sharing amongst the remaining operating GTGs/STGs, load shedding was simulated to try to achieve a stable system.

Operational Analysis
Step-load addition study: A sudden addition of load on operating machines can cause loss of stability. A step-load addition scenario can occur in a variety of ways in an electrical system, the most probable being loss of one of the operating machines, which can cause a sudden increase in the load demand on the other operating GTGs/STGs. The behaviour of the system was studied for all probable scenarios of step-load addition.

The results of these stability analyses helped define the limits of safe operation of the power system under various generation/load scenarios.

Motor-Starting Study
At the instant of starting, synchronous and induction motors draw a starting current that is several times the full-load current of the motor. In the absence of assisted starting, this starting current is typically between 600% and 720% of the normal full-load current. This high current causes a voltage drop in the upstream electrical network, as well as in the motor feeder cable. The effects of this voltage drop include:
• The combined voltage drop in the supply network and the motor cable reduces the voltage available at the motor terminals during the starting period. Because the motor torque is directly proportional to the square of the applied voltage, excessive voltage drops can mean that insufficient torque is available to accelerate the motor in the face of the load torque requirement, leading to very long starting times or a failure to start.
• The voltage drop at the switchboard buses can affect the other operating loads, mainly in the form of nuisance tripping of other loads on the network (e.g., voltage-sensitive loads or contactor-fed loads where the control voltage for the contactor is derived from the switchgear bus).
• There can also be cases in which the reduction in the terminal voltage for the operating motors causes the motor-torque curve to shift downwards. This reduction in the motor torque can cause the running motors to stall.
• For the other operating loads, a reduction in the motor terminal voltages causes the current drawn by the motors to increase as they strive to produce the power the process demands of them. This condition exacerbates the voltage problem because the increased current gives rise to an increased voltage drop in the system.
• Depending on the size of the motor being started and the generating capacity available, motor starting can impose a very high short-term demand on the operating generators.

Studying motor starting can help identify these voltage-drop-related problems at the design stage. Usually, the worst-case motor-starting scenario is the starting of the highest-rated motor (or the highest-rated standby motor) at each voltage level with the operating load of the plant as the standing load. However, other worst-case scenarios may require evaluation in certain situations:
• Motors with an unusually long supply cable circuit
• Motors fed from a weak power supply (e.g., starting on emergency power supplied from a diesel generator set of limited rating)
• Simultaneous starting of a group of motors

In the event of an unfavourable outcome from the motor starting study, various improvement measures are available, including:
• Specifying that motors be designed with a lower value of starting current, which is particularly feasible for the larger medium-voltage (MV) motors
• Specifying lower impedance for the upstream transformer after verifying the suitability through a short circuit analysis
• Starting the largest motors in the network with a reduced standing load
• Using larger cable sizes for the motor feeder to improve the motor terminal voltage
• Providing assisted starting, if required, for the larger HV/MV motors instead of direct on-line starting
• Using motor unit transformers to feed power to large MV motors, which ensures that the effect of the voltage drop on the rest of the electrical system is reduced
• Increasing upstream bus voltage temporarily (e.g., through on-load tap changers [OLTCs]) before starting large motors

Various motor starting scenarios were modelled for the JERP, and the results indicated that the motors could be started satisfactorily.
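As a simplified, illustrative check of the voltage-drop effects described above (the values are hypothetical, not JERP data), a direct-on-line start can be approximated by representing the motor at its locked-rotor impedance and the upstream network and cable as a series impedance; the available accelerating torque then scales with the square of the resulting terminal voltage.

```python
# Illustrative direct-on-line starting check (per-unit on the motor base).
# Hypothetical values; a project study would use a tool such as ETAP.

LRC = 6.5            # locked-rotor current, pu of full-load current (600%-720% is typical)
Z_motor = 1.0 / LRC  # approximate locked-rotor impedance, pu
Z_source = 0.04      # upstream network impedance, pu (assumed)
Z_cable = 0.03       # motor feeder cable impedance, pu (assumed)

# Simple magnitude-only voltage divider (ignores X/R angles for illustration)
v_terminal = Z_motor / (Z_motor + Z_source + Z_cable)
i_start = 1.0 / (Z_motor + Z_source + Z_cable)   # starting current actually drawn, pu
torque_factor = v_terminal ** 2                  # torque varies with the square of applied voltage

print(f"Terminal voltage during start  : {v_terminal*100:.1f} %")
print(f"Starting current drawn         : {i_start:.2f} pu")
print(f"Available torque vs rated volts: {torque_factor*100:.1f} %")
```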
Transformer Energisation Studies
The inrush phenomenon in transformers can inflict a very severe, albeit short-term, effect on the voltage profile at the refinery's various switchboard buses. The inrush current taken by the transformers is due to the behaviour of the magnetic circuit. The constant flux linkage theorem states that the magnetic flux in an inductive circuit cannot change suddenly. Hence, the magnetic flux immediately after energisation (t = 0+) should be equal to the magnetic flux immediately before energisation (t = 0-). When a transformer is switched on, the magnetic flux immediately after energisation depends on the following factors that are essentially random:
• The point on the sine wave voltage waveform where the transformer is switched on, which decides the amount and direction of the flux requirement
• The amount and direction of the remnant flux, which depends on the point on the sine wave voltage waveform where the transformer was last switched off

As explained by the constant flux linkage theorem, the magnetic flux after energisation retains a sinusoidal shape that is biased by the flux requirement at the point of energisation and the remnant flux. Depending on the design of the transformer, this condition can cause the flux requirement to be well above the knee-point voltage on the transformer magnetising curve, leading to very high excitation currents that may
reach large multiples of the full-load current of the transformer. The inrush current decays substantially within a few cycles.

Although modern protection systems are well-equipped with algorithms to distinguish the transformer inrush current from the short circuit, the inrush current still causes a severe voltage dip at the other switchboards in the network. This voltage dip can cause nuisance tripping of other network loads. The London core group studied various probable transformer energisation scenarios (including group energisation of transformers) to confirm that the network voltages recover without tripping system operating loads.

Because ETAP could not directly model transformer behaviour under inrush conditions, the impact of the transformer inrush current was simulated by switching a series of low power-factor loads in and out at intervals of 5 milliseconds. The load values were selected as exponentially decreasing to simulate the inrush current decay. To ensure accurate modelling, the inrush current data was based on transformer manufacturers' data supplemented by the measurements recorded during site testing and commissioning.

The results of the transformer energisation studies established the network conditions under which the JERP transformers can be safely energised. This finding was crucial because in certain scenarios, the JERP main transformers were to be energised from the DTA electrical system and any disruption to the DTA operating loads could lead to tripping of the DTA refinery.
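A minimal sketch of the surrogate-load approach described above is shown below. The peak inrush multiple, decay time constant, and transformer rating are illustrative assumptions only; in the actual study these quantities were taken from manufacturer data and site measurements.

```python
import math

# Sketch of the exponentially decaying load steps used to mimic transformer inrush.
# All numerical values are illustrative assumptions, not JERP data.

STEP_MS = 5          # switching interval for the surrogate loads, ms (as described above)
PEAK_MULTIPLE = 8.0  # assumed initial inrush, multiples of transformer full-load current
TAU_MS = 60.0        # assumed decay time constant, ms
DURATION_MS = 200    # length of the simulated inrush event, ms
S_RATED_MVA = 50.0   # assumed transformer rating, MVA

steps = []
for t in range(0, DURATION_MS + STEP_MS, STEP_MS):
    multiple = PEAK_MULTIPLE * math.exp(-t / TAU_MS)
    steps.append((t, multiple * S_RATED_MVA))  # low power-factor load switched in at time t

for t, mva in steps[:6]:
    print(f"t = {t:3d} ms  ->  switched-in load ~ {mva:6.1f} MVA")
```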
Harmonic Analysis
The amount of periodic waveform distortion present in the power supply is one of the most important criteria for measuring power quality. Periodic waveform distortion is characterised by the presence of harmonics and interharmonics in the power supply. Harmonics are sinusoidal voltages and currents with frequencies that are integral multiples of the fundamental frequency of the system. Interharmonics are sinusoidal voltages and currents with frequencies that are non-integral multiples of the fundamental frequency of the system. For the JERP:

f1 = fundamental frequency = 50 Hz
fharmonic = n x f1 (n = 2, 3, 4, ...)   (3)
finterharmonic = m x f1 (m > 0 and non-integral)   (4)
Periodic waveform distortion is caused by non-linear loads, which are loads that do not draw a sinusoidal current when excited by a sinusoidal voltage. The non-linear loads act as sources of harmonic currents in the power system, which cause a voltage distortion at the various buses because of the harmonic voltage drops across the impedances of the network. Hence, the quantum of voltage distortion depends on the harmonic currents injected into the system and the impedance of the system (the voltage distortion in a weak system, characterised by a high system impedance, is higher).

The presence of excessive harmonics can lead to premature aging of electrical insulation due to dielectric thermal or voltage stress in equipment such as motors, cables, and transformers. Other possible effects of harmonics include reduced power factors, incorrect operation of protection systems, interference with communication networks, and occurrence of series and parallel resonant conditions that can lead to excessive currents and voltages in the system. Hence, it is important to carry out a harmonic analysis wherever the non-linear load forms a significant portion of the total load.

The JERP electrical network includes a large number of harmonic-generating loads, mainly 22 kW and 37 kW low-voltage (LV) variable-speed drives (VSDs) that act as sources of harmonic currents. The London core group carried out a harmonic analysis of the JERP electrical network to verify that the voltage distortion at the network's various switchboards caused by these harmonic-generating loads is within the limits specified in Table 11-1 of IEEE 519 (Table 1).
Table 1. Harmonic Limits as Defined by IEEE 519

Rated Bus Voltage               | Individual Harmonic Distortion (IHD), % | Total Harmonic Distortion (THD), %
69 kV and less                  | 3.0                                     | 5.0
Greater than 69 kV up to 161 kV | 1.5                                     | 2.5
161 kV and greater              | 1.0                                     | 1.5

Note: For shorter periods, during startups or unusual conditions, these limits may be exceeded by 50%.
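To make the quantities limited in Table 1 concrete, the sketch below shows how THD and the worst individual harmonic distortion would be computed from a bus-voltage harmonic spectrum and checked against the 69 kV-and-less row. The spectrum used here is invented purely for illustration.

```python
import math

# Invented example spectrum: harmonic order -> voltage magnitude, % of fundamental
spectrum = {5: 2.1, 7: 1.4, 11: 0.9, 13: 0.6, 23: 0.3, 25: 0.2}

IHD_LIMIT = 3.0   # %, Table 1, buses rated 69 kV and less
THD_LIMIT = 5.0   # %

thd = math.sqrt(sum(v ** 2 for v in spectrum.values()))
worst_order, worst_ihd = max(spectrum.items(), key=lambda kv: kv[1])

print(f"THD = {thd:.2f} %  (limit {THD_LIMIT} %) -> {'OK' if thd <= THD_LIMIT else 'EXCEEDED'}")
print(f"Worst IHD = {worst_ihd:.2f} % at harmonic {worst_order} "
      f"(limit {IHD_LIMIT} %) -> {'OK' if worst_ihd <= IHD_LIMIT else 'EXCEEDED'}")
```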
All harmonic-generating process loads were modelled in the ETAP model used for harmonic analysis. The power sources in the JERP network (GTGs and STGs) were assumed to have no harmonic distortion. As a worst-case scenario, the harmonic analysis was carried out with the minimum generation configuration under normal operating conditions because this configuration corresponds to the maximum system impedance.

The results of the harmonic analysis highlighted the switchboards whose power quality needs to be monitored further during plant operation. The London core group recommended that any corrective action (such as adding harmonic filters) to reduce the harmonics at these JERP plant switchboards be undertaken after measuring the actual harmonic levels at the various 6.6 kV/415 V switchboards when the plant is operating.

CONCLUSIONS

The results of the system studies of the JERP electrical network verified the adequacy of the ratings for the system's major equipment. The results also helped determine the conditions for satisfactory and reliable system operation and highlighted any operational restrictions required for safe operation.

LESSONS LEARNT

Conducting electrical system studies on a complex project such as the JERP and working with execution centres and non-Bechtel engineering contractors located across the globe have highlighted three major areas, discussed below, where existing Bechtel project procedures can be improved or fine-tuned to increase operating efficiency.

Distributing Work
The work distribution amongst the execution centres, non-Bechtel engineering contractors, and the London core group for carrying out ETAP modelling must be clearly defined through proper DWIs. Amongst other things, the instructions should include the structure of the model, the extent of modelling required, the data required to be populated in the model, the methodology of populating the data, the use of assumptions and approximations, the common library to be used to populate the standard data in the model, and the tests that must be carried out to ensure that sections of the model meet all requirements before they are transferred to the London core group for integration into the overall model.
Handling Model Revisions
Proper work procedures for handling ETAP model revisions need to be furnished to the execution centres/non-Bechtel engineering contractors so that the London core group can integrate the revised models, or revised sections of same, into the overall model without causing rework or loss of data.

Identifying Study Cases
To increase engineering efficiency, it is essential to optimise the types of studies to be conducted on a project and the number of cases to be analysed for each study. At the same time, it is essential to ensure that the number and types of study cases allow the engineer to determine system behaviour under all operating conditions.

This opportunity is particularly valuable because the projects that Bechtel is bound to take up (in the role of engineering contractor or as a project management consultant or member of a project management team) are more likely to be of the scale of the JERP, and it is highly likely that the engineering work for such projects will be divided amongst various execution centres.
REFERENCES
[1] IEEE 399-1997, IEEE Recommended Practice for Industrial and Commercial Power Systems Analysis, The Institute of Electrical and Electronics Engineers, Inc., 1998, pp. 7-9, 209-214, access via http://standards.ieee.org/colorbooks/sampler/Brownbook.pdf.
[2] D.P. Kothari and I.J. Nagrath, Power System Engineering, 2nd Edition, Tata McGraw-Hill Publishing Company Ltd., 2008, Chapter 12, pp. 558-560, access via http://highered.mcgraw-hill.com/sites/0070647917/information_center_view0/.

ADDITIONAL READING
Additional information sources used to develop this paper include:
• P.M. Anderson and A.A. Fouad, Power System Control and Stability, 2nd Edition, IEEE Press Series on Power Engineering, John Wiley & Sons, Inc., 2003, pp. 5-10, access via http://www.amazon.com/Power-System-Control-Stability-Engineering/dp/0471238627#noop.
• "Systems, Controls, Embedded Systems, Energy, and Machines," The Electrical Engineering Handbook, 3rd Edition, Richard C. Dorf, ed., Chapter 5, CRC Press/Taylor & Francis Group, LLC, Boca Raton, FL, 2006, pp. 5-1 to 5-3, access via http://www.amazon.com/Controls-Embedded-Machines-Electrical-Engineering/dp/0849373476.
ACKNOWLEDGMENTS
The author wishes to express his gratitude to R.H. Buckle (chief engineer), R.D. Hibbett (lead electrical engineer, JERP), and David Hulme (project engineering manager) for their support and guidance during execution of the JERP system studies. The author also wishes to thank V. Shanbhag, B.S. Venkateswar, and M.A. Mujawar from Reliance Industries for their support and encouragement.

BIOGRAPHY
Rajesh Narayan Athiyarath is a senior electrical engineer in Bechtel's OG&C Global Business Unit. He has 16 years of experience in engineering oil and gas, petrochemical, and GTG power plant projects worldwide. During his 3 years with Bechtel OG&C (London), Rajesh has contributed to system studies and relay coordination studies on the JERP and to FEED for the Ruwais refinery expansion project. He has also acted as the responsible engineer for the energy management system and load shedding system on the JERP. Rajesh received a performance award for his work on the JERP system studies. Rajesh holds a BE from Mumbai University, India, and is a chartered electrical engineer (CEng, member of Institution of Engineering and Technology [MIET], UK). He is a Six Sigma Yellow Belt.
TRADEMARKS ETAP is a registered trademark of Operation Technology, Inc. IEEE is a registered trademark of The Institute of Electrical and Electronics Engineers, Incorporated. Merox is a trademark owned by UOP LLC, a Honeywell Company. Simulink is a registered trademark of The MathWorks, Inc.
POWER

Options for Hybrid Solar and Conventional Fossil Plants
David Ugolini; Justin Zachary, PhD; Joon Park

Managing the Quality of Structural Steel Building Information Modeling
Martin Reifschneider and Kristin Santamont

Nuclear Uprates Add Critical Capacity
Eugene W. Thomas

Interoperable Deployment Strategies for Enterprise Spatial Data in a Global Engineering Environment
Tracy J. McLane; Yongmin Yan, PhD; Robin Benjamins

Section photo: Prairie State Energy Campus. Exposed piling, which will support flue gas treatment equipment, catches light at sunset. The power block is in the background.
OPTIONS FOR HYBRID SOLAR AND CONVENTIONAL FOSSIL PLANTS
David Ugolini (dugolini@bechtel.com), Justin Zachary, PhD (jzachary@bechtel.com), and Joon Park (hjpark@bechtel.com)

Issue Date: December 2009

Abstract: Renewable energy sources continue to add to the electricity supply as more countries worldwide mandate that a portion of new generation must be from renewable energy. In areas that receive high levels of sunlight, solar technology is a viable option. To help alleviate the capital cost, dispatchability, and availability challenges associated with solar energy, hybrid systems are being considered that integrate concentrating solar power (CSP) technology with conventional combined cycle (CC) or Rankine cycle power blocks. While briefly discussing Rankine cycle applications, this paper focuses primarily on the most widely considered hybrid approach: the integrated solar combined cycle (ISCC) power plant. The paper examines the design and cost issues associated with developing an ISCC plant using one of the three leading CSP technologies: solar trough, linear Fresnel lens, and solar tower.

Keywords: combined cycle (CC), concentrating solar power (CSP), concentrating solar thermal (CST), heat transfer fluid (HTF), integrated solar combined cycle (ISCC), linear Fresnel lens, renewable energy, solar tower, solar trough
INTRODUCTION

More and more countries are mandating that a portion of new energy be from renewable sources such as solar, wind, or biomass. However, compared with traditional power generation technologies, renewable energy faces challenges (primarily related to capital cost) that are only partially compensated for by lower expenditures for operation and maintenance (O&M) and fuel. Other challenges include dispatchability and the intermittent nature of some of these energy sources. These challenges can be overcome by using some form of storage. However, large-scale energy storage also has unresolved technical and cost issues.

A viable alternative that helps to alleviate the challenges associated with renewable energy is a hybrid system that integrates renewable sources with combined cycle (CC) or Rankine cycle power blocks. One such hybrid system is the integrated solar combined cycle (ISCC), which uses concentrating solar thermal (CST) energy as the renewable source. In regions with reasonably good solar conditions, CST hybrids involving conventional coal-fired plants are also feasible. For these plants, where steam pressures and temperatures are higher than for ISCC plants, the specific solar conversion technology used dictates how solar is integrated into the plant.
Finally, hybrids are possible that combine the different forms of renewable energy to increase daily electricity supply. The focus of this paper is the ISCC power plant. For comparison purposes, integration options with Rankine cycle power blocks are also briefly discussed. In either case, the integration seeks to achieve efficient operation even though solar energy intensity varies according to time of day, weather, and season.
BACKGROUND

Concentrated sunlight has been used to perform tasks since ancient times. As early as 1866, sunlight was successfully harnessed to power a steam engine, the first known example of a concentrating-solar-powered mechanical device. Today, conventional CC plants achieve the highest thermal efficiency of any fossil-fuel-based power generation system. In addition, their emissions footprint, including CO2, is substantially lower than that of coal-fired plants. Properly integrating an additional heat source, such as concentrating solar power (CSP), can dramatically increase CC system efficiency.
ABBREVIATIONS, ACRONYMS, AND TERMS
ACC – air-cooled condenser
CC – combined cycle
CSP – concentrating solar power
CST – concentrating solar thermal
HP – high pressure
HPEC – HP economizer
HPEV – HP evaporator
HPSH – HP superheater
HRSG – heat recovery steam generator
HTF – heat transfer fluid
IGCC – integrated gasification CC
IP – intermediate pressure
IPEC – IP economizer
IPEV – IP evaporator
IPSH – IP superheater
ISCC – integrated solar CC
LP – low pressure
LPEV – LP evaporator
LPSH – LP superheater
LTEC – low-temperature economizer
NREL – (US) National Renewable Energy Laboratory
O&M – operation and maintenance
PG&E – Pacific Gas & Electric
RH – reheater
SAM – (NREL) Solar Advisor Model
SEGS – Solar Electric Generating Station

Compared with the cost of a steam turbine in a standalone solar power plant, the incremental cost of increasing a CC plant's steam turbine size is considerably less. At the same time, the annual electricity production resulting from CST energy is improved over that of a standalone solar power plant because the CC plant's steam turbine is already operating, avoiding time lost to daily startup. Moreover, during solar operation, steam produced by the solar heat source offsets the typical CC power loss resulting from higher ambient temperatures. Thus, ISCC is a winning combination for both CC and solar plants in terms of reduced capital cost and continuous power supply.

When considering an ISCC system, the following must be examined:
• Solar technology to be used and its impact on steam production
• Amount of solar energy to be integrated into the CC
• Optimal point in the steam cycle at which to inject solar-generated steam
EXISTING SOLAR THERMAL SYSTEMS AND THEIR IMPACT ON STEAM PRODUCTION

CSP systems require direct sunlight to function. Lenses or mirrors and a tracking device are used to concentrate sunlight. Each system consists of the following:
• Concentrator
• Receiver
• Storage or transportation system
• Power conversion device

Existing CSP technologies include:
• Solar trough
• Linear Fresnel lens
• Solar tower

Solar Trough
The solar trough is considered to be the most proven CSP technology. Since the 1980s, more than 350 MW of capacity has been developed at the Solar Electric Generating Station (SEGS) solar trough plants in California's Mojave Desert. The solar trough is a cylindrical parabolic reflector consisting of 4- to 5-mm-thick (0.16- to 0.20-inch-thick), glass-silvered mirrors. (The mirrors may also be made of thin glass, plastic films, or polished metals.) It is designed to follow the sun's movement using a motorized device and to collect and concentrate solar energy and reflect it onto a linear focus. A specially coated metal receiver tube, enveloped by a glass tube, is located at the focal point of the parabolic mirror. The special coatings aim to maximize energy absorption and minimize heat loss. A conventional synthetic-oil-based heat
transfer fluid (HTF) flows inside the tube and absorbs energy from the concentrated sunlight. The space between the receiver tube and the glass tube is kept under vacuum to reduce heat loss. Several receiver tubes are connected into a loop. A metal support structure, sufficiently rigid to resist the twisting effects of wind while maintaining optical accuracy, holds the receiver tubes in position. Figure 1 shows a solar trough installation.
Figure 1. Solar Trough Installation (Source: Solel)

Many loops are required to produce the heat necessary to bring large quantities of HTF to the maximum temperature allowable, which is around 395 °C (745 °F) because of HTF operational limitations. In locations with good solar radiation, about a 1.5- to 2.0-hectare (4- to 5-acre) solar field is needed to generate 1 MW of capacity. Hot HTF goes into a steam generator, a heat exchanger where HTF heat is transmitted to water in the first section to convert the water into steam and then transmitted to steam in the second section to generate superheated steam. From this point onward, the power block converting steam into electricity consists of conventional components, including steam turbine, heat sink, feedwater heaters, and condensate and boiler feed pumps.

Advantages of solar trough technology include:
• Well understood, with proven track record
• Demonstrated on a relatively large scale
• May bring projects to execution faster than other competitive CSP technologies

Disadvantages of solar trough technology are related to:
• Maximum HTF temperature, which dictates relative cycle efficiency
• Complexity of an additional heat exchanger between the Rankine cycle working fluid and the solar-heated fluid
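Using the rule of thumb quoted above (roughly 1.5 to 2.0 hectares of solar trough field per MW in locations with good solar radiation), a first-pass land estimate can be sketched as follows; the target capacity chosen here is an arbitrary example, not a value from the paper.

```python
# First-pass land-area estimate for a parabolic trough field, based on the
# 1.5-2.0 hectare-per-MW rule of thumb quoted above (illustrative only).

HA_PER_MW_LOW, HA_PER_MW_HIGH = 1.5, 2.0
HECTARE_TO_ACRE = 2.471

target_mw = 50.0  # example solar capacity to be integrated (assumed)

low_ha = target_mw * HA_PER_MW_LOW
high_ha = target_mw * HA_PER_MW_HIGH
print(f"{target_mw:.0f} MW of trough capacity -> roughly "
      f"{low_ha:.0f}-{high_ha:.0f} ha "
      f"({low_ha*HECTARE_TO_ACRE:.0f}-{high_ha*HECTARE_TO_ACRE:.0f} acres)")
```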
Linear Fresnel Lens
The linear Fresnel lens solar collector is a line-focus system similar to the solar trough. However, to concentrate sunlight, it uses an array of nearly flat reflectors (single-axis-tracking, flat mirrors) fixed in frames to steel structures on the ground. Several frames are connected to form a module, and modules form rows that can be up to 450 meters (492 yards) long. The receiver consists of one or more metal tubes, with an absorbent coating similar to that of trough technology, located at a predetermined height above the mirrors. Water or a water-and-steam mixture with a quality of around 0.7 flows inside the tubes and absorbs energy from the concentrated sunlight. At the ends of the rows, the water and steam are separated, and saturated steam is produced for either process heat or to generate electricity using a conventional Rankine cycle power block. Figure 2 shows a linear Fresnel lens installation.

Figure 2. Linear Fresnel Lens Installation (Source: Ausra, Inc.)
Advantages of linear Fresnel lens technology over solar trough technology include:
• Direct steam generation without using intermediate HTF
• Less stringent optical accuracy requirements
• Decreased field installation activities because the construction design is geared toward factory assembly
• Use of conventional off-the-shelf materials
• Less wind impact on structural design
• Possible improved steam cycle efficiency if the temperature can be increased up to 450 °C (840 °F), as some technology suppliers are pursuing
Disadvantages with respect to solar trough technology are related to:
• Less mature, with only recent, relatively small-scale commercial developments
• Lower power cycle efficiency because of lower steam temperature
• Lower optical efficiency and increased heat losses because of no insulation around receiver tubes

Solar Tower
A solar tower is not a line-focus system. Rather, the system consists of a tall tower with a boiler on top that receives concentrated solar radiation from a field of heliostats, which are dual-axis-tracking mirrors. The heat transfer medium can be water, steam, molten salt, liquid sodium, or compressed air. However, in the more conventional arrangement, water is the working fluid. Figure 3 shows a conventional solar tower installation.

The water temperature is higher, close to 545 °C (1,020 °F), in the solar tower system than in the line-focus systems. In addition, the solar tower can be connected to molten salt storage, thus allowing the system to extend operating hours or increase capacity during periods when power is most valuable.

The main advantage of solar tower technology is the ability to provide high-temperature superheated steam. On the downside, the design requires accurate aiming and control
capabilities to maximize solar field heliostat efficiency and to avoid potential damage to the receiver on top of the tower.

Summary
CSP technology provides different options for introducing CST energy into a conventional fossil-fired plant. Table 1 summarizes the CSP technologies and their associated thermal outputs.
Table 1. CSP Technologies Summary

Technology          | Working Fluid     | Maximum Temperature, °C (°F)
Solar Trough        | Synthetic Oil HTF | 395 (745)
Linear Fresnel Lens | Steam             | 270 (520) (or higher)
Solar Tower         | Steam             | 545 (1,020)
INTEGRATION OPTIONS WITH COMBINED CYCLE POWER PLANTS

ISCC Plants
ISCC plants have been under discussion for many years. Table 2 lists plants that are proposed, in development, or under construction.
Table 2. ISCC Plants

ISCC Project    | Location        | Solar Technology | Plant Output, MWe | Solar Contribution, MWe
Kureimat        | Egypt           | Trough           | 140               | 20
Victorville     | California (US) | Trough           | 563               | 50
Palmdale        | California (US) | Trough           | 555               | 62
Ain Beni Mathar | Morocco         | Trough           | 472               | 20
Hassi R'Mel     | Algeria         | Trough           | 130               | 25
Yazd            | Iran            | Trough           | 430               | 67
Martin          | Florida (US)    | Trough           | 3,705             | 75
Agua Prieta     | Mexico          | Trough           | 480               | 31
Figure 3. Solar Tower Installation
Questions To Be Addressed
How steam generated by a given solar technology is integrated depends on the steam conditions that the technology generates. It is critical to remember that all power generated in the CC
steam cycle is free, from a fuel perspective. That is, steam cycle power is generated from energy provided in the gas turbine exhaust gases, not by burning additional fuel. Therefore, care must be exercised to not simply substitute fuel-free energy from solar power for fuel-free energy from exhaust gases. When solar energy is being integrated into the CC steam cycle, the goal is to maximize the use of both energy sources. Therefore, the following questions must be answered:
• What solar technology should be used?
• How much solar energy should be integrated?
• Where in the steam cycle is the best place to inject solar-generated steam?

There are no simple answers to these questions. Rather, detailed technical and economic analyses must be performed to evaluate various MWth solar inputs to the CC, different solar technologies and associated steam conditions, and the levelized cost of electricity for the site-specific location under consideration.

Solar Technologies
For discussion purposes, the solar technologies under consideration are to be integrated into a new 2 x 1 CC plant using F Class gas turbines; unfired, three-pressure, reheat heat recovery steam generators (HRSGs); a reheat steam turbine with
throttle conditions of 131 bara/566 °C (1,900 psia/1,050 °F) and reheat temperature of 566 °C (1,050 °F); and an air-cooled condenser (ACC). Because solar technologies are evolving and improving, they have been categorized based on fluid temperature capability:
• High temperature, >500 °C (>930 °F)
• Medium temperature, ~400 °C (750 °F)
• Low temperature, 250 °C to 300 °C (480 °F to 570 °F)

Medium-temperature technology is discussed first, because it is the most proven technology.

Medium-Temperature Solar Technology
The solar (parabolic) trough is the most common medium-temperature solar technology. Previous studies indicate that, for parabolic trough systems generating steam up to around 395 °C (745 °F), it is best to generate saturated high-pressure (HP) steam to mix with saturated steam generated in the HRSG HP drum. [1] A schematic of this process is depicted in Figure 4.

Integrating HP saturated steam into the HRSG and sending heated feedwater from the HRSG is common in integrated gasification combined cycle (IGCC) plants. A contractor familiar with IGCC integration issues can easily manage ISCC integration issues. The key factor to keep in mind is that in an ISCC plant, it is important
Figure 4. Medium-Temperature Solar ISCC Technology
to take feedwater supply to the solar boiler from the proper location in the steam cycle. The objective is to maximize solar efficiency by maximizing feedwater heating in the HRSG and minimizing feedwater heating in the solar field. For a parabolic trough plant similar to the SEGS plants, the HTF temperature leaving the solar boiler is approximately 290 C (550 F). Therefore, allowing for a reasonable approach temperature, the feedwater temperature should be approximately 260 C (500 F).
The most convenient place in the steam cycle from which to take feedwater is the HP feedwater pump discharge. On most modern CC systems, feedwater pumps take suction from the low-pressure (LP) drum. However, the typical LP drum pressure of approximately 5 bara (73 psia) in a three-pressure reheat system results in a feedwater temperature of only approximately 160 °C (320 °F) at pump discharge, which is too low for optimum results. Thus, it is beneficial instead to take feedwater after it has been further heated in the HRSG HP economizers; doing this maximizes the gas turbine exhaust energy used to heat the feedwater, thereby minimizing the solar field size needed to produce a given amount of solar steam. If feedwater is taken from the HP feedwater pump discharge at 160 °C (320 °F) rather than after an HP economizer at 260 °C (500 °F), the solar field would have to be approximately 30% larger to generate
enough solar energy to keep the amount of solar steam added to the HRSG the same as when using 260 C (500 F) feedwater. This represents a decrease in solar efficiency. Conversely, even though the change in net output drops 11% when the solar field size, hence the amount of solar energy added, is kept constant while the feedwater temperature is increased from 160 C (320 F) to 260 C (500 F), this configuration results in the highest solar efficiency. These effects of varying the feedwater supply temperature are summarized as follows.
Feedwater Temperature, °C (°F) | Net Solar Energy Added, MWth | Solar Steam Added, kg/h (lb/hr) | Change in Net Output, MWe | Solar Efficiency, %
160 (320) | 96  | 173,000 (382,000) | 37.6 | 39.2
160 (320) | 124 | 230,000 (507,000) | 48.1 | 38.8
260 (500) | 96  | 230,000 (507,000) | 42.3 | 44.3
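The solar efficiency column above is essentially the change in net plant output divided by the net solar energy added. The sketch below reproduces that arithmetic for the three cases; small differences from the tabulated values (a few tenths of a percent) presumably reflect rounding or additional corrections in the source analysis.

```python
# Solar efficiency ~ change in net output (MWe) / net solar energy added (MWth),
# applied to the three cases tabulated above.

cases = [
    ("160 C feedwater, 96 MWth",  96.0, 37.6),
    ("160 C feedwater, 124 MWth", 124.0, 48.1),
    ("260 C feedwater, 96 MWth",  96.0, 42.3),
]

for label, mwth_added, mwe_gain in cases:
    efficiency = 100.0 * mwe_gain / mwth_added
    print(f"{label:28s}: ~{efficiency:.1f} % solar-to-electric efficiency")
```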
Solar thermal input to an ISCC can reduce gas turbine fuel consumption, in turn reducing gas turbine power and exhaust energy. For the same plant net output with 100 MWth solar energy input, plant fuel consumption would be reduced by approximately 8%.
Figure 5. High-Temperature Solar ISCC Technology
In addition to solar troughs, some Fresnel lens systems fall into the medium-temperature category. However, design pressure limitations prevent their use in developing HP saturated steam. Therefore, integrating these systems would be more in line with a low-temperature system.

High-Temperature Solar Technology
Solar tower systems can generate superheated steam (up to 545 °C [1,020 °F]) at high pressure. These conditions allow solar-generated superheated steam to be admitted directly into the HP steam line to the steam turbine. In addition, steam can be reheated in the solar tower like it is in the HRSG. Thus, there is minimal impact on the HRSG because solar steam superheating and reheating are accomplished in the solar boiler. A schematic of this process is depicted in Figure 5. Similar to medium-temperature technology, taking feedwater supply from the optimum location in the steam cycle is important to maximize system efficiency. A high-temperature system could be used in medium- or low-temperature applications; however, it is doubtful that this would result in optimum application of solar tower technology.

Low-Temperature Solar Technology
Most Fresnel lens systems fall into the low-temperature solar technology category. These
systems generate saturated steam at up to 270 °C/55 bara (520 °F/800 psia), although recent technology has been enhanced to reach higher temperatures. This pressure is too low to allow integration into the steam cycle HP system. Therefore, two options exist:
• Generate saturated steam at approximately 30 bara (435 psia) and admit it to the cold reheat line.
• Generate steam at approximately 5 bara (73 psia) and admit it to the LP steam admission line.

A schematic of this process is depicted in Figure 6. Similar to other solar systems, taking feedwater supply from the optimum location in the steam cycle is important to maximize system efficiency. However, in low-temperature systems, there is less flexibility in feedwater takeoff point selection because the takeoff temperature must be below the saturation temperature of the steam being generated.

Economic Considerations
To be able to select the appropriate solar technology for a given site, a detailed economic analysis must be performed to assess capital and O&M costs, performance data, and operating scenarios.
Figure 6. Low-Temperature Solar Technology ISCC
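In support of the economic considerations noted above, a simplified levelized-cost-of-electricity (LCOE) comparison can be framed as in the sketch below. All inputs are placeholders chosen only to show the structure of the calculation; they are not values from an actual Bechtel evaluation.

```python
def lcoe(capital_cost, fixed_om_per_yr, fuel_cost_per_mwh, mwh_per_yr,
         discount_rate=0.08, life_years=25):
    """Very simplified LCOE: annualized capital plus O&M and fuel, per MWh."""
    crf = discount_rate * (1 + discount_rate) ** life_years / \
          ((1 + discount_rate) ** life_years - 1)      # capital recovery factor
    annual_cost = capital_cost * crf + fixed_om_per_yr + fuel_cost_per_mwh * mwh_per_yr
    return annual_cost / mwh_per_yr

# Placeholder inputs (USD); a real study would use project- and site-specific data.
print(f"CC only      : {lcoe(450e6, 12e6, 35.0, 3.6e6):.1f} $/MWh")
print(f"ISCC (hybrid): {lcoe(560e6, 15e6, 32.5, 3.7e6):.1f} $/MWh")
```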
Site data must also be examined to quantify solar facility energy contribution and to define CC performance characteristics. Hourly dry bulb temperature, relative humidity, and solar insolation data for various sites is available from the US National Renewable Energy Laboratory (NREL). This data can be used with software programs, such as the NREL Solar Advisor Model (SAM), to analyze a particular plant configuration. The representative graph shown in Figure 7 illustrates the results of an analysis of average hourly solar thermal energy production at a particular location versus time of day for January and August. To analyze a proposed plant configuration, performance characteristics must be defined, conceptual design established, and cycle performance model developed. Figure 8 shows performance characteristics for a 2 x 1 CC
configuration designed to accept 100 MWth of solar energy input in the form of HP saturated steam.

An advantage of solar energy is that it produces energy when most needed: during peak times of the day and the year. Therefore, time-of-delivery pricing, where energy payments vary with time of day, can greatly benefit a solar facility. For example, some PG&E power purchase agreements include time-of-delivery pricing that values energy produced during super-peak periods (from June through September between noon and 8 p.m., Monday through Friday) at rates almost double the rates at any other time. The pricing structure must be included in the economic analysis to assess the viability of any hybrid solar plant configuration.
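A sketch of how such time-of-delivery weighting enters the revenue side of the analysis is shown below. The price multipliers and energy split are hypothetical, loosely patterned on the super-peak description above rather than taken from any actual power purchase agreement.

```python
# Hypothetical time-of-delivery revenue weighting (all figures invented).

BASE_PRICE = 80.0  # $/MWh, assumed contract base energy price

periods = {
    # period        (MWh delivered, price multiplier)
    "super-peak": (45_000, 2.0),   # e.g., summer weekday afternoons
    "shoulder":   (60_000, 1.2),
    "off-peak":   (70_000, 0.8),
}

revenue = sum(mwh * mult * BASE_PRICE for mwh, mult in periods.values())
energy = sum(mwh for mwh, _ in periods.values())
print(f"Annual solar-attributable energy : {energy:,.0f} MWh")
print(f"Time-of-delivery revenue         : ${revenue:,.0f}")
print(f"Effective average price          : {revenue / energy:.1f} $/MWh")
```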
Figure 7. Solar Thermal Energy Production vs. Time of Day
Figure 8. Net Output vs. Ambient Temperature for Various Solar Inputs
Figure 9. Medium-Temperature Solar Integration with Rankine Cycle Plant
INTEGRATION OPTIONS WITH RANKINE CYCLES

Many issues associated with integrating solar technology with CC plants apply to integrating solar technology with Rankine cycle plants. Similar analyses must be performed to determine the best solar system for the specific plant site and design. However, there are also differences between integrating solar technology with Rankine cycle plants and with CC plants. A major difference is that all electrical power produced in the Rankine cycle plant is generated by burning fuel. Therefore, it can be advantageous to use solar energy to displace fossil fuel energy. In addition, because boiler efficiency typically increases slightly as boiler load is reduced, solar energy can be used to reduce boiler load to save fuel. Integration options with Rankine cycle plants are discussed only briefly, since integration applications to date have focused primarily on ISCC.

Medium-Temperature Solar Technology
A typical subcritical Rankine cycle power plant has turbine throttle steam conditions of 166 bara (2,400 psia) and 538 °C (1,000 °F). Similar to its application in a CC plant, medium-temperature solar technology can be used to generate saturated or slightly superheated steam
for injection upstream of boiler superheater sections. However, integrating solar steam into the boiler proper is a more complex proposition than in a CC plant because of the higher gas temperatures and the need to control fuel firing. Several options involving both water heating and steam generation in the solar field have been examined. [2] These options address using steam or heating feedwater to displace turbine extraction steam to feedwater heaters. A schematic of this process is shown in Figure 9. Reducing or eliminating extraction steam to feedwater heaters appears to be the most practical application for medium-temperature solar integration because it avoids complex boiler integration issues.

High-Temperature Solar Technology
Solar tower systems can be used to generate superheated steam for injection into the turbine main steam line. The same amount of cold reheat steam can be extracted and reheated in the solar field, minimizing integration with the Rankine plant boiler. A schematic of this process is shown in Figure 10.

Low-Temperature Solar Technology
Options for integrating low-temperature solar technology are limited to generating steam or heating feedwater to reduce turbine extraction steam to feedwater heaters.
Figure 10. High-Temperature Solar Integration with Rankine Cycle Plant
CONTROLS AND TRANSIENT BEHAVIOR

A contractor experienced in IGCC or cogeneration plant design can easily manage the integration and control issues of hybrid plants integrating a solar power source. However, IGCC and cogeneration plants do not experience the solar-sourced steam supply variability associated with solar technology integration. Therefore, when dealing with solar hybrid configurations, it is important to assess the impact of steam supply changes on the behavior of the conventional generation facility.

Total system transient behavior, including solar steam source and power plant, should be modeled early in the plant design stage. Complex issues associated with proper transient representation by equipment and controls should be addressed using computer simulation programs. Finally, the complete system should be optimized based on operational and cost considerations. The goal is to create an integrated system capable of predicting steam temperature and pressure variations during steady-state and representative transient conditions.
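By way of illustration only, the kind of representative transient referred to above can be prototyped before a full simulation model is built. In the sketch below, a cloud-induced loss of solar steam is approximated with a first-order lag; the time constant and flow values are invented and are not drawn from any project model.

```python
import math

# Toy first-order response of solar steam flow to a passing cloud, illustrating
# the sort of disturbance the integrated plant model must accommodate.
# All parameters are invented for illustration.

TAU_S = 120.0                            # assumed solar-boiler thermal lag, seconds
FULL_FLOW = 230.0                        # t/h of solar steam at full sun (illustrative)
CLOUD_START, CLOUD_END = 300.0, 900.0    # cloud cover window, seconds

def solar_steam(t: float) -> float:
    """Solar steam flow with first-order decay during the cloud and recovery after it."""
    if t < CLOUD_START:
        return FULL_FLOW
    if t < CLOUD_END:
        return FULL_FLOW * math.exp(-(t - CLOUD_START) / TAU_S)
    flow_at_end = FULL_FLOW * math.exp(-(CLOUD_END - CLOUD_START) / TAU_S)
    return FULL_FLOW - (FULL_FLOW - flow_at_end) * math.exp(-(t - CLOUD_END) / TAU_S)

for t in range(0, 1501, 300):
    print(f"t = {t:4d} s -> solar steam ~ {solar_steam(t):6.1f} t/h")
```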
CONCLUSIONS

Renewable energy sources continue to add to the electricity supply as more countries worldwide mandate that a portion of new generation must be from renewable energy. In areas that receive high levels of sunlight, solar technology is a viable option.

The solar trough, linear Fresnel lens, and solar tower technologies most widely used to concentrate solar thermal power are evolving and improving. Solar trough is considered the most proven CSP technology and has been implemented in the SEGS plants in California, as well as in other areas of the world.

Regardless of the option selected to develop a hybrid solar and conventional fossil plant, determining the optimum solar field is a site-specific task that must consider the grid requirements and operational profile of the steam cycle components at night or during periods when the solar energy is not available. A carefully planned and executed hybrid plant, such as an ISCC, that integrates CST energy with existing fossil energy sources is a winning combination for both the solar field and the power plant, resulting in:
• Higher CC system efficiency
• Smaller CC plant carbon footprint
• Larger renewable energy portion of new generation
• Minimized effect of the intermittent nature of solar energy supply

It is expected that the number of ISCC plants will continue to grow worldwide. As this happens, it is likely that the installed solar field price will decrease through economies of scale and increased manufacturing and installation productivity.
REFERENCES
[1] B. Kelly, U. Herrmann, and M.J. Hale, "Optimization Studies for Integrated Solar Combined Cycle Systems," Proceedings of Solar Forum 2001, Solar Energy: The Power to Choose, Washington, DC, April 21-25, 2001, http://www.p2pays.org/ref/22/21040.pdf.
[2] G. Morin, H. Lerchenmüller, M. Mertins, M. Ewert, M. Fruth, S. Bockamp, T. Griestop, and A. Häberle, "Plug-in Strategy for Market Introduction of Fresnel-Collectors," 12th SolarPACES International Symposium, Oaxaca, Mexico, October 6-8, 2004, http://www.solarec-egypt.com/resources/Solarpaces_Fresnel_Market_Introduction_final_pages_2004-07.pdf.
BIOGRAPHIES
Justin Zachary, PhD, assistant manager of technology for Bechtel Power Corporation, oversees the technical assessment of major equipment used in Bechtels power plants worldwide. He is engaged in a number of key activities, including evaluation of integrated gasification combined cycle power island technologies; participation in Bechtels CO2 capture and sequestration studies; and application of other advanced power generation technologies, including renewables. Justin was recently named a Bechtel Fellow in recognition of his leadership and development of Bechtels Performance Test Group and the key technical support he has provided as a widely respected international specialist in turbo machinery. Justin has more than 31 years of experience with electric power generation technologies, particularly those involving the thermal design and testing of gas and steam turbines. He has special expertise in gas turbine performance, combustion, and emissions for simple and combined cycle plants worldwide. Before coming to Bechtel, he designed, engineered, and tested steam and gas turbine machinery while employed with Siemens Power Corporation and General Electric Company. Drawing on his expertise as one of the foremost specialists in turbo machinery, he has authored more than 72 technical papers on this and related topics. He also owns patents in combustion control and advanced thermodynamic cycles. In addition to recently being named a Bechtel Fellow, Justin is an ASME Fellow and a member of a number of ASME Performance Test Code committees. Justin holds a PhD in Thermodynamics and Fluid Mechanics from Western University in Alberta, Canada. His MS in Thermal and Fluid Dynamics is from Tel-Aviv University, and his BS in Mechanical Engineering is from Technion Israel Institute of Technology, Haifa, both in Israel. Joon Park is a financial analyst with Bechtel Enterprises Holdings, Inc. Since joining Bechtel, he has contributed to a variety of projects as a financial analyst; built financial models for power and civil projects; analyzed the economics of various fossil, renewable, and nuclear power technologies; and conducted US power market research. Prior to working at Bechtel, Joon was a system design engineer for combined cycle power plant projects, where his duties included preparing heat balance and cycle optimization studies and technically evaluating major equipment. He was also a mechanical engineer for pipeline, refinery, and petrochemical plant projects overseas. Joon holds an MBA from the University of Chicago Booth School of Business, Chicago, Illinois; an MS in Mechanical Engineering from Seoul National University, Korea; and a BS in Mechanical Engineering from Konkuk University, also in Seoul, Korea. He is a registered representative of the Financial Industry Regulatory Authority (FINRA).
David Ugolini is a senior principal engineer with more than 32 years of mechanical and cycle technology engineering experience on a variety of nuclear and fossilfueled power generation plants. He works in the Project Development Group as supervisor of the Cycle Performance Group and is responsible for developing conceptual designs and heat balances for Bechtels power projects worldwide. Dave also supervises efforts related to plant performance testing. Dave began his engineering career by joining Commonwealth Edison Company in 1977 as an engineer at the Zion nuclear power plant. He joined Bechtel in 1980 in the Los Angeles office as an engineer, working first on the San Onofre Nuclear Generating Station and then on the Skagit/Hanford nuclear project. Dave later transferred to the San Francisco office and worked as a mechanical engineer on several projects, including the Avon cogeneration project, the Carrisa Plains solar central receiver project, and two combined cycle cogeneration projects Gilroy Foods and American 1 . In late 1989, Dave moved to the Gaithersburg, Maryland, office and became supervisor of the Fossil Technology Groups Turbine Technology Group, where he directed activities related to developing technical specifications and bid evaluations for gas turbines, heat recovery steam generators, and steam turbines for combined cycle and Rankine cycle power plants. When this group was merged with the Cycle Performance Group, he became deputy supervisor and eventually supervisor. Dave is actively involved in ASME Performance Test Code committees PTC 52 (solar power plant testing) and PTC 6 (steam turbine testing). Dave received his BS in Thermal and Environmental Engineering from Southern Illinois University, Carbondale.
MANAGING THE QUALITY OF STRUCTURAL STEEL BUILDING INFORMATION MODELING
Martin Reifschneider (mreifsch@bechtel.com) and Kristin Santamont (kmsantam@bechtel.com)

Issue Date: December 2009

Abstract: Managing the quality of building information modeling (BIM) is essential to ensuring the effective and efficient use of data for the engineering, procurement, and construction (EPC) process. For structural steel, BIM uses both graphical and non-graphical data. Previously, only geometric data such as size, shape, and orientation of a structural member could be viewed graphically. However, current BIM tools allow non-graphical information (such as material grade, coating requirements, and shipping and erection status) to be easily visualized and reviewed in a three-dimensional (3D) model through the use of color, transparency, or other representation. This ability to visualize and verify non-graphical data is vital because this data often significantly affects the cost and control of delivering a project. Thus, ensuring the accuracy and validity of the data provides confidence in the reliability of BIM as part of the EPC process. By using a database that is independent of the graphical model, along with associated validation rules, the data behind the structural graphics can be validated and corrected for downstream processes, reports, and other software applications.

Keywords: building information modeling (BIM); database; engineering, procurement, and construction (EPC); quality; structural steel; Tekla model
BACKGROUND

Less than 10 years ago, Bechtel, like most engineering firms in the process and power generation industry, was using a structural steel design process in which the results of engineering analysis and design were presented in a set of structural framing drawings. Generally, these two-dimensional (2D) drawings were extracted from an engineering three-dimensional (3D) modeling tool; otherwise, they were manually drawn. Subsequently, these engineering drawings served as the contract documents with the structural steel fabricator, describing the detailed configuration of the structure. The fabricator's scope typically included tracking and identifying the raw material supplied from the steel mill, arranging for the preparation of connection design calculations, creating a detailing model used to extract shop fabrication drawings, creating computer numerical control (CNC) data used to control the fabrication process, and fabricating and delivering the final assemblies to the project site.

In the late 1990s, recognition of a forthcoming decline in the availability of US structural steel led Bechtel to evaluate overseas fabrication alternatives. This evaluation included an assessment of overseas fabricators' connection design and detailing capabilities, which led to the realization that not all fabricators could satisfactorily offer the services expected. Seeing a need to change its traditional work process to position the company for the use of other countries' steel and suppliers, Bechtel developed an in-house detailing capability and changed the design engineering deliverable. Specifically, Bechtel changed its 3D structural modeling philosophy from one in which the deliverable was a set of engineering drawings to one that allowed the engineer to produce a fully connected detailing model to be used directly for shop and erection drawing extraction. This change has increased the company's flexibility when selecting a structural steel fabricator because it is no longer bound to the traditional single fabricator. Rather, now that Bechtel delivers completed shop drawings, it is able to engage multiple fabricators based both in the United States and elsewhere, as project needs dictate.
NEW WORK PROCESSES
The change in the nature of Bechtel's design engineering deliverables has had several effects on the company's work processes, the most problematic of which has been the paradigm change whereby its vendors and internal customers now receive a completely detailed 3D model rather than the traditional set of drawings. The most significant impact, however, is the ability the company has gained to manage more building information within the structural model. Working solely in the 3D model environment provides the ability to develop and manage procurement, fabrication, and erection data, which brings both cost and schedule advantages to Bechtel projects.

The advantages of work process change have proven undeniable; yet, numerous challenges ensued at first. Initially, the project team needed training to work in a detailed building information modeling (BIM) environment. Though very experienced in developing and using a 3D plant model for collaboration among multiple disciplines, the team struggled with the regimen and the attention to minute detail that were necessary to deliver a fully connected structural steel model. It was no longer acceptable to model only the individual pieces in the correct location; instead, it was imperative to also ensure that all the associated data was correctly identified and added for each associated part or assembly.

The steel modeling tool used in the new work process provides the flexibility to create any number of user-defined attributes (UDAs) for non-graphical information, which enhances the geometric data contained in a 3D model. To define relevant data for the UDAs, Bechtel's structural engineering team first determined who the potential data users were, which existing processes they intended for the data to support, what data could best support those processes, and how it could do so. This data identification (see Table 1) and use determination was a gradual learning process for the team, as well as for Bechtel's procurement and erection partners.
ABBREVIATIONS, ACRONYMS, AND TERMS
2D – two-dimensional
3D – three-dimensional
4D – four-dimensional
API – application programming interface
ASTM – ASTM International (originally American Society for Testing and Materials)
BIM – building information modeling
CIP – cast in place
CNC – computer numerical control
CSM – Central Steel Management, a Bechtel software system whose primary objective is to improve the quality of steel engineering in Bechtel projects
EPC – engineering, procurement, and construction
ETL – extraction, transformation, and loading
prelim mark – preliminary mark, a numeric identifier attached as a UDA to a model component
UDA – user-defined attribute
Table 1. Model Data by Type

Owner/Purpose          | Type          | Number of Attributes | Examples
Model                  | Graphical     | 18 | Height, width, length, position coordinates
Model                  | Non-graphical | 25 | Member prefix, class, material grade, weight
Various/identification | Non-graphical | 23 | Identifications, bar code, shop drawing number, TEAMWorks¹ identification
Various/status         | Non-graphical | 22 | On hold, fabrication start date, ship date, erection complete
Engineering            | Non-graphical | 41 | Originator, checker, loads
Procurement            | Non-graphical | 11 | Purchase order number, fabricator, detailer, vehicle, bundle number
Construction           | Non-graphical | 12 | Construction sequence, leave-out, fieldwork, laydown

¹ TEAMWorks is Bechtel's proprietary corporate software used to track equipment and material and to report quantities.


Once it was determined what data was relevant to each end user, how the data would be merged into user processes, and what data format was desired, the next big challenge was to train a large and distributed workforce to properly enter the data into the model as prescribed. As with any database, its applications and reports wholly depend on the quality of the data; even a subtle variance in data entry can prevent a database application from recognizing the value. For example, if a data field is used to describe the member function as a beam and the expected entry is B, then entries such as b, BM, or Beam will not be recognized. This challenge of subtle variation in data entry led the team to seek a better way to manage model quality. To begin a quality management process, only a few data fields were selected as vital, and specific criteria were established for their control. As other UDAs were added to the BIM work process, the quality management process expanded to include many of them.
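The following minimal sketch illustrates the kind of entry normalization and validation this implies. The field name, accepted codes, and alias list are hypothetical examples, not the project's actual data dictionary.

# Illustrative sketch only: normalizing and validating a user-defined attribute
# (UDA) before it reaches downstream reports. The accepted codes and aliases
# below are assumptions made for the example.
MEMBER_FUNCTION_CODES = {"B", "C", "VB", "HB"}            # beam, column, vertical/horizontal brace
ALIASES = {"BM": "B", "BEAM": "B", "COL": "C", "COLUMN": "C"}

def normalize_member_function(raw: str) -> str:
    """Return the canonical code, or raise if the entry cannot be recognized."""
    value = (raw or "").strip().upper()                   # 'b ' -> 'B', 'Beam' -> 'BEAM'
    value = ALIASES.get(value, value)                     # map known variants to the canonical code
    if value not in MEMBER_FUNCTION_CODES:
        raise ValueError(f"Unrecognized member function entry: {raw!r}")
    return value

if __name__ == "__main__":
    for entry in ("B", "b", "BM", "Beam", "girt"):
        try:
            print(entry, "->", normalize_member_function(entry))
        except ValueError as err:
            print(entry, "-> REJECTED:", err)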

DATA CUSTOMERS

For Bechtel Power Corporation, an engineering, procurement, and construction (EPC) firm, the immediate customers for the engineering design products are internal procurement, construction, and project management organizations. From the purchase of raw material and development of detailed fabrication information to delivery planning, material staging, and erection status, data integration and quality are vital.

Over the course of many years, numerous data applications and reports that facilitate project performance have been developed. However, many of these applications require manual data input or manipulation of various electronic files imported from Bechtel's vendors. Having the ability to create and manage more data in the graphical structural model has led to less reliance on such manual input or data import. Examples of data in the steel design-to-delivery life cycle and the users/processes for which the model data is used are shown on the steel timeline in Figure 1.

Engineering

The Bechtel engineering organization makes extensive use of its own data. The quantity of steel modeled and progress relative to the project schedule are regularly monitored. To execute this type of monitoring, procedures were established to record the digital signatures of model originators and checkers in the model as a UDA. Modeling progress is evaluated by measuring the percentage complete of members originated, members checked, connections originated, and connections checked.

Fabricators

Early purchase of raw steel material from the mill or warehouse requires the identification of components to be purchased and their purchase definition (section profile, component length, and material grade). To track this raw material to the related final, fabricated, structural elements, a numeric identifier, commonly known as a preliminary mark (prelim mark), is attached as a UDA to the model component.

Figure 1. Structural Steel Data and Design-to-Delivery Life Cycle
(The timeline runs from start of design model through release for drawing extraction, release for fabrication, release for shipping, site delivery, erection start, and erection complete, and indicates the model attributes captured at each stage, from member origination and loads input through shop drawing data, fabrication and delivery status, and erection, bolting, and inspection completion.)
ABOM = advance bill of materials; CNC = computer numerical control; MTO = material take-off


The purpose of the prelim mark is twofold: first, to group material purchases of like definition, and second, to ultimately associate the raw material purchased with the structural component for which it is intended. To meet this second need, the prelim mark UDA is included on the shop drawing bill of materials. Each prelim mark is unique to the three components that constitute its definition; for example, Prelim Mark 10049 is defined as a W12x50 profile that is 4.75 meters long and fabricated from ASTM International (originally American Society for Testing and Materials) A992 material grade steel. However, multiple components typically share a prelim mark.

Erectors

Structural steel erection planning determines the desired sequence of construction. The erector typically identifies a region of the structure to build first and what regions are to follow. The sizes of these regions and their order of erection are governed by site configuration, crane capacities, crane access, and other project-specific conditions. During early construction planning, the sequence of construction is identified by a construction sequence number, which is stored in the building model for later use in processes such as drawing extraction, fabrication, delivery, material tracking, and staging.

The release of material for fabrication initiates a material tracking process that follows the material through fabrication, delivery, staging, erection, bolting, and inspection. An independent material tracking database (another component of project BIM) is used for this process. Data from this tracking database can also be returned to the 3D steel model to provide graphical status. Using the Bechtel-developed database tool TEAMWorks to track installation progress, erection performance can be monitored relative to the established project budget and schedule. Performance is measured by comparing reported and predicted progress using historical installation rates for different types of model components (e.g., structural, miscellaneous, modular). These components are identified in the model by an identifying code (cost code). Each assembly in the structural model is pre-assigned a cost code based on its constituent makeup and arrangement.

Detailers (Shop Drawing Extractors)

Shop and erection drawings are released to the fabricator in a sequence that facilitates the delivery and erection plan. To aid in quick recognition of components on a shop drawing, the detailer-assigned shop drawing number combines several pieces of non-graphical information. It identifies the fabricator, detailer, construction sequence, component type (beam, column, brace, etc.), and piece number, as shown in Figure 2. A similar combination of data is used to create bar codes. By sorting on the shop drawing number containing these identifying details, all components needed (within a sequence) during erection can be quickly identified in the document management system.

Procurement

Structural steel raw material and its fabrication for large industrial facilities are typically purchased based on a unit rate (cost/ton of steel). A prescribed set of component definitions and estimated overall quantities are presented to the fabricator for pricing. In response, the fabricator provides its unit rate to reflect material costs and fabrication complexity for each type of component. These definitions are often also used to determine the costs of connection design, connection modeling, and detailing (shop and erection drawing extraction). A pay category number is assigned to each model assembly to match its unit rate definition (examples are provided in Table 2); the pay category is determined as the connected structural model is prepared. Upon pay category assignment, the cost of fabrication can be computed immediately by applying the associated unit rates to the summarized quantities.
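To illustrate the unit-rate cost rollup just described, the sketch below sums modeled tonnage by pay category and applies a rate table. The assembly marks, tonnages, and dollar rates are invented placeholders, not project pricing or Table 2 data.

# Minimal sketch of the quantity-times-unit-rate fabrication cost rollup.
# Pay category labels echo Table 2; all numeric values are hypothetical.
assemblies = [
    {"mark": "B1021", "pay_category": "2.04", "weight_tons": 1.8},
    {"mark": "C2010", "pay_category": "3.09", "weight_tons": 4.2},
    {"mark": "HB301", "pay_category": "4.09", "weight_tons": 0.6},
]
unit_rates = {"2.04": 2400.0, "3.09": 2900.0, "4.09": 3300.0}   # $/ton, assumed for the example

def fabrication_cost(assemblies, unit_rates):
    """Sum tonnage per pay category, then apply each category's unit rate."""
    totals = {}
    for a in assemblies:
        totals[a["pay_category"]] = totals.get(a["pay_category"], 0.0) + a["weight_tons"]
    return sum(tons * unit_rates[cat] for cat, tons in totals.items())

print(f"Estimated fabrication cost: ${fabrication_cost(assemblies, unit_rates):,.2f}")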

Figure 2. Shop Drawing Numbering
(The example number 25316-011-V1-A-SS01-117-B1021-001 combines the project number, facility code, unit number, fabricator ID, detailer ID, construction sequence number, assembly prefix, piece mark, and submittal number.)
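Conceptually, the shop drawing number in Figure 2 is a concatenation of model attributes. The sketch below shows one hypothetical way such an identifier could be assembled; the segment order, field names, and separators are illustrative assumptions, not the actual Bechtel numbering standard.

# Illustrative only: building a shop drawing number from non-graphical model
# attributes, in the spirit of Figure 2. Field-to-segment mapping is assumed.
def shop_drawing_number(attrs: dict) -> str:
    segments = [
        attrs["project"],            # e.g., "25316"
        attrs["facility_code"],      # e.g., "011"
        attrs["unit"],               # e.g., "V1"
        attrs["fabricator_id"],      # e.g., "A"
        attrs["detailer_id"],        # e.g., "SS01"
        attrs["construction_seq"],   # e.g., "117"
        attrs["assembly_prefix"] + attrs["piece_mark"],   # e.g., "B" + "1021"
        attrs["submittal"],          # e.g., "001"
    ]
    return "-".join(segments)

beam = {
    "project": "25316", "facility_code": "011", "unit": "V1",
    "fabricator_id": "A", "detailer_id": "SS01", "construction_seq": "117",
    "assembly_prefix": "B", "piece_mark": "1021", "submittal": "001",
}
print(shop_drawing_number(beam))
# Within one project and unit, sorting a drawing list on strings like this
# groups pieces by construction sequence and component type, as described above.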


Table 2. Sample of Pay Category Definitions

Pay Category   Unit   Description
2.04           ton    Beam, rolled (W), 81–150 lb/ft
3.09           ton    Column, fabricated, >257 lb/ft, Q390B material
4.09           ton    Horizontal brace, rolled (pipe, HSS round), round sections, 6 in. and smaller
5.01           ton    Vertical brace, rolled (WT and L), <40 lb/ft
6.02           ton    Truss, field-bolted, Grade 65 material (65 ksi)
6.03           ton    Truss, shop-assembled
8.01           ton    Grating, 1-1/2 in. x 3/16 in. serrated, galvanized
8.05           ft²    Metal decking, 3 in. composite floor deck, 18 gauge, including accessories
12.03          ton    Shop assembly (W), walkway panel with grating shop-installed
12.05          ton    Shop assembly, stair tower partially assembled with handrail loose

Project Management

Design and erection quantities, as well as fabrication and erection costs, are means of assessing project completion and cost. Regular tracking and reporting of these indicators and comparison with estimated budgets and planned schedules provide project management with continual early assessments of trends and progress. The non-graphical attributes associated with modeled components can reflect the status of design, fabrication, or erection. For example, recording the engineer's checking of components in the model allows a report to be generated comparing components checked with total components scheduled for design release, thus providing an indication of model checking completeness. Similar progress comparisons can be generated by recording fabrication status dates and erection completion status dates.

THE STEEL (STRUCTURES)

The structural steel pieces originally considered by the structural engineering team when it was developing the data needs and work processes were the components required for the large, complex boiler-support structures typical in coal-fired power plants. Such boiler-support structures are on the order of 61 meters (200 feet) by 65 meters (215 feet) in plan and 91 meters (300 feet) in height and consist of 10,000 metric tons (11,000 tons) to 12,000 metric tons (13,200 tons) of structural and miscellaneous steel. This quantity roughly equates to more than 10,000 individual structural framing assemblies plus more than twice as many miscellaneous steel assemblies. The complexity of these structures, and the work processes needed to design, fabricate, and erect them, served to define many of the data management needs. The same data and work processes exist for smaller and less complex structures as well.

MODEL DATA

The building model is an ideal tool to store information used to manage material procurement, fabrication, and erection processes. The model attributes provide real-time access to information related to project costs associated with downstream processes such as fabrication and erection. The data is useful, however, only if it is reliable, consistent, and accurate. The accuracy of the model data is difficult to manage: it can be only partially controlled through procedures for software input control and must otherwise be verified through manual checking of the model. Each control has limitations on how effectively it can ensure that all pertinent data is entered accurately.

Data Quality

Data quality is determined by comparing stored data with expected values. Initially, guides were prepared to define the format and content expectations for data entered into the model, and procedures were developed for manual model checking. Nevertheless, model data errors remained an obstacle to accurate data usage. To address this problem, Bechtel developed a model-independent database to shadow, or replicate, much of the graphical and non-graphical model data, storing and validating selected attributes. This shadow database, part of Bechtel's Central Steel Management (CSM) software system, is an Oracle database accessed by a Microsoft Windows-based user interface. Its purpose is to copy key attributes from the graphical model and then evaluate data accuracy and inform the modeler of any needed corrections.



Table 3. Sample Data Validation

GRAPHICAL
Data Process          Type      Description
Part validation       ERROR     Section profile not available for the project
Part validation       ERROR     Invalid material name
Assembly validation   ERROR     Invalid main part
Assembly validation   WARNING   Weight zero or null
Part validation       IGNORED   Reference material, not considered part of active model
Part validation       SUCCESS   Part accepted in CSM database

NON-GRAPHICAL
Part validation       ERROR     Invalid class code
Part validation       ERROR     Invalid model check date
Assembly validation   ERROR     Invalid assembly identification for part
Assembly validation   WARNING   Invalid prelim mark, incompatible material

Data Quality Evaluation

Data validation criteria are categorized by type. A type constituting a condition in which the data is not appropriate is identified as an error; a type that identifies reference material is listed as ignored; another type that identifies a model change, but not necessarily invalid data, is listed as a warning; and, finally, a type that indicates that data is validated is identified as a success. Table 3 provides examples of graphical and non-graphical data validation criteria built into CSM. Currently, more than 100 independent model data format or content validations are performed in every extraction, transformation, and loading (ETL) process.

The model checking/data validation process is performed daily for active models; thus, regular model quality feedback is sent directly to the modelers in a set of reports, facilitating prompt resolution. Feedback, via e-mail, includes copies of the error (rejected data) report and the validated data report. The reports are provided in a format that permits direct selection of parts or assemblies in the model for data correction. This regular modeling feedback resulted in a noticeable improvement in initial model quality over a short period of time.

The data stored in the CSM production tables is replaced by each ETL process; thus, the data always reflects the status of the model at the time of extraction. New data acquired is compared with what was previously stored, and all attributes for each transaction are recorded in the log tables as a new, modified, or deleted action.

Data Extraction

A standard ETL process is carried out to automatically collect data from the model with no user intervention. Extraction is done via an in-house program scheduled to run daily. An application programming interface (API) provided by the 3D modeling tool (Tekla Structures) allows the program to traverse and read the entire model data. Once extracted, the data is transferred and loaded to stage tables hosted by an Oracle database (Figure 3). The stage tables provide a mirror copy of the data found in the 3D model. Data from the stage tables is validated against a set of acceptance rules, and new or modified data that passes all validation checks is loaded into production tables. Acceptance rules, as well as coded functions, are specified in lookup tables. A set of log tables keeps a record of all changes made to production tables. After each ETL process, automated model data quality status and summary reports are created and transmitted to the project team for action.

The ETL process is performed against one or several models, as required for a given project, and each ETL run is identified by a unique session number. The process is identical for every model and is managed by a customizable tool that allows the user to specify the set of models to run, the e-mail notice requirements, and the time and frequency of the ETL runs.
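The extract-validate-load cycle described above can be summarized in a few lines of code. The sketch below is deliberately simplified; the rule checks, table layout, and data are placeholders, whereas the production system reads the model through the Tekla Structures API and loads Oracle stage and production tables.

# Greatly simplified ETL sketch: extract part records, validate each against
# lookup rules, load valid rows, and report rejects. All names and rules here
# are assumptions made for the illustration.
import sqlite3

VALID_GRADES = {"A992", "A36", "A500"}      # stand-in for a lookup table

def validate(part: dict) -> list[str]:
    """Return validation messages; an empty list means SUCCESS."""
    messages = []
    if not part.get("profile"):
        messages.append("ERROR: section profile not available for the project")
    if part.get("material") not in VALID_GRADES:
        messages.append("ERROR: invalid material name")
    if not part.get("weight"):
        messages.append("WARNING: weight zero or null")
    return messages

def run_etl(parts: list[dict]) -> None:
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE production (mark TEXT, profile TEXT, material TEXT, weight REAL)")
    for part in parts:
        messages = validate(part)
        if any(m.startswith("ERROR") for m in messages):
            print(f"Rejected {part['mark']}: {messages}")    # feeds the rejected-data report
        else:
            db.execute("INSERT INTO production VALUES (?, ?, ?, ?)",
                       (part["mark"], part["profile"], part["material"], part.get("weight")))
    print("Loaded rows:", db.execute("SELECT COUNT(*) FROM production").fetchone()[0])

run_etl([
    {"mark": "B1021", "profile": "W12x50", "material": "A992", "weight": 540.0},
    {"mark": "B1022", "profile": "",       "material": "A992", "weight": 480.0},
])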


THE CSM SOFTWARE SYSTEM

With shadow data available outside the modeling environment, reporting and other applications are able to access the information without requiring direct model interface. To ensure data protection and thus provide assurance that the data indeed represents the model information, the CSM software system provides three levels of user access: user, administrator, and super user, each providing different degrees of functionality.

Tools and Procedures

The CSM user interface provides access to several tools and automated procedures. Administrative tools are used to insert, update, and delete records, actions that modify the many lookup tables. The lookup tables, used to validate attribute content in the model, fall into three categories: applicable to the enterprise (all models/all projects), applicable to all models on a given project, and applicable to an individual model only.

Project model file locations and other related information needed to perform the ETL process are also stored in lookup tables accessed by a CSM remote server application, allowing a CSM super user to initiate individual or multiple model ETL processes on demand or to a fixed schedule. Included in the scheduler is the capability to e-mail specified automated reports to specific customers.

The CSM interface also provides utilities to assign prelim mark numbers, pay category numbers, cost codes, and TEAMWorks identification numbers (all data previously determined via manual spreadsheet calculations) at the appropriate time. Prelim mark numbers are assigned by grouping modeled components that have similar purchase definitions. A UDA model import file is created, and the prelim mark definition is recorded both on the modeled component and in a CSM table used to validate the prelim mark assignments in subsequent ETL processes.
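As a rough illustration of the grouping step, the sketch below buckets components by their purchase definition (section profile, material grade, and a rounded-up stock length) and issues a prelim mark per bucket. The attribute names, length-rounding rule, and numbering scheme are illustrative assumptions, not the CSM algorithm.

# Sketch of assigning prelim marks by purchase definition. All rules and
# values are hypothetical examples.
from collections import defaultdict
from itertools import count
import math

def purchase_definition(component: dict) -> tuple:
    # Round length up to the next 0.5 m so like pieces share one stock length (assumed rule).
    stock_length = math.ceil(component["length_m"] / 0.5) * 0.5
    return (component["profile"], component["grade"], stock_length)

def assign_prelim_marks(components: list[dict], start: int = 10001) -> dict:
    marks: dict[tuple, int] = {}
    groups = defaultdict(list)
    next_mark = count(start)
    for c in components:
        definition = purchase_definition(c)
        if definition not in marks:
            marks[definition] = next(next_mark)
        c["prelim_mark"] = marks[definition]       # would be written back to the model as a UDA
        groups[marks[definition]].append(c["id"])
    return dict(groups)

components = [
    {"id": "B1021", "profile": "W12x50", "grade": "A992", "length_m": 4.7},
    {"id": "B1022", "profile": "W12x50", "grade": "A992", "length_m": 4.6},
    {"id": "C2010", "profile": "W14x90", "grade": "A992", "length_m": 9.1},
]
print(assign_prelim_marks(components))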


Figure 3. CSM Data Acquisition Process
(Model data is extracted from the graphic model by the ETL process as initiated by the scheduler and loaded to stage tables; part and assembly data is checked against lookup tables, valid data is passed to the production tables, and data errors are issued in rejected-data reports alongside validated-data reports. An external database interface returns prelim marks, pay categories, and cost codes to the model. The CSM production tables reflect only valid data.)


The prelim mark validation compares the purchase definition retained in CSM with the active model attributes.

Because a structural design evolves, changes may occur after the original material order. To manage the use of the ordered material, a utility is used to evaluate and reassign purchased material that is no longer applicable to the component for which it was ordered. For example, if a design change makes it necessary to increase a framing member size from what was purchased, then the original purchased member, which is no longer appropriate, is placed into surplus material status. Before any subsequent material purchase (which usually occurs sometime before each design release), the surplus material is evaluated for its applicability to any new framing member not previously purchased. The utility searches model data for all sections that meet the surplus material's original purchase specification (section profile, material grade, and length limit). If a match is found, the prelim mark for the previously purchased material is reassigned to the new member. In this way, the tool helps mitigate the amount of surplus material left at the end of a project.

Pay category numbers are determined before shop drawing creation, using a procedure whereby all model parts constituting each structural assembly are collected and the assembly main part attributes are evaluated against a series of criteria that define the pay category. During the assignment process, all assembly parts are validated to ensure that the correct part is defined as the main part, that a pay category definition matches the assembly modeled, and that only one pay category definition is valid for the assembly.

The pay category number for any member or assembly is determined using model data alone, eliminating the subjectivity of manually assigning a unit price to a given steel assembly. As a result, early computation of fabrication costs yields an accurate prediction of the fabricator invoices, which, in turn, minimizes disputes at contract closeout.

Pay category assignment is performed using a set of rules that evaluate the model data. These rules return a yes or no value or a discrete result and thus choose the category. For example, the following is the qualifying rule statement for Pay Category 4.09, as shown in Table 2:

(IS_HOR_BRACE=Y) AND ((IS_TUBE=Y) OR (IS_PIPE=Y)) AND (IS_ROLLED=Y) AND (D<=6)

The first part of the rule evaluates whether the assembly is a horizontal brace, as defined by a set of data characteristics. If the assembly meets the prescribed definition, the answer is yes and the next qualifier is evaluated to determine if the profile is either a tube or a pipe. If the answer is yes, the rule next determines if the shape is rolled. If yes, then the final check is for the profile depth. If the largest depth in the profile is less than or equal to 6 inches, then the assembly falls into Pay Category 4.09. If any one of the above answers is no, then Pay Category 4.09 is not appropriate and the process continues until the member data matches another pay category rule.
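The qualifying rule above is simply a boolean predicate over assembly attributes. The sketch below shows one way such rules could be evaluated in code; the 4.09 predicate mirrors the rule statement quoted above, while the second entry and all attribute values are abbreviated, hypothetical examples rather than the actual CSM rule table.

# Minimal sketch of rule-based pay category assignment (illustrative only).
RULES = [
    ("4.09", lambda a: a["is_hor_brace"] and (a["is_tube"] or a["is_pipe"])
                       and a["is_rolled"] and a["depth_in"] <= 6),
    ("5.01", lambda a: a["is_vert_brace"] and a["is_rolled"] and a["weight_plf"] < 40),
]

def assign_pay_category(assembly: dict):
    """Return the first pay category whose rule the assembly satisfies, else None."""
    for category, rule in RULES:
        if rule(assembly):
            return category
    return None    # in the full table, evaluation would continue through more rules

brace = {"is_hor_brace": True, "is_vert_brace": False, "is_tube": False,
         "is_pipe": True, "is_rolled": True, "depth_in": 4.5, "weight_plf": 19.0}
print(assign_pay_category(brace))    # -> "4.09"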


Figure 4. Graphic Model Data Sharing
(The CSM database collects model data, obtains released drawing information from the project document database, and exchanges selected model data and tracking data with the material and erection tracking database.)


Other pay category rules include checks on material grade, coating system, weight, and assembly type. Upon satisfactory rule evaluation, a model UDA import file is created with the pay category assignments and then imported back into the model. Subsequent ETL processes validate model assemblies against the previously assigned pay categories.

TEAMWorks identification numbers are assigned using another utility that combines several attributes to create a unique identifier. This number, required for material tracking, is assigned and loaded back into the model on the release of a shop drawing to the fabricator. CSM uses the project document database to identify which drawings are released for fabrication (see Figure 4). Subsequently, CSM assembles a table containing a list of each TEAMWorks identification number and other relevant model data, such as assembly weight, shop drawing number, and construction sequence, for export to the TEAMWorks database application.

CSM Reports

In addition to data quality reporting as described above, each model ETL process also initiates a model status report. This report summarizes quantities by structural category (structural, miscellaneous, connection material, and construction aids) and summarizes the model completion status for the structural category (model checked, connection originated, and connection checked).

This and other selected progress and quality reports are delivered to an e-mail list. Within the CSM application interface, numerous other reports are available to use in checking the model data quality and completeness. Each report is created using a report template and is displayed via a viewer interface that allows the user to view summarized data as well as print the report using numerous desired formats.

A significant benefit of collecting all model data in an external database is the ability to prepare and standardize reports without the need for direct access to each model. This feature gives project management the capability to analyze structural steel data even if they are not familiar with working in the modeling environment. Capturing all model data in a central database provides the added capability to collect, analyze, and compare data across multiple models. This capability reduces the effort associated with collecting and evaluating the entire project status, considering that each power plant project typically consists of a minimum of 8 to 12 models. The number of project models generally doubles when a copy of each one is created for its release to the drawing extraction team. Therefore, the CSM database monitors two models for every portion of the structure: one identified as the engineering model and the other identified as the drawing model (see Figure 5).


Figure 5. Multiple Model Tracking and Comparison
(Model releases and updates flow from the engineering model to the drawing model; each model is extracted by its own model data ETL into production tables, and model comparisons between the two sets of production tables generate a differences report.)



The engineering model is controlled by the design team, and the drawing model is controlled by the detailer or drawing extraction team. These models differ by ownership and by the data added by the detailer. The multiple models complicate management of new model information. The need to use two models arises from tight project schedules and the often different locations of the engineering and detailing teams. Additionally, it is common for the engineering team to continue to need access to the engineering model to make additions and/or changes as final equipment information becomes available. These modifications often occur in parallel with the shop drawing extraction process and thus are tracked by a separate release phase, another UDA. The detailer then incorporates the new release phase into the drawing model to maintain consistency between the two models. Because the CSM database acquires model data from both the engineering and the drawing models, a set of reports is provided to quickly and concisely compare data between them, thus validating common data and highlighting any differences, even before shop drawing extraction.

Interfaces with Other Process Control Tools

Within the Bechtel EPC organization, several material and erection tracking databases exist to manage all procured and constructed commodities, including structural steel. Each database uses a unique identifier for each record. For structural steel, the identifier comprises several model attributes, much like the shop drawing number. The CSM application prepares and records the unique identifiers as part of the model, thus facilitating the linking of data tracked in the external databases with the graphical model components. This linking then offers the project the ability to view other non-graphical attributes in a graphical manner using the model.
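Returning to the engineering-model/drawing-model comparison described above, the sketch below compares a handful of attributes between the two data sets and lists the differences, in the spirit of the differences report shown in Figure 5. The attribute set and data are illustrative, not the CSM report definition.

# Sketch of an engineering-model vs. drawing-model differences check.
def compare_models(engineering: dict, drawing: dict, keys=("profile", "grade", "release_phase")):
    differences = []
    for mark in sorted(set(engineering) | set(drawing)):
        eng, dwg = engineering.get(mark), drawing.get(mark)
        if eng is None or dwg is None:
            differences.append((mark, "missing in " + ("drawing" if dwg is None else "engineering")))
            continue
        for key in keys:
            if eng.get(key) != dwg.get(key):
                differences.append((mark, f"{key}: {eng.get(key)} vs {dwg.get(key)}"))
    return differences

engineering = {"B1021": {"profile": "W12x50", "grade": "A992", "release_phase": "2"}}
drawing     = {"B1021": {"profile": "W12x58", "grade": "A992", "release_phase": "2"},
               "B1022": {"profile": "W10x33", "grade": "A992", "release_phase": "2"}}
for mark, issue in compare_models(engineering, drawing):
    print(mark, "-", issue)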

FUTURE DEVELOPMENT

Development plans for the structural steel BIM application include integration of the four-dimensional (4D), or schedule-tied, functionality offered by most modeling applications with external tools to enhance erection tracking and progress monitoring; the ability to track material information such as heat numbers from the mill; and direct links to fabricator process monitoring applications.

The development of a BIM application for cast-in-place (CIP) reinforced concrete construction work processes is imperative as the power industry proceeds into the design and construction of the next generation of nuclear plants. Managing the congestion of reinforcing steel, anchorages, penetrations, and embedments in CIP concrete demands the use of 3D modeling. The ability to track CIP-concrete-related data (placement breaks, cylinder break results, rebar heats, rebar bundles, admixtures used, placement weather conditions, embedded item tags, etc.) directly to a model would greatly enhance construction work packaging, progress monitoring, and configuration control.

CONCLUSIONS

A high-quality BIM application and process are essential to the effective and efficient use of data throughout the EPC process. From the purchase of structural steel raw material to the development of detailed fabrication information to delivery planning, material staging, and erection status, the integration of data and reliance on its quality are vital. Using a BIM database synchronized with the graphical model enables data to be evaluated, validated, and corrected, thus improving and ensuring its quality. Furthermore, by providing access to the data outside of the modeling environment, a BIM database allows standard processes and tools to be used to analyze and report reliable data to other essential project processes.

ACKNOWLEDGMENTS

The authors would like to acknowledge Peter Carrato, PhD, principal engineer and Bechtel Fellow, for his assistance and contributions to this paper.

TRADEMARKS

Microsoft and Windows are registered trademarks of Microsoft Corporation in the United States and/or other countries.

Oracle is a registered trademark of Oracle Corporation and/or its affiliates.

TEAMWorks is a trademark of Bechtel Corporation.

Tekla is either a registered trademark or a trademark of Tekla Corporation in the European Union, the United States, and other countries.


BIOGRAPHIES
Martin Reifschneider, a Bechtel engineering manager, has more than 31 years of experience in civil/structural engineering and work process automation. Currently serving as manager of the Central Steel Engineering Group, he is responsible for structural steel design work processes. Martin was the civil/structural/architectural chief engineer for the Power Global Business Unit for 5 years. In this role, his responsibilities included technical quality, staffing, and personnel development of nearly 200 civil/structural engineers and architects. Earlier, Martin was project engineer for a large fossil fuel project. He also served in leadership positions on combined cycle and nuclear projects. Martin has published and presented papers on building information modeling and on the performance of steel embedments in concrete. Martin has an MS and a BS in Civil Engineering from the University of Michigan, Ann Arbor, and is a licensed Professional Engineer in Michigan and Wisconsin. He is a Six Sigma Yellow Belt.

Kristin Santamont is the responsible civil engineer for the Ivanpah Solar Electric Generating Facility. Since joining Bechtel in 2004, she has worked on multiple power projects, with most of her experience focused on steel work processes, from design through construction. Kristin has performed detailed structural steel design and detailed connection design, worked extensively in the 3D building model environment, and recently helped manage and execute a 16,000-ton steel purchase order for the Sammis air quality control system project. She was also part of the Central Steel Engineering Group, collaborating to further improve the Bechtel steel work process as it relates to building information modeling and the Power EPC business. Kristin has an MS in Structural Engineering and a BS in Civil Engineering, both from the University of Illinois at Urbana-Champaign. She is a licensed Professional Civil Engineer in California.



NUCLEAR UPRATES ADD CRITICAL CAPACITY


Originally Issued: May 2009
Updated: December 2009

Abstract: Over the past 20 years, nuclear plant uprates have added substantial capacity, including more than 5,600 MW since 1998. This paper focuses on one power uprate category: extended power uprate (EPU). EPUs can increase a nuclear plant's output by as much as 20% but usually involve significant plant modifications. Owners of at least 60 nuclear units are expected to seek approval for EPUs in the near future. Of these, approximately 50 are pressurized water reactors (PWRs). While most of the early EPUs were performed on boiling water reactors (BWRs), the present interest in PWR upgrades is primarily due to advances in technology and improvements in available fuel. An EPU offers the economic benefit of increasing power output while avoiding long lead times for constructing new nuclear plants. Moreover, combining an EPU with a maintenance upgrade and/or license extension allows costs to be shared among programs. However, regardless of the benefits, an EPU is a major undertaking that requires a significant commitment of resources. The success of an EPU rests on the quality of the management team and its ability to develop an effective implementation plan and to schedule work efficiently.

Keywords: analytical margin, boiling water reactor (BWR), design margin, extended power uprate (EPU), License Amendment Request (LAR), margin management program, Nuclear Regulatory Commission (NRC), operating margin, pressurized water reactor (PWR), uprate

INTRODUCTION

New-generation nuclear plants may be having trouble getting started in the United States, but that does not mean that US nuclear capacity additions are at a standstill. In fact, the US's 104 operating nuclear units have added substantial new capacity in the form of reactor and plant uprates over the past 20 years. Power uprates alone have added more than 5,600 MW since 1998, the equivalent of five new nuclear plants.

The World Association of Nuclear Operators (WANO) released the power industry's 2008 report card in late March 2009, proclaiming that safety and operating performance remained top notch, with the year's average capability factor of 91.1%. Many observers have come to expect nothing less, since this is the ninth consecutive year that the US fleet's capability factor (a measure of a plant's online time) has exceeded 90%, continuing to mark nuclear power as the most reliable source of electricity in the US.

Not content to merely improve and maintain these outstanding operating statistics, the industry also embarked years ago on a path of upgrading US plants to produce additional power.

The US Nuclear Regulatory Commission (NRC), which is responsible for regulating all commercial nuclear power plants in the US, classifies power uprates into the following three categories: measurement uncertainty recapture, stretch power, and extended power.

Measurement Uncertainty Recapture Power Uprates

Measurement uncertainty recaptures (MURs) entail improvements to feedwater mass flow measurement technology, through use of ultrasonic flow metering, to significantly reduce the uncertainty in core calorimetric computations. The NRC has updated its regulations to permit licensing with a safety analysis uncertainty allowance consistent with that determined in these improved calculations. Lowering the uncertainty can result in uprates of up to 2%. However, MUR activity remains sporadic. Only Calvert Cliffs Units 1 and 2 have received the NRC's approval for a 1.4% reactor thermal uprate.

Eugene W. Thomas
ewthomas@bechtel.com


ABBREVIATIONS, ACRONYMS, AND TERMS


ACRS    Advisory Committee on Reactor Safety
BOP    balance of plant
BWR    boiling water reactor
EPRI    Electric Power Research Institute
EPU    extended power uprate
FAC    flow accelerated corrosion
HP    high pressure
INPO    Institute of Nuclear Power Operations
LAR    Licensing Amendment Report
LP    low pressure
MUR    measurement uncertainty recapture
NRC    (US) Nuclear Regulatory Commission
NSSS    nuclear steam supply system
PWR    pressurized water reactor
SPU    stretch power uprate
SSCs    systems, structures, and components
WANO    World Association of Nuclear Operators

Table 1. EPUs Approved by the US NRC

Plant             Nuclear Steam Supply System   NRC Approval   Uprate, %   Added MWt   Approximate MWe Added
Monticello        BWR                           9/16/98        6.3         105         35
Hatch 1           BWR                           10/22/98       8           205         68
Hatch 2           BWR                           10/22/98       8           205         68
Duane Arnold      BWR                           11/6/01        15.3        248         83
Dresden 2         BWR                           12/21/01       17          430         143
Dresden 3         BWR                           12/21/01       17          430         143
Quad Cities 1     BWR                           12/21/01       17.8        446         149
Quad Cities 2     BWR                           12/21/01       17.8        446         149
Clinton           BWR                           4/5/02         20          579         193
ANO-2             PWR                           4/24/02        7.5         211         70
Brunswick 2       BWR                           5/31/02        15          365         122
Brunswick 1       BWR                           5/31/03        15          365         122
Waterford 3       PWR                           4/15/05        8           275         92
Vermont Yankee    BWR                           3/2/06         20          319         106
Ginna             PWR                           7/11/06        16.8        255         85
Beaver Valley 1   PWR                           7/19/06        8           211         70
Beaver Valley 2   PWR                           7/19/06        8           211         70
Susquehanna 1     BWR                           1/30/08        13          463         154
Susquehanna 2     BWR                           1/30/08        13          463         154
Hope Creek        BWR                           5/14/08        15          501         167
Total                                                                      6,733       2,224

MWt = megawatts of reactor thermal output; MWe = megawatts of electrical output
(Source: NRC)

Table 2. EPU Applications Under NRC Review

Plant            Uprate, %   Added MWt   NRC Approval Expected
Browns Ferry 1   15          494         To Be Determined
Browns Ferry 2   15          494         To Be Determined
Browns Ferry 3   15          494         To Be Determined
Monticello       12.9        229         December 2009
Point Beach 1*   17          260         To Be Determined
Point Beach 2*   17          260         To Be Determined
Total MWt                    2,231

MWt = megawatts of reactor thermal output
* These applications were undergoing NRC acceptance review as of May 2009.
(Source: NRC)
Stretch Power Uprates

A stretch power uprate (SPU) typically increases the original licensed power level by up to about 7%, usually by taking advantage of conservative measures built into the plant that previously were not included in design and licensing activities. SPUs involve, at most, modest equipment replacement and little or no change to either the nuclear steam supply system (NSSS) or turbine by limiting pressure increases (2% to 3%) to allow sufficient mass flow margin in the high-pressure (HP) turbine. Stretch uprate modifications concentrate on procedures and equipment setpoints, making the uprate capability plant-dependent.

Extended Power Uprates

An extended power uprate (EPU) increases the original licensed thermal power output by up to 20% but requires significant plant modifications. The focus of this paper is to review EPU requirements and provide estimates of the power added to the nuclear inventory by past, current, and future EPUs.

EARLY EPU ACTIVITY

Most of the early EPUs were performed on boiling water reactor (BWR) plants. Table 1 provides a list of NRC-approved EPUs. Fifteen BWRs have been approved to date, and nine were approved before the first pressurized water reactor (PWR) EPUs received approval from the NRC. A total of five PWRs received approval. Four of them requested rather modest increases. The exception is Constellation Energy's Robert E. Ginna nuclear power plant (Figure 1), which received NRC approval for a 16.8% power uprate increase in July 2006. Located along the south shore of Lake Ontario in Ontario, New York, Ginna is one of the oldest nuclear power plants still in operation in the US, having begun operation in 1970. The plant is a single-unit Westinghouse two-loop PWR. The original steam generators were replaced in 1996, enabling an almost 17% EPU to be approved by the NRC 20 years later.

Table 2 identifies EPU applications under review by the NRC. Three of them are for 15% EPUs for the three Tennessee Valley Authority Browns Ferry nuclear power plant units (Figure 2). The NRC operating licenses for Units 1, 2, and 3 were renewed in May 2006, which allows continued operation of the units until 2033, 2034, and 2036, respectively.

Figure 1. R.E. Ginna Nuclear Power Plant Approved for 16.8% Power Uprate (Source: NRC)

Figure 2. Browns Ferry Nuclear Power Plant Units 1, 2, and 3 Anticipating Approval for 15% EPUs (Source: TVA)


Table 3. Expected Applications for EPUs

Year    Number of Plants   MWt     Approximate MWe
2010    6                  2,274   758
2011    8                  3,173   1,058
2012    1                  522     174
2013    2                  870     290
Total   17                 6,839   2,280

MWt = megawatts of reactor thermal output
MWe = megawatts of electrical output
(Source: NRC)

PWR UPGRADES EMERGE

The plants whose owners have already contacted the NRC about an EPU and are expected to file an application by 2013 are listed in Table 3, but the real number of prospective uprates is far higher. At least 60 nuclear units are the most likely candidates for an EPU program in the near future, of which about 50 are PWRs. Taken together, these likely EPU upgrades would provide added capacity equivalent to that of 6 to 12 new plants.

The increased interest by PWR owners has been prompted, in part, by advances in technology and, perhaps more importantly, improvements in available fuels. Over the past two decades, fuels have become available with slightly higher enrichments, improved cladding, low noncondensable gas releases, and improvements in burnable poisons. These fuels also have better structural stiffness that makes them less vulnerable to vibration and fretting. Also, improved manufacturing processes have resulted in better process control, which leads to less statistical variation in design margin. With less design margin required, an output uprate is possible. Additionally, engineers have found ways to safely place more fuel in existing reactor vessels. All of the aforementioned improvements lead to greater thermal power output. Moreover, improvement to neutron fluence through the use of low-leakage cores has provided additional margin in NSSS components and helps to accommodate higher power levels over the long term.

The added thermal power output can be achieved with little or no increase in output steam pressures (based on redesign and replacement of the HP turbine rotor) by increasing steam mass flow. In some cases, an increase of 2% to 3% pressure has been adopted for increased fuel margins.

MANY ECONOMIC ADVANTAGES

The economic incentive of an EPU is to increase power output at competitive costs while avoiding the long lead times for constructing new generation. Information provided in a June 2008 Nuclear Energy Institute seminar, based on a small number of plants currently involved in EPU programs, indicates that the capital cost of this incremental power ranges from about 15% to 50% on a cost-per-kilowatt basis, compared with the cost of a new nuclear plant. Of course, these costs are very site dependent and tend to increase on a per-kilowatt basis with larger uprates, so it is difficult to generalize other than to suggest that, on a capital-cost-per-kilowatt basis, EPU uprates are very competitive with new power plant construction.

Other intangible benefits are also associated with an EPU. Many plants have been operating for 30 years or more and require major equipment replacement. These and other plants either have already completed, or are contemplating, license renewal. Combining an EPU with a maintenance upgrade and/or a license extension allows some of the cost and cost recovery to be shared among programs. Integrating the total project minimizes future outage risks because upgraded/modified equipment would be used that had already considered the new life-extension requirements. The added power from uprated units is also effective in reducing greenhouse gas emissions for the entire utility fleet of plants in a timely fashion, and can reduce utility costs as well.

Although not necessarily an economic advantage, a utility does not have to wait as long to reap the benefits of an EPU. An EPU can be brought into operation in about one-half the time required to license and build a new plant.


MORE THAN REPLACING PARTS

This does not mean that an EPU can be completed by merely making a few simple plant modifications. Increased thermal output requires greater thermal input into many of the plant systems and components, potentially reducing required margins through lowered material properties and adding burden on pumps, bearings, and seals. Increased flow accentuates flow accelerated corrosion (FAC) in pipes and other components. Increased mass flow has the potential to raise flow-induced vibration levels in systems and components to unacceptable levels or change the frequency of the exciting forces, causing vibration where it previously did not exist. The Electric Power Research Institute (EPRI) maintains a lessons-learned database that identifies issues observed and resolved in previous power uprates and serves as an excellent information base for future uprates.

It is difficult to generalize about the perfect plan to complete an optimum EPU for a given plant. Differences in initial regulatory approaches, past responses to regulatory issues, and previous modifications and equipment changeouts to maintain plant operation all combine to make the EPU program for each plant unique. Even side-by-side identical plants frequently require separate plans to accomplish an equivalent EPU. Therefore, detailed studies are required for each plant. However, some general trends have been observed.

Design duty for overpressure protection and required relief capacity in the reactor coolant pressure boundary from normal operating and transient design conditions typically increase with increased power. This may require that the primary- and/or secondary-side safety valves and safety-relief valves be modified. Otherwise, reactor coolant pressure boundary modifications have not been a major concern. Industry experience with power uprates to date has shown that the installed capacity of emergency core coolant systems is nearly always sufficient without modification. However, auxiliary feedwater systems and emergency service water systems may require modification.

Major balance-of-plant (BOP) upgrades have been the focus of most EPUs. The turbine, main generator, main power transformer, and power train pumps often have to be replaced or modified.

Components such as feedwater heaters, moisture separator reheaters, and heat exchangers are frequently replaced with larger units. Feedwater, condensate, and heater drain pumps, along with supporting components, typically have to be replaced or modified. Increased steam and feedwater mass flow often require that piping be replaced to accommodate greater mass flow or to counteract the effect of FAC. The design must also consider any increased demand for demineralized water.

For each EPU, an HP turbine retrofit (at least) is required, and because throttle margin can be achieved through the retrofit without an attendant increase in operating reactor pressure, the uprate can be analyzed and performed at constant pressure. Depending on the existing margins, the magnitude of the uprate, and the condition of the turbines, it may be necessary to replace, repower, or modify the low-pressure (LP) and/or HP portion of the turbine. In many cases, the condenser is either replaced or retubed. Plants with closed-loop cooling may also have to consider cooling tower upgrades, and plants with open cycles need to evaluate thermal effects from the condenser outfall.

Major modifications to the generator and stator (rewinding) are expected. This may also require increased cooling for the generator and increases the demands on isophase bus duct cooling. Transformers may need to be replaced with larger units. Replacement components are generally larger and heavier, which means that structures supporting these components are challenged and frequently have to be strengthened by modifying the building structure and other foundations.


MANAGING MARGINS

An EPU is a major undertaking for an operating plant that requires the combined expertise of the plant staff, NSSS contractors, turbine contractors, and, in most cases, nuclear EPC contractors. An initial but important step is to establish a margin management program (if the plant does not already have one) to ensure that adequate margins are available in systems, structures, and components (SSCs). Developing or updating the margin management program may be done in parallel with other EPU preparation steps. Several margins are of interest in a margin management program. The Institute of Nuclear Power Operations (INPO) identifies three different nuclear plant design margins: operating, design, and analytical (Figure 3).


Figure 3. Managing Margins (Source: INPO)
(The figure shows, from the range of normal operations up to ultimate capacity: the operating margin between the range of normal operations and the operating limit, the design margin between the operating limit and the analyzed design limit, and the analytical margin between the analyzed design limit and ultimate capacity.)

Operating margin is the difference between the operating limit and the range of normal operation. The operating limit is analogous to design values in engineering terms. It accounts for, and envelops, all the potential operating conditions of the plant. Design codes and licensing criteria include a certain margin, or safety factor, beyond the design limit, which addresses uncertainties in design, fabrication durability, reliability, and other issues. The difference between the analyzed design limit and the operating limit is this conservatism, which INPO calls the design margin.

Normal aging and plant operation, which require constant attention by owners, can decrease each of these margins. Increased thermal output from an EPU imposes further demands on the operating limit. Even systems or components not directly affected by the power increases may not function as efficiently as intended following an EPU. For all of these reasons, the margin management program becomes an important tool in performing an EPU.

The margin management program has two basic parts. One is analytical: ensuring that the design documents are current, correct, and consistent with the plant design features. The second part is more complex in that it requires a systemic assessment of the current condition of the physical plant through engineering walkdowns and reviews of condition reports and other operational data. A thorough review of EPRI's generic lessons-learned database is also important for identifying potential future issues.
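Restated algebraically (a simple paraphrase of the definitions above; the analytical margin expression is inferred from Figure 3 rather than stated explicitly in the text):

\begin{align*}
\text{Operating margin}  &= \text{Operating limit} - \text{Upper end of normal operating range} \\
\text{Design margin}     &= \text{Analyzed design limit} - \text{Operating limit} \\
\text{Analytical margin} &= \text{Ultimate capacity} - \text{Analyzed design limit}
\end{align*}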

Assuming that the necessary assessments have been performed and a decision made to consider an EPU, the next step is to conduct a feasibility study. An integrated team consisting of the owner's plant staff, an experienced architectural/engineering firm, the NSSS supplier, and the turbine generator supplier should perform the feasibility study. This approach minimizes interface issues among the aforementioned parties in relation to current operating experience at the nuclear plant and the NSSS, BOP, and turbine generator equipment. Potential modifications to the NSSS, the nuclear systems, the turbine and cooling system, and the BOP are studied. Initial evaluations are conducted to identify the potential power increases available through modifications of the NSSS, as discussed above. The turbine generator is also evaluated to determine modifications required to meet the proposed uprated power needs. And finally, the potentially affected nuclear and BOP systems and components are evaluated to determine the pinch points: those items that have suffered margin erosion due to preexisting factors or would suffer erosion due to the EPU modifications.

A cost-benefit analysis is included in, or prepared in parallel with, the feasibility study. Typically, the greater the uprate, the greater the cost of the last kilowatt added. Most utilities are finding that, compared with other available alternatives, it is cost-effective to implement the greatest amount of added power possible from the EPU, provided that other outside factors demonstrate that the need exists.

The next phase of the feasibility study is to identify modifications necessary to meet the EPU's requirements and ensure that the modifications reestablish required margins.


In some cases, margin can be restored solely through more sophisticated analysis. In other cases, hardware changes or plant modifications are required. Next, equipment specifications are prepared and purchase orders are placed for long-lead-time components. Typical components in this category include:

• HP and LP turbines (replacement)
• Main and auxiliary generators (upgrades)
• Transformers (replacement)
• Feedwater heaters (replacement)
• Pumps and motors (feedwater, condensate, heater drains, component cooling water)
• Spent fuel pool cooling heat exchangers
• Main steam reheaters
• Condenser and/or cooling tower (upgrades)
• Water treatment system (upgrades)

Based on the feasibility study, including the cost-benefit analysis, the owner decides on the final upgrades/modifications required to meet the EPU goals. With this final list, a more detailed evaluation is performed that supports a Licensing Amendment Report (LAR) for NRC review and approval. The LAR requirements are provided in NRC document RS-001, Review Standard for Extended Power Uprates. The LAR incorporates the completed analytical results along with additional detailed evaluations of SSCs directly or indirectly affected.

THE NRC'S REVIEW OF THE PACKAGE

The process for amending commercial nuclear power plant licenses and technical specifications for power uprates is the same as the process used for other license amendments. Therefore, EPU requests are submitted to the NRC as an LAR. This process is governed by 10 CFR 50.4, 10 CFR 50.90, 10 CFR 50.91, and 10 CFR 50.92.

After a licensee submits an application to change the power level at which it operates the plant, the NRC notifies the public by issuing a public notice in the Federal Register stating that it is considering the application. Members of the public have 30 days to comment on the licensee's request and 60 days to request a hearing. The NRC thoroughly reviews the application, any comments, and any requests for hearings. Additional information on the application will surely be requested through the NRC Request for Additional Information process.

After the NRC accepts the owner's application, the NRC again issues a public notice in the Federal Register. As before, members of the public have 30 days to comment and 60 days to request a hearing. And again, additional information is expected to be requested. Next, all EPU submittals require an Advisory Committee on Reactor Safety (ACRS) meeting. After the NRC and ACRS complete their reviews and consider and address any public comments and requests for hearings related to the application, the NRC issues its findings in a safety evaluation report. The NRC either approves or denies the power uprate request. A notice is then placed in the Federal Register regarding the NRC's decision.

The LAR for an EPU is extensive, involves evaluating virtually every aspect of a plant's operating experience, and exposes the licensee to questions about the current licensing basis, including public hearings. The licensee has to manage the risk that its LAR may bring attention to unique features that may be reevaluated by the NRC and be subject to public hearings. NRC commitments to review the LAR include a 12-month review cycle for acceptance of the EPU submittal.

PERFORMING THE UPRATES

Implementing the modifications requires extremely focused management planning and execution processes that are more complex than those of typical maintenance outage activities. Modifications are typically prepared in the form of design change packages. Some EPU programs require more than 50 major design change packages, all under the purview of a strict quality assurance program. These packages include detailed design documents and a step-by-step process for field implementation. Engineers, procurement staff, and construction experts work hand-in-hand to provide the design details to maintain the configuration management and design control process required by the utility. Equally important, they verify that the modification can be completed in a safe and efficient manner that results in a quality product, often in very confined quarters. Care must be taken to ensure that no damage occurs to adjacent equipment outside the boundaries of the modification.


INPO guidance suggests that each design package be completed and approved by designated plant personnel 1 year in advance of the planned outage, and utilities typically work toward this goal. To minimize plant downtime, actual hardware implementation is generally performed over two or more refueling outages. Because major portions of the plant, particularly the BOP, are subject to major rework, outage execution plays a major role in the overall success of the EPU.



Work in an operating plant introduces a whole new set of complexities, compared with new construction. During each plant outage, the utility must purchase replacement power. Outages are performed during off-peak periods, when electrical demand is low. Based on weather conditions and on other units that the utility may own (for example, less-efficient coal plants or gas turbines), replacement power may not be needed or the utility may operate less-efficient assets (and purchase replacement power). To keep the costs of replacement power as low as possible, it is important to keep outage time to a minimum; a rough cost illustration follows this section.

The success of an EPU program relies heavily on the quality of the management team and its ability to develop an effective integrated implementation plan, to schedule the work effectively, and to provide controls to ensure that those schedules are carried out. The scheduling effort is a critical component in controlling implementation costs. Minute details are included in the schedule, identifying construction installation activities that occur during each outage shift. Each work package is integrated with all other packages and with unrelated but required outage activities so that the needed cranes, access to space, and critical tools and other resources are available for all required tasks. It is also critical that the necessary trained human resources be available on a 24/7 basis during all implementation outages.

A detailed power ascension testing and monitoring program needs to be developed early in the process so it can be implemented following outage completion, ensuring that each system and component performs its intended functions.
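To give a sense of why outage duration dominates the replacement power question, the back-of-the-envelope sketch below multiplies unit size, outage length, and an assumed purchase price. All of the numbers (unit rating, outage days, price per MWh) are illustrative assumptions rather than values from any actual EPU program.

```python
# Rough replacement-power cost estimate during an uprate outage.
# All inputs are hypothetical; actual costs depend on market prices,
# season, and what other generating assets the utility can dispatch.

def replacement_power_cost(unit_mwe, outage_days, price_per_mwh, capacity_factor=1.0):
    """Energy not generated during the outage times an assumed purchase price."""
    energy_mwh = unit_mwe * 24 * outage_days * capacity_factor
    return energy_mwh * price_per_mwh

if __name__ == "__main__":
    # Example: a 1,000 MWe unit down for 30 days, buying power at $50/MWh.
    cost = replacement_power_cost(unit_mwe=1000, outage_days=30, price_per_mwh=50.0)
    print(f"Assumed replacement power cost: ${cost:,.0f}")  # -> $36,000,000
```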

CONCLUSIONS

Nuclear power is currently the most reliable source of power in the US. As the demand for nuclear power increases, it can be advantageous for plant owners to perform uprates to increase capacity. As described in this paper, an EPU can achieve up to 20% additional output but is much more involved than an MUR or SPU. While EPUs represent the most challenging power uprate due to their wide range of design and licensing impacts and need for plant modifications, they offer competitive costs and more payback in terms of MWe. In addition, the time required to bring an EPU on line is about half that needed to license and build a new plant. Therefore, it is easy to understand why an EPU is an attractive prospect for a utility.

The original version of this paper was first published on May 1, 2009, in POWER, an Internet magazine and the official publication of the Electric Power Conference & Exhibition (http://www.powermag.com/issues/features/Nuclear-Uprates-Add-Critical-Capacity_1860.html). The paper has been edited and reformatted to conform to the style of the Bechtel Technology Journal.

BIOGRAPHY

Eugene W. Thomas has been with Bechtel for nearly 40 years, involved primarily in commercial nuclear power. For the past 8 years, he has been an engineering manager for nuclear projects in the Frederick, Maryland, office. Earlier, Gene was the chief engineer for the civil/structural/architectural and plant design disciplines. A large portion of his career was spent as a technical staff supervisor; in this role, he provided direct technical support to more than 35 nuclear power plants.

After an initial stint with Boeing Company as a structural dynamicist, Gene joined Bechtel as one of the early seismic specialists. His interests soon broadened to encompass engineering supervision, and he successfully led design efforts on several projects. Gene has served on both ASCE and ASME code committees and on various nuclear industry task forces. He was one of the earliest recipients of a Bechtel Technical Grant and was privileged to serve as a Bechtel representative on a special US Government Presidential Task Force to develop and promote use of high-strength, corrosion-free steel as part of a larger government effort to support US manufacturing, expand research and development, and improve the US infrastructure. Gene has written several technical papers and contributed to the Handbook on Structural Concrete, published by McGraw-Hill.

Gene has an MS in Mechanical Engineering and a BS in Civil Engineering, both from Drexel Institute of Technology (now Drexel University), Philadelphia, Pennsylvania. He is a Six Sigma Champion.



INTEROPERABLE DEPLOYMENT STRATEGIES FOR ENTERPRISE SPATIAL DATA IN A GLOBAL ENGINEERING ENVIRONMENT
Issue Date: December 2009

Abstract: The implementation of a Bechtel enterprise geographic information system (GIS) has greatly facilitated the sharing and utilization of vast amounts of diverse geospatial data to support complex engineering projects worldwide. With the GIS technical discipline and spatial data being relatively new centralized resources within the company, and given the great variety of computer-aided design (CAD), GIS, and other data sources and formats involved in supporting Bechtel projects in its five global business units, the issues of data interoperability, data model standardization, system reliability, security, scalability, and GIS automation continue to be central to the GIS implementation and deployment strategies being adopted for the company. The service-oriented architecture, enterprise desktop, and Web deployment methodologies for spatial data are being developed in conjunction with Bechtel's larger interoperability efforts in data integration and encapsulation under International Organization for Standardization (ISO) Standard 15926.

Keywords: ANSI INCITS 353, engineering, enterprise GIS, geographic information system (GIS), ISO 15926, ISO 19115, metadata, Oracle Spatial, SDSFIE, spatial data standards

INTRODUCTION

The success of geographic information systems (GIS) in increasing efficiency, accuracy, productivity, communication, and collaboration within any business or organization has been documented. [1] However, managing and deploying spatial information across any large organization can be a daunting task, and within the global engineering environment of a company such as Bechtel, it can be even more challenging. For Bechtel, the implementation of an enterprise GIS has greatly facilitated spatial data sharing and utilization. This success has been achieved by standardization in data model and work flows; enterprise architecture that provides reliability, security, and scalability; and GIS automation that streamlines work processes and assists with data discovery and access. Such interoperable deployment strategies for spatial data are being developed in conjunction with Bechtel's larger interoperability efforts in data integration and encapsulation under International Organization for Standardization (ISO) Standard 15926, Integration of Life-Cycle Data for Process Plants including Oil and Gas Production Facilities.

Tracy J. McLane
tjmclane@bechtel.com

Yongmin Yan, PhD
yyan1@bechtel.com

Robin Benjamins
rxbenjam@bechtel.com

BACKGROUND

As Bechtel shifted from a home-office-based to a global, 24/7, work-shared, interbusiness-domain execution model, information technologies that support the engineering work processes began moving from technology-centric to standards-centric solutions. The need for an effective and affordable interoperability solution that fits the current execution model drove Bechtel to develop, implement, and deploy an approach based on ISO 15926. This generalized international standard for industrial automation systems and integration can be adapted to any business domain, such as fossil power, nuclear power, mining and metals, and civil.

The first implementation of ISO 15926 at Bechtel occurred in early 2000. It was based on a proprietary middleware platform that used ISO 15926 reference data at the dictionary compliance level. This implementation, the so-called DataBroker solution, is now used by most of Bechtel's business lines and projects worldwide.

2009 Bechtel Corporation. All rights reserved.


ABBREVIATIONS, ACRONYMS, AND TERMS

3D – three-dimensional
ADF – application development framework
ANSI – American National Standards Institute
BecGIS – Bechtel's enterprise GIS
BSAP – Bechtel standard application program
BIM – building information modeling
CAD – computer-aided design
DEM – digital elevation model
ECM – enterprise content management
ESRI – Environmental Systems Research Institute, Inc.
FGDC – Federal Geographic Data Committee
G&HES – (Bechtel) Geotechnical and Hydraulic Engineering Services
GBU – (Bechtel) global business unit
GIS – geographic information system(s)
INCITS – InterNational Committee for Information Technology Standards
interoperability – The automatic interpretation of technical information as it is exchanged between two systems
IS&T – (Bechtel) Information Systems and Technology
ISO – International Organization for Standardization
ISO 15926 – The ISO standard for Integration of Life-Cycle Data for Process Plants including Oil and Gas Production Facilities
ISO 19115 – The ISO standard for Geographic Information Metadata
JV – joint venture
LIM – life-cycle information management
MAPX – mean areal precipitation
MR – material requisition
NEXRAD – next-generation radar
NWS – National Weather Service
OGC – Open Geospatial Consortium
P&ID – piping and instrumentation diagram
PSN – project services network
RDL – reference data library
SDSFIE – Spatial Data Standard for Facilities, Infrastructure, and the Environment
TWG – technical working group

Currently, a new initiative is underway to upgrade the existing DataBroker platform to one fully compliant with ISO 15926, maximizing the use of standardized technologies, including Web Ontology Language and the Semantic Web. This new implementation will facilitate the ubiquitous interoperation of information among diverse systems and across different disciplines and business lines. Thus, ISO 15926 establishes a fundamental element of Bechtel's information management strategy (see Figure 1).

Spatial data is an essential component of the overall engineering and business information that is critical to Bechtel's interoperability strategy. The interoperability of mapping, imagery, and other related geospatial data is vital to support complex engineering projects worldwide. This paper describes interoperability strategies implemented at Bechtel that facilitate the sharing and integration of GIS content.

MANAGING DIVERSE SPATIAL INFORMATION

Data originating from government agencies, GIS vendors, clients, subcontractors, and different disciplines within the company comes in a variety of formats, ranging from hand drawings, text files, e-mails, and spreadsheets to database files, AutoDesk AutoCAD and Bentley MicroStation computer-aided design (CAD) drawings, Pitney Bowes MapInfo Professional files, and Environmental Systems Research Institute, Inc. (ESRI) shapefiles and geodatabases. Making such vast and diverse data easily accessible and reusable for projects is a great challenge. To address this challenge, Bechtel's GIS department developed standardized GIS desktop procedures and then diligently used them to catalog, verify, georeference, and load these datasets into a central Oracle Spatial database conforming to a standardized GIS data model, thereby creating a Bechtel enterprise GIS, known as BecGIS.

Figure 1. Engineering Information Management Strategy

Standardized Data Model and Workflows

The implementation of the standardized data model within the BecGIS has facilitated the development of standard processes to capture, manage, and deploy spatial information. The underlying data model used by Bechtel in its Oracle Spatial database environment is the Spatial Data Standard for Facilities, Infrastructure, and the Environment (SDSFIE), a GIS data model first developed for the military but whose usage has spread throughout many other federal, state, and local GIS organizations. It is published by the American National Standards Institute (ANSI) as ANSI INCITS 353 and is listed in the Federal Enterprise Architecture Geospatial Profile. [2]

The SDSFIE data model is distributed with a number of tools that have allowed the Bechtel GIS department to generate standard GIS architectures and automate portions of the data management workflow. Benefits of a standardized data model include:

• Living, breathing data dictionary
• Architecture reproducibility
• Facilitated use of GIS automation

While the SDSFIE data model captures 90% of all GIS datasets that Bechtel may need to support its work, it has sometimes been necessary to add new GIS layers or fields to meet specific business needs. Controlling the addition of

new data architecture through the SDSFIE tools ensures that all structures are reproducible and documented. Adoption of a standardized data model for the BecGIS environment has facilitated the development of GIS automation, which has helped implement numerous process improvements for data-intensive workflows around the company.

Interoperable Approach to Spatial Data Storage

Because Bechtel uses a variety of software products to manage and maintain data, it has been essential that the GIS department take an interoperable approach to the storage of spatial data within the BecGIS. Thus, the decision was made to use Oracle Spatial, along with the ESRI ArcGIS Server environment, to ensure that a variety of client software could access the spatial information, if necessary. Examples of client software are ESRI ArcGIS Desktop, Pitney Bowes MapInfo Professional, AutoDesk AutoCAD, Bentley Map and gINT, Google Earth, and Mentum Planet. Where interoperable vendor solutions and plug-ins have not been available, the GIS department has developed some of its own solutions to feed and extract data for specialty modeling software packages. In Figure 2, the solid green lines represent clients that can directly access spatial data within the BecGIS, while the dashed red lines represent Bechtel custom solutions to data interoperability.
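As a toy illustration of the "one central store, many clients" approach described above, the sketch below writes a single point feature out as a KML placemark that Google Earth, one of the client applications named above, can open directly. The feature values are hard-coded placeholders; in the BecGIS environment they would be read from the Oracle Spatial warehouse (for example, as well-known text via SDO_UTIL.TO_WKTGEOMETRY).

```python
# Minimal sketch: export one point feature from a central store to KML
# so a lightweight client (e.g., Google Earth) can display it.
import xml.etree.ElementTree as ET

# In practice this record would come from the Oracle Spatial warehouse;
# the values below are placeholders.
feature = {"name": "Survey Control Point 101", "lon": -81.73, "lat": 33.34}

kml = ET.Element("kml", xmlns="http://www.opengis.net/kml/2.2")
doc = ET.SubElement(kml, "Document")
pm = ET.SubElement(doc, "Placemark")
ET.SubElement(pm, "name").text = feature["name"]
point = ET.SubElement(pm, "Point")
ET.SubElement(point, "coordinates").text = f"{feature['lon']},{feature['lat']},0"

ET.ElementTree(kml).write("control_point.kml", xml_declaration=True, encoding="UTF-8")
print("Wrote control_point.kml")
```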



Figure 2. Data Interoperability of Spatial Data

Metadata Management

Each geospatial layer, as well as its feature-level geometry, in the BecGIS is accompanied by metadata created to document information such as the currentness, scale, pedigree, data source, purpose, publication or revision date, and access and use restrictions of the dataset. To facilitate the creation of metadata that is compliant with both the Federal Geographic Data Committee (FGDC) standard (FGDC-STD-001-1998) and the ISO Geographic Information Metadata standard (ISO 19115), a custom metadata management tool was developed. This tool eliminates duplicated data entries by transferring metadata directly from SDSFIE metadata tables to metadata stored on a spatial view within the BecGIS database.
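The sketch below illustrates the general idea of generating a small metadata record from attributes already tracked for a layer and serializing it as XML. The element names are simplified stand-ins and the values are invented; it is not the Bechtel metadata tool and does not reproduce the full FGDC-STD-001-1998 or ISO 19115 schemas.

```python
# Sketch: build a small metadata record (title, source, date, access constraints)
# and serialize it as XML. Element names are simplified placeholders, not the
# full FGDC or ISO 19115 schemas.
import xml.etree.ElementTree as ET

layer_metadata = {
    "title": "Pre-Construction Surface Contours",   # hypothetical layer
    "originator": "Bechtel GIS Department",
    "publication_date": "2009-06-15",
    "scale": "1:2400",
    "access_constraints": "Internal use only",
}

record = ET.Element("metadata")
ident = ET.SubElement(record, "identification")
for tag, value in layer_metadata.items():
    ET.SubElement(ident, tag).text = value

print(ET.tostring(record, encoding="unicode"))
```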


ENTERPRISE GIS ARCHITECTURE

A well-implemented enterprise GIS is characterized by reliability, scalability, and high security. All three of these issues have been of key importance to the implementation of the system developed at Bechtel.

Reliability

An enterprise system should be reliable, have few downtimes, and be resilient to disasters such as data corruption and hardware failure. For a global engineering firm that operates 24 hours a day, system reliability is critical. Routine database backup and redundant backup servers should be used. For Bechtel, the use of a development/acceptance environment for spatial data development and testing ensures the quality of information before it is released into a production environment for the BecGIS user community.

Scalability

An enterprise system should be able to respond to an increased user base without sacrificing performance. As new projects and new users are constantly added to the system, there is a need to scale it as demand picks up. This can be accomplished by careful system design. A multitier system architecture is much more scalable than a single-tier system. Depending on where bottlenecks tend to occur, more hardware capacity can be added to the database, the Web, or the system's application tier. High performance can be achieved by both database tuning and application optimization. In a GIS Web mapping environment, the map engine is often the bottleneck that requires constant monitoring and performance tuning.

Security

User access is controlled by a multitier security strategy implemented through operating system and database authentication, access control, and encryption. Web services add another layer of security on top of the database; this is further strengthened by application-level access control.

Network-Level Security

The first tier where the security of any GIS is implemented is at the network level. Access privileges to the computer network and underlying server architecture should be the first defense in protecting a valuable company resource. The entire BecGIS resides within Bechtel's intranet environment, which is secured against outside Internet access.

Network-Level Security

• Firewalls: The system is secured within the company's BecWeb firewalled intranet environment, preventing external access to this valuable company resource.

Database-Level Security

The second tier in securing an enterprise-level GIS is implemented within the database. The use of traditional database instances, schemas, and roles should be the starting place for any GIS organization. However, the additional use of spatial views allows GIS personnel to fine-tune spatial data access at the feature and column level as well, while also providing a means to make the data more user-friendly. Spatial views have also been a means by which Bechtel has tied into other Oracle database environments outside of the BecGIS, which means dynamically tying information to spatial features from its source, rather than making a redundant, out-of-date copy.

A conceptual diagram of database security implementation is provided in Figure 3. Here, project data and public data are segregated into separate database instances. Each schema has administrator, editor, and viewer roles. Bechtel global business unit (GBU)-based roles cover multiple schemas. System viewers cannot access the underlying spatial database structure and can access information only through spatial views.


Figure 3. Database Security

Database-Level Security

• Database instances: Separate database instances secure public versus project-specific data layers.
• Schemas: Individual schemas separate data by project, geography, or data source (e.g., project data is stored in individual project schemas).
• Role-based user control: Roles like administrator, editor, and viewer were created for each schema and GBU. Administrators have the privilege to create objects and grant permissions to other users. Editors have permission to load and edit data, while viewers have read-only access to the data through spatial views.
• Spatial views: Spatial views allow GIS personnel to fine-tune spatial data access by hiding the underlying spatial tables and exposing only selected rows and columns to the end users. They also allow information from other databases to be dynamically tied to BecGIS features. (A minimal sketch of this pattern follows this list.)
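The statements below sketch the kind of SQL behind the role and spatial view pattern just listed: a read-only role, a view exposing only selected columns and rows of a project schema, and a grant tying the two together. All object names are invented, and in practice a DBA or provisioning script (for example, via the cx_Oracle driver or SQL*Plus) would execute such statements; this standalone script simply prints them.

```python
# Sketch of role-based access plus a restricted spatial view.
# All object names (schemas, tables, roles, columns) are hypothetical.
PROJECT_SCHEMA = "PROJ_RAIL01"
VIEWER_ROLE = "PROJ_RAIL01_VIEWER"

statements = [
    # Read-only role for casual users of this project schema.
    f"CREATE ROLE {VIEWER_ROLE}",

    # Spatial view exposing only selected columns and rows of the base table;
    # end users never see the underlying table structure.
    f"""CREATE OR REPLACE VIEW {PROJECT_SCHEMA}.SURVEY_POINTS_V AS
        SELECT point_id, description, geom
          FROM {PROJECT_SCHEMA}.SURVEY_POINTS
         WHERE status = 'APPROVED'""",

    # Viewers get SELECT on the view, not on the base table.
    f"GRANT SELECT ON {PROJECT_SCHEMA}.SURVEY_POINTS_V TO {VIEWER_ROLE}",
]

for sql in statements:
    print(sql.strip() + ";")
```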

Desktop/Application-Level Security

The third tier through which to address the security of an enterprise GIS is at the desktop and/or Web application level. Bechtel's GIS department has taken a very innovative approach to implementing application-level security by establishing database user groups. The partitions within the database to which a user has access privileges are determined by the group of which the user is a member.

• Web map services: Security is set by Web map service levels that limit access to certain users.
• Desktop application level authentication and access control: User logons are checked against an access control table to see which schema the user has permission to access. Functionalities available to the user are also tied to the user logon.
• Web application level authentication and access control: Access is restricted at the Web server level and also within Web applications (particularly those that provide spatial content management functionality).
• Data encryption: Critical information is stored as encrypted data and decrypted on the fly to prevent unauthorized access.


ENTERPRISE GIS DEPLOYMENT STRATEGIES

Deployment of a customized GIS client interface has helped Bechtel provide GIS users with a common experience within their GIS software while controlling the release of new software versions and service packs. Access to custom BecGIS utilities and data has helped users increase productivity and streamline data-intensive workflows.

GIS automation is a key means of standardizing GIS analysis processes and data access to an enterprise-level system. At Bechtel, this GIS automation is built on top of a Web-service-oriented, component-based, multitier application development framework (ADF). Web map services are essential in disseminating information across the company.

Application Development Framework

The ADF provides a foundation for enterprise-level GIS application development. It greatly simplifies access to data without compromising security. The ADF exposes functionalities and business logic as Web services or objects while hiding the complexity of security handling and database interactions from the developers. Figure 4 shows the general ADF architecture of the BecGIS, and Figure 5 shows how different components are deployed. The data access component, which is used by Web services, handles database connections and access. Web services are consumed by both desktop and Web applications. Specific interactions with the Web services are packaged as data providers to be used by desktop applications; this further reduces the system's complexity.

Figure 4. Application Development Framework

Figure 5. Deployment Plan
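To make the data provider idea more concrete, the sketch below wraps a hypothetical map service behind a small client-side class so that desktop code requests features by layer name without handling connection or security details. The endpoint URL, query parameters, and token handling are assumptions for illustration only and do not describe the actual BecGIS services.

```python
# Sketch of a client-side "data provider" that hides Web service details
# from desktop tools. The service URL and its query parameters are invented.
import json
import urllib.parse
import urllib.request

class FeatureProvider:
    """Fetches features for a named layer from a (hypothetical) map service."""

    def __init__(self, base_url, auth_token):
        self.base_url = base_url          # e.g., an intranet Web service endpoint
        self.auth_token = auth_token      # application-level access control

    def get_features(self, layer_name, max_records=100):
        query = urllib.parse.urlencode({"layer": layer_name, "limit": max_records})
        request = urllib.request.Request(
            f"{self.base_url}/features?{query}",
            headers={"Authorization": f"Bearer {self.auth_token}"},
        )
        with urllib.request.urlopen(request) as response:
            return json.loads(response.read().decode("utf-8"))

# Usage (would require such a service to exist):
#   provider = FeatureProvider("http://becgis.example/api", auth_token="...")
#   points = provider.get_features("survey_control_points")
```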

This multitier architecture eliminates duplicated code, simplifies programming, and is much more scalable. It enables security to be implemented at different levels. Changes can be made to specific components without affecting other components. Thus, changes to the data access component or Web services do not necessarily invoke the recompilation and redeployment of client-side components.

Custom GIS Tools

Bechtel has developed custom GIS tools on top of the ADF and deployed them as logical groups of tools in GIS extensions. These tools streamline data processing and analysis, which can save substantial processing time and costs. One such custom GIS tool was used to derive radar-based mean areal precipitation (MAPX) data series by watershed sub-basins from 11 years of next-generation radar (NEXRAD) Stage III hourly precipitation data provided by the National Weather Service (NWS). Completed in a day, the whole process could have easily taken weeks or months if this data processing had been done manually (a simplified sketch of the underlying averaging step appears after this subsection).

Another example is the BecGIS Terrain Profile Tool, which automates the generation of compass rose diagrams at a user-specified distance from a source location. The tool has the capability to use a digital elevation model (DEM) or other surface raster data source to generate three-dimensional (3D) information. It then allows the results to be output to a Microsoft Excel spreadsheet as individual profile charts.

Custom GIS tools can provide interoperability between the GIS and mathematical and/or specialty modeling software packages. One particular GIS extension developed at Bechtel translates data between the BecGIS and the US Army Corps of Engineers hydrologic modeling software HEC-HMR52, which is used to model storm events.
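The MAPX derivation described above reduces, at its core, to averaging gridded radar rainfall over the cells that fall inside each sub-basin, hour by hour. The sketch below shows that step on a tiny made-up grid with boolean basin masks; the real tool operates on NEXRAD Stage III grids and watershed polygons within the GIS, so the arrays and basin names here are purely illustrative.

```python
# Core of a mean areal precipitation (MAPX) calculation: for each hourly
# precipitation grid, average the cells inside each sub-basin mask.
# Grids and masks below are tiny stand-ins for NEXRAD Stage III data.
import numpy as np

hourly_grids = [
    np.array([[0.0, 0.1, 0.2],
              [0.1, 0.3, 0.4],
              [0.0, 0.2, 0.5]]),          # hour 1 precipitation (inches)
    np.array([[0.2, 0.2, 0.0],
              [0.3, 0.1, 0.0],
              [0.4, 0.0, 0.0]]),          # hour 2 precipitation (inches)
]

basin_masks = {
    "subbasin_A": np.array([[True,  True,  False],
                            [True,  True,  False],
                            [False, False, False]]),
    "subbasin_B": np.array([[False, False, True],
                            [False, False, True],
                            [False, True,  True]]),
}

for name, mask in basin_masks.items():
    series = [float(grid[mask].mean()) for grid in hourly_grids]
    print(name, [round(v, 3) for v in series])
```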

Spatial Content Management and Other GIS Automation Interfaces

Finding the desired spatial data from hundreds of spatial views across dozens of schemas and multiple database instances can be a daunting task without some type of automation. Locating one of hundreds of maps created (and revised many times) in the past by a GIS organization is equally challenging. To facilitate easy discovery of and access to enterprise spatial data and map products, Bechtel developed a spatial data search and retrieval tool called the BecGIS Spatial Data Tool.

Using project, theme, and place keyword searches, this tool can easily locate and retrieve GIS layers and maps housed within the enterprise system. BecGIS users are authenticated based on their logons, and data access is controlled by an access control table that specifies which users have access to which database schemas. Search results are organized in tree views. Succinct metadata information is shown as tooltips, and full-length metadata can be retrieved easily. For consistency, both Bechtel and SDSFIE symbology can be selected by users to standardize their visualization of the spatial information.

The BecGIS Spatial Data Tool makes the BecGIS accessible to the casual user and greatly increases the system's usability. Without such an easy-to-use tool, a sophisticated system can be a useless and wasted resource.
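A stripped-down version of the search behavior just described might look like the sketch below: a keyword query filtered through an access control table keyed on the user's logon. The catalog entries, schema names, and logons are invented, and the actual BecGIS Spatial Data Tool works against Oracle metadata tables and database authentication rather than in-memory lists.

```python
# Sketch: keyword search over a layer catalog, filtered by an access
# control table that maps user logons to the schemas they may see.
catalog = [
    {"schema": "PUBLIC_BASE", "layer": "topography", "keywords": {"terrain", "contours"}},
    {"schema": "PROJ_RAIL01", "layer": "rail_segments", "keywords": {"rail", "alignment"}},
    {"schema": "PROJ_MINE02", "layer": "boreholes", "keywords": {"geotechnical", "borehole"}},
]

access_control = {                      # hypothetical logon-to-schema permissions
    "jdoe": {"PUBLIC_BASE", "PROJ_RAIL01"},
}

def search(user, keyword):
    allowed = access_control.get(user, set())
    return [
        f"{entry['schema']}.{entry['layer']}"
        for entry in catalog
        if entry["schema"] in allowed and keyword.lower() in entry["keywords"]
    ]

print(search("jdoe", "rail"))        # ['PROJ_RAIL01.rail_segments']
print(search("jdoe", "borehole"))    # [] - schema not granted to this user
```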

Web Application Solutions

Web map services are essential tools for disseminating information across the company. Users can quickly gain access to valuable data without having full-blown desktop GIS software and substantial GIS training. For Bechtel, one such quickly deployed Web application provided surveyors for a rail expansion project access to survey request forms, survey control points, operating areas, rail segments, railway station locations, utilities, aerial photos, and other information through a Web mapping environment. For another rail project, a Web application was quickly set up to share terrain information among project staff from different offices across the globe.

Development of a GIS Knowledge Bank

Of course, no enterprise GIS can be considered a complete success without having the capability to communicate with and train the end users. As part of the Bechtel GIS department's role as a corporate center of excellence, it has developed a GIS technical discipline home page on the BecWeb, where employees can access useful GIS resources. These resources include avenues for GIS training, both internal (through Bechtel University) and external. In addition, the GIS department sponsors informal GIS technical sessions at lunchtime and holds monthly GIS technical working group (TWG) teleconferences via Microsoft Live Meeting to share and discuss current GIS-related technical issues. The GIS department has also developed a body of literature called GIS Desktop Procedures.
These procedures document standardized processes and best practices for performing a variety of GIS-related tasks. These documented workflows have been assembled to provide GIS users with step-by-step examples of how to do anything from modeling subsurface bathymetry data to validating cut-and-fill calculations for engineering design.

Communicating the benefits and uses of a GIS will continue to be a primary goal of the Bechtel GIS department if the BecGIS is going to be a fully realized technology for the company. Thus, ongoing efforts to provide training, expand Bechtel's GIS Knowledge Bank, and develop Web-based GIS applications are the key to furthering awareness and use of this valuable corporate resource.



CONCLUSIONS

BecGIS, Bechtel's enterprise GIS approach, has been successfully implemented and is currently employed in a variety of Bechtel's businesses. Activities BecGIS currently supports include nuclear power combined license applications; large-scale market analyses for telecommunications clients; geotechnical investigations at construction, power, and mining and metals sites; and pipeline routing in the oil and gas industry. This range of uses demonstrates that an interoperable approach to enterprise GIS design and deployment through standardized data models, workflow processes, and automation tools can significantly improve the effective retrieval and usability of spatial data in a global work environment. Today's economic challenges make it even more important for a company such as Bechtel to embrace the opportunity to integrate GIS efficiencies and innovations into its current work processes.


ACKNOWLEDGMENTS

The Bechtel GIS department would like to thank Bechtel upper management for its support in the creation of a corporate-level GIS. Its recognition and support of a GIS white paper developed in the fall of 2006 [3] that made a case for a centralized GIS resource for the company helped realize the formal creation of the GIS technical discipline within the Geotechnical and Hydraulic Engineering Services (G&HES) organization in March 2007. The ongoing effort to integrate use of BecGIS into the daily workflow continues to be championed by the G&HES functional manager, Stew Taylor, and the many subject matter experts around the company with whom the GIS department has had the opportunity to work since its formation. This support, along with a continued close partnership with Bechtel's Information Systems and Technology (IS&T) department, has contributed to the success of centralized GIS efforts within the company over the past few years.

TRADEMARKS

AutoDesk and AutoCAD are registered trademarks of AutoDesk, Inc., and/or its subsidiaries and/or affiliates in the USA and/or other countries. Bentley, gINT, and MicroStation are registered trademarks and Bentley Map is a trademark of Bentley Systems, Incorporated, or one of its direct or indirect wholly owned subsidiaries. ESRI and ArcGIS are registered trademarks of ESRI in the United States, the European Union, or certain other jurisdictions. Google is a trademark of Google Inc. MapInfo Professional is a registered trademark of Pitney Bowes Business Insight, a division of Pitney Bowes Software and/or its affiliates. Mentum Planet is a registered trademark owned by Mentum S.A. Microsoft and Excel are registered trademarks of Microsoft Corporation in the United States and/or other countries. Oracle is a registered trademark of Oracle Corporation and/or its affiliates.

REFERENCES

[1] C. Thomas and M. Ospina, Measuring Up: The Business Case for GIS, ESRI Press, Redlands, California, 2004, access via http://esripress.esri.com/display/index.cfm?fuseaction=display&websiteID=79&moduleID=0.

[2] Architecture and Infrastructure Committee, Federal Chief Information Officers Council and Federal Geographic Data Committee, Federal Enterprise Architecture Geospatial Profile Version 1.1, January 27, 2006, http://colab.cim3.net/file/work/geocop/ProfileDocument/FEA_Geospatial_Profile_v1_1.pdf.

[3] S. Taylor and T. McLane, The Case for a Centralized Geographic Information System (GIS) Organization for Bechtel, unpublished white paper, 2006.


BIOGRAPHIES
Tracy J. McLane is Bechtel's corporate GIS manager. She is responsible for developing GIS as a new corporate Technical Center of Excellence and has implemented an enterprise GIS database and a centralized GIS Knowledge Bank for the Bechtel GIS user community. As a member of Bechtel's Geotechnical and Hydraulic Engineering Services Group, she also serves as the GIS technical discipline lead for the company. During her 11 years with Bechtel, she has provided GIS support for each of the company's global business units. Tracy has worked in the GIS industry for more than 16 years. Her GIS experience includes positions in the public and private sector, as well as experience with the Tennessee Valley Authority and the US Department of Energy's Savannah River Site. Tracy holds an MSc in Geography from the University of Tennessee, Knoxville, and a BA in International Business from Eckerd College in Saint Petersburg, Florida.

Yongmin Yan, PhD, is currently a GIS automation specialist for Bechtel Corporation. He is responsible for creating GIS application development standards and procedures and for leading GIS automation tasks. Yongmin has more than 15 years of experience in GIS automation and application development. He has provided GIS support related to nuclear licensing applications for Bechtel's Power GBU, as well as GIS support for the Mining & Metals, Communications, and Civil GBUs. Yongmin holds a PhD and an MA in City and Regional Planning from the University of Pennsylvania, as well as an MS in Environmental Planning and Management and a BS in Physical Geography, both from Peking University, China.

Robin Benjamins is Bechtel's engineering automation manager. In this role, he is responsible for developing and implementing the strategy for Bechtel's Central Engineering & Technology and Information Systems & Technology Groups. Robin led Bechtel's effort to create a standard, global interoperability solution, incorporating ISO Standard 15926 methodologies, that is now used by the company worldwide.

Prior to joining Bechtel in 1990, Robin worked for other leading EPC firms, including Fluor Corporation, Brown & Root, and Ralph M. Parsons, Inc. He is an accomplished technologist with proven expertise in managing the engineering application portfolio, interoperability, and data integration solutions. His 32 years of experience in the engineering, procurement, and construction industry includes 16 years providing technology solutions to internal and external customers. He has established an invaluable expertise in business processes, combined with technological acumen.

Robin is a board member of the POSC Caesar Association, which initiated ISO 15926. POSC Caesar Association is a global, nonprofit member organization promoting the development of open specifications to be used as standards for enabling the interoperability of data, software, and related matters. In this context, Robin is the project manager for two key industry collaboration projects focused on implementing ISO 15926. Both projects are joint POSC Caesar Association and FIATECH projects. FIATECH, an industry consortium of leading capital project industry owners, engineering construction contractors, and technology suppliers, was created in 2000 and is a separately funded initiative of the Construction Industry Institute at The University of Texas at Austin.


Systems & Infrastructure

Technology Papers

177  Site Characterization Philosophy and Liquefaction Evaluation of Aged Sands
     Michael R. Lewis; Ignacio Arango, PhD; and Michael D. McHood

193  Evaluation of Plant Throughput for a Chemical Weapons Destruction Facility
     Christine Statton; August D. Benz; Craig A. Myler, PhD; Wilson Tang; and Paul Dent

205  Investigation of Erosion from High-Level Waste Slurries at the Hanford Waste Treatment and Immobilization Plant
     Ivan G. Papp and Garth M. Duncan

Hanford Waste Treatment and Immobilization Plant
The WTP, under construction at the former nuclear production site in Hanford, Washington, will use a process called vitrification to transform some 200 million liters of radioactive and chemical waste into glass so that it can be safely stored. The plant will be the largest of its type. Communications Specialist Jenna Coddington looks at the maze of pipes being installed in the Chiller Compressor Plant.

SITE CHARACTERIZATION PHILOSOPHY AND LIQUEFACTION EVALUATION OF AGED SANDS


Originally Issued: March 2008
Updated: December 2009

Abstract: This paper describes site characterization using the cone penetration test (CPT) and recognition of aging as a factor affecting soil properties. Pioneered by Dr. John H. Schmertmann, P.E. (Professor Emeritus, Department of Civil and Coastal Engineering, University of Florida), these geotechnical engineering methods are practiced by Bechtel in general and at the Savannah River Site (SRS) in South Carolina in particular. The paper introduces a general subsurface exploration approach developed by the authors. This approach consists of phasing the investigation, employing the observational method principles suggested by R.B. Peck and others. The authors found that borehole spacing and exploration cost recommendations proposed by G.F. Sowers are reasonable for developing an investigation program, recognizing that the final program will evolve through continuous review. The subsurface soils at the SRS are of Eocene and Miocene age. Because the age of these deposits has a marked effect on their cyclic resistance, a field investigation and laboratory testing program was devised to measure and account for this effect. This paper addresses recommendations regarding the liquefaction assessment of soils in the context of reassessing the SRS soils. The paper shows that not only does aging play a major role in cyclic resistance, but it should also be accounted for in liquefaction potential assessments for soils older than Holocene age.

Keywords: aging, characterization, cone penetration test (CPT), cost, cyclic shear strength, exploration, geology, liquefaction, risk, soil, standard penetration test (SPT), uncertainty

INTRODUCTION

The contributions made by Dr. John H. Schmertmann, P.E. (Professor Emeritus, Department of Civil and Coastal Engineering, University of Florida) to the Geotechnical Engineering profession, spanning over 50 years, have dealt with numerous aspects of soil mechanics important to practicing engineers. His papers, presentations, research reports, and technical discussions published in the ASCE geotechnical journals, at ASCE conferences, in ASTM special technical publications, at international conferences, and in public agency research reports cover many aspects of geotechnical engineering. In particular, they relate the application of laboratory and field testing to the strength and compressibility characterization of in situ soils. He published guidelines for the interpretation of cone penetration tests (CPTs) and standard penetration tests (SPTs) as early as 1970 (Schmertmann [1]). His ideas about the potential of these two tests improved in the subsequent years through lessons learned from additional research and case histories. Although the SPT is giving way to many other in situ tests, the CPT and SPT still constitute two of the most important tools for geotechnical site characterization.

For the ASCE's 25th Terzaghi Lecture in 1989, Professor Schmertmann chose the important topic of aging as it affects soil properties (Schmertmann [2]). In his lecture, he elaborated on the impact of aging on soil compressibility, stress-strain characteristics, static and cyclic strength, liquefaction resistance, and other properties, based on numerous laboratory test results and observations compiled from well-documented case histories.

The first part of this paper addresses exploration and the use of the CPT, while the second part of the paper addresses aging of soils and the role it plays in the dynamic strength of soils.

Michael R. Lewis
mlewis@bechtel.com

Ignacio Arango, PhD


iarango@bechtel.com

Michael D. McHood
mxmchood@bechtel.com

2009 Bechtel Corporation. All rights reserved.


ABBREVIATIONS, ACRONYMS, AND TERMS

BSRI – Bechtel Savannah River Incorporated
CPT – cone penetration test
CRR – cyclic resistance ratio
DOE – (US) Department of Energy
DMT – dilatometer test
FC – fines content
FVST – field vane shear test
LNG – liquefied natural gas
MPa – megapascal
NP – non-plastic
PMT – pressure meter test
SC – clayey sand
SCPTu – seismic piezocone penetration test
SM – silty sand
SP – poorly graded sand
SPT – standard penetration test
SRS – Savannah River Site
TEC – total estimated cost
tsf – ton per square foot
UCB – University of California at Berkeley
USCS – Unified Soil Classification System

BACKGROUND

The Savannah River Site (SRS) is located along the Savannah River in the upper portion of the Atlantic Coastal Plain of South Carolina, approximately 160 km (100 miles) upstream of Savannah, Georgia (Figure 1). The SRS occupies about 830 km2 (320 mi2) and is owned by the Department of Energy (DOE). Since its inception in the early 1950s, the SRS has been an integral part of the United States defense establishment. As a result, several critical facilities have been, and will continue to be, constructed and operated at the SRS. By their nature, these facilities demand the very best in design and construction and all of the trappings that follow nuclear and defense-related projects, all in an effort to ensure safety during construction, operation, and eventual decommissioning.

From a geologic and geotechnical standpoint, the SRS presents a number of interesting challenges. We discuss two of those challenges in this paper: site characterization and how we use the CPT, and the effect that age plays in the cyclic strength of soil deposits.

Figure 1. Savannah River Site and Surrounding Region

SITE CHARACTERIZATION

For geotechnical engineers and geologists, site characterization is the most important aspect of the work; without an accurate depiction of the subsurface conditions and the geology of a site, subsequent analyses are guesswork. In recent times, however, it appears that this activity has been receiving less and less attention, or at least it may be taken for granted. We are not sure of the reasons, but we believe that one aspect is the ever increasing reliance on modeling, parametric analyses, and statistical inference. While these activities are important and clearly play an integral role in site characterization, they are no substitute for carefully planned and executed subsurface exploration programs. In fact, they should go hand in hand.

What is site characterization? According to Gould [3], site characterization is a term used to describe a site by a statement of its characteristics. Sowers [4] describes it as: a program of site investigation that will identify the significant underground conditions and define the variability as far as practical. More recently, Baecher and Christian [5] describe site characterization as a plan of action for obtaining information on site geology and for obtaining estimates of parameters to be used in modeling


engineering performance. From our perspective, site characterization is the determination of subsurface conditions by:

• Understanding local/regional geology (through site visits, geologic mapping and interpretation, and aerial photo interpretations)
• Performing appropriate geophysical surveys, borings, sampling, in situ testing (SPT, CPT, dilatometer test [DMT], pressure meter test [PMT], field vane shear test [FVST], etc.), and groundwater and piezometric observations
• Completing appropriate laboratory testing and engineering analyses and modeling
• Reviewing and interpreting the performance of nearby facilities
• Applying individual and collective experience, including site-specific (local) knowledge and general professional judgment

Additionally, and probably more importantly, our experience is that an effective site exploration program must be flexible and continually reviewed and adjusted in real time as it proceeds. This can and does present challenges with regard to budget and schedule considerations.

The objective of site characterization is to better predict the performance of the proposed facility. As a means to this end, it is necessary to understand the geology; the groundwater conditions; the physical, mechanical, and dynamic properties of the affected strata; and the performance of existing facilities. To meet these objectives, the characterization program needs to be well planned and communicated to the project team and the customer and must include two key components. First, the quality of the characterization data must be assessed continuously. Are the data adequate and accurate? Second, the data need to be interpreted and analyzed on a near real-time basis. What answers are suggested by the data? Do they make sense, and is additional information needed?

In addition, all exploration programs have some uncertainty attached to the results; it is unavoidable. The question is one of how to keep uncertainty to a minimum given such constraints as budget and schedule.

Level of Effort

In developing an exploration program, the scope is invariably reduced to cost. Historically, our experience indicates that most nongeotechnical professionals attempt to limit this expenditure, not because the cost is not justified, but simply to manage the project.

This so-called low-cost/high-speed mentality may be fine on some or even most projects, but it can also be a recipe for disaster. We clearly endorse project management principles and the need to manage the effort; as geotechnical professionals, we also recognize the need for flexible investigation programs that take into account actual site-specific conditions and the uncertainty inherent in all site investigations. Therefore, it is incumbent upon the geoscience professional to ensure that this philosophy is clearly understood by the decision makers on the project and the client; this is precisely where tools such as the CPT are invaluable.

At the SRS, a routine boring to 50 meters (164 feet) depth with split-spoon sampling every 1.5 meters (5 feet) costs about three times as much as, and takes four times longer than, a seismic piezocone penetration test (SCPTu) to the same depth. (Note: It is our opinion that except for the most routine projects, if the CPT is to be used, it should be the SCPTu rather than the conventional CPT.) Thus, at any stage of the investigation program, and for less time and money, much more stratigraphic detail can be obtained using CPT technology as a first choice over conventional drilling and sampling methods. We do not advocate the abandonment of traditional borings as a technique for subsurface exploration. In fact, and as is discussed in the next section, CPT technology should be combined with drilling and sampling (and other in situ testing) for a highly effective exploration program; a rough cost and schedule comparison under these ratios is sketched below.

In our experience, on most major and critical projects, the initial budget is normally not an issue. Although heavily scrutinized, budgets generally are given to perform a scope of work, albeit ill-defined at the beginning of a project. Rather, it is a combination of schedule (having enough time to complete the initial program and evaluate the results) and/or revisions to the program based on actual conditions encountered (scope changes) that presents the greatest challenge to investigation programs. In other words, changes to the original scope, even though they may be fully warranted, are difficult to get approved. Therefore, geoscience professionals are obligated to communicate risk and uncertainty (common to every program) early to the decision makers. In this case, risk can be in terms of money and time to complete a program and/or technical risk if a program is not fully implemented or if it is cut short. Risk and uncertainty cannot be alleviated, but they can be managed.
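Using the cost and schedule ratios quoted above (a sampled boring at roughly three times the cost and four times the duration of an SCPTu to the same depth), the sketch below compares two programs of equal size. The SCPTu unit cost and duration are placeholders; only the ratios are taken from the text.

```python
# Compare exploration program cost/time for borings vs. SCPTu soundings,
# using the approximate SRS ratios from the text (cost x3, duration x4).
# The SCPTu unit cost and duration below are assumed placeholders.
SCPTU_COST = 5_000.0      # dollars per 50 m sounding (assumed)
SCPTU_DAYS = 0.5          # field days per sounding (assumed)

BORING_COST = 3 * SCPTU_COST
BORING_DAYS = 4 * SCPTU_DAYS

def program(n_scptu, n_borings):
    cost = n_scptu * SCPTU_COST + n_borings * BORING_COST
    days = n_scptu * SCPTU_DAYS + n_borings * BORING_DAYS
    return cost, days

for label, counts in [("all borings", (0, 30)),
                      ("CPT-led mix", (24, 6))]:
    cost, days = program(*counts)
    print(f"{label:12s}: ${cost:,.0f}, {days:.1f} field days")
```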




A discussion about risk and uncertainty is far beyond the scope of this paper; however, based on experience, actual results (cost and scope) for like projects can be factored in to ensure that the project under consideration is not an outlier in terms of the proposed level of effort.

For example, Figure 2 shows the results of the number of borings/CPTs by facility area and hazard category for projects with which we have been involved, as well as results from familiar case histories. The hazard category is somewhat subjective and is not based on any hard and fast criteria; rather, it is more qualitative, based on our experience. The results show considerable scatter in terms of hazard, but there is a distinct trend in terms of size; the larger the facility, the more exploration. These results are not unlike the suggestions of Sowers [4] in the size range of about 1,000 m2 to 100,000 m2 (about 11,000 ft2 to 1,000,000 ft2) for dams/dikes, multistory buildings, and manufacturing plants.

In the same way, Figure 3 depicts the geotechnical cost in relation to the total estimated cost (TEC) of a particular project. The projects shown are those with which we have been involved or that are found in the literature and for which reasonable cost information is available. They are categorized by focus on transportation, power, nuclear fuel handling, and liquefied natural gas (LNG). While the scatter is significant, trends are still obvious; the higher

the estimated cost, the lower the geotechnical effort on a percentage basis. The projects shown have an average geotechnical expenditure of approximately 0.6% for the range of TEC shown. The large differences result mostly from actual site conditions and the geologic variability associated with the transportation projects in particular, which traverse great distances and involve widely varying geologic and site conditions.

The results are not unlike other published cost information. For example, Sowers [4] reports that for an adequate investigation (including laboratory testing and geotechnical engineering) the cost ranges from 0.05% to 0.2% of the TEC but, for critical facilities or facilities with unusual site or subsurface conditions, the cost could increase to range from 0.5% to 1% of the TEC. A range of site investigation costs as a function of TEC was reported by Sara [6]: tunnels (0.3% to 2%), dams (0.3% to 1.6%), bridges (0.3% to 1.8%), roads (0.2% to 1.5%), and buildings (0.2% to 0.5%). Littlejohn et al. [7] report that for building projects in the UK, the expenditure for site investigations ranged from 0.1% to 0.3% of TEC; however, they also report that the perception of the respective clients was that the site investigation cost, on average, five times more. We're not sure how to interpret this disparity, other than as an apparent lack of communication, coupled with scope growth.
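As a quick check on what those percentages imply, the sketch below applies the quoted ranges to an assumed total estimated cost. The TEC value is an arbitrary example, and the percentages are simply those cited above.

```python
# Translate the published geotechnical-investigation percentages into dollar
# ranges for an assumed project size. The TEC below is an arbitrary example.
tec = 500_000_000  # total estimated project cost, dollars (assumed)

ranges = {
    "Routine facilities (Sowers, 0.05% to 0.2% of TEC)": (0.0005, 0.002),
    "Critical/unusual sites (Sowers, 0.5% to 1% of TEC)": (0.005, 0.01),
    "Average of projects in Figure 3 (about 0.6% of TEC)": (0.006, 0.006),
}

for label, (low, high) in ranges.items():
    lo, hi = tec * low, tec * high
    print(f"{label}: ${lo:,.0f} to ${hi:,.0f}")
```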

Figure 2. Penetrations per Square Meter for Various Projects by Hazard Category
(Number of penetrations, SPT or CPT, versus facility area in m2, for low, moderate, and high hazard categories; trendline: y = 0.85x^0.373, R2 = 0.72)
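Reading the Figure 2 trendline for a particular site is a one-line calculation; the sketch below evaluates y = 0.85x^0.373 for a few example footprints, together with a simple factor-of-two envelope for scoping. The example areas and the envelope are arbitrary choices, and the trendline is an empirical planning aid rather than a requirement.

```python
# Evaluate the empirical Figure 2 trendline, y = 0.85 * x**0.373,
# where x = facility area (m^2) and y = number of SPT/CPT penetrations,
# plus an arbitrary factor-of-two envelope for scoping purposes.
def penetrations(area_m2):
    return 0.85 * area_m2 ** 0.373

for area in (1_000, 10_000, 100_000):        # example footprints, m^2
    y = penetrations(area)
    print(f"{area:>8,} m^2: trend {y:5.1f}   envelope {y/2:5.1f} to {2*y:5.1f}")
```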


Figure 3. Geotechnical Cost as a Function of Total Project Estimated Cost
(Geotechnical effort, %, versus total estimated cost, $M, for transportation, power, and nuclear fuel handling and LNG projects)


Unfortunately, subsurface investigations are thought of as commodities that can be purchased off the shelf. Rather, each program is unique, designed to fit the project and the unique conditions inherent in every project with which geoscience professionals become involved. We suggest that information such as that given in Figures 2 and 3 be developed and used in the planning stages of a project to educate the decision makers and clients about the level of effort required and to provide a sanity check on the baseline program established. In this way, project managers and clients are included in the decision-making process and are a part of the risk and uncertainty discussions. These results are not meant as recommendations; actual site conditions should dictate what is ultimately carried out. There are no building codes, regulatory documents, or other hard-and-fast criteria that dictate the ultimate level of effort; there are only guidelines.

There are, however, particular attributes to every well-planned and well-executed characterization program above and beyond the level of effort already discussed. These attributes are discussed next.

Attributes of a Good Characterization Program

So, what constitutes a good characterization program? First, each site and facility is unique;

thus, all characterization programs are unique, or should be. Too often, characterization programs (including the reporting) are recycled with a cut-and-paste mentality. Although this approach may suffice in some instances, it is a slippery slope that should really be avoided. For a characterization program to be as successful as possible, it must be tailored to the specific project under consideration and must be sufficiently flexible to adapt to changing conditions as they are encountered.

A general approach that we have developed over the years consists of phasing the investigation, employing the observational method principles suggested by Peck [8], among others. From our experience, a successful program is done in five basic phases: (1) reconnaissance, (2) proposal or preliminary design, (3) detailed design, (4) construction, and (5) post-construction monitoring. Each phase has a specific purpose and can vary considerably, given the specific project conditions.

The reconnaissance phase is generally done for planning purposes and feasibility studies. The effort generally entails researching the site and surrounding area by reviewing historical reports, topographic maps, geologic maps, soil surveys, aerial photographs, field visits, and performance surveys of existing structures.

The proposal or preliminary design phase may include only the reconnaissance phase, but it could also include a limited field exploration with widely spaced borings, CPTs, and geophysical tests. It can include some laboratory testing and simplified analyses for conceptual design and/or cost estimating purposes. The detailed design phase is where the bulk of the characterization program is performed. It includes detailed field exploration, such as sample borings (SPT and undisturbed sample borings); CPT, DMT, and PMT soundings; FVSTs; and geophysics. Representative samples of the subsurface materials are taken and sent to a laboratory for testing. Testing generally includes index tests and tests for static and dynamic strength and compressibility. Depending on the size of the project and the complexity of the subsurface, this phase may be subdivided into additional phases. For example, the initial phase might include CPT soundings to determine site stratigraphy. A second phase would then target specific horizons for undisturbed samples for laboratory testing, in addition to the more routine SPT borings. In our experience, for critical projects (critical can be defined in terms of safety or monetary expenditure), a phased approach for the detailed design phase is highly recommended. It allows pinpoint sampling of specific horizons rather than sampling at preselected depths. This tends to focus the effort on those strata that have the greatest potential effect on the facility. It also adds needed flexibility to the program, which is required if the exploration program is to be successful. Without the flexibility to adjust locations, depths, sample types, and the type of exploration to meet the conditions encountered, the characterization program is doomed. Unfortunately, in many cases, once the program has been agreed to and initiated, cost and schedule tend to be managed at the expense of gathering needed data. Communication with the project team, and in particular the project manager, is critical to success. This also requires full-time oversight and direction of the program by qualified geotechnical engineers and geologists dedicated to the effort who will continue to follow through on the project as it moves from the investigation phase into the design and, later, the construction phases. Phases 1, 2, and 3 should be carried out on every project. The inclusion of Phases 4 and 5 (construction phase and post-construction


monitoring phase, respectively) depends on the success of the initial program and on any scope changes or any unknown subsurface conditions encountered during construction. However, the level of effort required for each phase may vary considerably based on the type and size of the project and the complexity of the subsurface. The key point is that whatever the program entails, it needs to be flexible and the geoscience professional must be able and allowed to adapt the program to the conditions encountered. In our experience, this does not necessarily mean that the program will grow; however, communication with the project team and/or owner is crucial. On any project, large or small, simple or complex, work is still done against a schedule and budget. And any deviation from either causes concern, even though the deviation may be valid given the subsurface conditions that dictated the change.

Use of the CPT
Following the trend observed in the industry in general, the use of CPT technology at the SRS, and within Bechtel, has increased progressively since the late 1980s in an effort to meet the aforementioned objectives on a project-by-project basis. This evolution has resulted in a basic exploration philosophy for critical facilities: Use the CPT early and often in a project, followed by borings and pinpoint sampling of targeted strata for further evaluation and laboratory testing. Several particularly important advantages of CPT technology have been recognized and, thus, used to further enhance the quantity and quality of geotechnical exploration that we have performed:

• More exploratory penetrations due to lower cost and less field time compared to traditional drilled borings (CPTs at the SRS are about one-half to one-third the cost and take about one-fifth the time of an SPT boring of equal depth.)
• Higher vertical resolution due to nearly continuous measurements, allowing for superior stratigraphic interpretation and detection of layers of special interest, including very thin, loose, or compressible layers, which can be used to determine target intervals for further adjacent sampling and subsequent laboratory testing
• Highly repeatable measurements within similar material types or layers because of standard and automatic testing and data acquisition methods


• Multiple measured parameters, including tip stress, sleeve stress, friction ratio, pore pressure, and shear wave velocity, for resolving material characteristics, including initial stiffness

At the SRS, and within Bechtel, the CPT is used primarily to establish stratigraphy, identify any anomalous strata (soft or compressible soil), and acquire a preliminary estimation of specific engineering soil properties for design. A word of caution, however: verification and calibration of site-specific correlations of engineering parameters determined with CPT parameters is highly recommended, since correlations shown in the literature do not fit all conditions.
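The kind of simple screening that near-continuous CPT data support can be sketched as follows. This is an illustration only, not the SRS procedure: the friction ratio is the sleeve resistance divided by the tip resistance (expressed in percent), the tip resistance is normalized to one atmosphere using an assumed stress exponent of 0.5, and the threshold and sounding values are hypothetical. Any correlation built on such parameters should be verified and calibrated site-specifically, as noted above.

```python
PA = 0.101  # atmospheric pressure, MPa


def normalized_tip(qt_mpa: float, sigma_v_eff_mpa: float, exponent: float = 0.5) -> float:
    """Tip resistance normalized to one atmosphere (stress exponent of 0.5 assumed)."""
    return qt_mpa * (PA / sigma_v_eff_mpa) ** exponent


def screen_cpt(depths_m, qt_mpa, fs_mpa, sigma_v_eff_mpa, qt1_threshold=2.0):
    """Flag depths with low normalized tip resistance as candidate targets
    for adjacent undisturbed sampling; also report the friction ratio."""
    flagged = []
    for z, qt, fs, sv in zip(depths_m, qt_mpa, fs_mpa, sigma_v_eff_mpa):
        rf = 100.0 * fs / qt          # friction ratio, %
        qt1 = normalized_tip(qt, sv)  # normalized tip resistance
        if qt1 < qt1_threshold:
            flagged.append((z, round(qt1, 2), round(rf, 1)))
    return flagged


# Hypothetical sounding values (MPa); a real sounding is near continuous.
depths = [2.0, 4.0, 6.0, 8.0]
qt     = [3.5, 1.2, 6.0, 0.9]
fs     = [0.05, 0.06, 0.08, 0.07]
sigv   = [0.03, 0.06, 0.09, 0.12]
print(screen_cpt(depths, qt, fs, sigv))
```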

AGED SOILS AT THE SRS

In the shallow subsurface beneath the SRS, Eocene and Miocene age (35 to 50 million years old) sediments of the Altamaha, Tobacco Road, and Upper Dry Branch Formations are composed primarily of laminated, gap-graded, clayey sands (SCs) deposited under alternating marginal marine and fluvial conditions. The clay, chiefly kaolinite and illite, binds the sand grains and appears to have been formed by in situ weathering. The CPT tip resistances of this material range from less than 1 MPa (10 tsf) to about 15 MPa (157 tsf), and the CPT friction ratios (sleeve resistance divided by tip resistance) range from less than 1% to over 10%. Corresponding SPT N-values range from less than 10 blows to over 20 blows per 0.3 meter (1 foot). Because of the relatively low penetration values and the relatively high seismic exposure (proximity to Charleston, South Carolina), studies of the liquefaction vulnerability using the empirical liquefaction chart suggested by Seed et al. [9] indicate that the site is potentially vulnerable to seismic liquefaction. It is well known that this empirical chart is based on observations of the performance of Holocene deposits. Since the soils at the SRS are geologically much older than Holocene age, the question logically arose regarding whether the empirical chart was appropriate for the liquefaction evaluation at the SRS. To resolve the concern, two tasks were completed: an extensive field and laboratory geotechnical investigation at the site and a review of available opinions and data in the technical literature on the liquefaction vulnerability of geologically old sand deposits.

Geotechnical Investigations at the SRS
For the formations of interest (Tobacco Road and Dry Branch), there were no paleoliquefaction events (case histories) to draw upon at the SRS or in the vicinity. For this reason, the decision was made to perform a detailed geotechnical exploration program, including field testing, undisturbed sampling, and dynamic testing of carefully sampled soil specimens in the laboratory. The program was developed and implemented by Bechtel Savannah River Incorporated (BSRI), and the dynamic laboratory testing was carried out at the University of California at Berkeley (UCB) laboratory (BSRI [10, 11]). Samples were obtained by a fixed-piston sampler using controlled techniques and sampling procedures well established at the SRS. Measurements, including X-ray photography, were performed on each sample tube prior to packing and transporting and after being received at the UCB laboratory. The laboratory testing included index testing, the determination of dynamic strength and of volumetric strain after liquefaction, and an evaluation of the influence of confining pressure, leading to site-specific recommendations for Kσ, a factor that normalizes the cyclic resistance of a soil to an overburden pressure equal to one atmosphere. All of the test results were correlated back to a sample-specific (N1)60 or (qt)1, where (N1)60 is the SPT penetration resistance normalized to one atmosphere and a 60% energy level and (qt)1 is the CPT tip resistance normalized to one atmosphere.

Site-specific sampling and laboratory testing were completed for two facilities at the SRS. In the first, a series of 17 stress-controlled, isotropically consolidated, undrained cyclic triaxial tests were performed for Site A. The samples were obtained adjacent to (within 1.5 to 3 meters [5 to 10 feet] of) SPT boreholes at locations exhibiting low N-values. To aid in the evaluation, SPT energy measurements were obtained and used later to correct the field N-values to N60. Although the fines content (FC) of the samples varied, a single cyclic resistance ratio (CRR) design curve for these soils was established. The overall assessment resulted in three data points relating (N1)60 to CRR. The samples tested to develop the SRS curve were classified as SC soils (Unified Soil Classification System [USCS]) and had plastic fines with contents ranging from about 9% to 29%, with an average of about 17%.

For the second site-specific sampling and laboratory testing, to correlate CPT (qt)1 with the cyclic resistance values obtained in the laboratory, CPT soundings were pushed adjacent to the Site A borings described above. In the same way, 18 additional high quality, fixed-piston samples

were obtained at Site B from boreholes adjacent to (within 1.5 to 3 meters [5 to 10 feet] of) 18 CPT soundings. These samples were also sent to the UCB laboratory for dynamic testing (stress-controlled, isotropically consolidated, undrained cyclic triaxial tests). The laboratory results were evaluated in the same manner described above except that the CPT (qt)1 was used instead of (N1)60. The samples tested were SC soils and had plastic FCs ranging from about 16% to 34%, with an average of about 24%. Table 1 summarizes the relevant data for the combined data set. Note that although 35 cyclic triaxial tests were performed, only 12 data points are shown. This is due to grouping like material in terms of FC and (qt)1. Prior to the reevaluation (discussed subsequently), a suite of curves based on plastic FC was established. The lines representing various FCs were constructed based on the laboratory test results described above, the trends of Seed et al.'s empirical chart [9] (i.e., the clean sand curve passing through the origin of coordinates), and engineering judgment, assuming the suite of

curves was more or less parallel. That relationship (developed in 1994–1995) has recently undergone a reevaluation to take into account newer information (since 1995), including results from Youd et al. [12] and Idriss and Boulanger [13]. The reevaluation included a review of all the SRS data but centered, in particular, on the shape of the clean (5% fines) curve at low penetration resistances. For example, both Youd et al. [12] and Idriss and Boulanger [13] show the clean sand (5% fines) curve becoming flatter at low penetration resistances and intersecting the ordinate at a CRR value of 0.05. For the reevaluation of the SRS data, however, we relied more on the Idriss and Boulanger [13] relationship because (1) the Idriss and Boulanger clean curve fits our site-specific data more closely than the Youd et al. curves, and (2) as expected, at higher penetration resistances, the Idriss and Boulanger curve for Holocene soils results in a more conservative estimate of CRR. Thus, for the revised SRS clean curve, we adopted the Idriss and Boulanger relationship for the CPT to construct the revised SRS-specific relationships.

Table 1. SRS Data Summary

Data Point | Site | Geologic Formation | USCS | (qt)1, MPa | Dr, % | (Vs)1, m/s | Fines, % | LL, % | PI, % | γd, kN/m³ | CRRf
1  | A | UTR     | SC            | 0.9 | <5 | 293 | 28.7 | 51  | 30  | 16.6 | 0.167
2  | A | UTR     | SC            | 2.0 | 15 | 268 | 18.5 | 49  | 26  | 15.5 | 0.138
3  | A | LTR     | SP-SM         | 2.3 | 20 | 255 | 9.6  | NP  | NP  | 15.7 | 0.095
4  | A | LTR     | SC, SM, SP-SC | 3.3 | 32 | 257 | 15.7 | 41  | 17  | 15.2 | 0.135
5  | A | LTR     | SP-SM, SC     | 5.0 | 45 | 253 | 10.8 | NP* | NP* | 15.5 | 0.115
6  | B | TR3     | SC            | 1.7 | 10 | 247 | 34.0 | 47  | 28  | 16.5 | 0.152
7  | B | TR3     | SC            | 0.5 | <5 | 223 | 33.6 | 48  | 32  | 17.1 | 0.173
8  | B | TR3     | SM-SC, SC     | 1.1 | <5 | 267 | 26.3 | 50  | 29  | 16.4 | 0.165
9  | B | TR3     | SC            | 0.6 | <5 | 201 | 20.5 | 48  | 29  | 16.6 | 0.149
10 | B | TR3/TR1 | SC            | 1.8 | 11 | 162 | 19.7 | 60  | 37  | 16.7 | 0.134
11 | B | DB1/3   | SC            | 1.9 | 14 | 269 | 16.1 | 39  | 18  | 16.3 | 0.117
12 | B | DB1/3   | SP-SC, SC     | 1.1 | <5 | 206 | 18.6 | 80  | 62  | 15.2 | 0.139

Notes: CRRf = field-corrected cyclic resistance ratio; DB1 = Dry Branch 1; DB3 = Dry Branch 3; LL = liquid limit; LTR = Lower Tobacco Road; NP = non-plastic; PI = plasticity index; (qt)1 = normalized cone tip resistance; SC = clayey sand; SM = silty sand; SP = poorly graded sand; TR1 = Tobacco Road 1; TR3 = Tobacco Road 3; USCS = Unified Soil Classification System; UTR = Upper Tobacco Road; (Vs)1 = normalized shear wave velocity (normalization per Andrus and Stokoe [14]); γd = dry density.
The estimate of relative density (Dr) is based on Tatsuoka et al. [15] (given in Ishihara and Yoshimine [16]). * Two of the three samples for Data Point 5 were NP.


Table 2. Summary of CRR Design Curve Factors

FC Curve, % | Data Point No. (Table 1) | Actual FC, % | (qt)1, MPa | CRRf | CRRI/B | Ratio CRRf/CRRI/B | Ratio Selected for FC Group
10 | 3  | 9.6  | 2.3 | 0.095 | 0.058 | 1.64 | 1.6
10 | 5  | 10.8 | 5.0 | 0.115 | 0.079 | 1.45 | 1.6
15 | 4  | 15.7 | 3.3 | 0.135 | 0.064 | 2.10 | 2.1
15 | 11 | 16.1 | 1.9 | 0.117 | 0.056 | 2.11 | 2.1
20 | 2  | 18.5 | 2.0 | 0.138 | 0.056 | 2.46 | 2.6
20 | 9  | 20.5 | 0.6 | 0.149 | 0.051 | 2.93 | 2.6
20 | 10 | 19.7 | 1.8 | 0.134 | 0.055 | 2.44 | 2.6
20 | 12 | 18.6 | 1.1 | 0.139 | 0.052 | 2.67 | 2.6
25 | 1  | 28.7 | 0.9 | 0.167 | 0.052 | 3.24 | 3.1
25 | 8  | 26.3 | 1.1 | 0.165 | 0.052 | 3.17 | 3.1
25 | 9  | 20.5 | 0.6 | 0.149 | 0.051 | 2.93 | 3.1
30 | 1  | 28.7 | 0.9 | 0.167 | 0.052 | 3.24 | 3.4
30 | 7  | 33.6 | 0.5 | 0.173 | 0.050 | 3.43 | 3.4

Note: Data Point 6 from Table 1 is not in Table 2. This data point has not been included in the evaluation because it is not consistent with the results of the entire data set; we believe this data point to be somewhat anomalous. CRRI/B refers to the CRR using the Idriss and Boulanger [13] clean curve for CPT.

To develop the SRS aged clean curve, we used the low end of the strength gain factor range (1.3) proposed by Lewis et al. [17] for clean sands (discussed below). Thus, for the revised SRS aged clean sand relationship, the y intercept was 1.3 times 0.05 (the revised ordinate for the clean curve, Youd et al. [12]), or 0.065. In addition, we assumed the shape of the revised clean sand curve to be similar to that of the Idriss and Boulanger [13] clean sand curve, which is, in turn, similar to the shape given in Youd et al. [12] at low penetration resistances. Thus, for the clean aged curve, we applied a factor of 1.3 over all penetration resistances to the Idriss and Boulanger [13] clean curve to derive the CRR corresponding to the aged clean SRS curve. Using a constant factor to increase the curve across penetration resistances is consistent with the work of Polito [18] and Polito and Martin [19] for FCs below about 40%. Using a constant factor for a given FC independent of the penetration resistance and adopting the shape of the Idriss and Boulanger [13] Holocene clean sand curve, we developed the remainder of the SRS CRR curves for various FCs simply by applying the ratio of the site-specific data (CRRf) to

the adopted Holocene clean sand curve (CRRI/B) of Idriss and Boulanger over all penetration resistances.

Figure 4. CRR vs. CPT (qt)1 (site-specific test data with percent fines, the earlier BSRI [11] relationship, and the current revision; M = 7.5, σ'vo = 1 tsf [~0.1 MPa])

For example, the 10% FC curve used Data Points 3 and 5 from Table 1. Data Point 3 has a normalized tip stress ((qt)1) of 2.3 MPa (24 tsf), and Data Point 5 has a (qt)1 of 5.0 MPa (50 tsf). The ratios of the site-specific CRRf values to the corresponding CRR from the Idriss and Boulanger [13] clean curve are 0.095/0.058 = 1.64 for Data Point 3 and 0.115/0.079 = 1.45 for Data Point 5; considering both data points, the ratio would be about 1.6. The resulting SRS CRR curve would be 1.6 times higher than the Idriss and Boulanger [13] clean curve. In the same way, curves can be constructed for FCs of 15%, 20%, 25%, and 30%. Table 2 and Figure 4


summarize the evaluation results for each FC curve (Figure 4 also shows the BSRI [11] relationship). The data show that compared to the Idriss and Boulanger [13] Holocene clean curve for CPTs, the increase in dynamic strength ranges from 1.6 to 3.4 for FCs ranging from 10% to 30%.

Review of Data on the Performance of Aged Soil Deposits
Several investigators have addressed the issue of soil aging; among them are Youd and Hoose [20], Seed [21], Skempton [22], Kulhawy and Mayne [23], Martin and Clough [24, 25], Schmertmann [2, 26], BSRI [10, 11], Arango and Migues [27], Lewis et al. [17], and Leon et al. [28]. The results of some of the more significant findings are summarized below.

Seed [21] considers the cyclic resistance of laboratory-prepared samples and of hydraulic fills of different ages (up to about 3,000 years) and concludes that the data indicate the possibility of increases in cyclic mobility resistance on the order of 75% over the stress ratios causing cyclic pore pressure ratios of 100% in freshly deposited laboratory samples, due to long periods of sustained pressure in older deposits.

Skempton [22] discusses the evidence for increase in the deformation resistance of sand with the increased duration of sustained loading. He considers the increase in penetration resistance blow count (N-value) a reflection of the increase in resistance to deformation. He finds that the ratio between the normalized SPT (N1)60 blow count and the square of the relative density (Dr²) varies with the period of sustained loading. Skempton reports that strength gains (increases in (N1)60/Dr² relative to those predicted for samples in the laboratory) were reported for normally consolidated sands of about 14% and 57% at 10 years and >100 years, respectively, after deposition.

Kulhawy and Mayne [23] compile the values of the same parameter as Skempton [22], (N1)60/Dr², for several fine and fine to medium sand deposits of known geologic age, including some of the same data evaluated by Skempton. They conclude that the parameter (N1)60/Dr² is influenced by particle size, overconsolidation, and aging. Although they acknowledge that some of the data may be imprecise, a conservative relationship (CA = 1.2 + 0.05 log(t/100)) is developed to account for aging through the parameter (N1)60/Dr². We consider this relationship to be a lower bound of potential strength gain with time.

Lewis et al. [17] review published data compiled from the 1886 Charleston, South Carolina, earthquake. No quantitative data are available regarding the magnitude of the event or of associated peak ground accelerations; however, the earthquake's moment magnitude has since been estimated at between about 7 and 7.5. Independent studies carried out by several investigators estimate the epicentral acceleration at somewhere between 0.3 g and 1.0 g. Lewis et al. [17] conclude that a reasonable range of acceleration is between 0.3 and 0.5 g.

Relic liquefaction features have been investigated by many along the eastern seaboard (e.g., Talwani and Cox [29], Obermeier et al. [30], Dickenson et al. [31], Amick et al. [32], and Martin and Clough [24, 25]). Features were found primarily in the sands and silty sands of two ancient ridges dating back 130,000 to 230,000 years and located 10 miles inland. The beach processes led to sands and silty soils being concentrated in the highest portions of the beach ridges. For their study, Lewis et al. [17] reviewed data collected by Martin and Clough [24, 25] and Dickenson et al. [31]. The studies included borings, velocity profiles, piezocone probes, and trenches. Grain size tests showed that the sands at the sites are non-plastic (NP), clean, with FC less than 5% (14 locations) and 10% (5 locations). The sands in the remaining sites were described as poorly graded sand-silty sand (SP-SM) material.

For the evaluation of these data, Lewis et al. [17] calculated the induced cyclic shear stresses at the depths of interest for each deposit, using a peak ground surface acceleration of 0.5 g. A lower boundary was established showing the minimum stress ratios required to cause liquefaction at those sites that experienced marginal liquefaction and liquefaction. The strength gain of the boundary relative to the clean sands curve in the empirical chart by Seed et al. [9] was found to be about 2.2. Similarly, an upper boundary was established that separates the maximum cyclic stress ratios tolerated by the soil with no liquefaction from those sites that experienced limited to widespread liquefaction. The strength gain in this case was calculated to be about 3.0. In the same way, using a lower bound acceleration of 0.3 g resulted in computed strength gains of 1.3 to 1.8. (Note: The 1.3 factor was applied above for the reevaluated SRS aged clean curve.)

Arango and Migues [27] performed investigations after the occurrence of the January 17, 1994, Northridge, California, earthquake. The area selected for the study was within the Gillibrand Quarry site in the Tapo Canyon,

north of Los Angeles, California. Area acceleration levels exceeded 0.5 g, resulting in the failure of a small water-retaining dam in the quarry. In nearby Simi Valley, however, an old deposit of sand showed no signs of liquefaction. This sand has been estimated to be approximately 1 million years old. Although this deposit is now exposed in outcrops, it was previously buried by as much as 460 meters (1,510 feet) of overlying soil. It is relatively uniform, fine quartz sand (SP) with less than 5% NP fines. In its current state, the sand is lightly cemented, such that it can support vertical faces when dry but is weak enough to crush between one's fingers with the slightest pressure. Microscopic examination reveals a high degree of quartz grain overgrowth, evidence of age and burial.

The field exploration program used drilled and augered boreholes, CPT soundings, test pits, and undisturbed block sampling techniques. A total of 18 stress-controlled cyclic triaxial tests to classify and determine the static and dynamic strengths of the sand were carried out at the Geotechnical Laboratory at UCB. The range of field CRRs, based on results of laboratory testing, was estimated to vary between 0.80 and 1.37. Based on these results, and

adopting a predicted, induced cyclic stress ratio equal to 0.50 for Holocene-age sands from Seed et al. [9], the increase in dynamic strength ranges from 1.6 to 2.7. Leon et al. [28] investigated the effect of age at four sites in the South Carolina coastal plain. The parameters reviewed were SPT N-values, CPT (qc)1, and normalized shear wave velocity (Vs)1. The four sites range in age from 546 years to 450,000 years old. The FCs from samples at all of the sites ranged from 0% to 9%, averaging 4%. The results of their evaluation indicate that these coastal plain soils had increased resistance to liquefaction by a factor ranging from 1.3 to 2, with an average of 1.6 (compared to the Youd et al. [12] relationships for Holocene soils), induced by a magnitude 7.5 earthquake. The specific factors and ages reported for each of the four sites are as follows:
Site | Factor | Age, years
Ten Mile Hill Site A | 1.3 | 3,548
Ten Mile Hill Site B | 2 | 200,000
Sampit | 1.5 | 546–450,000
Gapway | 1.7 | 3,548–450,000

Figure 5. Strength Gain with Age (strength gain factor vs. age in years, spanning Holocene through Eocene epochs; data from Seed [21], Skempton [22], Kulhawy and Mayne [23], Lewis et al. [17], Arango and Migues [27], Leon et al. [28], and the current evaluation; adapted from Skempton [22]; labels next to symbols indicate maximum fines content)


Figure 5 compares the predicted strength gain from the SRS studies reported above and the historical data reviewed. Note that for the SRS reevaluated data, the strength gain is relative to the clean CPT curve from Idriss and Boulanger [13]. Using the Seed et al. [9] relationship results in strength gains of approximately 10% to 20% less. In either case, we note the consistency between the results of the SRS investigation and the field data from South Carolina and Southern California. Furthermore, the results are also compatible with the extrapolated trends suggested by Seed [21], Skempton [22], and Kulhawy and Mayne [23]. It is interesting to note that trends shown on Figure 5 relating strength gain with age using the work based on SPT N-values (Skempton [22] and Kulhawy and Mayne [23]) are at the low end of the data shown, particularly for data older than about 1,000,000 years. This may be an indication that the SPT N-value is a poor indicator of strength gain with time for very old deposits. (Note: The trend of Kulhawy and Mayne is an acknowledged conservative trend, as they show other data with higher (N1)60/Dr² for ages up to 10⁸ years.) However, it appears that the Kulhawy and Mayne relationship can be used as a lower bound for the data shown, and a similar relationship (using the same functional form suggested by Kulhawy and Mayne) can be used for an upper bound trend (CA = 1.92 + 0.23 log(t/100)).
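The two bounding trends just quoted are easy to tabulate. The short sketch below evaluates the Kulhawy and Mayne [23] lower bound and the upper bound fit given above for a few deposit ages; a base-10 logarithm and t in years are assumed, and the ages printed are arbitrary examples.

```python
import math


def strength_gain_bounds(age_years: float):
    """Lower and upper bound strength gain factors vs. deposit age:
    lower: C_A = 1.2  + 0.05 * log10(t/100)   (Kulhawy and Mayne [23])
    upper: C_A = 1.92 + 0.23 * log10(t/100)   (upper bound fit noted above)
    """
    lower = 1.2 + 0.05 * math.log10(age_years / 100.0)
    upper = 1.92 + 0.23 * math.log10(age_years / 100.0)
    return lower, upper


for t in (100, 10_000, 1_000_000, 35_000_000):
    lo, hi = strength_gain_bounds(t)
    print(f"t = {t:>11,d} yr   C_A = {lo:.2f} to {hi:.2f}")
```

For the Eocene- to Miocene-age SRS sands, the upper bound of roughly 3 from this form is broadly consistent with the 1.6 to 3.4 strength increases derived from the site-specific testing.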

TEST RESULTS, EVALUATION, AND CONCLUSIONS

The CPT has enhanced the capability to perform subsurface exploration within Bechtel and at the SRS. Using the CPT early in a project adds flexibility to the program and allows greater site coverage with a given budget in a shorter period of time. Initial program development following the suggestions of Sowers [4] provides a reasonable starting point. Properly verified site-specific correlations between CPT parameters and laboratory testing add a dimension that can be very powerful when assessing site conditions and performing design-related activities. The key component, however, in any exploration program is still effective communication with decision makers at all levels. Full-time geotechnical oversight enhances and facilitates the needed communication and allows quick and early decisions to be made. With these attributes, using the CPT results in a program that allows maximum flexibility and affords superior stratigraphic definition through continuous or near-continuous data, excellent repeatability and data reliability, and time and cost savings. The result is more high-quality data, including pinpoint sampling and testing of targeted strata. While there will always be a need for soil borings and laboratory testing, the amount should decrease with increased use of the CPT. Knowledge about the subsurface conditions will increase, however. This has been commonly recognized as far back as 1978 (Schmertmann [33]): "Although engineers with much CPT experience in a local area sometimes conduct site investigations without actual sampling, in general one must obtain appropriate samples for the proper interpretation of CPT data. But, prior CPT data can greatly reduce sampling requirements."

In terms of aging, the technical community widely recognizes that the geotechnical properties of sand deposits are influenced by their age. Cyclic resistance data about the behavior of soils under dynamic loading summarized in the widely accepted empirical chart by Seed et al. [9] are limited to the relatively (geologically speaking) young soil of Holocene age. For use at the SRS, the need arose to define the cyclic resistance of older sands. Lacking information about the performance of the sands at the site, it was necessary to carry out the field and laboratory test programs and also the literature review. The results, summarized in Figure 5, provide confidence in the validity of the investigations. The case histories reviewed in this paper confirm the observations of Professor Schmertmann, namely that age does play a major role in the strength of soil deposits and cannot, therefore, be ignored, and that strength does increase with the passing of time.

ACKNOWLEDGMENTS

Through the years, Professor Schmertmann has been involved with many Bechtel projects, including several at the SRS. It has been through this direct interaction that we have arrived at the methodologies and conclusions described herein. We owe a great deal to Professor Schmertmann for his wisdom and insight and his fundamental knowledge of soil mechanics. We also acknowledge the contributions of Laura Bagwell and Rucker Williams (former Bechtel employees and still at SRS) for their patience and much-needed assistance in preparing this manuscript.


REFERENCES
[1] J.H. Schmertmann, Static Cone To Compute Static Settlement Over Sand, Journal of the Soil Mechanics and Foundation Division, ASCE, Vol. 96, No. SM3, May 1970, pp. 10111043, http://www.insitusoil.com/ docs/CPT%20for%20Predicting%20 Settlement%20in%20sands.pdf. J.H. Schmertmann, The Mechanical Aging of Soils, Journal of Geotechnical Engineering, ASCE, Vol. 117, No. 9, September 1991, pp. 12881330, access via http://cedb.asce.org/ cgi/WWWdisplay.cgi?9104271. J.P. Gould, Problems of Site Characterization, Seminar on Site Characterization, ASCE National Capitol Section, Geotechnical Engineering Committee, 1985, pp. 45. G.F. Sowers, Introductory Soil Mechanics and Foundations: Geotechnical Engineering, Prentice Hall, Englewood Cliffs, NJ, 4th edition, 1979, see http://www.amazon.com/IntroductorySoil-Mechanics-Foundations-Geotechnical/ dp/0024138703. G.B. Baecher and J.T. Christian, Reliability and Statistics in Geotechnical Engineering, John Wiley & Sons, Chichester, West Sussex, England, 2003, see http://www.amazon.com/ Reliability-Statistics-Geotechnical-EngineeringGregory/dp/0471498335#noop. M.N. Sara, Standard Handbook for Solid and Hazardous Waste Facility Assessments, Lewis Publishers, Boca Raton, FL, 1994, see http://openlibrary.org/b/OL1398088M/ Standard_handbook_for_solid_and_ hazardous_waste_facility_assessments. G.S. Littlejohn, K.W. Cole, and T.W. Mellors, Without Site Investigation Ground is a Hazard, Proceedings of the ICE Civil Engineering, Vol. 102, Issue 2, May 1994, pp. 7278, access via http://www. icevirtuallibrary.com/content/article/ 10.1680/icien.1994.26349. R.B. Peck, Advantages and Limitations of the Observational Method in Applied Soil Mechanics, Ninth Rankine Lecture, Gotechnique, Vol. 19, Issue 2, June 1969, pp. 171187, access via http://www.icevirtuallibrary.com/ content/article/10.1680/geot.1969.19.2.171. H.B. Seed, K. Tokimatsu, L.F. Harder, and R.M. Chung, The Influence of SPT Procedures in Soil Liquefaction Resistance Evaluations, Journal of Geotechnical Engineering, ASCE, Vol. 111, No. 12, December 1985, pp. 14251445, access via http://cedb.asce.org/cgi/ WWWdisplay.cgi?8503442.

[2]

[3]

[12] T.L. Youd, I.M. Idriss, R.D. Andrus, I. Arango, G. Castro, J.T. Christian, R. Dobry, W.D.L. Finn, L.F. Harder, Jr., M.E. Hynes, K. Ishihara, J.P. Koester, S.S.C. Liao, W.F. Marcuson, III, G.R. Martin, J.K. Mitchell, Y. Moriwaki, M.S. Power, P.K. Robertson, R.B. Seed, and K.H. Stokoe, II, Liquefaction Resistance of Soils: Summary Report from the 1996 NCEER and 1998 NCEER/ NSF Workshops on Evaluation of Liquefaction Resistance of Soils, Journal of Geotechnical and Geoenvironmental Engineering, ASCE, Vol. 127, Issue 10, October 2001, pp. 817833, http://ascelibrary.aip.org/getpdf/servlet/ GetPDFServlet?filetype=pdf&id=JGGEFK0001 27000010000817000001&idtype=cvips&prog=n ormal, and access via http://cedb.asce.org/cgi/ WWWdisplay.cgi?0105841. [13] I.M. Idriss and R.W. Boulanger, Semi-Empirical Procedures for Evaluating Liquefaction Potential During Earthquakes, Proceedings of the Joint 11th International Conference on Soil Dynamics and Earthquake Engineering (ICSDEE) and the 3rd International Conference on Earthquake Geotechnical Engineering (ICEGE), Berkeley, CA, January 79, 2004, pp. 3256, http://cee.engr.ucdavis.edu/faculty/boulanger/ PDFs/2004/Idriss_Boulanger_3rd_ICEGE.pdf. [14] R.D. Andrus and K.H. Stokoe, II, Liquefaction Resistance of Soils From Shear-Wave Velocity, Journal of Geotechnical and Geoenvironmental Engineering, ASCE, Vol. 126, No. 11, November 2000, pp. 10151025, http://ascelibrary.aip.org/getpdf/servlet/ GetPDFServlet?filetype=pdf&id=JGGEFK0001260 00011001015000001&idtype=cvips&prog=normal. [15] F. Tatsuoka, S. Zhou, T. Sato, and S. Shibuya, Method of Evaluating Liquefaction Potential and its Evaluation, in Seismic Hazards in the Soil Deposits in Urban Areas report, Ministry of Education of Japan, Tokyo, Japan, 1990, pp. 75109 (in Japanese). [16] K. Ishihara and M. Yoshimine, Evaluation of Settlement in Sand Deposits Following Liquefaction During Earthquakes, Soils and Foundations, Vol. 32, No. 1, March 1992, P173188, see http://www.jiban.or.jp/e/sf/sf.html. [17] M.R. Lewis, I. Arango, J.K. Kimball, and T.E. Ross, Liquefaction Resistance of Old Sand Deposits, Proceedings of the 11th Pan-American Conference on Soil Mechanics and Geotechnical Engineering, Foz do Iguassu, Brazil, August 812, 1999, pp. 821829, see http://www.issmge.org/ web/page.aspx?refid=176. [18] C.P. Polito, The Effects of Non-Plastic and Plastic Fines on the Liquefaction of Sandy Soils, PhD Thesis, Virginia Polytechnic Institute and State University, Blacksburg, VA, December 10, 1999, http://scholar.lib.vt.edu/ theses/available/etd-122299-125729/ unrestricted/Dissertation.pdf. [19] C.P. Polito and J.R. Martin, Effects of Nonplastic Fines on the Liquefaction Resistance of Sands, Journal of Geotechnical and Geoenvironmental Engineering, ASCE, Vol. 127, Issue 5, May 2001, pp. 408415, http://ascelibrary.aip.org/getpdf/ servlet/GetPDFServlet?filetype=pdf&id= JGGEFK000127000005000408000001&idtype= cvips&prog=normal.

[4]

[5]

[6]

[7]

[8]

[9]

[10] Bechtel Savannah River, Inc. (BSRI), Savannah River Site, Replacement Tritium Facility, Geotechnical Investigation (U), Report No. WSRC-RP-93-606, Aiken, SC, 1993. [11] Bechtel Savannah River, Inc. (BSRI), In-Tank Precipitation Facility (ITP) and H-Tank Farm (HTF) Geotechnical Report (U), Report No. WSRC-TR-95-0057, Aiken, SC, 1995.


[20] T.L. Youd and S.N. Hoose, Liquefaction Susceptibility and Geologic Setting in Dynamics of Soil and Soil Structures, Proceedings of the Sixth World Conference on Earthquake Engineering, New Delhi, India, January 1977, Vol. 3, pp. 21892194. [21] H.B. Seed, Soil Liquefaction and Cyclic Mobility of Evaluation for Level Ground During Earthquakes, Journal of the Geotechnical Engineering Division, ASCE, Vol. 105, No. 2, February 1979, pp. 201255, access via http://cedb.asce.org/cgi/ WWWdisplay.cgi?5014380. [22] A.W. Skempton, Standard Penetration Test Procedures and the Effects in Sands of Overburden Pressure, Relative Density, Particle Size, Aging and Overconsolidation, Gotechnique, Vol. 36, Issue 3, September 1986, pp. 425447, access via http://www.icevirtuallibrary.com/content/ article/10.1680/geot.1986.36.3.425. [23] F.H. Kulhawy and P.W. Mayne, Manual on Estimating Soil Properties for Foundation Design, EPRI EL-6800, Final Report 1493-6, Electric Power Research Institute (EPRI), Palo Alto, CA, August 1990, http://www.vulcanhammer.net/ geotechnical/EL-6800.pdf. [24] J.R. Martin and G.W. Clough, Geotechnical Setting for Liquefaction Events in the Charleston, South Carolina Vicinity, H. Bolton Seed Memorial Symposium, 1990, Vol. 2, p. 313. [25] J.R. Martin and G.W. Clough, Seismic Parameters From Liquefaction Evidence, Journal of Geotechnical Engineering, ASCE, Vol. 120, No. 8, August 1994, pp. 13451361, access via http://cedb.asce.org/cgi/ WWWdisplay.cgi?9403694. [26] J.H. Schmertmann, Update on the Mechanical Aging of Soils, for the Sobre Envejecimiento de Suelos symposium, The Mexican Society of Soil Mechanics, Mexico City, Mexico, 1993. [27] I. Arango and R.E. Migues, Investigation of the Seismic Liquefaction of Old Sand Deposits, Report on Research, Bechtel Corporation, National Science Foundation Grant No. CMS-94-16169, San Francisco, CA, 1996. [28] E. Leon, S.L. Gassman, and P. Talwani, Accounting for Soil Aging When Assessing Liquefaction Potential, Journal of Geotechnical and Geoenvironmental Engineering, Vol. 132, Issue 3, March 2006, pp. 363377, http://ascelibrary.aip.org/getpdf/servlet/ GetPDFServlet?filetype=pdf&id= JGGEFK000132000003000363000001&idtype= cvips&prog=normal. [29] P. Talwani and J. Cox, Paleoseismic Evidence for Recurrence of Earthquakes Near Charleston, South Carolina, Science, Vol. 229, No. 4711, July 1985, pp. 379381, access via http://www.sciencemag.org/cgi/content/ abstract/229/4711/379. [30] S.F. Obermeier, R.B. Jacobson, D.S. Powars, F.E. Weems, D.C. Hallbick, G.S. Gohn, and H.W. Markewick, Holocene and Late Pleistocene Earthquake-Induced Sand Blows in Coastal South Carolina,

Proceedings of the Third U.S. National Conference on Earthquake Engineering, Charleston, SC, Vol. 1, 1986, p. 197. [31] S.E. Dickenson, J.R. Martin, and G.W. Clough, Evaluation of the Engineering Properties of Sand Deposits Associated With Liquefaction Sites in the Charleston, SC Area: A Report of First-Year Findings, submitted to the U.S. Geological Survey, June 1988, see http://nisee.berkeley.edu/elibrary/Text/S20113. [32] D. Amick, R. Gelinas, G. Maurath, R. Cannon, D. Moore, E. Billington, and H. Kemppinen, Paleoliquefaction Features Along the Atlantic Seaboard, Report NUREG/CR-5613, U.S. Nuclear Regulatory Commission, Washington, DC, October 1990, access via http://www.osti.gov/energycitations/product. biblio.jsp?osti_id=6151611. [33] J.H. Schmertmann, Guidelines for Cone Penetration Test Performance and Design, Report No. FHWATS-78-209, US Department of Transportation, Federal Highway Administration, Offices of Research and Development, Washington, DC, 1978.

Under the title "Site Characterization Philosophy and Liquefaction Evaluation of Aged Sands: A Savannah River Site and Bechtel Perspective," this paper was published in March 2008 in From Research to Practice in Geotechnical Engineering (ASCE Geotechnical Special Publication [GSP] No. 180), a volume honoring Dr. John H. Schmertmann, P.E., Professor Emeritus, Department of Civil and Coastal Engineering, University of Florida. The paper has been edited and reformatted to conform to the style of the Bechtel Technology Journal. It is reprinted with permission from the American Society of Civil Engineers (ASCE).

BIOGRAPHIES
Michael R. Lewis is Bechtel's corporate geotechnical engineering lead and heads the Geotechnical Engineering Technical Working Group. He started work at Bechtel immediately after college as a field soils engineer on the WMATA Metro subway project in Washington, DC. During his 35 years with the company, he has been involved with nearly every type of project that Bechtel has completed, from fossil and nuclear power to LNG, hydroelectric, mass transit and tunnels, airports, railroads, hotels, theme parks, bridges, highways, pipelines, mines, and smelters, and even a palace for the Sultan of Brunei. Mike has authored or co-authored more than 25 technical papers and has written or co-written several hundred internal reports for Bechtel. He has received three Bechtel Outstanding Technical Paper awards. Mike is an ASCE Fellow and a member of both the International Society of Soil Mechanics and Geotechnical Engineering and the American Nuclear Society working committee on seismic instrumentation at nuclear power facilities. As a member of the


Nuclear Energy Institute Seismic Task Force Team, which is focused on seismic issues related to the nuclear renaissance, he was the principal author of the NEI white paper regarding shear wave velocity measurements in compacted backfill. Mike has lectured at various universities and at several local, state, and national ASCE meetings, including the annual Sowers Symposium at The Georgia Institute of Technology. Mike holds a BS in Civil Engineering from the University of Illinois. He is a licensed Professional Engineer in Maryland, Florida, and Illinois.

Ignacio Arango, PhD, retired in 2003 after 18 years with Bechtel, where he was a Bechtel Fellow, a principal vice president, and the corporate manager of geotechnical engineering. He originally joined Bechtel's San Francisco office as chief geotechnical engineer, a position that involved him in all projects requiring geotechnical input. Ignacio has been retained as a corporate geotechnical engineering consultant on matters related to geotechnical and geotechnical earthquake engineering. Ignacio began his engineering career with Woodward-Clyde-Sherad and Associates, California, and later worked for a civil/geotechnical engineering practice in Colombia; for Shannon and Wilson, Washington State; and for Woodward Clyde Consultants, California. Ignacio has authored or co-authored 62 technical papers published in several journals and conference proceedings, as well as a book chapter on earth dams in Design of Small Dams (published by McGraw-Hill). He received two technical research grants from Bechtel and two technical research grants from the National Science Foundation. Ignacio has made numerous technical presentations at conferences in multiple countries throughout Europe, Asia, and the Americas. In addition, he has given weeklong seminars in Colombia, Chile, and Argentina, for which he prepared books containing the material presented and provided them to all seminar participants. Ignacio is currently a member of the ASCE, the Earthquake Engineering Research Institute, and the International Society of Civil Engineers and is an honorary member of the Sociedad Colombiana de Ingenieros and the Sociedad de Ingenieros Estructurales del Ecuador. Ignacio has received three Civil Engineering degrees: a PhD from the University of California at Berkeley; an MS from the Massachusetts Institute of Technology, Cambridge; and his undergraduate degree from the Universidad Nacional de Colombia, Medellín. He is a licensed Professional Engineer in California.

Michael D. McHood is a senior geotechnical engineer in Bechtel's Geotechnical and Hydraulic Engineering Services Group. He has more than 17 years of experience in this field, with particular emphasis on site response and liquefaction analyses. Most of his career has been spent at the nuclear facilities at the DOE Savannah River Site, with recent assignments involving work on nuclear power plants as well. Mike has co-authored five technical papers, three related to liquefaction and two related to earthquake ground response. He is a member of the ASCE. Mike received his MS and BS, both in Civil Engineering, from Brigham Young University in Provo, Utah. He is a licensed Professional Engineer in South Carolina.


EVALUATION OF PLANT THROUGHPUT FOR A CHEMICAL WEAPONS DESTRUCTION FACILITY


Issue Date: December 2009

Abstract: The Pueblo Chemical Agent-Destruction Pilot Plant (PCAPP) is being designed and built to safely and efficiently destroy the stockpile of chemical weapons stored at the US Army Pueblo Chemical Depot (PCD). The facility must destroy more than 2,600 tons of mustard chemical warfare agent. The project team decided early in the design phase to prepare a discrete event model of the destruction process to conduct what-if analyses for design decisions and predict the plant's overall operating schedule. The project also funded early developmental testing for key first-of-a-kind (FOAK) equipment and performed associated throughput, reliability, availability, and maintainability (TRAM) evaluations. The model and TRAM evaluations have proven invaluable in conducting what-if as well as throughput and availability analyses, selecting unit operating scenarios, evaluating system and equipment redundancy, and planning for maintenance and spare parts needs. The model also allows for a stochastic analysis of plant operations, including processing rates for equipment, routine plant inspections, preventive maintenance, random and expected equipment failure rates, and repair durations. This paper describes the development and use of the model, application of TRAM evaluation data during equipment design and testing, and suggestions for future application to chemical process plants.

Keywords: Bechtel Pueblo Team (BPT); blister agent; chemical agent destruction; chemical process plant; chemical weapon demilitarization; Pueblo Chemical Agent-Destruction Pilot Plant (PCAPP); first-of-a-kind (FOAK) equipment; operations schedule; plant life-cycle cost evaluations; throughput and availability analysis (TAA); throughput model; throughput, reliability, availability, and maintainability (TRAM)
INTRODUCTION

The Pueblo Chemical Agent-Destruction Pilot Plant (PCAPP) is being built in Pueblo, Colorado, under the direction of the US Army Element, Assembled Chemical Weapons Alternatives (ACWA) program. Its purpose is to safely and efficiently destroy the stockpile of chemical weapons stored at the Pueblo Chemical Depot (PCD). ACWA selected the Bechtel Pueblo Team (BPT), an integrated contractor team consisting of Bechtel National, Inc. (BNI), Parsons, Battelle Memorial Institute, and URS Corporation (formerly Washington Demilitarization Company), to design, construct, systemize, pilot test, operate, and close the PCAPP facility. The Pueblo stockpile consists of two chemical agent types: HD (distilled mustard) and HT (mixture of HD and T agents). These blister agents are stored in three different caliber munition types: 155 mm projectiles, 105 mm projectiles, and 4.2 in. mortars. A cutaway view of a typical mortar and a 105 mm projectile is shown in Figure 1.

Christine Statton, cstatton@bechtel.com
August D. Benz, adbenz@bechtel.com
Craig A. Myler, PhD, cmyler@bechtel.com
Wilson Tang, wrtang@bechtel.com
Paul Dent, pmdent@bechtel.com

Figure 1. Cutaway of a Typical Mortar and 105 mm Projectile

The Pueblo stockpile totals approximately 2,600 tons of chemical agent. The PCAPP design uses chemical neutralization followed by biotreatment to destroy the agent. First, the munitions are unpacked, disassembled, and demilitarized in the enhanced reconfiguration building (ERB). Uncontaminated dunnage (pallets and boxes) and energetics (bursters and propellants) are shipped off site for disposal at commercial waste treatment facilities. Next, the munition bodies are deformed, drained, and rinsed in the automated cavity access machines (CAMs) and thermally treated. Chemical neutralization of the agent followed by biotreatment is performed in two major process areas: the agent processing building (APB) and the biotreatment area (BTA). The facility layout and a physical representation of the three major process areas (ERB, APB, and BTA) are shown in Figure 2.

ABBREVIATIONS, ACRONYMS, AND TERMS

ANR      agent neutralization reactor
APB      agent processing building
ACWA     US Army Element, Assembled Chemical Weapons Alternatives
BC       brine concentrator
BNI      Bechtel National, Inc.
BPT      Bechtel Pueblo Team
BTA      biotreatment area
CAM      cavity access machine
E/C      evaporator/crystallizer
ERB      enhanced reconfiguration building
FOAK     first of a kind
HD       distilled mustard (chemical agent)
HT       mixture of HD and T chemical agents
ICB      immobilized cell bioreactor
MTU      munitions treatment unit
MWS      munitions washout system
PCAPP    Pueblo Chemical Agent-Destruction Pilot Plant
PCD      (US Army) Pueblo Chemical Depot
PMD      projectile/mortar disassembly
RR       reconfiguration room
TAA      throughput and availability analysis
TRAM     throughput, reliability, availability, and maintainability

The key treatment steps within these three major process areas are illustrated in Figure 3. It should be noted that the flow diagram does not show ancillary equipment and operations such as interim storage, off-gas treatment, utilities, treatment of leaking or reject munitions, or other ancillary functions. During PCAPP facility design development, it was apparent that the project's life-cycle cost would depend highly on the facility's operations schedule. Design features and operating parameters that shorten overall operations schedules tend to result in lower project life-cycle
Figure 2. PCAPP Site Layout Showing Three Major Process Areas (ERB, APB, and BTA)


Figure 3. Key Treatment Steps Within Major Process Areas (ERB, explosives separation: unpack and baseline reconfiguration and projectile/mortar disassembly [PMD]; APB, chemical agent destruction: munitions washout systems [MWSs], agent neutralization reactors [ANRs], and munitions treatment units [MTUs]; BTA, liquid effluent treatment: immobilized cell bioreactors [ICBs] and water recovery)

cost, unless increased capital or operating costs for additional treatment equipment prove to be excessive. To aid in evaluating potential design changes and to improve schedule, a detailed throughput and availability analysis (TAA) model was developed. Using the TAA model, several alternative treatment schemes and configurations were evaluated to:

• Select the appropriate number of parallel processing lines
• Determine utility and support services requirements
• Evaluate operating labor requirements
• Select the munitions campaign sequence

The analysis indicated that concurrent operation of the APB and the ERB (as opposed to an early enhanced munitions reconfiguration) would lead to the lowest life-cycle cost, so the project selected this approach as the basis for final plant design.
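The kind of trade such what-if runs capture can be previewed with a simple deterministic capacity check before any detailed simulation. The sketch below is purely illustrative: the stockpile count, processing rates, and availabilities are hypothetical placeholders, and the actual evaluation used the discrete event model described in the next section.

```python
STOCKPILE_MUNITIONS = 780_000  # hypothetical round count used only for illustration


def campaign_days(lines: int, rounds_per_hour: float, availability: float,
                  hours_per_day: float = 24.0) -> float:
    """Screening estimate of campaign duration for a given number of parallel
    processing lines, each with an assumed sustained rate and availability."""
    effective_rate = lines * rounds_per_hour * availability * hours_per_day
    return STOCKPILE_MUNITIONS / effective_rate


for n_lines in (2, 3, 4):
    days = campaign_days(n_lines, rounds_per_hour=20.0, availability=0.7)
    print(f"{n_lines} lines: ~{days:,.0f} operating days")
```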

The TAA model has been updated and expanded as the detailed facility design has been completed, and it is currently used to evaluate the impacts of various funding scenarios, confirm operations schedules, and support life-cycle cost estimate updates for the project.

DEVELOPMENT AND USE OF THE TAA MODEL

iGrafx Analytical Tool
The TAA model was developed using a discrete event model created in Corel iGrafx Process 2005 for Six Sigma (v.10) software. The iGrafx analytical tool allows discrete-event modeling that reflects process behavior for each of the key treatment steps in the PCAPP, including the effects of resource availability (e.g., munitions feed, utilities supply, and work schedule) and equipment capacity and availability (e.g., design and expected throughput, downtime, and preventive maintenance). The software allows


consideration of both batch and continuous operations, which is necessary to describe all critical PCAPP treatment steps. The facility is modeled on a first-in/first-out approach. To reduce model complexity, the details of campaign changes and the effects of external influences (for example, weather, loss of outside utility supply, and security) are evaluated independently of the TAA model. To model the PCAPP facility, a detailed, interlinked, sequential flow diagram of key activities within each treatment step was prepared. The munitions, referred to as transactions, flow between each treatment step as the model runs. The flow of the munitions between treatment steps is controlled using attributes, functions, and logic expressions in iGrafx. Each treatment

step on the flow diagram has an assigned behavior or activity, which may include batching, resource assignment, work performance, delay, subprocess, splits of attributes, or decisions. Because an individual munition may contain several subcomponents that must be processed and tracked (for example, agent and munition type, munition packaging, propellants, and explosives), the model must reflect the interlinking activities necessary to process the subcomponents. The model must also account for subactivities within each treatment step (for the ERB: unpacking, fuse removal, burster removal, leak check, and repack). Key steps of the iGrafx model, and an overview of how the steps are interlinked, are shown in Figures 4, 5, and 6.
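To make the discrete-event approach concrete, the sketch below is a greatly simplified, hand-rolled stand-in for what the iGrafx model does on a much larger scale; it is not the project model. Munitions flow as transactions through two sequential treatment steps served by parallel machines, each processing cycle can end in a random fault with an associated repair time, and repeated Monte Carlo runs give a spread of campaign durations. All stage names, cycle times, fault probabilities, repair durations, and munition counts are hypothetical.

```python
import random
import statistics

# Hypothetical stages: (name, parallel machines, minutes per munition,
# probability a cycle ends in a fault, repair minutes per fault).
STAGES = [
    ("disassembly", 3, 2.0, 0.01, 120.0),
    ("washout",     2, 3.0, 0.02, 180.0),
]


def simulate_campaign(n_munitions: int, seed: int) -> float:
    """Return the campaign duration (hours) for one Monte Carlo replication."""
    rng = random.Random(seed)
    ready = [0.0] * n_munitions              # time each munition reaches the next stage
    for _name, n_machines, cycle_min, p_fault, repair_min in STAGES:
        ready.sort()                         # first-in/first-out at each stage
        free_at = [0.0] * n_machines         # when each parallel machine is next free
        for i, arrival in enumerate(ready):
            m = min(range(n_machines), key=free_at.__getitem__)
            start = max(arrival, free_at[m])
            downtime = repair_min if rng.random() < p_fault else 0.0
            finish = start + cycle_min + downtime
            free_at[m] = finish
            ready[i] = finish
    return max(ready) / 60.0


durations = [simulate_campaign(n_munitions=5_000, seed=s) for s in range(30)]
print(f"mean {statistics.mean(durations):.0f} h, "
      f"min {min(durations):.0f} h, max {max(durations):.0f} h")
```

In the project model, the per-activity durations, capacities, and failure/repair statistics that play the role of these placeholder numbers are the TRAM-derived inputs described below, and results from 30 or more runs are averaged to establish the operations schedule.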


Figure 4. Enhanced Reconfiguration Building in iGrafx (munition loading and unpacking, reconfiguration rooms [RRs], and projectile/mortar disassembly [PMD] machines, with leaker/reject counting and maintenance/repair logic)


Figure 5. Agent Processing Building in iGrafx (munitions washout systems [MWSs], each representing all required cavity access machines; munitions treatment units [MTUs]; and agent neutralization reactors [ANRs], with agent surge, batch release, failed-batch reprocessing, and maintenance/repair logic)


Figure 6. Biotreatment Area in iGrafx (24 immobilized cell bioreactors [ICBs], brine concentrator [BC] and evaporator/crystallizer [E/C] trains, water recovery, and maintenance/repair logic)

For each specific activity, input to the TAA model includes:

- Duration of the process activity (minutes per round or per batch)
- Capacity of the process activity (gallons, pounds, or number of munitions per hour or per batch)
- Capacity of buffer storage and tankage to allow for accumulation of material between upstream or downstream batch operations
- Expected maintenance/failure/repair frequency and duration for the activity or specific elements of the activity (time interval between events and event duration)

The above inputs are developed for each activity considering past experience with the same or similar equipment or are based on engineering judgment of equipment performance under

expected operating conditions. For newly developed and first-of-a-kind (FOAK) equipment items, these judgment inputs are critical to generating a realistic operations model. Given this criticality, the inputs are based on the results of equipment shop testing and input from a throughput, reliability, availability, and maintainability (TRAM) evaluation team (including the equipment designer/fabricator and representatives of the operations and maintenance team, among others). For each activity input, a probability occurrence set is also provided so that the model can simulate overall facility operation on a random, Monte Carlo basis. The iGrafx model also includes necessary decision and/or logic steps to define how the material is batched through each process step, how material is distributed between parallel equipment trains,


and when equipment is not available to process munitions or agent due to limiting conditions upstream or downstream of that step. Decision and/or logic steps are used to track how often a process step enters maintenance or failure mode and when the process step is available to receive the next transaction (i.e., when an agent hydrolysis reactor is available to process the next batch of agent and is not busy reprocessing a failed batch). The PCAPP TAA model includes more than 25 decision points, 15 counters at each key location, and 125 process steps. The software takes more than 10 minutes to run the entire munition stockpile through the model for a single simulation run. As the software runs, more than 5 million attributes are created and tracked. Each simulation run is generated using a different random number so that no two simulation runs produce the same results. Results from 30 or more different simulation runs are generated and averaged to obtain a probable overall schedule and to evaluate best-case and worst-case schedules. In summary, the TAA model is a detailed mathematical simulation of the physical PCAPP facility that can be readily modified or manipulated to evaluate alternative designs and operating scenarios. Once the flow sequence model is defined and specific input data are entered into the model, the software will run to virtually destroy the entire stockpile on a discrete-event basis. The results from model runs can be summarized in many standard output formats as well as in customized outputs such as:

- Overall time required for processing all munition bodies and hydrolysate
- Equipment utilization statistics
- Equipment out-of-service statistics
- Tracking and monitoring of buffer storage quantities, processing times, and processed quantities at given time intervals

An example of a customized graph created in the model to track the quantity of munitions stored in a buffer storage area over a single simulation run is provided as Figure 7.

Using the TAA Model
The primary use of the TAA model has been to determine and verify the most probable life-cycle operating case based on a given set of input data. Analysis of results for differing input data can shed light on potential design improvements, enhanced equipment reliability, operations pinch points due to equipment capacity limits, use of multiple parallel trains or added surge buffer capacity, peak utility requirements, and possible changes in weekly work schedules. The probable impact on operating schedule based on any or all of this variable input data can be evaluated and used to support final design decisions and operating strategies and in predicting funding needs for the project.

The TAA model was used early in the design phase to select plant configuration and to estimate throughput and reliability requirements. Its use identified the equipment that had the greatest impact on throughput and allowed the project to focus technical risk reduction efforts (particularly early equipment demonstration and testing) on those areas with the greatest impact on overall facility cost and operations schedule.
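The run-averaging approach described above (30 or more simulation runs, each seeded with a different random number, averaged to bracket best-case and worst-case schedules) can be illustrated with the simple sketch below. This is not the project's iGrafx model; the cycle, failure, and repair parameters shown are placeholders only.

```python
import random
import statistics

def simulate_campaign(n_rounds, cycle_min, mtbf_min, mttr_min, seed):
    """Toy discrete-event run: one process step with random failures.

    Returns total processing time in minutes for n_rounds munitions.
    """
    rng = random.Random(seed)
    clock = 0.0
    next_failure = rng.expovariate(1.0 / mtbf_min)
    for _ in range(n_rounds):
        clock += cycle_min
        if clock >= next_failure:                     # equipment failure interrupts processing
            clock += rng.expovariate(1.0 / mttr_min)  # repair duration
            next_failure = clock + rng.expovariate(1.0 / mtbf_min)
    return clock

# 30 independent runs with different seeds, then averaged and bracketed
runs = [simulate_campaign(100_000, cycle_min=2.0, mtbf_min=4_000, mttr_min=240, seed=s)
        for s in range(30)]
days = [r / (60 * 24) for r in runs]
print(f"mean {statistics.mean(days):.0f} d, best {min(days):.0f} d, worst {max(days):.0f} d")
```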


Figure 7. Custom Buffer Storage Quantities (number of munition bodies in the APB munition body storage building (MWS surge) versus time in days, plotted separately for 155 mm, 105 mm, and 4.2 in. rounds over a single simulation run)


The TAA model has proven invaluable for what-if analyses by trending the impact that changing different process variables has on the operations schedule. Series of runs were performed to determine the sensitivity of the operations schedule to changes in mechanical equipment availability as well as to other factors that could affect production rates, such as buffer capacity size and throughput rates. Table 1 shows the results from a what-if analysis of a series of runs made using reduced equipment availabilities for two key PCAPP systems: the munitions washout system (MWS) and the agent neutralization reactor (ANR) system. The analysis examined how each system independently affects the overall operations schedule assuming the same reduction in mechanical availability. Based on the results of the different runs, it can be concluded that reduced availability in the MWS had a much greater impact on the operations schedule than reduced availability in the ANR system.
Table 1. What-If Analysis with Reduced Mechanical Availability

Key System | Change in Mechanical Availability Compared to Base Case | Availability, %* | Additional Processing Time, weeks
MWS | N/A (base case) | 86, 84, 72 | N/A
MWS | -10% | 76, 74, 62 | 22
MWS | -20% | 66, 64, 52 | 51
ANR | N/A (base case) | 89 | N/A
ANR | -10% | 79 | 2
ANR | -20% | 69 | 12

* The MWS is retooled for each munition type between campaigns. MWS availability differs, depending on which munition type is processed during a campaign. The ANR system has a single availability because no changes to the system are required for the different munitions.

During the early design phase, the TAA model was coordinated with the engineering design development of the process flow diagrams, material and energy balances, and vendor data, using historical operations data from other operating chemical destruction plants and from industrial experience with similar equipment, plus results from an early prototype equipment testing effort. At this time, the TAA model is being further updated with the results from the detailed equipment testing program for plant-ready, FOAK equipment being supplied by vendors. The FOAK test program at PCAPP has been planned based on a TRAM evaluation, as discussed below.

THROUGHPUT, RELIABILITY, AVAILABILITY, AND MAINTAINABILITY EVALUATIONS


TRAM evaluations involve a disciplined collection and examination of relevant throughput, reliability, availability, and maintenance data for special, newly developed FOAK equipment designs that have not been fully demonstrated on an industrial scale. The TRAM analysis is used to determine the extent of operational testing that should be done on plant-ready equipment to confidently predict the probable throughput, reliability, availability, and maintenance performance of that equipment. After completion of FOAK demonstration testing, the TRAM program results are essential input for updating the TAA model to increase the credibility of its operations schedule and life-cycle cost estimates. To aid in understanding the roles played by TRAM evaluation, FOAK testing, and the TAA model in optimizing plant operations, it is helpful to consider a specific example relating to the PCAPP project's MWS. As mentioned previously in this paper, the MWS is one of the key treatment steps at PCAPP and utilizes an entirely new agent access and removal process for projectiles and mortars. For projectiles, the empty burster well is crushed into the munition body, followed by gravity draining of the agent and then washout of the round using high-pressure water. For mortars, the bottom of the munition body is cut off with a wheeled cutter, followed by draining and washout. The MWS also uses a commercial robot for munitions handling and transfer between several washout stations in each MWS process line. Based on a TRAM evaluation, the forthcoming FOAK testing of the MWS at the designer/fabricator's shop will be carried out in a test program equivalent to over 6 weeks of continuous, 24/7 operations on simulated projectile or mortar rounds (both pressurized and non-pressurized). Testing in this manner will confirm design capacity, define preventive and corrective maintenance frequency and durations, evaluate failure mechanisms for




MWS components, and define spare parts and assembly needs. Future PCAPP operating conditions will be simulated to the extent possible to make the testing representative. To this end, plant-ready MWS support equipment from other suppliers is also being shipped to the designer/fabricator's facility to allow integrated system testing. If failure modes are shown to exist during testing, then redesign or upgrade of components will be considered. After testing is completed, the TRAM assessment will be updated and inputs to the TAA model will be revised to reflect increased or reduced downtime. This extensive FOAK test program will provide the data needed to develop a much more realistic operations schedule and cost for PCAPP.

Besides the previously described MWS, two other key FOAK treatment steps at PCAPP (refer to the simplified block diagram of Figure 3) were the subject of detailed TRAM evaluations:

- Munitions Disassembly: This operation involves an automated enhanced reconfiguration of munitions to remove explosive components. It includes nose closure (fuse or lifting lug) removal, booster charge and booster cup removal, and burster removal. These operations are similar to operations performed by the projectile/mortar disassembly (PMD) machine used in US baseline demilitarization facilities. For PCAPP, the PMD elements have been upgraded to use a commercial robot for munitions handling and transfer between disassembly steps, and all hydraulics have been replaced by electric drives.
- Munitions Treatment Unit: The munitions treatment unit (MTU) is a muffle oven designed to thermally treat washed munition bodies to destroy any residual agent before releasing the metal bodies for offsite recycling. This continuous-belt, resistance-heated oven has been modified from a proven commercial design.

More detail on the above three FOAK treatment steps for PCAPP can be found elsewhere. [1]

The remaining four key treatment steps at PCAPP, shown in Figure 3, are as follows:

- Unpack and Baseline Reconfiguration: These two primarily manual operations have been practiced at various Army depots numerous times.

- Agent Neutralization Reactor: Agent neutralization of mustard was fully demonstrated at the Aberdeen Chemical Destruction Facility.
- Immobilized Cell Bioreactor: Immobilized cell bioreactor (ICB) treatment is widely practiced on commercial and domestic waste streams and is well demonstrated.
- Water Recovery: Water recovery using evaporation and crystallization is also widely practiced on commercial waste streams similar to the ICB effluent and is well demonstrated.

Because the four steps listed above have been previously demonstrated at industrial scale, they are considered lower-risk processes that do not require FOAK testing and full TRAM analysis. TRAM data on these key treatment steps is available based on past industrial experience.

CONCLUSIONS AND RECOMMENDATIONS


For a complex processing plant such as PCAPP, which involves several sequential batch and continuous treatment steps, it is extremely useful to perform a detailed TAA to ensure that the facility scope, operations schedule, and full life-cycle cost have a sound, credible basis. Further, if the processing plant uses equipment and systems that are new designs, a realistic TAA model should be based on a disciplined approach to developing and demonstrating relevant TRAM information for FOAK undemonstrated equipment and systems within the plant. The TAA model should be prepared early in the design process, when modeling case studies can readily influence treatment step scope and capacity. Initial modeling is usually based on rough estimates of capacity, reliability, and availability for individual steps, but these estimates are later refined as the design progresses and plant configuration is optimized. The TAA model should be used freely to investigate alternative treatment configurations such as parallel treatment trains, buffer storage, increased or decreased equipment sizes, redundancy, spare equipment, and other activities and functions. Operations and maintenance issues should be considered during TAA modeling, including realistic estimates for startup/shakedown and initial testing, repair or replacement work in toxic areas, detailed campaign change modifications, and shutdown and closure plans. The model should be updated as key design and operations decisions are made, and maintained evergreen.


Although it is possible to model every step and substep of the facility in great detail, it is best to avoid making the model overly complicated to keep the code as simple as possible, allowing it to be modified easily and run quickly. The TAA model matrix and simulation results should be verified by independent calculation. Results should be clearly communicated and documented for discussion and buy-in by key stakeholders (plant operations, regulatory agencies, owner representatives, etc.).



TRADEMARKS

Corel, iGrafx, and iGrafx Process are trademarks or registered trademarks of Corel Corporation and/or its subsidiaries in Canada, the United States, and/or other countries.

REFERENCES
[1] C.A. Myler and A.D. Benz, "Munitions Processing for the Pueblo Chemical Agent Destruction Pilot Plant," presented at the 10th International Chemical Weapons Demilitarisation Conference (CWD 2007), Brussels, Belgium, May 14-18, 2007, http://www.dstl.gov.uk/conferences/cwd/2007/pres/craig-myler-pres.pdf.

This paper expands on a PowerPoint presentation entitled "Evaluation of Plant Throughput for a Chemical Weapons Destruction Facility Using Discrete Event Modeling, Equipment Throughput Reliability Availability and Maintainability (TRAM) Evaluations, and Equipment Testing" (http://www.dstl.gov.uk/conferences/cwd/2009/pres/BenzA-01.pdf), which was presented at the 12th International Chemical Weapons Demilitarisation Conference (CWD 2009), held May 18-21, 2009, in Stratford-upon-Avon, Warwickshire, UK.

BIOGRAPHIES
Christine Statton is a senior mechanical/chemical engineer on the PCAPP project. Since joining Bechtel in 2001, she has worked for three Bechtel National, Inc., projects: the Waste Treatment and Immobilization Plant in Richland, Washington; the Aberdeen Chemical Agent Destruction Facility (ABCDF) in Aberdeen, Maryland; and, since late 2006, the PCAPP project. Christine has more than 5 years of experience in design and operations of chemical agent destruction facilities. While on the ABCDF project, she worked as a process engineer supporting plant operations. One of her first tasks on the PCAPP project was to update an existing iGrafx model to simulate the plant operations to help forecast their duration to support a life-cycle cost estimate. Since then, Christine has held increasingly responsible positions, including senior resident engineer of utility systems and engineering supervisor of the Process and Mechanical Engineering Groups. Christine holds a BS in Chemical Engineering from Pennsylvania State University and is an active member of the Society of Women Engineers and American Institute of Chemical Engineers. She is an Engineer-in-Training in Washington.

August D. Benz is a Bechtel Fellow and principal technologist with Bechtel National, Inc., in San Francisco, California. His remarkable 50-year Bechtel career encompasses experience in chemical process design, project engineering, research and development, feasibility studies, process plant startup and operations, engineering management, and project management. During the past 15 years, Gus has performed conceptual studies and detailed design and operation reviews for the US Army's chemical weapons destruction facilities in Aberdeen, Maryland; Pueblo, Colorado; and Lexington, Kentucky. Earlier, he was project manager for Phase I planning studies for the Russian chemical weapons facility in Shchuchye for the US Defense Threat Reduction Agency. Gus has written more than 30 technical papers, more than half of which have been presented in major public forums. In addition, he holds six patents related to chemical process engineering and project execution. His fields of experience include organic and inorganic chemicals, petrochemicals, polymers, fertilizers, petroleum refining, natural gas, chemical warfare agent destruction, waste treatment, environmental remediation and regulatory compliance, and alternative energy systems. He is a member of the American Institute of Chemical Engineers. Gus holds a BS in Chemical Engineering from Oregon State University and is a licensed Professional Engineer in California.

Craig A. Myler, PhD, a Bechtel Fellow, has more than 25 years of experience in the treatment and disposal of chemical agents and munitions. As chief engineer for chemical and nuclear engineering for Bechtel National, Inc., he has overall responsibility for this discipline and manages the efforts of a large and diverse group of chemical, nuclear, and safety engineering specialists. His project oversight responsibilities range from chemical demilitarization facilities to high-level radioactive waste disposal projects. Craig has developed methods for data analysis and reporting for the US Army's chemical demilitarization program and holds two patents related to chemical agent treatment and protection. He has taught chemistry and engineering at the United States Military Academy and the University of Maryland, Baltimore campus.


Craig is a senior member of the American Institute of Chemical Engineers and a member of the American Nuclear Society and Society of American Military Engineers. Craig received a PhD and an MS in Chemical Engineering from the University of Pittsburgh, Pennsylvania, and a BS in Chemistry from the Virginia Military Institute in Lexington. Wilson Tang is a senior process engineer for the Waste Treatment and Immobilization Plant in Richland, Washington. He is also currently assisting on the PCAPP project, where he previously supervised the process design team that designed the equipment to achieve the desired plant throughput. Wilson has 35 years of experience in process engineering, including chemical agent destruction, waste minimization and pollution prevention, hazardous waste handling and disposal, nuclear waste disposal, nuclear fuels reprocessing, petroleum refining, and inorganic chemicals. Before joining Bechtel, he was a process engineer for Food Machinery Corporation in California. Wilson co-authored Environmental Assessment for the Demonstration of Uranium-Atomic Vapor Laser Isotope Separation (U-AVLIS) at Lawrence Livermore National Laboratory, May 1991, Report No. DOE/EA-0447. Wilson holds a BS in Chemical Engineering from the University of California, Berkeley, and is a Six Sigma Yellow Belt.

Paul Dent is FOAK equipment manager for the Bechtel Pueblo Team executing the PCAPP project. He has more than 30 years of experience in planning, constructing, and operating industrial facilities in the transportation, environmental, and government sectors. At PCAPP, he leads the development, testing, and integration of FOAK equipment. Before joining the Bechtel Pueblo Team, Paul was county manager on Bechtel National, Inc.s, FEMA project, which installed over 36,000 temporary homes for those displaced by Hurricane Katrina. He also worked in key positions for operations, startup, and EPC management of the ABCDF. Paul joined Bechtel in 1998 with more than 20 years of experience in operations and project management for commercial hazardous waste and railroad companies. His prior accomplishments include development of a corporate project management program and a nationwide network of facilities for Chemical Waste Management, Inc.; turnaround of an unprofitable industrial recycling and transportation operation; development of a hazardous waste master plan for the country of Thailand; and construction and operation of many railroad facilities. Paul has a BSE in Civil Engineering from Princeton University and is a licensed Professional Engineer in Wisconsin.


INVESTIGATION OF EROSION FROM HIGH-LEVEL WASTE SLURRIES AT THE HANFORD WASTE TREATMENT AND IMMOBILIZATION PLANT
Issue Date: December 2009

Abstract: The Waste Treatment and Immobilization Plant (WTP) is being constructed at the US Department of Energy (DOE) Hanford Site in Washington State to treat and immobilize approximately 216 million liters (57 million gallons) of high-level radioactive waste. Of the 216 million liters, some 42 million liters (11 million gallons) of sludge could erode mixing vessels when mixed by pulse-jet mixer (PJM) devices that direct high-velocity jets against the vessel walls. Because the vessels are not designed to be replaced and are in nonaccessible locations, the erosion mechanisms and rates must be well understood and accounted for in the design so that the vessels perform safely and reliably over the plant's 40-year design life. The literature contains little information about erosion under PJM mixing conditions, and earlier evaluations involved considerable interpretation and adjustment of existing data as the basis for assumptions used in earlier predictions. Accordingly, the WTP project undertook an erosion testing program to collect data under prototypic PJM operation and waste characteristic conditions. This paper describes the mixing program process and results, including determining the waste characteristics to be tested, developing the simulant, establishing the test variables to be examined, and conducting the testing. Evaluation of the test results indicates that all WTP vessels have adequate erosion wear resistance.

Keywords: erosion, Hanford, impingement angle, pulse-jet mixer (PJM), radioactive waste, slurry, stainless steel, ULTIMET, wear plates, wear resistance

INTRODUCTION

The Hanford Site in Washington State is partially bounded by the Columbia River and contains roughly 216 million liters (57 million gallons) of mixed waste (classified by the state as radioactive dangerous waste) from Cold War plutonium production. Many of the site's 177 underground tanks are closed or in the process of being emptied. To facilitate the disposition of the waste, the US Department of Energy (DOE) is constructing the world's largest radioactive waste processing facility. In this Waste Treatment and Immobilization Plant (WTP), the waste will be immobilized in a glass matrix and contained in stainless steel canisters for safe and permanent disposal. Of the 216 million liters, about 42 million liters (11 million gallons) of insoluble sludge could erode mixing vessels as a result of the mixing action the sludge undergoes in preparation for its vitrification. Three main facilities constitute the WTP. The pretreatment (PT) facility is designed to chemically treat, separate, and concentrate the

waste received directly from the underground tanks. The high-level waste (HLW) vitrification facility will use melters to immobilize the high-level portion (insoluble solids and higher percentage of radioactive constituents) of the waste in glass. The low-activity waste (LAW) facility will similarly vitrify the low-activity waste (soluble salts). Both the PT and HLW facilities contain vessels in which the waste will be mixed by pulse-jet mixers (PJMs): 36 vessels in the PT and 4 in the HLW. A PJM is a long cylinder with a tapered nozzle, located in a process vessel, that is pressurized to expel its waste into the larger process vessel to cause mixing. During process operations, the PJM operates pneumatically in continuously alternating fill and discharge modes, with both modes facilitating mixing. PJMs are typically arrayed around the periphery of the process vessel, and their size and number are a function of the vessel and its contents. At the WTP, the number of PJMs per vessel ranges from 1 to 12.

Ivan G. Papp
igpapp@bechtel.com

Garth M. Duncan
gduncan@bechtel.com


ABBREVIATIONS, ACRONYMS, AND TERMS

ASME - American Society of Mechanical Engineers
conc. - concentration
DEI - Dominion Engineering, Inc.
DOE - US Department of Energy
HLW - high-level waste
ID - inside diameter
LAW - low-activity waste
PJM - pulse-jet mixer
PT - pretreatment
WTP - Waste Treatment and Immobilization Plant


For the WTP, the main concern is the PJMs' jet mixing action of solids that contributes to erosion of the vessel walls. The PJM jets typically have 10-centimeter (4-inch) diameter nozzles that discharge at a rate of 8 to 17 meters per second (26 to 56 feet per second). The discharge point of a PJM is nominally a distance of 1.5 times the nozzle diameter from the vessel wall. Thus, a 10-centimeter PJM nozzle would be located approximately 15 centimeters (6 inches) from the vessel wall. WTP vessels whose contents are mixed by PJMs are made of stainless steel (grade 304L or 316L). Because the vessels are not designed to be replaced and are located in high-radiation areas of the facility, erosion mechanisms and rates must be well understood and accounted for in the design so that the vessels perform safely and reliably over the plant's 40-year design life. The literature contains little information about erosion under PJM mixing conditions, so earlier evaluations involved considerable interpretation and adjustment of published experimental data. Two respected subject matter experts reviewed the WTP erosion prediction methodology and recommended that, while it appeared to be appropriate and should yield reasonable results, testing should be performed to provide greater assurance that the estimates were valid. Accordingly, the Bechtel project team undertook an erosion testing program to collect erosion rate data under prototypic PJM operation and waste characteristic conditions. The project used recently published Hanford tank farm waste chemical and physical characterization (e.g., particle size distribution) to develop a simulant that replicated waste conditions. Testing was performed at one-quarter scale to the actual design, and testing variables included pulsed vs. continuous flow, jet velocity, mean particle size, slurry concentration, average particle hardness, and impingement angle. The project used mass loss data to develop calculation exponents for velocity, size, and concentration terms so that erosion rates could be adjusted for different operating and feed conditions. The predicted erosion rate for each affected vessel was evaluated against the available erosion design allowance for that vessel. The project also performed numerous sensitivity analyses to assess the effect of different operating and waste input variables, to demonstrate design margin and robustness. Evaluation of the testing data demonstrated that no adjustment to the established design of the vessels was necessary for wear resistance.

PREVIOUS EROSION WORK TAKEN INTO ACCOUNT

Very limited data was available on WTP slurry waste conditions before Bechtel performed the tests described in this paper. Enderlin and Elmore [1] had performed some testing and investigated a zeolite water slurry in a hydroxide solution as applicable to the DOE West Valley vitrification plant. Parametric relationships used to compare slurry conditions were based on work published by Gupta et al. [2] and Karabelas [3] (based on mineral-slurry wear data reported by Karabelas and by Aiming et al. [4]) to determine erosion allowances for WTP waste slurries [3, 4]. From this previous work, Bechtel developed the exponential relationship of erosion (scar depth) to each of the main factors affecting erosion: jet velocity, mean particle size, and slurry concentration. The team also reviewed other work performed by Wang and Stack [5] and by Mishra et al. [6]. The results reported by Karabelas and by Gupta (under the specific conditions investigated) produced exponents in the range of 2 to 3 for the jet velocity term of the equation, 0.2 to 0.3 for the particle size term, and approximately 0.5 for the slurry concentration term. Across the range of conditions studied, the work Bechtel performed produced exponents, on average, of 3 for velocity, 2 for particle size, and 0.8 for concentration.
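Expressed compactly, the parametric relationship described above takes a power-law form; the exponent symbols below are introduced here only for illustration and do not appear in the cited references.

```latex
% Power-law scaling of erosion scar depth E with jet velocity V,
% mean particle size P, and slurry solids concentration C.
% Karabelas/Gupta data: a ~ 2-3, b ~ 0.2-0.3, c ~ 0.5.
% Bechtel test data (this paper): a ~ 3, b ~ 2, c ~ 0.8.
E \;\propto\; V^{a}\, P^{b}\, C^{c}
```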

206

Bechtel Technology Journal

WASTE PROPERTIES DETERMINATION

The testing program used an updated assessment of Hanford tank farm waste chemical and physical characterization (e.g., particle size distribution) published in Reference [7], also known as the 153 report, as the primary basis for assigning values to the erosion-important constituents of the waste. Important waste characteristics are particle size, density, hardness, and morphology. An additional resource was a detailed evaluation that the project performed of 518 feed delivery batches contained in the Tank Farm Contractor Operation and Utilization Plan, Rev. 6. [8] That evaluation is documented in WTP Waste Feed Analysis and Definition. [9] Based on this input, nominal erosive feed characteristics were established as mean particle size of 24 microns, average specific gravity of 2.4, and average Mohs hardness of 3.6. The waste particle characteristics are consistent with those used to evaluate critical slurry flow velocity to ensure that lines within the WTP will not plug, and with assessments of the PJM mixing capabilities. These characteristics are included as part of the WTP waste feed acceptance criteria contained in ICD-19 Interface Control Document for Waste Feed. [10] The ability of the tank farm contractor to meet the acceptance criteria was assessed and confirmed in detail in Reference [11]. As a final control, the project measured the waste characteristics, including the erosive characteristics, for each feed delivery batch, to enable a running prediction of vessel erosion to be maintained and, if necessary, to take corrective actions.

SIMULANT DEVELOPMENT

Bechtel also prepared a testing simulant that replicated waste feed conditions by comparison to the weighted mean particle size, hardness, waste chemistry, and crystalline geometry. The simulant comprised only those significant constituents found in the actual waste, at their nominal sizes. In addition to the particle properties affecting erosion noted above, waste properties affecting erosion include solids concentration, pH, and liquid viscosity. Consideration was also given to the corrosive potential of the liquid fraction of the slurry against the stainless steel. Corrosion of the stainless steel measured in all of the testing described below was shown to be negligible when compared to the erosion. Further, the erosion rate predictions for the WTP vessels were shown to not be accentuated by the effect of corrosion. The base simulant consisted of 15 components, including various aluminum compounds (boehmite and gibbsite), zeolite, and ferrite. Smaller amounts of other components were included to achieve the required hardness and density properties. While the 153 report indicated that larger particles were agglomerates that broke apart upon mixing or pumping, this feature was not included in the final simulant; the mean particle size was based on primary particle sizes, which is conservative. Other morphology-related aspects were accounted for by using the same components found in the real waste, in the size range found in the waste. The project prepared five simulant compositions to provide a realistic but bounding set of properties for testing. The simulants were based on weighted mean particle sizes that included 24, 38, and 54 microns. Concentrations included 150, 250, and 350 grams per liter.1 Particle hardness was averaged to achieve 3.6 and 4.4 on the Mohs scale. The hardness values were calculated averages based on vendor-supplied information on the primary particle constituents used to make up the slurries.


TEST VARIABLES

Test variables included pulsed vs. continuous flow, jet velocity, mean particle size, slurry concentration, average particle hardness, and impingement angle. Particle size reduction was noted in shakedown testing, and every 24 hours the simulant was replaced and the average particle size restored to match plant vessel turnover conditions. Bechtel designed the test matrix to gather the minimum information required to adequately predict the erosion over 40 years of WTP operation (Table 1). The matrix varied the jet velocity, mean particle size, and slurry concentration to determine the exponential relationships of these parameters as inputs to the calculation method described below. As a secondary investigation, the team also evaluated hardness as a sensitivity parameter. Because hardness of the simulant primary particles was not deemed to be a significant contributor to the calculation method, the term was not used as a factor in the method.
1 US equivalents have not been provided for these and other metric values used in or resulting from the testing because test parameters and terms were developed based on the metric system.



Table 1. Erosion Testing Matrix

Test | Particle Distribution, microns | Solids Concentration, g/L | Average Hardness, Mohs | Jet Velocity, m/s | Jet Angle | Replenish | Flow Pattern
Run 1, Jet 1 | 24 | 350 | 3.6 | 12 | 90 | Yes | Continuous
Run 1, Jet 2 | 24 | 350 | 3.6 | 12 | 90 | Yes | Pulsed
Hold (decision: pulsed or not?)
Run 2, Jet 1 | 54 | 350 | 3.6 | 14 | 90 | Yes | Continuous
Run 2, Jet 2 | 54 | 350 | 3.6 | 12 | 90 | Yes | Continuous
Run 3, Jet 1 | 24 | 250 | 3.6 | 14 | 90 | Yes | Continuous
Run 3, Jet 2 | 24 | 250 | 3.6 | 12 | 90 | Yes | Continuous
Run 4, Jet 1 | 24 | 150 | 3.6 | 14 | 90 | Yes | Continuous
Run 4, Jet 2 | 24 | 150 | 3.6 | 12 | 90 | Yes | Continuous
Hold (decision: highest wear concentration?)
Run 5, Jet 1 | 39 | Per Hold | 3.6 | 14 | 90 | Yes | Continuous
Run 5, Jet 2 | 39 | Per Hold | 3.6 | 12 | 90 | Yes | Continuous
Run 6, Jet 1 | 24 | Per Hold | 3.6 | 17 | 90 | Yes | Continuous
Run 6, Jet 2 | 24 | Per Hold | 3.6 | 8 | 90 | Yes | Continuous
Run 7, Jet 1 | 24 | Per Hold | 4.4 | 12 | 90 | Yes | Continuous
Run 7, Jet 2 | 24 | Per Hold | 4.4 | 12 | 90 | Yes | Continuous
Run 8, Jet 1 | 24 | Per Hold | 3.6 | 12 | 65 | Yes | Continuous
Run 8, Jet 2 | 24 | Per Hold | 3.6 | 12 | 90 | Yes | Continuous

Another secondary investigation looked at the effect of pulsed versus continuous flow of the PJM jet stream. This investigation considered the concept that an intermittent jet could result in more, less, or an equivalent scar depth of the vessel wall. Run 1 showed little difference between pulsed and continuous erosion rates, so the remaining testing was done with a continuous flow rate. The tests used grade 316L stainless steel test coupons, except for Run 7 Jet 2 and Run 8 Jet 2, which used test coupons made from the ULTIMET cobalt-based alloy to evaluate it as a potential weld overlay to the vessels' stainless steel wear plate as an added erosion barrier.

CONDUCT OF TESTING

Dominion Engineering, Inc. (DEI), conducted the testing at its facilities in Reston, Virginia. It made four consecutive 24-hour runs, replacing the simulant after each run, as noted earlier, to provide a total of 96 hours of continuous wear. The erosion measurements taken were mass loss and scar depth; the latter is the primary measurement of importance at the WTP. Figure 1 shows the test fixture used, and Figure 2 shows the primary one-quarter-scale test apparatus. Testing used circular, 20-centimeter (8-inch) diameter, grade 316L stainless steel (except as noted above) test coupons exposed to conditions geometrically similar to those of the WTP vessels. Test measurements required for use as direct input to WTP design calculations were taken in accordance with American Society of Mechanical Engineers (ASME) NQA-1 nuclear quality standards. DEI also took several commercial grade measurements to supplement the understanding of the prediction of erosion behavior in the WTP vessels.



Figure 1. Test Fixture for Erosion Testing (callouts: 3-inch Sch. 40 inlet pipe; upper flange interfacing with 10-inch 150# slip-on flanges in the tank lid; 3-inch to 1-inch Sch. 40 concentric reducer; 1-inch socket weld flange facilitating removal and inspection of the pipe nozzle; 1-inch Sch. 40 pipe nozzle with ID 1.049 ± 0.005 inch, entry length 6 x ID, offset 1.5 x ID; pivoting bracket allowing impingement angles other than 90 degrees; ASME SA240 Type 316L SS test specimen)

The volume of the test vessel was approximately 3,785 liters (1,000 gallons). Two independent recirculation loops were used; each had a rotary lobe pump and Coriolis flow meter.

Figure 2. Test Rig with Two Independent Recirculation Loops


TEST RESULTS, EVALUATION, AND CONCLUSIONS

Examination of the test coupons revealed a donut-shaped depression: the area directly below the centerline of the jet had less erosion than a ring around the center (see Figure 3). Micrometer readings were used to measure the scar depth. The incremental scar depth at successive 24-hour runs was relatively constant. Mass loss data was used to develop exponents for jet velocity, mean particle size, and slurry concentration so that erosion rates

could be adjusted for different operating and feed conditions. Figure 4 shows a stainless steel test coupon after 96 hours of exposure to jet wear with waste simulant. No visual observance of erosion was detected other than a polished appearance. Testing results are shown in Table 2. Changes in the successive tests are highlighted in green.
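One common way to derive such exponents from mass-loss (or scar-depth) data is an ordinary least-squares fit in log space. The sketch below illustrates the idea; the observation values are illustrative approximations loosely based on Table 2, and this is not the project's qualified calculation.

```python
import numpy as np

# Illustrative test observations: jet velocity (m/s), mean particle size (microns),
# slurry concentration (g/L), and measured mass loss (g).
V = np.array([8.0, 12.0, 12.0, 14.0, 16.5, 14.0])
P = np.array([24.0, 24.0, 54.0, 54.0, 24.0, 39.0])
C = np.array([350.0, 150.0, 350.0, 350.0, 350.0, 350.0])
m = np.array([0.29, 0.88, 5.01, 7.65, 6.00, 4.11])

# Fit log(m) = log(k) + a*log(V) + b*log(P) + c*log(C) by least squares.
X = np.column_stack([np.ones_like(V), np.log(V), np.log(P), np.log(C)])
coeffs, *_ = np.linalg.lstsq(X, np.log(m), rcond=None)
log_k, a, b, c = coeffs
print(f"velocity exponent a = {a:.2f}, particle exponent b = {b:.2f}, "
      f"concentration exponent c = {c:.2f}")
```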


Figure 3. Post-Test Wear Pattern (schematic showing the nozzle, direction of flow, and impingement surface)

Figure 4. Stainless Steel Test Coupon After 96 Hours of Erosion

Table 2. Erosion Testing Program Results

Test Run | PJM Velocity, m/s | Mean Particle Size, microns | Slurry Concentration, g/L | Hardness, Mohs | Angle, radian | Coupon | Mass Loss, g | Scar, mm
1 | 12 | 24 | 350 | 3.6 | 1.57 | SS | 3.4373 | 0.03810
2 | 12 | 54 | 350 | 3.6 | 1.57 | SS | 5.0141 | 0.05080
2 | 14 | 54 | 350 | 3.6 | 1.57 | SS | 7.6467 | 0.05842
3 | 12 | 24 | 250 | 3.6 | 1.57 | SS | 1.2894 | 0.02286
3 | 14 | 24 | 250 | 3.6 | 1.57 | SS | 2.2124 | 0.02286
4 | 12 | 24 | 150 | 3.6 | 1.57 | SS | 0.8756 | 0.02032
4 | 14 | 24 | 150 | 3.6 | 1.57 | SS | 1.4029 | 0.02286
5 | 12 | 39 | 350 | 3.6 | 1.57 | SS | 2.5691 | 0.02286
5 | 14 | 39 | 350 | 3.6 | 1.57 | SS | 4.1083 | 0.03556
6 | 8 | 24 | 350 | 3.6 | 1.57 | SS | 0.2879 | 0.01016
6 | 16.5 | 24 | 350 | 3.6 | 1.57 | SS | 6.0005 | 0.05588
7 | 12 | 24 | 350 | 4.4 | 1.57 | SS | 4.3555 | 0.04064
7 | 12 | 24 | 350 | 4.4 | 1.57 | ULTIMET | 4.4727 | Irregular
8 | 12 | 24 | 350 | 3.6 | 1.13 | SS | 1.6635 | 0.02286
8 | 12 | 24 | 350 | 3.6 | 1.57 | ULTIMET | 1.3994 | Irregular


The test results were used in Equation 1 to predict the expected erosion rate over 40 years of operation. The equation is as follows:

$$E_w = E_{w,\mathrm{ref}} \left[\frac{V_a}{V_\mathrm{ref}}\right]^{n_V} \left[\frac{P_a}{P_\mathrm{ref}}\right]^{n_P} \left[(1-I)\left(\frac{G}{C_\mathrm{ref}}\right)^{n_C} + I\left(\frac{H}{C_\mathrm{ref}}\right)^{n_C}\right] (F)(E)(D)(I_a)(S_c) \qquad (1)$$

Where:
Ew = scar depth at end of design life (m)
Ewref = scar depth of reference case (m)
Va = velocity of jet, actual (m/s)
Vref = velocity of jet from reference case (m/s)
Pa = particle weighted mean diameter, actual (m)
Pref = particle weighted mean diameter from reference case (m)
I = fraction of time for maximum solids loading
G = normal solids concentration (wt%)
Cref = reference case concentration (wt%)
H = maximum solids concentration (wt%)
F = vessel usage factor (fraction of time)
E = PJM duty factor (fraction of time)
D = design life (years)
Ia = factor for impingement angle
Sc = scale factor (1/4 to full scale)
nV, nP, nC = empirically derived exponents for the velocity, particle size, and concentration terms, respectively


In Equation 1, test data is used as the reference case for scar depth, jet velocity, weighted mean particle size, and slurry concentration. The plant operating conditions and actual waste properties are then provided as input to the other parameters in the equation, and from the calculation a scar depth is estimated for a given period of facility operation (typically for the 40-year design life of the WTP). The project used computational fluid dynamics to scale the one-quarter-scale erosion rates to full scale. The predicted erosion rate for each affected vessel was evaluated against the available erosion design allowance for that vessel. The project also performed numerous sensitivity analyses to assess the effects of different operating and waste input variables to demonstrate design margin and robustness. The test results provided exponents for jet velocity, mean particle size, and slurry concentration. Exponents were developed over the range of operating parameters of the PJMs, which range in discharge velocity from 8 to 17 meters per second (26 to 56 feet per second). Particle sizes

range from submicron to about 300 microns, with a weighted mean of 24 microns. Slurry concentration at the WTP ranges from almost no solids content to a maximum of 20%. Across this range of conditions, the work Bechtel performed as reported in this paper produced exponents, on average, of 3 for velocity, 2 for particle size, and 0.8 for concentration.
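As an illustration of how the Equation 1 scaling can be applied, the sketch below uses the exponent form reconstructed above with the reported average exponents as defaults; the reference-case and plant-condition numbers are placeholders, not project values.

```python
def scar_depth(ew_ref, v_ref, p_ref, c_ref,          # reference (test) case
               va, pa, g, h, i_frac,                  # actual velocity, particle size, concentrations
               f, e, d, ia, sc,                       # usage, duty, design-life, angle, and scale factors
               n_v=3.0, n_p=2.0, n_c=0.8):
    """Estimate end-of-life scar depth by scaling a reference-case scar depth."""
    conc_term = (1 - i_frac) * (g / c_ref) ** n_c + i_frac * (h / c_ref) ** n_c
    return ew_ref * (va / v_ref) ** n_v * (pa / p_ref) ** n_p * conc_term * f * e * d * ia * sc

# Placeholder inputs only, to show the call shape
depth_m = scar_depth(ew_ref=4e-5, v_ref=12.0, p_ref=24e-6, c_ref=17.0,
                     va=10.0, pa=24e-6, g=10.0, h=20.0, i_frac=0.1,
                     f=0.7, e=0.5, d=40.0, ia=1.0, sc=1.0)
print(f"predicted scar depth: {depth_m:.2e} m")
```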

TRADEMARKS

ULTIMET is a registered trademark owned by Haynes International, Inc.

REFERENCES
[1] C.W. Enderlin and M.R. Elmore, Letter Report for First Erosion/Corrosion Test Conducted for West Valley Process Support, Pacific Northwest National Laboratory, Richland, WA, 1997.
[2] R. Gupta, S.N. Singh, and V. Sehadri, "Prediction of Uneven Wear in a Slurry Pipeline on the Basis of Measurements in a Pot Tester," Wear, Vol. 184, No. 2, May 1995, pp. 169-178, access via http://www.ingentaconnect.com/content/els/00431648/1995/00000184/00000002/art06566.




[3] A.J. Karabelas, "An Experimental Study of Pipe Erosion by Turbulent Slurry Flow," Proceedings of Hydrotransport 5, Fifth International Conference on the Hydraulic Transport of Solids in Pipes, Hanover, Germany, May 8-11, 1978, pp. E2-15 to E2-24.
[4] F. Aiming, L. Jinming, and T. Ziyun, "An Investigation of the Corrosive Wear of Stainless Steels in Aqueous Slurries," Wear, Vol. 193, No. 1, April 1996, pp. 73-77, access via http://www.ingentaconnect.com/content/els/00431648/1996/00000193/00000001/art06684.
[5] H.W. Wang and M.M. Stack, "The Erosive Wear of Mild and Stainless Steels Under Controlled Corrosion in Alkaline Slurries Containing Alumina Particles," Journal of Materials Science, Vol. 35, No. 21, November 2000, pp. 5263-5273, access via http://www.springerlink.com/content/l5464x5324747560/?p=5dd83b97025b431283b2a23d48d55345&pi=72.
[6] R. Mishra, S.N. Singh, and V. Sashadri, "Study of Wear Characteristics and Solids Distribution in Constant Area and Erosion-Resistant Long-Radius Pipe Bends for the Flow of the Multisized Particulate Slurries," Wear, Vol. 217, No. 2, May 1998, pp. 297-306, http://www.ingentaconnect.com/content/els/00431648/1998/00000217/00000002/art00147.
[7] B.E. Wells, M.A. Knight, et al., Estimate of Hanford Waste Insoluble Solid Particle Size and Density Distribution, WTP-RPT-153, Rev. 0, Battelle-Pacific Northwest Division, February 2007, http://www.pnl.gov/rpp-wtp/documents/WTP-RPT-153.pdf.
[8] R.A. Kirkbride et al., Tank Farm Contractor Operation and Utilization Plan, CH2M Hill Hanford Group, Inc., HNF-SD-WM-SP-012, Rev. 6, January 2007.
[9] R.F. Gimpel, WTP Waste Feed Analysis and Definition - EFRT M4 Final Report, Bechtel National, Inc., Richland, WA, Report No. 24590-WTP-RPT-PE-07-001, Rev. 1, September 18, 2007.

BIOGRAPHIES
Ivan G. Papp has worked for Bechtel for 8 years and is currently the deputy manager for mechanical and process engineering at the WTP project. During his time at Hanford, he has worked on many projects, including the Fast Flux Test Facility (FFTF), Plutonium/Uranium Extraction (PUREX) Facility, and the 200 East Area Effluent Treatment Facility (ETF). Ivan has more than 20 years of process engineering experience in the nuclear industry, including operations and design. He has spent the past 11 years developing the WTP process design. Prior to working at Hanford, he worked briefly as a process engineer for Unocal Corporation at Unocals refinery in Los Angeles, California. Ivan has a BSc in Chemical and Materials Engineering from California State Polytechnic University, Pomona, where he was a member of Omega Chi Epsilon, the National Honor Society for Chemical Engineering. Garth M. Duncan is deputy manager of the Process Engineering and Technology Department at the WTP project. He has 35 years of experience in radioactive waste cleanup and nuclear plant design and operations. Principal responsibilities include management of the WTP process design, including the flowsheet and associated mass and energy balance, resolution and closure of External Flowsheet Review Team issues, and liaison with the Department of Energy on these and related issues. Previously at the WTP, Garth was deputy manager for mechanical and process engineering, and the project engineering manager for the LAW vitrification facility. Prior to working at the WTP, he was engineering manager for deactivation of the N-Reactor on the Hanford Site. Earlier assignments involved design and operating plant services for commercial nuclear generating stations, including Grand Gulf, Susquehanna, and Limerick. Prior to joining Bechtel, Garth was a nuclear propulsion-qualified division officer in the US Navy. Garth has a BS in Mechanical Engineering from the University of Southern California, Los Angeles, and is a licensed Professional Engineer in California. He is a member of the American Nuclear Society.







[10] M.N. Hall, ICD-19 Interface Control Document for Waste Feed, Bechtel National, Inc., Richland, WA, 24590-WTP-ICD-MG-01-019, Rev. 4, May 2007.
[11] M.N. Hall and R.F. Gimpel, Technical and Risk Evaluation of Proposed ICD-19 Rev. 4, Bechtel National, Inc., Richland, WA, 24590-WTP-ES-PET-01-001, Rev. 1, 2008.


Technical Notes
Abbreviated Technology Papers



Effective Corrective Actions for Errors Related to Human-System Interfaces in Nuclear Power Plant Control Rooms
Jo-Ling J. Chang and Huafei Liao, PhD


Estimating the Pressure Drop of Fluids Across Reducer Tees

Krishnan Palaniappan and Vipul Khosla

Yucca Mountain Management and Operations

Alpine mining machines are used in the Exploratory Studies Facility to excavate alcoves and niches for scientific testing.

EFFECTIVE CORRECTIVE ACTIONS FOR ERRORS RELATED TO HUMAN-SYSTEM INTERFACES IN NUCLEAR POWER PLANT CONTROL ROOMS
Issue Date: December 2009 AbstractA Bechtel-sponsored industry-wide study presents guidelines for correcting errors related to human-system interfaces (HSIs) in nuclear power plant (NPP) control rooms. A total of 138 licensed operators from 18 NPPs participated by evaluating a list of common error causal factors and sharing their personal knowledge and experience as well as examples of near miss situations. The collective operator opinions were quantitatively analyzed, using multivariate statistical methods and chi-square tests, to arrive at guidelines containing suggested corrective actions for each potential type of HSI error. The results of the study also include guidance on training, resource allocation, and effective decision making, along with human error prevention and, more importantly, error prediction. Keywordscorrective action, factor analysis, human error, human performance, human-system interface (HSI), main control room (MCR), nuclear power plant (NPP)

BACKGROUND

On Wednesday, March 28, 1979, the control room operators at Three Mile Island Generating Station were unable to promptly identify a stuck-open pilot-operated relief valve in the primary system. This error led to a partial core meltdown and the biggest nuclear incident in the United States. As a result of this incident, all operating plants underwent a reevaluation process to identify and correct potential human factor errors in the main control rooms (MCRs). While many human-system interface (HSI)-related issues were addressed throughout this industry-wide reevaluation process, a review the authors performed of recent plant events from the Institute of Nuclear Power Operations (INPO) database showed that HSI continues to be a significant contributor to control room errors. Immediate corrective action tends to be confined to merely trending HSI-related errors instead of taking steps for extensive investigation. We suspect that the underlying cause is the lack of formal guidelines because plants had already undergone thorough control room human factor evaluations decades ago. What types of errors may be corrected by operator training? Are procedural updates the most effective means of error prevention? Will an increase in management oversight improve or

hinder operator performance? When is a design change the most appropriate corrective action? These are the questions we attempted to address when we commenced this study.

METHODOLOGY

Eighteen commercially operated nuclear power plants (NPPs) participated in this Bechtel-sponsored study. Our team developed a complete list of MCR HSI-related error causal factors by reviewing past plant events and over one hundred academic publications on high-reliability industries (nuclear, medical, aerospace, etc.). A total of 138 licensed operators evaluated each factor, and their responses were analyzed to provide both the suggested corrective actions and the relative importance of each error type.

Factor Analysis
Factor analysis takes a complete list of causal factors and reduces it to a small number of representative categories. This is done through the statistical process of varimax rotation using participant responses. The bare-bones categories help to reduce trending efforts and identify the types of causes experienced operators believe to be the most important contributors to MCR errors.
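The varimax-based reduction described above can be sketched as follows; the survey matrix and category count here are invented for illustration, and scikit-learn's FactorAnalysis is used in place of whatever statistical package the study actually employed.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# Hypothetical survey matrix: 138 operators x 30 causal-factor ratings (1-5 scale)
rng = np.random.default_rng(0)
responses = rng.integers(1, 6, size=(138, 30)).astype(float)

# Reduce the causal factors to a small number of categories with varimax rotation
fa = FactorAnalysis(n_components=5, rotation="varimax")
fa.fit(responses)

# Loadings show which causal factors group into which representative category
loadings = fa.components_          # shape: (5 categories, 30 factors)
top = np.argsort(-np.abs(loadings), axis=1)[:, :3]
print("top 3 causal factors per category:", top)
```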

Jo-Ling J. Chang
jjchang@bechtel.com

Huafei Liao, PhD


hliao@bechtel.com


ABBREVIATIONS, ACRONYMS, AND TERMS


HSI - human-system interface
INPO - Institute of Nuclear Power Operations
MCR - main control room
NPP - nuclear power plant

Study Participant Comments
Over 30% of the study participants provided additional comments. These were reviewed and, with the exception of non-HSI issues, all comments fell into the complete list of causal factors that we initially created. Approximately one-third of the comments are related to equipment labels. As a group, operators believe that all labels should be consistent throughout the plant. However, each label should also be easily distinguished from others. Specific attention should be paid to nomenclature and font size. Many operators also expressed a lack of trust in equipment; specifically, since there are no predictions of equipment behaviors, it is often difficult to plan for contingencies. Many comments are related to non-HSI issues. These include operator vigilance, distractions in the field, time pressure, inadequate staffing, procedural errors, communications, collaboration, and training.


Table 1 depicts the final representative category structure, as modified to be used in the guidelines. Errors that do not belong in any of the five categories listed in the table are considered to hold less significance, in the collective opinion of the surveyed operators.

Operator Decision versus Operator Action
Each error causal factor pertains to operator decision, operator action, or both. Decision represents cognitive errors related to operator knowledge and judgment. Action, on the other hand, represents operator physical errors, or slips, that result from lapses of attention while monitoring displays or that occur while implementing intended plans. The industry currently uses a similar two-part model: human performance errors (or regular errors) and technical human performance errors. The suggested corrective actions presented in Table 2 use results from another study conducted by Chen-Wing/Davey. [1] We used chi-square tests to determine if collective operator experience shows a tendency toward cognitive or physical for each type of error.
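As a sketch of the kind of chi-square test this implies (using scipy and invented counts, not the study's data): for one error causal factor, operators' classifications as cognitive (decision) versus physical (action) errors can be tested against an even split.

```python
from scipy.stats import chisquare

# Hypothetical counts: of 138 operators, how many attributed a given causal
# factor to a decision (cognitive) error versus an action (physical) slip.
observed = [97, 41]                      # decision vs. action votes
result = chisquare(observed)             # default: test against a uniform split
print(f"chi2 = {result.statistic:.1f}, p = {result.pvalue:.4f}")
if result.pvalue < 0.05:
    print("collective experience leans significantly toward one error type")
```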

DISCUSSION

The results of this study provide NPPs with the basis for a formal corrective action guideline. When an MCR error occurs, the Decision-Action Model (Table 2) may be consulted for the most effective resolution and the Simplified Error Causal-Factor Structure (Table 1) may be used to determine the implementation time frame based on level of importance.

Table 1. Simplified Error Causal-Factor Structure

Category (Level of Importance) | Error Causal Factor
Operations Uncertainties | Errors caused by doubts regarding the information presented on the job, such as indications that do not accurately reflect plant conditions, controls that provide no confirmation after state change, and inconsistency in labels
Limited Capabilities | Events that could have been avoided if additional information had been available, such as trending or alarms/indications with levels of severity
Misoperations |
Equipment Control | Errors that are usually due to overreliance on equipment
Design Issues | Errors caused by the design of the equipment, such as controller spacing and function allocation


Table 2. Decision-Action Model

Correct Decision + Correct Action (no incident)
Suggested Corrective Action: N/A

Incorrect Decision + Correct Action
Causal factors: unreliable indication; no feedback; insufficient plant information; display challenges; no trending; poor color/sound coordination; Boolean indication; equipment being operated incorrectly; safety features being defeated; equipment allowing failures; over-reliance on equipment
Suggested Corrective Action: Improve operations procedures, general guidelines, and pre-job briefings

Correct Decision + Incorrect Action
Causal factors: control panel visually crowded; controls too close together; controls too far apart
Suggested Corrective Action: Provide additional operator training, peer check, and management oversight

Incorrect Decision + Incorrect Action
Causal factors: non-intuitive control; no alarm noting abnormal conditions and/or failures; time limit to operation; incorrect function allocation (manual actions designed to be automated)
Suggested Corrective Action: Modify control room and extend human factors re-evaluation to the condition
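As a minimal illustration of how the Decision-Action Model might be applied in a corrective-action program, the Python sketch below encodes Table 2 as a simple lookup keyed on whether the decision and the action were correct. The mapping mirrors Table 2 as reconstructed above; it is a reading aid, not an official plant tool.

# Sketch of Table 2 (Decision-Action Model) as a lookup structure.
# Keys are (decision_correct, action_correct); values are the
# suggested corrective actions from the table above.
CORRECTIVE_ACTIONS = {
    (True, True): "No incident; no corrective action required",
    (False, True): "Improve operations procedures, general guidelines, "
                   "and pre-job briefings",
    (True, False): "Provide additional operator training, peer check, "
                   "and management oversight",
    (False, False): "Modify control room and extend human factors "
                    "re-evaluation to the condition",
}

def suggest_action(decision_correct: bool, action_correct: bool) -> str:
    """Return the suggested corrective action for an MCR error."""
    return CORRECTIVE_ACTIONS[(decision_correct, action_correct)]

# Example: a correct decision executed incorrectly (a slip).
print(suggest_action(decision_correct=True, action_correct=False))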


Operators as a group identified unreliable indication as the highest-ranked failure cause. There is also agreement that lack of information, such as lack of alarms, indications, and feedback, leads to human performance errors. It should be noted that although most operator comments are related to the organization of information, associated items (such as visual/audio coordination and over-indication) were not ranked high based on the survey results. This discrepancy may be explained by the difference between perception and reality. When confronted with an abstract question, an operator may respond that lack of information is the biggest HSI problem in the MCR. But if asked to present examples, the same operator may list labeling problems, which are part of information organization.

This phenomenon poses problems for the equipment designer. While many modification projects at NPPs involve members of the operations department during the design phase, the end users' perceptions of HSI problems may be very different from those they face when the equipment is installed. It is, therefore, vital to present operators with some form of equipment simulation during the design process to obtain insights into the real challenges.

REFERENCES
[1] S.L.N. Chen-Wing and E.C. Davey, "Designing to Avoid Human Error Consequences," Second Workshop on Human Error, Safety, and System Development (HESSD 98), Session 5, Seattle, WA, April 12, 1998, http://www.crew-ss.com/portfolio/download/HEW98_Design_for_HE.pdf.

ADDITIONAL READING

Additional information sources used to develop this paper include:

J.J. Chang, H. Liao, and L. Zeng, "Human-System Interface (HSI) Challenges in Nuclear Power Plant Control Rooms," Proceedings of the Symposium on Human Interface 2009 on Human Interface and the Management of Information, Information and Interaction, Part II (held as part of HCI International 2009), San Diego, CA, July 19–24, 2009, pp. 729–737, http://portal.acm.org/citation.cfm?id=1610664.1610750&coll=GUIDE&dl=GUIDE&CFID=62697787&CFTOKEN=13180906.

E. Davey, "Criteria for Operator Review of Workplace Changes," 21st Annual Canadian Nuclear Society Conference, Toronto, Ontario, Canada, June 11–14, 2000, http://www.crew-ss.com/portfolio/download/CNS_2000_Review_Criteria.pdf (or access via http://www.cns-snc.ca/CNS_Conferences/Search/2000-1.htm).

M. Grozdanović, "Methodology for Research of Human Factors in Control and Managing Centers of Automated Systems," University of Niš, The Scientific Journal Facta Universitatis: Working and Living Environmental Protection, Vol. 1, No. 5, 2000, pp. 9–22, http://facta.junis.ni.ac.rs/walep/walep2000/walep2000-02.pdf.

N. Naito, J. Itoh, K. Monta, and M. Makino, "An Intelligent Human-Machine System Based on an Ecological Interface Design," Nuclear Engineering and Design, Vol. 154, No. 2, March 1995, pp. 97–108, access via http://www.ingentaconnect.com/content/els/00295493/1995/00000154/00000002/art00903.

D.A. Norman, "Categorization of Action Slips," Psychological Review, Vol. 88, No. 1, January 1981, pp. 1–15, http://psycnet.apa.org/index.cfm?fa=search.displayRecord&uid=1981-06709-001.

BIOGRAPHIES

Jo-Ling (Janet) Chang is a senior control systems engineer in Bechtel's Nuclear Operating Plant Services business line. For over a year, she has led and supported a variety of design modification upgrade projects for Southern Nuclear Company. Before joining Bechtel, Janet was an instrumentation and controls engineer with American Electric Power at the D.C. Cook nuclear plant. Janet's area of interest is human factors, and she is currently working on several technical papers on the topic. Her most recent, "Human-System Interface (HSI) Challenges in Nuclear Power Plant Control Rooms," was presented at HCI International 2009, the 13th International Conference on Human-Computer Interaction, in San Diego, California, in July 2009. Janet holds an MSE from Purdue University, West Lafayette, Indiana, and a BSE in Electrical Engineering from the University of Michigan, in Ann Arbor. She is a member of Women in Nuclear and of North American Young Generation in Nuclear. Janet is a licensed Professional Engineer in Michigan.

Huafei (Harry) Liao, PhD, is a control systems engineer with Bechtel Power Corporation, working on the 760 MW Trimble Unit 2 pulverized-coal-fired power plant, located in Kentucky. Previously, he worked on the North Anna and Edwardsport IGCC projects. Harry brings his technical expertise to bear not only in his Bechtel assignment, but also as a member of the editorial board for Human Factors and Ergonomics in Manufacturing and of the revision working group for ANSI/ANS-58.8-1994 (R2008). Harry received a PhD with a concentration in human factors from the School of Industrial Engineering of Purdue University, West Lafayette, Indiana, and his Master's and Bachelor's degrees with a concentration in Control Theories and Control Engineering from Tsinghua University, Beijing, China.


ESTIMATING THE PRESSURE DROP OF FLUIDS ACROSS REDUCER TEES


Issue Date: December 2009

Abstract—Accurate estimates of the pressure drop across piping system reducer tees are critical to line sizing and can have a significant impact on overall project safety and cost. While simplistic calculations can estimate pressure drop when the area and/or flow ratios are close to unity or close to zero, accurately estimating intermediate ratio values often involves approximations. This paper summarizes a study of reducer tee pressure drop estimates obtained from different calculation methods and examines how pressure drop values compare.

Keywords—entrance loss, K method, Miller, pressure drop, reducer tee, sudden contraction, Truckenbrodt

BACKGROUND

Accurate estimates of the pressure drop across reducer tees in piping systems are critical to line sizing, especially for low-suction-pressure compressors, flare networks, pressure-reducing valve (PRV) laterals, and revamps, and can have a significant impact on overall project safety and cost. At a reducer tee, the flow split ratio is not always the same as the area ratio. This results in a scenario where, in addition to the direction changes, the pressure changes caused by acceleration or deceleration also gain significant importance. While simplistic calculations can estimate pressure drop when the area and/or flow ratios are very high (close to unity) or very low (close to zero), accurately estimating intermediate ratio values often involves approximations. This paper summarizes the findings of a study comparing reducer tee pressure drop estimates obtained from different calculation methods, such as the K method, Miller's method, and the Truckenbrodt method, and examines how the pressure drop values compare.

PROBLEM STATEMENT

Inaccurately estimating the pressure drop across reducer tees can potentially affect safety and cost. However, estimating the pressure drop in a reducer tee is difficult because significantly less experimental data is available for this type of tee than for standard tees. Further, reducer-tee investigations seem to be limited to area ratios greater than 0.1.

STUDY SIGNIFICANCE

Based on the type of reducer tee installation, four different flow patterns are possible, as shown in Figure 1.

In many instances, large chemical plants with thousands of tee fittings are designed by approximating the pressure drop for all four types of reducer tee installation to the branch tee pressure drop, irrespective of the flow pattern.

Krishnan Palaniappan
kpalania@bechtel.com

Vipul Khosla
vkhosla@bechtel.com

[Figure 1. Reducer Tee Flow Patterns: Type 1 through Type 4 installations, grouped into splitting flows and combining flows]


ABBREVIATIONS, ACRONYMS, AND TERMS


K: an empirically derived local pressure loss coefficient that accounts for losses encountered from pipe fittings such as reducer tees

NB: nominal bore

PRV: pressure-reducing valve

PSV: pressure safety valve

Reynolds number: a dimensionless number that expresses the ratio of inertial forces to viscous forces, thereby quantifying the relative importance of these two types of forces for given flow conditions

Most modern grassroots plant designs provide for hydraulic design margins so that simplifying assumptions do not interfere with the system design intent. However, there could be instances where approximations are unacceptable, such as when an engineer performs adequacy checks of safety valve laterals, pumps, or compressors. In these instances, a correct pressure drop estimate is critical in determining whether the system passes or fails. A study was undertaken to provide engineers with a guideline on the various methods that can be used to reliably estimate the Type 1 flow pattern pressure drop. This flow pattern was selected because it was found to be of interest in several safety valve inlet line calculations.

METHODOLOGY

Commonly used industry methods to calculate reducer tee pressure drop were critically examined, along with the simplifying assumptions used in complex plant design. Oka and Ito [1] summarized the available literature as empirical correlations based on theoretical equations for estimating loss coefficients in a reducer tee; Miller [2] published a chart. Simplistic assumptions derived from methods proposed by Crane [3] are still used to quickly estimate loss coefficients. To completely understand how these various methods compare with one another, two simplistic assumption methods based on Crane, a theoretical equation proposed by Truckenbrodt (as summarized by Oka and Ito), and a graphical method of estimating K values from Miller's chart were evaluated.

The four evaluated methods are summarized as follows:

• Use a sum of the standard tee pressure drop and a sudden contraction, using Crane's single-K method
• Use a sum of the standard tee pressure drop and an entrance loss, using Crane's single-K method
• Use the K value predicted by an equation proposed by Truckenbrodt, including the correction factor proposed by Oka and Ito based on experimental verification for small area ratios
• Use the K value read graphically from Miller's chart, including correction factors proposed by Miller for systems where the Reynolds number of any branch of a tee is below 200,000

Using these four methods, calculations were performed for area ratios ranging from 0.05 to 0.9 and for flow ratios ranging from 0.1 to 1. To evaluate the different area ratios, a reducer tee fitting with a straight run size of 36 inches (0.91 meter), nominal bore (NB), was considered while varying the branch size from 8 inches (0.20 meter) to 34 inches (0.86 meter). The quantity of water flowing, 1,500 m3/hr (396,258 gph) of pure water at 40 °C (104 °F) and 590 kPaa (85.6 psia), was selected so that the highest system velocity was always below the erosion velocity up to an area ratio of 0.1 and the flow always remained in the turbulent zone. These parameters are illustrated in Figure 2.
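To make the first two simplistic methods concrete, the Python sketch below shows how a combined loss coefficient might be assembled from a branch-tee K value plus either a sudden-contraction or an entrance-loss term. The numerical forms used here (K = 0.5 for a sharp-edged entrance and K = 0.5 (1 - beta^2) for a sudden contraction, both conventionally referenced to the velocity head in the smaller pipe) are commonly tabulated Crane-style values and are shown as assumptions; they are not presented as the exact coefficients or velocity-head bases used in the study.

# Hedged sketch of the two "standard tee + ..." approximations.
# The branch-tee K value is an input (e.g., taken from Crane's tabulated
# tee data); all K values here are assumed to share the same velocity-head
# basis, which is itself a simplifying assumption.

def k_sudden_contraction(beta: float) -> float:
    """Common approximation for a sudden contraction, beta = d_small / d_large."""
    return 0.5 * (1.0 - beta**2)

K_ENTRANCE = 0.5  # typical sharp-edged entrance loss coefficient

def k_tee_plus_contraction(k_branch_tee: float, beta: float) -> float:
    """Method 1: standard branch-tee K plus a sudden-contraction K."""
    return k_branch_tee + k_sudden_contraction(beta)

def k_tee_plus_entrance(k_branch_tee: float) -> float:
    """Method 2: standard branch-tee K plus an entrance loss."""
    return k_branch_tee + K_ENTRANCE

# Example with assumed numbers (illustrative only):
beta = 0.5     # branch-to-run diameter ratio (assumed)
k_tee = 0.72   # assumed branch-flow K for the standard tee
print(k_tee_plus_contraction(k_tee, beta), k_tee_plus_entrance(k_tee))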

[Figure 2. Reducer Tee Evaluation Parameters: flow ratio = Q2/Q1, with Q1 = 1,500 m3/hr of water at 40 °C and 590 kPaa and Q2 varying from 0.1 x Q1 to 1.0 x Q1; area ratio = A2/A1 = (D2/D1)², with D1 = D3 = 36 in. (fixed) and D2 varying from 8 in. to 34 in.]
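As a worked illustration of how a loss coefficient translates into the pressure drops reported in Table 1, the sketch below converts a K value into a pressure drop using dP = K * rho * v^2 / 2, with the velocity taken from the volumetric flow and the pipe area. Which leg's velocity a given K value is referenced to depends on the method, so the diameter and flow are left as inputs; the K value and the printed number are illustrative, not the study's results.

import math

RHO_WATER_40C = 992.2   # kg/m3, approximate density of water at 40 °C

def velocity(q_m3_per_hr: float, diameter_m: float) -> float:
    """Mean velocity in a circular pipe from volumetric flow and diameter."""
    area = math.pi * diameter_m**2 / 4.0
    return (q_m3_per_hr / 3600.0) / area

def pressure_drop_kpa(k: float, q_m3_per_hr: float, diameter_m: float,
                      rho: float = RHO_WATER_40C) -> float:
    """Pressure drop dP = K * rho * v^2 / 2, returned in kPa."""
    v = velocity(q_m3_per_hr, diameter_m)
    return k * rho * v**2 / 2.0 / 1000.0

# Illustrative use with the study's run-side flow (K = 1.0 is assumed):
d_run = 36 * 0.0254   # 36 in. straight run, in metres
print(pressure_drop_kpa(k=1.0, q_m3_per_hr=1500.0, diameter_m=d_run))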


Table 1. Results of Pressure Drop Calculations in kPa


Area Ratio / Method of Calculation             Flow Ratio
                                               0.1      0.3      0.5      0.7      0.9      1.0

0.89   Standard Tee + Sudden Contraction       0.130    0.131    0.134    0.137    0.142    0.145
       Standard Tee + Entrance Loss            0.131    0.142    0.164    0.197    0.240    0.266
       Truckenbrodt Correlation                0.217    0.227    0.247    0.277    0.317    0.340
       Miller's Chart                          0.216    0.190    0.187    0.205    0.247    0.266

0.69   Standard Tee + Sudden Contraction       0.130    0.136    0.148    0.165    0.188    0.201
       Standard Tee + Entrance Loss            0.132    0.150    0.187    0.241    0.314    0.358
       Truckenbrodt Correlation                0.218    0.235    0.268    0.318    0.384    0.424
       Miller's Chart                          0.218    0.206    0.208    0.250    0.307    0.334

0.51   Standard Tee + Sudden Contraction       0.132    0.148    0.180    0.228    0.292    0.330
       Standard Tee + Entrance Loss            0.134    0.167    0.232    0.331    0.462    0.541
       Truckenbrodt Correlation                0.220    0.250    0.309    0.399    0.519    0.590
       Miller's Chart                          0.218    0.214    0.256    0.326    0.440    0.523

0.30   Standard Tee + Sudden Contraction       0.138    0.207    0.344    0.549    0.823    0.986
       Standard Tee + Entrance Loss            0.142    0.239    0.434    0.726    1.116    1.347
       Truckenbrodt Correlation                0.227    0.316    0.493    0.759    1.113    1.324
       Miller's Chart                          0.220    0.308    0.458    0.739    1.145    1.294

0.12   Standard Tee + Sudden Contraction       0.201    0.774    1.918    3.634    5.921    7.278
       Standard Tee + Entrance Loss            0.211    0.858    2.151    4.090    6.674    8.209
       Truckenbrodt Correlation                0.289    0.876    2.050    3.811    6.158    7.552
       Miller's Chart                          0.317    0.767    –        –        –        –

0.05   Standard Tee + Sudden Contraction       0.524    3.674    9.965    19.396   31.966   39.427
       Standard Tee + Entrance Loss            0.545    3.863    10.491   20.427   33.669   41.530
       Truckenbrodt Correlation                0.591    3.591    9.590    18.589   30.588   37.712
       Miller's Chart                          –        –        –        –        –        –

Note: Values in bold indicate the highest values for a particular flow ratio/area ratio combination. A dash (–) indicates that no value was given.

The inlet flow was kept constant at 1,500 m3/hr (396,258 gph), while the flow ratio through the branch was varied from 0.1 to 1.0 for each area ratio examined. A total of 440 calculations were performed to evaluate pressure drop across a reducer tee for various area and flow ratios. Table 1 summarizes the results. The impact of the selected method and variations in results are best illustrated in the following actual project example.

ACTUAL PROJECT EXAMPLE

At the gas inlet to a gas processing facility, three 6Q8 safety valves operating together are required to handle the blocked outlet relief case. Although these safety valves are set to protect at a relatively high pressure of 84 barg (1,218 psig), considering the huge volumes of liquid and gas that need to be handled in this facility, the pressure safety valves (PSVs) are remotely pilot operated because of the high pressure drops in the inlet piping. To calculate the area requirement, it is necessary to accurately estimate the pressure drop between the pilot line takeoff and the PSV inlet flange.

When the pressure drop across a reducer tee in the inlet line was calculated using the four estimating methods, values ranging from 0.8 bar (11.6 psi) to 1.1 bar (15.9 psi) were obtained. For a single fitting, these variances could be serious enough to change the orifice designation because of insufficient installed area, or they could pose a potential safety concern if overlooked. Where safety valves have low set pressures, pressure drop limitations on the inlet and outlet lines are more stringent and can have a serious impact on the design.
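One quick way to see why the spread between methods matters is to compare it with an inlet-loss allowance. The short calculation below is illustrative only: the 3-percent-of-set-pressure criterion used in it is the commonly cited industry guideline for conventional safety valve inlet losses and is treated here as an assumption, not as the governing rule for the remotely pilot-operated valves described in this example.

# Illustrative arithmetic only; the 3% inlet-loss guideline is an assumed
# benchmark, not the governing criterion for the pilot-operated PSVs above.
set_pressure_bar = 84.0          # set pressure from the example (barg)
dp_estimates_bar = [0.8, 1.1]    # tee pressure drop range across the four methods

allowance_bar = 0.03 * set_pressure_bar
spread_bar = max(dp_estimates_bar) - min(dp_estimates_bar)

print(f"3% allowance:    {allowance_bar:.2f} bar")
print(f"Method spread:   {spread_bar:.2f} bar "
      f"({100 * spread_bar / allowance_bar:.0f}% of the allowance)")
print(f"Single tee uses: {100 * max(dp_estimates_bar) / allowance_bar:.0f}% "
      f"of the allowance at the upper estimate")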

Table 2. K Values To Find Branch Flow Pressure Drop for Larger Pipe Sizes, Calculated Using Truckenbrodt Method for Reducer Tees with Type 1 Flow
Rows: Area on Reducing Branch of Tee, %. Columns: Flow Through the Reducing Branch, %.

Area, %    2.5    5.0    10.0   20.0   30.0   40.0   50.0   60.0   70.0   80.0   90.0   95.0   100.0
100.0      1.00   1.00   1.00   1.02   1.04   1.07   1.11   1.16   1.22   1.29   1.37   1.41   1.46
95.0       1.00   1.00   1.01   1.02   1.05   1.08   1.13   1.18   1.25   1.32   1.41   1.46   1.51
90.0       1.00   1.00   1.01   1.02   1.05   1.09   1.14   1.20   1.28   1.36   1.46   1.51   1.56
80.0       1.00   1.00   1.01   1.03   1.06   1.11   1.18   1.26   1.35   1.46   1.58   1.64   1.71
70.0       1.00   1.00   1.01   1.04   1.08   1.15   1.23   1.34   1.46   1.60   1.76   1.84   1.93
60.0       1.00   1.00   1.01   1.05   1.11   1.20   1.32   1.46   1.62   1.81   2.03   2.15   2.27
50.0       1.00   1.00   1.02   1.07   1.16   1.29   1.46   1.66   1.90   2.17   2.48   2.65   2.83
40.0       1.00   1.01   1.03   1.11   1.26   1.46   1.71   2.03   2.40   2.83   3.31   3.58   3.86
30.0       1.00   1.01   1.05   1.20   1.46   1.81   2.27   2.83   3.49   4.25   5.11   5.58   6.08
20.0       1.01   1.03   1.11   1.46   2.03   2.83   3.86   5.11   6.60   8.31   10.3   11.3   12.4
10.0       1.03   1.11   1.46   2.83   5.11   8.31   12.4   17.5   23.4   30.2   38.0   42.2   46.7
5.0        1.11   1.46   2.83   8.31   17.5   30.2   46.7   66.8   90.6   118    149    166    184
2.5        1.46   2.83   8.31   30.2   66.8   118    184    264    359    469    593    661    732

CONCLUSIONS

When it is necessary to accurately evaluate the pressure drop in tees, the fitting's flow pattern must be considered. The calculation or approximation method used can significantly affect the system design, depending on the fitting's flow and area ratios.

For small area ratios, 0.125 or below, Truckenbrodt's correlation, including the correction factor proposed by Oka and Ito, yields pressure drop values that best match the experimental values at all flow ratios. Miller's chart cannot be accurately read in this range. Using simplifying assumptions based on sudden contraction or entrance loss results in a conservative, higher pressure drop.

For area ratios higher than 0.125, simplifying assumptions based on sudden contraction or entrance loss generally yield low pressure drops and are not recommended. Miller's chart is commonly used in this region. The extension of Truckenbrodt's correlation with the correction factors proposed by Oka and Ito into this range yields higher K values, as shown in Table 2; as a result, the pressure drops predicted are higher than those from the other methods.

The use of the K values shown in Table 2 is recommended for area ratios lower than 0.125, or when K values are more than 6, or when a single method with a conservative estimate of pressure drop is to be used across all area ratios and flow ratios.
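The selection logic in the conclusions can be summarized in a few lines of Python. The sketch below is a reading aid only; the function name and thresholds simply restate the recommendations above.

def recommended_method(area_ratio: float, k_estimate: float,
                       want_single_conservative_method: bool = False) -> str:
    """Restates the paper's recommendation for choosing a K-value method."""
    if area_ratio < 0.125 or k_estimate > 6 or want_single_conservative_method:
        # Truckenbrodt correlation with the Oka/Ito correction factor
        # (the Table 2 K values); conservative across all area and flow ratios.
        return "Truckenbrodt correlation with Oka-Ito correction"
    # For larger area ratios, Miller's chart is the commonly used source;
    # sudden-contraction or entrance-loss shortcuts tend to underpredict here.
    return "Miller's chart (with Miller's low-Reynolds-number corrections)"

print(recommended_method(area_ratio=0.05, k_estimate=8.3))
print(recommended_method(area_ratio=0.5, k_estimate=1.5))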

REFERENCES
[1] K. Oka and H. Ito, "Energy Losses at Tees with Large Area Ratios," Journal of Fluids Engineering, Transactions of the ASME, Vol. 127, Issue 1, January 2005, pp. 110–116, access via http://asmedl.aip.org/dbt/dbt.jsp?KEY=JFEGA4&Volume=127.

[2] D.S. Miller, Internal Flow Systems, 2nd Edition, Miller Innovations, Bedford, UK, 2008, pp. 208, 302–318, see http://www.internalflow.com/.

[3] Flow of Fluids through Valves, Fittings, and Pipe, Technical Paper No. 410M, Crane Company, New York, NY, 1982, http://www.scribd.com/doc/21335619/Through-valves-Pipes-and-Fittings.

BIOGRAPHIES
Krishnan Palaniappan, a senior process/systems engineer, joined Bechtel in 2005. A technical specialist in syngas facilities, he has over 15 years of industry experience working on the design of petrochemical, refining, and gas processing units. Krishnan has a BTech in Chemical Engineering from the National Institute of Technology, Tiruchirappalli, Tamil Nadu, India. He is a Six Sigma Yellow Belt.

Vipul Khosla, a process/systems engineer, joined Bechtel in 2007 as a graduate engineer and has worked on refinery and LNG projects, including Motiva crude expansion, Angola LNG, and Takreer refinery. Vipul received a BE in Chemical Engineering from Panjab University, Chandigarh, India.

