
INTRODUCTION TO OIL & NATURAL GAS

Introduction
Petroleum has become so interwoven in daily life over the past century that it now supports an
industry accounting for about 40 percent of all energy consumed globally.
Petroleum is used, directly or indirectly, in many applications, including transportation fuel, light,
heat, and the manufacture of consumer products. Daily per capita consumption of
petroleum products in the United States averages nearly 3 gallons, and the country accounts for
more than one-fourth of the petroleum consumed worldwide.
Petroleum is a complex mixture of liquid hydrocarbon compounds containing
hydrogen and carbon, occurring naturally within the earth. Early use of petroleum dates back
many years. Man first became aware of its existence when it was discovered oozing to the
surface from natural seeps. Native Americans used crude oil skimmed from creeks and rivers as
a medicinal ointment. In the 1840s and 1850s, an enterprising Pennsylvanian bottled and sold
Pennsylvania Rock Oil to treat ailments such as rheumatism, gout, and burns. By the time
commercial production began in 1859, oil was in demand chiefly for its ability to produce light, and
kerosene for use in lamps was the principal product. At the turn of the century, the advent of the
automobile shifted the emphasis to motor gasoline. In the 1930s and 1940s, a substantial
market for heating oil developed.
Because petroleum in its raw state has limited uses, further processing of crude oil, via petroleum
refining, is needed to unlock the full potential of this resource. In the mid-1800s, the earliest
refineries used a distillation process to produce kerosene. Today's refineries employ many
processes, from simple distillation to more complex cracking and reforming
operations, to convert crude oil into a wide array of desired products. With approximately 20
percent of the world's crude oil distillation capacity and about 30 percent of the more complex
cracking and reforming capacity, the United States leads the world in the production of petroleum
products. Moreover, motor gasoline alone accounts for about one-half of all U.S. refinery
production (Figure 1.1).
Although petroleum provides many useful products, the most notable are motor gasoline and
heating fuel. Petroleum's many uses in the transportation sector include fuel for
automobiles, trucks, agricultural and industrial machinery, trains, ships, and aircraft. Petroleum is
used to heat homes, offices, and factories and to grow, process, package, distribute,
refrigerate, and cook food. Petroleum is also the source of the synthetic fabric in clothes, as well
as the detergents and dry-cleaning solvents used to clean them.

Automobiles, trucks, and buses are fueled mostly by motor gasoline and diesel fuel.

Moreover, petroleum provides a chemical base for cosmetics and pharmaceutical products as
well as for many plastic products from toys to building materials.
The United States consumes more energy from petroleum than from any other energy source.
Petroleum provides nearly two-thirds more energy than natural gas, nearly three-fourths more
energy than coal, and more than three times the energy provided by all other energy resources combined.
Although the United States is one of the largest petroleum producers, it consumes more
than it produces, requiring net imports of crude oil and products to meet demand. Growing U.S.
petroleum product demand and declining domestic crude oil production have combined to make the
United States increasingly dependent on imports. From its peak of 9.6 million barrels per
day in 1970, domestic crude oil production had declined by nearly one-third, to 6.5 million barrels
per day, by 1997. During the same period, petroleum demand grew by one-fourth, from 14.7
million barrels per day in 1970 to over 18.6 million barrels per day by 1997. Consequently, net
imports of crude oil and products rose from 3.2 million barrels per day in 1970 to 9.2 million
barrels per day by 1997, and U.S. dependence on petroleum net imports grew from about
one-fifth in 1970 to nearly half by 1997.
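The import-dependence figures quoted above follow directly from the production and demand numbers: dependence is simply net imports divided by total demand. The following sketch is illustrative only, using the rounded figures from this paragraph:

```python
# Illustrative check of U.S. petroleum import dependence, using the
# approximate figures quoted in the text (million barrels per day).

def import_dependence(net_imports_mmbd: float, demand_mmbd: float) -> float:
    """Share of total petroleum demand met by net imports."""
    return net_imports_mmbd / demand_mmbd

print(f"1970: {import_dependence(3.2, 14.7):.0%}")  # ~22 percent, "about one-fifth"
print(f"1997: {import_dependence(9.2, 18.6):.0%}")  # ~49 percent, "nearly half"
```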
While crude oil production is declining in the United States, so are proved reserves of crude oil.
The United States holds about a 2 percent share of the world's crude oil reserves. As world reserves
are depleted, the roles of Saudi Arabia and other Middle East suppliers become increasingly important
in international petroleum supply. This was dramatically illustrated by fears of worldwide supply
disruption following Iraq's invasion of Kuwait in August 1990.
A vast transportation network of tankers, barges, pipelines, railways, and trucks moves crude oil
from the field to refineries, where it is processed, and brings refined products to the
consumer. This complex system also links foreign suppliers to domestic customers and provides
access to foreign markets for products from U.S. refineries. Storage facilities hold crude oil and
refined products at various stages as they move through the system. Marine terminals
receive crude oil as it is unloaded from tankers and hold it until it is transported by pipeline or
other means to the refinery for processing. When it reaches the refinery, it is stored again until it
can be processed. After processing, the refined products go to bulk terminals where they are
stored until they are transferred to retail sales outlets and the final consumer. Above-ground
tanks, underground caverns and tanks, and offshore storage are used to hold supplies of crude
oil and petroleum products as they move through the distribution system. The largest
underground storage facilities are part of the U.S. Strategic Petroleum Reserve, containing
petroleum stocks maintained by the Federal Government for use during periods of major supply
disruptions.
The petroleum industry can be divided into two main sectors that are largely functional in nature:
the upstream sector, which includes exploration and production, and the downstream sector, which
includes refining, transportation, and marketing.
The industry is highly integrated, with many firms involved in more than one sector. Large
companies known as majors are fully integrated and may own and operate establishments
involved in all of these sectors. Smaller, nonintegrated companies often referred to as
independents generally specialize in one aspect, such as crude oil exploration
and production or product marketing. The petroleum industry is international in scope, and it has
a complex system of markets where exchanges, long- and short-term sales contracts,
and spot market and futures contracts are used to transfer oil ownership as it passes from one
sector to the next.
The following chapters discuss the role of petroleum products in our society and describe the
workings of the U.S. petroleum supply network from the petroleum resource base
through crude oil exploration and production, refining, transportation and storage, product
marketing, and distribution.

Petroleum Products
We find petroleum products in every area of our lives. They are easily recognized in the gasoline
we use to fuel our cars and the heating oil we use to warm our homes. Less obvious are the uses
of petroleum-based components of plastics, medicines, food items, and a host of other products.
Petroleum products fall into three major categories: fuels such as motor gasoline and distillate
fuel oil (diesel fuel); finished nonfuel products such as solvents and lubricating oils; and
feedstocks for the petrochemical industry such as naphtha and various refinery gases. Demand
is greatest for products in the fuels category, especially motor gasoline.
Petroleum products contribute about 40 percent of the energy used in the United States. This is a
larger share than any other energy source, including natural gas with a 25 percent share,
coal with about a 23 percent share, and the combination of nuclear, hydroelectric, geothermal,
and other sources comprising the remaining 12 percent share. It is projected that petroleum
consumption in the United States will increase by 1.2 percent annually, reaching 24.7 million
barrels per day by the year 2020. Although petroleum consumption will continue to increase
overall, its share of total energy use has shrunk over the past several decades as a result of
conservation efforts, fuel efficiency improvements, and growing use of alternative sources of
energy. While petroleum will undoubtedly remain the Nation's leading energy source for
some time, the need to balance environmental, economic, and energy security objectives has led
policy-makers and planners to seek means of diversifying the sources and reducing the role of
this resource in our overall energy supply.

Petroleum products, especially motor gasoline, distillate (diesel) fuel, and jet fuel, provide
virtually all of the energy consumed in the transportation sector. The industrial sector is the
second largest petroleum-consuming sector and accounts for about 26 percent of all petroleum
consumption in the United States. The residential/commercial and electric utility sectors account
for the remaining 8 percent of petroleum consumption.

Demand for petroleum products in the United States averaged 18.6 million barrels per day in
1997. This represents about 3 gallons of petroleum each day for every person in the country. By
comparison, petroleum demand averaged about 2 gallons per person per day in the early 1950s
and nearly 3.6 gallons per person per day at its peak in 1978.
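The per-capita figure above can be reproduced from total demand and the 42-gallon barrel. The sketch below is illustrative only; the 1997 U.S. population of roughly 270 million is an assumption, not a number given in the text:

```python
# Rough per-capita arithmetic behind the "about 3 gallons per day" figure.
# The 1997 U.S. population of roughly 270 million is an assumption,
# not a number taken from the text.

GALLONS_PER_BARREL = 42

demand_bbl_per_day = 18.6e6   # 1997 U.S. petroleum demand (from the text)
population = 270e6            # approximate 1997 U.S. population (assumed)

gallons_per_person_per_day = demand_bbl_per_day * GALLONS_PER_BARREL / population
print(f"{gallons_per_person_per_day:.1f} gallons per person per day")  # ~2.9
```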
Price levels, economic growth trends, and weather conditions influence the demand for petroleum
products. For example, oil prices affect a consumer's willingness to use petroleum instead of
other fuels such as natural gas. High prices relative to other fuels tend to encourage
fuel switching, especially at electric utilities, in industrial plants having dual-fuel boilers, and in
households that have wood-burning stoves and electric heaters available. High prices
also provide an incentive for individuals to adopt short-term conservation measures, such as
adjusting thermostats and reducing discretionary driving. High prices stimulate long-term
measures as well, such as design changes that increase fuel efficiency in automobiles, improved
insulation in newly constructed and existing buildings, and design changes in appliances
to improve energy efficiency. Once in place, these long-term conservation measures continue to
affect fuel use regardless of subsequent price fluctuations.
Low oil prices tend to stimulate demand. Demand also increases during periods of economic
expansion, particularly in the industrial and transportation sectors, as increases in the
production of goods bring corresponding increases in transportation of raw materials and
deliveries of finished products. Lower prices coupled with economic expansion
stimulated consumption during periods in the mid-1980s and mid-1990s.
Weather extremes (winters that are colder than normal or summers that are warmer than normal)
also increase petroleum demand for heating or for electricity generation for air-conditioning.
Milder weather than normal tends to reduce heating- and air-conditioning-related
demand for petroleum fuels. Weather can also contribute to the seasonal variations in demand for
transportation fuels such as gasoline.
Petroleum demand illustrated the effects of these factors several times during the 1990s. For
instance, the Iraqi invasion of Kuwait on August 2, 1990, caused petroleum demand to sink to
under 17 million barrels per day, its lowest level since 1987. The slowing economy and mild
weather had weakened demand early in the year. Then, following the invasion, prices climbed
rapidly in response to uncertainty over future supplies, with motor gasoline and jet fuel prices

registering dramatic increases. Shortly after the United Nations Security Council approved an
embargo against oil exports from Iraq and Kuwait, the Organization of Petroleum Exporting
Countries (OPEC) adopted a resolution allowing member countries to exceed their production
quotas to make up the difference. As production increases from OPEC and other countries began
to offset the loss of Iraqi and Kuwaiti oil, petroleum prices subsided.

Petroleum Fuels
Fuel products account for nearly 9 out of every 10 barrels of petroleum used in the United States.
The leading fuel, motor gasoline, consistently accounts for the largest share of petroleum
demand (Figure 2.2). Demand for motor gasoline alone accounts for more than 40 percent of the
total demand for petroleum products. Other petroleum fuels include distillate fuel oil (diesel fuel
and heating oil), liquefied petroleum gases (LPGs, including propane and butane), jet fuel,
residual fuel oil, kerosene, aviation gasoline, and petroleum coke.
Motor gasoline is chiefly used to fuel automobiles and light trucks for highway use. Smaller
quantities are used for off-highway driving, boats, recreational vehicles, and various
farm and other equipment.
A number of factors influence the demand for motor gasoline. For example, rising gasoline prices
in the 1970s encouraged consumers to reduce discretionary driving and stimulated consumer
demand for smaller, more fuel-efficient automobiles. The Corporate Average Fuel Economy
(CAFE) standards established by the Energy Policy and Conservation Act of 1975 set mileage
standards for new cars that helped reduce gasoline demand even more as new, more
fuel-efficient cars replaced older, less efficient cars. The effects of the market shift to smaller cars
and the fuel efficiencies resulting from the CAFE standards continued to restrain growth in
gasoline demand through the 1980s. However, by the mid-1990s, fuel efficiency growth had slowed
considerably as low gasoline prices and rising disposable income spurred consumers to buy less
fuel-efficient light trucks, vans, and sport utility vehicles.
Environmental concerns have brought about a number of changes in gasoline composition. To
meet emission standards specified in the Clean Air Act of 1970, automobile manufacturers
introduced catalytic converters requiring unleaded fuel beginning in the 1975 model year.

The Environmental Protection Agency (EPA) issued regulations in 1973 establishing
requirements for the availability of unleaded fuels and, as the new cars entered the fleet,
unleaded gasoline began to displace leaded fuel. EPA continued the lead phase-down, further
restricting the lead content of motor gasoline in 1982, 1985, and 1986. The Clean Air Act
Amendments of 1990 banned lead use entirely, effective January 1, 1996.
As lead was eliminated, the use of other components, such as butane, aromatics, alcohols, and
ethers, to boost gasoline octane increased. Some of these additives like butane increase the
volatility, or evaporative tendency, of gasoline. As gasoline evaporates, contaminants are
released into the atmosphere. EPA and several States have issued regulations to restrict gasoline
volatility in the summer months, when the problem is more severe. Federal regulations restricting
gasoline volatility took effect in 1989. Several States and localities have also begun to require the
use of other additives termed oxygenates in gasoline to reduce carbon monoxide emission levels
during the wintertime. These restrictions and proposals to amend the Clean Air Act encouraged
the industry to develop and market new reformulated gasoline.
The Clean Air Act Amendments of 1990 set standards for reformulated gasoline and mandated its
use in several U.S. cities beginning in 1995.
A number of alternative fuels have been developed for automotive use. Methanol (an alcohol
produced from natural gas, coal, or biomass) and ethanol (an alcohol produced from
biomass) are two alternative fuels that may be viewed as potential replacements for petroleum
products or as additives for use in present or future gasoline formulations.7 Compressed
natural gas, electricity, propane, liquefied natural gas, hydrogen, and solar energy are other
transportation fuel alternatives under consideration and in various stages of development.
Distillate fuel oil includes diesel oil, heating oils, and industrial oils. It is used to power diesel
engines in buses, trucks, trains, automobiles, and other machinery. It is also used to
heat residential and commercial buildings and to fire industrial and electric utility boilers.
Specifications differ for heating oils and diesel fuels based primarily on the sulfur content
of each fuel.
Diesel fuel accounts for about three-fourths of refinery first sales of distillate fuel oils. Most diesel
fuel is used for transportation purposes: highway diesel fuel represents more than half of distillate
fuel sales. Residential heating, the next largest end-use category, represents about 12 percent of
annual distillate use, but is concentrated in the winter months.
Environmental concerns also extended to diesel fuel. The Clean Air Act Amendments of 1990
mandated standards, effective October 1, 1993, limiting diesel fuels designated for on-highway use
to a maximum sulfur content of 0.05 percent by weight.
Liquefied petroleum gases (LPGs) rank third in usage among petroleum products, behind
motor gasoline and distillate fuel oil. LPGs are used as inputs (feedstocks) for petrochemical
production processes. This is their major nonfuel use. LPGs are also used as fuel for domestic
heating and cooking, farming operations, and as an alternative to gasoline for use in internal
combustion engines.
Individual LPG products (see Glossary) have distinct uses. For example, propane is widely used
as a fuel in the residential, commercial, and industrial sectors. It is also important as a
petrochemical feedstock. Ethane is used primarily as a petrochemical feedstock. Butane is used
as a gasoline blending component, although volatility regulations for gasoline have limited its use
for this purpose (see Chapter 5). Butane also has many domestic and industrial uses.
Most jet fuel is a kerosene-based fuel used primarily by commercial airlines. It requires a higher
temperature to ignite and is safer for commercial use than naphtha-based fuel. Naphtha jet fuel
meets the specifications required for certain military aircraft. It has a lower freezing point than
commercial fuel and a lower flash (ignition) point. However, from October 1, 1993, through 1995,
the U.S. military essentially converted most of its jet fleet from naphtha-type jet fuel to
kerosene-type jet fuel. Kerosene-type jet fuel is sometimes blended into heating oil and diesel fuel
during periods of extreme cold weather. This is done to help alleviate the viscosity (thickness),
handling, and performance problems associated with cold weather.

Electric utilities use residual fuel oil to generate electricity. Although this sector uses relatively
little petroleum compared with the transportation and industrial sectors, the electric utility sector
depends on petroleum for about 5 percent of its total energy requirements.

Much of the surplus capacity for electricity generation is oil-fired, so petroleum use by utilities
is expected to increase along with electricity demand. Residual fuel oil is also used as bunker fuel
(fuel for ships), industrial boiler fuel, and heating fuel in some commercial buildings.
Kerosene is used for residential and commercial space heating. It is also used in water heaters,
as a cooking fuel, and in lamps. Kerosene falls within the light distillate range of refinery output
that includes some diesel fuel, jet fuel, and other light fuel oils.
Petroleum coke can be used as a relatively low-ash solid fuel for power plants and industrial use
(marketable coke) if its sulfur content is low enough, or used in nonfuel applications (catalyst
coke), such as in refinery operations.

Nonfuel Products
Nonfuel use of petroleum is small compared with fuel use, but petroleum products account for
about 89 percent of the Nation's total energy consumption for nonfuel uses. There are many
nonfuel uses for petroleum, including various specialized products for use in the textile,
metallurgical, electrical, and other industries. A partial list of nonfuel uses for petroleum includes:
Solvents such as those used in paints, lacquers, and printing inks
Lubricating oils and greases for automobile engines and other machinery
Petroleum (or paraffin) wax used in candy making, packaging, candles, matches, and polishes
Petrolatum (petroleum jelly) sometimes blended with paraffin wax in medical products and
toiletries
Asphalt used to pave roads and airfields, to surface canals and reservoirs, and to make roofing
materials and floor coverings
Petroleum coke used as a raw material for many carbon and graphite products, including
furnace electrodes and liners, and the anodes used in the production of aluminum.
Petrochemical feedstocks derived from petroleum, used principally for the
manufacture of chemicals, synthetic rubber, and a variety of plastics.

Introduction to Oil and Natural Gas


Since Colonel Drake ushered in the era of drilling for oil with his 1859 well in Pennsylvania,
petroleum engineers and other industry professionals have made numerous innovations that
enable the industry to find and produce the resources that power the world economy.
This section of the website is designed to provide an overview of oil and gas exploration and
production for the nontechnical audience. This section is just an introduction, and includes a
generalization of the techniques used. If you would like to learn more about the specific
technologies or processes involved, there are additional resources on this website, along with
links to other information sources.
Oil and natural gas are an important part of your everyday life. Not only do they give us mobility,
they heat and cool our homes and provide electricity. Millions of products are made from oil and
gas, including plastics, life-saving medications, clothing, cosmetics, and many other items you
may use daily.
In the United States, 97% of the energy that drives the transportation sector (cars, buses,
subways, railroads, airplanes, etc.) comes from fuels made from oil.[1] Auto manufacturers are
developing cars to run on alternate fuels such as electricity, hydrogen and ethanol. However, the
electric batteries need to be charged and the fuel to generate the electricity could be oil or gas.
The hydrogen needed for fuel cells may be generated from natural gas or petroleum-based
products. Even as alternative fuels are developed, oil will be crucially important to assuring that
people can get where they need to be and want to go for the foreseeable future. Barring any
increase in the penetration of new technologies, alternative fuels are not expected to become
competitive with oil for transportation before 2025.[3]
In areas of the world that are still developing, businesses and individuals are demanding greater
mobility for themselves and their products. World vehicle ownership is projected to increase from
122 vehicles per thousand people in 1999 to 144 vehicles per thousand in 2020, with the growth
occurring in developing nations. In China, for example, the number of cars has been growing by
20% per year.[3] Airports are being added in these countries as well, expanding jet fuel demand.
Oil is expected to remain the primary fuel source for transportation throughout the world for the
foreseeable future, and transportation fuels are projected to account for almost 57% of total world
oil consumption by 2020.[2]
World population is currently around 6 billion people, but is expected to grow to approximately 7.6
billion by 2020. That will mean a huge increase in the demand for transportation fuels, electricity,
and many other consumer products made from oil and natural gas.
Natural-gas use is growing across all economic sectors. Natural gas burns cleaner than oil or
coal, and this environmental benefit has encouraged its use. While decades ago natural gas was
seen as an unwanted by-product of oil production and may have been wasted, its value is now
recognized. Developing nations with gas reserves are finding this resource invaluable to building
their economies. Most natural gas is distributed by pipelines, which is a limiting factor for remote
resources that are not near the major consuming markets. Some natural gas is chilled to a liquid
state (LNG) so that it can be transported across oceans by tanker. Similarly, there is considerable
development of technology to convert natural gas to liquids (GTL) to enable transportation.
The world economy runs on oil and natural gas. These fuels improve your quality of life by
providing you with transportation, warmth, light, and many everyday products. They enable you to
get where you need to go, they supply products you need, and they create jobs. Without oil and
natural gas, quality of life would decline and people in developing nations would not be able to
improve their standard of living. Does that mean that alternative energy sources are not
necessary? Of course not. But it is important to acknowledge the value of oil and gas to the world
economy and recognize that it will be decades before the alternatives can replace all of the things
that oil and natural gas contribute to our lives.
Oil and natural gas were formed from the remains of prehistoric plants and animals. Hundreds of
millions of years ago, these remains settled into the seas along with sand,
silt, and rocks. As the rocks and silt settled, layer upon layer piled up in rivers, along coastlines,
and on the sea bottom. Geologic shifts resulted in some of these layers being buried deep in the
earth. Over time, the layers of organic material were compressed under the weight of the
sediments above them, and the increasing pressure and temperature changed the mud, sand,
and silt into rock and the organic matter into petroleum. This rock containing the organic matter
that turned into petroleum is referred to as source rock. The oil and natural gas is contained in the
tiny pore spaces in these source rocks, similar to water in a sponge.
Over millions of years, the oil and gas that formed in the source rock deep within the earth moved
upward through tiny, connected pore spaces in the rocks. Some seeped out at the surface of the
earth. But most of the petroleum hydrocarbons were trapped by nonporous rocks or other barriers
that would not allow them to migrate any further. These underground traps of oil and gas are called
reservoirs. Reservoirs are not underground "lakes" of oil; reservoirs are made up of porous and
permeable rocks that can hold significant amounts of oil and gas within their pore spaces. The
properties of these rocks allow the oil and natural gas within them to flow through the pore spaces
to a producing well.
Some reservoirs may be only hundreds of feet below the surface. Others are thousands, and
sometimes tens of thousands of feet underground. In the U.S., a few reservoirs have been
discovered at depths greater than 30,000 feet (9.15 km). Many offshore wells are drilled in
thousands of feet of water and penetrate tens of thousands of feet into the sediments below the
sea floor.
Most reservoirs contain oil, gas, and water. Gravity acts on the fluids to try to separate them in
the reservoir according to their density, with gas being on top, then oil, then water. However,
other parameters, such as fluid/rock properties and solubility, will restrict complete gravitational
separation. When a well produces fluids from a subsurface reservoir, it typically recovers oil and
water, and often some gas as well.
The larger subsurface traps are the easiest deposits of oil and gas to locate. In mature production
areas of the world, most of these large deposits of oil and gas have already been found, and
many have been producing since the 1960s and 1970s. The oil and gas industry has developed
new technology to identify and access smaller, thinner bands of reservoir rock that may contain
oil and gas. Improved seismic techniques (such as 3D seismic) have improved the odds of
correctly identifying the location of these smaller and more difficult to find reservoirs. There is still
a lot of oil and gas left to be discovered and produced. Future discoveries will be in deeper
basins, and in more remote areas of the earth. There will also be a lot of small reservoirs found in
existing oil and gas areas using advanced technologies.

Technological innovation not only makes it easier to find new deposits of oil and gas, but to get
more oil or gas from each reservoir that is discovered. For example, new drilling techniques have
made it feasible to intersect a long, thin reservoir horizontally instead of vertically, enabling the oil
or gas from the reservoir to be recovered with fewer wells. Technology advances have greatly
enhanced the oil and gas industry's ability to find and recover more of the finite amount of oil and
gas created millions of years ago.
Through the early 1900s, finding oil and gas was largely a matter of luck. Early explorers looked
for oil seeps to the surface, certain types of rock outcrops, and other surface signs that oil might
exist below ground. This was a "hit or miss" process.
But science and technology quickly developed to improve the industry's ability to "see" what lies
below ground. Seismic technology uses the reflection of sound waves to identify subsurface
formations. A crew working on the surface sets geophones at intervals along a straight line. Then
a loud noise is created at the surface. The noise moves through the ground and reflects off of
underground formations. How quickly and loudly that sound is reflected to the geophones
indicates what lies below ground. This process is repeated many times. Different types of
formations reflect sound differently, providing a picture of the types of rocks that lie below. If the
geophones are laid out in straight lines, the results are called 2-dimensional (2D) seismic. If they
are in a grid pattern, the result is called 3-dimensional (3D) seismic. Reading 2D seismic images
to find possible traps and reservoir rocks was as much art as science. Today, sophisticated
technology and high-speed computers help geophysicists process massive amounts of seismic
data. From these data, they can develop three-dimensional underground maps that significantly
improve the industry's ability to locate possible oil or gas deposits. But until a well is drilled, it is
impossible to know for certain whether the resource is there, whether it is oil or gas, and whether
it can be recovered in commercial quantities.
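The underlying relationship in reflection seismology is straightforward: if an average sound velocity for the overlying rock is assumed, the depth of a reflecting formation can be estimated from the two-way travel time. The sketch below illustrates only this core idea; the velocity and travel time are assumed values, and real seismic processing handles layered velocities, thousands of traces, and noise:

```python
# Simplified depth estimate from a seismic reflection: depth = velocity * time / 2.
# Real processing handles layered velocities, many geophone traces, and noise;
# the velocity and two-way travel time used here are assumed values.

def reflector_depth_m(two_way_time_s: float, avg_velocity_m_per_s: float) -> float:
    """Estimated depth (m) of a reflector from its two-way travel time."""
    return avg_velocity_m_per_s * two_way_time_s / 2.0

print(f"{reflector_depth_m(2.0, 3000.0):,.0f} m")  # a 2 s reflection at 3,000 m/s -> 3,000 m
```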
Once a company identifies where the oil or gas may be located, it then begins planning to drill an
exploratory well. Drilling a well is expensive; shallow offshore wells or deep onshore wells can
cost more than U.S. $10 million each to drill. In deep water offshore, or in remote areas such as
the Arctic, wells can cost substantially more. Companies must analyze all of the available
information in determining whether, and where, to drill an exploration well.
Before the technology advances of the past few decades, the best place to put a well was directly
above the anticipated location of the oil or gas reservoir. The well would then be drilled vertically
to the targeted oil or gas formation. Technology now allows the industry to drill directionally from a
site up to 5 miles (8 km) away from the target area. Computers on the drilling rig and steering
equipment in the drill-bit assembly enable guiding the wellbore with such accuracy that it can
target an area the size of a small room more than a mile underground. This directional drilling
technology means that the industry can avoid placing wells in environmentally sensitive areas or
other inaccessible locations, yet still access the oil or gas that lies under those areas. Advanced
drilling technologies, linked with satellite communications, mean that an engineer can monitor and
guide drilling operations in Peru, in real time, from an office in Houston.
In simplified terms, the drilling process uses a motor, either at the surface or downhole, to turn a
string of pipe with a drill bit connected to the end. The drill bit has special "teeth" to help it crush
or break up the rock it encounters to make a hole in the ground. These holes can vary in diameter
from a few inches to approximately two feet (0.6 m), but are usually in the range of 8 to 12 inches
(20-30 cm). While the well is being drilled, a fluid, called "drilling mud," is circulated down the
inside of the drillpipe, passes through holes in the drill bit, and travels back up the wellbore to the
surface. The drilling mud has two purposes: 1) it carries the small bits of rock (cuttings) from the
drilling process to the surface so they can be removed, and 2) it fills the wellbore with fluid to
equalize pressure and prevent water or other fluids in underground formations from flowing into
the wellbore during drilling. Water-based drilling mud is composed primarily of clay, water, and
small amounts of chemical additives to address particular subsurface conditions that may be

encountered. In deep wells, oil-based drilling mud is used because water-based muds cannot
stand up to the higher temperatures and conditions encountered. The petroleum industry has
developed technologies to minimize the environmental effects of the drilling fluids it uses,
recycling as much as possible. The development of "green" fluids and additives is an important
area of research for the oil and gas industry.
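The pressure-balancing role of drilling mud can be illustrated with the standard hydrostatic-pressure relationship used in well planning; the mud weight and depth in the sketch below are assumed, illustrative values rather than figures from the text:

```python
# Hydrostatic pressure of a column of drilling mud, the quantity the text
# refers to when it says the mud "equalizes pressure" in the wellbore.
# The factor 0.052 converts mud weight in pounds per gallon and depth in
# feet to psi. Mud weight and depth below are assumed, illustrative values.

def mud_hydrostatic_psi(mud_weight_ppg: float, depth_ft: float) -> float:
    """Pressure (psi) exerted at depth by the mud column."""
    return 0.052 * mud_weight_ppg * depth_ft

# 10 lb/gal mud at 10,000 ft exerts about 5,200 psi; to keep formation fluids
# out of the wellbore this must exceed the formation pressure.
print(f"{mud_hydrostatic_psi(10.0, 10_000):,.0f} psi")
```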
Even with the best technology, drilling a well does not always mean that oil or gas will be found. If
oil or gas is not found in commercial quantities, the well is called a dry hole; it will be plugged with
cement. Sometimes, the well encounters oil or gas, but the reservoir is determined to be unlikely
to produce in commercial quantities.
Technology has increased the success rate of finding commercial oil or gas deposits. In the U.S.,
for example, dry holes still accounted for 13% of all wells drilled in 2003. But this compares with
37% in 1973, 32% in 1983 and 26% in 1993. For wells seeking new deposits of oil or gas
(exploratory wells), technology has decreased the dry hole rate from 78% (only 22% of wells
finding commercial quantities of oil or natural gas) in 1973 to 56% in 2003.[1] The use of better
seismic and drilling technologies means fewer wells are required to add to the world's oil and
gas supplies.
New and better technology has made it possible for the industry to continue finding oil and gas
with fewer wells, less waste, less surface disturbance, and greater efficiency.
Once an oil or gas reservoir is discovered and assessed, production engineers begin the task of
maximizing the amount of oil or gas that can ultimately be recovered from it. Oil and gas are
contained in the pore spaces of reservoir rock. Some rocks may allow the oil and gas to move
freely, making it easier to recover. Other reservoirs do not part with the oil and gas easily and
require special techniques to move the oil or gas from the pore spaces to a producing well. Even
with today's advanced technology, in some reservoirs more than two-thirds of the oil in the
reservoir rocks may not be recoverable.
Before a well can produce oil or gas, the borehole must be stabilized with casing, lengths of pipe
that are cemented in place. The casing also serves to protect any fresh water intervals that the
well passes through, so that oil cannot contaminate the water. A small-diameter tubing string is
centered in the wellbore and held in place with packers. This tubing will carry the hydrocarbons
from the reservoir to the surface.
Reservoirs are typically at elevated pressure because of underground forces. To equalize the
pressure and avoid the "gushers" of the early 1900s, a series of valves and equipment is installed
on top of the well. This wellhead, or "Christmas tree," as it is sometimes called, regulates the flow
of hydrocarbons out of the well.
Early in its production life, the underground pressure will often push the hydrocarbons all the way
up the wellbore to the surface, much like a carbonated soft drink that has been shaken.
Depending on reservoir conditions, this "natural flow" may continue for many years. When the
pressure differential is insufficient for the oil to flow naturally, mechanical pumps must be used to
bring the oil to the surface. This process is referred to as artificial lift. In the U.S., above-ground
pumping units are often called "horsehead" pumps because of their unique shape and movement.
Most wells produce in a predictable pattern called a decline curve. Production will increase for a
short period, then peak and follow a long, slow decline. The shape of this decline curve, how high
the production peaks, and the length of the decline are all driven by reservoir conditions. Some
wells may stop producing in economic quantities in only a few years. In the U.S., 8 oil and gas
fields have been producing for more than 100 years.
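The text does not specify a particular decline model, but decline curves are commonly described with simple equations such as the exponential form sketched below; the initial rate and decline rate are assumed, illustrative values:

```python
import math

# Exponential decline-curve sketch: q(t) = qi * exp(-D * t).
# The initial rate and decline rate are assumed, illustrative values;
# real wells may follow other decline shapes.

def production_rate(qi_bbl_per_day: float, decline_per_year: float, years: float) -> float:
    """Production rate after a given number of years, assuming exponential decline."""
    return qi_bbl_per_day * math.exp(-decline_per_year * years)

qi, decline = 500.0, 0.15   # assumed: 500 B/D initial rate, 15 percent/year decline
for year in (0, 1, 5, 10, 20):
    print(f"year {year:>2}: {production_rate(qi, decline, year):6.1f} B/D")
```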

Engineers can do a variety of things to affect a well's decline curve. They may periodically
perform an operation called a "workover," which cleans out the wellbore to help oil or gas move
more easily to the surface. They may fracture or treat the reservoir rock with acid around the
bottom of the wellbore to create better pathways for the oil and gas to move through the
subsurface to the producing well.
As a field ages, the company may choose to use a technique called waterflooding. In this case,
some of the wells in the field are converted from production wells to injection wells. These wells
are used to inject water (often produced water from the field) into the reservoir. This water tends
to push the oil out of the pores in the rock toward the producing well. Waterflooding will often
increase production from a field.
In more advanced cases, the company may use more sophisticated techniques, collectively
referred to as enhanced oil recovery (EOR). Depending on reservoir conditions, various
substances [from steam to nitrogen, carbon dioxide to a surfactant (soap)] may be injected into
the reservoir to remove more oil from the pore spaces and increase production.
Throughout their productive life, most oil wells produce oil, gas, and water. This mixture is
separated at the surface. Initially, the mixture coming from the reservoir may be mostly oil with a
small amount of water. Over time, the percentage of water increases. On average in the United
States, oil wells produce 8 barrels of water for each barrel of oil. Some older wells may produce
as much as 100 barrels of water for each barrel of oil. This produced water varies in quality from
very briny to relatively fresh. In arid areas of the western U.S., produced water may be used for
agricultural purposes, such as livestock watering or irrigation. Where it cannot be used for other
purposes, this produced water may be reinjected into the reservoir either as part of a
waterflooding project or for disposal (returning it to the subsurface).
Natural gas wells usually do not produce oil, per se, but do produce some amount of liquid
hydrocarbons. These natural gas liquids are removed in the field or at a gas processing plant
(which may remove other impurities as well). Natural gas liquids often have significant value as
petrochemical feedstocks. Natural gas wells also often produce water, but the volumes are much
lower than is typical for oil wells.
Once it is produced, oil may be stored in a tank and later moved by means of truck, barge, or ship
to where it will be sold or enter the transportation system. Most often, however, it goes from the
separation facilities at the wellhead directly into a small pipeline, which then feeds into a larger
pipeline. Often, pipelines are used to bring the production from offshore wells to shore. Pipelines
may transfer oil from a producing field to a tanker loading area for shipping. Pipelines also may
be used to move oil from a port area to a refinery to be processed into gasoline, diesel fuel, jet
fuel, and many other products.
Natural gas is almost always transported through a pipeline. Because of the difficulty in moving it
from where it exists to where potential consumers are, some known gas deposits are not
currently being produced. Many years ago, such gas may have been wasted as an unwanted
by-product of oil production. The industry recognizes the value of clean-burning natural gas and is
working on improved technologies for getting gas from the reservoir to the consumer. Gas-to-liquids
(GTL) is an area of technology development that will allow natural gas to be converted to a
liquid and transported by tanker. Some countries have installed facilities to export gas as liquefied
natural gas (LNG), but the number of countries with facilities to use LNG is still limited.
The oil and natural gas that power our homes, transportation, and businesses are produced in
more than 100 countries around the world. Most of those countries produce both oil and natural
gas; a few produce only natural gas.

The ten largest oil producing countries in 2002 were: Saudi Arabia, Russia, United States, Iran,
China, Mexico, Norway, Venezuela, United Kingdom, and Canada.[1] Many factors can affect the
level of production, such as civil unrest, national or international politics, adherence to quotas, oil
prices, oil demand, new discoveries, and technology development or application.
In 2002, the ten largest natural gas producing countries were: Russia, United States, Canada,
United Kingdom, Algeria, Netherlands, Iran, Indonesia, Norway, and Uzbekistan.[1] Natural gas is
difficult to transport over long distances. Thus, in most countries, natural gas is consumed within
the country or exported to a neighboring country by pipeline. Technology for liquefying natural
gas so that it can be transported in tankers (like oil) is improving, but the volume of natural gas
exported in this manner is still limited. As technology expands the options for gas transportation,
demand for natural gas is expected to grow.
The Energy Information Administration (EIA) [U.S. Department of Energy] estimates that world oil
production in 2002 was 66.8 million barrels per day (B/D), or 24.4 billion barrels for the year. EIA
estimates that dry natural gas production was 92.3 trillion cubic feet in 2002.
World oil production comes from more than 830,000 oil wells.[2] More than 520,000 of these wells
are in the United States, which has some of the most mature producing basins in the world. On
average, an oil well in the United States produces only 11 B/D, compared with 199 B/D in Russia,
3,674 B/D in Norway, and 5,404 B/D for a well in Saudi Arabia. Comparable data for natural gas
wells are not readily available.
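These per-well averages are simply total production divided by the number of producing wells. The sketch below checks the U.S. figures quoted above; because the text gives rounded numbers, the implied total is approximate:

```python
# Average well rates are total production divided by well count.
# Using the rounded U.S. figures from the text:

us_oil_wells = 520_000        # producing U.S. oil wells (from the text)
avg_rate_bbl_per_day = 11     # average rate per U.S. well (from the text)

implied_us_production = us_oil_wells * avg_rate_bbl_per_day
print(f"Implied U.S. oil production: {implied_us_production / 1e6:.1f} million B/D")  # ~5.7
```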
A growing percentage of the world's production is from offshore areas, such as the Gulf of
Mexico, the North Sea, western Africa (Angola, Nigeria), and Asia (China, Vietnam, and
Australia). Offshore production represents significant technical challenges, yet technology
advances have enabled the industry to increase offshore production dramatically in the past
decade. Remotely operated vehicles (ROVs) that can maintain wellheads and equipment on the
ocean floor are just one example of the technology that has expanded the world's producing
horizons.
Oil and natural gas exist in the pore spaces of rock in the subsurface of the earth. How much oil
or gas can be recovered from the rock is a function of rock properties, technology, and
economics. Even when it is technically feasible to remove oil or gas from a specific reservoir, the
costs involved in doing so may exceed the value of the oil or gas recovered at projected prices. In
this case, the oil or gas is uneconomic and will not be developed.
The total amount of oil or gas in the reservoir is called original oil- or gas-in-place. For a specific
reservoir, engineers estimate this amount with information about the size of the reservoir trap and
properties of the rock (which can be sampled and tested). Some of the original oil and gas
deposited millions of years ago has been discovered, while some remains undiscovered (the
target of future exploration).
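One common way engineers make such an estimate is the volumetric method, which combines the size of the reservoir with rock and fluid properties. The formula in the sketch below is a standard textbook relationship rather than one given in the text, and every input value is assumed purely for illustration:

```python
# Volumetric estimate of original oil-in-place (OOIP), a standard textbook
# method; all input values below are assumed purely for illustration.

ACRE_FT_TO_BBL = 7758  # reservoir barrels contained in one acre-foot of rock

def ooip_stb(area_acres: float, thickness_ft: float, porosity: float,
             water_saturation: float, formation_volume_factor: float) -> float:
    """OOIP in stock-tank barrels: 7758 * A * h * phi * (1 - Sw) / Bo."""
    return (ACRE_FT_TO_BBL * area_acres * thickness_ft * porosity
            * (1.0 - water_saturation) / formation_volume_factor)

# Assumed example: 640 acres, 50 ft thick, 20% porosity, 30% water saturation,
# formation volume factor of 1.2 reservoir barrels per stock-tank barrel.
print(f"OOIP: {ooip_stb(640, 50, 0.20, 0.30, 1.2) / 1e6:.0f} million STB")  # ~29
```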
Discovered (or known) resources can be divided into proved reserves and prospective or
unproved (probable and possible) resources. "Proved reserves" are the quantities of oil or gas
in known reservoirs that are expected to be recoverable with current technology and under current
economic conditions. Prospective resources are those that may be recoverable in the future with
advanced technologies or under different economic conditions.
A primary source for worldwide reserves estimates is the Oil & Gas Journal (OGJ).[1] OGJ
estimates that at the beginning of 2004, worldwide reserves were 1.27 trillion barrels of oil and
6,100 trillion cubic feet of natural gas. These estimates are 53 billion barrels of oil and 575 trillion
cubic feet of natural gas higher than the prior year, reflecting additional discoveries, improving
technology, and changing economics.

The countries with the largest amounts of remaining oil reserves are: Saudi Arabia, Canada, Iran,
Iraq, Kuwait, United Arab Emirates, Venezuela, Russia, Libya, and Nigeria.[1] The largest
reserves of natural gas are found in: Russia, Iran, Qatar, Saudi Arabia, United Arab Emirates,
United States, Algeria, Nigeria, Venezuela, and Iraq.[1] The maps illustrate the distribution of
remaining reserves around the world.
At 2003 consumption levels [2], the remaining reserves represent 44.6 years of oil and 66.2 years
of natural gas. Does this mean that the world will be out of fossil fuels in 50 years or so? That
theory has been around since the 1970s. In fact, the figures for years of remaining reserves have
remained relatively constant over the past few decades as the industry has replaced consumption
with newly discovered oil and gas deposits and has developed technologies to increase the
amount of oil and gas that can be recovered from existing reservoirs.
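The "years of remaining reserves" figure is a reserves-to-production (R/P) ratio: current reserves divided by the current rate of consumption, assuming no new discoveries. The sketch below reproduces the oil figure; the annual consumption value is back-calculated from the numbers in the text and is approximate:

```python
# Reserves-to-production (R/P) ratio: years that current reserves would last
# at the current consumption rate, with no new discoveries or reserve growth.
# The annual consumption figure is back-calculated from the text and approximate.

oil_reserves_bbl = 1.27e12               # worldwide reserves, start of 2004 (from the text)
oil_consumption_bbl_per_year = 28.5e9    # approximate 2003 world consumption (assumed)

rp_years = oil_reserves_bbl / oil_consumption_bbl_per_year
print(f"R/P ratio: {rp_years:.1f} years")  # ~44.6, matching the figure in the text
```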
As noted above, three factors affect the amount of oil or gas that can be recovered from a known
reservoir: rock properties, technology, and economics. While the industry cannot change the
properties of the rock, it can develop new techniques to remove more oil from the rock. The
industry has made significant advances to enhance recovery from known reservoirs, adding to
the reserves base. When prices rise, marginal reservoirs can be developed economically, adding
to the reserve base.
Reserves will also grow as more oil and gas deposits are found around the world. Continental
North America and much of continental Europe have already been explored heavily, and any new
discoveries are likely to be small. But many areas of the globe are largely unexplored and many
large new deposits are waiting to be found. Companies have experienced major success in
discovering significant new oil and gas reservoirs offshore Brazil, in the Gulf of Mexico, in Alaska,
off the western coast of Africa, in Russia, and in many areas of Asia and the Pacific. These are just a few
of the current areas of growth. Most observers agree that significant deposits of oil and gas
remain undiscovered in the Middle East.
No one can know for certain how much oil and gas remains to be discovered. But geologists
sometimes make educated guesses. For example, the U.S. Geological Survey (USGS) conducts
periodic assessments of U.S. mineral resources. In its most recent assessment (1995), the USGS
estimated that the onshore U.S., including Alaska, has undiscovered, technically recoverable
resources of 112.3 billion barrels of oil and 1,074 trillion cubic feet of natural gas. In a separate
assessment of offshore resources completed in 2000, the U.S. Minerals Management Service
(MMS) estimated that 75 billion barrels of oil and 362 trillion cubic feet of natural gas underlie the
areas off the coasts of the U.S. The USGS and MMS resource assessments make clear that,
despite being a very mature producing area, substantial resources still exist in the U.S.
World oil resources to 2025 may be more than two times current reserves, based on an estimate
from the U.S. Energy Information Administration (EIA) using USGS data. Reserve growth of 730
billion barrels accounts for new discoveries and the expansion of what can be recovered from
known reservoirs due to advances in technology and improvements in economics. But EIA
estimates that in 2025, countries around the globe will still have more than 900 billion barrels of
oil remaining to be discovered. EIA estimates total world oil resources at more than 2.9 trillion
barrels of oil.
The oil and gas industry uses advanced technology to aid in the search for the resources that will
meet growing world energy needs. Technology advances enable more accurate drilling and
extraction of a higher percentage of oil and gas from each field, extending the life of each well.
Advanced technology also allows development of resources that were not previously
economically viable, such as deep-sea fields, unconventional natural gas, and oil and gas in very
deep reservoirs. Together, these new sources of oil and gas will replace production from existing
wells as they decline, and help to assure adequate oil and gas supplies to meet world energy
needs for the foreseeable future.

When many people think of the oil and natural gas industry, they think of an old, dirty, staid
industry using technology from decades ago. The images they see on television or in the movies
show oil gushing madly out of a wellbore or a mechanical drilling rig with three men in dirty clothes
throwing a chain around drill pipe. Most people do not realize how outdated these images are.
Oil and gas is a very technology-oriented industry; many techniques developed by the industry
are now used in other industries, including the space program. Technological innovations have
made it possible for the oil and gas industry to supply the fuels that power the world economy.
The development and application of advanced technology is vital to the modern industry's task of
finding and developing oil and gas resources. The reservoirs are covered with thousands of feet
of rock, which makes it difficult to "see" the deposits. But the development of three-dimensional (3D)
seismic, coupled with significant increases in computational power, allows the industry to develop
fairly accurate models of the subsurface. While these models can be viewed on a desktop
computer, others are viewed in huge theaters with curved screens that can be used to project
images in three dimensions. These 3D visualization centers allow technical personnel to see into
the subsurface and explore what is there. 3D seismic has enabled the industry to improve its
success rate, meaning that reserves are found with fewer wells, less waste, and less surface
disturbance.
New and better technology has made it possible for the industry to economically develop large oil
and gas deposits offshore. Drilling oil and gas wells in thousands of feet of water adds
significantly to the complexity, cost, and potential risks. However, technological innovations have
enabled the industry to overcome the added challenges. Wells are routinely drilled in 5,000 feet
(1,525 m) of water, and in 2003 a well was drilled in the Gulf of Mexico in 10,011 feet
(3,050 m) of water.[1] After penetrating the sea floor, these wellbores extend thousands of feet
below the ocean floor.
Such wells are drilled from a ship that uses dynamic-positioning technology. A series of small
thrusters, combined with global positioning system (GPS) technology, allows the drillship to remain
essentially stationary, shifting less than 50 feet (15.25 m) in any direction despite wind and water
currents. This stability allows the ship to drill in very deep water and in most weather conditions.
Offshore platforms are very expensive to build and install. The technology that goes into
designing these structures, building them in a shipyard, and then transporting and installing them
is significant. But if every offshore field required a platform, many offshore resources would be
uneconomic to develop, so technology has evolved to place some of the equipment for producing
the oil or gas on the seabed. The produced fluids are piped to a nearby platform for processing.
These subsea completions are maintained by underwater vehicles called remotely operated
vehicles, or ROVs, that are operated by a worker on the platform nearby.
Advanced technology is also important to develop resources in remote and environmentally
sensitive locations such as the Arctic. The exploration phase of development (seismic,
exploratory drilling) is conducted during the winter months to minimize disturbance to the
environment and wildlife. Ice roads and drilling pads are constructed for use in these operations.
When spring arrives the ice melts, leaving little or no trace of the operations. If oil or gas is found,
technology has made it possible for the operations to have a very small "footprint." Newer Arctic
developments use less than 40% of the space that was required to develop Alaska's Prudhoe
Bay field. A new type of drilling platform was tested during early 2003 that raises drilling activities
above the tundra, with only the support legs contacting the surface, further reducing potential
impact.
Innovations in technology are expanding the depth horizons for exploration. Subsurface
temperatures and pressures increase with depth, so that a depth is eventually reached that is
beyond the capabilities of conventional equipment. But industry has worked diligently to develop
equipment made from space-age titanium alloys that can withstand the high temperatures and

high pressure (HT/HP) in very deep wells. The electronics needed to guide drilling operations and
provide feedback on what is encountered downhole have been insulated to withstand HT/HP. As
a result of these innovations, the industry now can develop fields with temperatures of 400°F
(204°C) and pressures of 16,000 psi (11,000 N/cm²).
Technology allows the industry to get more oil or gas out of each deposit that it finds. Newer
stimulation technologies, treatment fluids and enhanced recovery techniques enable the oil or gas
to move more easily to producing wells. Hydraulic fracturing techniques create small cracks from
the wellbore into the reservoir rock. These fractures serve as a "highway" for the hydrocarbons to
reach producing wells. Horizontal-drilling technologies allow a reservoir to be penetrated
horizontally rather than vertically, opening more of the reservoir to the well and enhancing
recovery.
Technology has enhanced environmental protection as well. Directional drilling provides greatly
increased flexibility in well placement, so that a well can be placed in the area where it will have
the least possible environmental effect and still reach a reservoir that might be miles away
laterally. Several wells can be drilled from a single location, dramatically decreasing the amount
of land surface area required to develop a field. Newer synthetic-based drilling fluids have been
developed for applications that previously required oil-based fluids, reducing toxicity, oil usage,
and oily wastes that must be disposed of. Coiled-tubing drilling units are smaller, use less space,
create less visual disturbance, make less noise, use less energy, and reduce waste volumes.
When offshore platforms have reached the end of their useful life they may be removed for
recycling or appropriate disposal, or they may be relocated for beneficial use as artificial reefs.
These artificial reefs expand valuable fish habitats in areas lacking natural reefs (Gulf of Mexico,
Thailand, other areas).
Technological innovation has been the hallmark of the petroleum industry from its earliest days.
Petroleum engineers and geologists are constantly challenged to learn more about where oil and
gas are found, how to get the rocks to give up the hydrocarbons they contain, how to get the oil or
gas out of the ground efficiently, and how to do all of it while minimizing environmental impacts.
An important part of SPE's mission is to assist the industry in this process through the collection
and dissemination of technical information. By learning what others have done successfully or
even tried and failed, engineers are empowered to make the next technological breakthrough that
will continue to improve the industry's ability to produce the oil and gas that the world needs.
A presentation (1.94MB), courtesy of the American Petroleum Institute, discusses some of these
technical advances and provides illustrations. These slides are based on posters that were
displayed in the Russell Senate Office Building rotunda during February 2002.
Industry practices to operate safely and to protect the environment have evolved significantly in
the past few decades. Improvements in technology enable us to conduct many aspects of our
operations far more efficiently than just a decade ago. This efficiency translates to smaller
"footprints" (the amount of surface area disturbed), less waste generated, cleaner and safer
operations, and greater compatibility with the environment.
In 1999, the U.S. Department of Energy (DOE) issued a report titled Environmental Benefits of
Advanced Oil and Gas Exploration and Production Technology. After an extensive review of
exploration and production technologies developed in the past decade or two, DOE found
numerous environmental benefits from these technologies. Examples cited in the DOE report
include the following.
• Directional drilling technology provides access to oil and gas resources that underlie sensitive areas, such as wetlands, from an area nearby where a drilling site can be constructed with minimal effect on the environment.
• To avoid potential harm during platform installation and decommissioning, the industry conducts area surveys before and during operations to assure that no endangered species or large marine mammals are in the area.
• In the Arctic, companies build ice roads and ice drilling pads to conduct their operations. These structures melt away in the spring, leaving no sign that they ever existed.
• Companies have substantially reduced the amount of land disturbance required for drilling a well, and by drilling several wells from a single location (with directional or multilateral technology) fewer sites are required to achieve the same level of production.
• When drilling in an area that was home to a grizzly bear population, one company restricted human access by using remote telemetry technology to monitor the wellhead, controlled hours of road use, drilled only during the winter months, and installed muffling devices on certain pieces of equipment to reduce noise. Many other examples of modifying operations to protect certain species were cited.
• If drilling operations are unsuccessful, companies plug the well below the surface with cement, then recontour and revegetate the surface as nearly as possible to predrilling conditions (as required by landowners and state or federal agencies, who often must approve the company's completion of restoration activities). In many cases, within a very short period of time, it is virtually impossible to tell that a well was ever drilled in an area.
While these examples are from the United States, the same technologies are used to produce
similar benefits in other parts of the world.
Oil and gas companies have developed and implemented sophisticated management systems
that spell out the procedures that are required by employees and contractors to operate safely
and to protect the environment. Over the past few years, these management systems have been
extended to include social responsibility and ethical considerations. The term sustainable development is often used to describe this effort to balance economic, environmental, and social challenges.
Oil and gas companies and governments are very sensitive to the effect of development on local
communities and the need to enter constructive dialogue with the community. In some areas,
companies may drill water wells to provide water both for industry operations and the local
community or build clinics to serve the local health needs. A recent report from the United Nations
Environmental Program (UNEP) describes many examples of companies investing to improve
local roads, access to health care, and education, which benefit the community.
Oil spills are an environmental concern that many people associate with the oil and gas industry.
In reality, the exploration and production of oil and gas rarely creates an oil spill, and when one
does occur, it typically involves less than 1 barrel of oil.[1] Most oil spills are associated with transportation, particularly the tankers that are used to move oil from where it is produced to where consumers need it. But oil spills from transportation have declined significantly
over the past few years, and the growing use of double-hulled tankers provides extra protection
while oil is transported.[2] Another source of oil spills during transportation is pipelines.
Unfortunately, a major reason for spills from pipelines in developing countries is civil unrest, such
as pipeline ruptures caused by local residents. The U.S. National Research Council, in its 2002 study Oil in the Sea III, estimated that, worldwide, less than 15% of the oil in the world's oceans comes from industry sources, including transportation.[3] By far, the largest sources of oil
pollution are urban runoff and natural seeps.
To illustrate their efforts to protect the environment and demonstrate their performance
improvement, many of the multinational oil companies issue environmental reports either as part
of, or as supplements to, their annual reports. If you are interested in these reports, please check
the websites of these companies. In addition, many of the associations that serve the oil and gas
industry issue environmental reports demonstrating industry's environmental performance. A few
of these include: International Association of Oil & Gas Producers, American Petroleum Institute, International Petroleum Industry Environmental Conservation Association, Australian Institute of Petroleum, and the Canadian Association of Petroleum Producers. These reports provide real
examples and measurements of the ways the oil and gas industry is working to protect the
environment.
The industry has shown repeatedly that energy production and environmental protection are not
mutually exclusive. The industry can produce the oil and gas that give consumers the freedom and mobility they demand and the warmth and light they need to survive, while still preserving the natural beauty of the environment.

Frontiers of Technology
Celebrating 50 Years of the Best Technology
Like all good products and services, JPT was created to fill a need. The American Institute of
Mining & Metallurgical Engineers (AIME), from which today's Society of Petroleum Engineers International evolved, had as part of its membership a small but growing number of petroleum engineers. These members had specific technological issues far removed from their peers in the older, more-established engineering disciplines. The AIME Petroleum Branch Executive Committee (whose members represented such companies as The Pure Oil Co., Dowell Inc., Humble Oil and Refining Co., and The Texas Co.) determined that the best way to meet this group's needs was to develop a publication "so dominantly petroleum as to secure wide reader interest," one that would rapidly gain the respect of the industry. Over the years, that goal was achieved. JPT came to be recognized worldwide as the source for authoritative technical information in the petroleum industry. The first technical paper published in JPT was "A Hydraulic Process for Increasing the Productivity of Wells," by J.B. Clark. The hydraulic fracturing technique for well stimulation that Clark discussed was responsible for the subsequent recovery of billions of barrels of oil. Complementing the technology of the technical papers were articles relating to the oil and gas business. Titles appearing in the first issue included "Private Financing of Oil Producing Properties" and "Middle Eastern Oil and Its Importance to the World."
JPT, like the profession and the industry it serves, has evolved over the years. But some things in the publishing business never change, as indicated by this January 1949 statement from I.W. Alcorn, Chairman of the Petroleum Branch Executive Committee, concerning JPT's purpose: "We entertain no illusion of our ability to please everyone all the time, but since the Journal is for petroleum engineers, we believe that with your help we can please most of the people most of the time."
Celebrating Technological Achievement
In recognition of the magazine's 50th birthday, JPT, in collaboration with the SPE Foundation,
presented a series of historical articles throughout 1999. These articles look at the significant
technologies that have shaped the oil and gas business and that have led the industry into new
frontiers. Included are technologies in the seismic, drilling, and completion areas, along with
reservoir engineering, deepwater structures, subsurface equipment, surface facilities, horizontal
and multilateral wells, subsea completions, and formation evaluation.
These articles give readers insight into the forces that drove development of these technologies,
including economic and environmental considerations, and the impact these technologies have
on today's operations. Interviews with knowledgeable key individuals in each of these topic areas provide readers with a first-hand look at the various stages of technological growth. As was its intent 50 years ago, JPT's mission is still to inform you well.

An Historical Perspective
Origins of the Profession
Petroleum engineering was recognized as a new and separate field of practice during the first two decades of the 20th century. The name is an acknowledgement that the primary practitioners
of the profession were those engaged in the business of producing petroleum. For at least a
century, digging or drilling into the Earth to obtain crude oil and/or natural gas had been done in
the U.S. and Europe with adaptations of ancient crafts and techniques that had been used to find
underground sources of water or salt. These activities produced various drilling techniques, some
of which were clearly associated with mining technology. As the volume of drilling activity grew,
professionals were attracted to it from other fields of engineering and science, and its transition
from a craft to an engineering profession got under way. The identification of a separate
profession can be marked as occurring about one-half century after Drake drilled for oil. Warner1
describes how, in 1907, the Kern Oil and Trading Co. of California hired five mining and geology
graduates from Stanford U. to do oil-production work. In 1913, the American Institute of Mining
Engineers (AIME) created its Oil and Gas Committee, and the U.S. Bureau of Mines set up its
Petroleum and Natural Gas Div. in 1914, with these developments recognized as offshoots from
the mining profession. In 1916, the Doherty Training School was organized and operated out of
the offices of the Empire Gas & Fuel Co. in Oklahoma.1 University courses to acknowledge the
new profession appeared in 1912, and the first degrees were given in 1916.
Development and Identity of the Profession
A definitive history of the development of petroleum engineering has yet to be written, although the American Petroleum Inst.'s (API's) History of Petroleum Engineering2 provides much information. With a general developmental yardstick that is applicable to all fields of engineering, the stages of petroleum engineering can be identified as cut-and-try (before 1915), measurement and correlation (1915-35), analysis and synthesis (1935-65), and systemization (post-1965).3
Such general divisions are somewhat misleading, however, because a single time scale does not
apply to every level and segment of technology that is encompassed by petroleum engineering. A
better understanding of the profession's development can be derived from considering how its
various functions unfolded. Until the 1930s, petroleum engineering centered around the drilling,
completing, and producing activities associated with individual wells. Improvements in technology
took place through activities to upgrade specific techniques and methods in these arenas; to use
better materials; to standardize equipment; to measure distances, directions, pressures,
temperatures, and formation variances within the wellbore; to recover and analyze core samples
and samples of produced fluids; and to control the loss of energies from the natural gas and
water that accompanied oil production. The U.S. Bureau of Mines played a key role in these early
technological developments, followed by the pursuit of research projects under the sponsorship of
API, by establishment of oil company research laboratories, by theoretical and/or experimental
studies within universities, and by the creative inventiveness of many individuals. From the last in
particular, many service organizations and consulting groups emerged to provide specialized
engineering know-how.
During the 1930s, the primary emphasis on production from the individual well gave way to the
recognition that the characteristics of the oil reservoir had to be taken into consideration. Leading
companies established working groups and/or staffs for reservoir engineering, and the topic
began to appear as an item in petroleum engineering curricula. The focus on reservoir
engineering received impetus from events that accompanied overproduction of oil after the East Texas oil field was opened, as well as from rejuvenation of older segments of the industry in the
Appalachian region that had survived by use of secondary-recovery methods. Public awareness
was directed toward the importance of conservation principles, including the concepts of
unitization, the best uses of reservoir energy, and the achievement of maximum recoverable oil.
Several states enacted laws to ensure that operating practices would be in accordance with the best application of these principles, and the Interstate Oil Compact came into being. These laws
and their regulatory implementations not only vectored the technology toward a focus on the
nature of petroleum reservoirs and the management of the resource, but also expanded the need
for petroleum engineers.
The focus on reservoir engineering accelerated establishment of petroleum industry research
laboratories, particularly during the period immediately following World War II. Major research
attention was directed toward the principles, processes, and methods for improvement of oil
recovery that included waterflooding; high-pressure-gas injection; miscible processes; use of
carbon dioxide, nitrogen, and other gases; and development of surfactants. The consideration of
reservoirs as complex flow systems also brought into play the importance of measuring reservoir
characteristics and of producing both physical and mathematical models; the latter led to a
leadership role by petroleum technology in the development and use of computers. This
expansion and growth of reservoir-engineering principles and their successful applications in
many producing situations offered petroleum engineering a new identity and a better way in which
the profession could be differentiated from other branches of engineering.4
The evolution of reservoir concepts, however, also brought about a new appreciation for the
importance of the individual well and the manner in which its characteristics influenced reservoir
and recovery events. Where wells were located and how they were drilled, completed, and
operated were seen to be related to an understanding of the reservoir and how it might be
developed. Advances in drilling and completion technology, such as improved drilling fluids,
acidizing, and hydrofracturing; in formation evaluation through logging and well-performance
analysis; and in history matching with computers, opened up new possibilities for reservoir
development and control. The overall realization was two-fold: first, that the reservoir and its
assemblage of wells needed to be considered as an entity and, second, that a detailed
understanding of the subsurface environment was critical to all aspects of petroleum engineering.
During the past several decades, consolidation and integration of four major elements of
petroleum engineering have occupied the profession. The following lists these elements.
1. Extending our capabilities to gain access to, to couple with, and to operate within a greater
portion of the subsurface environment (e.g., offshore locations, overpressured environments,
marginal reservoirs, horizontal drilling, complex flow systems, acidizing, and hydrofracturing).
2. Developing methods for detailed characterization of subsurface formations, their fluids, and their surroundings (e.g., geostatistics, well logging, indirect geophysical measurements, well-performance analysis, and basin analysis).
3. Recovering a greater proportion of the petroleum within reservoirs that have been accessed
and understanding the transfer operations that accompany the recovery (e.g., a broadened
spectrum of injected fluids and fluid additives, phased fluid-injection programs, extensions of
reservoir flow paths, in-field drilling, and horizontal wellbores).
4. Systematizing technological management and coupling it with business decision making (e.g.,
history matching, risk analysis, reservoir management, software packages, and team projects).
The processes of consolidation and integration have depended on and have made increased use
of computerized systems for acquiring, organizing, processing, and displaying information in all
forms and at all levels of the technological spectrum.
Petroleum Engineering Practitioners
Those who have practiced petroleum engineering from its inception to the present have come
from three sources: those who received formal petroleum engineering education, those who were
trained in other fields, and those who were unschooled and learned through experience. In its
early periods, any new profession tends to be dominated by the latter two sources, and petroleum
engineering was no exception. No practicing petroleum engineer held a degree in petroleum
engineering before 1915. Warner's1 tabulation of more than 200 persons designated as early petroleum engineers (1860-1920) shows that about 5% of the degrees held by these individuals were in petroleum engineering. This percentage undoubtedly grew rapidly, but how fast is hard to
determine because statistics are not readily available for either numbers of students or degrees
granted in the first 3 decades of petroleum engineering education. However, the percentage has
stayed surprisingly low even until today and even though, as a mature profession, petroleum
engineering includes a relatively large component of practitioners who transferred from outside
fields. Regular tabulations of petroleum engineering manpower were initiated when the energy
crisis of the 1970s developed, and analyses consistently showed that those who held petroleum
engineering degrees represented from one-half to two-thirds of those who were engaged in
petroleum engineering practice. A 1995 SPE count of its members showed that 58.7% of its U.S.
members with a degree in engineering held one in petroleum engineering, and Bureau of Labor
statistical reports for 1978-93 indicate that 65.5% of the engineering work force in oil and gas
extraction was in the field of petroleum engineering.5
An International Technology
Although its spread has not been uniform, petroleum engineering is now practiced worldwide as a
sophisticated and mature technology. While some of the earliest uses of drilling as a craft that
preceded the emergence of petroleum engineering were found in Europe, a separate profession
for petroleum engineering did not develop in Europe as it did in the U.S. The U.S. also proved to
offer the more fertile industrial environment for developing and advancing the profession. As a
consequence, many techniques that had early origins in Europe, such as rotary drilling and
electric well logging, were brought to fruition in the U.S.; and U.S. petroleum engineers became
the international leaders in the profession that evolved during the early decades of the 20th century.
The need for petroleum energy, however, was recognized universally. The nations of western Europe, in particular, were part of the international competition to develop such resources, and the best uses of petroleum engineering practices were competitive elements. U.S. oil companies
entered into agreements to explore for and produce oil in other countries before World War II, but
it was after that war that the tempo and range of these arrangements exploded. The nations of
Europe involved with international petroleum development were in a postwar recovery situation;
consequently, the U.S. flow of technology dominated the international petroleum engineering
scene. It became common practice for other nations to send their students to this country to
obtain degrees in petroleum engineering, in most cases at the expense of their home
governments or industries. This practice produced a cadre of leaders in the international
petroleum industry who were schooled in U.S. practices of the technology.
Production of petroleum outside the U.S. passed that within the U.S. in the mid-1950s, and, with
an increasing level of discovery, development, and use of petroleum, countries that had not
considered a need for native petroleum engineers initiated internal programs to ensure that they
would have a supply. Much of the infrastructure to support a profession of petroleum engineering
within these countries followed the international energy crises of the mid-1970s, a period during
which a large number of specialized courses in all aspects of petroleum engineering were offered
throughout the world by SPE, by individual companies, by universities, and by for-profit
educational vendors.
Within the U.S. today, there are fewer higher-educational programs in petroleum engineering,
with a lower student enrollment, than in the rest of the world, a situation opposite that of the
1950s. Extensive research laboratories for advancing the exploration for and recovery of
petroleum exist worldwide. The maturity of the profession and the quality of its practices are no
longer measured and evaluated solely by the standards of practice in the U.S. These matters are
now considered within an international community of petroleum engineers. The Society of
Petroleum Engineers, once a predominantly U.S. organization, is international in scope and
membership; its non-U.S. membership has grown from about 15.5% during 1960-75 to
approximately 45% today.

Inasmuch as the petroleum resources of the U.S. were developed intensively as petroleum
engineering was being developed, they are in a more advanced stage of depletion than those in
the rest of the world, and annual production in the U.S. is in decline. Future growth and evolution
of the technology should be primarily outside the U.S., where development is in a younger phase
of its life cycle and where known geological basins have not yet been explored fully. In the future, the technology is likely to be characterized more by global leadership than it has been in the past.
Contributions of the Profession to Society
Petroleum engineering is a part of the overall technology spectrum that has produced the world of
today, molding our way of living and impacting our future. Within the spectrum, our profession can
claim at least two major, unique contributions. The first is its major contribution to the provision of
the energy and chemicals needed to underpin our total technological society. The second is its
creation of technological methods for exploring Earth's inner space and the resources there.
Although the human race has relied on energy and chemicals from many sources, it has found no
other source as cheap and as available over the past 2 centuries as Earth's fossil organic
residues. The petroleum engineering profession has brought all available engineering knowledge
and methods to bear on the acquisition and development of these resources in fluid form; has
embraced and extended their underlying sciences; and has developed techniques of
measurement, analysis, synthesis, and management for their understanding and control.
In the broadest sense, what petroleum engineering has developed is a technology for the general exploration and development of the Earth's subsurface, a region where humankind does not live but feels compelled to explore, from which it obtains information about the nature and history of the planet, and on which it may depend for resources not available at the planet's surface.
Engineering within the Earth's subsurface might be likened aptly to that for exploring and
developing space. Man enters these environments only in a limited way and, for the most part,
gains knowledge of these regions by remote and indirect measurements. Operations are
conducted at a distance by means of tools and/or materials that are designed for and placed
within the environment for special purposes. In the case of the Earth's subsurface, the means of
access is a well, and the economic incentive for the special purpose of drilling the well is primarily
the recovery of petroleum. An important result of finding and producing petroleum is a technology
that inherently possesses broader uses and provides a scientific understanding of an unknown
region. These are technological and scientific gains that parallel human experiences in probing all
kinds of unknown and inaccessible regimes.

Seismic Technology
Evolution of a Vital Tool for Reservoir Engineers
In the Duri field on the island of Sumatra, Caltex Pacific Indonesia is using four-dimensional (4D), or three-dimensional (3D) time-lapse, seismic technology on a large scale to improve oil recovery and optimize energy use in the world's largest steamflood. In the Ekofisk field in the
North Sea, Phillips Petroleum has integrated borehole seismic images (also known as vertical
seismic profiles) with 3D and 4D surface seismic data to monitor the reservoir and to position new
wells more confidently to revitalize production. In the Kinsler field of southwestern Kansas,
Amoco, Texaco, and Conoco have used crosswell seismic imaging to aid in drilling decisions and
reservoir characterization. These accomplishments mark significant steps forward in the evolution
of seismic technology beyond exploration to becoming a vital tool for field development and
production. Similarly, these achievements exemplify the potential for turning untapped resources
into recoverable reserves that can be realized when engineers and geoscientists work together in
true multidisciplinary asset-management teams. Success in the oil field always has depended on minimizing risk and uncertainty. That, in turn, has driven the quest to acquire information about
the subsurface and also to store, process, and manage that information to optimize production
and minimize risk and cost. Petroleum seismic technology was used initially to generate structural
images of subsurface targets. With the advent of 3D seismic and steady advances in acquisition,
processing, and interpretation, seismic data now deliver not only more structural detail but also
stratigraphic information and direct hydrocarbon indicators. When seismic data are integrated
with well logs, core data, and other subsurface information, reservoir description and monitoring, and thus economics, are significantly enhanced.1
In the Beginning
In 1924, the discovery of an oil field beneath the Nash salt dome in Brazoria County, Texas, was
the first to be based on single-fold seismic data.2 Before that, oilfield exploration was very much a
guessing game based on surface signs. Stakes were high, and rewards could be tremendous, but
losses from dry holes could be devastating. Then, engineers and geoscientists discovered that
they could use low-frequency sound waves to map subsurface geologic structures and locate
possible hydrocarbon traps.
Seismic instruments to record and measure movements of the ground during earthquakes were
first developed during the middle of the 19th century. John C. Karcher later derived the formula that is the basis of reflection seismology.3 This formula heralded a revolutionary change from
refraction to reflection as the basis of oilfield seismology. In reflection seismology, subsurface
formations are mapped by measuring the time it takes for acoustic pulses generated in the Earth
to return to the surface after reflection from interfaces between geological formations with
different physical properties.
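To make the reflection principle concrete, here is a minimal Python sketch of the travel-time relationship described above: the depth to a reflector is roughly the average velocity times half the two-way travel time. The velocity and travel time used are assumed illustrative values, not figures from the text.

```python
# Minimal sketch of the reflection-seismology principle described above:
# a pulse travels down to an interface and back, so the reflector depth is
# velocity * two_way_time / 2.  The values below are assumed illustrative
# numbers, not figures from the text.

def reflector_depth(two_way_time_s: float, avg_velocity_m_per_s: float) -> float:
    """Estimate depth (m) to a reflector from two-way travel time and average velocity."""
    return avg_velocity_m_per_s * two_way_time_s / 2.0

if __name__ == "__main__":
    t = 1.5        # seconds of two-way travel time (assumed)
    v = 2500.0     # average velocity in m/s, typical of sedimentary rock (assumed)
    print(f"Estimated reflector depth: {reflector_depth(t, v):.0f} m")  # 1875 m
```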
In the early 1900s, Reginald Fessenden, chief physicist for the Submarine Signaling Co. of
Boston, used sound waves to measure water depths and to detect icebergs. In 1913, seismic
instruments he invented were used to record both refractions and reflections through Earth
formations near Framingham, Massachusetts. In September 1917, the U.S. Patent Office issued a patent for "Method and Apparatus for Locating Ore Bodies."4
During World War I, Ludger Mintrop invented a portable seismograph for the German army to use
to locate Allied artillery. By recording Earth vibrations from positions opposite Allied
bombardments, Mintrop could calculate gun locations so accurately that the first shot from a
German gun often would make a direct hit.5 The Germans discovered that varying velocities
among the geological formations through which their vibrations passed introduced errors into their
distance calculations and that certain assumptions about geology had to be made to compute the
distances. After the war, Mintrop reversed the process by measuring the distances and
computing the geology from the Earth's vibrations recorded on his portable seismograph and, in
April 1923, was awarded a U.S. patent for the new process.
This was shortly after the 1921 Oklahoma City tests conducted by Karcher, William Haseman, Irving Perrine, and William Kite that proved the validity of the reflection seismograph as a useful tool in the search for oil.6 In 1925, Karcher and DeGolyer persuaded Fessenden to sell his ore-bodies patent to Geophysical Research Corp. [On 16 May 1930, Karcher and Eugene McDermott, with the financial backing of DeGolyer, founded Geophysical Service Inc. (GSI).]
On 25 March 1925, Dabney Petty, Associate State Geologist for the Texas Bureau of Economic
Geology, wrote to his brother, a structural engineer in Dallas, about application of Mintrop's method by his company, Seismos, on the Texas gulf coast. In his return letter of 1 April, O. Scott Petty wrote of his idea to use the then-new vacuum tube to develop a seismograph that could operate without dynamite. "It occurs to me that if we had a seismograph that we could operate without using great quantities of dynamite (no dynamite at all, I mean) we would be able to put it all over these big companies," Petty wrote. "Let's try to invent a seismograph using a vacuum tube to detect the Earth vibrations so that it will be sensitive enough to register the vibrations made by simply dropping a heavy chunk of lead on the ground ...." This correspondence led to
the invention and development of the first displacement-sensitive seismograph and also gave
birth to the third of the pioneering geophysical firms, the Petty Cos.
Applying the seismic principles developed by these pioneers revolutionized the search for
hydrocarbons and brought remarkable discoveries. Cecil Green, who was also a founder of
Texas Instruments, once reminisced that geophysics was "a perfect combination of technology and people. The high demands of science breed integrity and modesty as well," he said. "Show me a geologist, a geophysicist who's brimming with ego, and I'll show you a probable newcomer to the business. Mother Earth has a way of quickly showing you you're always the upstart."7
Transistors, Tape, and CDP
Following World War II, GSI acquired a license to build transistors that ultimately resulted in the
birth of Texas Instruments, of which GSI then became a subsidiary. The move to transistorized
equipment dramatically lightened the load for field crews.
Another advancement during the mid-1950s was the recording of seismic signals in variably
magnetized tracks along the length of a magnetic tape. Changing from paper to taped records
pointed the way to machine processing, development of the analog processor, and a total change
in the way seismic data were collected and processed.
A third advancement in the 1950s was W. Harry Mayne's invention of common-depth-point (CDP) data stacking. Mayne's invention, also referred to as common-midpoint or common-reflection-point stacking, proved to be the main signal-to-noise-enhancing technique in seismic exploration and is still the basis from which novel techniques of economic continuous subsurface coverage depart.8
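The value of CDP stacking lies in averaging traces that share a common subsurface reflection point: the coherent reflection adds in phase while random noise tends to cancel, so signal-to-noise improves roughly as the square root of the fold. The Python sketch below is purely illustrative, using synthetic NumPy traces and assuming the traces have already been moveout-corrected so the reflection aligns in time.

```python
# Illustrative CDP stack: average moveout-corrected traces that share a
# common depth point.  Coherent signal adds; random noise averages toward
# zero, so stacked S/N improves roughly as sqrt(fold).
import numpy as np

rng = np.random.default_rng(0)
n_samples, fold = 1000, 24          # samples per trace, traces per CDP gather (assumed)
signal = np.zeros(n_samples)
signal[400] = 1.0                   # a single reflection spike at sample 400

# Each trace = the same aligned reflection + independent random noise.
gather = np.array([signal + 0.5 * rng.standard_normal(n_samples) for _ in range(fold)])

stacked = gather.mean(axis=0)       # the CDP stack

def snr(trace: np.ndarray) -> float:
    """Crude S/N estimate: spike amplitude over background noise level."""
    noise = np.delete(trace, 400)
    return abs(trace[400]) / noise.std()

print(f"single-trace S/N ~ {snr(gather[0]):.1f}")
print(f"stacked S/N      ~ {snr(stacked):.1f}  (expect ~sqrt({fold}) = {fold**0.5:.1f}x better)")
```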
A fourth advancement, Conocos development of Vibroseis, made it possible to substitute
manmade vibrations or waves for those caused by dynamite-generated explosions. Vibroseis
relies on specially designed vibrating or weight-dropping equipment to create waves that
penetrate the surface, strike underground formations, and reflect back to the seismograph in
exactly the same manner as explosion-generated waves. The introduction of Vibroseis meant that the multiplicity of source points necessitated by CDP would be feasible without the associated increase in cost that was inevitable when dynamite was the only energy source.
Digital Technology: The Second Revolution
According to Graebner, the second revolution in petroleum seismology occurred in the early
1960s with the arrival of digital technology. In a joint effort with Texas Instruments and several oil
companies in 1961, GSI introduced the first digital field system and computer for seismic-data
processing. Three years later, IBM introduced its 360 series of digital computers, and computers
suddenly moved from novelty to commercial popularity. Geoscientists began moving data from
bookshelves and file cabinets into computers, and processors had a heyday generating
processing algorithms.
The evolution of modern petroleum seismic technology and the evolution of information
technology (IT) are closely related. In fact, the two have developed in tandem, and the petroleum geophysical industry has continually been one of the largest, and best, users of IT outside the high-tech industry itself.
With digital technology, signals detected by sensitive geophones could be read at millisecond
intervals and recorded as binary digits across the width of a 1-in. format tape. Early well files
mimicked the paper files from which they had descended. They contained primarily raw data.
Seismic sections were correlated by hand to sonic logs to evaluate prospects. Computer filing
quickly grew more sophisticated, however, and databases evolved. Complete digital gathering
and processing systems were developed, systems approaches were adopted, and the amount of
real subsurface information available improved dramatically.
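Reading geophone signals at millisecond intervals is simply uniform sampling: a 1-ms sample interval corresponds to 1,000 samples per second and, by the Nyquist criterion, captures frequencies up to 500 Hz, well above the band of typical reflection data. The sketch below is illustrative only; the 30-Hz test signal and record length are assumed values.

```python
# Illustrative digitization of a "geophone" signal at 1-ms intervals.
# A 1-ms sample interval gives a 1,000 Hz sampling rate and a 500 Hz
# Nyquist frequency.  The 30 Hz test signal below is an assumed example.
import math

sample_interval_s = 0.001                      # 1 ms, as described in the text
nyquist_hz = 1.0 / (2.0 * sample_interval_s)   # 500 Hz

def analog_signal(t: float) -> float:
    """Assumed test signal: a 30 Hz sinusoid, within typical reflection-data frequencies."""
    return math.sin(2.0 * math.pi * 30.0 * t)

n_samples = 2000   # two seconds of record (assumed)
samples = [analog_signal(i * sample_interval_s) for i in range(n_samples)]

print(f"Nyquist frequency: {nyquist_hz:.0f} Hz")
print(f"First five samples: {[round(s, 3) for s in samples[:5]]}")
```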

Computing added a third dimension to reservoir modeling and increased the number of grid cells, which improved resolution. It also made it possible, for the first time, to model Earth properties with nonlinear characteristics.
Three-Dimensional Seismic: The Third Revolution
Graebner characterizes the move from two-dimensional (2D) to 3D seismic as the third major
revolution in seismic technology. The concept of 3D-seismic surveying has existed since the
earliest days of geophysics. However, the ability to implement that concept was restricted by the
efficiency and accuracy of data acquisition and the cost and computing power necessary to
condense, process, display, and help interpret data. All that changed in just over 1 decade and
made 3D seismic a reliable and cost-effective method of optimizing field development and
management.
By the early 1970s, the industry had developed a data-processing arsenal that contained, among
other things, programs for single and multichannel processing, deconvolution, velocity filtering,
automated statics, velocity analysis, migration, inversion, and noise reduction. These processing
accomplishments and the accompanying improvements in data collection advanced seismic
prospecting by levels of magnitude, but imaging methods were still 2D.9
The first 3D seismic survey was shot by Exxon over the Friendswood field near Houston in 1967.
In 1972, GSI enlisted the support of six oil companies (Chevron, Amoco, Texaco, Mobil, Phillips, and Unocal) for a major research project to evaluate 3D seismic. The site selected for the
experiment was the Bell Lake field in southeastern New Mexico.
The Bell Lake field was a structural play with nine producers and several dry holes. It also had
sufficient borehole data to ensure that 3D seismic could be correlated to subsurface geology. The
acquisition phase took only about 1 month, but processing the half million input traces required
another 2 years, and producing migrated time maps without workstations or any other form of 3D
interpretation aid was also a lengthy process. Nonetheless, the project was a defining event in
seismic history because the resulting maps confirmed the field's nine producers, condemned its three dry holes, and revealed several new drilling locations in a mature field. The development of
3D seismic was one of the most important technological breakthroughs in an industry in which
profitability is closely tied to innovation and technology. Finally, the subsurface could be depicted
on a rectangular grid that provided the interpreter with detailed information about the full 3D
subsurface volume. The images produced from 3D data provided clearer and more accurate
information than those from 2D data. Any desired cross section could be extracted from the
volume for display and analysis, including vertical sections along any desired zigzag path.
Lateral detail also was enhanced by the dense spatial coverage in 3D surveys. Slicing the data
volume horizontally at fixed reflection times yielded comprehensive overviews of subsurface
structural features, particularly faulting. Attributes could be mapped and displayed along curved
reflector surfaces. The accurate positioning of events made possible through 3D migration also
improved subsurface imaging of flatter-lying stratigraphic targets. The result was an extension of
the value of seismic data for exploration and production functions.
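The interpretive flexibility described above, pulling arbitrary vertical sections or horizontal time slices out of the data volume, maps naturally onto simple array indexing once the volume is stored digitally. The Python sketch below is illustrative, treating a synthetic NumPy array as a cube indexed by (inline, crossline, time sample); the survey dimensions and sample interval are assumed values.

```python
# Illustrative handling of a 3D seismic volume as a NumPy array indexed
# (inline, crossline, time sample).  Any vertical section or horizontal
# time slice is just an indexing operation on the cube.
import numpy as np

n_inlines, n_crosslines, n_samples = 200, 150, 500   # assumed survey dimensions
sample_interval_ms = 4                                # assumed sample rate

volume = np.random.default_rng(1).standard_normal((n_inlines, n_crosslines, n_samples))

inline_section = volume[75, :, :]       # vertical section along inline 75
crossline_section = volume[:, 40, :]    # vertical section along crossline 40

slice_time_ms = 1200
time_slice = volume[:, :, slice_time_ms // sample_interval_ms]  # horizontal slice at 1200 ms

print(inline_section.shape, crossline_section.shape, time_slice.shape)
# (150, 500) (200, 500) (200, 150)
```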
Modern 3D Seismic Technology
Today, 3D-seismic technology is applied to solve problems and reduce uncertainties across the
entire range of exploration, development, and production operations. Surveys are used to
characterize and model reservoirs, to plan and execute enhanced-oil-recovery strategies, and to
monitor fluid movement in reservoirs as they are developed and produced. These capabilities
have been made possible by advancements in data acquisition, processing, and interpretation
that have both improved accuracy and reduced turnaround time.
Acquisition. Reduction in 3D data-acquisition time has reduced the price of 3D data and
dramatically increased the amount of data available. Better and more reliable instrumentation,
better and more streamers per swath, improved and faster navigation processing, and onboard quality control and data processing have dramatically reduced downtime. Today, marine seismic
vessels used for 3D acquisition have, on average, four or five streamers, although some
supersized vessels can tow up to 16 streamers simultaneously.10 Another dramatic improvement
in acquisition technology has come about through ocean-bottom-cable (OBC) surveying methods.
Once considered a specialized technique, OBC acquisition has become competitive with
streamer operations in water depths of up to 650 ft. With the OBC method, cables connected to
stationary receiver stations are deployed on the ocean bottom, and a marine vessel towing an
array of air guns serves as the energy source. This makes it possible to survey congested areas
safely and uniformly.11 Additionally, resolution is higher because the quality of measurements is less affected by noise and other disruptions and because control of actual positioning makes repeated surveys more reliable.
Processing. Commercialization of 3D depth migration, the
process by which geophysical-time measurements are processed into depth readings, owes itself
to parallel computing. In places where lateral changes in the Earth take place quickly, time
images of the subsurface are distorted by those changes. When these data are processed into depth, a substantially more accurate picture of the Earth's subsurface results, provided the velocities of the rocks are known. Today, 3D depth migration is emerging as a truly interpretive data-processing
method that is closing the communications gap between geologists, geophysicists, and reservoir
engineers.
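Full 3D depth migration is far more involved than a few lines of code, but the core idea of turning time measurements into depth can be illustrated with a simple vertical time-to-depth conversion through a layered interval-velocity model. The sketch below is illustrative only; the layer times and velocities are assumed values, and real migration also repositions energy laterally.

```python
# Illustrative vertical time-to-depth conversion: accumulate depth through
# a layered interval-velocity model.  (Real 3D depth migration also moves
# energy laterally; this sketch only captures the vertical stretch.)

# (two-way time at base of layer in s, interval velocity in m/s) -- assumed values
layers = [(0.8, 2000.0), (1.6, 2800.0), (2.4, 3500.0)]

def time_to_depth(two_way_time_s: float) -> float:
    """Convert a two-way time to depth by summing one-way travel through each layer."""
    depth, top_time = 0.0, 0.0
    for base_time, velocity in layers:
        dt = min(two_way_time_s, base_time) - top_time
        if dt <= 0:
            break
        depth += velocity * dt / 2.0   # one-way distance covered in this layer
        top_time = base_time
    return depth

for t in (0.5, 1.2, 2.0):
    print(f"{t:.1f} s two-way time -> {time_to_depth(t):.0f} m depth")
```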
Interpretation. Maturation of 3D interactive workstations has played a key role in the widespread
acceptance of 3D-seismic data. The amount of raw data to be interpreted per survey has
increased by a factor of more than 5,000 over the past 15 years, placing a premium on efficiency
in the interpretive process.12
One of the most exciting advancements in 3D interpretation is 3D visualization. Humans perceive
the 3D world through a variety of visual cues that include perspective, lighting and shading, depth
of focus, depth cueing, transparency and obscuration, stereopsis, and peripheral vision. With the
addition of each visual cue, 3D-seismic interpretation has become more efficient, accurate, and
complete. Large amounts of data have been integrated into easily understood displays, and
communication between the various members of asset and management teams has improved.
Methods for 3D visualization have evolved from a lighted 3D horizon surface to desktop
visualization to the current immersive environments that engage peripheral vision. In 1997 Arco,
Texaco, and Norsk Hydro each installed large immersive visualization environments. The Texaco
facilities are "visionariums"; that is, 8- to 10-ft-tall screens that curve horizontally through approximately 160°, with data projected by use of three projectors that each cover one-third of
the screen. Arco and Norsk Hydro use immersive visualization rooms based on the virtual reality
interface CAVE, invented at the U. of Illinois at Chicago. In a CAVE, three walls and the floor are
used as projection surfaces, and the images on the walls are backprojected, while the image on
the floor is projected from the top down. In these environments, the data not only surround the
interpreters but actually appear to fill the room. Members of the asset team literally can walk
through the data and discuss the reservoir with one another. With a 3D pointer, a new production
well can be planned from inside the reservoir and the effects of any changes experienced
immediately.13
Four-Dimensional Seismic
Time-lapse, or 4D, seismic consists of a series of 3D-seismic surveys repeated over time to monitor how reservoir properties (such as fluids, temperature, and pressure) change throughout the reservoir's productive life. Consequently, fluid movements can be anticipated before they affect
production. Similarly, placement of producing and injection wells can be fine-tuned, bypassed oil and gas can be recovered, and production rates can be accelerated.14
For example, in the Duri steamflood project in Sumatra, which produces approximately 300,000
B/D of high-viscosity oil, placing injector wells with the aid of 4D seismic is expected to help
operator Caltex (a Chevron and Texaco affiliate) raise recovery efficiency in a complex reservoir from 8% primary recovery to nearly 60%. A 4D-seismic pilot was conducted in the field with a
baseline survey and six monitor surveys recorded at various intervals over 31 months. The pilot,
which consisted of a central steam-injection well surrounded by six production wells,
demonstrated that the horizontal and vertical distribution of steam could be tracked over time. On
the basis of the quality and detail of reservoir information from the pilot study, a multidisciplinary
asset team assessed the economic feasibility of large-scale 4D monitoring of the Duri field. The
assessment took into account the benefits of time-lapse seismic as well as the cost of seismic
data and the risk probability of various outcomes. Benefits included shutting off injection in swept
zones; putting steam into cold zones; and locating observation wells in the right places, possibly
eventually reducing the need for them. When these benefits were weighed vs. seismic-data cost
and risk factors and compared with other operating scenarios for the field, the conclusion was
that the largest net present value could be obtained by aggressively managing the steamflood
with 4D seismic.15 Four-dimensional reservoir-monitoring projects are also being conducted in
numerous other parts of the world, including the North Sea, Southeast Asia, and the Gulf of
Mexico.
Crosswell Seismic Technologies
Detailed understanding of reservoir flow and barrier architecture is crucial to optimizing
hydrocarbon recovery. Crosswell seismology (that is, using seismic sources in one wellbore and recording the wave propagation in another wellbore) is the only spatially continuous, very-high-resolution method that can image such features as faults, stratigraphic boundaries, unconformities, sequence porosity, fracturing, and additional untapped reservoir bodies away from the well. Crosswell data currently are expensive to acquire, and processing the data through tomographic inversion and migration requires considerable expertise. However, the fact that
answers to many of the most challenging geophysical problems reside within this high-resolution,
wide-azimuth illumination of rocks on a macroscale is driving the technology to become a viable
and regularly used tool.16
Looking Ahead
Integration, miniaturization, and production are likely to be operative words in describing seismic
technology in the 21st century. With most, if not all, of the world's more obvious reservoirs
already discovered and in production, and given the likelihood that unstable oil prices are here to
stay, emphasis in the oil field will focus on integrating technologies and disciplines to optimize
recovery in existing fields and develop new fields quickly. All this means that seismic technology
will become more and more a tool for production rather than exploration work. It also means that
geoscientists and reservoir engineers will work together more closely and cooperatively.
Full-Vector Wavefield Imaging: The Fourth Revolution. According to Graebner, a fourth revolution
in seismic technology, full-vector wavefield (or multicomponent) imaging, which includes both
shear and compressional waves (S- and P-waves, respectively) to capture rock properties
between wells, will add further value to seismic as a production tool.
P-waves, the traditional waves of seismic exploration, are influenced not only by rock frame
properties but also by the nature of the fluid in the rock pores. S-waves, on the other hand, are
insensitive to the type of fluid in sediments. Full-vector wavefield imaging makes it possible,
among other things, to see through gas chimneys that plague economically important areas,
such as the North Sea. These chimneys, which are caused by free gas in the sediments, destroy
P-wave continuity but hardly affect S-wave reflections. Combining P- and S-waves also helps
asset-team members discriminate among sands and shales and is valuable in helping detect
fractures.17 Using multicomponent imaging to detect fractures and stratigraphic traps will bring
engineers and geoscientists closer together. They will learn one another's jargon and paradigms
both on the job and in university education. They will monitor reservoirs routinely with time-lapse
seismic and process data both in the field and in their offices essentially in real time, thanks to
cost-effective, high-bandwidth satellite communication. Finally, of course, they will continue to search for even better and faster ways of improving success ratios and reducing risk in the oil field.
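One common way to exploit the different fluid sensitivities of P- and S-waves is to examine the Vp/Vs ratio (or the equivalent Poisson's ratio): gas sharply lowers P-wave velocity while barely changing S-wave velocity, so an anomalously low Vp/Vs can flag gas-bearing intervals. The sketch below is illustrative only; the velocities are assumed round numbers, not measurements from the text.

```python
# Illustrative use of combined P- and S-wave velocities: gas lowers Vp but
# barely affects Vs, so the Vp/Vs ratio (and Poisson's ratio) drops in
# gas-bearing intervals.  Velocities below are assumed illustrative values.

def vp_vs_ratio(vp: float, vs: float) -> float:
    return vp / vs

def poissons_ratio(vp: float, vs: float) -> float:
    """Poisson's ratio computed from body-wave velocities."""
    r2 = (vp / vs) ** 2
    return (r2 - 2.0) / (2.0 * (r2 - 1.0))

intervals = {
    "brine sand (assumed)": (3000.0, 1500.0),   # Vp, Vs in m/s
    "gas sand (assumed)":   (2400.0, 1450.0),
}

for name, (vp, vs) in intervals.items():
    print(f"{name}: Vp/Vs = {vp_vs_ratio(vp, vs):.2f}, "
          f"Poisson's ratio = {poissons_ratio(vp, vs):.2f}")
```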

Drilling
Before there was a great demand for petroleum, people dug and drilled wells into the earth to
reach sources of water and salt. Oil well drillers adopted techniques from these earlier
drillers, and added some innovations of their own. One early drilling rig, consisting of a springing
pole attached to a rope with a heavy bit at the end, was operated by a person's foot, and an early rotating drill was operated by mule power. Colonel Drake's 1859 well was drilled
using a cable-tool rig, a device which pounds a chisel-like bit into the ground. The rig
was powered by a steam engine. The Spindletop well in Texas, in addition to being America's first gusher, was notable as one of the earliest successful uses of a modern rotary drilling system
(Figure 4.4). A rotary rig uses a rotating bit which drills a hole into the ground. Today, some wells
are still drilled with cable-tool rigs, but use of the cable-tool rig has been far surpassed by the
rotary rig. Coiled tubing drilling using a continuous small pipe (2 to 4 inches) with a the rotating
drill bit has been used in an increasing number of wells since 1990. Arotary-rig bit cuts and
crushes rock as it descends. The bit is attached to the bottom of a long string of steel pipe. At the
top of the string is the kelly, a square or hexagonal section of pipe that is inserted through a steel
disk known as the rotary table. The rotary table has a square or hexagonal hole in the center to
receive the kelly and is geared to a motor at its perimeter. The table turns, rotating the whole
string of kelly,drill pipe, and bit. As a well gets deeper, more pipe is added to the drill string
below the kelly. Support for the heavy drill string is provided by a tall steel structure known as a
derrick. The derrick contains heavy equipment which can lift the entire drill string from the well
and then return it, an operation called tripping, which must be done to change the bit (drill bits wear out in use; also, certain rock types are drilled more efficiently with specialized bits), make repairs to the downhole drillstring, and test the well. During a complete trip, all of the drill pipe is removed from the hole, unscrewed into 90-foot sections, and stacked, and then the whole process is done again in reverse. Drilling mud, usually a mixture of minerals and water or oil, is critical to drilling.
Mud is pumped down the well through the center of the drill string and drill bit and then circulated
back to the top, pushing the pieces of rock which have been cut by the bit up and out of the hole.
The mud also keeps the bit cool, coats the sides of the well to keep soft rock formations from
caving in, and controls pressure from fluids in the formation to prevent blowouts (sudden
explosive releases of gas or oil).
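The pressure-control role of the mud comes from its hydrostatic head. In U.S. oilfield units, the pressure exerted by the mud column is approximately 0.052 × mud weight (lbm/gal) × vertical depth (ft), and the mud weight is chosen so that this pressure stays above the formation pore pressure (to prevent a kick) without exceeding the fracture pressure of the rock. The sketch below is illustrative; the depth and pressure values are assumed.

```python
# Illustrative mud-weight check: hydrostatic pressure of the mud column must
# exceed formation pore pressure (to prevent a kick/blowout) but stay below
# the fracture pressure of the rock.  All numbers below are assumed values.

HYDROSTATIC_CONSTANT = 0.052  # psi per (lbm/gal * ft), standard oilfield constant

def mud_hydrostatic_psi(mud_weight_ppg: float, true_vertical_depth_ft: float) -> float:
    return HYDROSTATIC_CONSTANT * mud_weight_ppg * true_vertical_depth_ft

depth_ft = 10_000.0          # assumed true vertical depth
pore_pressure_psi = 4_700.0  # assumed formation pore pressure
frac_pressure_psi = 7_000.0  # assumed fracture pressure

for mud_weight in (9.0, 10.0, 14.0):   # lbm/gal
    p = mud_hydrostatic_psi(mud_weight, depth_ft)
    if p < pore_pressure_psi:
        status = "underbalanced: risk of kick/blowout"
    elif p > frac_pressure_psi:
        status = "too heavy: risk of fracturing the formation and losing mud"
    else:
        status = "within the acceptable window"
    print(f"{mud_weight:4.1f} ppg -> {p:6.0f} psi : {status}")
```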

(Figure source: Energy Information Administration, Office of Oil and Gas.)

Properties such as the weight and thickness of the mud are specifically designed for each well.
Some geologic formations allow the use of air as a drilling mud, especially where clay minerals in
the rocks show a tendency to swell when wetted with water, thereby plugging a reservoir's pore spaces.
Most wells are lined with steel pipe, known as well casing, which is cemented into place. It prevents collapse of the sidewalls of the well and prevents the unwanted movement of fluids between rock layers. During drilling operations, a set of strong valves known as blowout preventers is affixed to the top of the surface casing beneath the rotary table. If a sudden surge of well pressure occurs, these valves can be closed to prevent a blowout. When the well is producing, the top of the casing serves as a place to attach production equipment. Tubing is inserted into most wells as part of the completion process. It can allow simultaneous production from more than one reservoir at a time. In onshore operations, a rig
usually drills a single vertical well. Offshore rigs most often drill several wells from one location
using directional drilling, where the angle of a well is diverted from vertical. Directional drilling is
also used onshore to reach under obstacles such as riverbeds and lakes. In recent years, there
has been an increase in horizontal drilling activity. Horizontal drilling can reach difficult areas of
thin reservoirs, and can also be used to increase the area of a reservoir that can be contacted by
one well, substantially increasing the rate of production. Horizontal wells are usually drilled off of
vertical wells as they approach the oil-rich zone. Existing wells have had horizontal sections or
laterals added in the producing zone using coiled-tubing drilling units. In 1997, the average cost for drilling oil wells horizontally was 24 percent higher than the average cost for oil wells not drilled horizontally.3 Offshore drilling methods are similar to those used onshore, but the
equipment must be adapted to meet the harsh marine conditions. A special structure is needed to
support the derrick and other drilling equipment. This structure may rest on
the sea floor or float in or on the water. The type of structure chosen usually depends on the water depth, weather, and sea conditions in the area where drilling will occur. The major types of offshore drilling rigs are barges, fixed platforms, jack-ups, semi-submersibles, and drill ships. Barges are used in inland waters, while the others are usually used off the coast. Fixed steel platforms rest on the sea floor, usually in shallow water, and they are seldom moved once they have been put in place. Jack-ups have legs which can be raised up and down in a manner similar to the action of an automobile jack. With the legs raised, they can be moved to a location; then the legs can be jacked down to the sea floor. The base of a semi-submersible rig actually floats below the surface of the water. Its position-holding anchors are supplemented by the computer-controlled motors of the vessel. Semi-submersibles are very stable in bad
weather conditions and yet can be moved from place to place. Drill ships, which have been
adapted for drilling by cutting out a vertical opening in the middle of the ship, have been used in
the deepest waters. These ships use anchors or motors to hold them steady over the well.
Because of the difficulty of transporting drilling crews to offshore locations, offshore rigs provide living quarters for the workers, with facilities for sleeping, eating, entertainment, and other activities not usually provided at onshore drill sites. Helicopters are now the primary means of transportation between land and rig; therefore, one prominent feature of the offshore rig is a helipad. When a hurricane threatens the Gulf of Mexico, crews must secure the production platforms and leave well ahead of its possible arrival. Offshore wells are much more expensive to drill than onshore wells (Figure 4.5). In 1997, the average onshore oil well cost $74.23 per foot drilled, while the average offshore well cost $526.37 per foot.4 Drilling costs are
also influenced by well depth and the characteristics of the formations being drilled.
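Using the 1997 per-foot averages quoted above, a rough cost comparison for a well of a given depth is simple arithmetic; the 10,000-ft depth in the sketch below is an assumed example, and actual costs also depend on the formations being drilled.

```python
# Rough cost comparison using the 1997 average costs per foot quoted above.
# The well depth is an assumed example; actual costs vary with depth and geology.

ONSHORE_COST_PER_FT = 74.23    # $/ft drilled (1997 average, from the text)
OFFSHORE_COST_PER_FT = 526.37  # $/ft drilled (1997 average, from the text)

depth_ft = 10_000  # assumed well depth

onshore_cost = ONSHORE_COST_PER_FT * depth_ft
offshore_cost = OFFSHORE_COST_PER_FT * depth_ft

print(f"Onshore well,  {depth_ft:,} ft: ${onshore_cost:,.0f}")
print(f"Offshore well, {depth_ft:,} ft: ${offshore_cost:,.0f}")
print(f"Offshore/onshore cost ratio: {offshore_cost / onshore_cost:.1f}x")
```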
The number of wells drilled usually responds to changes in the price of crude oil. When crude oil
prices were increasing during the late 1970s, companies invested in more drilling activity. For the
most part, U.S. drilling activity has decreased since the 1981 price peak. In 1986, the number of
oil wells completed fell almost 50 percent from the previous year. Rising crude oil prices following Iraq's invasion of Kuwait in 1990 helped increase drilling activity for the first time since 1984.
However, the total number of oil well completions fell from 35,118 in 1985 to 6,300 in 1998.5

Horizontal and Multilateral Wells


Increasing Production and Reducing Overall Drilling and Completion Costs
Most experts agree that horizontal wells have become a preferred method of recovering oil and gas from reservoirs in which these fluids occupy strata that are horizontal, or nearly so, because they offer greater contact area with the productive layer than vertical wells.1 While the cost factor for a horizontal well may be as much as two or three times that of a vertical well, the production factor can be enhanced as much as 15 or 20 times, making it very attractive to producers.
Despite these facts, it took several decades for the industry to embrace the technique.
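The attraction can be made explicit with the multiples quoted above: even at the high end of the cost range and the low end of the production range, a horizontal well delivers far more production per dollar spent. The sketch below is illustrative; the vertical-well baseline cost and rate are assumed numbers.

```python
# Illustrative comparison using the multiples quoted above: a horizontal well
# costing 2-3x a vertical well but producing 15-20x as much.  Baseline
# vertical-well cost and rate are assumed example numbers.

vertical_cost = 1.0e6      # $ (assumed baseline)
vertical_rate = 200.0      # bbl/day (assumed baseline)

for cost_factor, production_factor in [(2.0, 15.0), (3.0, 20.0)]:
    horizontal_cost = cost_factor * vertical_cost
    horizontal_rate = production_factor * vertical_rate
    # production obtained per dollar spent, relative to the vertical well
    relative_efficiency = (horizontal_rate / horizontal_cost) / (vertical_rate / vertical_cost)
    print(f"cost x{cost_factor:.0f}, production x{production_factor:.0f} "
          f"-> {relative_efficiency:.1f}x more production per dollar than a vertical well")
```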
"Some of the earliest development toward horizontal drilling took place during the early 1940s, when John Eastman and John Zublin developed short-radius drilling tools designed to increase the productivity of oil wells in California," explains Frank Schuh, a horizontal-drilling consultant.
The tools were designed to drill 20- to 30-ft (6.096 to 9.144 m) radii and horizontal distances of
100 to 500 ft (30.48 to 152.4 m), and they permitted the drilling of numerous laterals in the same formation in various directions around the wellbore. Typical designs used between four and eight
laterals.
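Short-, medium-, and long-radius techniques are usually distinguished by build rate, the number of degrees of angle change per 100 ft drilled, which converts to a radius of curvature of roughly 5,730 divided by the build rate. The illustrative sketch below shows that the 20- to 30-ft radii of the Eastman/Zublin tools imply build rates of a couple of hundred degrees per 100 ft, while the medium-radius technique described later (about 20°/100 ft) corresponds to a radius near 290 ft; the formula is standard directional-drilling geometry, not a figure from the text.

```python
# Illustrative conversion between build rate (deg per 100 ft drilled) and
# radius of curvature for a circular build section:
#   radius_ft = (180 / pi) * 100 / build_rate  ~=  5730 / build_rate
import math

def radius_from_build_rate(build_rate_deg_per_100ft: float) -> float:
    """Radius of curvature (ft) of a circular arc built at the given rate."""
    return (180.0 / math.pi) * 100.0 / build_rate_deg_per_100ft

def build_rate_from_radius(radius_ft: float) -> float:
    """Build rate (deg/100 ft) needed to achieve a given radius of curvature."""
    return (180.0 / math.pi) * 100.0 / radius_ft

print(f"20 deg/100 ft -> radius ~ {radius_from_build_rate(20):.0f} ft (medium radius)")
print(f"25-ft radius  -> build rate ~ {build_rate_from_radius(25):.0f} deg/100 ft (short radius)")
```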
The equipment preceded downhole survey tools and included extraordinary knuckle-jointed
flexible drill collars that could be rotated around the extremely high curvatures. Also, it allowed for
the employment of a drilling technique that was the perfect completion companion to standard,
vertical open-hole completions being used at the time. "Basically, Eastman and Zublin were instrumental in drilling the first multilaterals," Schuh states. "Today's multilateral wells are simply modern versions of these earlier efforts."
Unlike a directional well that is drilled to position a reservoir entry point, a horizontal well is
commonly defined as any well in which the lower part of the wellbore parallels the pay zone. And
the angle of inclination used to drill the well does not have to reach 90° for the well to be considered a horizontal well. Applications for horizontal wells include the exploitation of thin oil-rim reservoirs, avoidance of drawdown-related problems such as water/gas coning, and extension of wells by means of multiple drainholes.2
Early Experimentation
True development and employment of horizontal-well techniques began in the U.S. during the
mid-1970s. However, horizontal-drilling experimentation began much earlier.
The U.S. Dept. of Energy (DOE) marks the starting date as 1929 in Texon, Texas. Here, says the
DOE, the first true horizontal well was drilled. Additionally, the DOE cites a well drilled in
Yarega, U.S.S.R., in 1937 and a 500-ft (152.4 m) well drilled in 1944 in the Franklin Heavy Oil
field in Venango County, Pennsylvania, as being some of the first wells to be drilled horizontally.
During the 1950s, the Soviet Union drilled 43 horizontal wells, a considerable effort with respect
to the equipment available then. Following their foray into horizontal drilling, the Soviets
concluded that while horizontal wells were technically feasible, they were economically
disappointing or, in other words, not profitable. As a result, they abandoned the method.
In the mid-1960s, 10 years after the Soviet experience, the Chinese drilled two horizontal wells.
The first, 500 m (1,640.4 ft) in length and not cased, collapsed after a week of production. The
second was interrupted by the Cultural Revolution. Like the Russians, the Chinese concluded
that horizontal drilling was uneconomical and abandoned the method for more than 20 years.3
True Development Begins
North American Horizontal Wells
From 1979 to 1982, a renaissance of true horizontal-well development work occurred in North
America. It was during this period that Alan Barnes, an engineer for a major oil company, used a
complex reservoir-simulation model to promote the benefits of the Eastman/Zublin short-radius
technique to his superiors.
Following his modeling studies, the company drilled approximately 12 horizontal wells in the
Empire Abo reef in New Mexico. They targeted a thinning oil column in a massive limestone
reservoir with a significant gas cap and active water drive. Oil recovery of the first hole exceeded
the production of a comparable vertical well by more than 20 times before breakthrough of the
gas cap. The success of the Empire Abo project led the company to look for means of a broader
application. The company appointed Schuh to lead the search.
"We developed what is generally referred to now as medium radius (20°/100 ft) horizontal drilling," Schuh says as he recalls the project. "The development determined the maximum hole curvatures possible in drilling horizontal wells without damaging conventional drillstring and drilling tools. We found that the unique application of horizontal drilling allows hole curvatures that are five to 10 times greater than can be used in conventional directional drilling. We utilized the latest advancements in downhole motors and measurement-while-drilling (MWD) equipment to develop methods for establishing long, low-cost horizontal boreholes." Using their technique,
Schuh and his colleagues drilled their first medium-radius well in January 1985. During the 1980s,
more than 300 horizontal wells were drilled in North America, including the first one in Prudhoe Bay, Alaska, in 1985. During this period, Texas' Austin Chalk trend also received a great deal of attention from horizontal-well operators who, at the time, drilled some of the highest-producing-rate wells in the U.S.
But the decade of the 1990s most certainly will become known as the decade of the horizontal
well. Through 1998, the number of horizontal wells drilled in the U.S. has totaled more than
3,000, an increase of 1,000% over the previous 10-year period. By the late 1990s, a dramatic
shift in corporate philosophy regarding horizontal drilling occurred when one major operator set a
requirement that prior management approval was necessary for all vertical wells.4
European Horizontal Wells
The renaissance of horizontal-well drilling techniques in Europe began about the same time as in
North America. In 1977, Elf Aquitaine and l'Institut Français du Pétrole (IFP) began work on the
FORHOR project, which eventually led to the success of the Rospo Mare field, the only oil field in
the world at that time that produced systematically through horizontal wells. Drilled in the Adriatic
Sea in water depths ranging from 200 to 300 ft (60.96 to 91.44 m), the technical and economic
success of this field is credited with triggering the worlds interest in horizontal drilling.
Jacques Bosio, a former R&D deputy director and Vice President of Elf Aquitaine, was one of the
pioneers in the field of horizontal drilling as a project manager of the Elf/IFP FORHOR horizontal-drilling research study.
"What I remember about that period, when nobody in the world would believe that horizontal wells could become a new tool for the industry, is that it was more difficult to change, by 90°, the way people were thinking than it was to do it with the wells," says Bosio, recalling those early days in Italy. "We had been raised with the idea that the maximum possible inclination for a well could not exceed 70°. I don't know why, that's just the way we were taught. But one of the main reasons the FORHOR project succeeded was because we had the perseverance to go one step further with a rotary drilling rig. Remember, we didn't have downhole motors then."
"When we talked to our drillers [about going beyond 70° inclination] . . . they first laughed and then turned real mad at those crazy R&D people," Bosio muses. "Even supposing that you could drill it, a horizontal well made no economic sense, they said. It will cost at least 10 times as much as a nearby vertical well but will never produce 10 times more. Besides, no coring, logging or testing will be possible, and it will collapse on you before a liner can be run."
In spite of the ridicule and disbelief of others, Bosio and his colleagues pressed on in May 1980 to drill the Lacq 90 (a total coincidence that this was the name of the well) in southern France, the first well drilled at 90° inclination.
"We had to swear that we would plug the well if it happened to disturb the drainage of the reservoir so production could go back to normal," Bosio says as he stifles a laugh. "Lacq 90 went 275 m (902.2 ft) within the reservoir with 100 m (328 ft) purely horizontal at a cost of 3.2 times that of a vertical well," he continues. "It did produce . . . much more water than its neighbors since the reservoir was 90% watered out. This led to claims that horizontal wells were only good for producing water, an unfair statement that did nothing to advance the technology." Shrugging off such comments, Bosio had much better luck later on with the well's successor, the Lacq 91.

With their data in hand, Bosio's group set out to apply it in the Rospo Mare field, a perfect
laboratory for the development of horizontal-drilling techniques. The field is unique because the
nature of its reservoir and the characteristics of its oil prevent it from being produced through
conventional vertical wells. By early 1981, five wells, all vertical, had been drilled from a platform
at the center of the field to appraise, set the field's limits, and begin exploitation.5
"Our attention now turned to the Rospo Mare field," states Bosio enthusiastically. "We drilled the Rospo Mare 6 in January 1982, 370 m (1,213.9 ft) of which was horizontal, at a cost factor of 2.1 times more than a vertical well. More importantly, it was an immediate success, producing 20 times more oil than a neighboring vertical well and boosting the field's recoverable reserves from near zero to 70 million barrels," says Bosio proudly.
Bosio believed the Rospo Mare 6 well's success would jolt the industry into jumping aboard the
horizontal-well bandwagon. Unfortunately, the success was greeted with a big industry yawn.
Bosio recalls his experience in giving a paper on the well at the 1983 World Petroleum Congress
(WPC) meeting in London. "When I went to the chair to present the first paper ever presented on horizontal wells, more than half the room, which was full from the preceding paper, got up and left! They simply weren't interested," Bosio explains. "At the next WPC in 1987 in Houston, the paper I presented attracted a small crowd. Then, at the 1991 WPC in Buenos Aires, we had a full session on horizontal wells."
Finally, producers had begun to realize that horizontal wells can increase production rates and
ultimate recovery, reduce the number of platforms or wells required to develop the reservoir,
reduce stimulation costs, and bypass environmentally sensitive areas.6
Multilateral Wells
The acknowledged father of multilateral technology is Alexander Grigoryan. In 1949, Grigoryan
became involved in the theoretical work of American scientist L. Yuren, who maintained that
increased production could be achieved by increasing borehole diameter in the productive zone.
Grigoryan took the theory a step further and proposed branching the borehole in the productive
zone to increase surface exposure.
Grigoryan put his theory into practice in the former U.S.S.R.'s Bashkiria field (today's Bashkortostan). There, in 1953, he used downhole turbodrills without rotating drillstrings to drill
Well 66/45 in the Bashkiria Ishimabainefti field. His target was the Akavassky horizon, an interval
that ranged from 10 to 60 m (32.8 to 196.8 ft) in thickness. He drilled the main bore to a total
depth of 575 m (1,886.4 ft), just above the pay zone, and then drilled nine branches from the
open borehole without cement bridges or whipstocks. When completed, the well had nine
producing laterals with a maximum horizontal reach from kickoff point of 136 m (446.1 ft). It was
the world's first truly multilateral well, although rudimentary attempts at multilaterals had been
made since the 1930s.
Compared to other wells in the same field, 66/45 was 1.5 times more expensive, but it penetrated
5.5 times the pay thickness and produced 17 times more oil each day. Grigoryans success with
the 66/45 well inspired the Soviets to drill an additional 110 multilateral wells in their oil fields
during the next 27 years, with Grigoryan drilling 30 of them himself.
Like horizontal wells, multilateral wells justify their existence through their economics. Defined as
a single well with one or more wellbore branches radiating from the main borehole, they can be
an exploration well, an infill development well or a re-entry into an existing well. But they all have
a common goal of improving production while saving time and money.

Multilateral-well technology has not yet matured to the level of horizontal-well technology. The
complexity of multilateral wells ranges from simple to extremely complex. They may be as simple
as a vertical wellbore with one sidetrack or as complex as a horizontal extended-reach well with
multiple lateral and sublateral branches.7 While existing techniques are being applied and fresh
approaches are being developed, complications remain, and the risks and chances of failure are
still high.
The Future
As indicated earlier, it took several decades for the industry to endorse the concept of drilling
horizontal and high-angle wells. Producers had to be convinced that the two- or three-fold cost
increase of horizontally drilled wells would be justified. Once producers got a taste of the 15- to
20-fold production increases, they wholeheartedly jumped on the bandwagon.
The initial growth of horizontal drilling has been quite rapid; horizontal wells now represent about 10 to 15% of all drilling activity. "The future growth of horizontal wells depends on how the industry handles the next rounds of technological advancement," Schuh says.
"The present state of the art is economically attractive in easily drilled formations where the reservoir can be efficiently produced without the need for mechanical intervention. The greatest growth potential is in harder-to-drill formations and reservoirs that require selective completions, selective isolations, and stimulation operations. Success in these areas will require new drilling equipment, a great expansion of completion options, and development of new completion equipment and well-repair techniques," Schuh concludes.
It seems that the future of multilateral technology will follow that same course. According to Jim
Longbottom,8 a service/supply company engineer in multilateral technology and a highly
published author, multilateral completions have a bright future, but it will be some time before that
future is realized. "Drilling and completion of multilateral wells is at the same development state as horizontal drilling and completion was 10 years ago," he says. "Acceptance and expansion of multilateral drilling indicate that within a decade, multilaterally completed wells will be as commonplace throughout the industry as horizontal wells are now."
"Asset managers have at their disposal the tools and technology to extract more value than ever before from their holdings," he continues. "Horizontal and re-entry multilateral drilling has increased 50% during the past 5 years and will likely grow at more than 15% a year through 2000."
However, if Longbottom's predictions are to come true, multilateral technology will have to win
over the Gulf of Mexico (GOM) operators, who seem to possess a mysterious lack of enthusiasm.
Apparently these producers, who by nature are conservative, differ with their more risk-oriented
counterparts operating in other parts of the world. GOM operators have a long tradition of
resisting innovation, opting instead for systems that are dominated by near-term profit. They tend
to shun new, exotic solutions to their daily problems.9
Some believe the future of multilateral-well development is tied to advances in the methods for drilling these wells: directional and horizontal drilling techniques, advanced drilling equipment, and coiled-tubing drilling. This may be true. However, it is also important to note that the
industry's ability to analyze the production and reservoir performance of multilaterals, particularly in a cost-effective manner, has fallen behind. Currently, drilling technology has temporarily outstripped the industry's capabilities in production and reservoir-engineering analysis. It will
catch up, but these factors are also a major impediment to more widespread application of
multilaterals, particularly where improved-recovery methods are expected to be used.

Perhaps the biggest push on operators to install multilaterals in the future will come from the
technologys economics. Historically, when operators have found themselves in extended periods
of depressed oil prices about which they could do nothing, they have reduced operating and
capital expenditures to help the bottom line. Then, to help squeeze more oil from every drilling
and completion dollar spent, they have turned to new technologies, even if they hadn't endorsed
them before. Most recently that technology has included geosteering, improved seismic data, and
horizontal wells.
Also, multilateral technology offers an attractive package of economic incentives to producers
looking for bottom-line help. Multilaterals allow multiple wells to be drilled from a single main
wellbore, eliminating costly rig days for drilling an upper hole section for each well. And the ability
to tap several zones from branches off a single wellbore, rather than a number of vertical ones
drilled through the same section, holds the added attraction of risk reduction.
But the biggest economic driver will be deepwater offshore wells, where risks are high and the
huge cost of deepwater installations can be reduced by multilaterals that shrink the number of
wells and the amount of ancillary drilling and completion work needed to access high-production-rate fields.
As for horizontal wells, their future is assured. For multilateral wells, the pendulum is beginning to
swing in their favor as operators steadily realize that the advantages of these systems are
increasingly outweighing their risks. This is making their future look a lot more secure.

Refinery Processes and Facilities


Physical Separation Processes
Atmospheric distillation is the first phase of refinery processing and generally follows a
preparation process to remove salts, water, and soluble metals. Crude oil vaporizes
when it is heated within the distillation column. As the boiling points of different products
(hydrocarbons) are reached, the vapors condense and are collected in streams, or fractions.
Those with the lowest boiling points, the light fractions such as fuel gas, light naphtha, and
straight-run
gasoline, vaporize at the top of the distillation column. These products are used as reformer
feedstocks, gasoline blendstocks, and petrochemical feedstocks, or used directly
(e.g., solvents and liquefied petroleum gases). Fractions in the intermediate boiling ranges, such as gas oil, heavy naphtha, and distillates, are used to produce kerosene, diesel fuel, distillate fuel oil, jet fuel, blending stocks, and catalytic cracker feedstocks. Atmospheric bottoms have the
highest boiling range and remain after other fractions have been collected.
They are used to produce heavy fuel oil, vacuum distillation feedstock, and asphalt.
At atmospheric pressure, heavy bottoms decompose into elemental carbon (coke) and hydrogen before they boil and vaporize. Vacuum distillation reduces the pressure, which lowers the boiling point, allowing further vaporization to occur without charring or coking. Vacuum distillation
produces gas oil to feed downstream processes and heavier oils for lubricant and asphalt
production. Vacuum bottoms are processed in coking units and converted to gasoline
components, petroleum coke, and refinery gases or used as fuel. Two other physical separation processes are mainly used to make lubricating oils: solvent extraction, in which a solvent
removes one component and leaves behind a less soluble one; and crystallization, in which
cooling allows a component with a high melting point to solidify and separate from
the remaining liquid.
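As a rough illustration of how a distillation column sorts a feed into fractions by boiling range, the short Python sketch below buckets hypothetical components using approximate cutoffs consistent with the boiling ranges given in the glossary at the end of this document. The cutoff values and example components are illustrative assumptions, not refinery operating specifications.

# Illustrative only: bucket hydrocarbon components by approximate boiling
# point, mimicking how a distillation column collects fractions. Cutoffs are
# rough values consistent with the glossary in this document, not operating
# specifications.

FRACTION_CUTOFFS_F = [          # (upper boiling limit in degrees F, fraction name)
    (90, "fuel gas / light ends"),
    (400, "naphtha (gasoline blendstocks, reformer feed)"),
    (650, "light gas oil (kerosene, diesel, distillate fuel oil)"),
    (1000, "heavy gas oil (cat cracker / vacuum unit feed)"),
]

def classify(boiling_point_f: float) -> str:
    """Return the fraction a component with this boiling point reports to."""
    for upper_limit, name in FRACTION_CUTOFFS_F:
        if boiling_point_f <= upper_limit:
            return name
    return "atmospheric bottoms (heavy fuel oil, asphalt, vacuum feed)"

# Hypothetical example components (boiling points are approximate).
for component, bp in [("butane", 31), ("heavy naphtha", 330),
                      ("diesel cut", 550), ("residuum", 1100)]:
    print(f"{component:14s} -> {classify(bp)}")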

Breakdown Processes
Thermal cracking uses heat and pressure to crack or break down heavy oils and the residuum
left from distillation to increase the yield of lighter products (such as gasoline components and
light fuel oils) and to make coke. Coking, a form of thermal cracking, uses heat and low pressure
to break down heavy crudes and residual oils into fuel gas, gasoline blending stocks, distillates
and coke. Viscosity Breaking, or visbreaking, a milder form of thermal cracking, raises the
yield of fuel oil, lowers the viscosity of heavy fuel oil, and produces small amounts of gasoline blendstocks and gas oil. Catalytic cracking employs a catalyst to convert vaporized oil from
distillation and other units to high octane gasoline and other lighter distillates. It produces higher
gasoline yields than thermal cracking and the gasoline produced has higher octane ratings than
gasoline produced using distillation or thermal cracking. Catalytic cracking units operate at high temperatures and near-atmospheric pressure. As the reaction takes place, the catalyst is
progressively deactivated as carbon (coke) forms, but can be regenerated between cycles by
burning off the coke. In continuous catalytic cracking, the catalyst cycles between the reactor
and regenerating kiln. In fluid catalytic cracking, the fine powder catalyst behaves as a fluid when vaporized oil bubbles through the particles and circulates continuously between
reaction and regeneration zones.
Hydrocracking uses hydrogen and a catalyst to reduce heavy gas oils to gasoline, jet fuel, and
diesel fuels. Operating temperatures are lower, and pressures higher, than in catalytic cracking.
Hydrocracking is more effective than catalytic cracking in converting previously cracked gas oils
to lighter products, but it is more expensive.

Change Processes
Catalytic reforming and isomerization are primarily used in making high octane unleaded
gasoline to upgrade hydrocarbon streams whose molecules are the right size but the wrong
configuration. Catalytic reforming upgrades low octane naphthas to produce high octane
gasoline blending stocks. It also produces high yields of aromatic hydrocarbons used for petrochemical feedstocks, and generates hydrogen required for many refinery processes.
Isomerization was developed to produce isobutane from normal butane, for use in the
alkylation process. Isomerization uses platinum catalysts to Petroleum: An Energy Profile, 1999 63
Note: This Appendix is based on information contained in U.S. Petroleum Refining,Volume II,
published by the National
Petroleum Council, (Washington, DC, 1993) Appendix
H.
convert low octane normal pentanes to high octane isopentane for unleaded motor gasoline.

Buildup Processes
Alkylation and polymerization join small, highly volatile, low octane molecules to form large,
less volatile, higher octane molecules. Alkylation is basically the reverse of cracking;
it combines small molecule hydrocarbons from catalytic cracking to form high octane compounds
in the gasoline range for use in high octane unleaded motor gasoline and aviation gasoline.
The polymerization process links short chain molecules together. Although polymer gasolines
are somewhat lower in octane and yields are lower than can be obtained through the
alkylation process, capital and operating costs are lower for polymerization than for alkylation. As
a result, polymerization regained importance following the Environmental Protection Agency's (EPA) restriction of the lead content in gasoline. The process is also important in the
petrochemical industry.

Supporting Operations and Facilities

The refinery gas plant extracts alkylation feed and heavier components from refinery byproduct gases.
Hydrodesulfurization is a catalytic process to remove impurities from liquid petroleum
fractions, especially when their presence would harm the process catalyst. It reduces sulfur
emissions, improves product yields, and upgrades petroleum fractions into finished products.
Hydrogen required for this process and for other refinery processes such as hydrocracking and
isomerization may be produced in a hydrogen production unit. Either steam reforming of
hydrocarbons such as methane or partial oxidation of heavier hydrocarbons may be used to
produce the hydrogen required.
Chemical treating removes such impurities as carbon dioxide, oxidants, and various corrosion
compounds from processing systems.
Stabilization processes allow controlled distillation to remove enough light hydrocarbons that
the remaining product has the desired volatility. Solvent treating, dewaxing, and hydrofinishing
change heavy gas oils produced by vacuum distillation into high quality lubricants.
In solvent treating, solvents and lube (lubricant) stocks flow against one another in a tower to
remove impurities. Dewaxing uses a solvent to dilute lube stocks in order to remove the wax that
impedes flow at normal temperatures. After dilution, the oil is chilled to crystallize the wax so
that it can be filtered out. Hydrofinishing uses hydrogen and a catalyst to remove impurities.
In percolation filtration, lube stock percolates through a tower filled with fuller's earth and
bauxite to stabilize its color.
Propane deasphalting produces bright stock lubricating oil from select crude oil in vacuum tower bottoms. Heavy reduced crude is mixed with and dissolves in propane; the lubricating oil is drawn off with the propane, leaving the asphalt behind.
Blending various fractions into finished products is the final step in the refinery operation. In gasoline blending, an automated system meters and mixes into a finished product various components, or blending stocks, from the process unit (e.g., butane, alkylate, isomerization stock, reformate, catalytic gasoline, naphtha, or straight-run gasoline) and various additives.
The light ends recovery unit recaptures light hydrocarbon gases, such as methane, ethane, propane, and butane, some olefins, and isobutane. The methane and ethane are used for refinery fuel. The olefins and isobutane may then be separated for alkylation feedstock, the recovered propane for liquefied petroleum gases, and the normal butane for gasoline blending.
An acid gas treating system uses absorption in an alkaline solution to remove the acid gases,
hydrogen sulfide and carbon dioxide, from the sour gas produced in a number of refinery
processes, before it can be used for refinery fuel. The acid gases can then be processed in a sulfur recovery unit.
Sour water stripping is used to reduce hydrogen sulfide and ammonia levels in sour water
condensate, or water containing sulfides and ammonia, that is produced in various refinery
operations.
Storage tanks for crude oil, intermediate products, and finished products are necessary to the
operation of the refinery, even though they are not directly involved in the operation of processing
units.
Steam generation systems provide steam for refinery processes, generate electric power, and
run turbines for blowers.
Receiving and distribution systems bring materials, crude oil, and products into the refinery and distribute finished products to the consumer. They include pumps, pipelines, storage tanks, tankers, tank cars, tank trucks, and loading and unloading facilities.

Safety systems include flare and fire control systems. A flare system consists of pipes to
collect gases, devices to remove liquid, and a terminal burner to flare (burn off) the gases safely.
The fire control system includes a separate water system with storage pumps, piping,
and water spray devices in process areas; sewer systems with seals, covers, traps, and baffles;
foam systems; fire trucks and other specialized equipment; and firefighters.
Environmental protection systems are often incorporated into the refinery processes
themselves. Cooling systems use water and air to remove excess heat. Most water used by
refineries is used for cooling. In some refineries, total recirculation has been achieved through air cooling and closed systems, greatly reducing the need for cooling water. Water pollution control systems use distillation to remove chemicals from contaminated water from process systems before the water is reused in other plant services or sent to the waste treatment facility and holding ponds. Waste water is also treated chemically so impurities can be filtered out.
Some refineries have separate basins to segregate storm water runoff so its flow through the
treatment facility can be kept to manageable levels.
Air pollution control measures include chemical treatment and other measures to reduce
sulfur emissions during production and scrubbers to remove sulfur from combustion gases and
tail gases (residue gases from a sulfur recovery unit). Vapor collection systems, floating tank roofs, and seals are among the means used to reduce evaporation during storage.
Particulate emissions from catalytic crackers are collected by electrostatic
precipitators, scrubbers, and filters.
Solid waste produced by various pollution control equipment requires careful handling. Sludge
from water
treatment facilities is incorporated in landfills. Oily wastes are deposited in selected sites.
Powdery catalysts that contain valuable metals are returned to the manufacturers for recycling.

Glossary
Alcohol. The family name of a group of organic chemical compounds composed of carbon, hydrogen, and oxygen. The series of molecules vary in chain length and are composed of a hydrocarbon plus a hydroxyl group: CH3-(CH2)n-OH (e.g., methanol, ethanol, and tertiary butyl alcohol).
Alkylate. The product of an alkylation reaction. It usually refers to the high octane
product from alkylation units. This alkylate is used in blending high octane gasoline.
Alkylation. A refining process for chemically combining isobutane with olefin
hydrocarbons (e.g., propylene, butylene) through the control of temperature and pressure
in the presence of an acid catalyst, usually sulfuric acid or hydrofluoric acid. The product,
alkylate, an isoparaffin, has high octane value and is blended with motor and aviation
gasoline to improve the antiknock value of the fuel.
API Gravity. An arbitrary scale expressing the gravity or density of liquid petroleum products. The measuring scale is calibrated in terms of degrees API; it may be calculated in terms of the following formula: Degrees API = (141.5 / specific gravity at 60°F/60°F) - 131.5. The higher the API gravity, the lighter the compound. Light crudes generally exceed 38 degrees API, and heavy crudes are commonly labeled as all crudes with an API gravity of 22 degrees or below. Intermediate crudes fall in the range of 22 degrees to 38 degrees API gravity.
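A worked example of this formula and of the category thresholds given in this entry follows as a short Python sketch; the sample specific gravities are illustrative assumptions, not data for particular crudes.

# Worked example of the API gravity formula and the crude-oil categories given
# in this glossary entry. The sample specific gravities are illustrative.

def api_gravity(specific_gravity_60f: float) -> float:
    """Degrees API = (141.5 / specific gravity at 60 F/60 F) - 131.5."""
    return 141.5 / specific_gravity_60f - 131.5

def crude_category(degrees_api: float) -> str:
    """Classify crude per this glossary's thresholds (22 and 38 degrees API)."""
    if degrees_api <= 22:
        return "heavy"
    if degrees_api <= 38:
        return "intermediate"
    return "light"

for sg in (0.85, 0.90, 0.95):   # hypothetical specific gravities
    api = api_gravity(sg)
    print(f"sp. gr. {sg:.2f} -> {api:.1f} degrees API ({crude_category(api)})")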
Aromatics. Hydrocarbons characterized by unsaturated ring structures of carbon atoms.
Commercial petroleum aromatics are benzene, toluene, and xylene (BTX).
Asphalt. A dark-brown-to-black cement-like material containing bitumens as the
predominant constituent
obtained by petroleum processing. The definition includes crude asphalt as well as the
following finished
products: cements, fluxes, the asphalt content of emulsions (exclusive of water), and
petroleum distillates
blended with asphalt to make cutback asphalts. The conversion factor for asphalt is 5.5
barrels per short ton.
ASTM. The acronym for the American Society for Testing and Materials.
Atmospheric Crude Oil Distillation. The refining process of separating crude oil
components at atmospheric pressure by heating to temperatures of about 600° to 750°F
(depending on the nature of the crude oil and desired products) and subsequent
condensing of the fractions by cooling.
Aviation Gasoline (Finished). All special grades of gasoline for use in aviation
reciprocating engines, as given in ASTM Specification D910 and Military Specification
MIL-G-5572. Excludes blending components which will be used in blending or
compounding into finished aviation gasoline.
Aviation Gasoline Blending Components. Naphthas which will be used for blending or
compounding into
finished aviation gasoline (e.g., straight-run gasoline, alkylate, reformate, benzene,
toluene, and xylene).
Excludes oxygenates (alcohols, ethers), butane, and pentanes plus. Oxygenates are
reported as other
hydrocarbons, hydrogen, and oxygenates.
Barrel. A volumetric unit of measure for crude oil and petroleum products equivalent to
42 U.S. gallons. This measure is used in most statistical reports. Factors for converting
petroleum coke, asphalt, still gas and wax to barrels are given in the definitions of these
products.
Barrels Per Calendar Day. The maximum number of barrels of input that can be processed during a 24-hour period after making allowances for the following limitations: the capability of downstream facilities to absorb the output of crude oil processing facilities of a given refinery (no reduction is made when a planned distribution of intermediate streams through other than downstream facilities is part of a refinery's normal operation); the types and grades of inputs to be processed; the types and grades of products expected to be manufactured; the environmental constraints associated with refinery operations; the reduction of capacity for scheduled downtime such as routine inspection, mechanical problems, maintenance, repairs, and turnaround; and the reduction of capacity for unscheduled downtime such as mechanical problems, repairs, and slowdowns.

Barrels Per Stream Day. The amount a unit can process running at full capacity under
optimal crude oil and product slate conditions.
Benzene (C6H6). An aromatic hydrocarbon present in small proportion in some crude
oils and made
commercially from petroleum by the catalytic reforming of naphthenes in petroleum
naphtha. Also made from coal in the manufacture of coke. Used as a solvent, in
manufacturing detergents, synthetic fibers, and
petrochemicals and as a component of high-octane gasoline.
Blending Components. See Motor or Aviation Gasoline Blending Components.
Blending Plant. A facility which has no refining capability but is either capable of
producing finished
motor gasoline through mechanical blending or blends oxygenates with motor gasoline.
Bonded Petroleum Imports. Petroleum imported and entered into Customs bonded
storage. These imports are not included in the import statistics until they are:
(1) withdrawn from storage free of duty for use as fuel for vessels and aircraft engaged in international trade; or
(2) withdrawn from storage with duty paid for domestic use.
BTX. The acronym for the commercial petroleum aromatics benzene, toluene, and
xylene. See individual
categories for definitions.
Bulk Station. A facility used primarily for the storage and/or marketing of petroleum
products which has a total bulk storage capacity of less than 50,000 barrels and receives
its petroleum products by tank car or truck.
Bulk Terminal. A facility used primarily for the storage and/or marketing of petroleum
products which has a total bulk storage capacity of 50,000 barrels or more and/or receives
petroleum products by tanker, barge, or pipeline.
Butane (C4H10). A normally gaseous straight-chain or branch-chain hydrocarbon
extracted from natural gas or refinery gas streams. It includes isobutane and normal
butane and is designated in ASTM Specification D1835
and Gas Processors Association Specifications for commercial butane.
Isobutane (C4H10). A normally gaseous branch-chain hydrocarbon. It is a colorless paraffinic gas that boils at a temperature of 10.9°F. It is extracted from natural gas or refinery gas streams.
Normal Butane (C4H10). A normally gaseous straight-chain hydrocarbon. It is a colorless paraffinic gas that boils at a temperature of 31.1°F. It is extracted from natural gas or refinery gas streams.
Butylene (C4H8). An olefinic hydrocarbon recovered from refinery processes.
Captive Refinery Oxygenate Plants. Oxygenate production facilities located within or
adjacent to a
refinery complex.
Catalytic Cracking. The refining process of breaking down the larger, heavier, and more
complex hydrocarbon molecules into simpler and lighter molecules. Catalytic cracking is
accomplished by the use of a catalytic agent and is an effective process for increasing the
yield of gasoline from crude oil. Catalytic cracking processes fresh feeds and recycled
feeds.

Fresh Feeds. Crude oil or petroleum distillates which are being fed to processing units
for the first time.
Recycled Feeds. Feeds that are continuously fed back for additional processing.
Catalytic Hydrocracking. A refining process that uses hydrogen and catalysts with
relatively low temperatures and high pressures for converting middle boiling or residual
material to high-octane gasoline, reformer charge stock, jet fuel, and/or high grade fuel
oil. The process uses one or more catalysts, depending upon product output, and can
handle high sulfur feedstocks without prior desulfurization.
Catalytic Hydrotreating. A refining process for treating petroleum fractions from
atmospheric or vacuum
distillation units (e.g., naphthas, middle distillates, reformer feeds, residual fuel oil, and heavy gas oil) and other petroleum (e.g., cat cracked naphtha, coker naphtha, gas oil, etc.) in the presence of catalysts and substantial quantities of hydrogen. Hydrotreating includes desulfurization, removal of substances (e.g., nitrogen compounds) that deactivate catalysts, conversion of olefins to paraffins to reduce gum formation in gasoline, and other processes to upgrade
the quality of the fractions.
Catalytic Reforming. A refining process using controlled heat and pressure with catalysts
to rearrange certain hydrocarbon molecules, thereby converting paraffinic and naphthenic
type hydrocarbons (e.g., low-octane gasoline boiling range fractions) into petrochemical
feedstocks and higher octane stocks suitable for blending into finished gasoline. Catalytic
reforming is reported in two categories. They are:
Low Pressure. A processing unit operating at less than 225 pounds per square inch gauge
(PSIG) measured at the outlet separator.
High Pressure. A processing unit operating at either equal to or greater than 225 pounds
per square inch gauge (PSIG) measured at the outlet separator.
Charge Capacity. The input (feed) capacity of the refinery processing facilities.
Commercial Kerosene-Type Jet Fuel. See Kerosene-Type Jet Fuel.
Contract Arrangements. Long-term contracts which specify the volumes of a
commodity, such as crude oil, to be delivered and fix the price for a specified period of
time.
Crude Oil (Including Lease Condensate). A mixture of hydrocarbons that exists in
liquid phase in underground reservoirs and remains liquid at atmospheric pressure after
passing through surface-separating facilities. Included are lease condensate and liquid
hydrocarbons produced from tar sands, gilsonite, and oil shale. Drip gases are also
included, but topped crude oil (residual oil) and other unfinished oils are excluded.
Liquids produced at natural gas processing plants and mixed with crude oil are likewise
excluded where identifiable. Crude oil is considered as either domestic or foreign,
according to the following:
Domestic. Crude oil produced in the United States or from its outer continental shelf as
defined in 43 USC 1331.
Foreign. Crude oil produced outside the United States. Imported Athabasca
hydrocarbons (tar sands from
Canada) are included.
Crude Oil Losses. Represents the volume of crude oil reported by petroleum refineries as
being lost in their

operations. These losses are due to spills, contamination, fires, etc., as opposed to refinery
processing losses.
Crude Oil Production. The volume of crude oil produced from oil reservoirs during
given periods of time. The amount of such production for a given period is measured as
volumes delivered from lease storage tanks (i.e., the point of custody transfer) to
pipelines, trucks, or other media for transport to refineries or terminals with adjustments
for (1) net differences between opening and closing lease inventories, and (2) basic
sediment and water
(BS&W).
Crude Oil Qualities. Refers to two properties of crude oil, the sulfur content and API
gravity, which affect refinery processing complexity and product characteristics.
Crude Oil, Refinery Receipts. Receipts of domestic and foreign crude oil at a refinery.
Includes all crude oil in transit except crude oil in transit by pipeline. Foreign crude oil is
reported as a receipt only after entry through customs. Crude oil of foreign origin held in
bonded storage is excluded.
Delayed Coking. A process by which heavier crude oil fractions can be thermally
decomposed under conditions of elevated temperatures and pressure to produce a mixture
of lighter oils and petroleum coke. The light oils can be processed further in other
refinery units to meet product specifications. The coke can be used either as a fuel or in
other applications such as the manufacturing of steel or aluminum.
Development Well. A well drilled within the proved area of an oil or gas reservoir to the
depth of a stratigraphic horizon known to be productive.
Disposition. The components of petroleum disposition are stock change, crude oil losses,
refinery inputs, exports, and products supplied for domestic consumption.
Distillate Fuel Oil. A general classification for one of the petroleum fractions produced in conventional distillation operations. It is used primarily for space heating, on- and off-highway diesel engine fuel (including railroad engine fuel and fuel for agricultural machinery), and electric power generation. Included are products known as No. 1, No. 2, and No. 4 fuel oils; No. 1, No. 2, and No. 4 diesel fuels. Distillate fuel oil is reported in the following sulfur categories: 0.05% sulfur and under, for use in on-highway diesel engines which could be described as meeting EPA regulations; and greater than 0.05% sulfur, for use in all other distillate applications.
No. 1 Distillate. A petroleum distillate which meets the specifications for No. 1 heating or fuel oil as defined in ASTM D 396 and/or the specifications for No. 1 diesel fuel as defined in ASTM Specification D 975, with distillation temperatures of 420°F at the 10-percent recovery point and 550°F at the 90-percent recovery point, and kinematic viscosities between 1.4 and 2.2 centistokes at 100°F.
No. 2 Distillate. A petroleum distillate which meets the specifications for No. 2 heating or fuel oil as defined in ASTM D 396 and/or the specifications for No. 2 diesel fuel as defined in ASTM Specification D 975, with distillation temperatures of 540° and 640°F at the 90-percent recovery point, and kinematic viscosities between 2.0 and 4.3 centistokes at 100°F.
No. 4 Fuel Oil. A fuel oil for commercial burner installations not equipped with preheating facilities. It is used extensively in industrial plants. This grade is a blend of distillate fuel oil and residual fuel oil stocks that conforms to ASTM Specification D396 or Federal Specification VV-F-815C, with minimum and maximum kinematic viscosities between 5.8 and 26.4 centistokes at 100°F. Also included is No. 4-D, a fuel oil for low- and medium-speed diesel engines that conforms to ASTM Specification D975.
Electricity (Purchased). Electricity purchased for refinery operations that is not produced
within the refinery complex.
Ending Stocks. Primary stocks of crude oil and petroleum products held in storage as of
12 midnight on the last day of the month. Primary stocks include crude oil or
petroleum products held in storage at (or in) leases, refineries, natural gas processing
plants, pipelines, tank farms, and bulk terminals that can store at least 50,000 barrels of
petroleum products or that can receive petroleum products by tanker, barge, or pipeline.
Crude oil that is in-transit by water from Alaska, or that is stored on Federal leases or in
the Strategic Petroleum Reserve is included. Primary Stocks exclude stocks of foreign
origin that are held in bonded warehouse storage.
ETBE (Ethyl tertiary butyl ether) (CH3)3COC2H5. An oxygenate blend stock formed
by the catalytic etherification of isobutylene with ethanol.
Ethane (C2H6). A normally gaseous straight-chain hydrocarbon. It is a colorless
paraffinic gas that boils at a temperature of -127.48°F. It is extracted from natural gas
and refinery gas streams.
Ether. A generic term applied to a group of organic chemical compounds composed of
carbon, hydrogen, and oxygen, characterized by an oxygen atom attached to two
carbon atoms (e.g., methyl tertiary butyl ether).
Ethylene (C2H4). An olefinic hydrocarbon recovered from refinery processes or
petrochemical processes.
Exploratory Well. A well drilled to find and produce oil or gas in an unproved area; to
find a new reservoir in a field previously found to be productive of oil or gas in another
reservoir; or to extend the limit of a known oil or gas reservoir.
Exports. Shipments of crude oil and petroleum products from the 50 States and the
District of Columbia to foreign countries, Puerto Rico, the Virgin Islands, and other U.S.
possessions and territories.
Field. An area consisting of a single reservoir or multiple reservoirs all grouped on, or
related to, the same individual geological structural feature and/or stratigraphic
condition. There may be two or more reservoirs in a field that are separated vertically by
intervening impervious strata, or laterally by local geologic barriers, or by both.
Field Area. A geographic area encompassing two or more pools that have a common
gathering and metering system, the reserves of which are reported as a single unit. This
concept applies primarily to the Appalachian region. (See Pool)
Field Production. Represents crude oil production on leases, natural gas liquids
production at natural gas processing plants, new supply of other hydrocarbons/
oxygenates and motor gasoline blending components, and fuel ethanol blended into
finished motor gasoline.
First Purchase (of crude oil). An equity (not custody) transaction involving an arms-length transfer of ownership of crude oil associated with the physical removal of the crude oil from a property (lease) for the first time. A first purchase normally occurs at the
time and place of ownership transfer where the crude oil volume sold is measured and
recorded on a run ticket or other similar physical evidence of purchase. The reported cost

is the actual amount paid by the purchaser, allowing for any adjustments (deductions or
premiums) passed on to the producer or royalty owner.
Flexicoking. A thermal cracking process which converts heavy hydrocarbons such as
crude oil, tar sands bitumen, and distillation residues into light hydrocarbons.
Feedstocks can be any pumpable hydrocarbons including those containing high
concentrations of sulfur and metals.
Fluid Coking. A thermal cracking process utilizing the fluidized-solids technique to
remove carbon (coke) for continuous conversion of heavy, low-grade oils into
lighter products.
Fresh Feed Input. Represents input of material (crude oil, unfinished oils, natural gas
liquids, other hydrocarbons and oxygenates or finished products) to processing units at a
refinery that is being processed (input) into a particular unit for the first time.
Examples:
(1) Unfinished oils coming out of a crude oil distillation unit which are input into a
catalytic cracking unit are considered fresh feed to the catalytic cracking unit.
(2) Unfinished oils coming out of a catalytic cracking unit to be reprocessed are
not considered fresh feed.
Fuel Ethanol (C2H5OH). An anhydrous denatured aliphatic alcohol intended for
gasoline blending as described in Oxygenates definition.
Fuels Solvent Deasphalting. A refining process for removing asphalt compounds from
petroleum fractions, such as reduced crude oil. The recovered stream from this
process is used to produce fuel products.
Futures Contract. A promise to deliver a specified quantity of a specified commodity at
a specified place, price, and time in the future.
Gas Oil. A liquid petroleum distillate having a viscosity intermediate between that of
kerosene and lubricating oil. It derives its name from having originally been used in the
manufacture of illuminating gas. It is now used to produce distillate fuel oils and
gasoline.
Gasohol. A blend of finished motor gasoline and alcohol (generally ethanol but
sometimes methanol), limited to 10 percent by volume of alcohol.
Gasoline Blending Components. Naphthas which will be used for blending or
compounding into finished aviation or motor gasoline (e.g., straight-run gasoline,
alkylate, reformate, benzene, toluene, and xylene). Excludes oxygenates (alcohols,
ethers), butane, and pentanes plus.
Gross Input to Atmospheric Crude Oil Distillation Units.
Total input to atmospheric crude oil distillation units. Includes all crude oil, lease condensate, natural gas plant liquids, unfinished oils, liquefied refinery gases, slop oils, and other liquid hydrocarbons produced from tar sands, gilsonite, and oil shale.
Heavy Gas Oil. Petroleum distillates with an approximate boiling range from 651° to 1,000°F.
Hydrocracking. (See Catalytic Hydrocracking)
Hydrogen. The lightest of all gases, occurring chiefly in combination with oxygen in
water; exists also in acids, bases, alcohols, petroleum, and other hydrocarbons.
Hydrotreating. (See Catalytic Hydrotreating)
Idle Capacity. The component of operable capacity that is not in operation and not under
active repair, but capable of being placed in operation within 30 days; and capacity not

in operation but under active repair that can be completed within 90 days.
Imported Crude Oil Burned As Fuel. The amount of foreign crude oil burned as a fuel
oil, usually as residual fuel oil, without being processed as such. Imported crude
oil burned as fuel includes lease condensate and liquid hydrocarbons produced from tar
sands, gilsonite, and oil shale.
Imports. Receipts of crude oil and petroleum products into the 50 States and the District
of Columbia from foreign countries, Puerto Rico, the Virgin Islands, and
other U.S. possessions and territories.
Indicated Additional Reserves of Crude Oil. Quantities of crude oil (other than proved
reserves) which may become economically recoverable from existing productive
reservoirs through the application of improved recovery techniques using current
technology. These recovery techniques may:
1. Already be installed in the reservoir, but their effects are not yet known to the degree
necessary to classify the additional reserves as proved;
2. Be installed in another similar reservoir, where the results of that installation can be
used to estimate the indicated additional reserves. Indicated additional reserves are not
included in proved reserves due to their uncertain economic recoverability. When economic recoverability is demonstrated, the indicated additional reserves must be transferred to proved reserves as positive revisions.
Isobutane. See Butane.
Isobutylene (C4H8). An olefinic hydrocarbon recovered from refinery processes or
petrochemical processes.
Isohexane (C6H14). A saturated branch-chain hydrocarbon. It is a colorless liquid that boils at a temperature of 156.2°F.
Isomerization. A refining process which alters the fundamental arrangement of atoms in the molecule without adding or removing anything from the original material. Used to convert normal butane into isobutane (C4), an alkylation process feedstock, and normal pentane and hexane into isopentane (C5) and isohexane (C6), high-octane gasoline components.
Isopentane. See Natural Gasoline and Isopentane.
Kerosene. A petroleum distillate that has a maximum distillation temperature of 401°F at the 10-percent recovery point, a final boiling point of 572°F, and a minimum flash point of 100°F. Included are the two grades designated in ASTM D3699, No. 1-K and No. 2-K, and all grades of kerosene called range or stove oil. Kerosene is used in space heaters, cook stoves, and water heaters and is suitable for use as an illuminant when burned in wick lamps.
Kerosene-Type Jet Fuel. A quality kerosene product with a maximum distillation temperature of 400°F at the 10-percent recovery point and a final maximum boiling point of 572°F. The fuel is designated in ASTM Specification D1655 and Military Specifications MIL-T-5624R and MIL-T-83133D (Grades JP-5 and JP-8). A relatively low-freezing-point distillate of the kerosene type used primarily for turbojet and turboprop aircraft engines.
Commercial. Kerosene-type jet fuel intended for use in commercial aircraft.
Military. Kerosene-type jet fuel intended for use in military aircraft.
Lease Condensate. A natural gas liquid recovered from gas well gas (associated and nonassociated) in lease separators or natural gas field facilities. Lease condensate

consists primarily of pentanes and heavier hydrocarbons.


Light Gas Oils. Liquid petroleum distillates heavier than naphtha, with an approximate
boiling range from 401°F to 650°F.
Liquefied Petroleum Gases (LPG). Ethane, ethylene, propane, propylene, normal butane,
butylene, isobutane, and isobutylene produced at refineries or natural gas
processing plants, including plants that fractionate raw natural gas plant liquids.
Liquefied Refinery Gases (LRG). Liquefied petroleum gases fractionated from refinery
or still gases. Through compression and/or refrigeration, they are retained in the
liquid state. The reported categories are ethane/ethylene, propane/propylene, normal
butane/butylene, and isobutane/isobutylene. Excludes still gas.
Lubricants. Substances used to reduce friction between bearing surfaces, or incorporated into other materials used as processing aids in the manufacture of other products, or used as carriers of other materials. Petroleum lubricants may be produced either from distillates or residues. Other substances may be added to impart or improve certain required properties. Do not include byproducts of lubricating oil refining such as aromatic extracts derived from solvent extraction or tars derived from
deasphalting. Lubricants includes all grades of lubricating oils from spindle oil to
cylinder oil and those used in greases. Reporting categories include:
Paraffinic. Includes all grades of bright stock and neutrals that meet the Viscosity Index criterion for paraffinic stocks.
Naphthenic. Includes all lubricating oil base stocks that meet the Viscosity Index criterion for naphthenic stocks.
Note: The criterion for categorizing the lubricants is based solely on the Viscosity Index
of the stocks and is independent of crude sources and type of processing used to produce
the oils.
Exceptions: Lubricating oil base stocks that have been historically classified as
naphthenic or paraffinic by a refiner may continue to be so categorized irrespective of
the Viscosity Index criterion.
Example:
(1) Unextracted paraffinic oils that would not meet the Viscosity Index test.
Merchant Oxygenate Plants. Oxygenate production facilities that are not associated with
a petroleum refinery. Production from these facilities is sold under contract or on the spot
market to refiners or other gasoline blenders.
Methanol (CH3OH). A light, volatile alcohol intended for gasoline blending as described
in Oxygenate definition.
Middle Distillates. A general classification of refined petroleum products that includes
distillate fuel oil and kerosene.
Military Kerosene-Type Jet Fuel. See Kerosene-Type Jet Fuel.
Miscellaneous Products. Includes all finished products not classified elsewhere (e.g.,
petrolatum, lube refining byproducts (aromatic extracts and tars), absorption oils,
ram-jet fuel, petroleum rocket fuels, synthetic natural gas feedstocks, and specialty oils).
Motor Gasoline (Finished). A complex mixture of relatively volatile hydrocarbons, with
or without small quantities of additives, that has been blended to form a fuel suitable for
use in spark-ignition engines. Motor gasoline, as given in ASTM Specification D-4814 or
Federal Specification VV-G-1690C, includes a range in distillation temperatures from
122 degrees to 158 degrees F at the 10-percent recovery point and from 365 degrees to
374 degrees F at the 90-percent recovery point. Motor gasoline includes reformulated
gasoline, oxygenated gasoline, and other finished gasoline. Blendstock is excluded until blending has been completed.


Reformulated Gasoline. Gasoline formulated for use in motor vehicles, the composition
and properties of which meet the requirements of the reformulated gasoline regulations
promulgated by the U.S. Environmental
Protection Agency under Section 211K of the Clean Air Act. Includes oxygenated fuels
program reformulated gasoline (OPRG). Excludes reformulated gasoline blendstock for
oxygenate blending (RBOB).
Oxygenated Gasoline. Gasoline formulated for use in motor vehicles that has an oxygen
content of 1.8 percent or higher, by weight. Includes gasohol. Excludes reformulated
gasoline, oxygenated fuels program reformulated gasoline (OPRG) and reformulated
gasoline blendstock for oxygenate blending (RBOB).
OPRG. Oxygenated Fuels Program Reformulated Gasoline is reformulated gasoline
which is intended for use in an oxygenated fuels program control period.
Other Finished or Conventional Gasoline. Motor gasoline not included in the
oxygenated or reformulated gasoline categories. Excludes reformulated gasoline
blendstock for oxygenate blending (RBOB).
Motor Gasoline Blending. Mechanical mixing of motor gasoline blending components
and oxygenates to produce finished motor gasoline. Mechanical mixing of finished
motor gasoline with motor gasoline blending components or oxygenates which results in
increased volumes of finished motor gasoline, and/or changes in the classification of
finished motor gasoline (e.g., other finished motor gasoline mixed with MTBE to
produce oxygenated motor gasoline), is considered motor gasoline blending.
Motor Gasoline Blending Components. Naphthas which will be used for blending or
compounding into finished motor gasoline (e.g., straight-run gasoline, alkylate,
reformate, benzene, toluene, xylene) and includes reformulated gasoline blendstock for
oxygenate blending (RBOB). Excludes oxygenates (alcohols, ethers), butane, and pentanes plus. Oxygenates are reported as individual components and included in the total for other hydrocarbons, hydrogen, and oxygenates.
MTBE (Methyl tertiary butyl ether) (CH3)3COCH3. An ether intended for gasoline
blending as described in Oxygenate definition.
Naphtha. A generic term applied to a petroleum fraction with an approximate boiling range between 122 and 400 degrees F.
Naphtha Less Than 401 F. See Petrochemical Feedstocks.
Naphtha-Type Jet Fuel. A fuel in the heavy naphtha boiling range. ASTM Specification
D1655 specifies for this fuel maximum distillation temperatures of 290 F at the 20-percent recovery point and 470 F at the 90-percent point, meeting Military Specification
MIL-T-5624L (Grade JP-4). JP-4 is used for turbojet and turboprop aircraft engines,
primarily by the military. Excludes ram-jet and petroleum rocket fuels.
Natural Gas. A mixture of hydrocarbons and small quantities of various nonhydrocarbons
existing in the gaseous phase or in solution with crude oil in underground reservoirs.
Natural Gas Field Facility. A field facility designed to process natural gas produced
from more than one lease for the purpose of recovering condensate from a stream of
natural gas; however, some field facilities are designed to recover propane, normal
butane, pentanes plus, etc., and to control the quality of natural gas to be marketed.
Natural Gas Plant Liquids. Natural gas liquids recovered from natural gas in gas
processing plants, and in some situations, from natural gas field facilities. Natural gas liquids extracted by fractionators are also included. These liquids are defined according
to the published specifications of the Gas Processors Association and the American
Society for Testing and Materials and are classified as follows: ethane, propane, normal
butane, isobutane, and pentanes plus.
Natural Gas Processing Plant. A facility designed (1) to achieve the recovery of natural
gas liquids from the stream of natural gas which may or may not have been processed
through lease separators and field facilities, and (2) to control the quality of the natural
gas to be marketed. Cycling plants are classified as gas processing plants.
Natural Gasoline and Isopentane. A mixture of hydrocarbons, mostly pentanes and
heavier, extracted from natural gas, that meets vapor pressure, end-point, and other
specifications for natural gasoline set by the Gas Processors Association. Includes
isopentane which is a saturated branch-chain hydrocarbon, (C5H12), obtained
by fractionation of natural gasoline or isomerization of normal pentane.
Net Receipts. The difference between total movements into and total movements out of
each PAD District by pipeline, tanker, and barge.
Normal Butane. See Butane.
OPEC. The acronym for the Organization of Petroleum Exporting Countries, whose members have organized for the purpose of negotiating with oil companies on matters of oil production, prices, and future concession rights. Current members are Algeria, Indonesia, Iran, Iraq, Kuwait, Libya, Nigeria, Qatar, Saudi Arabia, United Arab Emirates, and Venezuela. The Neutral Zone between Kuwait and Saudi Arabia is considered part of OPEC. Prior to January 1, 1993, Ecuador was a member of OPEC. Prior to January 1995, Gabon was a member of OPEC.
OPRG. Oxygenated Fuels Program Reformulated Gasoline is reformulated gasoline
which is intended for use in an oxygenated fuels program control area during an
oxygenated fuels program control period.
Operable Capacity. The amount of capacity that, at the beginning of the period, is in
operation; not in operation and not under active repair, but capable of being placed in
operation within 30 days; or not in operation but under active repair that can be
completed within 90 days. Operable capacity is the sum of the operating and idle
capacity and is measured in barrels per calendar day or barrels per stream day.
Operating Capacity. The component of operable capacity that is in operation at the
beginning of the period.
Operable Utilization Rate. Represents the utilization of the atmospheric crude oil
distillation units. The rate is calculated by dividing the gross input to these units by the
operable refining capacity of the units.
Operating Utilization Rate. Represents the utilization of the atmospheric crude oil
distillation units. The rate is calculated by dividing the gross input to these units by the
operating refining capacity of the units.
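To illustrate how these two rates differ, the following Python sketch computes both from the same gross input; the input and capacity figures are hypothetical, not published data.

# Hypothetical figures, for illustration only (barrels per calendar day).
gross_input = 15_000_000          # gross input to atmospheric crude oil distillation units
operable_capacity = 17_500_000    # operating plus idle capacity
operating_capacity = 16_800_000   # capacity actually in operation at the start of the period

operable_utilization_rate = 100.0 * gross_input / operable_capacity
operating_utilization_rate = 100.0 * gross_input / operating_capacity

print(round(operable_utilization_rate, 1))   # about 85.7 percent
print(round(operating_utilization_rate, 1))  # about 89.3 percent

Because operating capacity excludes idle capacity, the operating utilization rate is always at least as high as the operable utilization rate for the same gross input.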
Other Finished. See Motor Gasoline (Finished).
Other Hydrocarbons. Materials received by a refinery and consumed as a raw material.
Includes hydrogen, coal tar derivatives, gilsonite, and natural gas received by the
refinery for reforming into hydrogen. Natural gas to be used as fuel is excluded.
Other Oils Equal To or Greater Than 401 F. See Petrochemical Feedstocks.
Other Oxygenates. Other aliphatic alcohols and aliphatic ethers intended for motor
gasoline blending (e.g., isopropyl ether (IPE) or n-propanol).
Outer Continental Shelf (OCS). U.S. offshore waters that are under Federal domain.
Oxygenated Gasoline. See Motor Gasoline (Finished).
Oxygenates. Any substance which, when added to gasoline, increases the amount of
oxygen in that gasoline blend. Through a series of waivers and interpretive rules,
the Environmental Protection Agency (EPA) has determined the allowable limits for
oxygenates in unleaded gasoline. The "Substantially Similar" Interpretive Rules (56 FR (February 11, 1991)) allow blends of aliphatic alcohols other than methanol and aliphatic ethers, provided the oxygen content does not exceed 2.7 percent by weight. The "Substantially Similar" Interpretive Rules also provide for blends of methanol up to 0.3 percent by volume exclusive of other oxygenates, and butanol or alcohols of a higher molecular weight up to 2.75 percent by weight. Individual waivers pertaining to
the use of oxygenates in unleaded gasoline have been issued by the EPA. They include:
Fuel Ethanol. Blends of up to 10 percent by volume anhydrous ethanol (200 proof)
(commonly referred to as the gasohol waiver).
Methanol. Blends of methanol and gasoline-grade tertiary butyl alcohol (GTBA) such
that the total oxygen content does not exceed 3.5 percent by weight and the ratio of
methanol to GTBA is less than or equal to 1. It is also specified that this blended fuel
must meet ASTM volatility specifications (commonly referred to as the ARCO
waiver).
Blends of up to 5.0 percent by volume methanol with a minimum of 2.5 percent by
volume cosolvent alcohols having a carbon number of 4 or less (i.e., ethanol,
propanol, butanol, and/or GTBA). The total oxygen must not exceed 3.7 percent by
weight, and the blend must meet ASTM volatility specifications as well as phase
separation and alcohol purity specifications (commonly referred to as the DuPont
waiver).
MTBE (Methyl tertiary butyl ether). Blends up to 15.0 percent by volume MTBE which
must meet the ASTM D4814 specifications. Blenders must take precautions that the
blends are not used as base gasolines for other oxygenated blends (commonly referred to
as the Sun waiver).
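As a rough illustration of the weight-percent oxygen limits cited in the Oxygenates entry, the short Python sketch below estimates the oxygen content of an ethanol-gasoline blend. The densities and the 10-volume-percent blend level are assumptions chosen for the example, not values taken from the waivers themselves, so the result is approximate.

# Rough estimate of oxygen content (weight percent) for an ethanol-gasoline blend.
# Densities are assumed, typical values; the result shifts slightly with the base gasoline.
ETHANOL_DENSITY = 0.789                      # g/mL
GASOLINE_DENSITY = 0.74                      # g/mL (assumed typical finished gasoline)
OXYGEN_FRACTION_IN_ETHANOL = 16.00 / 46.07   # one oxygen atom per C2H5OH molecule

def oxygen_weight_percent(ethanol_volume_fraction):
    ethanol_mass = ethanol_volume_fraction * ETHANOL_DENSITY
    gasoline_mass = (1.0 - ethanol_volume_fraction) * GASOLINE_DENSITY
    return 100.0 * ethanol_mass * OXYGEN_FRACTION_IN_ETHANOL / (ethanol_mass + gasoline_mass)

print(round(oxygen_weight_percent(0.10), 1))  # about 3.7 with these assumed densities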
Pentanes Plus. A mixture of hydrocarbons, mostly pentanes and heavier, extracted from
natural gas. Includes isopentane, natural gasoline, and plant condensate.
Persian Gulf. The countries that comprise the Persian Gulf are: Bahrain, Iran, Iraq,
Kuwait, Qatar, Saudi Arabia, and the United Arab Emirates.
Petrochemical Feedstocks. Chemical feedstocks derived from petroleum principally for
the manufacture of chemicals, synthetic rubber, and a variety of plastics. The categories
reported are Naphtha Less Than 401 F and Other Oils Equal To or Greater Than 401
F.
Naphtha Less Than 401 F. A naphtha with a boiling range of less than 401 F that is
intended for use as a petrochemical feedstock.
Other Oils Equal To or Greater Than 401 F. Oils with a boiling range equal to or
greater than 401 F that are intended for use as a petrochemical feedstock.
Petroleum Administration for Defense (PAD) Districts.
Geographic aggregations of the 50 States and the District of Columbia into five districts by the Petroleum Administration for Defense in 1950. These districts were originally defined during World War II for purposes of administering oil allocation.
Petroleum Coke. A residue, the final product of the condensation process in cracking.
This product is reported as marketable coke or catalyst coke. The conversion factor is 5
barrels per short ton.
Marketable Coke. Those grades of coke produced in delayed or fluid cokers which may
be recovered as relatively pure carbon. This green coke may be sold as is or further
purified by calcining.
Catalyst Coke. In many catalytic operations (e.g., catalytic cracking) carbon is deposited on the catalyst, thus deactivating the catalyst. The catalyst is reactivated by burning off
the carbon, which is used as a fuel in the refining process. This carbon or coke is not
recoverable in a concentrated form.
Petroleum Products. Petroleum products are obtained from the processing of crude oil
(including lease condensate), natural gas, and other hydrocarbon compounds. Petroleum
products include unfinished oils, liquefied petroleum gases, pentanes plus, aviation gasoline, motor gasoline, naphtha-type jet fuel, kerosene-type jet fuel, kerosene, distillate fuel oil, residual fuel oil, petrochemical feedstocks, special naphthas, lubricants, waxes,
petroleum coke, asphalt, road oil, still gas, and miscellaneous products.
Pipeline (Petroleum). Crude oil and product pipelines used to transport crude oil and
petroleum products, respectively (including interstate, intrastate, and intracompany
pipelines).
Plant Condensate. One of the natural gas liquids, mostly pentanes and heavier
hydrocarbons, recovered and separated as liquids at gas inlet separators or scrubbers in
processing plants.
Pool. In general, a reservoir. In certain situations a pool may consist of more than one
reservoir.
Processing Gain. The volumetric amount by which total output is greater than input for a
given period of time. This difference is due to the processing of crude oil into products
which, in total, have a lower specific gravity than the crude oil processed.
Processing Loss. The volumetric amount by which total refinery output is less than input
for a given period of time.This difference is due to the processing of crude oil into
products which, in total, have a higher specific gravity than the crude oil processed.
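A minimal sketch of the two definitions above, using hypothetical monthly volumes in thousand barrels:

# Hypothetical volumes in thousand barrels for one month.
total_refinery_input = 15_200
total_refinery_output = 16_150

difference = total_refinery_output - total_refinery_input
# A positive difference is a processing gain (the products are, on average, lighter than
# the crude oil processed); a negative difference is a processing loss.
print(difference)  # 950, i.e., a processing gain in this example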
Product Supplied, Crude Oil. Crude oil burned on leases and by pipelines as fuel.
Production Capacity. The maximum amount of product that can be produced from
processing facilities.
Products Supplied. Approximately represents consumption of petroleum products
because it measures the disappearance of these products from primary sources,
i.e., refineries, natural gas processing plants, blending plants, pipelines, and bulk
terminals. In general, product supplied of each product in any given period is computed
as follows: field production, plus refinery production, plus imports, plus unaccounted for
crude oil, (plus net receipts when calculated on a PAD District basis), minus stock
change, minus crude oil losses, minus refinery inputs, minus exports.
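The computation described above can be restated directly as a short Python sketch; the function simply follows the published formula, and the volumes passed to it are hypothetical.

# Sketch of the product supplied computation (hypothetical volumes, thousand barrels).
def product_supplied(field_production, refinery_production, imports, unaccounted_for_crude,
                     net_receipts, stock_change, crude_oil_losses, refinery_inputs, exports):
    return (field_production + refinery_production + imports + unaccounted_for_crude
            + net_receipts                 # include only when working on a PAD District basis
            - stock_change - crude_oil_losses - refinery_inputs - exports)

print(product_supplied(field_production=1_800, refinery_production=15_900, imports=3_200,
                       unaccounted_for_crude=60, net_receipts=0, stock_change=-250,
                       crude_oil_losses=15, refinery_inputs=400, exports=950))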
Propane (C3H8). A normally gaseous straight-chain hydrocarbon. It is a colorless
paraffinic gas that boils at a temperature of -43.67 F. It is extracted from natural gas
or refinery gas streams. It includes all products designated in ASTM Specification D1835
and Gas Processors Association Specifications for commercial propane and HD-5
propane.

Propylene (C3H6). An olefinic hydrocarbon recovered from refinery processes or petrochemical processes.
Proved Reserves of Crude Oil. Proved reserves of crude oil as of December 31 of a given
year are the estimated quantities of all liquids defined as crude oil, which geological and
engineering data demonstrate with reasonable certainty to be recoverable in future years
from known reservoirs under existing economic and operating conditions.
Reservoirs are considered proved if economic producibility is supported by actual
production or conclusive formation tests (drill stem or wire line), or if economic
producibility is supported by core analyses and/or electric or other log interpretations.
The area of an oil reservoir considered proved includes (1) that portion delineated by
drilling and defined by gas-oil and/or oil-water contacts, if any; and (2) the immediately
adjoining portions not yet drilled, but which can be reasonably judged as economically
productive on the basis of available geological and engineering data. In the absence of
information on fluid contacts, the lowest known structural occurrence of hydrocarbons is
considered to be the lower proved limit of the reservoir. Volumes of crude oil placed in underground storage are not to be considered proved reserves. Reserves of crude oil
which can be produced economically through application of improved recovery
techniques (such as fluid injection) are included in the proved classification when
successful testing by a pilot project, or the operation of an installed program in the
reservoir, provides support for the engineering analysis on which the project or program
was based. Estimates of proved crude oil reserves do not include the following: (1) oil
that may become available from known reservoirs but is reported separately as indicated
additional reserves; (2) natural gas liquids (including lease condensate); (3) oil, the
recovery of which is subject to reasonable doubt because of uncertainty as to
geology, reservoir characteristics, or economic factors; (4) oil that may occur in undrilled prospects; and (5) oil that may be recovered from oil shales, coal, gilsonite, and other such sources. It is not necessary that production, gathering, or transportation
facilities be installed or operative for a reservoir to be considered proved.
Proved Reserves of Natural Gas Liquids. Proved reserves of natural gas liquids as of
December 31 of a given year are those volumes of natural gas liquids (including lease
condensate) demonstrated with reasonable certainty to be separable in the future from
proved natural gas reserves, under existing economic and operating conditions.
RBOB. Reformulated Gasoline Blendstock for Oxygenate Blending is a motor gasoline
blending component which, when blended with a specified type and percentage of
oxygenate, meets the definition of reformulated gasoline.
Refiner Acquisition Cost. The cost of crude oil to the refiner, including transportation
and fees. The composite cost is the weighted average of domestic and imported
crude oil costs.
Refinery. An installation that manufactures finished petroleum products from crude oil,
unfinished oils, natural gas liquids, other hydrocarbons, and oxygenates.
Refinery Capacity Utilization. (See Operable Utilization Rate and Operating
Utilization Rate)
Refinery Input, Crude Oil. Total crude oil (domestic plus foreign) input to crude oil
distillation units and other refinery processing units (cokers, etc.).
Refinery Input, Total. The raw materials and intermediate materials processed at
refineries to produce finished petroleum products. They include crude oil, products of natural gas processing plants, unfinished oils, other hydrocarbons and oxygenates, motor gasoline and aviation gasoline blending components, and finished petroleum products.
Refinery Production. Petroleum products produced at a refinery or blending plant.
Published production of these products equals refinery production minus refinery input.
Negative production will occur when the amount of a product produced during the month
is less than the amount of that same product that is reprocessed (input) or reclassified to
become another product during the same month. Refinery production of unfinished oils,
and motor and aviation gasoline blending components appear on a
net basis under refinery input.
Refinery Yield. Refinery yield (expressed as a percentage) represents the percent of
finished product produced from input of crude oil and net input of unfinished oils. It is
calculated by dividing the individual net production of finished products by the sum of crude oil input and net input of unfinished oils. Before calculating the yield for finished motor
gasoline, the input of natural gas liquids, other hydrocarbons and oxygenates, and net
input of motor gasoline blending components must be subtracted from the net production
of finished motor gasoline. Before calculating the yield for finished aviation gasoline,
input of aviation gasoline blending components must be subtracted from the net
production of finished aviation gasoline.
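The yield calculation for finished motor gasoline described above might be sketched as follows; all volumes are hypothetical and given in thousand barrels.

# Hypothetical monthly volumes in thousand barrels.
crude_oil_input = 5_500
net_unfinished_oils_input = 250
net_finished_motor_gasoline = 2_900
ngl_other_hc_and_oxygenate_input = 180
net_motor_gasoline_blending_component_input = 120

adjusted_production = (net_finished_motor_gasoline
                       - ngl_other_hc_and_oxygenate_input
                       - net_motor_gasoline_blending_component_input)
motor_gasoline_yield = 100.0 * adjusted_production / (crude_oil_input + net_unfinished_oils_input)
print(round(motor_gasoline_yield, 1))  # about 45.2 percent with these assumed volumes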
Reformulated Gasoline. See Motor Gasoline
(Finished).
Reseller. A firm (other than a refiner) that carries on the trade or business of purchasing
refined petroleum products and reselling them to purchasers other than ultimate
consumers; e.g., retailers at the gasoline service stations.
Reseller/Retailer. A firm (other than a refiner) that carries on the trade or business
activities of both a reseller and a retailer; i.e., purchasing refined petroleum products and
reselling them to purchasers who may be either ultimate or other than ultimate
consumers.
Reserve Additions. Consist of adjustments, net revisions, extensions to old reservoirs,
new reservoir discoveries in old fields, and new field discoveries.
Reserves. (See Proved Reserves)
Reserves Changes. Positive and negative revisions,
extensions, new reservoir discoveries in old fields, and new field discoveries, which
occurred during a given year.
Reserves Extensions. The reserves credited to a reservoir because of enlargement of its
proved area. Normally the ultimate size of newly discovered fields, or newly discovered
reservoirs in old fields, is determined by wells drilled in years subsequent to discovery.
When such wells add to the proved area of a previously discovered reservoir, the increase
in proved reserves is classified as an extension.
Reserves Revisions. Changes to prior year-end proved reserves estimates, either positive
or negative, resulting from new information other than an increase in proved acreage
(extension). Revisions include increases of proved reserves associated with the
installation of improved recovery techniques or equipment. They also include correction
of prior report year arithmetical or clerical errors and adjustments to prior year-end
production volumes to the extent that these alter reported prior year reserves estimates.

Reservoir. A porous and permeable underground formation containing an individual and separate natural accumulation of producible hydrocarbons (oil and/or gas) which is confined by impermeable rock or water barriers and is characterized by a single natural pressure system.
Residual Fuel Oil. The heavier oils that remain after the distillate fuel oils and lighter
hydrocarbons are distilled away in refinery operations and that conform to ASTM
Specification D396. Included are No. 5, a residual fuel oil of medium viscosity; Navy
Special, for use in steam-powered vessels in government service and in shore power
plants; No. 6, which includes Bunker C fuel oil, and is used for commercial and industrial
heating, electricity generation and to power ships.
Residuum. Residue from crude oil after distilling off all but the heaviest components,
with a boiling range greater than 1000 F.
Retailer. A firm (other than a refiner, reseller, or reseller/retailer) that carries on the trade
or business of purchasing refined petroleum products and reselling them to ultimate
consumers.
Road Oil. Any heavy petroleum oil, including residual asphaltic oil used as a dust
palliative and surface treatment on roads and highways. It is generally produced in six
grades from 0, the most liquid, to 5, the most viscous.
Rotary Rig. A machine, used for drilling wells, that employs a rotating tube attached to a
bit for boring holes through rock.
Sales for Resale. Sales of refined petroleum products to purchasers who are other-than-ultimate consumers; wholesale sales.
Sales to End Users. Sales made directly to the consumer of the product. Includes bulk
consumers, such as agriculture, industry, and utilities, as well as residential and
commercial consumers.
Shell Storage Capacity. The design capacity of a petroleum storage tank which is always
greater than or equal to working storage capacity.
Special Naphthas. All finished products within the naphtha boiling range that are used as
paint thinners, cleaners, or solvents. These products are refined to a specified flash point.
Special naphthas include all commercial hexane and cleaning solvents conforming to
ASTM Specification D1836 and D484, respectively. Naphthas to be blended or marketed
as motor gasoline or aviation gasoline, or that are to be used as petrochemical
and synthetic natural gas (SNG) feedstocks are excluded.
Spot Price. A transaction price concluded on the spot, that is, on a one-time, prompt
basis to sell or buy one shipment of a commodity, such as crude oil.
Steam (Purchased). Steam, purchased for use by a refinery, that was not generated from
within the refinery complex.
Still Gas (Refinery Gas). Any form or mixture of gases produced in refineries by
distillation, cracking, reforming, and other processes. The principal constituents are methane, ethane, ethylene, normal butane, butylene, propane, propylene, etc. Still gas is
used as a refinery fuel and a petrochemical feedstock. The conversion factor is 6
million BTUs per fuel oil equivalent barrel.
Stock Change. The difference between stocks at the beginning of the month and stocks at
the end of the month. A negative number indicates a decrease in stocks and a positive
number indicates an increase in stocks.

Stocks, Crude Oil. Crude oil and lease condensate held at refineries, in pipelines, at
pipeline terminals, and on leases.
Stocks, Primary. Stocks of crude oil or petroleum products held in storage at (or in)
leases, refineries, natural gas processing plants, pipelines, tank farms, and bulk
terminals. Crude oil that is in transit from Alaska, or that is stored on Federal leases or in
the Strategic Petroleum Reserve, is included. Excluded are stocks of foreign origin that
are held in bonded warehouse storage.
Strategic Petroleum Reserve (SPR). Petroleum stocks maintained by the Federal
Government for use during periods of major supply interruption.
Sulfur. A yellowish nonmetallic element, sometimes known as brimstone.
Supply. The components of petroleum supply are field production, refinery production,
imports, and net receipts when calculated on a PAD District basis.
TAME (Tertiary amyl methyl ether)
(CH3)2(C2H5)COCH3. An oxygenate blend stock formed by the catalytic etherification
of isoamylene with methanol.
Tank Farm. An installation used by gathering and trunk pipeline companies, crude oil
producers, and terminal operators (except refineries) to store crude oil.
Tanker and Barge. Vessels that transport crude oil or petroleum products. Data are
reported for movements between PAD Districts; from a PAD District to the Panama
Canal; or from the Panama Canal to a PAD District.
TBA (Tertiary butyl alcohol) (CH3)3COH. An alcohol primarily used as a chemical
feedstock, a solvent or feedstock for isobutylene production for MTBE; produced
as a co-product of propylene oxide production or by direct hydration of isobutylene.
Thermal Cracking. A refining process in which heat and pressure are used to break
down, rearrange, or combine hydrocarbon molecules. Thermal cracking includes gas
oil, visbreaking, fluid coking, delayed coking, and other thermal cracking processes (e.g.,
flexicoking). See individual categories for definition.
Toluene (C6H5CH3). Colorless liquid of the aromatic group of petroleum hydrocarbons,
made by the catalytic reforming of petroleum naphthas containing methyl cyclohexane. A
high-octane gasoline-blending agent, solvent, chemical intermediate, and a base for TNT.
Ultimate Recovery Appreciation. The commonly observed increase over time of the sum of cumulative production on a specific day and the estimate of proved reserves on that same day. Also known as "reserves growth."
Unaccounted for Crude Oil. Represents the arithmetic difference between the calculated
supply and the calculated disposition of crude oil. The calculated supply is the sum of
crude oil production plus imports minus changes in crude oil stocks. The calculated
disposition of crude oil is the sum of crude oil input to refineries, crude oil exports, crude
oil burned as fuel, and crude oil losses.
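Restated as a short sketch with hypothetical volumes (thousand barrels per day):

# Hypothetical volumes, thousand barrels per day.
calculated_supply = (5_800      # crude oil production
                     + 9_100    # crude oil imports
                     - 150)     # change in crude oil stocks (a stock build reduces supply)
calculated_disposition = (14_600   # crude oil input to refineries
                          + 110    # crude oil exports
                          + 5      # crude oil burned as fuel
                          + 10)    # crude oil losses
unaccounted_for_crude_oil = calculated_supply - calculated_disposition
print(unaccounted_for_crude_oil)  # 25 in this example; the result can be positive or negative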
Undiscovered Recoverable Resources (crude oil and natural gas). Those economic
resources of crude oil and natural gas, yet undiscovered, that are estimated to exist in
favorable geologic settings.
Unfinished Oils. Includes all oils requiring further processing, except those requiring
only mechanical blending. Includes naphthas and lighter oils, kerosene and light gas oils,
heavy gas oils, and residuum. See individual categories for definition.

Unfractionated Streams. Mixtures of unsegregated natural gas liquid components excluding those in plant condensate. This product is extracted from natural gas.
United States. The United States is defined as the 50 States and the District of Columbia.
Vacuum Distillation. Distillation under reduced pressure (less than atmospheric), which
lowers the boiling temperature of the liquid being distilled. This technique with its
relatively low temperatures prevents cracking or decomposition of the charge stock.
Visbreaking. A thermal cracking process in which heavy atmospheric or vacuum-still
bottoms are cracked at moderate temperatures to increase production of distillate
products and reduce viscosity of the distillation residues.
Wax. A solid or semi-solid material consisting of a mixture of hydrocarbons obtained or
derived from petroleum fractions, or through a Fischer-Tropsch type process, in which
the straight chained paraffin series predominates. This includes all marketable wax,
whether crude or refined, with a congealing point (ASTM D 938) between 100 and 200 F and a maximum oil content (ASTM D 3235) of 50 weight percent. The conversion factor is 280 pounds per 42 U.S.-gallon barrel.
Well. A hole drilled for the purpose of finding or producing crude oil or natural gas or
providing services related to the production of crude oil or natural gas. Wells
are classified as oil wells, gas wells, dry holes, stratigraphic or core tests, or service wells.
Working Storage Capacity. The difference in volume between the maximum safe fill
capacity and the quantity below which pump suction is ineffective (bottoms).
Xylene (C6H4(CH3)2). Colorless liquid of the aromatic group of hydrocarbons made by the catalytic reforming of certain naphthenic petroleum fractions. Used as high-octane motor and aviation gasoline blending agents, solvents, and chemical intermediates. Isomers are metaxylene, orthoxylene, and paraxylene.

Drilling Technology
The Key to Successful Exploration and Production
Using steam-powered cable-tool rigs that pounded the Earth with fishtail bits until it gave up its
oil, turn-of-the-century drillers managed to find oil with surprising frequency. The drilling
technology wasn't much by today's standards, but it worked on the shallow prospects of
Pennsylvania and Ohio during the late 1800s and early 1900s.
However, greater drilling challenges occurred when oilmen expanded their efforts to Texas,
where the oil lay encased in deeper pay zones. Here, drilling targets could not be reached using
standard cable-tool methods. Instead, rotary rigs were brought in to drill the deeper plays. These
rigs represented the latest technology, and they quickly became the preferred method for drilling.
The Rotary Rig
Between 1915 and 1928, rotary rigs slowly began to replace existing cable-tool rigs. The
invention of the rotary-drilling rig made cable-tool rigs obsolete, for all intents and purposes. The
rigs, named for the rotary table through which drillpipe is inserted and rotated, could make deeper
holes because they used a bit that drilled rather than pulverized the rock formation.

Also, rotary rigs eliminated the laborious, time-consuming bailing process used by cable-tool rigs
to remove rock cuttings from the hole. Instead, drilling fluid was circulated down the drillpipe,
through the bit, and up to the surface. As a result, rock cuttings created by the bit were lifted by
the fluid and carried to the surface for disposal.
The new rigs created a need for experienced drillers who knew how to use them. Experienced
cable-tool drillers had to learn quickly how to operate the new rotary rigs so they could make the
transition to them. Some did, and some didn't. Those who made the transition stayed employed; those who didn't became unemployed. One of those drillers who successfully made the transition
was John Goddard, a cable-tool driller who eventually became one of the original stockholders of
Humble Oil and Refining Co.
Because of his reputation for drilling successes in the oil fields of Ohio, Goddard was brought to
Texas to drill with the rotary rig. When he got to Texas, Goddard was careful not to mention to
anyone that he had never even seen one of the new rotary rigs. Instead, he quickly and quietly
learned how to run the new equipment and, over the years, contributed greatly to Humble Oil's growth into an oil giant. From 1928 to 1934, some of the largest oil fields of all time were
discovered, and drilling was highly competitive. There were insistent demands for equipment that
could drill and complete wells in minimum time periods. During this period, heavy and more
powerful rigs were developed.
After 1934, rates of penetration with rotary rigs increased more rapidly than in any period, before
or since. And the use of steam-powered engines gave way to the internal-combustion engine as
the most important prime mover. During World War II, further development and refinement of
rotary rigs was put on hold since most of the nation's resources were diverted to the manufacture of war implements. However, with the war's end in the mid-1940s, an increase in
demand for petroleum products led to a rise in oil- and gas-well drilling. Practically all of the
drilling equipment that was available was either obsolete or worn out, which led to the
development of new and better drilling equipment, especially rigs. Today's rotary-drilling rigs are powered by diesel and diesel-electric prime movers.1
Rolling Bits
Another milestone was the development of bits for use on rotary-drilling rigs. In 1908, Howard
Hughes Sr., a wildcatter and speculator in Texas oil leases, turned his ingenuity and hobby of
tinkering with mechanical devices into good fortune when he invented the rolling bit (later called
the roller-cone or rock bit).
In the period following the turn of the century, existing drilling technology was unable to penetrate
the thick rock of southwest Texas. Until 1910 or 1911, the only drilling bits available for rotary rigs
were the fishtail, the diamond point (mainly for sidetracking), and the circular-toothed bit, all of
which limited the rigs to soft formations. Oilmen could extract only the oil that lay just beneath the
surface. Frustrated, they were forced to ignore the vast resources they knew were locked in the
deeper formations.
Fortunately for them, Hughes had an idea for a bit that used 166 cutting edges arrayed on the
surface of each of two metal cones mounted opposite each other to tear away the hard-rock
formations. He also solved the problem of how to cool and lubricate the bit in the high
temperature produced by the friction of the metal and rock contact.
Hughes, along with his partner Walter B. Sharp, formed the Sharp-Hughes Tool Co. and
produced a model of his new bit. Rather than sell his bits to oil drillers, Hughes and Sharp opted
to lease the bits on a job basis, charging U.S. $30,000 per well. With no competitors to duplicate
their drilling technology, they soon garnered the lion's share of the market. Flush with their success, the partners built a factory on 70 acres east of downtown Houston, where they turned out the roller-cone bits that quickly revolutionized the drilling process.2
About the same time Hughes developed his bit, Granville A. Humason of Shreveport, Louisiana,
patented the first cross-roller rock bit, the forerunner of the Reed cross-roller bit.3 That bit, built in
1913, used two rolling cutters that were placed in the bit face in a + shape. This bit, screwed
onto the end of the rotating drill pipe, cut the rock formation as it turned, enabling the rig to
penetrate the formation without destroying the bit's cutting surfaces.
The rotary rig, rolling bit, and cross-roller rock bit were pioneering inventions that paved the way
for the development of a great many other devices that improved drilling processes and
techniques.
Drill Collars
One of the earliest problems drillers encountered in rotary drilling was that of keeping their
boreholes straight. The deeper drillers went, the more the boreholes deviated from vertical. It was
common practice at that time to use only large drill pipe and all available weight (weight indicators
were not yet available). Often, deviation didn't matter because the targeted formation was
eventually reached and the well declared a success. In fact, most drillers were never aware of
their deviation from vertical.
For the most part, the entire petroleum industry was unaware of the problem of hole deviation
until the Seminole, Oklahoma, boom of the mid-1920s. Town lot spacing was the primary factor
contributing to the experiences of the industry. There are actual recorded incidents of two rigs
drilling the same hole, offset wells drilling into each other, drillers wandering into producing wells,
and wells in the geometric center of the structure coming in low or missing the field completely.
These experiences led to the development of the drill collar for weight and rigidity and the use of
stabilizers at various points in the string to control deviation and to provide rigidity. This helped
control unintentional deviation, but a total understanding of the forces associated with borehole
deviation didn't occur until Arthur Lubinski performed his study of the problem in the 1950s. His
successful studies led to the development of directional drilling, a method used extensively by the
industry to cost-effectively drill and complete multiple wells from a single location.4
Well Control
With continued increases in drilling depth came increasingly higher formation pressures that had
to be controlled during the drilling process. If released by the penetration of the formation by the
drill bit, these enormous pressures could spit drillpipe out of the hole, unleashing raging inferno-like fires that would instantly destroy the drilling rig. Such a catastrophic event could delay drilling
operations for days or even months, until the fires could be extinguished. This made the
development of some sort of blowout-prevention device a priority among those engaged in oilwell
drilling.
In the 1920s, driller James Abercrombie sought out Harry Cameron, a machine-shop operator, to
design and build a device that would prevent catastrophic well blowouts during drilling operations.
Following a period of experimentation, Abercrombie and Cameron designed and manufactured
the first successful blowout preventer (BOP). The preventer was capable of containing formation
pressures of 2,000 to 3,000 psi in 8-in. boreholes. It did not take long for the revolutionary device
to dominate the industry.
The need for control greatly expanded when drilling began on the U.S. gulf coast and in the Gulf
of Mexico. Containing normally pressured formations is relatively simple when compared with
containing and controlling highly pressurized geopressured zones. Today, offshore BOP stacks
that can hold pressures of 15,000 psi in 18-3/4-in. boreholes are available, if needed.

Louis Records also made great contributions in well-control equipment. His company, Drilling
Well Control, offered well-control expertise and a monitoring service. Records was one of the first
to truly understand the mechanics required to circulate oil and gas from a well without allowing
additional gas or oil to enter the wellbore. C.C. Brown, another industry drilling pioneer, invented
the wellhead packoff, a device that allowed a well to be completed without letting it flow freely
during the operation.
Drilling Fluids
In the earliest years, the drilling fluid used by the cable-tool rigs was probably water. It was used
to soften the earth and make it more pliable for drill-bit penetration. With the advent of rotary rigs
and roller-cone bits, more elaborate drilling fluids, called "muds," were introduced into the
borehole to cool and lubricate the bit, circulate the rock cuttings from the bottom of the hole to the
surface, and hydrostatically balance the drilling-fluid column.
These drilling muds were originally natural and were formed from the formation drilled or from
native material. Later, they were weighted up using barite or similar products to counter the
higher pressures that were experienced as formation depths increased. This helped prevent the
dreaded blowouts. Initially, mud materials were low-cost waste products from other industries, but
as drilling goals required deeper, hotter holes and higher fluid densities, the industry
began developing specialty chemical products designed for specific purposes.
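The balancing role of a weighted mud can be illustrated with the standard oilfield rule of thumb that hydrostatic pressure in psi equals 0.052 times mud density in pounds per gallon times vertical depth in feet. The figures in the sketch below are hypothetical.

# Hydrostatic pressure of a mud column (oilfield units); values are hypothetical.
def hydrostatic_pressure_psi(mud_weight_ppg, true_vertical_depth_ft):
    return 0.052 * mud_weight_ppg * true_vertical_depth_ft

formation_pressure_psi = 5_200
depth_ft = 10_000

# Minimum mud weight needed to balance the formation pressure at this depth.
balance_mud_weight_ppg = formation_pressure_psi / (0.052 * depth_ft)
print(round(balance_mud_weight_ppg, 1))             # 10.0 ppg
print(hydrostatic_pressure_psi(10.5, depth_ft))     # 5460.0 psi with a slightly heavier mud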
Over the years, drilling-fluid challenges and the resulting solutions have moved the drilling-fluids
industry from the use of fluids costing very little to complex oil- and water-based materials that
typically cost U.S. $300 to $400/bbl.
Well Cementing
With drilling operations routinely penetrating multiple rock and sediment layers that contained
water, oil and gas, it became necessary to install steel casing to isolate these layers from one
another and from the wellbore. The casing was installed in a variety of sizes. Large-diameter pipe
was installed near the surface, and as the depth increased, the pipe diameter became
progressively smaller.
Originally, the formation was mudded off, and extra-thick drilling mud was left behind each string
of casing in an effort to minimize fluid communication. Mechanical devices also were used on
occasion. However, this was ineffective and soon led to the use of cement. Casing had been
cemented in cable-tool-drilled wells prior to 1900.
Early cementing jobs were very rudimentary. Cement was mixed on location by hand and
installed with a dump bailer. After depositing the cement, the casing, which had been held several
feet off bottom, was lowered into the cement so it would remain behind the pipe after hardening to
shut off the water from above. Later, tubing was used to convey the cement with pumps. But, in
1921, Erle P. Halliburton put an end to these laborious cementing methods and, in the process,
revolutionized well cementing.
Halliburton started the Halliburton Oil Well Cementing Company in 1919 in his one-room wood-frame home in Wilson, Oklahoma. Two years later, Halliburton perfected his Cement Jet Mixer, an on-the-fly mixing machine that eliminated the hand mixing of cement at the wellsite. This invention set the stage for Halliburton's domination of the business of well-casing cementing. Then, in 1924, Halliburton convinced seven of his customers, all large oil companies, to invest in Howco. The company went public, and Halliburton became the company's first President and Chief Executive Officer.

"We intend to build up and maintain a complete organization," he stated when asked about his intentions in those first days after going public. "We will cover all phases of oil well cementing service. We will maintain an aggressive and sustained program of research. We shall give uniform quality and service. We'll get there somehow, regardless of location," he continued.5 This
focus on service and technology research has kept the company in the forefront of the oil and gas
services industry, where it remains a leader today.
Formation-Evaluation Logging
The earliest drillers were severely limited in their opportunities to evaluate potential reservoir
rocks without testing to see what they produced. Wells were drilled with no means of measuring the inclination angle of the borehole, which, as the industry later learned, was often quite high. Developments in this area first produced crude inclination indicators, such as a glass tube containing an etching fluid lowered into the hole, followed by mechanical instruments that measured the inclination angle. These eventually led to tools that could determine the azimuth of the hole and, finally, the ability to calculate the position of the bottom of the hole with fair accuracy.
Formation-evaluation efforts began with the art of mud logging, in which samples of the cuttings
were analyzed to determine the formations that were being drilled. Drillers also began recording
the penetration rate vs. depth to define more precisely where formation changes occurred.
In 1911, electrical well logging originated with two brothers, Conrad and Marcel Schlumberger.
The science of geophysics was new, and the use of magnetic or gravimetric methods of exploring
the internal structure of the Earth was just beginning. Their electric logs extended the electrical
prospecting technique from the surface into the oil well.
The technique was actually invented by Conrad Schlumberger. As a physics teacher at Ecole des
Mines, Conrad had an interest in the Earth sciences, particularly those involving prospecting for
metal-ore deposits. He believed that among the physical properties of metal ores, their electrical
conductivity could be used to distinguish them from their less-conductive surroundings.
In 1912, he used very basic equipment to record his first map of equipotential curves in a field
near Caen, France. The plot of curves derived from his surveys not only confirmed the method's
ability to detect metal ores but also revealed features of the subsurface structure. This information
led to an ability to locate subsurface structures that could form traps for minerals.
To better understand the measurements made at the surface, Conrad and Marcel knew they had to incorporate resistivity information from deeper formations. In 1927, in a 1,640-ft (500-m) well in France's Pechelbronn field, Conrad's son-in-law, Henri Doll, an experimental physicist, successfully produced the world's first electric log using successive resistivity readings to create
a resistivity curve.
Four years later, in 1931, the discovery of a spontaneous potential produced naturally between
the borehole mud and formation water in permeable beds introduced a new basic measurement.
When recorded simultaneously with the resistivity curve, permeable oil-bearing beds could be
differentiated from impermeable nonproducing beds. Thus, electrical well logging was born.6
Blowout Control
Most drillers believed that, if you drilled hard enough and long enough, a blowout was inevitable.
And, for the most part, they were right. Blowouts did occur. Equipment failures, improper
technique, and bad luck created blowouts that needed to be handled quickly, safely, and properly.
This meant a call to the wild-well-control experts. A special breed of firefighter, the well-control
experts took on the raging inferno and brought it under control using a series of steps that
deprived the fire of oxygen, allowing it to be safely capped.

The first heroes of this parade were Myron Kinley and Red Adair. Whenever anyone had a
blowout, the solution was to call Kinley or Adair to control, then cap, the blowout. While their
skills, capabilities, and performance allowed them to handle most any blowout in a workmanlike
manner, the nature of their work and the public's perception of it made them legendary. This, in
turn, allowed them to become very effective public relations men for their clients.
Their experiences also led to the development of a number of active and passive firefighting
devices for offshore rigs. Since firefighting equipment was not readily available at offshore drilling
sites, technology was developed that could be permanently mounted on the offshore drilling rig.
In the event of a blowout, it helped cool the tremendous heat generated by the blowout, allowing
workers to safely evacuate the rig. It also enabled the firefighting team to place cooling water
where they needed it when they arrived on the scene to cap the well.
Wells Get Deeper
The development of technology for controlling blowouts enabled the drilling of deeper wells, but it
was not until 1938 that the drill passed 15,000 ft, a record that stood until 1947. The 20,000-ft
barrier was penetrated two years later in 1949, a record that held until Phillips Petroleum drilled
the University EE-1 well to 25,340 ft in 1958 in west Texas.7
The 1960s and 1970s saw wells attain ultradeep status. Improved metallurgy and techniques for
handling higher temperatures and pressures and corrosive atmospheres made ultradeep wells
attainable and less formidable than before. The 30,000-ft barrier was broken in 1974 by the 31,441-ft Bertha Rogers No. 1 in Oklahoma's Anadarko basin. But British Petroleum's Wytch Farm M11 well garnered the depth record at 34,967 ft when it was drilled and completed in 1998.
Offshore Drilling
Some of the most significant technical achievements in the evolution of drilling technology have
occurred in the offshore drilling arena.
When Kerr-McGee Corp. drilled the first offshore well out of sight of land in 1947 to officially begin today's offshore industry, it wasn't the first well drilled offshore. According to oil historian J.E. Brantly in his book History of Oil Well Drilling, operators actually took their first steps toward submarine drilling as early as 1897, when a well was drilled from a wharf in California's Summerland field.
Eight years later, in 1905, Unocal Corp. drilled an offshore well near Houston, and others followed
by drilling in swamps and transition zones during the next 20 years. In the 1940s, operators took
more definitive steps by mounting land rigs on piers jutting several hundred feet into lakes, bays
and coastal waters. However, Kerr-McGee's Kermac 16 well, drilled in 20 ft of water from a platform 43 miles southwest of Morgan City, Louisiana, severed the industry's umbilical cord with
land.8 The early 1950s saw a major expansion beyond the Gulf of Mexico to the California coast
and to the bountiful offshore basins of Brazil and Venezuela's Lake Maracaibo. Another big
advance occurred in the Gulf of Mexico in 1954 with the introduction of the moveable,
submersible offshore drilling barge. The portability of the submersible drilling barge produced a
major increase in the attractiveness of drilling offshore, but brought a new set of challenges to
drillers.
The biggest challenge involved finding a solution to drilling from a floating barge while using
conventional rigs, casing heads, and BOP equipment. Needless to say, the motions of the barge
and rig derrick made it a daunting experience. One of the earliest solutions to rig movement was
the submersible drilling barge, the Mr. Charlie. The rig was the brainchild of A.J. LaBorde, a
marine superintendent for Kerr-McGee Oil Industries in Morgan City, Louisiana. In this capacity
he had a front row seat for observing the problems of offshore oil drillers in varying conditions of
water depth, wind, and wave action. Knowing their problems, he designed a submersible drilling barge and suggested to his employer, Kerr-McGee, that they build the barge. After considering
the proposal, they declined.
Having been rejected by Kerr-McGee, LaBorde promptly resigned. Shortly thereafter, he formed
the Ocean Drilling & Exploration Co., or Odeco, with John Hayward and Charles Murphy Jr. of
Murphy Oil Co. Hayward possessed a patent on submersible-barge methods, and Murphy was
looking for innovative technology that would let his small company compete with bigger
companies drilling offshore. Together they decided to name their new rig the Mr. Charlie in honor
of Murphy's father.
On June 15, 1954, its builder, J. Ray McDermott Co., turned over the completed drilling barge to
Odeco. It wasn't long before it set sail for its first job. Since no one had ever built a submersible drilling barge before, skeptics anxiously crowded the site of its first job to see if it would work. For LaBorde, there was no privacy if a mishap or problem occurred. However, to his relief, and to the surprise of the skeptics, the barge worked perfectly. During the next 32 years, the rig went on to
drill hundreds of wells in the Gulf of Mexico. Mr. Charlie retired from service in 1986 when drilling
activity pushed into deeper waters beyond its capabilities.9
The Future
Without a doubt, drilling technology will continue to progress toward more cost efficiency and
speed. That is what it has always done because operators demand it. Therefore, manufacturers
and service suppliers will continue to hone their technology to provide more efficient equipment
throughout every aspect of the drilling process. According to George Boyadjieff of Varco Intl.,
"much of the technological gain in the future will be in the information area. The Internet will come to the drilling industry. Rigs will be connected just as offices are connected now. Real-time well site data will be provided routinely to offices of drilling engineers, operations managers, geologists, asset managers, reservoir engineers, and the like," says Boyadjieff in an article commemorating the 50th anniversary of the offshore industry. "Also, I believe we'll see underbalanced drilling become as common as horizontal and multilateral drilling. It's not just for reservoirs."10

Reservoir Engineering: Primary Recovery


In 1904, Anthony Lucas, the discoverer of Spindletop, returned to Beaumont, Texas, from a job in
Mexico and was asked by a reporter to comment on Spindletop's rapid decline in production. He answered that the field had been punched too full of holes. "The cow was milked too hard," he said, "and moreover she was not milked intelligently."1
Lucas' comments were lost on early oil operators, who gave little thought to reservoir depletion
and behavior as they drilled well after well in their newly discovered fields. When natural flow
played out, they simply placed their wells on pumps. When the pumps could no longer bring up
economical amounts of oil or when water production became excessive, a reservoir was
considered depleted. In the late 1920s, methods for estimating oil reserves and the quantities that
might be recoverable hadn't been worked out. Of course, many of the pioneer oilmen knew that
the gas represented energy which, if it could be controlled, could be put to work lifting oil to the
surface. But control involved numerous problems, and everyone was more interested in
producing the oil and selling it. Regulation of drilling and production was still nonexistent, so
waste and overproduction were widespread.2 Gas associated with oil was flared or simply
released into the atmosphere.
Several years later, the U.S. federal government referred to the billions of cubic feet of gas that
had been lost and publicly deplored the practice. Remedial measures were proposed that
included cooperative production by field operators and legislation to control producing rates and
to prohibit gas waste.1 Once operators discovered the results of their wasteful ways, they quickly
initiated a series of technical studies of reservoir behavior and the physical properties that
controlled this behavior. Thus, the profession of reservoir engineering was officially born.
The Early Years
According to most authorities, reservoir engineering officially began in the late 1920s. At this time,
engineers engaged in the recovery of petroleum began giving serious consideration to gas-energy relationships. They recognized their need for more precise information about hydrocarbon
activity in reservoirs that they were producing.
Actually, reservoir study can be traced to an earlier beginning when, in 1856, Frenchman H.
Darcy became interested in the flow characteristics of sand filters for water purification. This
interest led him to conduct experiments that, in turn, laid the real foundation of the quantitative theory of the flow of homogeneous fluids through porous media. These classic experiments resulted in Darcy's law.3 Since 1928, the art of forecasting the future performance of
an oil and/or gas reservoir based on probable or presumed conditions has evolved steadily. In the
early 1920s, reservoir engineering was concerned largely with empirical performance, with the
exception of the laboratory work done on fluid and rock properties. Ultimately, this experimental
work provided a foundation for the mathematical equations that were derived later during the
1930s.
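For reference, the relationship Darcy established, written here in its modern textbook form for steady, single-phase, linear flow (a standard statement, not reproduced from this document), is

\[ q = -\frac{kA}{\mu}\,\frac{dp}{dx} \]

where q is the volumetric flow rate, k the permeability of the medium, A the cross-sectional area open to flow, \mu the fluid viscosity, and dp/dx the pressure gradient in the direction of flow.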
From the beginning, engineers recognized that oil-recovery methods based on wellhead or
surface data were generally misleading.4 They knew they must obtain a more thorough
understanding of the functions of the reservoir in order to maximize the recovery of its
hydrocarbons. This fact set in motion the evolution that has resulted in today's engineered
reservoir. Along the evolutionary trail leading to the present, developments in applied
mathematics, numerical analysis, computer hardware and software, geology, geophysics, and
geostatistics became part of reservoir engineering.

Fluid Flow
Hydrocarbons are complex fluids that generally exist in an untapped reservoir in liquid and
gaseous states and are considered to be at equilibrium. Likewise, they are expected to behave in
accordance with predictable functional pressure/volume/temperature (PVT) relationships. If all the
gas is dissolved in the oil, the single phase is considered to be a liquid phase, and the reservoir
is called a dissolved-gas reservoir. On the other hand, if there are hydrocarbons as vaporized
gas that are recoverable as natural gas liquids on the surface, the single phase is considered to
be a gas phase, and the reservoir is called a wet-gas reservoir. In some reservoirs, both liquid
and gaseous phases may exist. These are called gas-cap reservoirs. If an artesian water supply
is directly associated with any of these reservoirs or expanding water is the dominant producing
force, the reservoir is termed a waterdrive reservoir.
Challenges to reservoir engineers begin when the reservoir is opened to production and the flow
of hydrocarbons begins. At this point, reservoir pressures drop; fluids comprising gas, oil, and
water expand; phase equilibria are disturbed; and alterations in the physical properties of the fluid
phases occur in various degrees throughout the entire reservoir. In short, the oil has become "active." With further withdrawal of fluids, changes continue, and difficult second-order partial-differential equations are needed to describe the unsteady-state flow of expansible fluids.
From 1927 to 1930, Jan Versluys, a well-known hydrologist working for Royal Dutch Shell, wrote
numerous widely published articles on the physics of oil-producing formations. In 1931,
Morris Muskat and H.G. Botset wrote several papers on the flow of reservoir fluids. These papers
and articles were instrumental in advancing the knowledge of reservoir dynamics to its present
state.
"Today, most reservoir engineers consider that, of the many great reservoir-engineering pioneers, Muskat probably had the greatest impact," relates Joe Warren, a personal friend of the late Morris Muskat. A native of Riga, Latvia, Muskat attended Marietta College and Ohio State U. and
ultimately received a PhD degree in physics from the California Inst. of Technology in 1929.
Following his graduation from Cal Tech, Muskat joined the Gulf Research and Development Co.
where, at the age of 31, he wrote The Flow of Homogeneous Fluids Through Porous Media, a
seminal publication for reservoir engineering. Twelve years later, in 1949, he wrote a second
book, Physical Principles of Oil Production. Together, these books provided a sound analytical
foundation for reservoir engineering by combining fluid mechanics with phase behavior.
"Muskat also published technical papers in such diverse fields of interest as hydrodynamics, lubrication theory, and the mechanics of shaped charges," Warren recalls. "As a matter of fact, he received an original patent for his work on the use of shaped charges in oilwell perforating applications."
A paper written in 1933 by T.V. Moore, Ralph J. Schilthuis, and William Hurst advanced reservoir
science further. The paper presented the first equation for unsteady-state radial flow of
expansible reservoir fluids. It reported the development of a linear second-order equation similar
to the classic heat-flow equation that adequately described the flow of a single-phase
compressible (or expansible) liquid in a reservoir. A year later, in 1934, Schilthuis and Hurst
published the application of the equation to the calculation of reservoir-pressure changes in an
east Texas field and to the prediction of the effect thereon of changes in production rates.5
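In modern notation, the type of equation described here is usually presented as the radial diffusivity equation; the form below is the standard textbook statement rather than the exact 1933 formulation:

\[ \frac{\partial^2 p}{\partial r^2} + \frac{1}{r}\,\frac{\partial p}{\partial r} = \frac{\phi \mu c_t}{k}\,\frac{\partial p}{\partial t} \]

where p is pressure, r radial distance from the well, t time, \phi porosity, \mu fluid viscosity, c_t total compressibility, and k permeability.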

Phase Relationships
In considering the drive mechanisms influencing a reservoir, a reservoir engineer must determine
the fluid phases that exist, their compositions, and the changes that normally would take place
during natural flow under the drive in order to predict the behavior of the reservoir.

Among the first to realize the importance of fundamental studies of phase relationships were B.H.
Sage and W.N. Lacey. In the 1930s, they published a series of papers reporting the results of
their continuing research in the field of phase behavior. Among their significant contributions was
the recognition and characterization of condensate reservoirs.6
Sampling and Measurement Devices
Early reservoir engineers recognized that both temperature and pressure influence the behavior
of reservoir fluids. Since the measurement of reservoir pressure and temperature was basic to
enabling reservoir-performance calculations, the development of a method or device that would
measure them became a priority. The development of continuously recording instruments such as
the pressure gauges invented by P. Comins and Geophysical Research Corp. and subsurface
temperature-measuring devices developed by C.E. Van Orstrand contributed greatly to this new
science.
Likewise, early pioneers realized that, in order to calculate volumes of oil and gas in place, they
would need to know the change in the physical properties of bottomhole samples of the reservoir
fluids with pressure. Accordingly, in 1935, Schilthuis described a sampler and a method of
measuring the physical properties of bottomhole samples.
Measurements included PVT relationships, saturation or bubble-point pressure, total quantity of
gas dissolved in the oil, quantities of gas liberated under various conditions of temperature and
pressure, and the shrinkage of the oil resulting from the release of its dissolved gas from solution.
These data made the development of certain useful equations feasible and provided an essential
correction to the volumetric equation for calculating oil in place.7
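For reference, the volumetric equation in question is commonly written in oilfield units as (a standard form, with the shrinkage correction entering through the formation volume factor)

\[
N = \frac{7{,}758\,A\,h\,\phi\,(1 - S_{wi})}{B_{oi}},
\]

where N is the oil initially in place in stock-tank barrels, A is the reservoir area in acres, h is the net pay thickness in feet, φ is porosity, S_{wi} is the initial water saturation, B_{oi} is the initial oil formation volume factor in reservoir barrels per stock-tank barrel, and 7,758 is the number of barrels in one acre-foot.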
Material-Balance Equations
In 1935, D.L. Katz of the U. of Michigan proposed a tabular method of obtaining a material
balance for a closed reservoir. Basically, a material-balance equation is a statement that
accounts for the volumes and quantities of fluids that are initially present in, produced from,
injected into, and that remain in a reservoir at any state of its depletion.
Also, that same year, Schilthuis published a material-balance equation that included the same
terms of fluid volumes and changes with time as Katz's method. The application of Katz's method
required the experimental determination of phase equilibria data; the Schilthuis method
represented a simplification in that the requisite terms were reduced to simpler expressions.
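As a simple illustration of the concept (not the full Schilthuis equation, which also accounts for gas-cap expansion and water influx), the balance for an undersaturated oil reservoir producing above its bubble point with no water influx reduces to

\[
N_p B_o = N\,B_{oi}\,c_e\,\Delta p,
\]

where N_p is cumulative oil produced, B_o and B_{oi} are the current and initial oil formation volume factors, N is the original oil in place, c_e is the effective fluid-and-rock compressibility, and Δp is the drop in average reservoir pressure.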
A bit later, Schilthuis proposed a method to calculate water encroachment using the material-balance equation, but his method required accurate production-history data. Several years later,
William Hurst developed a method for determining the rate of water influx that was independent of
the material-balance equation and production history; only data on pressure history and rock and
fluid properties were required.8
Displacement-Efficiency Equation
In 1940, S. Buckley and M.C. Leverett proposed two displacement-efficiency equations
concerning the displacement of immiscible fluids. These equations provided another powerful tool
for reservoir engineers and scientists. One equation describes the fraction of immiscible
displacing fluid flowing with the oil through a unit rock volume; the other describes the rate of
advance of a particular degree of saturation of the displacing fluid that exists in that volume.
These valuable equations are used in the calculation of recovery by an immiscible displacing
fluid, natural or induced. And they played a key role in allowing later engineered waterflood
predictions. Applications include prediction of the effects of relative viscosity or permeability,
volumetric rate, formation dip, differential fluid density, and wetting and pressure gradient on
recovery under specified conditions.9
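For a waterflood, and neglecting gravity and capillary effects, the two relations are commonly written as (standard forms, shown here for illustration rather than quoted from the original paper)

\[
f_w = \frac{1}{1 + \frac{k_{ro}\,\mu_w}{k_{rw}\,\mu_o}}, \qquad
\left(\frac{dx}{dt}\right)_{S_w} = \frac{q_t}{A\,\phi}\left(\frac{\partial f_w}{\partial S_w}\right),
\]

where f_w is the fraction of water in the flowing stream, k_{rw} and k_{ro} are the relative permeabilities to water and oil, μ_w and μ_o are the fluid viscosities, q_t is the total flow rate, A is the cross-sectional area, φ is porosity, and (dx/dt)_{S_w} is the velocity at which a given water saturation S_w advances through the rock.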

Maximum Efficient Rate of Production


Through the years, it has been learned that oil is recovered by three different natural mechanisms: solution-gas drive, gas-cap drive, and waterdrive. These mechanisms may be
effective individually or in combination. They differ in recovery efficiency. Recovery can be
increased by controlling the reservoir so that the most efficient available mechanism becomes the
dominant one or by injecting gas or water to supplement or modify the natural drive.
In practice, one of the most effective means of achieving efficient recovery is through control of
the rate of production of oil, water, and gas. The knowledge gained through studies of reservoir
behavior led to the concept of maximum efficient rate of production. For each particular reservoir,
it is the rate that, if exceeded, would lead to avoidable underground waste through loss of
ultimate oil recovery. This concept has found widespread application by both industry and
regulatory bodies for the efficient recovery of petroleum.10
Reservoir Simulation
By the 1950s, most of the fundamentals of modern reservoir engineering were in place. The next
evolutionary milestone was the emergence of reservoir simulation. The earliest simulators (circa
1930) were essentially sandboxes constructed with transparent glass sides. These elementary
simulators allowed researchers to view fluid flow directly. During this era, most reservoir scientists
assumed that the reservoir was a single tank or cell in which the fluid flowed from one side to the
other.
"These early modeling attempts were used to study water coning," states Donald Peaceman, a retired Exxon researcher and industry consultant. "The models allowed researchers to see the activity that occurs when a well is produced. The production of the oil causes the pressure around the well to decrease, and that causes the water to cone up and be produced with the oil.
"It wasn't until the 1930s that people in the oil industry started looking at reservoir mechanics in any kind of a scientific way," he continues. "So this was one of the first attempts to understand why water starts to be produced with the oil and why the produced-water/oil ratio increases with time."
Twenty years later, with the advent of computers, reservoir modeling advanced from sandboxes
and electrical analogs to numerical simulators. In numerical simulation, the reservoir is
represented by a series of interconnected blocks, and the flow between blocks is solved
numerically. Early computers were small and had little memory, which limited the number of
blocks that could be used.
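As a minimal sketch of the block-based idea (illustrative Python only; the grid size and the rock and fluid properties below are hypothetical placeholders, not values from any simulator described here), a one-dimensional single-phase pressure calculation can be written as an explicit update of block pressures:

    import numpy as np

    # Hypothetical properties: porosity, viscosity (Pa.s), total compressibility (1/Pa), permeability (m^2)
    phi, mu, ct, k = 0.2, 1.0e-3, 1.0e-9, 1.0e-13
    eta = k / (phi * mu * ct)            # hydraulic diffusivity (m^2/s)

    nx, length = 20, 100.0               # number of grid blocks and reservoir length (m)
    dx = length / nx
    dt = 0.2 * dx**2 / eta               # time step small enough for explicit stability

    p = np.full(nx, 20.0e6)              # initial uniform block pressures (Pa)
    p_well = 10.0e6                      # constant-pressure "well" in the first block

    for _ in range(500):
        p[0] = p_well                    # well boundary condition
        p_new = p.copy()
        # flow between neighbouring blocks: explicit finite-difference update of the diffusivity equation
        p_new[1:-1] = p[1:-1] + eta * dt / dx**2 * (p[2:] - 2.0 * p[1:-1] + p[:-2])
        p_new[-1] = p[-1] + eta * dt / dx**2 * (p[-2] - p[-1])   # no-flow outer boundary
        p = p_new

    print(p / 1.0e6)                     # pressure profile (MPa) after 500 time steps

Each array element stands for one block, and the update term represents flow between a block and its two neighbors; more blocks and more physics simply demand more memory and arithmetic, which is why early machines were so limiting.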
"When I went to work in 1951," recalls Peaceman, "we had nothing that you could call a computer. We did have access to some accounting machines that the accounting department would let us use, but only at night," he remembers. "Our job was to model the flow of gas through the porous rock of a field. To accomplish this, we had to use a converted accounting machine that had a capacious memory of 56 words of eight decimal digits each, could not store a program, and strained to complete five floating-point operations per second," says Peaceman as though he still finds it hard to believe.
"Our management did have the vision to see that digital computation was going to be the way to do reservoir modeling in the future, but that vision was still pretty faint," he remembers.
"In 1955 we significantly increased our computing capacity when we acquired a Bendix G-15," explains Peaceman, as he recalls his past experiences involving the evolution of reservoir-simulation computers. "This [computer] had vacuum-tube electronics, but its storage was almost completely on a magnetic drum. Within the next few years, we obtained IBM's first widely used scientific computer, the 704. It was a binary machine, with built-in floating-point hardware. Its central memory was magnetic core, and its secondary storage was magnetic tape," he continues. "Also, Fortran was not yet available. Our programs were written in assembly language, but that didn't bother us, since we were already used to dealing with machines that were much less user friendly."
During the following decades, computing power increased, which, in turn, allowed engineers to
create bigger, more geologically realistic models that required greater data input. This demand
was met by the creation of increasingly complex and efficient simulation programs with easy-to-use data preparation and results-analysis packages.
Over the years, numerical simulation has continued to evolve to the point that it has become a
reservoir-management tool for all stages of the life of the reservoir. No longer is it used only for
comparing the performance of reservoirs under different production schemes or for
troubleshooting failed recovery methods. Today, simulators are also used to plan field development, design measurement campaigns, and guide investment decision-making.11
Reservoir Management
Webster defines management as "the judicious use of means to accomplish an end." Thus,
reservoir management can be interpreted as the judicious use of various means available in order
to maximize the benefits from a reservoir. According to several authors who have written on
reservoir-management practices, reservoir management involves making certain choices: either
let it happen or make it happen. Without planning, they say, the generation of benefits from a
reservoir operation is left to chance.12 With sound management practices, they conclude, the
generation of benefits is enhanced, and chances of profit are maximized.
In 1963, John C. Calhoun Jr., in an article written for the JPT, described the engineering system
of concern to the petroleum engineer as being composed of three principal subsystems.
1. Creation and operation of wells.
2. Surface processing of the fluids.
3. Fluids and their behavior within the reservoir.
"The first two depend on the third because the type of fluids (oil, gas, and water) and their behavior in the reservoir will dictate where and how many wells to drill and how they should be produced and processed to maximize profits," states Calhoun.13
Technically, reservoirs have been managed for more than 100 years, but true reservoir management has typically been practiced only when a major expenditure was planned, such as original field development or waterflood installation. In fact, until 1970, most people considered reservoir
management as synonymous with reservoir engineering.14 However, during the past three
decades, its integration with other sciences, such as geology, has created a truer reservoir-management approach. During its evolution from purely reservoir engineering to the more
integrated reservoir-management function, the science of forecasting the future performance of
an oil or gas reservoir went through two distinct periods.
In the first period --- the four decades before 1970 --- reservoir engineering was considered the
only item of technical importance in managing a hydrocarbon reservoir. In 1962, Wyllie
emphasized two key points --- clear thinking using fundamental reservoir-mechanics concepts
and automation using basic computers.15
In the second period --- the three decades since 1970 --- the concept of managing oil and gas
reservoirs has evolved more toward the integration of reservoir engineering with other scientific
disciplines, namely geology and geophysics.
Craig emphasized the value of detailed reservoir description using geological, geophysical, and
reservoir-simulation concepts.16 He challenged explorationists, with their knowledge of

geophysical tools, to provide a more accurate reservoir description that could be used in
engineering calculations.
In the last 10 years, it has become clear that reservoir management is not synonymous with
reservoir engineering and/or reservoir geology. Instead, it is a blending of these disciplines into a
team effort. Projects undertaken during the past 10 to 15 years have seen the integration of
efforts into multidisciplinary project teams that work together to ensure development and
execution of the reservoir-management plan.
The Future
The science of reservoir engineering will continue to evolve; newer and better methods of
predicting reservoir behavior will be found. However, when it comes to reservoir management,
true integration of the geosciences into reservoir engineering will take time because the
disciplines do not communicate well. Simply recognizing that integration is beneficial will not be
sufficient. True integration will require persistence.17
And, while a comprehensive program for reservoir management is desirable, every reservoir may
not warrant a detailed program because it might not be cost-effective. In these cases, reservoir
engineering alone may be sufficient.

Reservoir Engineering: Augmented Recovery


In 1888, roustabout James Dinsmoor was working on several Third Venango sand wells on the
William Hill property in Venango County, Pennsylvania. On an adjoining property, a Third
Venango sand well was being deepened by its operator to tap the Speechley sand to obtain
natural gas for use on the lease. The operator found a considerable amount of gas in the
Speechley but did not have pipe available. So the well was shut in temporarily to save the gas.
Dinsmoor observed that the three Third Venango sand wells on the William Hill property
immediately experienced an improvement in oil production and that this increase was maintained
until the nearby Speechley well was completed by its operator. Following the well's completion,
Dinsmoor noticed that the Venango wells returned to their previous production levels. This
accidental repressuring represents the first known application of augmented recovery in the U.S.1
Additional instances of augmented-recovery methods occurred in the late 1870s or early 1880s
when other early well completions began to lose formation pressure and production started to
wane. Attempts by early operators to augment production included the application of gas
(vacuum) pumps to faltering wells. This practice established an increased pressure differential
between the reservoir and wellbore, increased the rate of fluid production slightly, and extended
the wells' producing lifespans.2
During the ensuing period of approximately 119 years, a variety of augmented-recovery methods
have been developed and introduced. These methods include waterflooding, gas injection,
chemical flooding, and thermal recovery. Some methods, such as waterflooding and gas
injection, use materials native to the reservoir to either replace or augment natural drive forces.
However, they do not alter any of the fundamental factors that act to retain the oil within the
reservoir. Other techniques, such as chemical flooding and thermal recovery, use more dramatic
methods to overcome forces, such as surface tension and viscosity, that inhibit the flow of oil from
the formation.
Waterflooding
As early as 1880, observers of oil-producing operations concluded that water could be an effective agent for driving oil through the formation. Most of these observations resulted from
the accidental intercommunication of natural forces under favorable circumstances that resulted
in increased production.
One of the first documented waterfloods was in Pennsylvanias Bradford field. The accidental
flooding of the field is thought to have begun in 1905, six years after the field began production.
Flooding continued for 15 years. During this time, production rates trended upward. Most
operators credited the production increase to the accidental flooding, and, even though it was
illegal, operators in both the U.S. and Canada instituted intentional field waterfloods. While the
unintentional flooding of fields is well-documented, data on intentional waterfloods by operators
prior to 1921 (the date waterflooding was legalized) are sketchy. Still, evidence from that time
suggests that intentional waterflooding occurred as early as 1875.
The earliest waterfloods were called circle floods because of the growth pattern of the water-invaded zone. As nearby wells were watered out, they too became injection wells in order to
continue the extension of the area of waterflooding. The rate of advance of the water, diminishing
with time and cumulative injection, prompted one operator to convert a series of wells
simultaneously to form a line drive, a technique that increased oil production rates even more.

The first five-spot-pattern waterflood was attempted in 1924 on a tract in the southern part of the Bradford field. Frank Haskell is credited with the idea, but Arthur Yahn receives credit for the technique's first successful deployment. Haskell's attempt failed to produce a speedy response because of the 500-ft (152-m) distance between like wells. Yahn's 190-ft (58-m) distance between wells produced a much quicker response. Initially, only surface water entered the wells, but, in late 1929, a pressure plant was installed to increase the rate of injection. Also, the five-spot pattern required that wells be reworked to achieve replacement of prior withdrawals in reasonable time frames, and later five-spot well spacing varied, depending on formation permeability. The technique gained widespread acceptance by 1937.
Operators were slow to extend waterflood activities outside Pennsylvania due to the economic conditions of 1929 and 1930. However, in 1931, the Carter Oil Co. initiated a pilot flood in the shallow Bartlesville sand of Oklahoma. Soon, others followed, and all enjoyed favorable
results. In early 1936, waterflooding operations were extended to the shallow sands of the Fry
Pool in Brown County, Texas. However, results were marginal, and it wasn't until Magnolia
Petroleum Co. initiated the West Burkburnett flood in 1944 that outstanding results were
achieved. Operations soon followed in other states between 1944 and 1949.
During this expansion period, engineers became aware of the advantages of pressure control by
reinjecting produced water in natural-waterdrive fields. In 1936, the East Texas field was the site
of initial experiments involving reservoir-pressure control as a result of the disposal of produced
water in natural-waterdrive fields. Earlier analytical studies of the reservoir by Ralph J. Schilthuis
and William Hurst led to the conclusion that, as the reservoir pressure declined, salt water
contained in the aquifer of the Woodbine sand expanded and encroached into the oil reservoir.
The encroaching water sustained an equilibrium level of reservoir pressure that depended on the rate of production.3 After several years of observation, the program was declared a success and was expanded to other fields. The field's natural waterdrive resulted in a recovery factor that has, to date, exceeded 70%.
Gas Injection
Interest in gas injection continued in the years following the turn of the century. In gas injection, compressors are used to force air or natural gas through injection wells drilled for that purpose or through old wells taken off production and used as key wells. Gas injection has four objectives: to maintain formation pressure, to restore formation pressure, to act as a drive mechanism, or to place the gas in storage until it is needed.4
In August 1911, I.L. Dunn successfully demonstrated that repressuring a reservoir by injecting
gas could increase oil production. Dunn based his experiments on an idea he had gained in Ohio
in 1903 when gas, at a pressure of 45 psi, was forced into an oil well producing from a 500-ft
(152-m)-deep sand. According to Dunn, "After 10 days the gas pressure was released and the well began to pump much oil, which continued until the gas had worked out again."
To demonstrate his technique, Dunn conducted a series of experiments in which 150,000 ft3 of
free air was compressed and forced into one well daily at a pressure of 40 psi. Within a week, the
production of surrounding wells increased, after which the compressed-air method was extended
to other parts of the property.5 As a rule, the application of air resulted in a three- to four-fold
increase in the production rate, and the use of air proved more economical than natural gas.
In 1927, Marland Oil Co.'s Seal Beach field in California was the site of the first use of higher
injection pressures. The higher pressure was necessary because of the hydrostatic pressure
exerted by high-head edge water. Pressures as high as 1,800 psi were required to force the gas
into the sand; however, after the gas was flowing into the formation, the pressure never exceeded
1,500 psi. A year later, 173 million ft3 of gas had been injected into the Bixby sand. As a result,
production increases as high as 50% were obtained in wells upstructure from the injection wells,
with little increase downstructure.

According to historians, annual U.S. production resulting from gas-injection projects reached a
peak in 1935, maintained a constant level until 1945, and then increased rapidly to 1952. Annual
production from gas-injection projects is estimated to have reached 212 million bbl in 1955.
Early Field Successes
Early gas- and water-injection projects were designed and implemented with empirical methods.
Fundamental scientific understanding of fluid flow began in the 1930s, and breakthrough
understanding of fluid displacement can be attributed to M.C. Leverett and his collaborators. This
understanding established the foundation for engineered design and prediction of waterflood
performance, including the method for layered reservoirs. It also set the stage for large-scale
applications in some of the biggest fields of the period.
Magnolia Petroleum Co.'s West Burkburnett field in Texas is an example of the success of a large
waterflood secondary-recovery operation. Using a five-spot water-injection program, the
company's flood recovered 9 million bbl of oil between 1944 and 1953, or 1.4 times that
recovered by primary methods.
In Illinois, an initial five-spot program by Adams Oil & Gas Co.-Felmont Corp. in 1943 in the
Patoka field, Marion County, resulted in a much more rapid response in the oil production rate
than expected. The field, which produced 2.8 million bbl of oil by primary production, produced an
additional 6.4 million bbl of waterflood oil by August 1960.
In the Bradford field of Pennsylvania, results from an air-injection project were quickly realized.
The daily injection of air at an average of 68,600 ft3/well at 300 psi increased per-well production
from 0.25 to 12 BOPD in less than two months. Annual production from the 22-well project
increased from 3,474 bbl in 1925 to 18,524 bbl in 1927, and total production from the field is
estimated to have increased by 25% from the air injection alone.
Enhancing Displacement Efficiency: Miscible-Gas Injection
The first use of miscible-gas injection by the petroleum industry occurred in the 1950s as a result
of a search for a miscibility process that would recover oil effectively during secondary and
tertiary production. The injection fluids used include liquefied petroleum gases (LPGs), such as
propane, high-pressure methane, methane enriched with hydrocarbons, and high-pressure
nitrogen and carbon dioxide (alone or followed by water). All of these are effective in displacing
trapped reservoir oil, but applications are dependent on the field and a variety of economic
considerations, such as the commercial marketability of these products individually.
The injection of products other than air or water into formations to encourage oil to flow to the
wellbore actually began as early as 1927. At that time, the Midwest Refining Co. injected surplus
liquefied-gas products into the secondary-gas-cap area of the Salt Creek First and Second Wall
Creek reservoirs. Although the intent was to improve gas-drive operations, the operators were not aware of the enriched-gas-drive mechanism that would be exploited later.
Block 31 Field
Started in 1949, the oldest active operation involving miscible-gas injection is Atlantic Richfield's (Arco) Block 31 field in Crane County, Texas. It began as a miscible-hydrocarbon-gas-injection project and is still in operation as a mixed hydrocarbon/nitrogen-injection project.
"The Block 31 field was discovered in 1947," states Ben Caudle, a former Arco employee and now a professor of petroleum engineering at the U. of Texas, "and the light crude in the field had a high shrinkage factor. Our forecasted recovery rate was determined to be 10% at best, so we began looking for a method of getting the oil out in quantities that would make the economics work."

Barney Wharton, at Arco's research center, had an idea. "Barney suggested that in order to handle the shrinkage, we should inject readily available natural gas into the formation at high pressures. The gas would then evaporate the light-ends liquids before we recovered the oil at the surface," Caudle recalls.
The Texas Railroad Commission, Texas' oil and gas governing body, was contacted to obtain approval for the new, untried recovery method for the reservoir. The commission approved the procedure and also exempted the field from the mandatory shut-in period in effect at the time. "In those days, we were on production allowables," says Caudle. "Wells couldn't be produced more than 9, 10, maybe 11 days a month because of the abundance of available crude," Caudle says.
"Next, we conducted experiments in the laboratory to determine the most appropriate method of applying our idea. Nobody was more surprised than we were when, halfway through our experiment, it dawned on us that the injected gas was becoming miscible due to the high pressure. The multiple contacts of the gas in the pore spaces drew the gas and oil closer together until a miscible slug formed that drove the oil ahead of it," explains Caudle. "We put our method into operation and, as a result, Block 31 became the industry's first miscible-gas oil-recovery operation." Later on, it was determined that nitrogen gas could be substituted for the natural gas. As a result of the miscible-gas-injection project, the field's recovery factor has reached approximately 70%. The field is still in operation.
During the 1970s, the U.S. industry switched largely to carbon dioxide gas that was available
near major west Texas fields. It achieves miscible displacement at low pressures, has a greater
viscosity under pressure than many other gases, and is less costly than LPGs or methane.6
Hydrocarbon gases continue to be used widely in Alaska and elsewhere in the world.
Chemical Flooding
During 1936-1937, alcohol was injected into oil wells to displace capillary-held water from the
near-wellbore region, where it offered the most severe restriction to oil flow. Later, when oil
production from the well was resumed, the operator anticipated he could remove the alcohol
surrounding the wellbore and realize an increase in oil productivity.7 His activities marked one of
the earliest uses of a chemical to displace oil in a formation.
Natural-drive fluids, like water and gas, leave a large supply of oil behind in the reservoir under
the best of conditions. Because they are immiscible with formation oil, the oil resists being
displaced from the rock pores. Also, these fluids have densities and mobilities that are
incompatible with oil. Chemical flooding adds chemicals to the water in order to overcome these
problems. Three types of chemical floods are used: polymer, micellar/polymer, and alkaline floods.
Polymer flooding is a type of chemical flood in which long, chainlike, high-molecular-weight molecules are
used to increase the viscosity of injected water. This improves the mobility ratio of injected water
to reservoir oil, resulting in a more effective displacement process. Micellar and alkaline floods
are two methods used that result in improved microscopic oil displacement by reduction of
oil/water interfacial tensions. In alkaline floods, the injected material reacts with naturally
occurring acidic oil components to form a surfactant. Micellar/polymer flooding is a two-part
recovery technique in which a surfactant/water solution is injected to reduce oil/water interfacial
tensions, resulting in improved microscopic oil-displacement efficiency. This is followed by
polymer-thickened water to push the oil and surfactant slug toward the producing wells.
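A simple way to express the improvement is through the mobility ratio between the displacing water and the displaced oil (a standard definition, stated here only for illustration):

\[
M = \frac{k_{rw}/\mu_w}{k_{ro}/\mu_o},
\]

where k_{rw} and k_{ro} are the relative permeabilities to water and oil and μ_w and μ_o are their viscosities. Thickening the injected water with polymer raises μ_w, lowering M toward or below unity and making the displacement front more stable.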
The popularity of chemical flooding reached a peak during the 1970s when research projects
abounded, and a large number of field tests were conducted in the 1980s. Chemical-flood field
tests reached a peak of 206 in 1986.8 However, the oil-price collapse that same year interrupted
the popularity of this recovery method when it presented operators with a fundamental dilemma linked to economics. Since the cost of the materials is generally linked to the cost of petroleum, a
vicious cost spiral ensued as oil prices collapsed. Since the mid-1980s, oil prices have seen
some growth; however, this improvement has not been sufficient to reignite the interest seen in
the 1970s and 1980s. Proponents of chemical floods are seeking ways of breaking this economic
linkage by generating surfactants from nonhydrocarbon feedstocks.
Thermal Recovery: Steamflooding
Of the two main methods of thermal recovery (steam injection and in-situ combustion), the injection of hot fluids, such as steam, into the reservoir is the oldest and most controllable method. Flooding with heated water, steam, or superheated steam has been around almost as
long as conventional waterflooding. In fact, the idea for using heated fluid supplied from the
surface can be traced back to a proposal by B.W. Lindsly in 1928.9
To date, most thermal-recovery work has been accomplished with steam. It has found significant
application in many parts of the world, including the U.S., Venezuela, Canada, Germany, Russia,
China, and Indonesia (the site of the largest steamflood).
Hot-fluid-injection theory is simple; heated water or steam is generated on the surface and
introduced into the formation through injection wells. The heat serves to lower the oil viscosity in
the formation, allowing it to flow more easily to the producers. Steam is preferred because its latent heat of vaporization makes it much more efficient at delivering thermal energy to the reservoir. Although steamflooding can be effective in both light and heavy oils, it is used
predominantly in heavy oils.
In-Situ Combustion
Purposeful underground combustion started in Russia around 1933 (unintended combustion had
occurred previously during some air-injection projects). This early project was conducted in a
pressure-depleted reservoir containing 36°API oil. Since then, in-situ combustion projects have been implemented in many locations, including Romania, Canada, the U.S., and India.
In-situ combustion generates heat in a reservoir through the introduction of air into the reservoir,
after which a fire is ignited in the formation near an injection well. The fire and airflow move
simultaneously toward the production wells. This forward-combustion method uses the injected
air and vaporized formation water as the heat carrier (dry combustion) or may combine air and
water injection (wet combustion) to increase process efficiency. A seldom-used variant, reverse combustion, allows the fire to move from the production well toward the injection well(s); the fire front moves counter to the flow of injected air.
The first known attempts to apply an in-situ combustion process in the U.S. occurred in 1952,
when Magnolia and Sinclair engineers, each group working independently only 300 miles apart in
Oklahoma, initiated movements of combustion fronts in pattern-type experiments after several
years of laboratory testing.10 Reports of these two experiments provided the impetus for
additional research in other laboratories. Later on in the 1950s, General Petroleum Corp. and
Magnolia Oil Co. generated a cooperative underground combustion-field-test effort supported by
10 other oil companies in the South Belridge field in California.
Simultaneously, thermal-recovery techniques also were beginning to be applied in other areas of
the world. One of these areas was Venezuela.
"The use of steam (cyclic steaming and steamflooding) began in the oil fields of Venezuela in the late 1950s and was in routine use in eastern and western Venezuelan oil fields by the mid-1960s," says Tom Reid, a former Phillips Petroleum Co. engineer who now works for the U.S. Dept. of Energy. "I took steam to the Morichal field in eastern Venezuela in 1964 because our management was convinced that the economics of steaming high-sulfur, heavy oil, which had been successfully substantiated by operators in California, could be used in Phillips' high-sulfur, heavy-crude operations in Venezuela," he explains.
"During the 1960s, cyclic-steam, steam-drive, cyclic-hot-water, and hot-water-drive operations raised production to over 100,000 B/D until the latter part of that decade, when the company cut production back to 25,000 B/D to match the needs of its refinery in England that was processing the heavy crude into asphalt for paving roads," he continues.
"During these steam operations, we learned a lot," Reid says. "We received a couple of surprises when the first well was steamed. One of these surprises was the production of hydrogen that occurred when the high-temperature steam reached the reservoir sands. We traced this produced hydrogen in the offset producers, and it indicated the direction of the steam front. Also, the cyclic-hot-water and hot-water drive we used produced excellent responses in these virgin reservoirs."
Reid also comments on the use of fireflooding in the Morichal field. "It wasn't successful," he states candidly. "Counterflow combustion wasn't possible because of the occurrence of spontaneous ignition, and conventional direct-drive fireflooding failed because of the low structure (flatness) of the reservoirs. The nitrogen and carbon dioxide produced by the fireflood gassed out distant producers, thereby reducing the overall field production."

Formation Evaluation
Logging and Testing
In ancient China, wells were drilled to depths of as much as 3,000 ft (914 m) to locate and tap
sources of salt brine. Their primitive drilling technique resembled cable-tool drilling methods used
centuries later to explore for oil. To obtain knowledge of the formation below, cuttings and fluids
brought to the surface by bailing operations were dumped on the ground and examined by
Chinese drillers. Cursory examination of the cuttings provided information on the formations
penetrated and enabled the drillers to determine how close they were to finding the needed brine.
When early oil pioneers drilled their wells 2,000 years later, they knew little about the formations
they were drilling and showed practically no interest in the stratigraphy their bits penetrated.
Instead, their interest was focused on making holes and looking for the presence of oil. Later,
realizing it was very helpful to know something about the formation, they examined and recorded
the characteristics of cuttings brought to the surface by bailing operations. Eventually
mineralogists applied a microscope to the cuttings and advanced formation-evaluation efforts
further by measuring the density, hardness, and electrical properties of the rocks and by making
chemical analyses of them.1
Core Sampling and Mud Logging
For almost 50 years, recorded descriptions of drill cuttings were the sole source of formation
knowledge. Around 1920, the first core-barrel sampling tools were put to work in California, west
Texas, and Colorado. These tools cut core samples of the formations from the bottom of the
borehole. After collection, the cores were analyzed and underwent experimentation in the
laboratory to garner valuable reservoir data on the formations being drilled. Over the years,
diamond-coring tools became the preferred method of collecting core samples.
While mechanical coring was an improvement on simple cuttings records, it was expensive
because it had to be done continuously during the drilling of the well. In an attempt to be more
cost-effective, efforts were made to obtain as much data as possible from cuttings samples.
The advent of rotary rigs and the use of drilling mud to circulate cuttings from the bottom of the
hole to the surface produced an interest in comparisons of cuttings obtained from different drilling
depths. The cuttings were treated with acetone or ether and were exposed to ultraviolet light to
detect the presence of small amounts of oil. This test was repeated again and again during
drilling, and the results, or lack thereof, were documented.
During the late 1930s, John T. Hayward developed the continuous mud-analysis log, which
shows the combined results of mud analysis for both gas and oil and relates these results to
factors such as drilling rate and depth.
Actually, Hayward was more interested in the gases and liquids in the mud than the cuttings.
From appropriate measurements at the surface, he succeeded in determining the content of oil
and gas in the various formations traversed. He then correlated these observations with depth to
create a continuous diagram of the oil and gas content of the formations penetrated while drilling
was in progress.
As a result of his work, continuous mud-analysis logging furnishes a variety of useful formation-evaluation data, including the amount of methane, liquid hydrocarbons, and oil in the cuttings. "This log will eliminate frequent mechanical coring. Cores will now be taken only when a show of oil or gas makes it advisable," proclaimed Hayward when asked about the long-term implications of his technique.3

Electric Logging
In March 1921, Marcel Schlumberger and several associates used a 2,500-ft (762-m) borehole
and conducted downhole resistivity measurements to see if they could enhance the interpretation
of surface seismic data. They found that their measurements did indeed reflect the variation in the
nature of subsurface formations penetrated by the wellbore. Six years later, in a 1,640-ft (500-m)
well in France's Pechelbronn field, experimental physicist Henri Doll successfully produced the world's first electric log using successive resistivity readings to create a resistivity curve.
The technique actually was invented by Conrad Schlumberger with the help of his brother,
Marcel. The Schlumberger brothers believed that, among the physical properties of metal ores,
their electrical conductivity could be used to distinguish them from their surroundings.
Very basic equipment was used to record their first map of equipotential curves in a field near
Caen, France, in 1911. Plots of curves derived from their surveys confirmed both an ability to
detect metal ores and a capability to reveal features of the subsurface structure. Subsequently,
this information led to the location of subsurface structures that could form traps for minerals.
To understand the measurements made at the surface better, the Schlumbergers knew they had to incorporate resistivity information from deeper formations. The result was Henri Doll's 1927 Pechelbronn field log.4
The procedure used at Pechelbronn was crude and makeshift in nature.
But it didn't take long to realize that the resulting resistivity log could be a valuable formation-evaluation tool. Clays have a low resistivity. Porous sands are conductive if saturated with salt water, are moderately resistive if the water is fresh, and are very resistive if the impregnating fluid is oil. Thus, important clues could be deduced from the log as to the formation's character. As for oil sands, a relationship was determined to exist between resistivity and oil potential: the higher the resistivity, the better the production.
Following its first use in France in 1927, electric logging was introduced in Venezuela, the
U.S.S.R., and the Dutch East Indies in 1929. In 1932, after a series of demonstrations, electric
logging came to the U.S. when Shell Oil issued Schlumberger contracts for work in California. By
the end of 1933, the initiation period was over, and 12 crews were applying the technique
worldwide.
SP Curve
Four years later, in 1931, the discovery of a spontaneous potential (SP) phenomenon produced
naturally between the borehole mud and formation water in permeable beds introduced a new
basic measurement: the SP curve. Attempts by researchers to explain how the phenomenon
works resulted in agreement that it was due to electrocapillarity (filtration of liquid from the
borehole into the permeable formations).
However, this explanation did not hold up in subsequent logging, and another cause was added to explain the SP curve: the electrochemical effect. Laboratory and field work confirmed the importance of the chemical effect, but everyone involved agreed that knowledge of the phenomenon needed further clarification. During the ensuing decade, the work of several researchers (Mounce, Rust, and Tixier) indicated that the SP effect was mainly a chemical potential, with only a small and sometimes negligible filtration potential, but it was M.R.J. Wyllie, a Gulf Oil Co. researcher, who provided a comprehensive explanation of the SP curve. In a 1948 technical paper titled "A Quantitative Analysis of the Electrochemical Component of the S.P. Curve," Wyllie suggested that SP consists of two different effects.

The explanation he offered greatly enhanced the science of SP logging. When the SP curve was recorded simultaneously with the resistivity curve, permeable oil-bearing beds could be differentiated from impermeable, nonproducing beds. The combination of the resistivity and SP curves considerably improved the reliability of conclusions about the characteristics of the formation material.
In early uses, the SP curve was used exclusively as a tool for locating permeable beds and
defining their boundaries. Later, with the introduction of quantitative analysis methods, the SP log
was used to derive information on formation-water resistivity, an essential element for computing
water saturation from log data.
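A commonly used form of that quantitative relationship (a standard log-interpretation expression, included here only as an illustration) is

\[
\mathrm{SSP} = -K \log_{10}\!\left(\frac{R_{mf}}{R_w}\right),
\]

where SSP is the static SP deflection in millivolts, R_{mf} is the mud-filtrate resistivity, R_w is the formation-water resistivity, and K is a temperature-dependent coefficient (approximately 61 + 0.133 T, with T in °F).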
In the late 1940s and early 1950s, Schlumberger introduced the microlog, laterolog, and
microlaterolog. The microlog provided a more accurate determination of permeable beds and
their boundaries in limestone, sand, and shale, where the SP log wasn't satisfactory.
Laterologging began with a device called the guarded electrode, which was invented by Conrad
Schlumberger in the 1920s. From this, the laterolog was developed for use in wells drilled with
highly conductive mud because it more sharply defined bed sequences in hard formations. The
microlaterolog soon followed. It provided a more reasonable estimate of the resistivity of an
invaded zone (Rxo) and of residual oil saturation in practically all formation types.
In the early 1970s, Schlumberger introduced its Dual Laterolog-Rxo tool. The dual laterolog
answered the need for a tool capable of producing useful resistivity measurements even when
true formation resistivity and mud resistivity are very high, as in the case of carbonates and
evaporites drilled with salty mud. It also provided greatly improved thin-bed resolution.6
Induction Logging
Resistivity logging is a valuable tool, but it does not operate well under all conditions. This is
especially true in cases in which there is no liquid filling the borehole to allow contact to be
established between the electrodes and the formation or in cases in which oil-based mud is used.
For these situations, induction logging is much more suitable.
Applied in shallow ore exploration for over 25 years, the induction process had not been used in
oil exploration before 1942. Its oil industry application is credited to Henri Doll. While seeking a
way of using the induction process to create a military vehicle that could detect enemy mines in
its path during World War II, Doll realized the possibilities that might be gained by applying the
induction process to oil-exploration logging. His colleagues at Schlumberger strongly opposed his
use of the induction process in logging, citing problems posed by very small signal strength, high
direct mutual-coupling interference, and the lack of adequate supporting technology.
Nevertheless, Doll persisted in this complicated task, leading a team that was determined to
develop an order-of-magnitude improvement in logging technology.
In induction logging, which was introduced in the mid-1940s, a sonde that employs alternating
current of constant magnitude and a coil (transmitter) is lowered into the well to create an
alternating magnetic field from which eddy currents are introduced into the formation. The eddy
currents follow circular paths centered on the axis of the sonde. The eddy currents, in turn, create
a secondary magnetic field that induces an electromotive force, or signal, in a second coil
(receiver) also located in the sonde. The signal is amplified, rectified to direct current, then
transmitted to the surface, where it registers in the form of a continuous log.
History validated Doll's vision, perseverance, and faith with the eventual success of his induction-logging tool. Since its first commercial use in 1946, induction logging has become one of the most
widely used logging methods in the world and has overtaken electric logging because it is
regarded as superior in many applications.7

In 1963, the dual induction-laterolog tool was introduced. This tool provided the simultaneous
recording of three resistivity measurements and the SP curve. All measurements are focused to
give true formation resistivity in a variety of conditions for wells drilled with freshwater muds.
Nuclear Logging
In 1896, H. Becquerel discovered radioactivity when his photographic plate was affected by a
preparation of uranium. By the early 1900s, it became evident that all terrestrial materials contain extremely minute but measurable quantities of radioactive elements. Over time, these
radioactive elements disintegrate and transform into other elements. As they disintegrate, they
emit energy in the form of alpha, beta, and gamma rays. In rock formations, this radioactivity can
be measured and logged to determine the types and nature of rock formations being drilled.
Gamma Ray Logging
During the late 1930s, electric logging had been accepted as a viable method of determining
formation materials as the borehole was drilled. However, it could not be used if the hole was
lined with steel casing. Therefore, the development of a logging method for use in these
applications was crucial. The result was gamma ray logging, which measures and records natural
gamma ray activity in formations contacted by the borehole.
A Tulsa, Oklahoma, group made the first gamma ray log in a well near Oklahoma City. The
results clearly demonstrated that the technology could reveal the lithology of the borehole. A
company was soon founded, and the first commercial gamma ray survey was run for the Stanolind Oil and Gas Co. in May 1940 in Texas' Spindletop field. Subsequent use of the
technology determined that it was particularly good for defining oil beds, spelling out the geology,
and as a substitute for the SP curve in hard formations or with salty muds.
The gamma ray spectrometry log, first used in 1970, is a refinement of the gamma ray log. Like
the gamma ray log, it detects naturally occurring gamma rays, but it also defines the energy
spectrum of the radiation. Because potassium, thorium, and uranium are responsible for the
energy spectrum observed by the tool, their respective elemental concentrations can be
calculated. Calculated-concentration curves show a correlation to depositional environment,
diagenetic processes, clay type, and volume. Also, it is useful in estimating shale content.
Neutron Logging
While experimentation and development was occurring in gamma ray logging, R.E. Fearon
advanced, in 1938, his idea for a different type of nuclear-logging service. Bruno Pontecorvo
subsequently perfected it in 1941. The process is called neutron logging, and it involves the
bombardment of the formations along the borehole with neutrons. Then, the secondary gamma
ray activity generated by the bombardment is measured.
Since the variations in gamma ray activity observed on neutron logs are a result of the hydrogen
content of the formations, they offer a measurement of porosity. Therefore, porosity determination
has become one of the most important applications for the neutron log.
Early data obtained in gamma ray and neutron logging were of a qualitative, not quantitative,
nature, and there was no zero line in the diagrams. However, improvements in the late 1940s
incorporated zero lines into the logs, and numerical scales of gamma ray and neutron intensities
were added.
The continued rapid and intensive development of both resistivity- and nuclear-logging
techniques resulted in a movement from qualitative interpretation of logs to quantitative
interpretation with the 1942 publication of a paper by G.E. Archie. The paper detailed his
discovery of a relationship between electrical resistivity and formation-water saturation. Archie's work inspired an intensive investigation of data that were obtained from logs of subsurface surveys and their relation to fundamental reservoir properties, such as porosity, permeability,
water salinity, and reservoir limits.8
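Archie's relationship is usually written as (the standard form; the specific constants below are typical values, not figures from the 1942 paper)

\[
S_w^{\,n} = \frac{a\,R_w}{\phi^m\,R_t},
\]

where S_w is water saturation, R_w is formation-water resistivity, R_t is the true formation resistivity read from the log, φ is porosity, and a, m, and n are empirical constants (often taken near 1, 2, and 2, respectively).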
Formation-Density Logging
Another nuclear-logging technique introduced in the 1960s was the formation-density log. The
device, which uses a gamma ray source and detector, is placed in contact with the borehole wall
to measure the bulk densities of formations in situ.
Field applications have demonstrated that measurement of formation density is a useful and
revealing technique for determining the porosity, lithology, and fluid content of formations in
conditions that hamper other logging methods, such as the logging of empty or gas-filled holes.
Sonic Logging
Unlike nuclear logging, sonic or acoustic logging operates on the principle that sound waves
(elastic waves) travel through dense rock more quickly than through lighter, more porous rock.
The technique, which resembles electric logging, uses a transmitter and receivers combined in
one downhole tool to measure, in microseconds, the time differences required for sound pulses to
traverse formation beds.
Some of the first experimental sonic logs were made in the early 1930s, but the first commercial
logs weren't made until 1954. Sonic logs provided more accurate interpretation of porosity, formation fractures, and the water table.
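Porosity is commonly estimated from the measured interval transit time with a time-average relation of the form (a standard interpretation formula, given here only for illustration)

\[
\phi = \frac{\Delta t_{\log} - \Delta t_{ma}}{\Delta t_{f} - \Delta t_{ma}},
\]

where Δt_log is the transit time read from the sonic log and Δt_ma and Δt_f are the transit times of the rock matrix and the pore fluid, respectively.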
Pressure-Transient Testing
Instruments for measuring pressures in oil and gas wells were developed during the 1920s, and a
study by Pierce and Rawlins in 1929 reported a relationship between bottomhole pressure and
potential production rate. The study encouraged the development of improved instruments and
recording devices. By 1933, there were more than 10 different kinds of pressure-measuring/recording instruments in use.
Early measurements were static and were acquired by lowering a pressure-measuring device to
the bottom of a well that had been shut in for between 24 and 72 hours. However, engineers soon
recognized that, in most formations, the static pressure reading was a function of shut-in time and
mainly reflected the permeability of the reservoir rock around the well. What engineers wanted
was another basic measurement: the pressure-transient reading, in which the pressure variation
with time is recorded after the flow rate of the well is changed.
Morris Muskat was the first to offer an extrapolation theory. Muskat's theory related the change in
pressure with time to the parameters of the reservoir. In 1937, he presented a method for
extrapolating the measured well pressure to a true static pressure, and he stated at that time that
his method was only a qualitative application because it did not take into account the important
aspect of fluid compressibility.
The first comprehensive treatment of pressure behavior in oil wells that included the effects of
compressibility was that of C.C. Miller, A.B. Dyes, and C.A. Hutchinson in 1950. The following
year, D.R. Horner presented a somewhat different treatment. The two papers still furnish the
fundamental basis for modern theory and analysis of oilwell pressure behavior.9
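In the buildup analysis that grew out of this work, the shut-in pressure is commonly plotted against the Horner time ratio, for example in the field-unit form (a standard expression, included here as an illustration)

\[
p_{ws} = p_i - \frac{162.6\,q\,\mu\,B}{k\,h}\,\log_{10}\!\left(\frac{t_p + \Delta t}{\Delta t}\right),
\]

where p_{ws} is the shut-in bottomhole pressure, p_i is the initial reservoir pressure, q is the production rate before shut-in, μ is viscosity, B is the formation volume factor, k is permeability, h is net thickness, t_p is the producing time, and Δt is the shut-in time; the slope of the straight-line portion of the plot yields the permeability-thickness product.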
One of the greatest contributors to the field of pressure-transient analysis was the late Henry J.
Ramey Jr. His 1970 paper investigating wellbore storage and skin effect in transient liquid flow
ushered in the modern era of pressure-transient analysis.
And, despite the development of other tests, most engineers agree that the standard for pressure-transient testing will always be the pressure-buildup test. "It is the most direct method of obtaining an average pressure for reservoir analysis, it is operationally simple, and the theory is well developed," stated a reservoir engineer when asked about the importance of the pressure-buildup test.
Drillstem and Formation Testing
It has long been realized that the sampling of fluids and pressure in the porous strata of the
formation being drilled can provide valuable information on the formation and its ability to yield oil
and/or gas. However, early methods of obtaining these data required the setting of casing, which made testing expensive.
Working in El Dorado, Arkansas, in the late 1920s, E.C. Johnston and his brother M.O. Johnston
developed the first drillstem tester in 1927 and subsequently refined it in the early 1930s. The test
is a measurement of pressure behavior at the drill stem and is a valuable way for an engineer to
obtain important sampling information on the formation fluid and to establish the probability of
commercial production.
In the 1950s, Schlumberger Co. introduced a more advanced method for testing formations. The
Schlumberger formation-testing tool, placed in operation in 1953, fires a shaped charge through
a rubber pad that has been expanded in the hole until it is securely fixed in the hole at the depth
required. Formation fluids flow through the perforation and connecting tubing into a container
housed inside the tool. When filled, the container is closed, sealing the fluid sample at the
formation pressure. The tool then is brought to the surface, where the sample can be examined.
The information obtained by this method is often better than results obtained from drillstem
testing.10
The Future
The future of logging and testing is bright indeed. Most of the basic logging and testing tools and
techniques developed during the past 70 years have been refined and, in some cases, reinvented
to perform the logging and testing function of formation evaluation. However, even newer and
more ingenious technology is being introduced thanks to advances in electronics miniaturization
and in computer hardware/software that can go downhole rather than remain at the surface.
These tools have made real-time logging while drilling possible.
This is a very fundamental change from the past, when it was necessary to halt drilling while logs
were run periodically to obtain measurements. In a sense, the real-time continuous-logging capability that John Hayward sought with his continuous mud-analysis technique in the 1930s has been attained.
Integration is revolutionizing logging tools as they are packaged into logging-tool strings that
weigh half as much, perform multiple functions in fewer trips, and are easier to use.
Interpretation-software advances are making the information that logging tools collect more useful
to operators who are looking for tools that provide more powerful solutions in difficult-to-interpret,
more-problematic zones. Higher resolution in the tools is making it possible to identify pay zones
that previously were overlooked due to complex lithology.

Subsurface Equipment/Artificial Lift


Maximizing Production from the Well
In October 1859, Colonel Edwin Drake rigged up a pump to lift an oil and water mixture a distance of ten feet to the surface. It was the world's first commercial oil well and the first use of artificial lift to produce oil commercially. Today, 140 years later, pumps are employed in more than 80% of all artificial-lift wells.
As oil or gas is produced, the reservoir pressure declines. At some point, the pressure may become too low to sustain natural flow, and artificial lift may be required. Artificial-lift methods fall into two groups: those that use pumps and those that use gas.
Beam Pumping
The walking beam pump, was an idea borrowed from the water-well industry. One end of a heavy
wooden walking beam set on a pivot was attached by a stiff rod to a steam engine. Attached to
the other end of the beam was a string of long, slender sucker rods, which were connected to a
pump at the bottom of the well. The engine cranked the rod up and down and actuated the pump
to pump oil to the surface. Since its introduction in 1925, the Trout-designed unit has become the dominant artificial-lift beam-pumping unit. Over the years, development focused on improving the reliability of pump parts and on better design methods. For example, sucker-rod materials evolved from wood to steel and, later, to fiberglass and reinforced plastic. Design methods were significantly improved by the Gibbs sucker-rod diagnostic technique. "The technique uses mathematical equations to model the elastic behavior of long sucker-rod strings," says Gibbs.
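The core of the Gibbs technique is usually described as a damped one-dimensional wave equation for the rod string. As a sketch only (the published method also uses measured surface position and load as boundary conditions, which are not reproduced here), it can be written as

\[ \frac{\partial^2 u}{\partial t^2} = v^2\,\frac{\partial^2 u}{\partial x^2} - c\,\frac{\partial u}{\partial t} \]

where u(x, t) is the rod displacement at depth x and time t, v is the velocity of stress waves in the rod material, and c is a damping coefficient. Solving this equation down the string is what allows a downhole pump card to be inferred from surface measurements.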
Electrical Submergible Pumps (ESPs)
The first electrical submergible pumping unit was developed in Russia in 1917 by Armais Arutunoff, who later migrated to California. Although initially not very successful, the use of ESPs in the oil industry was assured with the help of Frank Phillips of Phillips Petroleum Co. in Bartlesville, Oklahoma. Since that time, the concept has proved to be an effective and economical means of lifting large volumes of fluid from great depths under a variety of well conditions. Today's ESPs
are essentially multistage centrifugal pumps that employ blades, or impellers, attached to a long
shaft. The shaft is connected to an electrical motor that is submerged in the well. The pump
usually is installed in the tubing just below the fluid level, and electricity is supplied through a
special heavy-duty armored cable.
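As a rough illustration of how such a pump is sized, the short Python sketch below adds up the head the pump must develop (net lift, tubing friction, and surface backpressure converted to feet of fluid) and divides by an assumed head per stage. Every number is an illustrative assumption, not data from the text or from any specific pump.

# Rough ESP sizing sketch; all values are illustrative assumptions.
fluid_level_ft = 4500.0            # assumed depth to producing fluid level (net lift)
tubing_friction_head_ft = 150.0    # assumed friction loss in the tubing, in feet of fluid
wellhead_pressure_psi = 100.0      # assumed surface backpressure
fluid_gradient_psi_per_ft = 0.40   # assumed gradient of the produced fluid

# Total dynamic head (TDH) the pump must develop, expressed in feet of fluid
wellhead_head_ft = wellhead_pressure_psi / fluid_gradient_psi_per_ft
tdh_ft = fluid_level_ft + tubing_friction_head_ft + wellhead_head_ft

# Each centrifugal stage adds a roughly constant head at a given rate;
# 25 ft per stage is only a placeholder read off a generic pump curve.
head_per_stage_ft = 25.0
stages_needed = tdh_ft / head_per_stage_ft

print(f"TDH = {tdh_ft:.0f} ft of fluid, or roughly {stages_needed:.0f} stages")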
Subsurface Hydraulic Pumps
There are two types of hydraulic pumps for artificial lift. One is fixed-pump design; the other is
free-pump design. In fixed installations, the downhole pump is attached to the end of the tubing
string and run into the well. Power fluid is directed down an inner tubing string, and the produced
fluid and the return power fluid flow to the surface inside the annulus between the two tubing
strings. Free-pump installations allow the downhole pump to be circulated into and out of the well
inside the power-fluid tubing string, or they can be installed and retrieved by wireline operations.
Jet pumps are a special class of hydraulic subsurface pumps and are sometimes used in place of
reciprocating pumps. Unlike reciprocating pumps, jet pumps have no moving parts and achieve
their pumping action by means of momentum transfer between the power fluid and produced
fluid.
Gas Lift
This method injects gas into the well through gas lift valves. The gas reduces the weight of the liquid column, which results in a reduction of bottomhole pressure. The effect is an increase in liquid production. Since the method's development in the 1930s, several important advances have taken place. Modern gas lift installations have easily retrievable valves installed in side-pocket mandrels.
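A minimal sketch of the pressure effect described above, with assumed gradients and depth (none of these values come from the text):

# Why gas injection lowers bottomhole flowing pressure: a sketch with assumed numbers.
depth_ft = 8000.0
grad_liquid_psi_per_ft = 0.45    # assumed gradient of the undiluted liquid column
grad_aerated_psi_per_ft = 0.20   # assumed gradient after gas is injected into the column
wellhead_pressure_psi = 100.0

p_bottom_no_gas = wellhead_pressure_psi + grad_liquid_psi_per_ft * depth_ft
p_bottom_gas_lift = wellhead_pressure_psi + grad_aerated_psi_per_ft * depth_ft

print(f"Without gas lift: {p_bottom_no_gas:.0f} psi at bottomhole")
print(f"With gas lift:    {p_bottom_gas_lift:.0f} psi at bottomhole")
# The lower bottomhole pressure increases drawdown and therefore liquid production.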

The Future
Artificial-lift technologies of the future will involve software, electronics, sensor technologies, and data transfer and management. This will require developers to explore the limits of technology.

Subsea Completions
Enabling Early Production From Deepwater, Remote, and Marginal Fields
While subsea-well completions occupy a small niche in the offshore petroleum industry, their
evolution has attracted a lot of attention because they offer a means of producing field extremities
not reachable by directional drilling from existing platforms. Also, they offer production options
where field economics do not justify the installation of one or more additional platforms.1
During the past four decades, subsea-well-completions technology has grown from untested
engineering theory to viable, field-proven equipment and techniques that are accepted by the
petroleum industry and the governments of producing countries. In the 37 years since the first
systems were installed, approximately 1,100 subsea wells have been completed. Two-thirds of
those wells are still in service. Among these completions is a variety of configurations that
includes single-satellite wells, which employ subsea trees on an individual guide base; subsea
trees on steel-template structures with production manifolds; and clustered well systems, which
are essentially single-satellite wells connected to a nearby subsea-production manifold. All of
these configurations typically are tied back to platforms, floating production and storage vessels,
or even to shore.
Subsea-Completions Technology Evolves
Historically, water depth and cost have consistently challenged operators engaged in exploration
and production in the world's offshore areas. In an effort to handle both of these challenges,
producers deemed subsea completions their most economical choice.
For years, water depth alone was the driver of the development of subsea equipment due to the
physical limitations that become increasingly difficult as depth increases. However, in more recent
years, cost has become an additional driver as lower oil prices have mandated that companies
receive more value for their deeper-water investments. A third historical driver has been the
speed with which subsea completions can be installed to establish a stream of revenue for
operators.
"The greater emphasis on using subsea completions has been to produce marginal fields to existing platforms," stated industry expert Harvey Mohr in 1989 and 1991 trade-journal articles on subsea-well completions.2 "With emphasis on producing in deeper and deeper water, the need to bring fields on stream economically greatly enhances the attraction of subsea completions."3
During the 1960s, wellhead components that enabled operators to get their newly drilled subsea
wells on stream were among the first pieces of subsea-completion hardware to emerge from
supplier drawing boards. During this period, the first subsea trees were placed on the floor of the
Gulf of Mexico (GOM). Of the 68 subsea completions installed in that decade, virtually all were in
U.S. waters. Wells were tied back to fixed platforms in maximum water depths of 400 ft (189 m).
Subsea trees took on a strange appearance as through-flowline (TFL) technology was developed
to provide a means of sending downhole tools into the completion.
The 1970s and early 1980s saw subsea production activity increase in all parts of the world.
Rising crude prices led to a frenzy of offshore-development projects. Investments in production
facilities reached huge proportions. During this period, the first subsea-tree system was installed
totally below the seabed. It was part of a caisson completion system that involved the installation
of a master-valve block within a caisson below the sea floor to protect the well and its
components from icebergs. TFL technology improved, and completions were extended to 650-ft
(221-m) water depths during the period. Also, the pull-in flowline-connection technique was
developed to allow completions to produce back to remote facilities. During the late 1980s and by
the early 1990s, advancements in the technology necessary to economically develop deepwater
oil and gas fields using floating production systems and subsea satellite installations were
developed. Also, the first horizontal tree was installed and a modular approach to design
emerged, as operators pushed suppliers to develop interchangeable modules that used field-proven components to bring more cost-effectiveness to early-production projects.
Diver/Diverless Installation Techniques Evolve
Even though deepwater exploration successes were yet unknown, much of the early
development of subsea-completion technology focused on diverless techniques, as operators
anticipated future deepwater requirements. Meanwhile, operators wasted little time in using
industry-proven diver-assist technology for installing their subsea-completion equipment in
shallow-water fields. By using surface hardware adapted to diver-assist underwater use, subsea-field completions progressed off North America from 1961 to 1970. Gradually, subsea technology
evolved as refinements and improvements were made on the basis of field experience.4
In the 1970s, an offshore pilot test of a deepwater (170 ft, 52 m) subsea system was conducted
on West Delta Block 73 in the GOM. While it was still accessible to divers, this test project
demonstrated the capabilities of diverless technology to install, operate and maintain a remote,
deepwater production system from field development through abandonment. In 1971, four
subsea-well completions producing to a jackup rig were installed using diver-assist technology in
250 ft (76 m) of water in the Ekofisk field. This marked the beginning of subsea-well completions
in the North Sea.
Wet vs. Dry Environments
During the 1970s and 1980s, both wet- and dry-environment technologies were developed. Wet
technology was developed and installed first, since it was easy to take off-the-shelf equipment
and install it in a subsea environment. However, the wet environment exacted a maintenance toll
that led to the development of dry-environment technology. This technique employs steel
chambers to provide a dry, 1-atmosphere environment for standard oilfield equipment.
Maintenance is performed by transporting men from a surface support vessel to the seafloor
chamber in a service capsule with a lift line and a life-support umbilical. Of the two technologies,
operators eventually opted for the wet environment. Since that time, it has been the only method
used.
Completions in U.S. Waters
In the mid-1950s, initial work began on the first remote underwater drilling- and completion-system project for the GOM. This marked the start of what was to become the petroleum industry's first subsea-wellhead completion. Installed in 1961 in the GOM's West Cameron Block
192 in 55 ft (16 m) of water, the completion set the stage for future production in deeper offshore
waters.
At about the same time, the first full-field subsea development occurred when 20 subsea satellite
wells with multiple-zone completions were installed and connected to a platform at California's
Conception field. Over the years, numerous projects have produced technical milestones in the
evolution of subsea-completion technology. Many of these projects are well known because of
the publicity that has surrounded them from their inception. U.S. achievements include the
following.

Mensa
During late 1997, work began on the Mensa project. Extreme water depth, high flow rates, and
erosion-resistance requirements made this project a pioneer in subsea-tree design and
installation equipment/technique.
Located on Mississippi Canyon Block 687 some 147 miles southeast of New Orleans, the Mensa
gas-well-development plan initially used three satellite wells with 10,000-psi working pressure and
guidelineless, diverless subsea trees, which produce to a subsea manifold 5 miles away. A single
63-mile flowline (world's longest offset from a host platform) carries the commingled production
from the manifold to a shallow-water platform.5
Troika
Clustered subsea-completion developments arrange wells around, but keep them separate from,
a central manifold structure. Such systems employ the drilling rig to install the inherently smaller
system components.
The Troika system is a subsea cluster-type development that was installed in 1997 in 2,700 ft
(823 m) of water in the GOM. The manifold is tied back to, and controlled from, Shell Oil's Bullwinkle platform (approximately 14 miles distant) by means of two 10¾-in. flowlines. Among other things, this project accomplished the cost-effective installation of a subsea cluster-system module (combined template/manifold) using the rig's drillstring. By maneuvering the carrier under the rig's moonpool, lifting the module off the boat using slings attached to the drillstring, and then lowering it onto preinstalled piles on the sea floor, the module was set in less than 12 hours.
Gyrfalcon
In March 1997, a gas well was abandoned in the mudline beneath 883 ft (269 m) of water on the
GOM Green Canyon Block 20. The well tapped marginally economical reserves, but completing it
was deemed uneconomical because its shut-in surface pressure exceeded 12,000 psi. Such
pressures dictated a structure for which the capital cost would exceed the value of the expected
reserves. Also, as of that time, no high-pressure subsea completions had ever been
accomplished.
Two years later, in 1999, another operator is attempting to breathe new economic life into the
well. The resulting completion will be the world's first 15,000-psi subsea well completion. Expected to come on stream in mid-2000, the achievement will be known as much for overcoming the previous operator's economics-killing costs as for its high pressures.
North Sea Development
Historically, the most active area for subsea completions has been the North Sea. Some 40% of
all subsea-tree installations worldwide have been done there.6 Both the U.K. and Norwegian
sectors have seen numerous subsea completions, but the most ambitious projects traditionally
have been in Norwegian waters.
Ekofisk
In 1971, North Sea subsea completions originated with the Ekofisk field early-production system.
It consisted of four subsea satellite wells producing to a jackup drilling rig modified for production
processing with offloading to shuttle tankers.7 This was the first North Sea field development
using subsea trees and the first use of subsea-well completions. Situated in 230 ft (70 m) of
water, the wells were completed with diver-assist technology that was well established by this
time.
Argyll
The development of the Argyll field in 250 ft (76 m) of water marked the world's first application of a floating production system (FPS) and the first production of oil from the U.K. sector of the North Sea. The field began in 1975 with four satellite subsea wells flowing to a subsea riser base
beneath an FPS vessel. Soon, more wells were added. Eventually, two additional fields were
produced over the projects life. The project was abandoned in 1992.
Buchan and Balmoral
In 1981, the Buchan field was developed in 390 ft (119 m) of water using an FPS and an
arrangement of satellite and template subsea wells tied to a subsea manifold. The Balmoral field
came on stream in 1986 using an FPS and subsea-well system that included satellite and
template wells producing to multiple subsea manifolds.
Snorre and Åsgard
In 1992, the Snorre field was developed in 1,100 ft (335 m) of water using subsea completions to produce to a tension-leg platform approximately 4 miles away. The Åsgard field, developed in the late 1990s, featured a total of 59 subsea completions grouped in 17 standardized four-well templates connected and tied back by pipeline bundles to floating production and processing vessels in 984 ft (300 m) of water. The Åsgard SO3 pipeline bundle includes closed-circuit, hot-water heating lines to ensure that hydrates and paraffins do not form. This technology was first
applied in the early 1990s in the Britannia gas field in the North Sea.
Numerous other subsea-well systems have been completed in the North Sea since these fields
were developed, and a variety of designs and configurations have emerged. Traditionally, U.K.
water depths have favored diver-assist technology, but some developments in the deeper waters
of the Norwegian sector required diverless technology.
Offshore Brazil
In addition to North Sea and U.S. offshore fields, much of the historical development of subsea-completion systems has been offshore Brazil, mostly in the Campos basin. Petrobrás, the state oil company, is the most active operator worldwide in terms of the total number of subsea completions, with 329 installations so far and another 250 planned for the 1999–2004 period.
In 1974, Brazil found itself in an ironic situation. The nation's daily production was decreasing in spite of an increase in reserves from new discoveries in the Campos basin. To correct this, early-production systems were planned to reduce the time to initial production, to better define the reservoir conditions, and to improve cash flow.8
"Subsea completions provided a means of achieving these needs," said Ricardo Juiniti, Senior Staff Petroleum Engineer for Petrobrás. The first completion was installed in 1977 in the Campos basin's Enchova field. This completion was located in 384 ft (117 m) of water and produced from
a single satellite well through a subsea test tree to the semi-submersible Sedco 135D, the first
drilling vessel in Brazil to be converted to a floating production facility. Two years later, the first
subsea tree was installed at 620 ft (189 m). From 1979 to 1981, seven early-production systems
with wet trees were installed to accelerate Campos basin production, while seven fixed platforms
were being built.
With discoveries in water depths deeper than 656 ft (200 m) came the routine use of floating
production vessels as economical and feasible alternatives to fixed platforms. Initially built to
accelerate production, many of these temporary-use vessels became permanent installations.
"Dry-environment technology was used in the Garoupa and Namorado fields beginning in 1979," Juiniti states. Initially, dry chambers were installed on eight wells in 394 to 525 ft (120 to 160 m)
of water, but the technology was deactivated in 1986 due to the high risk associated with
performing well interventions through the chambers and excessive operational costs associated
with using a dedicated vessel. Those wells are still producing with wet trees. The development of
all other Brazilian fields used wet-environment technology.

By the end of 1982, 32 non-TFL, 4-in. × 2-in., 5,000-psi wet trees had either been installed, were
being installed, or were on order. Four different manufacturers were used to allow a performance
comparison of the different tree designs. The first subsea manifold also was installed during
1982, and it introduced a variety of new options for subsea layouts. The manifolds were diver-assist installations and could accommodate up to eight wells.
By 1984, movement toward deeper waters necessitated the installation of subsea trees in depths
that exceeded the limits of divers. "This forced us to go to diverless installation methods," Juiniti
recalls. But problems with the pull-in of flowlines prompted a decision by management to pull in
tree flowlines in waters up to 985 ft (300 m) deep using diver-assist only, when feasible. Remote
flowline pull-in would be used only when diving was not possible or was too expensive.
Discoveries in the 1,300-ft (400-m) Marimba, 1,900-ft (600-m) Albacora, and 3,600-ft (1,100-m) Marlim fields caused Petrobrás to develop the Lay Away System for pulling in tree flowlines and,
later on, the guidelineless (GLL) subsea tree.
"We used this technology to install the first GLL tree in 1991 in 2,366 ft (721 m) of water and subsequently to develop the Marlim field," Juiniti continues. "The Marlim field development, which, when completed, will comprise 148 subsea wells producing to six floating production units, contributed to the development of a standardization program for GLL subsea trees as well as more effective and less costly diverless flowline pull-in. In 1994, the first installation of a GLL tree in 3,370 ft (1,027 m) of water was made, a world record at the time."9
Petrobrás also set the world record for a subsea-tree installation when it installed a subsea tree in early 1999 on a well in the Roncador oil field in the Campos basin. The subsea tree was set in 6,080 ft (1,853 m) of water and produces via a rigid riser to a dynamically positioned floating production, storage, and offloading vessel.10 All these achievements were obtained after massive investments in research through ProCAP 2000, Petrobrás' Technological Innovation Program on Deepwater Exploitation Systems, which aims at steep reductions in production costs and increased productivity in deepwater fields while enabling oil production at water depths greater than 3,281 ft (1,000 m).
According to Ronaldo Dias, head of the Campos Basin Drilling and Completion Div., the investment made by Petrobrás in subsea completions allowed the company to develop offshore fields in a very profitable way by reducing the time to initial production.
"The standardization of Christmas trees played an important role in terms of reduced cost and project optimization," says Dias. "Nature didn't give us much choice. We had to go for the oil, which was much deeper than we would have liked. Subsea completions seemed to be the best solution, although we had a hard time making them work properly sometimes. However, I think it was worth the effort."
The Future
The DeepStar project, an R&D consortium operated by Texaco that seeks the development of
low risk methods of producing oil and gas in the deepwater GOM, is espousing what could
become the deepwater-completions philosophy of the future. "If development of deepwater fields is to proceed, operators have to be convinced that they have commercially viable development options," said Texaco's Steve Wheeler, who has been involved with DeepStar since its inception. "These options must maximize the operator's ability to avoid large capital commitments prior to his verification of acceptable reservoir performance."
DeepStar's consortium of 21 operating companies and 40 supplier organizations is cooperating to find solutions to the challenges that face them in developing the deepwater Gulf of Mexico.11 "The project seeks to utilize partnering to jointly explore and research deepwater production technology, hardware and software, innovative tools, centralized processing facilities, production-sharing operations, and other innovative concepts," stated Wheeler.
"We believe that subsea completions will play a key role in helping manage risk in future deepwater-field development. Since about 60% of the cost of a subsea development is built into the well cost, they can be scaled up or down quickly. Therefore, they offer operators a way of managing their costs in new fields where reservoir performance, production rates, and size are unknown. They can't do that with other field-development concepts, such as high-cost deepwater FPSs," Wheeler said. "We've named this the inchworm philosophy. Unlike a lot of deepwater field-development philosophies, our research indicates that operators should progress slowly by drilling and completing only a few wells initially in a new deepwater field. Next, they should place them on production using subsea completions and tiebacks to existing GOM infrastructure in order to establish an early-production revenue stream. Once the size of the reservoir and other important performance parameters have been determined, the operator can then expand to the level of development that is deemed appropriate. This philosophy protects capital by lowering overall risk until the field's parameters are fully known." Also, Wheeler and his DeepStar participants are pushing for the standardization and modularization of subsea-completion components because it speeds up field development and often reduces costs.
Most operators agree that subsea-completion standardization is an irreversible trend. Like
Wheeler, they believe standardization of interfaces will, for example, allow the replacement of a
damaged subsea tree with a new one off-the-shelf without lengthy interruptions to the well's production, thereby saving revenue that would be lost while waiting on a tailor-made tree.
"We want standard interfaces between vendor components that will allow us to prebuild subsea trees," Wheeler said. "Then we will stock them so they are ready when we need them. That's cost-effective for us, and it allows our vendors to better manage their manufacturing efforts."
Traditionally, the decision to develop an economically marginal oil or gas field and the choice of a production system have been governed largely by the presence of available infrastructure, existing technology, and the cost-effectiveness that can be obtained by marrying both. However,
emerging technology is now playing a role equal to or greater than existing infrastructure and cost
in the development of 5- to 20-million bbl fields. Looking to the future of subsea completions
means looking back at the technology that has been proven during the past four decades and
then finding better, more cost-effective ways of applying this technology to solve new, more
complex challenges.

Completions
The Reservoir/Wellbore Connection
Well completions are as old as the petroleum industry itself. In fact, on 27 August 1859, the oil from Colonel Drake's 69½-ft well in Titusville, Pennsylvania, had to be pumped to the surface. Pumping that oil was an example of the outflow phase of well completions, that is, the methods of transmitting fluids to the surface. This article traces the evolution of inflow, the phase of completion operations that deals with opening the wellbore to the producing formation. (Outflow will be discussed in a later installment of this series.)
In 1859, the petroleum industry consisted of two oil wells in the U.S. producing a total of 2,000 bbl
and having a combined value of U.S. $40,000. Suman,1 in his 1921 book Petroleum Production Methods, discussed the vastness of an industry that had expanded to 35,000 new wells the previous year in the U.S. alone, at a cost of approximately U.S. $575 million. In the preface, he wrote, "It is quite probable that, as time goes on, the production of petroleum per well in the U.S. will gradually decline to the point where operators will become very much interested in doing things in a more efficient manner." That the time would come when operators would give more than a little attention to economy and efficiency was unquestionable. But Suman and those who worked alongside him probably never could have imagined that this need would be driven by a global, low-price market for the world's chief energy source.
There is no question that economy and efficiency drive today's completion operations. Neither is there any question that creativity, perseverance, and some risk-taking by completions engineers (both in developing new technologies and in continually finding new ways to apply existing techniques) have enabled, and will continue to enable, the petroleum industry to produce oil and gas efficiently while lowering the cost per unit and raising net present value. Recently, completion operations have become more specialized, and the perception has evolved from the idea of "completion equals plumbing" (i.e., seals, tubulars, valves, and packers) to "completion equals well optimization."2 This change certainly is evident in inflow technologies.
Completion Basics
The economic success of a well depends in large part on making the optimum connection
between the wellbore and the reservoir system. That optimum connection must perform three
functions.
1. Let oil into the well, where it can then flow or be pumped to the surface.
2. Keep overlying or underlying water out of the well.
3. Keep the formation out of the well.

Although completion has never been universally defined, this concept is its basis. Neither is
there universal agreement on the point at which completion begins. Probably the most widely
held view is that completion begins when the bit first makes contact with a productive formation.
Because formation damage that affects later productivity begins at this point, completions engineers stress the importance of planning wells, because the steps that lead to a successful well are complex and interconnected. A multidisciplinary team working cooperatively and interactively can
avoid expensive misunderstandings and environmental problems that could result from
improperly executed operations. Completion design is a function of numerous reservoir
characteristics, such as permeability, porosity, saturation, pressure, stability, and
compartmentalization. According to King,3 a noted authority on completions, the key to a good initial completion is to collect and assess as much data as possible relative to these interrelated characteristics at the earliest possible time.
Porosity and Permeability
Porosity and permeability govern the reservoir's storage capacity and the pathways of flowing fluids. Porosity is the void space between the grains where fluids can be stored. Permeability is a measure of the ability of fluids to flow through the formation. The higher the permeability, the more easily a fluid can flow through the rock matrix. Most productive formations have permeabilities between 0.001 and 1,000 md.
Porosity does not always relate directly to permeability. Materials, such as shales and some
chalks, for example, may have very high porosities but low permeability because they lack
effective connection of the pores. When evaluating a reservoirs economic potential, a porosity or
permeability cutoff level often is used to establish minimum pay requirements. This level can be
determined from porosity logs and flow tests.
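Permeability enters rate estimates through Darcy's law. The sketch below uses the common oilfield-unit form for linear, single-phase flow, in which the constant 1.127 x 10^-3 converts millidarcies, square feet, psi, centipoise, and feet into barrels per day; the inputs are assumed illustration values only.

# Darcy's law, linear single-phase flow, oilfield units (sketch).
# q [bbl/day] = 1.127e-3 * k[md] * A[ft^2] * dp[psi] / (mu[cp] * L[ft])
def darcy_rate_bbl_per_day(k_md, area_ft2, dp_psi, mu_cp, length_ft):
    return 1.127e-3 * k_md * area_ft2 * dp_psi / (mu_cp * length_ft)

# Assumed inputs: 100-md rock, 500-ft^2 flow area, 300-psi drop over 1,000 ft, 2-cp oil.
q = darcy_rate_bbl_per_day(k_md=100.0, area_ft2=500.0, dp_psi=300.0, mu_cp=2.0, length_ft=1000.0)
print(f"Flow rate ~ {q:.1f} bbl/day")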
Saturation
In almost every porous formation, there is at least a small amount of water saturation. The
remaining fraction of the pore space that contains oil or gas is the hydrocarbon saturation. In
general, the most productive parts of a reservoir usually are those with the higher hydrocarbon-saturation values. Water saturation also may be a key determinant of pay because extremely high
water saturation could indicate hydrocarbon depletion or movement of an aquifer into the pay.
Closely related to porosity and saturation are recoverable hydrocarbon volumes. Not all oil in
place can be recovered. The amount of oil that will flow from a rock depends on the size of the
pore spaces, the oil saturation and type, and the amount of energy that is available to push the oil
toward the wellbore.
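Porosity and saturation feed directly into the standard volumetric estimate of oil in place. The sketch below uses the usual 7,758 bbl per acre-ft conversion; the area, thickness, porosity, saturation, and formation volume factor are assumptions chosen only to show the arithmetic.

# Volumetric original oil in place (OOIP), a standard sketch.
# N [STB] = 7758 * A[acres] * h[ft] * porosity * (1 - Sw) / Bo
def ooip_stb(area_acres, thickness_ft, porosity, water_saturation, bo_rb_per_stb):
    return 7758.0 * area_acres * thickness_ft * porosity * (1.0 - water_saturation) / bo_rb_per_stb

n = ooip_stb(area_acres=640.0, thickness_ft=30.0, porosity=0.20,
             water_saturation=0.30, bo_rb_per_stb=1.2)
print(f"OOIP ~ {n / 1e6:.1f} million STB")
# Recoverable reserves are this figure multiplied by a recovery factor,
# which is always well below 1.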
Pressure
Reservoir pressure, the pressure that the reservoir fluids exert on the well at the pay zone, dictates how much fluid ultimately is recovered. Reservoir pressure varies throughout the
productive life of a reservoir. Initial reservoir pressure is the pressure at the time of discovery, but
there are other forces involved. These forces, or drives, include solution-gas drive, gas cap, and
waterdrive. While many pressure regimes are present and important during the life of a well,
pressure differential toward the wellbore is essential for fluid flow during completion and
production.
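The link between drawdown and rate is often summarized with a productivity index, q = J(p_r - p_wf). The sketch below uses assumed values to show how depletion of reservoir pressure shrinks the rate available at a given bottomhole pressure.

# Straight-line inflow (productivity-index) sketch with assumed values.
J = 1.5                # assumed productivity index, bbl/day per psi of drawdown
p_reservoir = 3000.0   # assumed average reservoir pressure, psi
p_wf = 2200.0          # assumed flowing bottomhole pressure, psi

rate_initial = J * (p_reservoir - p_wf)
rate_depleted = J * (2500.0 - p_wf)   # same p_wf after the reservoir depletes to 2,500 psi

print(f"Rate at 3,000 psi reservoir pressure: {rate_initial:.0f} bbl/day")
print(f"Rate after depletion to 2,500 psi:    {rate_depleted:.0f} bbl/day")
# Falling reservoir pressure is what eventually pushes a well toward
# artificial lift or stimulation.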
Stability
Reservoir stability can affect the initial completion as well as repairs or recompletions throughout
a reservoirs life. Many geologically young formations lack sufficient strength for formation
coherency during all phases of production. These younger rocks often require stabilizing, or sand-control, types of completions to support the formation while allowing it to flow fluids.
Compartmentalization
Compartmentalization is the division of a reservoir into compartments that are partially or fully
pressure isolated by faults, permeability or porosity pinchouts, folding, shale streaks, barriers, or
other factors. The more that is known about these reservoir characteristics and their interactions
with one another, the better the chances of selecting the optimal pay, deciding where to place the
wellbore, and establishing the critical link between the wellbore and the formation.3
Types of Completions
There are three primary inflow completion types: natural, stimulated, and sand control. Natural
completions are those in which little or no stimulation is required for production. Sandstone and
carbonate systems with good permeability and mechanical stability are prime candidates for
natural completions. Stimulated completions generally are applied to improve the natural
drainage patterns of hard, low-permeability formations or to remove barriers within the formation
that prevent easy passage of fluids into the wellbore. Acidizing and hydraulic fracturing are
examples of stimulated completions. Sand-control completions are performed in young,
unconsolidated or less mechanically competent sandstones to support the formation while
allowing it to flow fluids.
Letting Oil In: The First Priority
Originally well completion was thought to mean nothing more than drilling into the pay and letting
it flow. However, it quickly became apparent that oil does not have any inherent ability to expel
itself from a reservoir, but rather must be displaced from a porous formation to a wellbore.4 Thus,
the concept of creating and stimulating paths of least resistance to the wellbore evolved.
Nitroglycerin Shooting
As early as the 1860s, hard, tight oil sands in Pennsylvania were being shot with gunpowder,
then nitroglycerin, to rubblize or shatter the rock at the bottom of the wellbore. The practice of
shooting explosives increased flow, but the increase was often temporary, and the wellbore was
often destroyed. The process was also dangerous. Nonetheless, explosive fracturing continued to
be the basic method of stimulating wells until the 1930s.
Early Acidizing
Acid was first used for well stimulation in 1895 by the Ohio Oil Co.6 Hydrochloric acid (HCl) was
pumped into the microscopic flow channels of limestone formations to dissolve the rock and
enlarge the passages.7 The treatment was effective, but the well casing was severely corroded.
Acidizing declined in popularity until the 1930s, when inhibitors were added to the acid to protect
tubulars and treating equipment.
Perforating
Perforating creates a direct link between the wellbore and the producing formation by placing
holes through the casing and the cement sheath that surrounds it.
In the early 1900s, mechanical puncturing methods were tried. These included the single-knife
casing ripper, which involved a mechanical blade that rotated to puncture a hole in the casing.2
The first perforating mechanism used on a large scale was the bullet gun in 1932.3 In bullet
perforating, a hardened-steel bullet is fired from a very short barrel. The resulting perforations cause little damage to the cement sheath and casing; however, the perforation depth is generally shallow.
Today, shaped-charge, or jet, perforating is the accepted industry standard. In this method, a
pencil-like jet of gas formed by detonating explosives in a cone-shaped charge penetrates the
casing and cement at high velocity and provides clear access to the producing formation.
Modern Perforating
Today, shaped-charge-perforating programs are tailored to completion types and evaluated
based on how effectively they accommodate well geometry and reservoir properties. Determining
factors for success include the proper differential between reservoir and wellbore pressure and
gun selection, which determines shot geometry. Shot geometry is characterized by perforation
length and diameter, density (i.e., shots per foot), and phasing (angular separation).
Natural, stimulated, and sand-control completions each have their own perforating requirements.
Custom-built guns are often designed for special completion objectives.9
Underbalance and Extreme Overbalance
Perforating produces a zone of reduced permeability, referred to as a crushed zone, around the
perforation. In the late 1950s, Kruger et al. proved the effectiveness of underbalance perforating
(i.e., with the pressure in the wellbore lower than that in the formation) for removing the
crushed zone and improving flow channels.3 Investigation of underbalanced perforating continued
for 20 years, then boomed in popularity in the 1970s, when it was tied to innovative designs for
tubing-conveyed perforating.
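The underbalance itself is simple arithmetic: the static pressure of the wellbore fluid column (roughly 0.052 x density in ppg x depth in ft) subtracted from the formation pressure. The sketch below uses assumed numbers only.

# Underbalance check before perforating (sketch; all values assumed).
depth_ft = 9000.0
completion_fluid_ppg = 8.6        # assumed brine density, pounds per gallon
reservoir_pressure_psi = 4300.0   # assumed formation pressure

p_wellbore = 0.052 * completion_fluid_ppg * depth_ft
underbalance = reservoir_pressure_psi - p_wellbore

print(f"Wellbore pressure: {p_wellbore:.0f} psi")
print(f"Underbalance:      {underbalance:.0f} psi")
# A positive underbalance means the initial surge flows toward the wellbore,
# which helps clean perforation debris and the crushed zone.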

Traditional Energy Resources

Energy, Resources and Fuel


Energy: Capacity to do work, to cause change in a system.
Resources: Items that can be employed for a useful purpose.
Fuel: Energy resource that consists of matter that stores energy in a usable form (e.g. wood, oil).

Sources of Energy in General


o Energy directly from the sun: solar energy can be converted to
electricity (through photo-voltaic solar panels) or be used to heat water or
a house (through solar collectors).
o Energy directly from gravity/tides: the moon (and to a lesser extent the
sun) causes tides on Earth due to gravitational pull. The daily incoming
and outgoing tides in some bays or estuaries are used to drive turbines.
o Combined solar and gravitational energy: Wind drives sails, wind mills
and wind turbines. Water condenses in cooling air eventually causing rain
to fall. Water at the Earth's surface flows and drives turbines and water
wheels.
o Photosynthesis: The chlorophyll in plants uses light to produce sugar as
an energy resource. Burning plant material uses up O2 and creates CO2,
H2O, C (soot), other gases and heat.
o Chemical reactions: some inorganic chemical compounds burn to
produce light and heat (e.g. dynamite).
o Fossil fuels: oil and coal are made of organisms that lived a long time ago.
Burning fossil fuels has the same effects as burning plants.

o Nuclear fission: atoms of radioactive elements spontaneously split to
create energy.
o Earth's internal heat: The Earth's cooling and radioactive decay of some
elements produce heat near the surface. Water is heated underground and
can be used as hot water directly or to produce steam and drive turbines.

The most widely used energy resources are fossil fuels (oil, gas, coal), nuclear
power (nuclear fission), and moving water; hydrocarbons (oil and natural gas)
alone provide more than 50% of the world's energy (60% in the U.S.). The first two
are nonrenewable energy resources, while moving water, wind, and Earth's
internal heat are renewable energy resources.

Sources of Energy that we currently use


Oil and Gas
Oil and gas are hydrocarbons, ring or chainlike molecules of C and H, and
are organic chemicals.

How Do Oil and Gas Form?


o the primary source material is dead algae and plankton
o dead organisms sink and settle on a lake, lagoon, or ocean bottom
o they mix with clay in a quiet environment (no rivers!) to create a muddy ooze
o oxygen-poor water (an anaerobic environment) keeps the material from decaying quickly
o lithification produces black organic shale (eventually the source rock for oil)
o at rising temperature (about 100°C), the organic material transforms to kerogen (waxy molecules)
o kerogen turns into a mixture of tar, oil, and gas; the shale is now called oil shale
o at T > 160°C, any remaining oil breaks down to form gas, and at T > 250°C the remaining organic matter transforms into graphite
The Oil Window
The oil window is the range of temperature conditions and depths at which
hydrocarbons form. In regions with a geothermal gradient of 25°C per km,
oil occurs only at depths of less than about 6.5 km. Gas can be found at
greater depths. The length of hydrocarbon chains decreases with
increasing depth.
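A quick sketch of where those depths come from, using the 25°C/km gradient quoted above and an assumed 15°C surface temperature:

# Depth of the oil window for the gradient quoted in the text (sketch).
surface_temp_c = 15.0       # assumed mean surface temperature
gradient_c_per_km = 25.0    # geothermal gradient from the text

def depth_km(target_temp_c):
    # Depth at which the target temperature is reached
    return (target_temp_c - surface_temp_c) / gradient_c_per_km

print(f"Oil generation (~100 C) starts near {depth_km(100):.1f} km")
print(f"Oil cracks to gas (~160 C) below about {depth_km(160):.1f} km")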

Where Are the Largest Oil Fields?


Large oil fields can be found in the Canadian Arctic Ocean, Gulf of
Mexico, North Sea, Kara Sea/Northern Russia and the Persian Gulf
(largest oil fields).
With 7 million barrels per day (1 barrel = 42 gallons), the U.S. is the largest
consumer worldwide, using about 25% of the produced oil (and
contributing about 25% of the world's CO2 production). U.S. oil reserves
account for only 4% of the world's reserves, so the U.S. must import more
than 50% of the oil it uses.

Making an Oil Reserve


Four items are required to make an oil reserve:
o source rock
o reservoir rock
o oil trap
o seal rock
Oil traps are very important because they accumulate oil, making
production economically feasible (extracting oil directly from oil shale is
too expensive!).
Possible oil traps:
o anticline traps
o salt-dome traps (salt is less dense than surrounding rock and rises in diapirs)
o fault traps
o stratigraphic traps

Natural Gas
More abundant than oil; volatile, short-chain hydrocarbons (methane,
ethane, propane, butane). Gas burns more cleanly than oil. Burning gas
produces only CO2 and water, while burning oil produces more complex
organic pollutants.
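For methane, the main component of natural gas, complete combustion can be written as

\[ \mathrm{CH_4} + 2\,\mathrm{O_2} \rightarrow \mathrm{CO_2} + 2\,\mathrm{H_2O} \]

which is why clean-burning gas yields essentially only carbon dioxide and water vapor.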

Oil Exploration and Production then and now


THEN: Until the first half of the 19th century, the only available oil came
from seeps as rock oil. Oil was found by sheer luck and not by systematic
search. Oil was used in lamps, to grease wheels and for medical purposes.
NOW: Nowadays, oil companies use the reflection seismic method to
find oil reservoirs. Vibrating trucks or explosives are used to generate
seismic waves that travel through the ground. The waves reflect off internal
interfaces (boundaries of buried layers) and show up as enhanced signals
at a line of seismic receivers. The recorded waveforms can tell us about
the geometry of the buried layers. This is a very expensive method, but
not as expensive as drilling a possibly "dry" hole (a hole can cost $10 million!).
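As a sketch of the depth estimate behind the method (a single constant-velocity layer is assumed here; real surveys use full velocity models built during processing):

# Converting a picked two-way reflection time to reflector depth (sketch).
two_way_time_s = 2.0                 # assumed two-way travel time from the record
average_velocity_m_per_s = 3000.0    # assumed average velocity down to the reflector

depth_m = average_velocity_m_per_s * two_way_time_s / 2.0
print(f"Reflector depth ~ {depth_m:.0f} m")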
Simply pumping the oil out of the ground retrieves only about 30% of the oil in
the reservoir rock. Secondary recovery techniques (e.g. forcing oil to
migrate to a well by pumping steam into the ground) produce up to 50%
more oil (i.e. more than 50% is left in the ground!). Crude oil goes into
distillation columns in refineries, where it is cracked by heating. The residue
(heavier molecules) goes to the plastics industry.
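The recovery figures above can be checked with simple arithmetic. The sketch below reads "up to 50% more oil" as up to half again the primary volume; that reading, like the normalization, is an assumption made only for illustration.

# Arithmetic behind the recovery figures, under an assumed interpretation.
oil_in_place = 1.0           # normalized volume of oil originally in the reservoir
primary = 0.30 * oil_in_place
secondary = 0.50 * primary   # "up to 50% more oil" read as 50% of the primary volume
recovered = primary + secondary
left_in_ground = oil_in_place - recovered

print(f"Recovered: {recovered:.0%}, left in the ground: {left_in_ground:.0%}")
# About 45% recovered and 55% remaining, consistent with
# "more than 50% is left in the ground."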

Petroleum History
The petroleum industry has a fascinating history. Oil and gas drive the world economy, which
relates to politics and power. The availability of fuel, or lack thereof, played a major role in both
World Wars. The political story is a backdrop to the enormous technological advances inspired by the industry, technologies that have resulted in advances far beyond the oil and gas industry
itself.
Except for the Dardanelles/Gallipoli campaigns, the extensive combat operations in the Middle
East during World War I have been largely overlooked in documentary programs. Given the
historical significance of the Ottoman Empire's demise in 1918, and the ongoing importance of
Middle Eastern oil reserves to Western economies, a close study of this conflict provides two
important lessons:
1. The Treaty of Versailles, agreed to by the Western Powers in 1919, paved the way for military
and political chaos in the Middle East, which continues to this very day.
2. Oil reserves in the Middle East became an important strategic concern for Western Powers,
helping to justify their economic, diplomatic and military interference in the region.
After the end of World War I, most of the Ottoman Empire was carved up into spheres of
influence, controlled mostly by the British and French. The remaining territories became the
modern state of Turkey in 1923 after a five-year struggle by Turkish nationalists against
Western domination.
With little regard for cultural, historical, religious and demographic considerations, the West
sponsored the creation of several new nations: Iraq, Syria, Lebanon, Palestine, Jordan and Saudi
Arabia. Thus, a tinderbox was built from Western greed, igniting a multitude of wars, revolts,
coups and military occupations that truly have made the defeat of the Ottoman Empire little more
than a hollow victory. The factual accuracy of this work may be questionable, but the depiction of Colonel T.E. Lawrence's exploits in the Arabian Desert during World War One left an indelible impression. Study the great war against the Ottoman Empire and the subsequent creation of artificial spheres of influence by France and Great Britain, and realize that a direct relationship exists between U.S. troops fighting and dying in Iraq today and the political aftermath of World War I in the Middle East.

While many outstanding programs about the Great War have already been produced, they
usually focus on the Western Front and the terrible waste of humanity in the trenches of France.
When fighting in the Middle East is mentioned, the Gallipoli campaign and the exploits of
Lawrence in the Arab Revolt are the main topics covered.
However, the Middle East struggle takes in an expansive and complex theater of operations,
ranging from the Dardanelles Straits to the oil fields in Baku, on the Caspian Sea. The battles, military and political, feature several intriguing key players: Turkish Minister of War Enver
Pasha, British General Edmund Allenby, German General Liman von Sanders, Arabian Prince
Feisal, and Turkish General Mustafa Kemal. The Ottoman Empire became the target of invasion
not only by the British, but also French, Russian, Greek and Armenian forces. While desperately
fighting off the invasion at Gallipoli, the Ottoman Army also faced Russian invaders from the east,
and British-East Indian troops in both Palestine and Iraq. How the Turks, with fewer men, artillery, and resources, managed to hold out over four years of intensive combat is truly a
remarkable story.
When the battles stopped on the Western Front in November 1918, the war in the Middle East
went on for another four years of brutal combat, fought in temperatures ranging from 150 degrees
in Iraq to 30 degrees below zero in the Caucasus. A Turkish nationalist movement, led by
Mustafa Kemal, rejected the Anglo-French plan to carve up the Ottoman Empire among
themselves and their allies. A new Turkish Army rose from the ashes of defeat. First, it drove
Armenian forces out of eastern Turkey, and then turned back French and Armenian troops in the
south. Finally, Kemal launched a counter-offensive against a Greek Army invading from the west, all of this while Europe began to recover in its newfound peace.
Turkey fought back to reclaim its homeland, much to the surprise of Europe. But France and
Great Britain found other lands to dominate with post-war politics. New nations were created,
their borders dictated by European greed for land and oil. Without much regard for the region's
history, culture, religion and ethnicity, artificial states emerged: Palestine, Syria, Lebanon, Iraq,
Iran, and Saudi Arabia. These nations secured the interests of France and Great Britain, but not
the interests of the Muslim inhabitants: Sunnis, Shias, Arabs and a host of others.
Thus, the stage was set for political instability and violent struggle in the Middle East that
continues to the present day. Western interests continue to collide with Muslim factions that are
fueled by hatred toward the West. It is difficult to ignore some parallels between the distant and
recent past. Places such as Basra, Baghdad, Mosul and Gaza are the scenes of struggle and
foreign occupation, just as they were nine decades ago. The civil war that now rages in Iraq is
reminiscent of Muslim revolts against British troops in 1920, and again in 1925.
The West continues to intervene in the Middle East, to support friendly governments and ensure
the flow of oil to European and U.S. economies. Most recently, the United States sent troops to
Iraq, but the same thing happened back in November 1914. When Britain declared war against
the Ottoman Empire, the very first thing it did was to land troops near Basra to protect the oil
fields in nearby Iran. Later in the war, the British captured Mosul, just as U.S. forces did in 2003, to make certain that rich Iraqi oil reserves were covered by the Union Jack.
To understand more clearly why the Middle East remains embroiled in strife, we need only examine the historical record. Blood and Oil chronicles the immensity of a horrific military
struggle and its tremendous impact on the entire world. The seeds of discontent in the Middle
East were sown 90 years ago, via military conquest and political domination from Europe.
Unfortunately, those seeds have grown into a fearful harvest that continues to feed radicals,
fanatics and terrorists in the Muslim World.

The Country of Qatar


A small country on the Arabian Peninsula rich in natural gas deposits, Qatar is pouring billions into colleges, business parks, and the recruiting of world experts who will, ideally, help it evolve from
a petrodollar nation governed through patronage and clan ties to an energetic, self-sufficient
member of the tech economy.
The effort is being watched closely throughout the Arab world and beyond, as each year brings
heightened concerns that the region's historical profits from oil and gas may one day dry up.
That prospect is driving many Middle Eastern economies, such as Dubai in the United Arab Emirates,
to try to develop the technological prowess needed to make a successful transition to other
economic engines.

Qatar, officially State of Qatar, independent emirate (1995 est. pop. 534,000), 4,400 sq
mi (11,400 sq km), on a largely barren peninsula in the Persian Gulf, bordering Saudi
Arabia and the United Arab Emirates (S). The capital is Doha. The economy of Qatar is
dominated by oil and natural gas, which account for 70% of export income. Oil and gas
revenues have been used to diversify the economy, including the development of
chemicals, steel, cement, and fertilizer industries and banking. A minority (20%) of the
population are Qataris (Arabs of the Wahhabi sect of Islam); the rest are largely other
Arabs, Pakistanis, Indians, and Iranians. Arabic is the official language, but English is
also widely spoken. The country is a monarchy.
History
Qatar was ruled by Bahrain from the 1700s until the mid-1800s, when Great Britain and
the Ottoman Empire began vying for control of the peninsula. It was a British
protectorate from 1916 until 1971, when it became independent. In the 1980s and 90s
Qatar had territorial disputes with Bahrain and Saudi Arabia. During the Persian Gulf
War (1991) international coalition forces were deployed on Qatari soil.
The present emir, Sheikh Hamad bin Khalifa al-Thani, came to power in 1995 after
ousting his father. In the late 1990s Sheikh Hamad eased press censorship and promoted
ties with Iran and Israel. Since 2001 Qatar has allowed U.S. use of the Al Udeid air base,
and the headquarters for the U.S. invasion of Iraq (2003) were in the country.
Qatar's Oil & Gas Industry:
The country is small, but it has natural gas to burn.
*Qatar has proven reserves of more than 14 trillion cubic feet of natural gas, behind only
Russia and Iran.
*By 2010, it will produce 77 million tons of liquefied natural gas (gas is liquefied for
transport), one-third of the world supply.
*By 2015, 20 billion tons of gas-to-liquids fuel for running cars will be coming out of Qatar.
Copyright (c) 2007 Globaldrill Bay Oil & Gas Services in Partnership with Qatar Oil & Gas
Industry.
