
Opinion

An interview with board member R. Stanley Williams


George Grüner¹ and James Tyrrell²
¹ Department of Physics and Astronomy, University of California, Los Angeles, CA 90095, USA
² IOP Publishing
E-mail: tmr@iop.org

Published 4 December 2014


Translational Materials Research 1 (2014) 020205
doi:10.1088/2053-1613/1/2/020205

In this interview with Translational Materials Research, R. Stanley Williams, founder of Hewlett Packard’s foundational physical science initiatives, outlines his team’s progress in applying fundamental research to the market, and explains why the end of Moore’s law is a great thing for discovery and invention.

You joined Hewlett Packard (HP) after spending 15 years working as a professor. What
made you move from academia into industry?

There were a number of reasons. At that point in my career at UCLA, I had graduated around 25
PhDs. Some would contact me and tell me what they were doing, but in general I had no idea what
type of jobs most were getting, and I had no idea if the careers that they could find were fulfilling.
I got into this ethical quandary. I started asking myself—can I look young students in the face and
tell them that they should spend the next five years of their lives with me getting a PhD, when I
knew so little about what was out there for them? I really wanted to understand what was available
in industry, and I realized that I had to go out and experience it for myself. My goal was to work on
the other side of the supply and demand curve by creating more science jobs in industry.
I hoped that if I could be successful at Hewlett Packard and put together a research group that
did fundamental scientific research and took it to the point where it made a substantial impact in
the marketplace, then perhaps this could re-ignite industrial research and other companies would
want to emulate what we were doing.
It’s been nearly 20 years since I first started what we then called the Quantum Science Research
Initiative at HP Labs, and in that time we’ve spun out a couple of small companies that have had
success in the marketplace. But what I was doing, to use a baseball analogy, was ‘swinging for
the fences’. I wanted to have something big that would grab people’s attention and get them to
understand that long term fundamental research can be a valuable corporate strategic asset.



2053-1613/14/020205+5$33.00 © 2014 IOP Publishing Ltd

Two of the major breakthroughs that have come out of this effort are the memristor and
your group’s work on silicon photonics for interconnects. Can you tell us more about
these discoveries?

A memristor is an electronic circuit element whose resistance changes depending upon
the history of the system. It’s a mathematical discovery that Leon Chua made back in 1971,
but for whatever reason it took a very long time for anyone to understand that there were
physical systems that had exactly this property.
Once I was able to marry a physical system with Chua’s insights, we were able to
mathematically model, for the first time, physical devices previously called resistive switches. We then put
those models into electronic circuit simulators in order to design and test large circuits utilizing
memristors, which really kick-started things for us. Thanks to these validated mathematical
models, we can now think seriously about building extraordinarily large memories based upon
memristors.
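The interview doesn’t give HP’s device equations, but the linear ion-drift model the HP group published (Strukov et al., Nature, 2008) is a minimal sketch of what “resistance that depends on the history of the system” means; the parameter values below are illustrative, not those of any specific HP device.

```python
import math

# Linear ion-drift memristor model (Strukov et al., Nature 2008).
# Parameter values are illustrative, not those of a specific HP device.
R_ON, R_OFF = 100.0, 16000.0   # ohms: fully doped / fully undoped resistance
D = 10e-9                      # m: thickness of the oxide film
MU_V = 1e-14                   # m^2 s^-1 V^-1: dopant (ion) mobility

def simulate(v_of_t, dt, steps, x0=0.5):
    """Integrate the state x (doped fraction, 0..1) under a voltage drive v(t)."""
    x, history = x0, []
    for k in range(steps):
        v = v_of_t(k * dt)
        r = R_ON * x + R_OFF * (1.0 - x)   # resistance depends on the state, i.e. the history
        i = v / r
        x += MU_V * R_ON / D**2 * i * dt   # dx/dt = mu_v * R_on / D^2 * i(t)
        x = min(max(x, 0.0), 1.0)          # hard bounds at the device edges
        history.append((v, i, r))
    return history

# A sinusoidal drive traces the pinched hysteresis loop that is the
# fingerprint of a memristor: i and v always cross zero together.
hist = simulate(lambda t: math.sin(2 * math.pi * 10 * t), dt=1e-4, steps=2000)
rs = [r for _, _, r in hist]
print(min(rs), max(rs))   # the resistance wanders with the drive history
```

Because the state persists when the drive is removed, the same element works as non-volatile memory, which is what makes the large memristor memories discussed above plausible.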
The end of Moore’s law is causing us to completely rethink the architecture of computers, and
I think that memristors will play a huge role in the future of computation. If you have an
inexpensive way of storing and rapidly recalling all of your data, instead of putting it on magnetic
hard disks, then you can dramatically improve the energy efficiency of a computing system, as
well as improve the throughput, or the speed at which a computation can be performed.


Improving system efficiency is important, and that also feeds into our work on silicon
photonics.
Something like 80% of the energy used in modern data centers is expended in moving the data
around rather than in performing computation. There are thousands of miles of copper wires in
modern data centers, and it’s been clear for a long time that the application of photonics could
both speed up the movement of data and also decrease energy dissipation.
The science has been understood for 50 years. The problem has been cost. To get photonic
interconnects to the point where they could be utilized in computer systems, we had to reduce
the cost by a factor of a thousand. The way to do that was through integration.
We wanted to use the same processes and materials as the electronics industry so we could
ride a Moore’s law curve for photonics, but there were a lot of issues to understand. Initially, we
looked at photonic crystals and plasmonics, but it was difficult to make uniform structures, even
using modern fabrication techniques, and so we ran into some roadblocks.
Success came when we simplified our approach. We’re now making structures that consist of
rings of silicon with waveguides that pass alongside. If there’s a resonance between the
wavelength of light running along the waveguide and the ring, or in other words if there is an integer
number of wavelengths going around the ring, we can pull the light off of the waveguide onto
the ring or we can push light from a ring onto a waveguide—it’s essentially a photonic switch.
We are working with 3–10 micron diameter rings, but we can tune the effective size of the rings
by heating them or letting them cool, which makes the structure expand or contract, or by building
a capacitor into the configuration and using charge to change the dielectric constant of the ring.
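The resonance condition described here, an integer number of wavelengths fitting around the ring, can be sketched numerically. The effective index and thermo-optic coefficient below are generic textbook values for silicon waveguides, not parameters of the HP devices.

```python
import math

# Ring resonance: an integer number m of wavelengths fits around the ring,
#   m * lambda = n_eff * pi * d.
# n_eff and dn/dT are generic textbook values for silicon, assumed here.
N_EFF = 2.4          # assumed effective index of the guided mode
DN_DT = 1.86e-4      # approximate thermo-optic coefficient of silicon, 1/K

def resonant_wavelengths(d_um, lo_nm=1500.0, hi_nm=1600.0, n_eff=N_EFF):
    """Resonant wavelengths (nm) of a ring of diameter d_um inside a band."""
    optical_path = n_eff * math.pi * d_um * 1000.0   # nm per round trip
    m_lo = int(optical_path // hi_nm) + 1            # smallest mode number in band
    m_hi = int(optical_path // lo_nm)                # largest mode number in band
    return sorted(optical_path / m for m in range(m_lo, m_hi + 1))

def thermal_shift_nm(lam_nm, delta_T, n_eff=N_EFF):
    """First-order resonance shift from heating: d(lambda)/lambda = dn/n_eff."""
    return lam_nm * DN_DT * delta_T / n_eff

res = resonant_wavelengths(d_um=5.0)     # a ring in the 3-10 micron range
fsr = res[1] - res[0]                    # free spectral range between resonances
print(res, fsr, thermal_shift_nm(1550.0, 10.0))
```

With these assumed numbers, a 5 micron ring has resonances tens of nanometres apart in the telecom band, and a ~10 K temperature change moves a resonance by roughly a nanometre, which is the kind of thermal tuning knob described above.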


This allows us to shoot as many as 64 different wavelengths down a single waveguide and
perform dense wavelength division multiplexing by using the ring resonators to make the
wavelength selection.
We’re able to put rings that respond to particular wavelengths at a certain location on a chip
and talk to that position. Also, the bandwidth or the frequency at which we can push data in
each of the channels is much higher than you can do electronically, and they dissipate very little
energy.
In terms of cost, we are now getting the system to a point where it is becoming competitive with
the cost of standard electronic interconnects, but with orders of magnitude better performance.

What advantages do you think your industrial lab has over a university facility?

I’m extremely proud of the world class research enterprise that we have put together. Our
analytical and fabrication capabilities put us on the scale of a large engineering school in terms of
the magnitude of the support and the types of things that we can do. But it goes beyond just hav-
ing the latest scientific equipment. Often we’ll start modifying an apparatus almost immediately
to provide it with aspects that nobody else has, or very few people have, to give us a leg up.


In an industrial organization, I’ve been able to hire some incredibly talented people to make
sure that all of the instruments are operating at the peak of their abilities, which allows the
researchers to focus on making the measurements and interpreting the data. This is something
that can be difficult to arrange in a university situation, where maybe it’s the students or even the
professor doing the equipment maintenance, which is what I did when I was back at UCLA—I
spent a lot of time fixing equipment that my students broke.
Users still break things here, but the folks I have can fix things very rapidly. We don’t have,
for the most part, service contracts with manufacturers, because we often modify equipment to
such an extent that we violate the warranties. My team is very capable, and this has been a huge
advantage for us in terms of the quality and the quantity of the research that we can perform.

You spoke earlier about the impact of the end of Moore’s law. Can you expand on this?

It was clear 20 years ago that eventually Moore’s law would end, although at the time few would
have guessed how far it would go. However, the end of transistor size scaling provided much of
the motivation for our work. As I mentioned, from a scientific point of view I think that the end
of Moore’s law is a positive event because it finally cuts us loose from the tyranny of
predictability and forces us to discover and invent.
A lot of companies are now realizing that they have to invest more in innovation, and invest
more broadly as a means of risk reduction. With Moore’s law everyone knew what was going to
happen and when it was going to happen, but it required nearly all of the resources of the entire
semiconductor industry to maintain the pace. Now that’s changed, and hopefully the various


federal funding agencies will figure out that there are a lot of technologically important things
out there that should be supported besides graphene and carbon nanotubes. I think we are
entering the most exciting epoch in the computer era, because radically new inventions are needed
and will be accepted.

What are the building blocks that underpinned your discoveries?

In the beginning, we were doing very fundamental work but with the understanding that we had
a 10–15 year timeframe to demonstrate that our research would be important for HP.
One of the things that we had to do as a prerequisite for everything else was learn how to make
and analyse very small things, which is how I became involved in nanotechnology. We did a lot
of work in the area of self-assembly, trying to understand what the forces were and capabilities
were, and used techniques such as scanning tunnelling microscopy to visualize the structures
we made.
By the early 2000s, this effort had evolved to the point where we could make very regular
structures of 3 nm or less. We also found that imprint lithography was an interesting way of making
structures that were much smaller than either electron beam or optical photolithography could
deliver at that time. And by analyzing the small objects that we had made, we saw that a
lot of non-linear responses emerged at the nanoscale, which we flagged as potentially
important for computation or communication.
We were also looking at extremely sensitive nanoscale detectors, because I had thought that
the internet was essentially going to become a sensory and response organ. Our work in this area
was focused on creating ‘nerve endings’—various ways of detecting things that could then be
recorded, communicated through the internet, turned into information and correlated. Again, it
was a technology that we thought would be important to the Hewlett Packard Company in the
future for what is now known as the internet of things.

You were heavily involved in the National Nanotechnology Initiative (NNI)³. What are
your thoughts on that program?

The NNI was needed to get some new research funding into the scientific community, but it also
created a lot of excitement and attracted smart young people into the sciences again. A few years
after the NNI began, I saw that the quality of the people we were interviewing for jobs was going
up significantly. In my lab, I am blown away by how good the young scientists are, and I would
say that every single one whom we hired in the past decade was touched by the NNI in some
way. My research group as it is today would not exist without the NNI, without the infrastructure
created in the DoE Labs and the people who were educated to do the things that we’ve been able
to achieve over the past 10 years.
That said, I think it’s time to celebrate the success of the NNI and nanotechnology in general,
and move on. The main issues for nanotechnology, as far as I was concerned, were to develop
the skills, the understanding and the equipment necessary to build materials structures at the
nanometre scale, analyse them and understand them. And for the most part, that’s now being
done routinely. Rather than concentrating so much on what are now almost mundane issues, we

³ http://www.nano.gov/about-nni/what


should be moving on to new challenges in heterogeneous integration and how to build complex
and reliable systems.

What’s the next phase of development in your lab?

Right now, we are largely in a commercialization phase with projects such as the memristor and
silicon photonics, but at the same time we have the support and expectation of HP management
to continue our discovery and invention for the future.
We are working on the neuristor, which is a new type of active electronic device. It’s
something that potentially could replace or augment transistors in certain types of circuits to
accelerate tasks such as recognition or learning, where we have the opportunity to borrow understanding
from how brains compute.
Another area that we are exploring is quantum information based upon photonics. Once we
have the means for getting light into the heart of a computer, we can also think about utilizing
correlated pairs of photons and consider some kinds of quantum processing, mainly with the
aim of being able to provide security for computers based upon the ability to recognize quantum
signatures on incoming data.

R. Stanley Williams is an HP senior fellow and vice president of foundational
technologies at Hewlett Packard Labs. For the past 40 years, his primary
scientific research has been in the areas of solid-state chemistry and physics, and their
applications to technology. These interests have evolved into the areas of the
physical basis for computation and cognition. He is a founding board member of
Translational Materials Research.
