November 2006
Table of Contents
5. PROTEOMICS
6. PROTEIN-PROTEIN INTERACTIONS
8. PROTEIN ENGINEERING
9. BIOMARKER DISCOVERY
10. PROTEIN NANOTECHNOLOGY
* Topics shown in small italics have not been covered in this learning resource, or their reviews have not been submitted to date.
1.1 X-RAY DIFFRACTION OF PROTEINS
To perform their basic biological functions, proteins have to assume three-dimensional structures in the living system. These molecules are crystallized and analysed by the X-ray diffraction method. The data from such structural analyses form the foundation of modern developments in biochemistry, biophysics, pharmaceutical development and biotechnology. It is the simplest tool available in biological research for obtaining a three-dimensional image of a protein.
Electrons around the nucleus of an atom scatter X-rays in proportion to the electron density. The positions of individual atoms or molecules are determined by analysing the diffraction pattern of the scattered rays, and a three-dimensional image of the biomolecule is reconstructed from this.
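Although not part of the original text, the geometry underlying such diffraction analysis can be illustrated with Bragg's law, nλ = 2d sin θ, which relates the scattering angle to the spacing of lattice planes in the crystal. A minimal sketch (the wavelength and angle below are illustrative values, not from this document):

```python
import math

def bragg_d_spacing(wavelength_angstrom, two_theta_deg, order=1):
    """Lattice-plane spacing d (angstroms) from Bragg's law:
    n * lambda = 2 * d * sin(theta)."""
    theta = math.radians(two_theta_deg / 2.0)  # theta is half the scattering angle
    return order * wavelength_angstrom / (2.0 * math.sin(theta))

# Example: Cu K-alpha radiation (1.5418 A) diffracted at 2-theta = 20 degrees
d = bragg_d_spacing(1.5418, 20.0)
print(round(d, 3))  # -> 4.439
```

Each reflection in the recorded pattern corresponds to one such family of planes; collecting many reflections is what allows the electron density, and hence the atomic positions, to be reconstructed.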
There are seven crystal systems, or classes, of crystals.
A well-ordered crystal, i.e. a protein crystal that gives a clean diffraction pattern, is prepared by simple controlled vapour diffusion, using different types of crystallization matrices. X-ray diffraction is then carried out and the diffraction pattern is captured on a film. From this, a three-dimensional image of the protein is built using modern bioinformatics computer programs.
1.1.2.1 Synchrotrons.
There have been significant advances in recent years in the field of macromolecular crystallography. Synchrotron radiation, CCD detectors and cryocrystallography have made data collection easy and efficient. Complete data sets for more than one protein can be collected in a single day when synchrotron radiation is used for analysis, and crystals as small as 60 µm can be used to determine protein structures. Since most proteins form only small crystals, this helps us analyse more and more proteins. We have the technology to analyse hundreds of protein crystals per year, using a dedicated beamline on a third-generation synchrotron. The
most time-consuming step is, therefore, likely to be electron density map
interpretation. Methods to automate model building and refinement from electron
density maps are in the early stages of development, and additional support for
research in this area could significantly enhance high-throughput structure
determination.
Source: www.synchrotron.vic.gov.au
Cryo-freezing.
The first of these, cryo-freezing, is a method in which the protein crystals produced are cooled and stored in liquid nitrogen. This helps prevent the damage caused to the crystal by the intense radiation. [14]
Source: www.esrf.fr
Apart from this, advances in the technologies for evaluating and manipulating the data, such as modern bioinformatics programs and other developments in computational chemistry, have made this technique less time-consuming and laborious.
NMR, or Nuclear Magnetic Resonance, is a technique more recent than X-ray crystallography for the structural analysis of proteins, and it is now very widely used. The basic principle behind the technology is that some atomic nuclei are magnetic and can attain different energy states in a magnetic field. This energy variation can be measured and depicted as a spectrum. The magnetic properties of the nuclei are affected by chemical bonds and by short distances between molecules. These properties are exploited in the Nuclear Magnetic Resonance spectrometry of proteins.
NMR can measure distances between atoms regardless of their spatial orientation, which avoids the need to crystallize the protein. NMR is therefore a more straightforward approach than crystallographic techniques.
Normally, NMR is used to study small molecules, but recent advances in the resolution of the procedure have raised the upper size limit of molecules screened by this technique from about 20 kilodaltons to about 100 kilodaltons.
Disadvantages of X-ray crystallography.
1. The crystal structure is necessary, so only proteins that can be crystallized can be examined.
2. A change of solvent can force the protein into another crystal form.
3. Only one parameter set is obtained, so only one conformation can be observed.
4. The hydrogen atoms in the molecule cannot be examined, since hydrogen has only one electron.
Advantages of NMR.
1. With a sufficiently strong magnetic field (the resolution is a function of field strength), every atom can be resolved individually.
2. Many different data sets can be collected from different types of experiments, making it possible to resolve the ambiguities of any one type of measurement.
3. The influence of the dielectric constant, the polarity and other properties of the solvent or of added materials can be investigated.
Disadvantages of NMR.
1. NMR is good for accurate structure determination, but not for higher molecular masses.
2. The resolving power of NMR is lower than that of some other techniques (e.g. X-ray crystallography), since the information obtained from the same material is much more complex.
3. In many cases, a given data set from a given type of experiment is consistent with two or more possible conformations.
Source: www.cryst.bbk.ac.uk
Most recently, the technology has been applied to understanding the subcellular level and nucleocytoplasmic transport. [20]
Highly automated analysis methods make it feasible to analyse the vast bulk of data generated by genome and proteome analysis; computational tools play a major role in this.
2. http://epswww.unm.edu/xrd/resources.htm
3. http://www.bruker-biosciences.com/
4. www.intute.ac.uk/sciences/cgi-bin/browse.pl?limit=50&id=52&type=%25&sort=record.title
5. http://www.icdd.com/resources/websites.htm
1. http://www.faxitron.com
2. http://www.bruker-axs.de/
3. www.jyinc.com
4. www.materialsdata.com
5. www.kohzuamrica.com
1.1.7 References
1. http://wwwoutreach.phy.cam.ac.uk/camphy/xraydiffraction/xraydiffraction1_1.htm
2. http://www.phy.princeton.edu/~austin/hf_book/chapter10.pdf.
3. http://wwwoutreach.phy.cam.ac.uk/camphy/xraydiffraction/xraydiffraction12_1
.htm
4. Makin, O.S., Sikorski, P., Serpell, L.C. (2006) Diffraction to study protein and peptide assemblies, Current Opinion in Chemical Biology, 10, 417-422.
5. http://www2.mrc-lmb.cam.ac.uk/mandi.html
6. http://www.brightsurf.com/isearch/index.php
7. http://www.crystalresearch.com/crt/ab34/1120_a.pdf
8. http://fig.cox.miami.edu/~cmallery/150/gene/sf13x1box.jpg
9. http://www.atcg3d.org/PDF/Abola_NSB2000.pdf
10. http://structbio.nature.com
11. http://www.synchrotron.vic.gov.au/content.asp?Document_ID=97
12. http://www.osmic.com/applications_xrd_single_small.asp
13. http://www.nigms.nih.gov/News/Reports/protein_structure.htm
14. http://www-structmed.cimr.cam.ac.uk/Course/Crystals/shooting.html
15. http://www-structmed.cimr.cam.ac.uk/Course/Crystals/shooting.html
16. http://www.nature.com/horizon/proteinfolding/background/technology.html
17. http://www.cryst.bbk.ac.uk/pps97/assignments/projects/ambrus/html.htm#com
18. Lim, R.Y.H., Fahrenkrog, B. (2006) The nuclear pore complex up close, Current Opinion in Cell Biology, 18, 342-347.
20. Ng, J.D., Gavira, J.A., García-Ruiz, J.M. (2003) Protein crystallization by capillary counter-diffusion for applied crystallographic structure determination, Journal of Structural Biology, 142, 218-231.
Chapter 1.2 Nuclear Magnetic Resonance
Yang Xi
1.2.1 Introduction
Figure 1.2.1 Energy levels for a nucleus with spin quantum number 1/2
Source: http://teaching.shu.ac.uk/hwb/chemistry/tutorials/molspec/nmr1.htm
In this case, nuclei in the lower level can be excited into the higher level by radiofrequency signals whose frequency is determined by the energy difference between the levels. When the radiofrequency is removed, the nuclei in the higher energy state return to the lower state with emission of radiation that can be picked up by the receiver. This process is called relaxation. The chemical shift is the most basic of measurements in NMR. It arises because the electrons in the molecule produce local magnetic fields that modify the applied field at the nucleus [5], and it is measured relative to a reference compound. For the nuclei 1H, 13C and 29Si, TMS (tetramethylsilane) is commonly used as the reference.
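The chemical shift is conventionally reported on the dimensionless ppm scale, as the frequency offset of a signal from the reference divided by the spectrometer frequency. A short sketch (the frequencies below are hypothetical, chosen only to illustrate the arithmetic):

```python
def chemical_shift_ppm(nu_sample_hz, nu_reference_hz, spectrometer_hz):
    """Chemical shift (ppm) relative to a reference such as TMS:
    delta = (nu_sample - nu_ref) / spectrometer_frequency * 1e6."""
    return (nu_sample_hz - nu_reference_hz) / spectrometer_hz * 1e6

# Hypothetical example: a 1H signal 2170 Hz downfield of TMS on a 500 MHz instrument
print(chemical_shift_ppm(2170.0, 0.0, 500e6))  # -> 4.34
```

Because the shift is normalized by the spectrometer frequency, the same sample gives the same δ value on instruments of different field strengths, which is why ppm is used rather than Hz.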
Proteins are much larger than small organic molecules, but the same NMR theory applies. The basic 1D spectra become crowded with overlapping signals to the point where analysis is impossible, because of the increased number of nuclei of each element present in the protein. Multidimensional (2, 3 or 4D) experiments have therefore been devised to deal with this problem. To facilitate these experiments, it is desirable to isotopically label the protein with 13C and 15N, as the predominant naturally occurring isotope 12C is not NMR-active, while the nuclear quadrupole moment of the predominant naturally occurring 14N isotope prevents high-resolution information from being obtained from this nitrogen isotope. The most important method used for structure determination of proteins is the NOE experiment, which measures distances between pairs of atoms within the molecule. The obtained distances are then used to generate a 3D structure of the molecule using a computer program [6]. NMR spectroscopy determines structure in several phases, each using a separate set of highly specialized techniques: the sample is prepared, resonances are assigned, restraints are generated, and a structure is calculated and validated. [7] NMR spectroscopy consists of experiments that identify the physical relationships between atoms in the molecule, such as distances, angles and orientations, and use them to search for three-dimensional structures that satisfy most of the physical constraints.
One distinctive ability of NMR spectroscopy is to obtain information about the interactions of proteins with other macromolecules or small molecules, which has recently been used increasingly in determining the 3D structures of proteins. Furthermore, NMR methods have been applied to drug design through the identification and characterization of small chemicals that inhibit protein function. [8]
Some proteins are unstructured by themselves and only fold upon forming specific complexes with other polypeptides or even small-molecule cofactors. The structures of such proteins are not determined by their amino acid sequence alone, but require other molecules in order to adopt a well-defined tertiary structure. NMR can deal with such problems [8].
An example of a protein that folds upon binding is the eukaryotic initiation factor 4G
(eIF4G), which is the core of a multicomponent complex that controls translation
initiation. Among other components, such as the RNA helicase eIF4A and the 40S-
associated eIF3, eIF4G binds the cap-binding protein eIF4E and thus recruits the 5′
end of mRNA to the small ribosomal subunit. It had been reported that the eIF4E-
binding domain of eIF4G is natively unstructured, but folds upon binding to eIF4E
[22]. A recent structural study of the yeast eIF4E–cap–eIF4G complex (Figure 1.2.2)
revealed that an 80-residue segment of eIF4G folds upon wrapping around an
otherwise unfolded N-terminal segment of eIF4E [23]. Thus, complex formation
involves a mutually induced folding event.
Figure 1.2.2 Ribbon representation of eIF4E (yellow) in complex with a fragment of
eIF4G (residues 393–490) (blue). The bound cap analog, m7GDP, is drawn in rod
representation (purple). Upon binding eIF4E, eIF4G folds into a ring-shaped structure
around the N-terminal tail, distal to the cap-binding site.
The basics of the NMR experiments used to determine structure are outlined below, without specific details. Two-dimensional NMR uses a complex pulse sequence to disturb the nuclear spins. Typically four periods are involved: preparation, evolution, mixing and detection. The two most common basic techniques are COSY (homonuclear (J-) correlated spectroscopy) and NOESY (Nuclear Overhauser Effect spectroscopy). The first gives distances through covalent bonds, the latter through space. Different types of pulse sequences are used for the different types of 2D (and 3D, 4D) spectra. The diagonal corresponds to a 1D spectrum. COSY spectra are the simplest; the cross-peaks arise from hydrogens on adjacent carbons, or on carbon and nitrogen. COSY spectra are used to determine amino acid residue identity.
NOE interactions are short-range effects and only show atoms closer than 4.5-5 Å. Usually they are broken down into three classes: strong NOEs (atoms closer than 2.5 Å), medium (2.5-3.5 Å) and long (3.5-5 Å). The medium- and long-range NOEs are of most value, since the shorter-range ones correspond to neighbouring covalently bonded atoms. [25]
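The distance classes above can be expressed as a small helper function. This is a sketch for illustration, not part of any standard structure-calculation package; the class boundaries are taken directly from the text:

```python
def classify_noe(distance_angstrom):
    """Bin an NOE-derived interatomic distance (in angstroms) into the
    strong/medium/long classes described above."""
    if distance_angstrom < 2.5:
        return "strong"
    elif distance_angstrom <= 3.5:
        return "medium"
    elif distance_angstrom <= 5.0:
        return "long"
    return "no NOE expected"  # NOEs are not observed beyond ~5 angstroms

print(classify_noe(2.1), classify_noe(3.0), classify_noe(4.2))
# -> strong medium long
```

In a real structure calculation, each class is converted into an upper-bound distance restraint; many such restraints together constrain the fold of the protein.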
Typically one uses protein concentrations in the 1-3 mM range (i.e. very concentrated for protein solutions). 2D spectra may take from half a day to 3 or 4 days to collect. They are only useful at 500 MHz and higher fields. Today a number of 750 MHz (and even two 800 MHz) instruments are in operation. [25]
The example used here is the human ASC PYRIN domain (apoptosis-associated speck-like protein containing a caspase recruitment domain), whose structure can be determined following the basic process above, with individual variations. All figures were obtained by Fabiola Espejo and Manuel E. Patarroyo of Colombia. [26]
The first indication of secondary structure was obtained from CD spectra. The far-UV CD spectrum of ASC2 showed two characteristic α-helix minima at 208 and 224 nm (Fig. 1.2.3).
Fig. 1.2.3 Far-UV CD spectra of the ASC2 domain. The form of the curve in the samples shows the typical behaviour of an α-helix.
Structure calculation
Although strong NOEs defining helix 3 of this protein were not observed, the structure calculation shows that a helix is formed between amino acids 38 and 43 (Fig. 1.2.6).
Fig. 1.2.6. The solution structure of ASC2 domain structure. (A) Superposition of the
backbone atoms in the 25 conformers representing the NMR structure of the ASC2
pyrin domain. (B) A ribbon diagram of the Cα trace of the averaged minimized
structure.
1. http://matematicas.udea.edu.co/~carlopez/index2a.html
2. http://www.bmrb.wisc.edu/
3. http://www.chemlin.de/chemistry/nmr_spectroscopy.htm
1.http://www.oxinst.com/wps/wcm/connect/Oxford+Instruments/Companies/Oxford+
Instruments+Molecular+Biotools/
2. http://www.janis.com/nmr2.html
1.2.7 References
6 http://en.wikipedia.org/wiki/NMR_spectroscopy
7 Liu, G., Shen, Y., Atreya, H.S., Parish, D., Shao, Y., Sukumaran, D.K., Xiao, R., Yee, A., Lemak, A., Bhattacharya, A., Acton, T.A., Arrowsmith, C.H., Montelione, G.T., Szyperski, T. (2005) NMR data collection and analysis protocol for high-throughput protein structure determination. Proc Natl Acad Sci U S A, 102(30), 10487-10492.
8 Takeuchi, K. and Wagner, G. (2006) NMR studies of protein interactions. Current Opinion in Structural Biology, 16(1), 109-117.
11 V. Raghunathan, J.M. Gibson, G. Goobes, J.M. Popham, E.A. Louie, P.S. Stayton
and G.P. Drobny, (2006) Homonuclear and heteronuclear NMR studies of a statherin
fragment bound to hydroxyapatite crystals, J Phys Chem B Condens Matter Mater
Surf Interfaces Biophys 110, pp. 9324–9332.
12 J. Leppert, C.R. Urbinati, S. Hafner, O. Ohlenschlager, M.S. Swanson, M. Gorlach
and R. Ramachandran, (2004) Identification of NH.N hydrogen bonds by magic angle
spinning solid state NMR in a double-stranded RNA associated with myotonic
dystrophy, Nucleic Acids Res 32, pp. 1177–1183.
13 G.L. Olsen, T.E. Edwards, P. Deka, G. Varani, S.T. Sigurdsson and G.P. Drobny,
(2005) Monitoring tat peptide binding to TAR RNA by solid-state 31P-19F REDOR
NMR, Nucleic Acids Res 33, pp. 3447–3454.
14 Zuckerstätter, G. and Müller, N. (2006) Cogwheel phase cycling in common triple resonance NMR experiments for the liquid phase. J Magn Reson, 181(2), 244-253.
18 M.H. Levitt, P.K. Madhu and C.E. Hughes, (2002) Cogwheel phase cycling, J.
Magn. Reson. 155, pp. 300–306.
22 P.E.C. Hershey, S.M. McWhirter, J.D. Gross, G. Wagner, T. Alber and A.B.
Sachs, (1999) The cap-binding protein eIF4E promotes folding of a functional domain
of yeast translation initiation factor eIF4G1, J Biol Chem 274, pp. 21297–21304
23 J.D. Gross, N.J. Moerke, T. von der Haar, A.A. Lugovskoy, A.B. Sachs, J.E.G.
McCarthy and G. Wagner, (2003) Ribosome loading onto the mRNA cap is driven by
conformational coupling between eIF4G and eIF4E, Cell 115, pp. 739–750.
24 http://www.chemistry.ucsc.edu/~fink/200lecture/8-97.htm
Chapter 1.4 Fluorescence Spectroscopy
M. Sabir Patel
1.4.1 Introduction
E = hν = hc/λ (in ergs)
where ν is the frequency, λ the related wavelength, c the speed of light and h Planck's constant (6.626 x 10-27 erg s).
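As an illustrative sketch of the relation E = hν = hc/λ (worked here in SI units, h = 6.626 x 10^-34 J s and c = 2.998 x 10^8 m/s, rather than the cgs values quoted above; the example wavelength is hypothetical):

```python
# Photon energy from E = h * nu = h * c / lambda, in SI units.
H = 6.626e-34   # Planck's constant, J s
C = 2.998e8     # speed of light, m/s

def photon_energy_joules(wavelength_m):
    """Energy of a single photon of the given wavelength (metres)."""
    return H * C / wavelength_m

# Example: a 500 nm (visible green) photon
E = photon_energy_joules(500e-9)
print(f"{E:.3e} J")  # -> 3.973e-19 J
```

The inverse dependence on wavelength is why UV excitation carries far more energy per photon than near-infrared excitation, a point that becomes relevant for the multi-photon experiments described later in this chapter.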
Fluorescence Sensing
Lifetime-based sensing, an active area of research, has a major advantage in that the fluorescence lifetime is independent of signal intensity changes caused by external factors such as scattering and absorption.
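In frequency-domain (phase-modulation) fluorometry, a standard way to recover the lifetime for a single-exponential decay is from the phase shift φ of the emission relative to the modulated excitation, via tan φ = ωτ with ω = 2πf. A minimal sketch (the phase angle and modulation frequency below are hypothetical):

```python
import math

def lifetime_from_phase(phase_deg, mod_freq_hz):
    """Single-exponential fluorescence lifetime (seconds) from the phase
    shift measured in frequency-domain fluorometry: tan(phi) = omega * tau."""
    omega = 2.0 * math.pi * mod_freq_hz
    return math.tan(math.radians(phase_deg)) / omega

# Hypothetical example: a 45 degree phase shift at 40 MHz modulation
tau = lifetime_from_phase(45.0, 40e6)
print(f"{tau * 1e9:.2f} ns")  # -> 3.98 ns
```

Because the phase shift, like the lifetime itself, does not depend on the absolute signal level, this readout is insensitive to the scattering and absorption effects mentioned above.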
Lifetime based sensing has been recently applied in the high throughput screening
(HTS) mode using two-photon excitation of Calcium Green.
The more recent advancements like modulation and polarization sensing have a wide
application in tissue and medical sensing, environmental sensing, bioimaging assays
and HTS.
The methods are calibrated using reference film placed adjacent to the sample.
These sensing methods are generic and can be used with any fluorophore displaying
analyte-dependent change in intensity.
Complex Glucose-Glucokinase
Fluorescence Lifetime Imaging:
The Fluorescence Lifetime Imaging method (FLIM) was developed by CFS. FLIM uses the fluorescence lifetime at each point in the image to derive image contrast in the fluorescence microscope. Certain analytes such as Mg2+, Ca2+, Cl-, K+ and pH alter the fluorescence lifetimes of many fluorophores. These lifetimes do not depend on photobleaching or on the local probe concentration, and FLIM does not require wavelength-ratiometric probes. Using visible-wavelength illumination, FLIM also allows quantitative Ca2+ imaging.
CFS is currently working on the development of a new FLIM instrument, which will also be available to external users of CFS.
Ionophore and weak base treatments perturbed the cytosolic pH of CHO cells.
Probe Chemistry:
The CFS is focused on the design and synthesis of new fluorescent probes, novel conjugatable emissive transition metal-ligand complexes and lanthanide compounds, as well as donor-acceptor assemblies, to meet the needs of expanding applications of fluorescence not only in biochemical and biophysical research, but also in biotechnology, drug discovery and cell biology.
Below are some of the projects involving probe chemistry that CFS is currently
working on:
Light Quenching:
A pulse is given to excite the sample for measuring the time resolved fluorescence
which is followed by measurement of time dependent emission. In the case of light
quenching additional pulses follow the excitation pulse in order to modify the excited
state population. This occurs upon illumination with longer wavelength non-absorbing
light which depletes part of the excited population by stimulated emission.
In reality, the fluorophores are not in a quenched state but only appear to be so, because the residual population is observed at right angles to the quenching beam. The "quenched" part of the emission is not observed, since it travels parallel to the quenching beam. Therefore, in time-resolved light-quenching experiments, emission is observed only before and after the quenching pulse. The instantaneous change in the intensity and/or anisotropy decays appears in the frequency domain as characteristic oscillations.
Multi-Pulse Fluorescence
The basic idea of MPF is to perturb the sample with one light pulse and to start the
time-resolved measurement at a delay time t with a second excitation pulse. The
time-resolved data will be correlated with time-dependent structural changes in the
protein following the perturbation pulse. This approach will be applied to proteins
which undergo conformational changes in response to light, including hemoglobin,
rhodopsin and phytochromes. Both steady state and time-resolved measurements
will be performed.
In analytical chemistry, the field of atomic spectroscopy employs three common techniques, namely:
• Atomic Absorption
• Atomic Emission
• Atomic Fluorescence
Since the atomic fluorescence technique incorporates properties from the other two, we shall explain this technique and its prime advantage over them. The atoms are excited in the flame source by a beam of light focused into the atomic vapor. The intensity of this "fluorescence" increases with increasing atom concentration, providing the basis for quantitative determination.
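That quantitative determination is usually done by a linear calibration: fluorescence intensity is measured for standards of known concentration, a line is fitted, and the fit is inverted for unknown samples. A minimal sketch with hypothetical standards (the concentrations and intensities are invented for illustration):

```python
# Linear calibration: at low concentration, fluorescence intensity grows
# linearly with atom concentration, so unknowns can be read off a
# least-squares line fitted to standards.
def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept for y = m*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

conc = [0.0, 1.0, 2.0, 4.0]        # standard concentrations (arbitrary units)
signal = [0.5, 10.5, 20.5, 40.5]   # measured intensities (hypothetical)
m, b = fit_line(conc, signal)
unknown = (25.5 - b) / m           # invert the calibration for an unknown sample
print(round(unknown, 2))  # -> 2.5
```

At higher concentrations self-absorption makes the response non-linear, so in practice the calibration is only trusted within the linear range of the standards.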
The lamp in this technique is mounted at a specific angle to the optical system, so the light detector sees only the fluorescence in the flame and not the light of the lamp. With this arrangement, lamp intensity can be dramatically increased, in turn raising the number of excited atoms, which is a function of the intensity of the exciting radiation.
In conclusion, even though atomic absorption is the most widely used of the three, particular benefits are attained using the fluorescence spectroscopy technique.
Unlike HPLC analysis, this method involves no physical separation step, and it has good spatial resolution determined by the optics, a great advantage over previous methods. The technique enables us to study and gain better insight into the nature and function of biochemical pathways in a living cell using fluorescence-tagged molecules.
Radiative Decay Engineering (RDE) is one of the more recent core projects at CFS and is currently under development.
Fluorophores placed close to metal surfaces or colloids display a varied set of spectroscopic effects. These effects arise from several mechanisms, including quenching with a d-3 distance dependence, enhancement of the local field, and an increase in the radiative decay rate, which together determine the quantum yield.
For ellipsoidal particles the maximum enhancement in the magnitude of the local field
is about 140. With increase in radiative rate the total radiative decay rate increases
and so does the quantum yield. There are two limiting cases. If the dye has a high quantum yield (Q0 → 1), the additional radiative decay rate cannot substantially increase the quantum yield.
In the case of low quantum yield chromophores the enhancement can be as large as
1/Q0. For this reason it is of interest to study fluorophore-metal interactions with low
quantum yield fluorophores. While the actual mechanism is complex, one can
imagine the particles serve as an antenna to radiate faster than knr. This suggests the
emission from weakly fluorescent substances can be increased if they are positioned
at an appropriate distance from a metal surface or colloid.
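The two limiting cases can be made concrete with the standard quantum-yield expressions: Q0 = Γ/(Γ + knr) for the free fluorophore, and Qm = (Γ + Γm)/(Γ + Γm + knr) near the metal, where Γm is the metal-induced radiative rate. A sketch with hypothetical rate constants:

```python
def quantum_yield(gamma_r, k_nr, gamma_metal=0.0):
    """Quantum yield Q = (G + Gm) / (G + Gm + knr), where G is the intrinsic
    radiative rate, Gm the metal-induced radiative rate and knr the
    non-radiative rate (all in the same units, e.g. 1/s)."""
    return (gamma_r + gamma_metal) / (gamma_r + gamma_metal + k_nr)

# Hypothetical rates: a dim fluorophore (Q0 = 0.01) gains far more from a
# 100-fold metal-induced increase in radiative rate than a bright one would.
q0 = quantum_yield(1e7, 99e7)          # 0.01
qm = quantum_yield(1e7, 99e7, 100e7)   # ~0.5, a ~50-fold enhancement
print(q0, qm)
```

This reproduces the behaviour described above: for Q0 near 1 the extra radiative rate changes little, while for low-Q0 fluorophores the enhancement can approach 1/Q0.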
Fluorescence spectroscopy has now found inroads and applications, with newer, more rapid and non-destructive methods, in the food industry. The emission spectra can be quite complex, arising from a number of known as well as unknown molecules, and the total fluorescence is sensitive to the local environment, apparently reducing the method's robustness. But with constant improvement, better sensor technology and excellent optics, fluorescence spectroscopy shows many more promising applications in varied fields of the life sciences.
Fluorescence has proved a highly effective method for measuring fat and connective tissue in meat.
The FSL boasts state-of-the-art spectrometers capable of measuring both steady-state and lifetime-decay fluorescence spectra of fluorophores.
The lab uses a femtosecond Ti:Sapphire laser that can characterize even near-infrared fluorophores.
Multi-Photon Excitation
Emission intensity of p-terphenyl with NIR (750 nm), UV (375 nm) and combined (375 and 750 nm) excitation
Emission spectra of p-terphenyl with 250 nm excitation and with 2C2P excitation at 375 and 750 nm.
MAIN AIMS
For further reading and learning resources, the following web site links may be useful.
• http:// www.wikipedia.org.
• http://cfs.umbi.umd.edu/
• http://chemistry.rutgers.edu/grad/chem585/lecture2.html
• http://www.risoe.dk/
• http://www.wam.umd.edu/~toh/models/Fluorescence.html
• http://www.alschemex.com/learnmore/learnmore-techinfo-
principles-analyticalmethodologies.htm#X-
ray%20Fluorescence%20Spectroscopy%20(XRF)
• http://www.umb.no/?viewID=698
Instrumentation:
INSTRUMENT SPECIFICATIONS
• Argon Ion (Coherent) mode locked laser, 1W at 514 nm, fwhm = 120 ps.
• Ti:Sapphire femtosecond laser system (Spectra Physics), 750-920 nm, can
be frequency doubled (375 - 460 nm) or frequency tripled (250 - 310 nm);
fwhm = 90 fs
• Ti:Sapphire, regenerative amplifier, optical parametric amplifier system
(Coherent) 120 fs fwhm; tunable from UV-IR.
• Argon Ion air cooled, CW, 488, 514.5 nm.
• HeCd air cooled, CW, 442 nm (Liconix)
• HeCd air cooled, CW, 325 nm (Liconix)
• Modulated CW laser diodes and light emitting diodes
DYE LASERS
PHOTODETECTORS FOR FD
EXPERIMENTAL CAPABILITIES
COMPUTER SPECIFICATIONS
• Three Silicon Graphics Indy workstations running under the IRIX 5.3 operating system
• PCs running under the Windows 98 and Windows 2000 operating systems (see CFS Network).
• Internet access to all programs. Reflection 4+ for Windows graphics terminal
emulator recommended
• Data transfer between computers possible via Internet (FTP). Data and/or
programs are also available on diskettes (MS-DOS, LS120), or CD-RW
• Terminals adjacent to instrument room
SUPPORTING CAPABILITIES
• Steady state emission spectra (SLM 4800 Spectrofluorometer and SLM AB-2)
• Absorption diode array (190 nm - 1100 nm) spectrophotometer. (Hewlett
Packard)
• Linear dichroism and transition moment determination
• Lab space adjacent to instrument room
• Temperature control, -60 to +90 °C
• Ultracentrifuge (Beckman L5-65B), 65,000 rpm
• Gas pressure cell to 100 atm, and a 2 kbar hydrostatic pressure cell
Apparatus:
A FLS920 from Edinburgh Instruments capable of steady state and time resolved
measurements in the 200nm-900nm range. The system is equipped with a 450W
Xe-lamp for steady state measurements. For time resolved measurements two
different light sources can be used. Short lifetimes (<125ns) can be measured
using a fast 40MHz LED light source (excitation around 378nm, 456nm, 501nm or
598nm). For longer lifetimes (up to 50ms) a 40kHz nanosecond flash lamp can be
used.
Accessories:
• Polarisers to measure orientation.
• A cryostat to measure at low temperature (77K).
Use:
Fluorescence spectroscopy can be used to obtain steady state emission and
excitation spectra (200nm-900 nm) and to measure fluorescence lifetimes (0.1 ns-
50 ms) of fluorescent compounds.
Simulation of Fluorescence Spectroscopy
Chapter 1.5 Atomic Force Microscopy
Trusha Jhala
1.5.1 Introduction
The Atomic Force Microscope, widely known as the "Eye of Nanotechnology", has proven to be a powerful tool for biological studies [1]. It is a high-resolution imaging technique for surface morphology in various solutions and gas environments that has allowed researchers to observe biological processes in real time. [10,11] Atomic Force Microscopy (AFM), a form of scanning probe microscopy, has revolutionized the field of interfacial surface science by allowing observation at molecular and atomic levels in the native environment, down to the single-molecule level. [2,10,11] It was first invented by Gerd Binnig and Christoph Gerber in 1985. [13] The AFM is used in a wide range of technologies in the electronics, telecommunications, biological, chemical, automotive, aerospace and energy industries. It is used for studies of abrasion, adhesion, cleaning, corrosion, etching, friction and lubrication, and to analyse and investigate a wide variety of materials. [8,13] It is now used in many fields of nanoscience and nanotechnology, providing better understanding of events occurring at the molecular level. [11] AFM is widely used because it can image any conducting or non-conducting surface, unlike STM, which is limited to conducting surfaces.
1.5.2. Principle
The AFM works through contact between a cantilever tip and the sample surface to be imaged. Cantilever tips are made of Si3N4 or Si, with radii of 4-60 nm. The cantilever tip carries an ultra-sharp probe, which is of significant importance: the shape of the probe defines the resolution of the AFM, and the sharp tip has a specific spring constant. The surface topography is measured by keeping the force constant while the tip scans the surface and moves vertically in response to attractive or repulsive interaction forces. Piezo-electric scanners keep the tip at a constant force when height information is required, and at a constant height when force information is required. Most AFM scanners cover a range of about 90 x 90 µm in the x-y plane and 5 µm in the z-direction. [13] The cantilever tip bends upwards due to the ionic repulsive force from the surface. This bending is measured by a laser beam reflected onto a position-sensitive photodetector, which registers minute deflections of the sensor. This is used to calculate the force and allows visualization of the surface topography. [1,13] In a Nanoscope AFM, the optical detection system comprises the tip attached to the base of a reflective cantilever, with a diode laser aimed at the back of the cantilever. As the tip scans the sample surface, the laser beam is deflected into a dual-element photodiode, whose output the photodetector measures and converts to voltage. The computer software, on receiving this input from the photodetector, controls and maintains a constant force or height above the sample surface. [13]
The constant-force mode measures the height deviation in real time through the piezo-electric transducer. The constant-height mode measures the deflection force through the tip, which must be entered into the sensitivity calibration of the AFM head; the tip requires calibration parameters during force calibration of the microscope. Some AFMs use 200 mm wafers and measure surface roughness with 5 nm lateral and 0.01 nm vertical resolution. Scanners measure the local height of the sample by translating either the sample under the cantilever or the cantilever over the sample. Three-dimensional topographical maps can be constructed from the sample height and the probe tip position. [13,15] The cantilever tip obeys Hooke's law, which allows the interaction force to be found. Piezo-electric ceramic scanners carry out the movement of the tip or the sample and can resolve motion in the x-, y- and z-directions. [14]
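The Hooke's-law relation mentioned above is simply F = k·x: the interaction force follows from the cantilever's spring constant and its measured deflection. A minimal sketch (the spring constant and deflection below are hypothetical but typical of soft contact-mode levers):

```python
def cantilever_force_newtons(spring_constant_n_per_m, deflection_m):
    """Interaction force from Hooke's law, F = k * x, for an AFM cantilever
    of spring constant k (N/m) deflected by x (m)."""
    return spring_constant_n_per_m * deflection_m

# Hypothetical example: a soft lever (k = 0.05 N/m) deflected by 2 nm
F = cantilever_force_newtons(0.05, 2e-9)
print(f"{F * 1e12:.0f} pN")  # -> 100 pN
```

A deflection of only a couple of nanometres on such a lever already corresponds to the ~100 pN upper limit quoted below for imaging biological specimens in fluid, which is why soft cantilevers are used for contact-mode work on biological samples.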
(a) Contact mode: This is the most common operational mode of AFM where the
probe is in permanent contact with the sample surface. In fluids with biological
specimens the probe force is kept below 100 pN. This mode requires minimal sample
preparation and can operate in air and fluid environment. It provides information on
the elasticity, adhesion, hardness, friction of the sample surface. [1, 14]
(b) Tapping mode: This mode uses a probe that oscillates at a constant frequency
and the probe force is dominated by changes in the resonance frequency of the
cantilever. To evade damage to the sample surface, the probe gently taps the
sample surface rather than scraping the surface. The advantage of this mode is that
it can detect surface contaminants that are not seen in height images. [1, 14]
(c) Non-contact mode: This mode is used for delicate samples in both air and fluids; the material is not damaged because the probe is held in the attractive-force region. Van der Waals forces and electrostatic potentials are used to measure the force gradients. [1, 14]
AFM has revolutionized the field of interfacial surface science by allowing direct, high-resolution visualization of surface topography and by its ability to operate in various environments. Advances in instrumentation and simpler sample preparation have made it possible to image individual molecules in the native environment of the proteins, generating topographs of native proteins at nanometer resolution that demonstrate the power of AFM. With the development of force spectroscopy, AFM is now used extensively by researchers to study ligand-receptor interactions, with the aim of measuring forces at the single-molecule level. [1,2]
The use of MAC Mode AFM has opened prospects in drug development, especially in structural imaging of drug carriers such as liposomes and lactose crystals. Structural imaging that reveals the shape and size of these carriers is of great importance, as they are used widely in the pharmaceutical industry. [2] The shape and size of dimyristoylphosphatidylcholine (DMPC) liposomes in phosphate buffer, for example, can be observed directly using MAC Mode AFM.
Figure 1.5.6 Surface imaging of lactose crystals at varied humidity levels achieved at a
scan size of 5 µm x 5 µm.
Source: www.molec.com/media/PDFs/B-Biology_App_notes/B2-Bio_Review.pdf
ADVANTAGES:
Interest in AFM for the study of proteins arose because it can resolve the surface features of heterogeneous samples under different conditions and provide direct observation of protein complexes in real time. Because it can image non-conducting surfaces, it can be used to analyze a wide variety of biological samples. [1,13]
The main advantage of AFM is that it can provide easily achievable high resolution
and 3-dimensional images of surface topography of biological specimens in various
environments and temperatures. [1,10,11,12,13] Recent advances in this method
have enabled surface imaging at a single molecular level at a resolution all the way
to the nanometer scale. [1] With better sample preparation techniques and superior control of probe-sample interactions it is now possible to analyze protein folding. [1] AFM methods require little sample preparation. [13] AFM has also been used widely for imaging individual proteins and other molecules such as collagen. Immobilization of IgG1 antibodies was achieved for AFM imaging by overcoming the low affinity of IgG molecules for mica: a metal-chelating peptide was cloned into the carboxy-terminal sequence of IgG, and the purified IgG then bound in a regiospecific manner to
the nickel-treated mica. [13] AFM combined with bright-field, fluorescence and other
optical techniques can be used for identification of structures and simultaneously
providing nanometer-resolved images of the sample surface. [13] AFM can measure
intermolecular forces in the nanonewton range in protein synthesis, DNA replication
or drug interaction, which enables it to analyze ligand-receptor interactions. [13] The electrical properties of surfaces are an important part of the study of biological systems, and here too AFM is useful: it can image electrical surface charge, binding and electrostatic forces, micromechanical properties, and the elasticity and viscosity of live cells and membranes. [13]
Compared with other microscopy techniques, AFM shows many advantages, making it a preferred method among researchers. The table below shows how AFM compares with methods such as SEM, TEM and optical microscopy in terms of cost, flexibility in various environments and ease of sample preparation. [11]
Limitations:
AFM has enormous applications in the fields of molecular biology and microbiology, but several limitations and difficulties also exist. The most critical aspect of the technique is sample preparation: an appropriate solid substrate and a well-attached sample are needed so that the specimen can bear the force applied by the scanning probe. When the sample consists of living cells, immobilization by adsorption is often inappropriate because the contact area with the substrate is small and the scanning probe can detach the cells. A better alternative is to use porous membranes; this approach can minimize denaturation, but it works well only for spherical cells, not for rod-shaped cells. A further problem concerns resolution and image interpretation, which depend strongly on the imaging force and the probe geometry. Large forces acting between the sample and the probe during imaging can significantly reduce the resolution of the images generated and can also cause molecular damage. When imaging samples in air, a layer of condensed water or other contamination covers both the sample and the probe, which often leads to sample damage or makes high-resolution imaging difficult because of the strong attractive forces. The pH and ionic strength of the imaging environment also affect image quality and must be optimized. Finally, shadowing or multiplication of small structures may arise from multiple-tip effects caused by contamination on the probe. [3]
AFM has been used to reveal structural details of membrane protein surfaces and the electrostatic potentials generated by the proteins. At low electrolyte concentrations, charged probes provided structural imaging of the membrane protein surface and allowed the electrostatic potential of the OmpF porin to be mapped. The results agreed with electrostatic calculations based on the atomic structure of OmpF porin in a lipid bilayer at the same concentrations. This method opens the door to mapping the electrostatic potential of native protein surfaces at improved resolution. [5]
AFM used for analyzing the reaction of endothelial cells to histamine treatment:
Atomic force microscopy was used to investigate the cellular response to histamine, one of the key inflammatory mediators that cause endothelial hyperpermeability and vascular leakage. Probes labeled with fibronectin were used to measure the binding strength between α5β1 integrin and fibronectin, quantifying the force needed to break single fibronectin-integrin bonds. Cytoskeletal changes, adhesion force and binding probability on endothelial cells were monitored before and after histamine treatment, with the AFM recording the changes on live cells. Cell topography measurements revealed that histamine provokes cell shrinkage and stiffening and increases binding probability. To measure the stiffness of the cell surface, the AFM was used in force mode to measure the adhesion force between the AFM tip and the cell surface. [4]
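A force-mode measurement of this kind reduces to a unit conversion: the photodiode voltage jump at bond rupture is turned into a deflection via the calibrated sensitivity, and into a force via Hooke's law. All calibration numbers below are invented for illustration, not taken from the study:

```python
def rupture_force_pn(spring_constant_n_per_m, sensitivity_nm_per_v, jump_v):
    """Convert a photodiode voltage jump at bond rupture into a force in pN:
    F = k * (deflection sensitivity * voltage jump)."""
    deflection_nm = sensitivity_nm_per_v * jump_v
    deflection_m = deflection_nm * 1e-9
    force_n = spring_constant_n_per_m * deflection_m
    return force_n * 1e12  # newtons -> piconewtons

# A 0.01 N/m cantilever, 50 nm/V deflection sensitivity, 0.08 V jump:
print(f"{rupture_force_pn(0.01, 50.0, 0.08):.0f} pN")
```

The result here lands in the tens-of-piconewtons range typical of single receptor-ligand ruptures; the actual values reported in [4] depend on the cantilever and loading conditions used.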
Figure 1.5.7 Contact mode images of endothelial cells before and after histamine
treatment.
Source: http://www.biophysj.org/cgi/content/abstract/89/4/2888
The upper row shows deflection images and the lower images show contrast height
images in contact mode of AFM with the arrows showing cell shrinkage after
histamine treatment. Panels A and B in the graph represent two topographical
profiles.[4]
The key industry suppliers of atomic force microscopes are listed below: [10]
• Agilent Technologies
www.agilent.com/about/newsroom/presrel/2005/29nov-ep05114.html
• Asylum Research
www.asylumresearch.com
• BioForce Nanosciences
www.bioforcenano.com
• EXFO Burleigh
www.exfo.com/en/burleigh.asp
• JEOL USA
www.jeol.com
• JPK Instruments
www.jpk.com
• MikroMasch
www.spmtips.com
• Molecular Imaging
www.molec.com
• Nanofactory Instruments
www.nanofactory.com
• Nanoink
www.nanoink.net
• Nanonics Imaging
www.nanonics.co.il
• Nanosurf
www.nanosurf.com
• Nanoworld
www.nanoworld.com
• Novascan Technologies
www.novascan.co
• NT-MDT
www.ntmdt.ru
• Olympus
www.olympus.co.jp/en/insg/probe
• Omicron NanoTechnology
www.omicron.de
• Pacific Nanotechnology
www.pacificnanotech.com
• Photometrics
www.photometrics.net
• Quesant Instrument
www.quesant.com
• RHK Technology
www.rhk-tech.com
• Shimadzu
www.shimadzu.com
• Surface Imaging Systems
www.sis-gmbh.com
• Triple-O Microscopy
www.triple-o.de
• Veeco Instruments
www.veeco.com
REFERENCES:
1. Silva, LP. (2002) Atomic Force Microscopy and Proteins, Protein and Peptide
Letters, Vol.9, No.2, pp.117-125. Bentham Science Publishers Ltd.
4. Trache, A., Trzeciakowski, J.P., Gardiner, L., Sun, Z., Muthuchamy, M., Guo,
M., Yuan, S.Y., Meininger, G. A. (2005) Histamine Effects on Endothelial Cell
Fibronectin Interaction Studied by Atomic Force Microscopy, Biophysical
Journal 89:2888-2898. The Biophysical Society.
5. Philippsen, A., Im, W., Engel, A., Schirmer, T., Roux, B., Muller, D.J. (2002) Imaging the electrostatic potential of transmembrane channels: atomic probe microscopy of OmpF porin. Biophysical Journal 82(3): 1667-1676.
7. Jiang, F., Khairy, K., Poole, K., Howard, J., Muller, D.J. (2004) Creating
Nanoscopic Collagen Matrices Using Atomic Force Microscopy. Microscopy
Research and Technique 64:435-440
8. Karrasch, S., Hegerl, R., Hoh, J.H., Baumeister, W., Engel A. (1994) Atomic
force microscopy produces faithful high-resolution images of protein surfaces
in an aqueous environment. Proc Natl Acad Sci USA 91(3): 836-838.
9. Roes, S., Mumm, F., Seydel, U., Gutsmann, T., (2006) Localization of the
Lipopolysaccharide-binding Protein in Phospholipid Membranes by Atomic
Force Microscopy. The Journal of Biological Chemistry Vol.281, No.5,
pp.2757-2763. The American Society for Biochemistry and Molecular Biology,
USA.
10. Wright-Smith, C., Smith, C.M.(2001) Atomic Force Microscopy. The Scientist
15.2:p23.
15. Muller, D.J., Aebi, U., Engel, A. Imaging, measuring and manipulating native
biomolecular systems with the atomic force microscope
www.mih.unibas.ch/Booklet/Booklet96/Chapter3/Chapter3.html
Priyadharshini Sivakumaran
1.6.1 Introduction:
With the relative ease of operation of present day instruments, electron microscopy
has become very popular in the investigation of biologic structures. In many cases,
one can obtain images that are fairly faithful records of the detail in the specimen
within only a few minutes. The two main drawbacks of the method are (1) the relative transparency of proteins to electrons and (2) the disruption of protein structure as the specimens dry out in the evacuated microscope and are bombarded
by the electron beam. The usual method of increasing the contrast is by adding
heavy metals in one way or another. The simplest way of doing this, the method of
negative staining, is fortunately also the most successful in preserving specimen
order. With the effective resolution limited to 20 Å in electron micrographs of proteins,
little, even in terms of the shape of the molecule, can be deduced from images of
average–sized, isolated monomers. Electron microscopy has therefore been most
successful in the determination of the quaternary structure of assemblies of protein
molecules.
The Transmission Electron Microscope (TEM) is the only instrument that allows the
analysis of biological samples on different scales, ranging from µm down to
Angstrom resolution. Transmission electron microscopy of sections of cells or the
recently developed method of electron tomography allows whole cells or organelles
within cells to be studied. On the other end, electron crystallography employs the
TEM to resolve the atomic details of proteins that are arranged in two-dimensional
crystals.
Figure 1.6.1
In recent years, digital image processing has evolved to the point where it is now
possible to more fully exploit the high resolution potential of the transmission electron
microscope (TEM). A system has been developed for semi-automatic specimen
selection and data acquisition for protein electron crystallography, based on a slow-
scan CCD camera connected to a transmission electron microscope and control from
an external computer. The slow-scan CCD camera has been shown to be a valuable
accessory to an electron microscope for direct data acquisition as used in on-line
electron optical adjustments and electron tomographic applications. Use of a slow-
scan CCD camera for the acquisition of diffraction data in an electron crystallographic
application would allow for a fast evaluation and an immediate, subsequent
numerical analysis of the data, contrary to the imaging plate, which too has been
used for the acquisition of electron diffraction intensities. Furthermore, the quality of
the acquired data is higher, since this camera performs better than the photographic
film in terms of linearity, background noise and dynamic range. Its applicability to
protein electron crystallography at 400 kV has resulted from a number of engineering
changes, which were made to the slow-scan CCD camera, such as minimization of
the spurious X-ray signals picked up by the camera.
Because of the differences in the way TEM and SEM work, each has its own distinct advantages. With TEM, for instance, one can view a sample at a magnification approximately 10 times that of an SEM (objects as small as three to ten angstroms with TEM). Also, because the beam is transmitted through the sample, TEM can not only characterize particle surfaces but can also reveal the sample's internal structure.
One advantage of SEM is that it provides a better overall visual image of the sample.
This is because as it scans over a sample line by line, it gives the image a depth of
field, almost making the object three-dimensional. In a TEM image, no depth of field
can be seen on the image. Another advantage of SEM is that it is more flexible in the
type of samples it can view because the sample does not need to be nearly as thin
as with TEM. Therefore, SEM can analyse samples such as larger wear debris
particles and distressed machine surfaces.
The advantage of colloidal gold labelling is that the intracellular complexes may be
more precisely located because of the significant improvement in resolution provided
by backscatter electron (BSE) imaging in the SEM. BSE imaging confirmed the
presence and subsarcolemma localization of myoglobin in cardiomyocytes directly
isolated from fresh biopsies.
Electron microscopes are expensive to buy and maintain. As they are sensitive to
vibration and external magnetic fields, suitable facilities are required to house
microscopes aimed at achieving high resolutions. The samples have to be viewed in
vacuum, as the molecules that make up air would scatter the electrons. Recent
advances have allowed hydrated samples to be imaged using an environmental
scanning electron microscope.
Scanning electron microscopes usually image conductive or semi-conductive
materials best. The samples have to be prepared in many ways to give proper detail,
which may result in artefacts purely the result of treatment. This gives the problem of
distinguishing artefacts from material, particularly in biological samples. Scientists
maintain that the results from various preparation techniques have been compared,
and as there is no reason that they should all produce similar artefacts, it is therefore
reasonable to believe that electron microscopy features correlate with living cells.
1.6.4 Application of the technology:
Electron microscopic techniques have been used in studying the structure of many of
the proteins. Some of them are discussed below.
The structures of the individual membrane-bound protein components have been well characterized, especially by X-ray diffraction studies. In recent years, increasing evidence has accumulated that the respiratory chain components are organized into supercomplexes, particularly through the application of Blue-native polyacrylamide gel electrophoresis. Single-particle electron microscopy has been used for structural characterization of some of these respiratory chain supercomplexes. Such studies revealed a supercomplex composed of monomeric Complex I and dimeric Complex III from Arabidopsis and, more recently, the dimeric complex of ATP synthase. The ATP synthase is substantially kinked in its membrane-embedded domains, which for the first time allows a functional role to be defined for these supercomplexes: the dimeric ATP synthase complex appears to be responsible for the folding of the inner mitochondrial membrane.
A novel way to characterise all the larger (membrane) protein complexes from a specific type of membrane, using transmission electron microscopy (EM) in combination with proteomics, has been developed. In general, structural studies on proteins rely heavily on isolated, highly purified protein samples. For X-ray diffraction studies this is
a necessity and for EM, based on crystals, the same is true. Single particle EM is a
well-developed technique to average the individual projections of large protein
complexes. It makes use of sophisticated classification programs to sort out different
projections. It is found that this is also possible in complex mixtures of proteins. In
this way, the projection structures of all larger proteins from disrupted membranes
can be analysed. Proteomics methods can be used to assign the EM projection
structures to specific proteins. This will be achieved by comparing the frequencies of
EM structures with intensities of electrophoresis spots arising from proteins of
complete membranes. As a study object the photosynthetic membrane from
cyanobacteria and the peroxisome membrane from yeast were selected. It is
expected that by combining EM and proteomics major new discoveries can be
achieved because it is a fully novel approach.
With 2D protein crystals, the system allows the user to mark at low magnification areas in the specimen that may contain crystalline protein domains or other interesting features, and to evaluate the (crystalline) quality of the domains by recording a very-low-dose image (at intermediate magnification) in the so-called 'Preview Mode' and calculating and displaying the FFT. If an area is judged 'good' by the operator, a high-resolution (low-
dose) image can be recorded on the CCD camera or on film (or both) in the
Exposure Mode. In the case of non-periodic specimens the very-low-dose preview
image can be used itself for selection instead of the FFT. The system can also be
used to just collect images from the positions marked at low magnification, without
any preview.
1.6.6 References:
1. Baldwin, J. & Henderson, R. (1984). Measurement and evaluation of electron
diffraction patterns from two-dimensional crystals. Ultramicroscopy 14, 319-
336.
4. Chiu, W. & Jeng, T.-W. (1980). Electron diffraction study of crotoxin complex
at 1.6 Å. In: Electron Microscopy at Molecular Dimensions. State of the Art
and Strategies for the Future. Berlin, Springer-Verlag. 137-142.
5. Zenhausern, F., Adrian, M. & Descouts, P. (1993). Solution structure and direct imaging of fibronectin adsorption to solid surfaces by scanning force microscopy and cryo-electron microscopy. J. Electron Microscopy 42(6): 378-388.
All other methods of chromatography involve the sample binding to the support, but
size exclusion chromatography separates different molecules based on their
molecular weight. In size exclusion chromatography, larger molecules are eluted
faster from the column than smaller molecules, giving rise to the name ‘size
exclusion’.
Underlying Principles
As the sample is eluted, one or more detectors are used to measure specific
properties of the mobile phase/sample. This information is displayed via a
chromatogram. [1], [3]
The length of the column and the sample size will also affect the resolution of the
results. If the column is longer, it allows more time for better separation of the
‘medium sized’ molecules, which gives better resolution on the chromatogram.
Sometimes a series of columns is needed to get adequate results, but this increases
the time required to run the process. [6]
Prepacked columns are commercially available with different volumes and mediums.
The table below shows some typical examples of medium used in columns for size
exclusion chromatography.
Table 1.1 Properties of typical commercial column packings for size exclusion
chromatography. Source: www.cem.msu.edu/~cem333/week16.pdf
The sample is dissolved in the mobile phase. As the mobile phase elutes through the
column, the molecules within the sample will either permeate through the pores (if
they are able to fit) or travel down the column via the mobile phase solution.
Careful selection of the mobile phase is also very important.
Usually more than one type of detector is used with size exclusion chromatography.
The most common detectors used in size exclusion chromatography are UV,
fluorescence, refractive index and evaporative light-scattering detectors (ELSDs).
Size exclusion chromatography systems can also be linked to a Mass Spectrometer
for higher selectivity, but this is quite expensive. [5] [1]
Refractive index detectors measure changes in the refractive index of the sample in
contrast to the mobile phase. This method of detection is sensitive to temperature
changes, and the solvent must remain the same throughout the process.
For use of ELSDs, the liquid sample is converted into a fine spray and then
evaporated to remove any solvent. The remaining sample molecules are then
subjected to a light source and the light scattered by the molecules is detected by a
photodiode. The amount of light scattered is relative to the mass of the sample.
ELSD is commonly used for samples that do not respond well to UV detection.
The table below shows the properties of the chromatography detectors outlined.
Property          RI           UV/VIS      Fluor       MS
Response          Universal    Selective   Selective   Selective
Sensitivity       4 microgram  5 nanogram  3 picogram  1 picogram
Linear Range      10           10          10          10
Flow Sensitive    Yes          No          No          Yes
Temp. Sensitive   Yes          No          No          No
Table 1.2 Properties of SEC detectors.
Source: http://www.waters.com/WatersDivision/ContentD.asp?watersit=JDRS-
5LTGBH#detectors
The largest molecules elute in the void volume (vo). These molecules are too large to
permeate the stationary phase. The smallest molecules that permeate the most
pores in the stationary phase are eluted at the total column volume (vt). Molecules of
intermediate molecular weight are eluted at various times depending on their size. Ve
is the elution volume of each molecule.
As shown in the illustration below, the void volume is the volume of the mobile phase,
the total column volume is the volume of the mobile phase plus the stationary phase,
while the volume of the stationary phase is determined as Vt-Vo. [6] [7]
Kav = (Ve-Vo)/(Vt-Vo)
There is a relationship between the Kav value and the logarithm of the molecular
weight of similar molecules.
Selectivity curves are created for stationary phase matrices by plotting the Kav value
of a set of standard proteins against the log of their molecular weight.
Figure 1.4 Selectivity curves of some commercially available gel filtration medium.
Source: [6] Amersham Biosciences, Gel Filtration Principles and Methods
Prepacked columns can be purchased and the matrix should be selected carefully so
that the predicted molecular weight of the sample falls within the linear part of the
calibration curve. If the target sample does not have the same molecular shape as
the calibration standards, the result may deviate from the calibration curve. [6] [7]
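The Kav equation and the log-linear selectivity relationship above can be sketched in code; the calibration standards, column volumes and elution volume below are all invented for illustration:

```python
import math

def kav(ve, vo, vt):
    """Partition coefficient K_av = (Ve - Vo) / (Vt - Vo)."""
    return (ve - vo) / (vt - vo)

# Hypothetical calibration standards: (K_av, molecular weight in Da).
standards = [(0.20, 150_000), (0.35, 44_000), (0.50, 13_700), (0.65, 6_500)]

# Least-squares fit of log10(MW) against K_av (the linear region of the
# selectivity curve).
xs = [k for k, _ in standards]
ys = [math.log10(mw) for _, mw in standards]
n = len(standards)
x_mean, y_mean = sum(xs) / n, sum(ys) / n
sxy = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, ys))
sxx = sum((x - x_mean) ** 2 for x in xs)
slope = sxy / sxx                      # negative: MW falls as K_av rises
intercept = y_mean - slope * x_mean

# Estimate the MW of an unknown eluting at Ve = 14.0 ml on a column with
# Vo = 8.0 ml and Vt = 24.0 ml (all volumes invented for this sketch).
k_unknown = kav(14.0, 8.0, 24.0)       # = 0.375
mw_estimate = 10 ** (intercept + slope * k_unknown)
print(f"K_av = {k_unknown:.3f}, estimated MW ~ {mw_estimate:,.0f} Da")
```

As the text notes, such an estimate is only reliable when the unknown falls on the linear part of the curve and has a shape similar to the standards.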
One of the biggest drawbacks of using SEC is that it can take hours to get results.
Recently there has been a growth in interest in Fast size exclusion chromatography.
Initial studies into Fast-SEC techniques have suggested that reducing the process
time also reduces the resolution to an unacceptable level. Popovici and
Schoenmakers investigated different ways of increasing the speed of size exclusion
chromatography. [8]
An experiment was performed using columns of different lengths: 50 mm x 4.6 mm i.d. (internal diameter), 100 mm x 4.6 mm i.d. and 150 mm x 4.6 mm i.d. A sample with standards of known molecular weight (1,700, 10,900, 117,000 and 1,260,000 Da) was run through the columns at flow rates of 0.3, 0.6 and 0.9 ml/min respectively.
The results are shown in the chromatograms below.
Figure 1.5 Results from various Fast-SEC columns. Source: [8] Fast size exclusion
chromatography – Theoretical and practical considerations.
It was concluded that better resolution can be obtained by increasing the flow rate and the column length. [8] Improvements are continually being made by experimenting with different types of columns and media in order to reduce the process time without degrading the quality of the chromatograms produced.
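As a rough sanity check on such Fast-SEC setups, the time for one column volume of mobile phase to pass can be estimated from the column geometry and flow rate. The porosity below is an assumed typical value, not taken from the paper:

```python
import math

def time_per_column_volume_min(length_mm, id_mm, flow_ml_min, porosity=0.8):
    """Time for one column volume of mobile phase to pass:
    t = (geometric column volume * total porosity) / flow rate.
    The porosity of 0.8 is an assumed typical value."""
    volume_ml = math.pi * (id_mm / 2.0) ** 2 * length_mm / 1000.0  # mm^3 -> ml
    return volume_ml * porosity / flow_ml_min

# The three column/flow-rate pairings used in the experiment:
for length, flow in [(50, 0.3), (100, 0.6), (150, 0.9)]:
    t = time_per_column_volume_min(length, 4.6, flow)
    print(f"{length} mm column at {flow} ml/min: ~{t:.1f} min per column volume")
```

With these (assumed) numbers, the flow rate scales in step with the column length, so the time per column volume works out the same for all three configurations.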
Large suppliers of SEC column media have developed specific stationary phases
that will maximise the efficiency of the procedure.
The aim of the Capture step is to isolate, concentrate and stabilise the protein. Size
exclusion chromatography would not be recommended for this process, as it usually
involves a high volume of sample and would be very time consuming. Ion Exchange
Chromatography (IEX) and Affinity Chromatography are commonly used for this
process as they are faster, higher capacity techniques than size exclusion
chromatography. Column conditions are selected to maximise the binding of the
target protein from the sample and avoid the binding of unwanted contaminants.
Speed is an important issue in this first stage, as the sample often includes impurities
that may cause proteolysis of the target protein, or other denaturing effects.
During the Intermediate Purification, the chosen technique will need to have a high
selectivity for the target protein as the other unwanted components remaining in the
sample after the initial capture stage will be more similar to the target. One or more methods may be used for this process, and for each a balance must be found that provides adequate capacity at an acceptable resolution.
The final polishing stage is used to achieve a high purity of the target protein. Size
exclusion chromatography is usually used at this stage as it provides high resolution
results. The volume of the sample is reduced by the first 2 stages, so the process
time of SEC is decreased. [7]
The resolution of the results can be increased by adding more columns, and a series
of columns with different pore sizes is often used. However, this means that the
process run time will increase.
Each different type of chromatography has its advantages and disadvantages. They
can be used in combination, and the materials or procedures altered to gain the best
result in relation to resolution, speed, recovery and capacity. The table below lists the
different protein properties and the techniques that use those properties during
purification.
The primary application for size exclusion chromatography was for the determination
of molecular weight or molecular weight distribution of polymers. It is now used for a
variety of purposes, including group separation (for example desalting and buffer exchange), separating monomers from dimers and polymers, determining molecular weight, final 'polishing' purification, and facilitating the refolding of proteins. [6] [5]
Gel filtration is ideal for the cleanup of proteins before purification. Commercially
prepared desalting columns remove the protein from salts and other contaminants of
low molecular weight and can transfer the protein to a new buffer, all in a single
process. Sample volumes can be up to 40% of the column volume.
Table 1.3 Commercially available prepacked columns for sample cleanup
Source: [7] Amersham Biosciences, Protein Purification Handbook
The capacity and speed of this procedure makes it efficient to prepare the sample via
desalting and buffer exchange, in readiness for further purification. The figure below
shows a chromatogram of the desalting of a His6 fusion protein. Using a UV and
conductivity detector facilitates optimisation of the separation. [6].
Size exclusion chromatography is used for profiling protein samples and can be used
for relatively small volumes of sample.
Parini et al used an automated size exclusion chromatography system to analyse
triglyceride and cholesterol content in lipoproteins. [9]
Traditionally lipoprotein separation was done via ultra-centrifugation separation,
which requires large sample volumes.
Samples of 1-10µL were injected into the column and travelled at a flow rate of 40 µL
min-1. The absorption was measured by a UV-VIS detector at 500nm. The run time for
cholesterol analysis was 75 minutes, while the run time for triglyceride analysis was
90 minutes. The chromatogram below shows the cholesterol profiles generated by
running SEC on 5 different sample volumes. The inset shows the lipid content
calculated as a percentage of the peak area. The proportion of VLDL, LDL and HDL
cholesterol remained the same as the sample volume varied.
Figure 1.8 Cholesterol profiles at varied sample volumes. Source: [9] Parini et al,
Lipoprotein profiles in plasma and interstitial fluid analysed with an automated gel
filtration system.
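The inset calculation, lipid content expressed as a percentage of peak area, can be sketched as follows; the peak areas are invented, and the example only illustrates why the proportions are insensitive to injection volume:

```python
def lipid_percentages(peak_areas):
    """Express each chromatogram peak area as a percentage of the total
    (as in the inset of Figure 1.8)."""
    total = sum(peak_areas.values())
    return {name: 100.0 * area / total for name, area in peak_areas.items()}

# Invented peak areas for a small and a 5x larger injection; the point is
# that the proportions are unchanged when every peak scales together.
small_injection = {"VLDL": 12.0, "LDL": 48.0, "HDL": 20.0}
large_injection = {name: 5.0 * area for name, area in small_injection.items()}

print(lipid_percentages(small_injection))
# {'VLDL': 15.0, 'LDL': 60.0, 'HDL': 25.0}
assert lipid_percentages(small_injection) == lipid_percentages(large_injection)
```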
Due to the difficulty of analysing low levels of lipoproteins in small volumes, the analysis of lipoproteins in interstitial fluid (IF) has not been widely studied.
Using SEC, Parini et al evaluated the levels of lipoproteins in IF and plasma. The
resulting chromatograms shown below illustrate the levels of cholesterol in plasma
versus IF in 5 healthy volunteers.
Figure 1.9 Cholesterol levels in IF and plasma of 5 healthy individuals. Source: [9]
Parini et al, Lipoprotein profiles in plasma and interstitial fluid analysed with an
automated gel filtration system.
The researchers also applied their method to determining the profile of triglyceride
lipoproteins. Again, injections of increasing volume from 1 to 10µL were run through
the SEC system and the following chromatogram was produced. Once again, the
proportion of the VLDL, LDL and HDL triglyceride remained constant.
Figure 1.10 Triglyceride profiles at varied sample volumes. Source: [9] Parini et al,
Lipoprotein profiles in plasma and interstitial fluid analysed with an automated gel
filtration system.
The fourth peak shown was thought to be free glycerol in plasma. To test this
hypothesis, a reference sample of free glycerol was injected into the SEC system,
along with a plasma sample from a healthy volunteer and from a patient with
hyperglycerolaemia. As is shown in the chromatogram below, the hyperglycerolaemic
patient had a high fourth peak, and this was eluted at the same time as the free
glycerol.
Recently size exclusion chromatography has been used to refold denatured proteins.
Neely et al successfully used SEC for the refolding of the active calcium channel β1b
subunit. [10] Studies of this subunit have been difficult due to its tendency to form
aggregates when expressed in bacteria. Dialysis and fast dilution were initially used
to try and refold the protein via denaturant removal but these techniques failed.
An SEC method was therefore developed that exchanges the buffer, removes aggregates and promotes refolding of the protein all in one process. The sample was loaded onto a column that had been equilibrated with the refolding buffer and was eluted at a rate of 2.5 ml/min.
A UV detector was used, and the eluted fractions were further analysed by reducing
SDS-PAGE. The figure below shows the peak at the void volume which was thought
to be aggregates of the subunit. The peaks I and II are thought to have been the β1b
subunit refolded into two different states.
Figure 1.12 Protein refolding of β1b subunit. Source: [10] Neely et al, Folding of active
calcium channel β1b subunit by size exclusion chromatography and its role on
channel function.
The molecules from Peak I were recovered and loaded onto the column again. The
resulting chromatogram showed a high peak again at the void volume, indicating that
the protein had aggregated over time.
The molecules from Peak II were recovered and loaded onto the column again. The
resulting chromatogram (shown below) indicates that the protein has refolded
correctly and remains in a stable condition. The integrity of the Peak II conformation
was confirmed by amino acid analysis.
Figure 1.13 Analysis of Peak II subunit. Source: [10] Neely et al, Folding of active
calcium channel β1b subunit by size exclusion chromatography and its role on
channel function.
The recovery of the β1b subunit using this method averaged 50%, and the protein
is stable at pH 8-10. [10]
2.2.7 References
2.3.1 Introduction:
Proteins are considered the building blocks of all living beings. The word “protein”
originates from the Greek word protos, meaning primary. The name indicates
that proteins are an essential component of all living beings: they constitute that
part of the living system without which survival is difficult. Proteins are polymers
formed from large numbers of amino acid residues, drawn mainly from a set of 20
amino acids. [1][2][3][7]
Proteins have a wide variety of uses, and most applications require a purified, active
form of the protein, obtained in the shortest time possible. Proteins are isolated
from mixtures of proteins or other complex mixtures. Protein purification can broadly
be divided into two major categories: analytical and preparative.
Protein Purification
These two purposes for protein purification each have a variety of applications in
their own sphere of work. The analytical purpose is applied mainly in research, to
identify a particular protein, while the preparative method targets production, to
produce proteins in large quantities for various applications. Protein biotechnology
is a fast-growing field, with a great diversity of applications in the drug and
pharmaceutical industries. [4]
The chromatographic technique discussed below is ion exchange chromatography.
This technique was first adopted for the separation of proteins by H. A. Sober and
E. A. Peterson in the mid-1950s. [5]
The most important factor for any separation using this technique is the selection of
the resin and its associated parameters. The main consideration in resin selection is
the overall charge and the charge distribution of the protein. The overall charge
depends on the pH and on the amino acid composition of the protein, while the
charge distribution depends on where the charged amino acids lie in the structure.
The isoelectric point is also a major contributor to separation. Generally, proteins
are separated in the pH range 4-8, and a pH is selected in accordance with the resin
and analyte characteristics for purification or separation. [6]
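The link between pH, amino acid composition and net charge described above can be sketched numerically. The following is a minimal illustration, not a validated model: the side-chain pKa values are generic textbook-style approximations and the example sequence is invented.

```python
# Rough net-charge estimate for a protein at a given pH, using the
# Henderson-Hasselbalch relation with generic side-chain pKa values.
# All pKa values and the example sequence are illustrative assumptions.

PKA = {"D": 3.9, "E": 4.1, "H": 6.0, "C": 8.3, "Y": 10.5, "K": 10.5,
       "R": 12.5, "N_term": 9.0, "C_term": 3.1}

def net_charge(sequence, ph):
    """Approximate net charge of `sequence` (one-letter codes) at `ph`."""
    def pos(pka):   # fraction protonated (positive) for a basic group
        return 1.0 / (1.0 + 10 ** (ph - pka))
    def neg(pka):   # fraction deprotonated (negative) for an acidic group
        return -1.0 / (1.0 + 10 ** (pka - ph))
    charge = pos(PKA["N_term"]) + neg(PKA["C_term"])
    for aa in sequence:
        if aa in ("K", "R", "H"):
            charge += pos(PKA[aa])
        elif aa in ("D", "E", "C", "Y"):
            charge += neg(PKA[aa])
    return charge

def suggest_exchanger(sequence, ph):
    """Net positive charge suggests a cation exchanger; negative, an anion exchanger."""
    return "cation exchanger" if net_charge(sequence, ph) > 0 else "anion exchanger"

# An acidic toy sequence is net negative at pH 7, so it would bind an anion exchanger.
print(suggest_exchanger("MDEEDKKA", 7.0))
```

In practice the working pH is chosen a unit or so away from the protein's isoelectric point so that the protein carries enough charge to bind the chosen resin.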
Source: http://www.steve.gb.com/science/molecular_biology_methods.html
Monolithic polymers have been used as separation media for a reasonable period of
time. These polymers were initially not very effective for separations, but recent
studies indicate that, on sulphonation, monolithic polymers yield better efficiency
and high capacity. Latex-coated monoliths were used initially but gave unfavourable
results, as no great separation efficiency was seen.
The polymers were studied by Joseph Hutchinson and colleagues from the
University of Tasmania. The studies focused on BuMA-co-EDMA-co-AMPS
(butyl methacrylate-co-ethylene glycol dimethacrylate-co-2-acrylamido-2-methyl-1-
propanesulfonic acid), PS-co-DVB (poly(styrene-co-divinylbenzene)) and GMA-co-
EDMA (poly(glycidyl methacrylate-co-ethylene glycol dimethacrylate)). The initial
analysis used PS-co-DVB to make the monoliths, which did not respond as
expected and had very low separation efficiency and capacity. With the second
polymer, GMA-co-EDMA, three ways of increasing the ion exchange capacity were
investigated. “The first two methods (involving sulfonation with 4-
hydroxybenzenesulfonic acid and thiobenzoic acid respectively), while
increasing capacity, turned out not to be suitable for separating anion mixtures.
The third method, which made use of sodium sulphite, increased the capacity of
the monolith 30-fold and also increased its chromatographic efficiency.” [10]
This research paved the way for effective ion exchange separation. [10],[11]
The matrices used for ion exchange chromatography have evolved considerably
over the years. S-Zephyr has been found to perform better than Mono-S in cation
exchange separations. These conclusions were drawn with parameters such as
retention time in view; S-Zephyr shows better results for all types of separations. [12]
In gene transfer, the vectors of choice are recombinant adenoviruses, with
increasing emphasis on the less popular serotypes and on chimeras (engineered
capsid proteins). With this growth, purified forms of the alternative serotypes are
required. Anion exchange chromatography is generally the best choice for
adenovirus purification. Retention times may differ between serotypes because of
differences in how the capsid proteins are exposed. With a thorough knowledge of
the behaviour of the virus, the retention time can be predicted and an efficient basis
for a chromatographic method deduced. The study revealed that the hexon protein
was responsible for the retention differences in anion exchange chromatography.
An analysis was carried out with different serotypes, which were found to bind the
anion exchange column well and were eluted with sodium chloride solution. The
retention times correlated with good accuracy to the properties of the hexon
proteins. Such an analysis helps in establishing good chromatographic gradients for
different serotypes. [13]
Another application carries out the separation at high temperature, about 90°C.
A Dionex CS12A column with suppressed conductivity detection is used to separate
cations. The high operating temperature lowers the eluent viscosity, enabling a
higher flow rate than usual and improving the separations. The drawback of the
method was that the retention times of the cations were reduced, so the column was
instead operated at 60°C, which still yielded better results than the regular methods
of operation. [14]
As nutritional value plays an important role in the modern world, the detection of
sugars by ion exchange chromatography has been a valuable development, giving
genuine and reliable results. For this analysis, the mobile phase is predominantly
basic (NaOH) and the column is an amine resin column. Combined with pulsed
amperometric detection, this analysis is a powerful tool for measuring glucose,
fructose and sucrose. Detection was found to be better, and faster, than with the
alternative method, HPLC. [15]
A new system has been devised for the separation of cations and anions that uses
only one pump, eluent and detector. The two columns are connected side by side
and selected by switching a valve, which allows both separations in a single run.
The columns do not operate simultaneously: while one column is in operation, the
other is on standby. When the system was tested using 2.4 mM 5-sulfosalicylic acid
as the eluent, it showed an acceptable level of separation. With an injection valve in
place, the required separations of the target ions were achieved with better detection
and capacity. [19]
A recent study by a group of Turkish scientists shows that a new column can be
developed using monodisperse polymer particles, which can be instrumental in
designing an ion exchanger. Because the particles are monodisperse, they give
better separations. The latest research describes a chromatography column
developed by the atom transfer radical polymerisation (ATRP) method, using
poly(glycidyl methacrylate-co-ethylene dimethacrylate), or poly(GMA-co-EDM),
carrying ion exchange ligands. The combination proved highly effective for protein
separations. An initiator is used with the poly(GMA-co-EDM). When the polymers
were analysed and the system tested, the conclusion drawn was effective separation
of proteins such as lysozyme and myoglobin. The column operated at the optimum
column height and showed very low run-to-run variability, i.e. good
reproducibility. [20][21]
Ion exchange chromatography is the most popular technique for protein separation:
about 57% of protein separations use it. Large protein volumes can be analysed by
this method. The column has very good efficiency and a long working life, and can
sustain good output. The technique uses minimal chemicals, and the samples
prepared for analysis need not be large. Because separation is based on charge, the
technique has good binding capability compared with other techniques. A distinctive
feature that sets ion exchange chromatography apart is that several ions may be
evaluated or analysed at once. Its additional advantages are its reliability,
robustness and the assurance of its results, which make it stand out from other
separation methods. It is also well suited to scaling up. [5][23][24]
Example: the measurement of ammonium nitrite is not simple. Its analysis by other
methods is particularly difficult, whereas ion exchange chromatography has a
distinctive sensitivity towards it. [24]
There are some basic limitations to ion exchange chromatography. Setting up the
column for a separation takes a good deal of time. A crude extract usually cannot
be fed directly onto the column. Materials whose charges differ only slightly are
difficult to separate. The column requires continuous monitoring to check that
everything is operating properly, and the technique is difficult to optimise. [25]
Ion exchange chromatography has gained significant interest in fields ranging from
environmental studies to commercial separations and industrial applications. Its
most heavily used areas are medical, biomolecular and biomedical applications, low
molecular weight substances and pharmaceuticals.
Ran GDP plays a big role in the transport of essential substances and in cell
division. The study shows that the affinity and binding of Ran to an ion exchange
column are very responsive to the concentration of magnesium chloride. At
moderate concentrations of magnesium chloride, Ran was found to elute with an
acceptable level of separation. When the magnesium chloride concentration was
reduced further, the purity of the Ran was further enhanced, resulting in further
purification, which was confirmed by high performance liquid chromatography. [28]
Plasmid DNA (pDNA) is gaining great importance for genetic vaccination and gene
therapy. For pDNA to be used in these applications, a purified form is essential. The
critical step of pDNA production uses a chromatographic technique. Monoliths are a
top choice for pDNA separation, as they have a very strong binding affinity for
pDNA. The best choice for the purification is anion exchange chromatography, since
the polynucleotides are negatively charged and their binding does not depend
strongly on the buffer conditions. The nature of the ligands, their densities and their
optimisation were characterised. Scale-up was carried out using convective
interaction media monoliths operating at low pressure. The authors were thus
successful in the production of pDNA (the intermediate step), and the technique
has gained considerable prominence in the pharmaceutical industry. [30]
In the baking industry, potassium bromate has a major application because of its
very good dough keeping and conditioning properties. Potassium bromate,
however, raises significant health concerns and has been banned as a food additive
by the World Health Organisation. Recent studies have shown that bromate can be
detected by ion exchange chromatography. Other techniques, such as HPLC and
GC, were tried first but were not successful in separating bromate, so ion exchange
chromatography was adopted. The column used for separating the bromate was an
IonPac AS19, with bromate eluted using a linear potassium hydroxide gradient. To
quantify bromate levels, parameters such as the separation time and the separation
temperature were examined. Analysis showed that, after sonication, peaks formed
after 30 min and temperature made no significant contribution to the separation.
The detection limit was 0.5 µg/L. When the method was tested on flour-based
products, it was successful and gave very high bromate recoveries. [31] [32]
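A detection limit such as the 0.5 µg/L quoted above is commonly estimated from a calibration curve as three times the standard deviation of blank injections divided by the calibration slope. The sketch below illustrates the arithmetic only; the calibration and blank data are invented for illustration, not taken from the cited study.

```python
# Sketch of an IUPAC-style detection limit estimate: LOD = 3 * s_blank / slope.
# The calibration points and blank signals below are hypothetical.

def linear_fit(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

def lod(blank_signals, slope):
    """Detection limit = 3 x sample standard deviation of blanks / slope."""
    n = len(blank_signals)
    mean = sum(blank_signals) / n
    sd = (sum((s - mean) ** 2 for s in blank_signals) / (n - 1)) ** 0.5
    return 3 * sd / slope

# Hypothetical bromate standards (ug/L) vs. peak area (arbitrary units)
conc = [1, 2, 5, 10, 20]
area = [2.1, 4.0, 9.9, 20.2, 39.8]
slope, intercept = linear_fit(conc, area)

blanks = [0.05, 0.12, 0.08, 0.10, 0.07]   # replicate blank injections
print(f"slope = {slope:.2f}, LOD = {lod(blanks, slope):.2f} ug/L")
```

Recovery is then checked by spiking a real sample at a known level and comparing the measured concentration against the spike.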
10) Secko, D. (2006). Monolith polymers reveal high capacity for ion exchange.
http://www.separationsnow.com/coi/cda/detail.cda?id=12786&type=Feature&chId=5&page=1
13) Konz, J.O., Livingood, R.C., Bett, A.J., Goerke, A.R., Laska, M.E. and Sagar, S.L.
(2005). Serotype specificity of adenovirus purification using anion exchange
chromatography. Human Gene Therapy 16(11), pp. 1346-1353.
14) Chong, J., Hatsis, P. and Lucy, C.A. (2003). High-speed ion chromatographic
separation of cations at elevated temperature. Journal of Chromatography A
997(1-2), pp. 161-169.
http://www.separationsnow.com/coi/cda/detail.cda?id=14057&type=Feature&chId=5
&page=1
17) Li, J., Zhu, Y. and Guo, Y. (2006). Fast determination of anions on a short coated
column. Journal of Chromatography A 1118, pp. 46-50.
18) Fritz, J.S., Yan, Z. and Haddad, P.R. (2003). Modification of ion chromatographic
separations by ionic and nonionic surfactants. Journal of Chromatography A 997,
pp. 21-31.
19) Amin, M., Lim, L.W. and Takeuchi, T. (2006). Tunable separation of anions and
cations by column switching in ion chromatography. Journal article (ScienceDirect).
21) Unsal, E., Elmas, B., Caglayan, B., Tuncel, M., Patir, M. and Tuncel, A. (2006).
Preparation of an ion-exchange chromatographic support by a "grafting from"
strategy based on atom transfer radical polymerization. Analytical Chemistry
78(16), pp. 5868-5875.
22) Rucevic, M., Clifton, J.G., Huang, F., Li, X., Callanan, H., Hixson, D.C. and
Josic, D. (2006). Use of short monolithic columns for isolation of low abundance
membrane proteins. Journal of Chromatography A 1123(2), pp. 199-204.
http://www.chem.ubc.ca/courseware/121/tutorials/exp3A/columnchrom/
http://www.resonancepub.com/chromtutorial.htm
http://www.laboratorytalk.com/news/mea/mea144.html
[26] Jacob, L. and Schluter, H. (2006). Bioprocessing and Biopartnering (article).
[29] Ítalo, D., Teixeira, A., Melo, L.M., Gadelha, C.A.A., Silva da Cunha, R.M.,
Bloch Jr, C., Rádis-Baptista, G., Cavada, B.S. and José de Figueirêdo Freitas, V.
(2006). Ion-exchange chromatography used to isolate a spermadhesin-related
protein from domestic goat (Capra hircus) seminal plasma. Genetics and Molecular
Research (online journal).
http://www.funpecrp.com.br/gmr/year2006/vol1-5/gmr0193_full_text.htm
Accessed 25 September 2006.
[30] Urthaler, J., Schlegl, R., Podgornik, A., Strancar, A., Jungbauer, A. and
Necina, R. (2005). Application of monoliths for plasmid DNA purification:
development and transfer to production. Journal of Chromatography A 1065(1),
pp. 93-106.
[32] Shi, Y., Liang, L., Cai, Y. and Mou, S. (2006). Determination of trace levels of
bromate in flour and related foods by ion chromatography. Journal of Agricultural
and Food Chemistry 54(15), pp. 5217-5219.
Chapter 2.4
Affinity Chromatography
Aekta Chhabra
2.4.1 Introduction
Underlying Principle
Source: http://www.shodex.com/english/da0401.html
The next important step, which largely determines the success of affinity
chromatography, is matrix selection. The matrix holds the active ligands and
provides a pore structure that increases the surface area, thereby increasing the
binding capacity for the target molecules. This binding requires that the matrix be
activated and then reacted with the ligands to bind them tightly onto the matrix.
Throughout the process, the ligand must remain active towards the target molecule.
Amino, hydroxyl, carbonyl and thiol groups are easily activated and can serve as the
sites on the matrix to which the ligands attach. The matrix, in addition to requiring
activation, should also be resistant to contamination when purifying pharmaceutical
compounds; decontamination is performed by rinsing the column with sodium
hydroxide or urea. Different matrix materials tend to be stable in different pH
ranges, which forms the third aspect of selection for affinity chromatography. [2]
Fig 2: The ligand attached to the matrix binds to the protein of interest and is
separated from other proteins.
Source: http://pubs.acs.org/journals/chromatography/chap5.html
When the technique was introduced in the late 1960s, affinity chromatography was
used to purify classes of proteins according to their properties or function: antibody
binding, hormone binding, enzyme inhibition, and so on. In recent years, the
development of affinity chromatography has enabled monoclonal antibody
purification through the special and unique affinity of Protein A (a protein originally
from Staphylococcus aureus) for the constant region of antibodies. More recently
still, computational chemistry, molecular modelling and combinatorial chemistry
have provided opportunities for chromatographic adsorbent development that
enable purification by design. In this mode, a specific adsorbent is constructed for
the target biopharmaceutical moiety in a customized programme between the
biopharmaceutical company and the adsorbent manufacturer. [3]
In order to increase the efficacy of purification, the concept of spacers between the
biologically active ligand and the polymer was introduced. 95% of all affinity
purification methods apply the same general principles using spacers.
Immunoaffinity chromatography continues to be instrumental for the isolation of
proteins produced by genetic engineering. Recent years have shown that affinity
columns can be used to remove toxic compounds from blood. Nowadays, chemical
reactions can be combined with the affinity concept to demonstrate and study the
phenomenon of biological recognition. A perfect example of this kind of approach is
affinity labeling, which can identify the residues in the binding-active site of proteins;
the involvement of amino acid residues in the active sites of enzymes can be
examined using this technique. Affinity therapy is a biorecognition-based approach
that selectively delivers a cytotoxic drug or toxin to the targeted or infected cell. The
cell-associated target molecule can be a cell-surface antigen, a surface receptor or
any other type of biomolecule that is specific for a given antibody, hormone or
nutrient. The drug counterpart can comprise the corresponding antibody or hormone
to which a cytotoxic compound (e.g. selected from chemotherapeutic drugs,
radionuclides, or toxins of different origins) has been chemically attached in a
synthetic process. This approach was popularly known as affinity therapy, which
was later changed by others to the misnomer, immunotoxins. [1]
In protein engineering, the tasks of generating and testing a large number of
variants of a molecule, and of optimising expression conditions for an individual
molecule, create the need for purification methods that can handle many samples
simultaneously. A simple affinity chromatography system can be used for the
parallel purification of 24 protein samples, yielding quantities after purification
sufficient for biochemical and functional analysis. Such a system has certain
advantages over existing systems: its costs are minimal compared with other
chromatography systems, and it allows gentler processing of the samples, which
benefits proteins that are easily damaged. [6]
Affinity chromatography shows its true power in intricate biochemical analysis. The
challenge of bioseparations lies in identifying and extracting the target protein from
its sample mixture. For example, about 300 proteins in blood plasma have now
been identified and separated from among the thousands present at low
concentration. Affinity chromatography thus combines appropriate selectivity with
high yield. [3]
• http://www.hellers.com/steve/resume/p143.html
• http://en.wikipedia.org/wiki/Main_Page
• http://www.biopharminternational.com/biopharm/data/articlestandard/biopharm/522003/80026/article.pdf
• Prometic Biosciences:
http://wwwold.unict.it/dipchi/05Didattica/Corsionline/Coloranti/15_Biochimica/affinchrom/affinitychromat.htm
• http://www.thermo.com/com/cda/category/category_lp/0,,15431,00.html?CA=columns
• http://www.bio.com/store/product.jhtml?id=prod1440010
• http://pharmalicensing.com/intelligence/reportsearching.php?action=toc&productID=854939
• http://www.bioscienceworld.ca/ChromatographyLiquid
• http://www.jenabioscience.com/
• http://www.drugdiscoveryonline.com/Content/ProductShowcase/product.asp?DocID=%7BC840DD69-1408-4BD0-8AC3-7E7D4B5FD4EC%7D
• http://www.gmi-inc.com/BioTechLab/perceptivebiocad.htm
• http://www.versamatrix.com/page/en/1007.aspx
• http://www.iscpubs.com/bg/il/prod/prod1180.html
• Biocentrum Ltd.:
http://awe.mol.uj.edu.pl/biocentrum/kolumny_e.pdf
• Prometic Biosciences:
http://www.ump.com/prometic.html
• http://www.srlchem.com/products/product_tree.php?entryId=21
• http://www.patricell.com/products.html
• http://www.proteinscience.org
• http://pubs.acs.org/journals/chromatography/chap5.html
• http://www.bioscienceworld.ca/BioseparationThePowerofAffinityPurification
4. Ito, A. et al. (2002). Recent advances in basic and applied science for the control
of taeniasis/cysticercosis in Asia. The Southeast Asian Journal of Tropical Medicine
and Public Health, vol. 33, Suppl. 3, pp. 79-82.
http://cat.inist.fr/
http://www.gallusimmunotech.com
http://peds.oxfordjournals.org/cgi/content/full/15/10/775
http://www.tu-harburg.de/vt2/chrom/Chromatography05_Part8.pdf
9. Safarik, I. and Safarikova, M. (2004). Magnetic techniques for the isolation and
purification of proteins and peptides.
http://www.pubmedcentral.gov
http://www.leelab.org/research/papers/AnalBiochem324-1.PDF
http://en.wikipedia.org/wiki/Affinity_chromatography
http://www.patentstorm.us
15. Lim, K., Mohamed, R., Embi, N. and Nathan, S. (2005). Mediated Affinity
Chromatography.
http://www.msmbb.org.my
16. Valsasina, B., Kalisz, H. and Isacchi, A. (2004). Kinase selectivity profiling by
inhibitor affinity chromatography. Expert Review of Proteomics 1, pp. 303-315.
http://lib.bioinfo.pl/pmid:15966827
El-Wazani Montaser
2.6.1 Introduction
Reversed-phase HPLC has found a central role in protein studies because of its
versatility, its sensitive detection and its ability to work together with techniques
such as mass spectrometry. Most of all, however, reversed-phase HPLC is widely
used because of its ability to separate proteins of nearly identical structure [1],[2].
Fig.2.6.1.a Fig.2.6.1.b
2.6.1.1 Mechanism of Protein/Peptide Retention
In reversed-phase HPLC the particle surface is very hydrophobic because of the
chemical attachment of hydrocarbon groups to the surface. Proteins are retained by
the adsorption of one face of the protein (termed the “hydrophobic foot”) to the
hydrophobic surface [3].
Fig.2.6.1.1.a Fig.2.6.1.1.b
Fig.2.6.1.1.c
Fig.2.6.1.2.b
c) Column length
Column length is not important in protein separations; short columns separate
proteins as well as long columns.
Fig.2.6.1.2.C1 Fig.2.6.1.2.C2
d) Column Diameter
See reference [6]
Fig.2.6.1.2.d
b) Gradient elution
Proteins and polypeptides are almost always eluted using a solvent gradient, in
which the relative concentration of organic solvent is slowly increased during the
separation [7].
Fig.2.6.1.3.b1 Fig.2.6.1.3.b2
A reduction of the gradient slope to improve resolution must be balanced against
the need to keep analysis time as short as possible. Adjusting the gradient slope is
nevertheless important in optimizing the resolution of proteins and peptides [7].
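The slope trade-off described above is simple arithmetic, sketched below. All the numbers (a 5 to 60 %B span, 30 or 60 min gradient times) are illustrative assumptions, not values from the text.

```python
# Minimal sketch of a linear gradient program: %B (organic solvent) as a
# function of time, and the slope/run-time trade-off. All numbers are
# illustrative assumptions.

def percent_b(t, b_start=5.0, b_end=60.0, t_gradient=30.0):
    """Linear gradient: %B rises from b_start to b_end over t_gradient minutes."""
    if t <= 0:
        return b_start
    if t >= t_gradient:
        return b_end
    return b_start + (b_end - b_start) * t / t_gradient

def gradient_slope(b_start=5.0, b_end=60.0, t_gradient=30.0):
    """Slope in %B per minute; a shallower slope improves resolution but lengthens the run."""
    return (b_end - b_start) / t_gradient

# Halving the slope for the same %B span doubles the analysis time:
print(gradient_slope(t_gradient=30.0))   # ~1.83 %B/min over 30 min
print(gradient_slope(t_gradient=60.0))   # ~0.92 %B/min over 60 min
```

In practice one shallows the gradient only over the region where critical pairs elute, keeping the rest of the program steep to limit total run time.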
c) Ion-Pairing Reagents
1. Trifluoroacetic acid
TFA added to the mobile phase at a concentration of ~0.1% results in good peak
shape on most columns.
Fig.2.6.1.3.C1
Fig.2.6.1.3.C2
3. Ion suppression
The major benefit of ion suppression is the elimination of mixed-mode retention
effects. At low pH, carboxylic acid groups are protonated and only slightly polar.
Increasing the mobile phase pH to 6-7 causes the carboxylic acid groups to ionize,
making the proteins and peptides less hydrophobic. This reduces the retention of all
peptides.
Fig.2.6.1.3.C3
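The pH dependence described above follows directly from the Henderson-Hasselbalch equation. The sketch below uses a generic pKa of 4.5 for a carboxyl group; that value is an illustrative assumption, not taken from the text.

```python
# Henderson-Hasselbalch sketch of the ion-suppression idea: the fraction
# of a carboxylic acid group that is ionized (deprotonated) at a given pH.
# The pKa (~4.5 for a generic carboxyl group) is an assumed value.

def fraction_ionized_acid(ph, pka=4.5):
    """Fraction of an acidic group present in the deprotonated (ionized) form."""
    return 1.0 / (1.0 + 10 ** (pka - ph))

# At low pH the group is mostly protonated (only slightly polar); at pH 6-7
# it is almost fully ionized, making the peptide less hydrophobic.
for ph in (2.0, 4.5, 6.5):
    print(f"pH {ph}: {fraction_ionized_acid(ph):.3f} ionized")
```

At pH 2 the carboxyl groups are essentially un-ionized, which is why acidic mobile phases (e.g. with TFA) suppress this source of mixed-mode retention.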
2.6.1.5 Temperature
Temperature can have a profound effect on reversed phase chromatography
[9],[10].
Fig.2.6.1.5a Fig.2.6.1.5.b
2.6.1.6 Reversed –phase HPLC- MS
Reference [11],[12].
Fig.2.6.1.6a
Fig.2.6.1.6b
2.6.2 Recent Advances
2.6.2.1 Columns
Column developments in HPLC have benefited protein and peptide separations in a
number of ways.
a) Atlantis™ dC18 columns
These are a fully LC/MS-compatible line of universal C18 columns that offer a
balance of retention for both polar and non-polar compounds. Atlantis™ dC18
columns are compatible with aqueous mobile phases, provide enhanced low-pH
stability and are available in a wide variety of column configurations, ranging from
nano-scale to preparative.
Fig2.6.2.1a1 Fig2.6.2.1a2
Common challenges with conventional C18 columns, and the corresponding
behaviour of Atlantis™ dC18 columns:

Challenge: Little or no retention of polar compounds.
- Conventional C18: re-run using separate methods for polar compounds;
  increased method development time and labor.
- Atlantis™ dC18: polar compounds are retained longer; one column and method
  can be used for polar and non-polar compounds; decreased labor cost.
Fig2.6.3.a1

Challenge: Method requires 100% aqueous mobile phase for the desired separation.
- Conventional C18: loss of retention is observed.
- Atlantis™ dC18: the packing material is tested with highly polar analytes in
  100% aqueous conditions, ensuring its utility in aqueous conditions.

Challenge: Sudden loss of analyte retention observed when using highly aqueous
mobile phases.
- Conventional C18: organic modifier must be run through the column to rewet and
  regenerate it; increased labor and solvent costs; decreased throughput;
  reproducibility issues.
- Atlantis™ dC18: columns do not lose retention in 100% aqueous mobile phases;
  less time spent rewetting columns, lowering labor costs; increased throughput.
Fig2.6.3.a2

Challenge: Short column lifetime in acidic mobile phases.
- Conventional C18: high cost due to frequent column replacement; increased
  instrument downtime.
- Atlantis™ dC18: proprietary difunctional bonding chemistry gives low-pH
  stability and longer column lifetime; decreased costs associated with column
  replacement and instrument maintenance.

Challenge: Retaining polar compounds on a conventional C18 column results in
increased or infinite retention of non-polar compounds.
- Conventional C18: multiple columns are required to separate analytes with a
  wide range of polarities; increased method development time, labor and column
  costs; decreased throughput.
- Atlantis™ dC18: one column and method can be used for polar and non-polar
  compounds; easier and faster method development; increased throughput.
Fig2.6.3.a4

Challenge: Severe peak tailing is observed for polar bases.
- Conventional C18: the method fails system suitability guidelines for peak
  tailing; increased method development time.
- Atlantis™ dC18: columns are optimally endcapped and provide excellent peak
  shapes with MS-compatible mobile phases; easier and faster method development.
Fig2.6.3.a5
Fig2.6.3.a6
c) Column packing processes (The future of preparative chromatography)
Introduced by Phenomenex, AXIA™ is an advanced column packing and hardware
design that eliminates bed collapse as a source of failure in short prep columns.
Using patent-pending Hydraulic Piston Compression (HPC™) technology, several
fundamental problems faced daily by preparative chromatographers have been
solved.
Fig.2.6.2.1.c1
- Peak distortion or asymmetry, reducing the return on each purification cycle
Fig.2.6.2.2.a
b) Features
Fig.2.6.2.2.b3
Traceability and intelligence:
- ACQUITY UPLC console with a calculator to ease method development and
  transfer from HPLC to UPLC
- Connections INSIGHT services to monitor system operational characteristics
- Empower™ or MassLynx™ software for control and data management
- eCord™ technology electronically stores all of the information needed to track
  your experiment
The Waters nanoACQUITY Ultra Performance LC™ system has been designed to
achieve separation at nanoflow rates without flow-splitting, offering significant
improvements in robustness, reproducibility and simplicity of operation over
conventional nanoflow separation technologies. The nanoACQUITY UPLC system
benefits directly from the holistic design of the ACQUITY UPLC system. Its
optimized fluidic design, together with the proprietary nanoACQUITY 1.7 µm BEH
chemistry, enables significantly enhanced analysis of the lowest-abundance
peptides.
Fig.2.6.2.3
Fig.2.6.3.e2
Agilent's HPLC-Chip is the first microfluidic chip-based device that can carry out
nanoflow high performance liquid chromatography (HPLC). The centerpiece of
Agilent's new HPLC-Chip technology is a reusable microfluidic polymer chip.
Smaller than a credit card, the HPLC-Chip seamlessly integrates the sample
enrichment and separation columns of a nanoflow LC system with the intricate
connections and spray tip used in electrospray mass spectrometry, directly on the
polymer chip. The technology eliminates 50% of the traditional fittings and
connections typically required in a nanoflow LC/MS system, dramatically reducing
the possibility of leaks and dead volumes and significantly improving ease of use,
sensitivity, productivity and reliability during analysis.
Fig.2.6.2.4.1
The second component of the HPLC-Chip technology is the HPLC-Chip/MS
interface. A chip is inserted into the interface, which mounts on an Agilent mass
spectrometer. The design configuration guarantees that the electrospray tip is in the
optimal position for mass analysis when the chip is inserted. Replacement of the chip
is simple and can be completed in a few seconds as opposed to much longer times
required to change out nanoLC columns. The HPLC-Chip interface will be available
as a standard module within the Agilent 1100 Series LC system.
Fig.2.6.2.4.2
Fig.2.6.3.f1
Fig.2.6.3.f2
www.chem.agilent.com (for more information)
2.6.4 Applications
1. Expression and purification of recombinant LL-37 from Escherichia coli
This study reports, for the first time, a method to express LL-37 in E. coli and
purify it. LL-37 is a 37-residue cationic, amphipathic α-helical peptide. Factor Xa
was used to cleave the 4.5 kDa LL-37 from the GST-LL-37 fusion protein, and the
peptide was purified using reverse-phase HPLC on a Vydac C18 column with a
final yield of 0.3 mg/ml. The HPLC-purified protein was confirmed to be
LL-37 by Western blot analysis and MALDI-TOF mass spectrometry.
The concentration of LL-37 was determined by comparing its peak area with
that of chemically synthesized LL-37 [14].
Fig.2.6.4.1
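The peak-area comparison described above reduces to a simple proportionality, assuming the detector response is linear and identical for the recombinant and synthetic peptide. A minimal sketch with hypothetical numbers (the areas and standard concentration below are invented for illustration):

```python
def conc_from_peak_area(area_sample, area_standard, conc_standard):
    """Estimate analyte concentration from an HPLC peak-area ratio,
    assuming a linear detector response shared by sample and standard."""
    return area_sample / area_standard * conc_standard

# Hypothetical numbers: a synthetic standard of known concentration
# 0.5 mg/ml giving peak area 1200, and a sample peak area of 720.
sample_conc = conc_from_peak_area(720.0, 1200.0, 0.5)  # -> 0.3 mg/ml
```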
2. Enabling significant improvements for peptide mapping with UPLC™
Fig.2.6.4.2.b
To show that UPLC can resolve the same number of peaks in a peptide map
as HPLC but in less time, the separation
of an enolase digest was performed with both techniques at a flow rate of
100 µl/min. The UPLC separation shows the same number of peaks and a
similar overall elution pattern as the HPLC separation, but in half the time.
UPLC therefore offers the potential to reduce cycle times for peptide maps.
Fig.2.6.4.2.c
UPLC is particularly important when the peptide map is used to detect
modified peptides. Higher resolution ensures that modified peptides are
resolved from the unmodified form, as well as from other peptides in the
digest. UPLC should be the technique of choice for detecting all the peptides
in a sample.
Fig.2.6.4.2.d
The separation of several peptides with formic acid was compared with TFA on
a UPLC column with MS detection. With formic acid, the peak heights are
about 3-fold higher. This result indicates that UPLC columns perform
extremely well under conditions that are best for ESI-MS.
Fig.2.6.4.2.e
Fig.2.6.4.2.f
3. Nano-HPLC for determination of proteins in infant formula
Fig.2.6.4.3.1
Fig.2.6.4.3.2
Fig.2.6.4.4.a Fig2.6.4.4.b
Fig.2.6.4.4.e
2.6.5 Relevant web sites
www.chem.agilent.com
www.waters.com
www.axiaprep.com
www.polymerlabs.com
2.6.7 References
14. Expression and purification of a recombinant LL-37 from Escherichia coli
(online at www.sciencedirect.com, 2006).
15. Enabling Significant Improvements for Peptide Mapping with UPLC.
Jeffrey R., Thomas E. Wheat, Beth L. Gillence-Castro and Ziling Lu
(www.waters.com, Waters Corporation, 2005).
16. Determination of proteins in infant formula
(online at www.sciencedirect.com, 2006).
17. Comparison of HPLC-Chip/MS with conventional nanoflow LC/MS for
proteomic analyses. Martin Vollmer, Christine Miller and Georges L. Gauthier
(online at www.agilent.com/chem/hplc-chip, 2005).
2.8.1 Introduction
Membrane proteins are protein molecules that are attached to, or associated with, the
membranes of mitochondria, chloroplasts, cells and organelles. Membrane proteins
have a crucial role in many cellular and physiological processes: they are
essential mediators of the transfer of material and information between cells and their
environment and between compartments within cells [10]. Based on their attachment
to the membrane, membrane proteins can be classified into two groups: integral
membrane proteins and peripheral membrane proteins. An integral membrane protein
(IMP), also called an intrinsic protein, is a protein molecule (or assembly of proteins)
that in most cases spans the biological membrane with which it is associated
(especially the plasma membrane) or which, in any case, is sufficiently embedded in
the membrane to remain with it during the initial steps of biochemical purification. In
general, IMPs can be divided into three groups: transmembrane, membrane-
associated and lipid-linked. Peripheral membrane proteins, or extrinsic proteins, on
the other hand, do not interact with the hydrophobic core of the phospholipid
bilayer; instead they are usually bound to the membrane indirectly, by interactions
with integral membrane proteins, or directly, by interactions with lipid polar head
groups. (Source:
http://www.nlm.nih.gov/cgi/mesh/2006/MB_cgi?mode=&term=Membrane+proteins)
This review will describe some recent advances, compare and evaluate these
techniques and discuss three specific applications.
The development of genomic and proteomic technologies has opened new
possibilities for the purification of membrane proteins. In an Indian study, the gene
encoding an outer membrane protein designated ompTS was amplified by PCR,
excluding the region coding for the signal peptide, cloned in the pQE 30-UA vector
and expressed by induction with isopropyl thiogalactoside (IPTG) [6]. In a study of
the conformation of the CadF protein, which was very hard to purify from
Campylobacter (the most common bacterial agent of gastroenteritis) membranes,
Mamelli and colleagues developed a novel strategy to produce significant quantities
of a recombinant N-terminal domain of the CadF protein (46.5 µg/mg of bacterial dry
weight) and of the native CadF protein (3.5 µg/mg of bacterial dry weight). The
nucleotide sequence encoding the N-terminal domain of the CadF protein was cloned
in a pET-based expression vector; the recombinant protein was then produced in
Escherichia coli, purified from inclusion bodies, and refolded [10]. Moreover, recent
studies in this area have also provided new insight into the response of host cells to
membrane protein expression and into the mechanism of membrane insertion [4, 16].
Experimental procedures for handling and isolating integral membrane proteins are
generally more challenging than those for their soluble counterparts, since the former
require purification in detergent. A simple and cost-efficient detergent screening
strategy is the most important prerequisite for large-scale protein production and
crystallization. Recent studies in Sweden focused on extracting and purifying the
recombinant ammonium/ammonia channel, AmtB, from Escherichia coli: 26
detergents, 4 types of chromatography columns and various buffer conditions were
screened in a 96-well plate format. Large-scale protein purification and subsequent
crystallization screening resulted in AmtB crystals diffracting to low resolution with
three detergents: UDM, DDM and Cy6. The researchers suggested that excluding
detergents that are not useful for high-yield extraction of a specific protein (e.g.
because they destabilize the protein) may be very helpful in minimizing the number
of detergents during crystallization screening. The fact that no crystals of AmtB were
grown in the presence of OG and LDAO might be such an indication. [14]
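The 96-well screening approach described above is, at bottom, a combinatorial enumeration of conditions distributed across plates. A minimal sketch (the detergent, column and buffer names are placeholders, not the study's actual reagents):

```python
from itertools import product

def build_screen(detergents, columns, buffers, wells_per_plate=96):
    """Enumerate detergent x column x buffer combinations and split
    them into 96-well plates, one screening condition per well."""
    combos = list(product(detergents, columns, buffers))
    return [combos[i:i + wells_per_plate]
            for i in range(0, len(combos), wells_per_plate)]

# Hypothetical screen mirroring the scale in the text: 26 detergents
# and 4 column types (names invented), with a single buffer.
detergents = [f"detergent_{i}" for i in range(26)]
columns = ["IMAC", "IEX", "SEC", "HIC"]
plates = build_screen(detergents, columns, ["buffer_A"])
# 26 * 4 = 104 conditions -> 2 plates (96 wells + 8 wells)
```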
Although some progress has been made in improving these techniques, many
problems in the purification of membrane proteins remain to be solved. This part
will mainly discuss the difficulties of membrane protein purification technologies
and how to approach these hard problems.
Several technical difficulties in the investigation and separation of membrane
protein complexes originate from their nature as membrane proteins: (1) they are
very hydrophobic and have one or several transmembrane segments, or associate
closely with the membrane; (2) in their functional form, many of them comprise
(homologous or heterologous) multi-subunit complexes; (3) such membrane protein
complexes contain many cofactors and, inevitably, lipids; (4) some membrane
protein complexes have several peripheral proteins which are functionally
important but easily detached during the isolation process [5].
The first difficulty, owing to the hydrophobicity of membrane proteins, is choosing
the right detergent for an efficient purification of the membrane protein of interest.
There are dozens of commonly used detergents, many less-characterized but still
potentially useful ones, and many novel detergents under development. It has also
been reported that some compartments of the cell membrane show resistance
towards certain detergents. Moreover, mixtures of detergents are sometimes used
during purification and crystallization. Altogether, the detergent parameter space
becomes very large. On the other hand, screening too few detergents may result in
poor yield, unstable protein and/or no protein crystals, and is not recommended.
The strategy presented in [14] therefore allows tens of detergents to be screened,
for their efficiency in pure protein production and crystallization, easily and
simultaneously, producing reliable and reproducible results at very low cost. For
instance, a detergent such as sodium dodecyl sulfate (SDS) can be used to dissolve
cell membranes and keep membrane proteins in solution during purification;
however, because SDS causes denaturation, milder detergents such as Triton X-100
or CHAPS can be used to retain the protein's native conformation during
purification [20].
This section will discuss three recent applications; two of them are on the purification
of Gram-negative bacteria’s outer membrane proteins, one is on the prostate-specific
membrane antigen (PSMA).
The study by Beis and co-workers described the development of a two-step
purification protocol for two Escherichia coli outer integral membrane proteins,
Wza and osmoporin C (OmpC). In this research, the two very different proteins
were purified to homogeneity by anion exchange and size exclusion
chromatography, and the purity of the samples was judged by electrophoretic
analysis, mass spectrometry, single particle analysis, three-dimensional (3D)
crystallisation and X-ray diffraction. First, bacterial cells were disrupted and a
crude extract, prepared by the same procedure for both the Wza and OmpC
proteins, was fractionated by ultracentrifugation. The membranes were solubilised
overnight at room temperature with rolling, and insoluble materials were then
removed by centrifugation. Second, Wza was purified by anion exchange
chromatography using a linear gradient of 0–100% 1 M NaCl elution buffer,
followed by size exclusion chromatography [2]. Finally, the purification of the
OmpC protein was carried out using the same purification protocol as for Wza [1].
Fig.2.8.4.2 (a) SDS-PAGE of OmpC protein. Lane 1 shows the extraction of OmpC from outer membranes, lane
2 the OmpC after the anion exchange chromatography step and lane 3 pure OmpC after size exclusion
chromatography. Molecular weight markers are shown on the left side of the SDS-PAGE in kDa. (b) 3D crystals of
OmpC. Scale bar 100 mm. [2]
Neisserial porins represent more than 60% of outer membrane proteins [12], so the
second application concerns the outer membrane protein PorB of Neisseria
meningitidis. This protein has been shown to up-regulate the surface expression of
the co-stimulatory molecule CD86 and of MHC class II, to be involved in the
prevention of apoptosis by modulating the mitochondrial membrane potential, and
to form pores in eukaryotic cells [11]. As an outer membrane protein, isolation of
its native trimeric form is complicated by its insoluble nature and requires the
presence of detergent throughout the whole procedure.
Another case concerns PSMA, a membrane protein that has attracted significant
attention as a target for immunoscintigraphic and radioimmunotherapeutic
applications in prostate cancer. Here, the purification of native PSMA from LNCaP
cells was optimized using conformational-epitope-specific antibody-affinity
chromatography. In contrast to general affinity chromatography, this
chromatography for the purification of native PSMA employs resin-bound
anti-PSMA monoclonal antibody 3C6, which reacts with a protein conformational
epitope present in the extracellular portion of human PSMA [21]. As this antibody
binds PSMA only when it is in a native conformation, only native PSMA is retained
by the affinity resin. Three further methods to purify PSMA were then explored,
and a comparison of these methods is outlined in Table 2.8.4 [8]. Western blot
analysis and an HPLC-based enzymatic activity assay were used to compare the
yields, and the results demonstrated that all three methods provided similar yields
of PSMA. Method A resulted in the least amount of PSMA in a non-native
conformation (0.9%), suggesting that PSMA initially purified by this method was
predominantly in an active conformation. These results were consistent with the
enzymatic activity data obtained, in which PSMA purified by this method exhibited
the greatest enzymatic activity compared with the other two methods. The ratio of
purified PSMA in a native and active conformation was determined by quantifying
the amount of non-native PSMA not retained in a second antibody-affinity
isolation. The low amount of denatured PSMA obtained with Method A compared
with Method B demonstrates that high-pH conditions promote denaturation of
PSMA. In comparison with Method B, Method C confirmed the importance of the
essential Zn2+ cofactor during the desalting step [8]. This result also suggests that
Zn2+ may have a stabilizing role in addition to its functional role in PSMA's
enzymatic activity. In other words, the addition of both a neutralization step and
the inclusion of Zn2+ in the equilibration buffer in the desalting step provides a
considerable enhancement in the yield of active PSMA from LNCaP cells.
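The native-fraction measurement described above can be expressed as a one-line calculation, assuming the flow-through of the second antibody-affinity step captures all of the non-native material. The amounts below are illustrative, echoing the 0.9% figure quoted for Method A:

```python
def native_fraction(total_purified, nonnative_unretained):
    """Fraction of purified protein in a native conformation, estimated
    by quantifying the non-native material NOT retained on a second,
    conformation-specific antibody-affinity column."""
    return 1.0 - nonnative_unretained / total_purified

# Hypothetical: 100 ug of purified PSMA, of which 0.9 ug appears in
# the flow-through of the second affinity step (cf. 0.9% for Method A).
f = native_fraction(100.0, 0.9)  # -> 0.991
```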
2.8.6 References
13. Mohanty, A.K. & Wiener, M.C. (2004) Membrane protein expression and
production: effects of polyhistidine tag length and position. Protein Expression and
Purification. 33, 311–325.
14. Niegowski, D., Hedrén, M., Nordlund, P. & Eshaghi, S. (2006) A simple strategy
towards membrane protein purification and crystallization. International Journal of
Biological Macromolecules. 39, 83–87.
15. Raymond, F., Rolland, D., Gauthier, M. & Jolivet, M. (1998) Purification of a
recombinant protein expressed in yeast: optimization of analytical and preparative
chromatography. Journal of Chromatography B. 706, 113–121.
16. Reinhard, G. (2006) Understanding recombinant expression of membrane
proteins. Current Opinion in Biotechnology. 17, 337–340.
17. Rucevic, M., Clifton, J.G., Huang, F., Li, X., Callanan, H., Hixson, D.C.& Josic, D.
(2006) Use of short monolithic columns for isolation of low abundance membrane
proteins. Journal of Chromatography A. 1123, 199–204.
18. Schwabe, T.M.E., Gloddek, K., Schluesener, D. & Kruip, J. (2003) Purification of
recombinant BtpA and Ycf3, proteins involved in membrane protein biogenesis in
Synechocystis PCC 6803. Journal of Chromatography B. 786, 45–59.
19. Sharov, V.S., Galeva, N.A., Knyushko, T.V., Bigelow, D.J., Williams, T.D. &
Schoneich, C. (2002) Two-dimensional separation of the membrane protein
sarcoplasmic reticulum Ca–ATPase for high-performance liquid chromatography–
tandem mass spectrometry analysis of posttranslational protein modifications.
Analytical Biochemistry. 308, 328–335.
20. Tamm, L.K. & Liang, B. (2006) NMR of membrane proteins in solution. Progress
in Nuclear Magnetic Resonance Spectroscopy. 48, 201–210.
21. Tino, W.T., et al. (2000) Isolation and characterization of monoclonal antibodies
specific for protein conformational epitopes present in prostate-specific membrane
antigen (PSMA). Hybridoma. 19, 249–257.
22. Wetzler, L.M., Blake, M.S. & Gotschlich, E.C. (1988) Characterization and
specificity of antibodies to protein I of Neisseria gonorrhoeae produced by injection
with various protein I-adjuvant preparations. J. Exp. Med. 168, 1883–1897.
Chapter 2.9 Industrial Scale Purification of Proteins
Guo lu
2.9.1 Introduction
The expansion of techniques and methods for protein purification has been an
indispensable prerequisite for many of the advances made in the large-scale
purification of proteins in biotechnology. Most of the products of biotechnology
are proteins, and these proteins must be produced in large volumes in purified
form. Generally, if contaminants can be detected, they must be removed or proven
to be harmless. Besides that, the protein must be purified away from other
proteins, and the nucleic acids, carbohydrates, lipids and any other materials in
the sample should not be ignored.
To purify proteins, their inherent similarities and differences are exploited. The
similarities among proteins are used to purify them away from non-protein
contaminants, while the differences are used to purify one protein from another.
Proteins differ from each other in size, shape, charge, hydrophobicity, solubility
and biological activity. Additionally, the protein product must preserve its
biological activity. Notably, a method that works deftly in a research laboratory
may fail miserably in industrial production, which must operate at large scale and
be reproduced exactly. The Three Phase Purification Strategy is used to support
the development of purification processes for therapeutic proteins in the
pharmaceutical industry. The overall scheme is shown in Figure 2.9.1. [1]
Ion exchange chromatography (IEX or IEC) has become one of the best-known
methods for protein and peptide purification because it offers scalability, high
specificity, and wide choice of column materials. It relies on reversible charge
interactions between a charged biomolecule (such as a protein or nucleic acid) and
an oppositely charged resin-based matrix.
(http://www.biocompare.com/spotlight.asp?id=255)
With the development of the modern biotechnology industry and its demand for
highly purified pharmaceutical proteins, increased emphasis has been placed on
entire processes with respect to economy, capacity and the quality of the resultant
product. Usually, the separating power required is dictated by the need to resolve
the product not only from the background impurities resulting from the
fermentation but also from degradation products and analogues of the drug itself.
In many cases, hydrophobic interaction chromatography (HIC) is an ideal
separation method.
(http://teachline.ls.huji.ac.il/72682/Booklets/AMERSHAM_hydrophobic_interactionManual.pdf)
Alois, Waltraud and Robert believed that the correct folding of solubilized
recombinant proteins is of key importance for their production in industry. Many
recent improvements have been made by the use of immobilized metal affinity
chromatography and by mimicking the natural folding process with artificial
chaperones. [2]
Gejing and Gautam pointed out that in the affinity-based approach, compounds are
screened based on their binding affinities to target molecules. The interaction
between targets and compounds can be evaluated directly, by monitoring the
formation of non-covalent target–ligand complexes (direct detection), or indirectly,
by detecting the compounds after separating bound compounds from unbound
(indirect detection). Various techniques, including high performance liquid
chromatography (HPLC)–MS, size exclusion chromatography (SEC)–MS, frontal
affinity chromatography (FAC)–MS and desorption/ionization on silicon
(DIOS)–MS, can be applied. [3]
While possibly the simplest column chromatography technique, gel filtration (GF)
chromatography is one of the most flexible, since it can be performed under a
diversity of physical and chemical conditions and normally does not involve a
complicated protocol. GF (also called size-exclusion chromatography) separates
globular proteins according to their molecular weights: the column consists of a
mobile phase and a porous stationary phase, and the liquid volume a solute can
"see" within the column depends on its size.
(http://www.the-scientist.com/article/display/12820/)
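The size-based separation principle can be made quantitative through the partition coefficient Kav and a calibration line fitted to standards of known molecular weight. A minimal sketch; the column volumes and calibration constants below are hypothetical, not values from the text:

```python
def kav(ve, v0, vt):
    """Partition coefficient of a solute in gel filtration:
    Kav = (Ve - V0) / (Vt - V0), where Ve is the elution volume,
    V0 the void volume and Vt the total bed volume."""
    return (ve - v0) / (vt - v0)

def mw_from_kav(k, slope, intercept):
    """Estimate molecular weight from a calibration line fitted to
    standards of known size: log10(MW) = slope * Kav + intercept."""
    return 10 ** (slope * k + intercept)

# Hypothetical column: void volume 8 ml, total volume 24 ml, and an
# invented calibration line log10(MW) = -2.0 * Kav + 5.6.
k = kav(ve=14.0, v0=8.0, vt=24.0)               # (14-8)/(24-8) = 0.375
mw = mw_from_kav(k, slope=-2.0, intercept=5.6)  # ~7.1e4 Da
```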
In recent research by Damien et al., a heterologous expression system was set up
in Sf9 insect cells allowing the expression and production of large amounts of a
pure, active human protein, with gel filtration used to purify the target protein. [4]
EBA uses equipment that is familiar to most users of standard liquid
chromatography. The column has a flow adapter which is positioned to suit the
specific step of resin preparation or protein purification. In addition, a series of
pumps and valves, connected through the adapter and the bottom of the column,
controls the flow rate and direction of the buffer and sample loading. It is
therefore possible to perform initial EBA trials with a little ingenuity and standard
chromatographic equipment.
(http://pubs.acs.org/subscribe/journals/mdd/v04/i12/html/12toolbox.html)
Three anion-exchange expanded bed adsorption (EBA) matrices (Streamline
DEAE, Streamline Q XL and Q Hyper Z) were evaluated using EGFP from an
ultrasonic homogenate of Escherichia coli in the research of Cabanne et al. Based
on the results, the two other matrices gave a good purification of the EGFP
(7–15-fold), but the Q Hyper Z matrix appeared to give the best results: it is
composed of small, dense beads, which provide a higher exchange surface and
thus better mass transfer. [6]
Although it is commonly used in industrial-scale purification, some aspects still
influence the product. Depending on the rigidity of the media, the loss of wall
support in a large-scale column will have a smaller or greater impact on bed
compression, with associated deterioration of the flow/pressure properties of the
packed bed. The effect of bed compression can be checked by running a
pressure/flow-rate curve, as outlined under ''Packing large scale columns''. Zone
spreading can also be caused by non-column factors such as increased internal
volumes of pumps, valves and monitoring cells, and different lengths and
diameters of pipes or tubing. If all of the above aspects of scaling up are taken
into consideration, chromatographic variability is normally not a big issue when
scaling up.
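A common way to avoid the scale-up pitfalls discussed above is linear scale-up: bed height and linear flow velocity are kept constant while the column diameter grows, so sample load and volumetric flow rate scale with the cross-sectional area. A sketch with illustrative dimensions (not from the text):

```python
def scale_up(d_small_cm, d_large_cm, flow_small_ml_min):
    """Linear chromatography scale-up: with bed height and linear flow
    velocity held constant, load and volumetric flow rate scale with the
    column's cross-sectional area, i.e. with diameter squared."""
    factor = (d_large_cm / d_small_cm) ** 2
    return {"scale_factor": factor,
            "flow_ml_min": flow_small_ml_min * factor}

# Hypothetical: a 1.6 cm lab column run at 1 ml/min, scaled to a 16 cm
# production column -> 100x the area, so 100 ml/min at the same velocity.
params = scale_up(1.6, 16.0, 1.0)
```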
In the study of Alois, Christine and Rainer, they considered that HIC exploits the
hydrophobic properties of protein surfaces for separation and purification through
interactions with chromatographic sorbents of a hydrophobic nature. In contrast
to reversed-phase chromatography, this methodology is less detrimental to the
protein and is therefore more commonly used at industrial scale, as well as at
bench scale, when the conformational integrity of the protein is important. [7]
One factor to consider when choosing a GF medium is the exclusion limit, or
molecular weight limit, of the pores: proteins over this limit will be completely
excluded from the pores and will not be separated. It is also important to choose
the type of resin. A diversity of resins exists, ranging from silica-based to
polymeric. Some companies recommend cross-linked resins, which are
advantageous for high-pressure purifications as they do not compress and lose
porosity under high-pressure conditions. Nevertheless, non-cross-linked resins
are appropriate for routine purifications.
One can choose from a variety of column configurations based on the purpose of
the purification and the amount of material available. Columns with smaller inner
diameters are used for high-resolution separations when very little protein is
needed, while large columns may be required for industrial protein purifications.
Analytical columns with diameters of up to 5 mm are the RPC columns in daily
use for routine analytical and purification work. Preparative columns are larger in
diameter and can be used for the purification of large quantities of proteins, in the
milligram-to-gram range.
The results of the research of Janine, Colin and Robert showed the excellent
potential of one-step RP-HPLC for the purification of recombinant proteins from
cell lysates, where high yields of purified product and greater purity are achieved
compared to affinity chromatography. They suggested that this approach was also
successful in purifying even trace levels (<0.1% of the total content of the crude
sample) of TM 1–99 from a cell lysate. [8]
In the more traditional packed-bed methods, clogging occurs when particulate
matter and cell debris cannot flow around the closely packed resin beads, because
the resin is confined between the bottom of the column and the flow adapter. In
contrast, EBA columns are fed from below, and the adapter is held away from the
packed resin level to give the resin room to expand; as a consequence, spaces are
created between the beads.
2.9.4 Applications of the Technology
IEX separates proteins by using differences in charge to give very high-resolution
separations with high sample-loading capacity. It relies on the reversible
interaction between a charged protein and an oppositely charged chromatographic
medium. Conditions are then altered so that bound substances are eluted
differentially; this elution can be performed by increasing the salt concentration or
changing the pH. During binding, target proteins are concentrated, and they are
collected in a purified, concentrated form.
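The binding and elution logic described above can be sketched in two small functions: one picks the exchanger from the protein's net charge at the working pH (below its pI a protein is net positive, above it net negative), and one gives the salt concentration at any point of a linear gradient. The pI value and gradient endpoints below are illustrative assumptions, not values from the text:

```python
def choose_exchanger(protein_pI, working_pH):
    """Pick an ion exchanger from the protein's net charge at the
    working pH: net negative binds an anion exchanger, net positive
    binds a cation exchanger."""
    if working_pH > protein_pI:
        return "anion exchanger"
    if working_pH < protein_pI:
        return "cation exchanger"
    return "no net charge at this pH"

def gradient_salt(t, t_total, c_start=0.0, c_end=1.0):
    """Salt concentration (M NaCl) at time t of a linear gradient
    running from c_start to c_end over t_total."""
    return c_start + (c_end - c_start) * t / t_total

# A hypothetical protein with pI 5.5 run at pH 8.0 is net negative:
choose_exchanger(5.5, 8.0)   # -> 'anion exchanger'
gradient_salt(15.0, 60.0)    # -> 0.25 M NaCl at the 15 min mark
```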
Normally IEX is applied to bind the target molecule; however, it can also be used
to bind impurities if necessary. IEX can be repeated at different pH values to
separate several proteins which have noticeably different charge properties, which
can be used to advantage during a multi-step purification. [1] In their research,
Khandeparkar and Bhosle used this method for the isolation, purification and
characterization of a xylanase and obtained the expected results, indicating that
the method is effective for industrial applications. [9]
Tony, Lloyd and Alfons pointed out that they have developed expedient and
reliable methods to isolate cyclosporin synthetase for in vitro biosynthesis of
cyclosporins, using hydrophobic interaction chromatography (HIC). The industrial
implementation of an in vitro biosynthetic approach could potentially prove useful
for the production of important therapeutic cyclosporins which occur only as
minor fermentation by-products. [10]
Figure 2.9.4.1 Typical HIC gradient elution
EBA is a single-pass operation in which target proteins are purified from crude
sample without the need for separate clarification, concentration and initial
purification to remove particulate matter. Figure 44a shows the steps involved in
an EBA purification and Figure 44b shows a typical EBA elution pattern. In their
research, Jian-Feng, Guang-Ce and Cheng-Kui applied several methods to isolate
and purify R-phycoerythrin from the red alga Polysiphonia urceolata Grev. on a
large scale. The results indicate that, using expanded bed adsorption combined
with ion-exchange chromatography or hydroxyapatite chromatography,
R-phycoerythrin can be purified from frozen P. urceolata on a large scale. [14]
http://www.biocompare.com/index.asp
http://www.biocompare.com/spotlight.asp?id=255
http://teachline.ls.huji.ac.il/72682/Booklets/AMERSHAM_hydrophobic_interactionManual.pdf
http://www4.amershambiosciences.com/aptrix/upp00919.nsf/Content/LabSep_EduC~LC_tech~AC
http://www.the-scientist.com/article/display/12820/
http://en.wikipedia.org/wiki/Reversed-phase_chromatography
http://pubs.acs.org/subscribe/journals/mdd/v04/i12/html/12toolbox.html
2.9.6 References
[14]Niu, J.-F., G.-C. Wang, et al. (2006). "Method for large-scale isolation and
purification of R-phycoerythrin from red alga Polysiphonia urceolata Grev." Protein
Expression and Purification 49(1): 23-31.
Chapter 2.10 Determination of Protein Concentration and
Purity
Xiao zheng Mu
2.10.1 Introduction
In the biuret, Lowry, Bradford and BCA methods, chemical reagents are added to
protein solutions to develop a color whose intensity is measured in a
spectrophotometer. A “standard protein” of known concentration is also treated
with the same reagents, and a calibration curve is constructed.
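The calibration-curve procedure described above amounts to a least-squares line through the standards and an inverse read-off for the unknown. A minimal sketch; the standard masses and absorbances below are invented for illustration:

```python
def fit_line(xs, ys):
    """Least-squares fit of y = m*x + b (absorbance vs. protein mass)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    m = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return m, my - m * mx

def unknown_from_absorbance(a, m, b):
    """Read an unknown off the calibration curve: mass = (A - b) / m."""
    return (a - b) / m

# Hypothetical standard-protein masses (ug) and measured absorbances:
masses = [0.0, 5.0, 10.0, 15.0, 20.0]
absorb = [0.00, 0.11, 0.22, 0.33, 0.44]
m, b = fit_line(masses, absorb)
mass = unknown_from_absorbance(0.275, m, b)  # -> 12.5 ug
```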
Electrophoresis is an analytical tool by which biochemists can examine the
movement of charged molecules in an electric field. Several electrophoretic
techniques are very helpful for the analysis of protein concentration and
purity: Polyacrylamide Gel Electrophoresis (PAGE), Sodium Dodecyl Sulfate-
Polyacrylamide Gel Electrophoresis (SDS-PAGE), Isoelectric Focusing (IEF), Two-
Dimensional Electrophoresis (2-DE), Capillary Electrophoresis (CE) and
Immunoelectrophoresis (IE). The spectrophotometric method relies on a direct
spectrophotometric measurement. Two kinds of spectrophotometry can be used
in protein concentration and purity determination: Ultraviolet-Visible (UV)
Absorption Spectrophotometry and Fluorescence Spectrophotometry.
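For UV absorption spectrophotometry, concentration follows from the Beer-Lambert law, and the A280/A260 ratio gives a quick purity check, since nucleic acid contamination lowers it. The extinction coefficient and absorbance readings below are hypothetical:

```python
def conc_beer_lambert(a280, epsilon, path_cm=1.0):
    """Beer-Lambert law: c = A / (epsilon * l), in the units of epsilon
    (a molar extinction coefficient gives mol/L)."""
    return a280 / (epsilon * path_cm)

def purity_ratio(a280, a260):
    """A280/A260 ratio; for protein largely free of nucleic acid the
    ratio is typically around 1.7-2.0, and contamination drives it down."""
    return a280 / a260

# Hypothetical protein with molar extinction coefficient
# 43,824 M^-1 cm^-1, reading A280 = 0.5 in a 1 cm cuvette:
c_molar = conc_beer_lambert(0.5, 43824.0)  # mol/L
ratio = purity_ratio(0.5, 0.28)
```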
High-performance liquid chromatography (HPLC) is ideally suited for the
separation and identification of amino acids, proteins, nucleic acids and many
other biologically active molecules. The use of nonpolar chemically bonded
stationary phases with a polar mobile phase is referred to as reverse-phase HPLC
(RP-HPLC). Amino acid analysis, by automated Edman degradation, can also be
used to determine the amount of protein present. None of the methods is perfect,
because each depends on the amino acid content of the protein. However, each
will provide a satisfactory result if the proper experimental conditions are used
and/or a suitable standard protein is chosen. Other important factors in method
selection include the sensitivity and accuracy desired, the presence of interfering
substances, and the time available for the assay.
2.10.2 Recent Advances
In the area of 2-DE, some new methods have appeared in recent years, such as
fluorescence 2-DE [1, 2]. Samples are labeled with one of three spectrally
different fluorescent dyes (Cyanine-2, Cyanine-3 or Cyanine-5), run in one gel,
and detected individually by scanning the gel at different wavelengths. After
quantitative analysis with the Phoretix/ImageMaster software, the differentially
expressed proteins can be identified. Owing to the principle of this method,
however, proteins without lysine residues cannot be labeled and are lost; at the
same time, the high cost of the whole system has limited its adoption [3]. In
2003, Yuan et al. [3] reported a new IPG strip application, called the
multi-strips-on-one-gel method. This new method not only improves the
reproducibility and resolving power of the 2-DE pattern, but also achieves a
high-throughput and economical format that is helpful for automated proteomic
research.
A new technology introduced during the past few years has greatly increased the
speed of spectrophotometric measurements: new detectors called photodiode
arrays are being used in modern spectrometers. Photodiodes are
composed of silicon crystals that are sensitive to light in the wavelength range 170-
1100 nm. Upon photon absorption by the diode, a current is generated in the
photodiode that is proportional to the number of photons. Linear arrays of
photodiodes are self-scanning and have response times on the order of 100
milliseconds; hence, an entire UV-VIS spectrum can be obtained with an extremely
brief exposure of the sample to polychromatic light. New spectrometers designed by
Hewlett-Packard and Perkin-Elmer use this technology and can produce a full
spectrum from 190 to 820 nm in one-tenth of a second [4].
The biuret assay has several advantages including speed, similar color development
with different proteins, and few interfering substances. Its primary disadvantage is its
lack of sensitivity [4].
The obvious advantage of the Lowry assay is its sensitivity, which is up to 100 times
greater than that of the biuret assay; however, more time is required for the Lowry
assay. Since proteins have varying contents of tyrosine and tryptophan, the amount
of color development changes with different proteins, including the bovine serum
albumin standard. Because of this, the Lowry protein assay should be used only for
measuring changes in protein concentration, not absolute values of protein
concentration [4]. The specialist literature contains a multitude of modifications
of the Lowry assay, principally aimed at reducing its high susceptibility to
interference. The Lowry method is adversely affected by a wide range of
non-protein substances; additives such as EDTA, ammonium sulfate and Triton
X-100 in particular are incompatible with the test.
The Bradford method is twice as sensitive as the Lowry or BCA test and is thus the
most sensitive quantitative dye assay. It is the easiest to handle and most rapid
method and has the additional advantage that a series of reducing substances (e.g.
DTT and mercaptoethanol), which interfere with the Lowry or BCA test, have no
adverse effect on results. However, it is sensitive to detergents. The main
disadvantage is that identical amounts of different standard proteins can cause
considerable differences in the resulting absorption coefficients. With a microassay
procedure, the Bradford assay can be used to determine proteins in the range of 1 to
20 µg. The Bradford assay shows significant variation with different proteins, but this
also occurs with the Lowry assay. The Bradford method is not only rapid but also
suffers very little interference from nonprotein components; the only known
interfering substances are detergents such as Triton X-100 and sodium dodecyl
sulfate. The many advantages of the Bradford assay have led to its wide adoption in
biochemical research laboratories [4]. Giraudi et al. [5] noted that a disadvantage of
the Bradford assay is the variability of colour development with different proteins: the
absorbance change per unit mass of protein varies with the nature of the protein
assayed. They believe that the Bradford method gives a lower protein concentration
than the real value because of the lower probability of interaction between dye
molecules and free lysine residues.
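The quantitation step shared by all of these dye-binding assays, fitting a standard
curve and then reading unknowns from it, can be sketched as follows. The
absorbance values here are hypothetical; a real curve must be measured with the
chosen standard protein such as BSA:

```python
# Sketch: linear standard-curve quantitation for a Bradford-type assay.
# The standard concentrations and A595 readings below are invented.

def fit_line(xs, ys):
    """Least-squares fit of y = slope*x + intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

standards = [0.0, 5.0, 10.0, 15.0, 20.0]    # ug/mL BSA (hypothetical)
a595 = [0.00, 0.11, 0.22, 0.33, 0.44]       # measured absorbances (hypothetical)
slope, intercept = fit_line(standards, a595)

def protein_conc(absorbance):
    """Interpolate an unknown's concentration from the standard curve."""
    return (absorbance - intercept) / slope

print(round(protein_conc(0.275), 2))  # 12.5 ug/mL
```

Because colour yield varies with the protein assayed, a curve fitted against BSA
only reports concentration "in BSA equivalents", which is exactly the limitation
discussed above.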
In the study of Lucarini and Kilikian [6], the methods of Lowry and Bradford were
compared regarding the level of interference of some substances used for
glucoamylase precipitation by ethanol. The Bradford method suffered no
interference, while the Lowry method gave protein concentration values increased
by 20% in the presence of ethanol and Tris. They also mentioned that despite
these interferences, the Lowry method can evaluate more accurately the increase of
purity during fractionation, due to its sensitivity to low molecular weight (below 6 kDa)
proteins and peptides.
The BCA protein assay is based on chemical principles similar to those of the biuret
and Lowry assays. This assay has the same sensitivity level as the Lowry and
Bradford assays. Its main advantages are its simplicity and its usefulness in the
presence of 1% detergents such as Triton or sodium dodecyl sulfate (SDS) [4]. This
test is easier to carry out and sensitivity can be varied using different temperatures.
Furthermore, the dye complex is very stable. However, this test is highly susceptible
to interference, although, on the positive side and unlike the Lowry method, it is
insensitive to detergents.
Perhaps the most difficult and inconvenient aspect of PAGE is the preparation of
gels. The monomer, acrylamide, is a neurotoxin and a cancer suspect agent; hence,
special handling is required. Other necessary reagents including catalysts and
initiators also require special handling and are unstable. In addition, it is difficult to
make gels that have reproducible thickness and compositions. Many researchers are
now turning to the use of precast polyacrylamide gels. Several manufacturers now
offer gels precast in glass or plastic cassettes. Gels for all experimental operations
are available including single percentage (between 3 and 27%) or gradient gel
concentrations and a variety of sample well configurations and buffer chemistries.
Several modifications of PAGE have greatly increased its versatility and usefulness
as an analytical tool [4].
Modern IEF techniques, both in soluble and immobilised buffers, have much to offer
to users. Adequate solutions exist to the two most noxious impediments to a well
functioning technique, namely lack of flexibility in modulating the slope of the pH
gradient and protein precipitation at the pI value. The solution is use of spacers and
novel mixtures of solubilisers, comprising sugar and high molarities of zwitterions. In
addition, an important spin-off of the IEF know-how seems to be gaining importance
in zone electrophoretic separations: the use of isoelectric buffers. Such buffers allow
delivery of extremely high voltage gradients, permitting separations of the order of a
few minutes, thus favouring very high resolution due to minimum, diffusion-driven,
peak spreading. As an extra bonus, by properly modulating the molarity of the
isoelectric buffer in solution, it is possible to move along the pH scale by as much as
0.3 to 0.4 pH units, thus optimising the pH window for separation [7].
2-DE is a more sensitive analytical method than either electrophoretic method alone
[8]. It is a standard method for judging protein purity. In addition, this technique is
becoming increasingly valuable in developmental biochemistry, where the increase
or decrease in intensity of a spot representing a specific protein can be monitored as
a function of cell growth [4]. However, classical 2-DE with a pH gradient generated
by carrier ampholytes was limited in its resolution, reproducibility and protein-loading
capacity [9] because of pH-gradient instability with prolonged focusing time: the pH
gradient moves towards the cathode (cathode drift). Detailed comparisons of carrier
ampholyte-based patterns for the same cell material in separate laboratories were
therefore very difficult, which further limited the establishment of collective
databases of 2-D gel information [3].
The immunoelectrophoresis (IE) technique is most useful for the analysis of protein
purity, composition, and antigenic properties. The basic IE technique allows only
qualitative examination of antigenic proteins; the advanced modifications, rocket IE
and two-dimensional IE, should be used to obtain quantitative results in the form of
protein antigen concentration [4].
Biologists often require protein of a defined concentration and purity in their
research. The techniques for determining protein concentration and purity can be
used in many areas, and their development has been essential for many of the
recent advancements in biotechnology research [11]. The purity of a protein is a
prerequisite for studies of its structure and function and for its potential applications
[12]. Techniques such as SDS-PAGE are widely used in protein purity and
concentration analysis.
Interferons (IFNs) were originally discovered due to their ability to protect cells
against viral infections [13]. However, IFNs also have potent immunomodulatory
effects and antiproliferative activity against malignant cells [14]. According to the
World Health Organization, potency, purity, identity and stability are the most
important properties for the quality control of these cytokines [15]. In the study of
Ruiz et al. [16], the influence of the protein concentration and a formulation vehicle
on the stability of recombinant human Interferon alpha 2b in solution was evaluated.
RP-HPLC was undertaken on a Vydac wide-pore octyl column. Purity was calculated
as percentage of the main peak divided by the total area. The samples were also
analyzed by SDS-PAGE as described by Laemmli [17]. Increasing therapeutic
applications for recombinant human interferon-γ (rhIFN-γ) have broadened interest
in optimizing methods for its production and purification [18]. In the reversed-phase
chromatography section of the study by Reddy et al. [18], the major peak was collected
in fractions. The fractions were then analyzed by SDS–PAGE. These fractions
proved to be uncontaminated and were pooled for renaturation of the protein. The
eluted fractions were analyzed by SDS–PAGE. Pure fractions were pooled for
renaturation. In the section of renaturation and gel filtration, the eluted dimer peak
was collected and then both rechromatographed on a Superdex-75 column and
assessed by SDS–PAGE to analyze purity. In the section of Cross-linking analysis,
dimer formation was further confirmed by interchain cross-linking of the monomeric
rhIFN-γ using DSS as a cross-linker [19]. The cross-linked dimer was then analyzed
by SDS–PAGE, which confirmed a single band corresponding to 33 kDa.
The high resolving power of capillary electrophoresis combined with the specificity of
binding interactions may be used with advantage to characterize the structure-
function relationship of biomolecules, to quantitate specific analytes in complex
sample matrices, and to determine the purity of pharmaceutical and other molecules
[22].
http://matcmadison.edu/biotech/
http://www.proteome.org.au/
http://www.proteome.org/
http://www.ebi.ac.uk/
http://www.proteomesci.com/home/
http://www.biobase-international.com/pages/
http://www.proteome.co.uk/
http://www.hupo.org/
http://au.expasy.org/
http://biotechnology.mq.edu.au/biotechnology.htm
http://www.biology.arizona.edu/
http://www.indstate.edu/thcme/mwking/home.html
http://www.biochemistry.org/
http://www.bioch.ox.ac.uk/
http://www.asbmb.org.au/
http://wbiomed.curtin.edu.au/biochem/
References:
1. Tonge, R., Shaw, J., Middleton, B., Rowlinson, R., Rayner, S., Young, J.,
Pognan, F. et al. (2001). Validation and development of fluorescence two-
dimensional differential gel electrophoresis proteomics technology.
Proteomics 1, 377-396.
2. Zhou, G., Li, H., DeCamp, D., Chen, S., Shu, H., Gong, Y., Flaig, M. et al.
(2002). 2D differential in-gel electrophoresis for the identification of
esophageal squamous cell cancer-specific protein markers. Mol Cell Proteomics
1, 117-124.
3. Yuan, Q., An, J., Liu D. G., & Zhao, F. K. (2003). Multi-strips on One Gel
Method to Improve the Reproducibility, Resolution Power and High-
throughput of Two-dimensional Electrophoresis. Acta Biochimica 35, 611-
618.
4. Boyer, R. (2000) Modern Experimental Biochemistry. 3rd edn. Addison Wesley
Longman. 41-43, 116, 121, 130-131, 157.
5. Giraudi, G., Baggiani, C., & Giovannoli, C. (1997). Inaccuracy of the Bradford
method for the determination of protein concentration in steroid-horseradish
peroxidase conjugates. Analytica Chimica Acta 337, 93-97.
6. Lucarini, A. C. & Kilikian, B. V. (1999). Comparative study of Lowry and
Bradford methods: interfering substances. Biotech. Tech. 13, 149-154.
7. Righetti, P. G., Stoyanov, A. V. & Zhukov. M. Y. (2001) The proteome
revisited: theory and practice of all relevant electrophoretic steps. Elsevier
Science. Amsterdam. 207, 268, 368.
8. Nelson, D. L., & Cox, M. M. (2003) Lehninger : Principles of Biochemistry. 3rd
edn. New York. Worth Publishers. 115.
9. Klose, J., & Kobalz, U. (1995). Two-dimensional electrophoresis of proteins:
An updated protocol and implications for a functional analysis of the genome.
Electrophoresis 16, 1034-1059.
10. Song, S., Zhou, L., Thompson, R., Yang, M., Ellison, D., & Wyvratt, J. M.
(2002) J. Chromatogr. A. 959, 299-308.
11. Wilchek, M., & Miron, T. (1999). React. Funct. Polym. 41, 263.
12. Altintas, E. B., & Denizli, A. (2006). Monosize poly (glycidyl methacrylate)
beads for dye-affinity purification of lysozyme. Int. J. Biol. Macromol. 38, 99-
106.
13. Isaacs, A., & Lindenmann, J. (1957). Virus interference. I. The interferon.
Proc R Soc Lond B Biol Sci, 147, 258-267.
14. Bordens, R., Grossberg, S. E., Trotta, P. P. & Nagabhushan, T. L. (1997).
Molecular and biologic characterization of recombinant interferon-alpha2b.
Semin Oncol, 24, S9-41-51.
15. World Health Organizations (1988). Requirements for human interferons
made by recombinant DNA techniques. Technical Series No. 771.
16. Ruiz, L., Reyes, N., Aroche, K., Baez, R., Aldana, R., & Hardy, E. (2006)
Some factors affecting the stability of interferon alpha 2b in solution.
Biologicals 34, 15-19.
17. Laemmli, U. K. (1970) Cleavage of structural proteins during the assembly of
the head of bacteriophage T4. Nature 227, 680-685.
18. Reddy, P. K. et al., (2006) Increased yield of high purity recombinant human
interferon-γ utilizing reversed phase column chromatography, Protein
Expression and Purification, doi:10.1016/j.pep.2006.08.013.
19. Wang, F., Liu, Y., Li, J., Ma, G., & Su, Z. (2006), On-column refolding of
consensus interferon at high concentration with guanidine–hydrochloride and
polyethylene glycol gradients, J. Chromatogr. A. 1115, 72–80.
20. Hanna, C., Gjerde, D., Nguyen, L., Dickman, M, Brown, P. & Hornby, D.
(2006). Micro-scale open-tube capillary separations of functional proteins.
Anal. Biochem. 350, 128-137.
21. Herman, R. A., Korjagin, V. A. & Schafer, B. W. (2005). Quantitative
measurement of protein digestion in simulated gastric fluid. Regul. Toxicol.
Pharmacol. 41, 175-184.
22. Heegaard, N. H., Kennedy, R. T. (1999). Identification, quantitation, and
characterization of biomolecules by capillary electrophoretic analysis of
binding interactions. Electrophoresis 20, 3122-3133.
Chapter 3.1 Amino Acid Analysis and Sequencing of Proteins
Alexander Leahy
Introduction
Amino acid analysis involves the complete hydrolysis of the protein into its
constituent amino acids and then separation of these residues for quantitative
analysis. There are a number of hydrolysis techniques, including acid hydrolysis,
base hydrolysis and enzyme hydrolysis.
Some form of derivatisation, such as with ninhydrin or 6-aminoquinolyl-N-hydroxy-
succinimidyl carbamate is then performed on the amino acids in order to make them
detectable. They are then separated by some form of chromatography, such as ion-
exchange or reverse-phase [1]. Capillary electrophoresis is also becoming more and
more common, due to its speed, resolution and sensitivity, and also its ability to
separate enantiomers.[2]
Figure 3.1.1 Perkin Elmer Applied Biosystems Model 420A PTC derivatizer with an
on-line Perkin Elmer Applied Biosystems Model 130A PTC Amino Acid Analyzer.
Source: http://www.biotech.iastate.edu/facilities/protein/aaa.html
Sequence analysis is a far more complicated procedure. There are two main
methods commonly used: Edman degradation and mass spectrometry. In both
cases, however, when sequencing an entire protein, it is usually necessary to digest
the protein using a protease like trypsin to break the protein into smaller peptides
which can then be sequenced. However these peptide sequences must then be
recombined in order. To achieve this, the process must be repeated but the protein is
digested with a different protease such as pepsin which breaks peptide links at
different amino acids. The resulting sequences are then compared and aligned
where they overlap. Since the peptides will be of differing lengths, the resulting
alignments can be combined together to result in the overall sequence.
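The overlap-alignment step described above can be sketched with a toy example.
The peptide strings below are invented, and a real assembler must cope with
ambiguous and partial overlaps:

```python
# Sketch: merging peptide fragments from two different protease digests
# by suffix/prefix overlap. All peptide sequences are invented.

def merge(a, b):
    """Merge b onto the end of a using the longest suffix/prefix overlap."""
    for k in range(min(len(a), len(b)), 0, -1):
        if a.endswith(b[:k]):
            return a + b[k:]
    return None

def assemble(fragments):
    """Greedy left-to-right assembly of ordered, overlapping fragments."""
    seq = fragments[0]
    for frag in fragments[1:]:
        merged = merge(seq, frag)
        if merged is None:
            raise ValueError("no overlap between %s and %s" % (seq, frag))
        seq = merged
    return seq

# Peptides from one digest overlap peptides from a second digest:
print(assemble(["AGCK", "CKWL", "WLMR"]))  # AGCKWLMR
```

In practice the fragment order is not known in advance, so real assembly also has
to search for the arrangement in which the overlaps are consistent.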
As seen in Figure 3.1.3, y- and b-type ions are formed from fragmentation between
the carbon and nitrogen of the peptide bond. Ideally, a peptide will fragment along
the peptide backbone forming these y and b ions. The difference in mass between
consecutive ions of a series then reveals the sequence of the fragment being
analysed, one residue at a time.
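The residue-mass bookkeeping behind this can be computed directly from standard
monoisotopic residue masses; the gap between consecutive ions of one series
equals the mass of a single residue. A minimal sketch, for singly charged ions only:

```python
# Sketch: singly charged b- and y-ion m/z values for a short peptide.
# Monoisotopic residue masses in daltons (a small subset of the 20).

RESIDUE = {"G": 57.02146, "A": 71.03711, "S": 87.03203, "P": 97.05276,
           "V": 99.06841, "L": 113.08406, "K": 128.09496, "R": 156.10111}
PROTON, WATER = 1.00728, 18.01056

def b_y_ions(peptide):
    """Return lists of singly charged b- and y-ion m/z values."""
    masses = [RESIDUE[aa] for aa in peptide]
    # b_i: first i residues plus a proton; y_i: last residues plus water + proton.
    b = [sum(masses[:i]) + PROTON for i in range(1, len(masses))]
    y = [sum(masses[i:]) + WATER + PROTON for i in range(1, len(masses))]
    return b, y

b, y = b_y_ions("GASP")
# The gap between consecutive b ions is the mass of the next residue:
print(round(b[1] - b[0], 5))  # 71.03711 (Ala)
```

Note that Leu and Ile share the residue mass 113.08406 Da, which is precisely why
low-energy fragmentation cannot distinguish them, as discussed later in this chapter.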
Recent Advances
There have been a number of developments in amino acid analysis in recent years.
With the increase in use of capillary electrophoresis as a method of separation of the
hydrolysed residues[2], there has been increased activity in developing better
detection techniques. A number of detection methods exist for this process, such as
UV detection and laser-induced fluorescence (LIF).
Mass spectrometry has become the method of choice for sequencing proteins and so
there has been a lot of focus on its development within the last 5 years. In particular,
there has been a lot of attention in the manner ions are fragmented to generate the
mass spectrum from which the sequence is derived.
CAD is one of the more commonly used methods for fragmentation in MS/MS.
However, CAD can sometimes fail to get a complete distribution of fragments from
cleavage along the backbone of the peptide, making sequence determination a
complex task. This is often due to Arg residues preventing random protonation along
the backbone, or post translational modifications that provide a lower energy
cleavage than the backbone. An example of this is phosphorylated residues
becoming the preferred site for cleavage, as shown in Figure 3.1.4. Consequently the
mass spectrum is dominated by a single peak of the peptide without the phosphoric
acid moiety (Figure 3.1.5 A).
Figure 3.1.4 Fragmentation scheme for loss of phosphoric acid from a multiply
protonated phosphopeptide by CAD.
Source: Syka et al. (2004), Proc. Natl. Acad. Sci. USA 101:9528-9533 [5]
Electron transfer dissociation, however, is a technique developed by Syka et al. [5]
which transfers an electron to the protonated peptides in the mass spectrometer as
shown in Equation 1.
The electron carrying peptide produces fragments in such a way that does not cleave
any of the chemical modifications from the peptide, but instead promotes cleavage
along the peptide backbone. The diagram below shows mass spectra of a
phosphopeptide. The first spectrum (Figure 3.1.5 A) was obtained using the more
conventional collision-activated technique and is largely dominated by a single peak
resulting from the loss of the phosphoric acid moiety. Figure 3.1.5 B shows the mass
spectrum of the same peptide after electron transfer dissociation fragmentation. It
can be clearly seen that the peaks resulting from cleavage along the backbone
make the sequence much more easily determined.
The resulting PRM (prefix residue mass) spectrum from the alignment of spectra is
much simpler to interpret for de novo sequencing, as some of the noise present in
individual spectra is removed, with the added advantage of being able to sequence
the complete protein from one resulting spectrum [8].
Fragmentation patterns are often very complex in mass spectrometry. The location of
fragmentation of a peptide varies depending on the method used. Ideally a
fragmentation pattern consisting mostly of b- and y-type ions would be desirable, but
this often is not the case. Other types of ions, such as those formed from
fragmentation of the side chains and further fragmentation of fragments, also add to
the peaks in the observed spectra. As not all fragmentation techniques are
compatible with all types of
mass spectrometers, this makes it difficult for a laboratory to easily select a method
which would best analyse their protein.
However with so many peptides and proteins already sequenced, it is often not
necessary to sequence an entire peptide to determine its identity. Sequence tags are
short sections of the sequence of a peptide which can be used to identify a peptide
sequence in a database. In fact, frequently the spectrum itself can be used to search
databases for protein identification.
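The idea of a sequence-tag lookup can be illustrated with a toy in-memory database.
Both sequences below are invented stand-ins; real searches use dedicated engines
such as Mascot and score partial and error-tolerant matches:

```python
# Sketch: identifying a protein from a short sequence tag.
# The "database" is a toy dict of invented sequences; real tag searches
# run against curated protein databases with scoring.

database = {
    "protein_A": "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ",
    "protein_B": "MSHHWGYGKHNGPEHWHKDFPIAKGERQSPVDI",
}

def find_by_tag(tag, db):
    """Return the names of all entries whose sequence contains the tag."""
    return [name for name, seq in db.items() if tag in seq]

print(find_by_tag("SHFSR", database))  # ['protein_A']
```

A tag of only five or six residues is often enough to make a hit unique, which is why
full de novo sequencing of the peptide is frequently unnecessary.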
One of the problems with performing sequencing by mass spectrometry is being able
to distinguish between leucine and isoleucine. These amino acids are isomers of
each other and so have identical mass. The only way to distinguish between them
using mass spectrometry techniques is by high energy fragmentation. The ions that
are formed often cleave the side chain of the amino acid, forming d, v and w ions.
These ions can be used to distinguish between these two amino acids.
The ions generated with the MALDI source were detected using MALDI-TOF MS and
a profile of the peptides present was obtained as shown below.
Figure 3.1.8 Mass profile of peptides in pars intermedia tissue from the amphibian
X. laevis, obtained by direct MALDI TOF MS analysis under delayed extraction
conditions.
Source: Jesperson, S. et al, Anal. Chem., 71(3), 660-666 [11]
The two most prominent ions, at masses of 1050.4 u and 1392.7 u, were chosen for
MALDI-PSD analysis. Analysis of the 1050.4 u peak proved
difficult as the peptide had a disulfide link which severely affected the fragmentation
patterns observed. However, identification of the masses of individual amino acids
assisted in identifying the peptide and the peaks matching fragmentation around the
disulfide link supported the conclusion. The figure below shows the identified peptide
and the matching ions from the spectrum.[11]
This example highlights the need for the breakage of disulfide links in peptides before
sequence analysis via mass spectrometry.
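When disulfides are broken before analysis, the expected mass shift is predictable:
reduction adds two hydrogens per S-S bond, and alkylation with iodoacetamide
adds a carbamidomethyl group (57.02146 Da) per cysteine. A minimal sketch, using
the 1050.4 u peptide mass from the example above:

```python
# Sketch: expected monoisotopic mass after reducing disulfide bonds
# (each S-S bond gains two hydrogens) and alkylating every cysteine
# with iodoacetamide (carbamidomethylation, +57.02146 Da per Cys).

H = 1.007825      # monoisotopic mass of hydrogen (Da)
CAM = 57.02146    # carbamidomethyl adduct (Da)

def mass_after_workup(mass, n_disulfides, n_cysteines):
    """Peptide mass after reduction of disulfides and alkylation of all Cys."""
    return mass + 2 * H * n_disulfides + CAM * n_cysteines

# A 1050.4 Da peptide with one internal disulfide between two cysteines:
print(round(mass_after_workup(1050.4, 1, 2), 3))  # 1166.459
```

Predicting this shifted mass lets the reduced, alkylated peptide be re-located in the
spectrum and fragmented cleanly along the backbone.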
Figure 3.1.9 Known sequence of the vasotocin peptide (N-terminal end on top),
including an intrinsic disulfide (S-S) bridge between the two cysteine residues.
Indicated are the a-, b-, and y-type ions observed in the MALDI-PSD fragment
spectrum
Source: Jesperson, S. et al, Anal. Chem., 71(3), 660-666 [11]
Figure 3.1.10 MS/MS profile of the fragmentation of the selected ion (511.2 Da
[M + 2H]2+) and the deduced sequence derived from de novo sequencing based on
MassLynx data processing system (Waters, USA). Consistency between the b and
y ion series established the sequence TPPAGPDGGPR.
Source: Higuchi S, et al, Comp. Biochem. Physiol. C. Toxicol. Pharmacol,
144(2):107-21[12]
http://www.ionsource.com/
http://www.waters.com/watersdivision/ContentD.asp?watersit=JDRS-5LTHE6&WT.svl=1
Waters is a US company that supplies many of the analytical instruments required
for protein analysis, and its site has a number of references relevant to the
development of amino acid analysis techniques. It offers a variety of mass
spectrometers, such as LC/MS, LC/MS/MS, MALDI and GC-MS instruments, as
well as chromatography systems such as the ACQUITY UPLC, which can complete
an analysis in less than 30 minutes. Waters also supplies AccQtag, a derivatisation
reagent that allows amino acid residues to be detected by fluorescence.
References
1. Iowa State University, (2004) Amino Acid Analysis,
http://www.biotech.iastate.edu/facilities/protein/aaa.html
2. Poinsot V, Lacroix M, Maury D, Chataigne G, Feurer B, Couderc F (2006),
Recent Advances in Amino Acid Analysis by Capillary Electrophoresis,
Electrophoresis, 27, 176-194
3. H. Jakubowski (2006), Biochemistry Online, chapter 2 B
http://employees.csbsju.edu/hjakubowski/classes/ch331/protstructure/olcomp
seqconform.html
4. Hunt, D.F., Yates, J.R., Shabanowitz, J., Winston, S., Hauer, C.R. (1986),
Protein Sequencing by Tandem Mass-Spectrometry.
Proc. Natl. Acad. Sci. USA 83:6233-6237
5. Syka et al. (2004), Proc. Natl. Acad. Sci. USA 101:9528-9533
6. Coon, J. et al. (2005), BioTechniques, 38(4), 519-523
7. Jesperson, S., Chaurand P., van Strien F, Spengler B., van der Greef J,
(1999) Direct Sequencing of Neuropeptides in Biological Tissue by MALDI-
PSD Mass Spectrometry, Anal. Chem., 71(3), 660-666
8. Bandeira N, Tang H, Bafna V, Pevzner P (2004), Shotgun Protein
Sequencing by Tandem Mass Spectra Assembly, Anal. Chem. 76(24), 7221-
7233
9. Karen A West, Jeffrey D Hulmes and John W Crabb (1996), Amino Acid
Analysis Tutorial
http://www.abrf.org/ResearchGroups/AminoAcidAnalysis/EPosters/Archive/4c
.html
10. David E. Metzler, (2001), Biochemistry – The Chemical Reactions of Living
Cells, Harcourt/Academic Press, pp 116
11. Higuchi S, Murayama N, Saguchi K, Ohi H, Fujita Y, da Silva N, Bezerra de
Siqueira R, Kahlou S, Aird S (2006), A novel peptide from the ACEI/BPP-CNP
precursor in the venom of Crotalus durissus collilineatus, Comp. Biochem.
Physiol. C. Toxicol. Pharmacol, 144(2):107-21.
12. Savitski MM, Nielsen ML, Kjeldsen F, Zubarev RA (2005), Proteomics-grade
de novo sequencing approach. J Proteome Res. 4(6):2348-54.
13. GE Healthcare Life Sciences (2002),
http://www4.amershambiosciences.com/aptrix/upp01077.nsf/Content/Product
s?OpenDocument&parentid=366147&moduleid=165399&zone=Proteomics
14. Applied Biosystems (2006),
http://www.appliedbiosystems.com/applications/proteomics/protein_sequenci
ng.cfm
Chapter 3.2 Chemical modifications of proteins
Hellan M Luo
3.2.1 Introduction
Protein chemists have long been interested in altering the chemical, physical and
biological properties of proteins by chemically changing their structure [1]. One of
the earliest observations was that proteins are easily changed upon treatment with
chemical reagents [1]. This lability towards chemical reagents and reaction
conditions has been a serious problem for many purposes. The application of
modern knowledge of proteins, new chemical reagents and more sophisticated
analytical techniques has made chemical modification of protein molecules one of
the most useful approaches to the study of their properties [1].
The term "chemical modification" refers to the formation or cleavage of covalent
bonds, generally with the side chains, though cleavage of peptide bonds and
modification of the α-amino terminus can formally be included [3]. Most of these
modifications are not reversible by dilution, gel filtration or dialysis; there are a few
exceptions, such as Schiff base formation with aldehydes and some reactions of
arginine, which are reversible [3]. Some modifications are reversed by changing the
conditions, for instance lowering the pH, which can be very useful, but most are not.
Some modifications are hydrolyzed in 6 N HCl, as is done before amino acid
analysis. Non-covalent interactions also have their place: competitive inhibitors bind
at active sites; low-dielectric solvents change the UV absorbance of groups exposed
to solvent; and fluorescence-quenching molecules similarly quench the fluorescence
of exposed tryptophan residues. [3][4]
Table 3.2.1 Seven (7) main purposes of chemical modification of proteins and their
main strategies, summarized from [2] [3]
Table 3.2.2 Addition of other groups, target residues and reversibility for protein
chemical modification, source Molecular Genetics, Protein Modification [2]
2. Or by isomerization of residues.
3. During translation (co translational) or after the polypeptide chain has been
completed (post-translational). [2]
Many recent advances have been made possible by modern protein technologies
and are widely used in the biopharmaceutical field. Proteins are controlled by a vast
and dynamic array of post-translational modifications, many of which create binding
sites for specific protein-interaction domains; such modifications link the proteome
to cellular organization, and modification-dependent interactions synergize to
regulate cell behaviour. Some recently developed technologies include:
Some chemical modifications of proteins control biological activity. The majority of
these are co-translational and post-translational reactions; some are considered to
occur in a 'random' manner, in that they are not catalysed by an enzyme but instead
depend on the chemical environment of the protein.
For example:
Oxidation of proteins occurs in vivo with generally unfavourable consequences.
Nitric oxide is a potent physiologic agent with diverse systemic effects. Peroxynitrite,
which is formed from nitric oxide, is a mediator of some of these physiologic effects
of nitric oxide.
Glycation is the term used to describe the reaction of reducing sugars with proteins.
This involves the initial formation of a Schiff base followed by rearrangement in the
Maillard reaction, eventually resulting in advanced glycation end (AGE) products.
The reaction can occur at lysine and arginine residues, with resulting crosslink
formation. [5][21][22]
The modification of proteins and peptides with poly(ethylene glycol) (PEG) is the
most frequent chemical modification used in the manufacture of
biopharmaceuticals. [5][21][14]
Other recent advances include:
7) Modular stop and go extraction tips with stacked disks for parallel and
multidimensional peptide fractionation in proteomics.
Proteins may be modified in vitro so that they may be used for detection,
purification and assay development. In very general terms, protein modification
reagents can be separated into reagents that add, cleave or reduce. Reagents
that add labels are used for immunoassays, flow cytometry, fluorescence-
activated cell sorter (FACS) analysis and molecular structure and function
studies. Addition reagents also include those used to block particular functional
groups. Reagents that enzymatically cleave can be used for removing amino
acids, producing antibody fragments and releasing peptides from fusion proteins.
Reagents that chemically reduce can be used for protein solubilisation and to
facilitate cross-linking. [24]
Reagent Reactivity
Bolton-Hunter Reagent (SHPP) Primary Amines
Water-Soluble Bolton-Hunter Reagent (Sulfo-SHPP) Primary Amines
Succinimidyl-3-(tri-N-butylstannyl) benzoate Primary Amines
Sulfo-SHB Primary Amines
HPPH Carbonyl/Aldehyde
ß-(4-Hydroxyphenyl)ethylmaleimide Sulfhydryl
ß-(4-Hydroxyphenyl)ethyl iodoacetamide Sulfhydryl
Table 3.2.3 Iodination reagents for protein modification (source: Pierce Protein
Chemistry) [24]
The key reagents commonly used to reduce sulfhydryl (free thiol) groups are
dithiothreitol (DTT), 2-mercaptoethanol, 2-mercaptoethylamine and
tris(2-carboxyethyl)phosphine (TCEP). [24][8][9]
1) Pharmaceutical proteins are unstable when injected into the blood
circulation. Their half-life is short, from several minutes to hours; the
consequence is multiple injections and uncomfortable side effects. Chemical
modification is an effective way to increase the longevity and efficacy of the
proteins.
2) Covalent modification is an important strategy for introducing new functions
into proteins.
3) As engineered proteins become more sophisticated, it is often desirable to
introduce multiple, modifications involving several different functionalities in a
site-specific manner.
4) Proteins are the final link of the information chain
5) Help understand protein structure and function relationship
6) To perform specific biological function
7) Mildness, high degree of specificity
8) Improve biocompatibility and bioactivity
9) Site specific
1) Usually not residue specific, other amino acids can be modified too
2) Large amount of reagents are required
3) Expensive
4) Time consuming
5) Large ranges of reagents are available
6) Potential pitfall: modification may cause distal conformational changes
rather than specific blocking of the active site
7) Need full time qualified staff to perform the modification
8) Need close monitoring during the modification process
As described in Fig 3.2.1, after translation (i.e. post-translationally), folding,
oxidation and signal peptide cleavage take place, followed by ER export, Golgi
transport and vesicle packing; protease cleavage then liberates the C peptide,
yielding mature insulin.
6) Proteases [24]
Some key industry suppliers, with names and website links, are listed below:
4.1.1 Introduction
Mass spectrometry is the ultimate technique for the accurate determination of the
molecular weight of a molecule by measuring its mass-to-charge (m/z) ratio. Ionized
molecules are generated by inducing either the loss or gain of a charge from a
neutral species [1]. Once formed, ionized molecules are electrostatically directed into
a mass analyzer where they are separated according to their m/z ratio, and finally
detected. Matrix-assisted laser desorption/ionization mass spectrometry (MALDI-MS)
was first introduced in 1988 by Tanaka et al. [2], and Karas and Hillenkamp [3]. It has
since become a widespread analytical tool for the identification of peptides, proteins,
glycoproteins, and other biomolecules (carbohydrates, lipids, oligonucleotides, and
natural products) [1]. The efficient and directed transfer during a matrix-assisted
laser-induced desorption event provides high ion yields of the intact analyte with sub-
picomole sensitivity, and enables the mass analysis of complex biological samples
such as proteolytic digests (Table 4.1.1). MALDI allows for easy preparation and
rapid analysis of multiple samples at the same time (using a direct insertion multi-
sample plate), and sample amounts in the femtomole to picomole range. MALDI
predominantly generates ions that are singly charged, making it easier to identify
intact molecules[1].
Advantages:
- Practical mass range of up to 300 kDa; species of much greater mass have been
observed using a high-current detector
- Typical sensitivity on the order of low femtomole to low picomole; attomole
sensitivity is possible
- Soft ionization, with little to no fragmentation observed
- Tolerance of salts in millimolar concentrations
- Suitable for the analysis of complex mixtures

Disadvantages:
- Matrix background, which can be a problem for compounds below a mass of
700 Da; this background interference is highly dependent on the matrix material
- Possibility of photodegradation by laser desorption/ionization
- The acidic matrix used in MALDI may cause degradation of some compounds
Linear time-of-flight (TOF) mass analyzers measure the precise time it takes for an
accelerated ion to traverse a high-vacuum (~10⁻⁶ torr) field-free drift zone to a time-of-
flight detector (Figure 4.1.2). The pulsed nature of MALDI (ions are generated in
short, nanosecond pulses) is well suited to TOF analyzers since the ion’s initial time
of flight can be started with each pulse of the laser and completed when the ion
reaches the detector [1]. With MALDI-TOF mass analyzers, all the ions are given the
same amount of energy through an accelerating potential [1]. Because the ions have
the same energy, but a different mass, the lighter ions reach the detector first
because of their greater velocity, while the heavier ions take longer due to their
heavier masses and lower velocity. In essence, the time an ionized molecule takes to
arrive at the detector depends on the mass, charge, and kinetic energy of the ion [1].
Linear MALDI-TOF mass analyzers can routinely analyze intact proteins and large
peptides over a mass range of 0.7 to 200 kDa, with an accuracy range of 0.01-
0.1%.
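The relationship described above can be made concrete. In this minimal sketch (the drift length and accelerating voltage are illustrative values, not instrument specifications), equating the acceleration energy zeV with the kinetic energy ½mv² gives t = L·sqrt(m/(2zeV)), so heavier ions arrive later:

```python
import math

E_CHARGE = 1.602176634e-19   # elementary charge (C)
DALTON = 1.66053906660e-27   # unified atomic mass unit (kg)

def flight_time(mass_da, charge, accel_volts, drift_m):
    """Drift time of an ion through a field-free zone:
    z*e*V = (1/2)*m*v**2  =>  t = L * sqrt(m / (2*z*e*V))."""
    m_kg = mass_da * DALTON
    v = math.sqrt(2 * charge * E_CHARGE * accel_volts / m_kg)
    return drift_m / v

# Lighter ions reach the detector first: a 1 kDa peptide vs a 10 kDa
# protein over a 1 m drift zone at 20 kV (illustrative values).
t_light = flight_time(1_000, 1, 20_000, 1.0)
t_heavy = flight_time(10_000, 1, 20_000, 1.0)
```

Because t scales with the square root of m, a tenfold heavier ion takes sqrt(10) ≈ 3.16 times longer to arrive.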
The MALDI-TOF reflectron mass analyzer combines linear TOF technology with an
electrostatic mirror (reflectron) [1]. The ions enter the source region and are
accelerated toward the reflectron. The ions separate in time based on their relative
m/z ratio, reverse their path in the reflectron, and impact the time-of-flight reflectron
detector. The reflectron offers higher resolution over a linear TOF mass analyzer by
increasing the amount of time that ions take to reach the detector while reducing
(focussing) their kinetic energy distribution (i.e. equalising the different kinetic
energies of ions that arise during ionisation and acceleration). MALDI-TOF reflectron
mass analyzers can analyze small proteins, peptides, hormones and small molecules
up to 10 kDa, can distinguish monoisotopic masses, and have an accuracy range of
1-10 ppm [1].
MALDI-TOF (for proteins) and MALDI-TOF reflectron (for peptides) mass analyzers
have resolving powers in the order of 400 and 10,000, respectively. The
resolution and accuracy also depend on the presence of an internal standard, the
size/type of peptide/protein, sample purity and preparation, and the selection of
matrix material [1]. Even though MALDI is known to be more tolerant of salts, buffers,
and impurities, sample cleanup procedures (e.g. ZipTip™ or cold water washing) are
still useful [1]. The features of a peptide that affect peptide detection by MALDI-TOF
MS include its hydrophobicity (tends to adhere to a solid matrix), ionization efficiency,
mass, and basicity (arginine-containing peptides generally produce signals that are 2
to 20-fold stronger than lysine-containing peptides).
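What resolving power means in practice can be sketched as follows; the peak-width criterion (peaks separated by more than m/R are distinguishable) is a simplifying assumption for illustration:

```python
def resolvable(m1, m2, resolving_power):
    """Two peaks are distinguishable when their mass difference exceeds
    the peak width m / R implied by the resolving power R = m / dm."""
    mean_mass = (m1 + m2) / 2
    return abs(m1 - m2) > mean_mass / resolving_power

# Isotopic peaks of a 1.5 kDa peptide differ by ~1 Da: a reflectron
# analyzer (R ~ 10,000) separates them; a linear analyzer (R ~ 400) cannot.
reflectron_ok = resolvable(1500.0, 1501.0, 10_000)  # True
linear_ok = resolvable(1500.0, 1501.0, 400)         # False
```

This is why the reflectron, not the linear analyzer, can distinguish monoisotopic masses.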
The design of modern mass analyzers has changed significantly in the last five
years to interface with MALDI and electrospray ionization (ESI), now offering much
higher accuracy and resolution, increased sensitivity, a broader mass range and the
ability to give structural information. This has revolutionized biomolecular analyses,
allowing the measurement of a wide range of biomolecular ions with ppm mass
accuracy and subfemtomole sensitivity [1]. An innovation that has had a dramatic effect on
increasing the resolving power of MALDI-TOF instruments has been delayed
extraction (DE), the process of cooling and focussing the ions immediately after the
MALDI ionization event [1]. In traditional MALDI instruments, the ions were
accelerated out of the ionization source immediately as they were formed
(continuous extraction). However, with DE, the ions are allowed to “cool” for ~150
nanoseconds before being accelerated to the analyzer. This cooling period generates
a set of ions with much smaller kinetic energy distribution, thereby resulting in a
dramatic improvement in resolution and accuracy for biomolecules less than 30 kDa
[1].
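Why a narrower kinetic energy distribution sharpens peaks can be shown numerically. The 100 eV and 10 eV energy spreads below are illustrative assumptions, not measured values:

```python
import math

E_CHARGE = 1.602176634e-19   # elementary charge (C)
DALTON = 1.66053906660e-27   # unified atomic mass unit (kg)

def drift_time(mass_da, energy_ev, drift_m):
    """Drift time for an ion of the given kinetic energy."""
    v = math.sqrt(2 * energy_ev * E_CHARGE / (mass_da * DALTON))
    return drift_m / v

def arrival_spread(mass_da, accel_ev, spread_ev, drift_m=1.0):
    """Arrival-time spread between same-mass ions whose kinetic
    energies straddle the nominal value by +/- spread_ev / 2."""
    slow = drift_time(mass_da, accel_ev - spread_ev / 2, drift_m)
    fast = drift_time(mass_da, accel_ev + spread_ev / 2, drift_m)
    return slow - fast

# Cooling the ions (as delayed extraction does) narrows the energy
# spread, which directly narrows the peak in the time spectrum.
wide = arrival_spread(5_000, 20_000, 100)
narrow = arrival_spread(5_000, 20_000, 10)
```

Same-mass ions with a tighter energy distribution arrive in a tighter time window, which is exactly the resolution gain DE provides.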
MALDI and ESI have clearly evolved to be the ionization sources of choice when it
comes to biomolecular analysis [1]. MALDI has the ability to analyze complex
mixtures with less signal suppression than ESI, making it extremely useful for
analyzing biological samples such as protein digests (see Table 4.1.2) [1].
Other major applications for MALDI-TOF technology, with specific examples, include:
Some useful learning web resources linked to these Mass Spectrometer Facilities
and MALDI-TOF and MALDI-TOF TOF mass analyzers include:
1) MASCOT (http://www.matrixscience.com);
2) MS-Fit (http://prospector.ucsf.edu);
3) ALDENTE (http://au.expasy.org/tools/aldente/); and
4) ProFound (http://129.85.19.192/profound_bin/Web/ProFound.exe).
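The peptide-mass-fingerprint search these tools perform can be sketched in a few lines. The digest rule, the small set of average residue masses, and the 0.5 Da tolerance below are simplified assumptions for illustration, not any tool's actual scoring scheme:

```python
def tryptic_peptides(sequence):
    """Naive in-silico trypsin digest: cleave after K or R, not before P."""
    peptides, start = [], 0
    for i, aa in enumerate(sequence):
        if aa in "KR" and (i + 1 == len(sequence) or sequence[i + 1] != "P"):
            peptides.append(sequence[start:i + 1])
            start = i + 1
    if start < len(sequence):
        peptides.append(sequence[start:])
    return peptides

# Average residue masses in daltons (small illustrative subset)
RESIDUE = {"A": 71.08, "G": 57.05, "K": 128.17, "R": 156.19,
           "S": 87.08, "V": 99.13}
WATER = 18.02  # added once per free peptide

def peptide_mass(peptide):
    return sum(RESIDUE[aa] for aa in peptide) + WATER

def pmf_matches(observed_masses, sequence, tol=0.5):
    """Count observed masses matching a theoretical peptide within tol Da."""
    theoretical = [peptide_mass(p) for p in tryptic_peptides(sequence)]
    return sum(any(abs(obs - t) <= tol for t in theoretical)
               for obs in observed_masses)
```

For a hypothetical sequence "AGKSVR", the digest yields "AGK" and "SVR"; two of the three observed masses [274.3, 360.4, 500.0] match, giving a score of 2. Real search engines rank database proteins by a statistical version of this count.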
The following is a list of the major suppliers (and their website home pages) of MS
instruments, particularly MALDI-TOF and MALDI-TOF TOF mass analyzers:
Waters (http://www.waters.com/)
4.1.7 References
2. Tanaka, K., Waki, H., Ido, Y., Akita, S., Yoshida, Y., Yoshida, T. (1988) Protein
and polymer analysis up to m/z 100,000 by laser desorption time-of-flight mass
spectrometry. Rapid Commun Mass Spectrom. 2, 151-153.
4. Bruker Daltonics. (2006) Ultraflex III & ultraflex III TOF/TOF. Ultimate
Performance
MALDI-TOF & TOF/TOF Systems.
http://www.bdal.de/modux3/modux3.php?pid=105,001,003,01,01,006,009,0&rid=105,
001
5. The Australian Proteome Analysis Facility (2006) MALDI MS Analysis for Protein
Identifications. Protein Identification by MALDI-TOF (PMF).
http://www.proteome.org.au/MALDI-MS/default.aspx
7. The SWISS-PROT protein database. (2006) The Australian mirror of the curated
protein sequence database compiled by the Swiss Institute of Bioinformatics and the
European Bioinformatics Institute.
http://au.expasy.org/sprot/
8. Cui, J., Wang, J., He, K., Jin, B., Wang, H., Li, W., Kang, L., Hu, M., Li, H., Yu,
M., Shen, B., Wang, G., Zhang, X. (2004) Proteomic analysis of human acute
leukemia cells: insight into their classification. Clin Cancer Res. 10, 6887-6896.
9. Li, S., Wang, J., Zhang, X., Ren, Y., Wang, N., Zhao, K., Chen, X., Zhao, C., Li,
X., Shao, J., Yin, J., West, M., Xu, N., Liu, S. (2004) Proteomic characterization of
two snake venoms: Naja naja atra and Agkistrodon halys. Biochem J. 384, 119-127.
10. Mandrell, R., Harden, L., Bates, A., Miller, W., Haddon, W., Fagerquist, C. (2005)
Speciation of Campylobacter coli, C. jejuni, C. helveticus, C. lari, C. sputorum, and C.
upsaliensis by matrix-assisted laser desorption ionization-time of flight mass
spectrometry. Appl Environ Microbiol. 71, 6292-6307.
Chapter 5.1 2D Gel Electrophoresis
5.1.1 Introduction
• sample preparation
• first dimension isoelectric focusing
• second dimension gel electrophoresis
• staining
• image analysis
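The first-dimension principle, in which each protein focuses at the pH where its net charge is zero (its pI), can be sketched with the Henderson-Hasselbalch equation. The pKa values below are one common textbook set, an assumption for illustration rather than an authoritative reference:

```python
# pKa values for ionizable groups (illustrative textbook values)
PKA_POS = {"nterm": 9.0, "K": 10.5, "R": 12.5, "H": 6.0}
PKA_NEG = {"cterm": 2.0, "D": 3.9, "E": 4.1, "C": 8.3, "Y": 10.1}

def net_charge(seq, ph):
    """Net charge at a given pH from Henderson-Hasselbalch
    fractional ionization of each titratable group."""
    pos_groups = ["nterm"] + [aa for aa in seq if aa in PKA_POS]
    neg_groups = ["cterm"] + [aa for aa in seq if aa in PKA_NEG]
    positive = sum(1 / (1 + 10 ** (ph - PKA_POS[g])) for g in pos_groups)
    negative = sum(1 / (1 + 10 ** (PKA_NEG[g] - ph)) for g in neg_groups)
    return positive - negative

def isoelectric_point(seq):
    """Bisect for the pH of zero net charge. Net charge falls
    monotonically with pH, which is what makes IEF focusing stable."""
    lo, hi = 0.0, 14.0
    while hi - lo > 1e-4:
        mid = (lo + hi) / 2
        if net_charge(seq, mid) > 0:
            lo = mid
        else:
            hi = mid
    return round((lo + hi) / 2, 2)
```

A lysine-rich peptide focuses at basic pH and an aspartate/glutamate-rich one at acidic pH, which is why the IPG strip spreads a proteome along the pH axis.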
The second dimension SDS-PAGE can be run on either a vertical or a horizontal set-
up [6]. In a horizontal system, the equilibrated IPG strips are placed gel side down
on the surface of the stacking gel without any embedding procedure, while a vertical
system involves embedding IPG strips in agarose on top of the vertical second
dimension gel (Figure 5.1.2) [7]. An electric field is then applied to separate the
proteins according to their size. Similar to what happens in an ordinary SDS-PAGE
procedure, the molecules migrate across the gel at different speeds, with larger
molecules moving more slowly than smaller molecules. Horizontal gels produce
sharper spots because these are half as thick as vertical gels, but vertical systems
allow for simultaneous running of multiple gels in a single electrophoresis tank [5].
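The size-dependent migration can also be exploited quantitatively: over much of the gel, log(MW) falls roughly linearly with relative migration distance, so a marker ladder calibrates the gel. The ladder values below are hypothetical, for illustration only:

```python
import math

# Hypothetical marker ladder: (molecular weight in kDa, relative mobility Rf)
LADDER = [(250, 0.10), (100, 0.30), (50, 0.50), (25, 0.70), (10, 0.90)]

def fit_log_mw(markers):
    """Least-squares line log10(MW) = a*Rf + b, exploiting the roughly
    linear log(MW)-mobility relation in SDS-PAGE."""
    n = len(markers)
    xs = [rf for _, rf in markers]
    ys = [math.log10(mw) for mw, _ in markers]
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

def estimate_mw(rf, markers=LADDER):
    """Estimate the MW (kDa) of an unknown band from its mobility."""
    a, b = fit_log_mw(markers)
    return 10 ** (a * rf + b)
```

A band migrating halfway down this hypothetical gel would be estimated at about 50 kDa.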
After the separation of the samples, individual spots may be visualized by staining
the gel. A variety of different types of staining procedures – some colorimetric, some
fluorescent – may be used in this procedure, each with its own advantages and
disadvantages. Silver staining is highly sensitive and the end results can be
mass-spectrometry compatible, but it is time-consuming and labor-intensive, and
some proteins stain poorly [5]. Also, spot intensity does
not necessarily correlate to protein concentration. Zinc or copper staining produces a
negative stain (gel is stained white, proteins are unstained) because zinc/copper
does not stain SDS, which coats the proteins [4]. Zinc/copper staining is rapid and
inexpensive but cannot be used on thin gels, which provide poor contrast with this
procedure. Coomassie Blue is the most commonly used stain for acrylamide gels,
and is inexpensive and easy to use. However, a destaining step is required and non-
specific staining, particularly with polysaccharides, sometimes occurs [3].
Fluorescent dyes bind to the SDS that coats the proteins. Fluorescent staining may
be done before IEF, after IEF but before SDS-PAGE, or after SDS-PAGE [5]. There
are many different kinds of fluorescent stains. Generally speaking, they are highly
sensitive, quick and easy to use, and can be used for quantitative purposes.
However, some fluorescent dyes are quite expensive and may be incompatible with
some plastic-backed gels [3]. After staining, the final step is identification and
characterization of the protein spots. Gels are scanned and image analysis can be
performed to determine statistically and scientifically significant spots [2]. Image
analysis is conducted using specialized software that can perform spot
detection, spot matching between gels, and spot quantification and comparison [8].
These spots can then be excised, characterized, and identified by MS. An example
of a 2D gel is shown below (Figure 5.1.3).
Figure 5.1.3 Silver-stained sample of human embryonic kidney cells (HEK293 cell
line)
Source: http://gelbank.anl.gov/cgi-
bin/2dgels/gel_display_with_list.pl?NameOfGel=221
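The quantification-and-comparison step can be sketched as normalizing spot volumes per gel and computing fold changes between matched spots. The spot names and volumes below are made up for illustration:

```python
def normalize(spot_volumes):
    """Express each spot as a fraction of the gel's total spot volume,
    compensating for gel-to-gel loading and staining differences."""
    total = sum(spot_volumes.values())
    return {spot: v / total for spot, v in spot_volumes.items()}

def fold_changes(gel_a, gel_b):
    """Ratio of normalized volumes for spots matched between two gels."""
    a, b = normalize(gel_a), normalize(gel_b)
    return {s: b[s] / a[s] for s in a if s in b and a[s] > 0}

# Hypothetical matched spots on a control and a treated gel
control = {"spot1": 100.0, "spot2": 50.0, "spot3": 50.0}
treated = {"spot1": 300.0, "spot2": 50.0, "spot3": 50.0}
ratios = fold_changes(control, treated)  # spot1 up 1.5-fold, others down
```

Normalizing first is essential: spot1's raw volume tripled, but after correcting for the treated gel's heavier total signal the true change is 1.5-fold.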
Fluorescent stains have several advantages over more traditional silver stains. They
are more sensitive, easy to use, and have a better dynamic range [10]. SYPRO post-
electrophoretic fluorescent stains, in particular SYPRO Ruby (Molecular Probes,
Eugene, OR), are some of the more recent stains to be developed. SYPRO Ruby, in
addition to being highly sensitive, is also compatible with subsequent protein analysis
such as Edman sequencing and MS [12]. Since SYPRO Ruby is expensive, Rabilloud
et al. have developed a protocol to produce an alternative stain that performs similarly
to SYPRO Ruby [13].
Image analysis is often laborious and slow, and difficulties are encountered
especially when performing spot boundary assignment, normalization of the gel
background and intensity variation, and spot matching between gels. There are
various commercial software packages that perform these tasks, but they still
require manual intervention, leading to problems with reproducibility [10]. Smilansky
developed a system called Z3 that performed image analysis with improved speed
and precision. The algorithm that was developed relied on “computation of the
registration directly from the raw images, region-based matching, and
complementary pseudocolor display” [14].
2D-GE in its basic form has been in use for quite some time and is usually
categorized as part of classical proteomics. Classical proteomics involves a
separation step (2D-GE) followed by an identification step, usually MS. 2D-GE is a
relatively simple method for mapping differences in protein expression, and is
currently the most rapid method for direct targeting of protein expression differences
[20]. With some recent advances, 2D-GE has progressed since it was first described
by O’Farrell in 1975 [21]. IPGs have improved the resolutions of gels, while
fluorescent dyes provided sensitivity in visualization [22]. Computer software is
continuously being developed and improved. The use of DIGE has overcome some
problems with reproducibility. Because of its simplicity, versatility, and relatively easy
visualization, 2D-GE is still in use in spite of being a relatively old technology. In fact,
“no other technology allows the ready separation and quantitation and identification
of complex protein mixtures” [21]. However, there still remain several disadvantages
to this technique.
2D-GE has limited sensitivity. One hundred fifty µg of protein from a total cell extract
generally generates around 2000-3500 identifiable spots depending on the dye used,
and around 10,000 proteins using narrow pH gradients [21,23]. The total number of
proteins in a cell is estimated to be around 30,000. Low-abundance, very high or low
molecular weight, very acidic or basic, and/or hydrophobic species are often
underrepresented in a 2D gel. Besides the use of sensitive stains, other methods to
bypass the problem are sample fractionation, selective extraction of high abundance
proteins, and loading large sample sizes in a large gel [21]. However, other factors,
such as resolution and the labor required for image analysis, will be sacrificed for
sensitivity.
Other limitations for this technique include limited loading capacity of IPG strips,
relatively low throughput, and low linear range of visualization procedures [20]. In
addition, image analysis is still a labor-intensive step in 2D-GE. Despite advanced
computer programs that aid in this task, full automation has not yet been achieved
and manual involvement is still required. This, in addition to supplying extra work,
may also lead to loss in reproducibility [10].
There are alternative methods that may be used, but these are generally not as
efficient as 2D-GE. Liquid chromatography-mass spectrometry (LC-MS) is a tandem
procedure wherein a protein mixture is separated by LC, and “pure” compounds are
introduced directly to the mass spectrometer [24]. Therefore, proteins with similar
retention characteristics can be differentiated via their mass spectra. However, LC-
MS is much less versatile than 2D-GE, and does not have the ability to provide
quantitative data [10]. Using LC-MS also presents difficulties in performing
differential display analysis [20]. The use of stable isotope-coded affinity tags (ICAT)
for protein extraction and tagging is commonly done in quantitative proteomics.
Samples are labeled and identified using LC/MS/MS [21]. This technique can be
used in differential expression studies of whole proteomes, and its main strength is in
its ability to provide quantitative data. A major limitation of this technique is that it is
unable to label proteins that do not contain cysteine [25]. Since protein digestion is
involved, it is difficult to establish the association of post-modified forms of peptides
and their assembly into modified proteins [23]. Also, due to non-denaturing
conditions, the source of fluorescence must be checked to determine that it is due to
the target protein and not from an unwanted protein complex [26].
Still, 2D-GE is considered the gold standard as a protein separation method prior to
MS because it is a proven research tool due to its simplicity and ability to quantify
and visualize a range of samples greater than what other techniques can currently
achieve.
Proteomics, in general, has a wide range of applications. For 2D-GE specifically, its
main applications are protein identification and differential expression [4]. Coupled
with MS, it is used for large-scale identification of proteins in a sample. In one study,
Xu et al. used 2D-GE and MS to identify soybean leaf proteins [27]. They analyzed
260 spots and compared these against various databases. They discovered that the
majority of the identified leaf proteins were involved in energy metabolism. There are
clinical applications for this technique, as well. Human myocardial proteins were
identified using 2D-GE [28]. Using 2D-GE/MS, proteomic profiles of various cells can
be generated to provide not only information about protein content, but also on
protein activity, interactions, and localization. Hoffrogge et al. performed proteomic
analysis on neuronal stem cells, Lominadze et al. analyzed neutrophils, and
Martinez-Heredia et al. did work on sperm cells [29,30,31]. Proteomic analysis using
2D-GE can also be used on pathogens. Pereira et al. looked at proteomic profiles of
the bacterium H. pylori to investigate possible pathogenic factors [32].
While non-gel-based methods are also being used for quantitative differential
proteomics, 2D-GE methods, with complementary technology that improve sensitivity
and expand analytical range, are still considered highly useful and informative tools
in proteomic analysis.
These are some websites relevant to 2D gel electrophoresis. There are various
databases that provide data on proteins and various 2D-PAGE reference maps.
Some of these are:
• ExPASy - SWISS-2DPAGE (human, mouse, E. coli, yeast)
http://au.expasy.org/ch2d/
• Max Planck Institute for Infection Biology (mostly microbial organisms)
http://web.mpiib-berlin.mpg.de/cgi-bin/pdbs/2d-page/extern/index.cgi
• Argonne National Lab, Protein Mapping Group
http://gelbank.anl.gov/2dgels/index.asp
• Joint ProteomicS Laboratory
http://www.ludwig.edu.au/jpsl/Databases.asp
• Danish Centre for Translational Breast Cancer Research
http://proteomics.cancer.dk/cgi-bin/CelisWeb.exe?MsetList.htm
• Siena-2DPAGE
http://www.bio-mol.unisi.it/2d/2d.html
• World-2DPAGE Portal (a portal for 2D PAGE databases)
http://au.expasy.org/world-2dpage/
A manual on 2D-GE using IPGs by Angelika Görg can be located on the website
below. It covers the basic steps of the procedure and possible variations in the
protocol.
http://www.weihenstephan.de/blm/deg/manual/manfrm.htm
A good journal that covers the advances, mainly technical, in the field of 2D-GE is
Electrophoresis, while many applications of the technology are published by
Proteomics. Issues and articles from both journals are available online.
• Electrophoresis home page
http://www3.interscience.wiley.com/cgi-bin/jhome/10008330
• Proteomics home page
http://www3.interscience.wiley.com/cgi-bin/jhome/76510741/
Companies that are involved in the 2D-GE industry have two main groups of
products: the reagents that are used in sample preparation, gel running, etc, and the
various image analysis software. Some companies offer both types of products but
there are several bioinformatics companies that develop specialized software.
Invitrogen offers 2D-GE systems and components such as IPG strips, ampholytes,
and gels. They also produce SYPRO Ruby fluorescent gel stains and SYPRO photo
filters.
• Invitrogen 2D PAGE
http://www.invitrogen.com/content.cfm?pageid=10830
• Protein stains for 2D gels
http://probes.invitrogen.com/servlets/directory?id1=8&id2=78&id3=335
Companies that provide 2D-GE reagents, gels, and equipment include GE
Healthcare (formerly Amersham Biosciences), NextGen Sciences, and Bio-Rad
Laboratories. Sigma-Aldrich has various stains and IPG strips for use in 2D-GE,
while Genetix has the GelPix instrument that can perform spot excision, data
tracking, and onboard imaging.
• GE Healthcare 2D Electrophoresis
http://www4.amershambiosciences.com/aptrix/upp01077.nsf/Content/2d_elec
trophoresis
• NextGen Sciences 2D Electrophoresis
http://www.nextgensciences.com/NewWebsite/Products/2DElectrophoresis.ht
m
• Bio-Rad Laboratories home page
http://www.biorad.com
• Sigma-Aldrich Protein Electrophoresis
http://www.sigmaaldrich.com/Area_of_Interest/Life_Science/Proteomics_and_
Protein_Expr_/Protein_Analysis/Protein_Electrophoresis.html
• Genetix GelPix
http://www.genetix.com/instr/gelpix.asp
5.1.7 References
1. The Maiman Institute for Proteome Research, Tel Aviv University (2006) Two-
Dimensional Gel Electrophoresis,
http://www.tau.ac.il/lifesci/units/proteomics/2dimgel.html
2. Molecular Structure Facility, University of California at Davis (2006) 2-D Gel
Electrophoresis,
http://msf.ucdavis.edu/2-d_gel.html
3. Biocompare (2006) 2D Gel Electrophoresis Tutorial,
http://www.biocompare.com/tutorials/2DE_tutorial/html/background.asp
4. Jefferies, J.R. (2005) 2D Gel Electrophoresis for Proteomics Tutorial,
http://www.aber.ac.uk/parasitology/Proteome/Tut_2D.html
5. Görg, A., Obermaier, C., Boguth, G., Harder, A., Scheibe, B., Wildgruber, R.
& Weiss, W. (2000) The current state of two-dimensional electrophoresis with
immobilized pH gradients. Electrophoresis. 21, 1037-1053.
6. Carrette, O., Burkhard, P.R., Sanchez, J. & Hochstrasser, D.F. (2006) State-
of-the-art two-dimensional gel electrophoresis: a key tool of proteomics
research. Nat. Protocols. 1, 812-823.
7. Görg, A., Boguth, G., Harder, A., Obermaier, C., Scheibe, B., Wildgruber, R.
& Weiss, W. (1998) Two-Dimensional Electrophoresis of Proteins Using
Immobilized pH Gradients,
http://www.weihenstephan.de/blm/deg/manual/manfrm.htm
8. Mathematical and Information Sciences, Commonwealth Scientific and
Industrial Research Organisation (2005) 2D Gel Image Analysis,
http://www.cmis.csiro.au/iap/RecentProjects/protein.htm
9. Herbert, B. & Righetti, P.G. (2000) A turning point in proteome analysis:
sample prefractionation via multicompartment electrolyzers with isoelectric
membranes. Electrophoresis. 21, 3639-3648.
10. Lilley, K.S., Razzaq, A. & Dupree, P. (2001) Two-dimensional gel
electrophoresis: recent advances in sample preparation, detection and
quantitation. Curr. Opin. Chem. Biol. 6, 46-50.
11. Santoni, V., Kieffer, S., Desclaux, D., Masson, F. & Rabilloud, T. (2000)
Membrane proteomics: use of additive main effects with multiplicative
interaction model to classify plasma membrane proteins according to their
solubility and electrophoretic properties. Electrophoresis. 21, 3329-3344.
12. Molecular Probes (2005) SYPRO® Ruby Protein Gel Stain,
http://probes.invitrogen.com/media/pis/mp12000.pdf
13. Rabilloud, T., Strub, J., Luche, S., van Dorsselaer, A. & Lunardi, J. (2001) A
comparison between Sypro Ruby and ruthenium II tris (bathophenanthroline
disulfonate) as fluorescent stains for protein detection in gels. Proteomics. 1,
699-704.
14. Smilansky, Z. (2001) Automatic registration for images of two-dimensional
protein gels. Electrophoresis. 22, 1616-1626.
15. Westbrook, J.A., Yan, J.X., Wait, R., Welson, S.Y. & Dunn, M.J. (2001)
Zooming-in on the proteome: very narrow-range immobilised pH gradients
reveal more protein species and isoforms. Electrophoresis. 22, 2865-2871.
16. Zhou, G., Li, H., DeCamp, D., Chen, S., Shu, H., Gong, Y., Flaig, M.,
Gillespie, J.W., Hu, N., Taylor, P.R., Emmert-Buck, M.R., Liotta, L.A.,
Petricoin III, E.F. & Zhao, Y. (2002) 2D differential in-gel electrophoresis for
the identification of esophageal squamous cell cancer-specific protein markers.
Mol. Cell Proteomics. 1, 117-124.
17. GE Healthcare (2005) Ettan DIGE System User Manual,
http://www6.amershambiosciences.com/aptrix/upp00919.nsf/(FileDownload)?
OpenAgent&docid=ABD636471F4EFF17C12571C000813047&file=18117317
AB.pdf
18. Craven, R.A., Totty, N., Harnden, P., Selby, P.J. & Banks, R.E. (2002) Laser
capture microdissection and two-dimensional polyacrylamide gel
electrophoresis: evaluation of tissue preparation and sample limitations. Am.
J. Pathol. 160, 815-822.
19. National Institute of Child Health & Human Development, National Institutes
of Health (2006) Introduction to Laser Capture Microdissection,
http://dir.nichd.nih.gov/lcm/LCM_Werbsite_Introduction.htm
20. Monteoliva, L. & Albar, J.P. (2004) Differential proteomics: an overview of gel
and non-gel based approaches. Brief. Funct. Genomic Proteomic. 3, 220-239.
21. Stein, R.C. & Zvelebil, M.J. (2002) The application of 2D gel-based
proteomics methods to the study of breast cancer. J. Mammary Gland Biol. 7,
385-393.
22. Görg, A., Postel, W. & Günther, S. (1988) The current state of two-
dimensional electrophoresis with immobilized pH gradients. Electrophoresis.
9, 531-546.
23. Hitt, E. (2006) Separation of complex protein samples prior to mass
spectrometry remains a bottleneck in the rapid resolution of proteomes,
http://www.dddmag.com/ShowPR.aspx?PUBCODE=016&ACCT=160000010
0&ISSUE=0405&RELTYPE=PR&ORIGRELTYPE=GPF&PRODCODE=00000
000&PRODLETT=AQ
24. Ardrey, R.E. (2003) Introduction. In Liquid Chromatography-Mass
Spectrometry, pp. 1-5. J. Wiley, Chichester, UK.
25. Smolka, M., Zhou, H. & Aebersold, R. (2002) Quantitative protein profiling
using two-dimensional gel electrophoresis, isotope-coded affinity tag labeling,
and mass spectrometry. Mol. Cell Proteomics. 1, 19-29.
26. SWEGENE Proteomics Lund (2003) Peptide-based Approaches to
Proteomics,
http://www.proteomics.swegene.lu.se/L=proteomics-peptide
27. Xu, C., Garrett, W.M., Sullivan, J., Caperna, T.J. & Natarajan, S. (2006)
Separation and identification of soybean leaf proteins by two-dimensional gel
electrophoresis and mass spectrometry. Phytochemistry. 67, 2431-2440.
28. Wittmann-Liebold, B., Graack, H. & Pohl, T. (2006) Two-dimensional gel
electrophoresis as tool for proteomics studies in combination with protein
identification by mass spectrometry. Proteomics. 6, 4688-4703.
29. Hoffrogge, R., Beyer, S., Volker, U., Uhrmacher, A.M. & Rolfs, A. (2006) 2-DE
proteomic profiling of neuronal stem cells. Neurodegener. Dis. 3, 112-121.
30. Lominadze, G., Ward, R.A., Klein, J.B. & McLeish, K.R. (2006) Proteomic
analysis of human neutrophils. Methods Mol. Biol. 332, 343-356.
31. Martinez-Heredia, J., Estanyol, J.M., Ballesca, J.L. & Oliva, R. (2006)
Proteomic identification of human sperm proteins. Proteomics. 6, 4356-4369.
32. Pereira, D.R., Martins, D., Winck, F.V., Smolka, M.B., Nishimura, N.F.,
Rabelo-Goncalves, E.M., Hara, N.H., Marangoni, S., Zeitune, J.M. & Novello,
J.C. (2006) Comparative analysis of two-dimensional electrophoresis maps
(2-DE) of Helicobacter pylori from Brazilian patients with chronic gastritis and
duodenal ulcer: a preliminary report. Rev. Inst. Med. Trop. Sao Paulo. 48,
175-177.
33. Duncan, M.W. (2006) Protein-based biomarker & drug discovery,
http://www.genengnews.com/articles/chtitem.aspx?tid=1240&chid=1
34. Torres-Cabala, C., Bibbo, M., Panizo-Santos, A., Barazi, H., Krutzsch, H.,
Roberts, D.D. & Merino, M.J. (2006) Proteomic identification of new
biomarkers and application in thyroid cytology. Acta. Cytol. 50, 518-528.
35. Roessler, M., Rollinger, W., Mantovani-Endl, L., Hagmann, M.L., Palme, S.,
Berndt, P., Engel, A.M., Pfeffer, M., Karl, J., Bodenmueller, H., Ruschoff, J.,
Henkel, T., Rohr, G., Rossol, S., Rosch, W., Langen, H., Zolg, W. & Tacke,
M. (2006) Identification of PSME3 as a novel serum tumor marker for
colorectal cancer by combining two-dimensional polyacrylamide gel
electrophoresis with a strictly mass spectrometry-based approach for data
analysis. Mol. Cell. Proteomics. 0, M600118-MCP200.
36. Li, D.Q., Wang, L., Fei, F., Hou, Y.F., Luo, J.M., Wei-Chen, Zeng, R., Wu, J.,
Lu, J.S., Di, G.H., Ou, Z.L., Xia, Q.C., Shen, Z.Z. & Shao, Z.M. (2006)
Identification of breast cancer metastasis-associated proteins in an isogenic
tumor metastasis model using two-dimensional gel electrophoresis and liquid
chromatography-ion trap-mass spectrometry. Proteomics. 6, 3352-3368.
37. Pieper, R., Gatlin-Bunai, C.L., Mongodin, E.F., Parmar, P.P., Huang, S.T.,
Clark, D.J., Fleischmann, R.D., Gill, S.R. & Peterson, S.N. (2006)
Comparative proteomic analysis of Staphylococcus aureus strains with
differences in resistance to the cell wall-targeting antibiotic vancomycin.
Proteomics. 6, 4246-4258.
38. Foucher, A.L., McIntosh, A., Douce, G., Wastling, J., Tait, A. & Turner, C.M.
(2006) A proteomic analysis of arsenical drug resistance in Trypanosoma
brucei. Proteomics. 6, 2726-2732.
Chapter 5.2 Chromatographic Methods for the Separation of
Proteomes
5.2.1 Introduction
More than 10,000 protein folds are thought to exist in human beings. Protein
sequences are distributed among these folds non-homogeneously, and most folds
are rare. The distribution of proteins among folds follows asymptotic power laws,
which have been identified in many biological and physical systems and are
associated with scale-free networks. Chromatographic techniques allow us to
identify the proteins distributed among these folds.
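The power-law character of fold usage can be illustrated with a Zipf-like toy model; the exponent and scale below are arbitrary assumptions, not fitted to any real fold census:

```python
def fold_population(n_folds, exponent=1.0):
    """Sequences assigned to the k-th most populous fold, proportional
    to k**(-exponent): a few folds dominate, most are rare."""
    scale = 1000
    return [max(1, round(scale * k ** -exponent))
            for k in range(1, n_folds + 1)]

pops = fold_population(100)
# Under this toy distribution, the top 10% of folds hold the
# majority of all sequences, while the long tail of folds is sparse.
```

This heavy-tailed shape is what "most folds are rare" means quantitatively.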
The use of new techniques such as biomarkers for early detection and diagnosis of
serious diseases like cancer was already widespread by early 2003. Two-dimensional
gel electrophoresis (2D-PAGE) is the foundation of many discovery-based proteomics
studies. Technologies such as laser capture microdissection (LCM) and highly
sensitive MS methods are widely used to recognize proteins that are differentially
expressed between distinct cell populations. Technologies such as reverse-phase
protein arrays facilitate the recognition of target pathways in small biopsy
specimens. Another technique, surface-enhanced laser desorption/ionization
time-of-flight (SELDI-TOF), is used to analyze and classify lysates from body fluids,
which suits the diagnosis and treatment of disease. Applying gel-based proteomic
techniques to tissue biopsies also enables the discovery of drugs and preventive
strategies for lung and bladder cancer. Most eukaryotic protein activity is altered by
post-translational modifications. To record the modification sites in detail, novel
mass spectrometric peptide sequencing and analysis technologies are widely used
to study the chemistry of these modifications.
Recent advances in mass spectrometry have brought protein analysis into the
limelight of cancer research. Using these techniques, one can address many
proteomic questions in cancer research: profiling of tumour cells and tumour fluids,
protein microarrays, pharmacoproteomics, mapping of cancer signaling pathways,
and the role of biomarkers in diagnosing and monitoring the disease and the
therapeutic and immune response to cancer. These are among the benefits of
developments such as functional protein arrays and improved data handling.
Cancer is a dysregulation of the intracellular and extracellular signaling networks
of the body, and molecular therapy given to a cancer patient targets the affected
signaling system. Emerging proteomic technology, used together with genomic
analysis, fulfils this need and brings scientific support for molecular stratification.
Proteomic technology reports the state of kinase pathways and provides
post-translational phosphorylation data that are not accessible by gene arrays.
Such techniques provide new avenues for developing clinical therapies to cure
disease.
SDS-PAGE electrophoresis also has disadvantages. The sharp bands it produces
come at the cost of native protein shape and charge: sodium dodecyl sulfate, the
anionic detergent used in SDS-PAGE, disrupts protein quaternary, tertiary and
secondary structure and renders all proteins highly negatively charged. There are
also problems with protein molecular weights estimated by SDS-PAGE. Joule
heating is another disadvantage: in high-power electrophoretic separations it can
lead to unintended consequences.
In mass spectrometry, some mass spectral information is sacrificed when the limit
of detection is lowered, so the result is not much better than gas chromatography
with a flame ionization detector. Chemical noise is also a problem, arising whenever
another component contains an ion with the same mass as the component being
scanned. Some compounds are not easily ionized by these methods. Tandem mass
spectrometry produces no fragment information for some cationized molecules,
while electron ejection or electron capture generates a larger amount of
fragmentation. In MALDI mass spectrometry, background from the matrix creates a
problem for samples with a mass below 700 Da, and the acidic matrix used in
MALDI can also cause degradation of some compounds.
A notable application of SDS-PAGE-based separation is on-chip preconcentration of proteins. In this technique two photopolymerized membranes of different pore sizes are used: proteins are trapped at the membrane and then eluted by reversing the electric field. With this approach, even low-concentration proteins become detectable within about 30 minutes of preconcentration. Preconcentration is also used in DNA analysis and clinical diagnostics.
Monoclonal antibody therapies are now used to treat various human diseases, which has driven the development of analytical methods to detect and quantify antibody size variants. In place of slab-gel SDS-PAGE, capillary electrophoresis of SDS-treated proteins (CE-SDS) is increasingly used because of its advantages: it employs replaceable, non-gel polymer solutions for separation. Solutions such as dextran can also serve as the sieving matrix in a bare fused-silica capillary, an advanced technology in separation analysis.
In the United States of America, phenolic resins and phenol-formaldehyde polymers are produced at an annual rate of 2.2 million tons. These polymers, long thought to be nonbiodegradable, are made for many industrial and commercial purposes. Experiments using gas chromatography and mass spectrometry, however, detected a degradation product, 13C-labelled phenol, showing that the polymers can be broken down. The main organism responsible for this biodegradation was the white-rot fungus Phanerochaete chrysosporium. This work laid the foundation for the analysis of bioremediation and biorecycling of phenolic resins.
References:
http://www.healthtech.com/2005/bmks/msb.asp (accessed 25 Sep 2006)
http://www.mrc-dunn.cam.ac.uk/research/proteomes.html (accessed 25 Sep 2006)
http://www.chem.uic.edu/web1/ocol/spec/MS1.htm (accessed 30 Sep 2006)
http://proteomics.cancer.gov/resource_room/scientific_bibliography_reviews.asp (accessed 10 Oct 2006)
http://masspec.scripps.edu/MSHistory/whatisms.php (accessed 18 Oct 2006)
Holt, L. J., Enever, C., de Wildt, R. M. & Tomlinson, I. M. (2000) Curr. Opin. Biotechnol. 11(5), 445-449.
5.3.1. Introduction:
Mass spectrometric measurements are carried out in the gas phase on ionized
analytes. By definition, a mass spectrometer consists of an ion source, a mass
analyser that measures the mass-to-charge ratio (m/z) of the ionized analytes, and a
detector that registers the number of ions at each m/z value.
The mass analyser is the main and the central component of this technology. In
the context of proteomics, its key parameters are sensitivity, resolution, mass
accuracy and the ability to generate information-rich ion mass spectra from peptide
fragments (tandem mass or MS/MS spectra) (Fig 5.3.3)[8-10]. There are four basic
types of mass analyser currently used in proteomics research. These are the ion
trap, time-of-flight (TOF), quadrupole and Fourier transform ion cyclotron (FT-MS)
analysers. They differ considerably in design and performance, each with its own strengths and weaknesses. These analysers can be used separately or, in some cases, combined in tandem to take advantage of the strengths of each.[2]
Fig 5.3.2: Use of ESI, SDS-PAGE and HPLC techniques in a proteomic experiment.
The typical proteomics experiment consists of five stages. In stage 1, the proteins to
be analysed are isolated from cell lysate or tissues by biochemical fractionation or
affinity selection. This often includes a final step of one-dimensional gel
electrophoresis. MS of whole proteins is less sensitive than peptide MS and the
mass of the intact protein by itself is insufficient for identification. Therefore, proteins
are degraded enzymatically to peptides in stage 2, usually by trypsin, leading to
peptides with C-terminally protonated amino acids, providing an advantage in
subsequent peptide sequencing. In stage 3, the peptides are separated by one or
more steps of high-pressure liquid chromatography in very fine capillaries and
eluted into an electrospray ion source where they are nebulized in small, highly
charged droplets. After evaporation, multiply protonated peptides enter the mass
spectrometer and, in stage 4, a mass spectrum of the peptides eluting at this time
point is taken (MS1 spectrum, or 'normal mass spectrum'). The computer generates a
prioritized list of these peptides for fragmentation and a series of tandem mass
spectrometric or 'MS/MS' experiments ensues (stage 5). These consist of isolation of
a given peptide ion, fragmentation by energetic collision with gas, and recording of
the tandem or MS/MS spectrum. The MS and MS/MS spectra are typically acquired
for about one second each and stored for matching against protein sequence
databases. The outcome of the experiment is the identity of the peptides and
therefore the proteins making up the purified protein population.
Source:
(http://www.nature.com/nature/journal/v422/n6928/fig_tab/nature01511_F1.html)
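The enzymatic digestion in stage 2 can be sketched in code. The following is a minimal, illustrative Python sketch of an in silico tryptic digest (trypsin cleaves C-terminal to lysine or arginine, except before proline); the example sequence is arbitrary, and real digests must also account for missed cleavages and modifications:

```python
import re

def tryptic_digest(sequence, missed_cleavages=0):
    """In silico trypsin digest: cleave C-terminal to K or R,
    except when the next residue is proline (P)."""
    # Zero-width split after every K or R not followed by P.
    fragments = [f for f in re.split(r'(?<=[KR])(?!P)', sequence) if f]
    peptides = set(fragments)
    # Optionally include peptides spanning missed cleavage sites.
    for n in range(1, missed_cleavages + 1):
        for i in range(len(fragments) - n):
            peptides.add(''.join(fragments[i:i + n + 1]))
    return sorted(peptides)

# e.g. tryptic_digest("MKWVTFISLLFLFSSAYSRGVFRR")
# → ['GVFR', 'MK', 'R', 'WVTFISLLFLFSSAYSR']
```

Each resulting peptide ends in K or R, which is why tryptic peptides carry a C-terminal basic residue that protonates readily, the property exploited in the subsequent sequencing step.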
Fig 5.3.3: Stages in MS/MS experiment
The left and right upper panels depict the ionization and sample introduction process in
electrospray ionization (ESI) and matrix-assisted laser desorption/ionization (MALDI). The
different instrumental configurations (a−f) are shown with their typical ion source. a, In
reflector time-of-flight (TOF) instruments, the ions are accelerated to high kinetic energy and
are separated along a flight tube as a result of their different velocities. The ions are turned
around in a reflector, which compensates for slight differences in kinetic energy, and then
impinge on a detector that amplifies and counts arriving ions. b, The TOF-TOF instrument
incorporates a collision cell between two TOF sections. Ions of one mass-to-charge (m/z)
ratio are selected in the first TOF section, fragmented in the collision cell, and the masses of
the fragments are separated in the second TOF section. c, Quadrupole mass spectrometers
select by time-varying electric fields between four rods, which permit a stable trajectory only
for ions of a particular desired m/z. Again, ions of a particular m/z are selected in a first
section (Q1), fragmented in a collision cell (q2), and the fragments separated in Q3. In the
linear ion trap, ions are captured in a quadrupole section, depicted by the red dot in Q3. They
are then excited via resonant electric field and the fragments are scanned out, creating the
tandem mass spectrum. d, The quadrupole TOF instrument combines the front part of a triple
quadrupole instrument with a reflector TOF section for measuring the mass of the ions. e, The
(three-dimensional) ion trap captures the ions as in the case of the linear ion trap, fragments
ions of a particular m/z, and then scans out the fragments to generate the tandem mass
spectrum. f, The FT-MS instrument also traps the ions, but does so with the help of strong
magnetic fields. The figure shows the combination of FT-MS with the linear ion trap for
efficient isolation, fragmentation and fragment detection in the FT-MS section.
Source: (http://www.nature.com/nature/journal/v422/n6928/fig_tab/nature01511_F2.html)
5.3.2.1 Ion source
The first essential component of a mass spectrometer is the ion source, in which ions are created before being passed into the mass analyzer. There are various types of ion source; the choice depends on the experiment and the compounds to be analysed. The main ion sources are listed below [11].
In ESI, the sample is introduced through an electrospray capillary of small internal diameter. The analyte is forced through this capillary at a set flow rate, and the resulting electrospray plume nebulizes the sample. Details are given in chapter 4.2.
The basic types of mass analyzers used in mass spectrometry are summarized
below:
Type                         Acronym   Principle
Time-of-flight               TOF       Time dispersion of a pulsed ion beam; separation by time of flight
Magnetic sector              B         Deflection of a continuous ion beam; separation by momentum in a magnetic field
Linear quadrupole            Q         Continuous ion beam in a linear radio-frequency quadrupole field; separation by stability of trajectories
Linear quadrupole ion trap   LIT       Continuous ion beam and trapped ions; storage and eventual separation in a linear radio-frequency quadrupole field by stability of trajectories
Quadrupole ion trap          QIT       Trapped ions; separation in a three-dimensional radio-frequency quadrupole field by stability of trajectories
Ion cyclotron resonance      ICR       Trapped ions; separation by cyclotron frequency (Lorentz force) in a magnetic field
Table 5.3.1 Basic types of mass analysers
Source: Mass Spectrometry: A Textbook, 2004
The principle of TOF is simple: ions of different mass-to-charge ratio (m/z) are
dispersed in time during their flight along a field-free drift path of known
length. Provided all the ions start their journey at the same time or within a sufficiently
short time interval, the lighter ions will arrive earlier at the detector than the heavier
ones.[12]
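This relation can be made concrete with a short calculation. An ion of charge z accelerated through voltage U acquires kinetic energy zeU, so its flight time over a drift length L is t = L·sqrt(m / 2zeU). A minimal sketch follows; the 20 kV accelerating voltage and 1 m tube length are illustrative assumptions, not values from the text:

```python
import math

E_CHARGE = 1.602176634e-19    # elementary charge, C
DALTON = 1.66053906660e-27    # unified atomic mass unit, kg

def tof_flight_time(mass_da, charge, accel_voltage, drift_length):
    """Flight time (s) along a field-free drift path:
    z*e*U = 0.5*m*v**2  =>  t = L * sqrt(m / (2*z*e*U))."""
    m = mass_da * DALTON
    return drift_length * math.sqrt(m / (2 * charge * E_CHARGE * accel_voltage))

# Singly charged peptides of 1000 Da and 2000 Da, 20 kV, 1 m drift tube:
t_light = tof_flight_time(1000, 1, 20e3, 1.0)   # about 16 microseconds
t_heavy = tof_flight_time(2000, 1, 20e3, 1.0)
# The lighter ion arrives first; flight time scales as sqrt(m/z).
```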
Magnetic sector instruments are comparatively large devices capable of high resolution and accurate mass determination, suited to a wide variety of ionization methods. A sector is fundamentally a momentum analyzer rather than a direct mass analyzer, as is commonly assumed.[12]
Fig 5.3.4 Path of magnetic sector analyzer
Source: (http://www.chem.ox.ac.uk/spectroscopy/mass-
spec/Lecture/oxweb%20sector%202.jpg)
5.3.2.3 Detectors
The final element of the mass spectrometer is the detector. The detector records
the charge induced or current produced when an ion passes by or hits a surface. In a
scanning instrument the signal produced in the detector during the course of the scan
versus where the instrument is in the scan (at what m/z) will produce a mass
spectrum, a record of ions as a function of m/z[13].
The simplest detector is the Faraday cup, an electrode on which the ions deposit their charge. Faraday cups are still used to measure abundance ratios with high accuracy in isotope-ratio mass spectrometry [12]. Another type is the electron multiplier, in which ions striking a metal (or PbO-coated) surface induce the emission of electrons [13]. These became predominant with the
advent of scanning mass spectrometers. Progress has also been made to employ
cryogenic detectors, a rather special type of an ion counting detector for high-mass
ions in TOF-MS.[12]
The sequencing of the human genome and of numerous pathogens [14] was a major breakthrough for the biological sciences as a whole and opened new avenues for research in molecular biology. It also opened the door to proteomics [14]. Interest has grown in applying proteomics to understand disease development and to find new biomarkers for diagnosis and hence early detection of disease [14]. Once biomarkers are identified, further work can build on them in developing drug delivery systems.
Studies have identified disease-related changes in protein expression using 2D gels and mass spectrometry. Work on heart disease [15] spans pathological conditions ranging from acute onset to slow, chronic progression. Changes in myocardial proteins associated with heart failure have been identified in relevant animal models, such as rat myocytes [15]. Altered overall levels of specific proteins, and altered post-translational modifications of proteins such as myosin light chain 2, have been reported in heart failure [16].
Mass spectrometry has been applied to the in situ proteomic analysis of tissues.
This approach allows imaging of protein expression in normal and disease tissues
[17]. In this method the frozen tissue is sliced and sections are applied on the MALDI
plate, which are then analysed at regular intervals. The mass spectra obtained at
each interval are then compared yielding a spatial distribution of individual masses
across the tissue section [14]. Tumour analyses using this approach have shown
differences in protein expression between normal and tumour tissues that may have
specificity for different tumour types [17].
Finally there have also been developments in the instrumentation of this technique.
In the case of LC–MS, the last two decades have seen some significant
developments and improvements in instrumentation design, especially the
introduction of robust, user-friendly interfaces such as those based on atmospheric
pressure ionisation techniques, e.g. electrospray (ESI) and atmospheric pressure
chemical ionisation (APCI)[22].
Mass spectrometry has come of age in this decade and has proved a powerful technique for identifying unknown proteins; many proteins have been identified with it to date. As mentioned earlier, proteins are digested with trypsin and then analysed by mass spectrometry. The measured masses are submitted to protein databases, where each database entry is theoretically digested and the resulting masses are compared with those generated experimentally. The match is scored on a number of factors, depending on the search program used.
This technique has certain advantages and disadvantages. It is rapid, with a low turnaround time, and is highly sensitive even with small amounts of sample. Because it measures molecular masses accurately, it can determine the number of subunits in oligomers, which cannot be determined accurately by methods such as size-exclusion chromatography, analytical ultracentrifugation or cryo-electron microscopy [23]. Though accurate, the technique is not efficient for non-covalent complexes [23]. Before entering
the mass analysers, the sample undergoes ionisation and not all molecules ionize
well during the ionization process; dissociation of the complexes could also occur
during ionization. One of the important requirements of this technique is that once the
masses are submitted to the databases, the protein should be present in the
database list (database websites are given at the end). Proteins smaller than 15 kDa are not well suited to MALDI-TOF mass spectrometry [24].
5.3.5 Applications:
In recent years, powerful new technologies have driven rapid progress in the biological sciences. Mass spectrometry is one of them, and it has recently been used extensively in the new field of proteomics. Protein analysis by mass spectrometry is now routine, and the method finds applications in medicine for the diagnosis of disease. New biomarkers identified by mass spectrometric analysis, for example, promise to improve tuberculosis diagnosis [25].
The following sites host programs used to search MALDI-TOF peptide mass fingerprint data:
• MASCOT: http://www.matrixscience.com/
• MS-Fit: http://prospector.ucsf.edu/
• ALDENTE: http://au.expasy.org/tools/aldente/
• ProFound: http://129.85.19.192/profound_bin/WebProFound.exe
5.3.6 References:
3. Wilkins, M. R., Pasquali, C., Appel, R. D., Ou, K., Golaz, O., Sanchez, J.-C., Yan, J. X., Gooley, A. A., Hughes, G., Humphery-Smith, I., Williams, K. L. & Hochstrasser, D. F. (1996) From proteins to proteomes: large scale protein identification by two-dimensional electrophoresis and amino acid analysis, Biotechnology. 14, 61-65.
4. http://en.wikipedia.org/wiki/Proteomics
5. Fenn, J. B., Mann, M., Meng, C. K., Wong, S. F. & Whitehouse, C. M. (1989)
Electrospray ionization for the mass spectrometry of large biomolecules., Science.
246, 64-71.
8. Pandey, A. & Mann, M. (2000) Proteomics to study genes and genomes, Nature. 405, 837-846.
10. Mann, M., Hendrickson, R. C. & Pandey, A. (2001) Analysis of proteins and
proteomes by mass spectrometry, Annual Review of Biochemistry. 70, 437-473.
11. Herbert, C. G. & Johnstone, R. A. W. (2003) Mass Spectrometry Basics, CRC Press, United States of America.
13. http://en.wikipedia.org/wiki/Mass_spectrometry#Detector
15. Van Eyk, J. E. (2001) Proteomics: unraveling the complexity of heart disease
and striving to change cardiology, Current opinion in Molecular Therapeutics. 3, 546-
553.
16. van der Velden, J. et al. (2001) Effects of calcium, inorganic phosphate, and pH on isometric force in single skinned cardiomyocytes from donor and failing human hearts, Circulation. 104, 1140-1146.
17. Stoeckli, M., Chaurand, P., Hallahan, D. E. & Caprioli, R. M. (2001) Imaging
mass spectrometry: a new technology for the analysis of protein expression in
mammalian tissues, Nature Medicine. 7, 493-496.
18. Petricoin, E. F., Zoon, K. C., Kohn, E. C., Barrett, J. C. & Liotta, L. A. (2001)
Clinical proteomics: translating benchside promise into bedside reality, Nature
Reviews Drug Discovery. 1, 683-695.
20. Lasonder, E., Ishihama,Y., Andersen,J. S., Adriaan, M. W., Vermunt, A. P.,
Sauerwein,R.W., Wijnand, M. C., Eling, N. H., Waters,A. P., Stunnenberg,H.G. &
Mann,M. (2002) Analysis of the Plasmodium falciparum proteome by high-accuracy
mass spectrometry, Nature. 419, 537-542.
22. Wood, M., Laloup, M., Samyn, N., del Mar Ramirez Fernandez, M., de Bruijn, E.
A., Maes, R. A. A. & De Boeck, G. (2006) Recent applications of liquid
chromatography-mass spectrometry in forensic science, Journal of Chromatography
A. 1130, 3-15.
25. Agranoff, D., Fernandez-Reyes, D., Papadopoulos, M. C., Rojas, S. A., Herbster, M., Loosemore, A., Tarelli, E., Sheldon, J., Schwenk, A., Pollok et al. (2006)
Identification of diagnostic markers for tuberculosis by proteomic fingerprinting of
serum, Lancet. 368, 1012-1021.
26. Chen, B.-M., Liang, Y.-Z., Chen, X., Liu, S.-G., Deng, F.-L. & Zhou, P. (2006)
Quantitative determination of azithromycin in human plasma by liquid
chromatography-mass spectrometry and its application in a bioequivalence study,
Journal of Pharmaceutical and Biomedical Analysis. 42, 480-487.
27. Torres-Cabala, C., Bibbo, M., Panizo-Santos, A., Barazi, H., Krutzsch, H.,
Roberts, D. D. & Merino, M. J. Proteomic identification of new biomarkers and
application in thyroid cytology, Acta Cytologica. 50, 518-528.
28. Downes, M. R., Byrne, J. C., Dunn, M. J., Fitzpatrick, J. M., Watson, R. W. G. &
Pennington, S. R. Application of proteomic strategies to the identification of urinary
biomarkers for prostate cancer: A review, Biomarkers: Biochemical Indicators Of
Exposure, Response, And Susceptibility To Chemicals. 11, 406-416.
29. Mullen, A. M., P.C. Stapleton, D. Corcoran, R.M. Hamill and A. White. (2006)
Understanding meat quality through the application of genomic and proteomic
approaches, Meat Science. 74, 3-16.
Chapter 5.4 Bioinformatics in proteome analysis
Supriya Narayanan
(s3119801)
5.4.1 Introduction
Fig: Yeast proteome. Source: http://fig.cox.miami.edu/~cmallery/150/gene/yeast_proteome2.jpg
Proteomics is a relatively new field, developing only in the last 10 years. Based on 2D gel electrophoresis protein profiles, ideas were proposed in the 1970s and 1980s to build protein databases such as the human protein index.
In the late 1980s and early 1990s, there were only small sequence databases.
However, through the 90s, along with genomic data, protein data was also gathered
and the databases began to grow. With complete libraries at hand, rapid identification
of proteins was only limited by the ability to extract their sequence information. This
gap was rapidly filled by mass spectrometry techniques and database search
algorithms.
In 1993, five independent reports were published that described the implementation
of this insight in database search algorithms. These algorithms, together with MALDI-
TOF mass spectrometry peptide analysis, constituted a 'protein identification' method
that is now known as peptide mass mapping (or peptide mass fingerprinting). In this
type of analysis, the collected 'MS spectra' are used to generate a list of proteolytic (peptide) fragment masses, which are then matched to the masses calculated from the same proteolytic digestion of each entry in a sequence database, resulting in identification of the target protein.
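The matching step at the heart of peptide mass fingerprinting can be sketched as a simple tolerance comparison between the observed mass list and the theoretical digest masses of each database entry. This is only an illustration: the mass lists and the 50 ppm tolerance below are invented, and real search engines such as MASCOT apply far more sophisticated probability-based scoring:

```python
def pmf_match(observed_masses, theoretical_masses, tol_ppm=50.0):
    """Count observed peptide masses that match a theoretical
    digest mass within a relative (ppm) tolerance."""
    hits = 0
    for obs in observed_masses:
        if any(abs(obs - theo) / theo * 1e6 <= tol_ppm
               for theo in theoretical_masses):
            hits += 1
    return hits

# Hypothetical peptide mass lists (Da), for illustration only:
observed = [1045.53, 1523.78, 2211.10]
candidate_a = [1045.52, 1523.79, 1899.95, 2211.08]  # theoretical digest, 3 hits
candidate_b = [1100.60, 1750.80, 2300.90]           # theoretical digest, 0 hits
best = max([candidate_a, candidate_b], key=lambda t: pmf_match(observed, t))
```

The candidate whose theoretical digest explains the most observed masses is reported as the identification.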
Algorithms that match MS/MS spectra to sequence databases have greatly facilitated
mass spectrometric protein identification by this approach. MS/MS spectra are also
ideally suited to search translated EST and other sequence databases containing
incomplete sequences.
The next development was of the gel-independent approach to proteomics using LC-
MS/MS systems. The combination of LC-MS/MS and sequence database searching
has been widely adopted for the analysis of complex peptide mixtures generated
from the proteolysis of samples containing several proteins. This approach is often
referred to as 'shotgun' proteomics and has the ability to catalog hundreds, or even
thousands, of components contained in samples isolated from very different sources.
Specific examples include the identification of proteins in the periplasmic space of
bacteria, yeast ribosomal complexes, murine nuclear interchromatin granule clusters
(nuclear speckles), murine mitochondrial soluble intermembrane proteins, human
urinary proteins, yeast TFIID-associated proteins, proteasomal proteins, human
microsomal proteins, human membrane proteins and yeast nuclear pore proteins
pre-fractionated by SDS polyacrylamide gel electrophoresis, and proteins from yeast
lysates.[26]
From its inception to the present day, proteomics has evolved substantially.
Conceptually, proteomics has become a biological assay for the quantitative and
qualitative analysis of complex protein samples. Technologically, proteomics has
become a combination of relatively mature tools that support protein cataloguing and
quantitative proteome measurements reliably, sensitively and at high throughput.[26]
This is an invaluable tool for several reasons. First, global data sets are rich in information but difficult to analyse using traditional knowledge-based interpretation. Second, more data is more informative: it is far more useful to collect several global data sets on the same system and to apply mathematical tools such as cluster analysis to extract biological insights or to formulate hypotheses. Third, additional systematic proteomic data, including activity profiles, interaction maps and profiles of (regulatory) modifications, are expected to provide further insight into the structure, function and control of biological systems.
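As a toy illustration of the cluster-analysis idea, the sketch below groups proteins whose expression profiles are strongly correlated across conditions (single-linkage merging on Pearson correlation). The profiles and the 0.9 threshold are invented for illustration; real analyses use dedicated statistical packages:

```python
import math

def pearson(x, y):
    """Pearson correlation between two expression profiles."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def correlation_clusters(profiles, threshold=0.9):
    """Single-linkage grouping: proteins whose profiles correlate
    above the threshold end up in the same cluster."""
    names = list(profiles)
    clusters = [{n} for n in names]
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            if pearson(profiles[a], profiles[b]) >= threshold:
                ca = next(c for c in clusters if a in c)
                cb = next(c for c in clusters if b in c)
                if ca is not cb:          # merge the two clusters
                    ca |= cb
                    clusters.remove(cb)
    return clusters

# Hypothetical abundance profiles across four conditions:
profiles = {
    "protA": [1.0, 2.0, 3.0, 4.0],
    "protB": [2.1, 4.0, 6.2, 8.1],   # tracks protA -> same cluster
    "protC": [5.0, 3.0, 2.0, 1.0],   # anti-correlated -> own cluster
}
groups = correlation_clusters(profiles)
```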
The use of bioinformatics in proteome analysis has made the handling of data much
easier.
Before computational processing entered the picture, extensive data, particularly from crystallography and NMR, were required to study any protein. With the development of bioinformatics tools and databases, the structure of a newly discovered protein, and its relationship to function, can be understood in a very short time.
The use of high throughput platforms and parallel processing in proteomics has
resulted in a huge surge in the amount of data being generated throughout the world.
However, the large scale analysis of proteins results in data analysis problems [14].
This is because of the nature of the two main technologies being used to study the
structure of the protein:
♦ Systems based on gel electrophoresis
♦ Non-gel based methods.
Software written for analysis of gel data has been improved over the years but still
requires manual intervention.
On the other hand, data from non-gel based methods such as mass spectrometry
consists of not only the peptides being analyzed but also the noise in the system. As
the vast majority of the data is noise, there is a huge wastage of computer resources
and time in trying to analyze this data. Also, algorithms written to analyze the MS
data only indicate the significance of the matches and not the actual value. Hence
there are a huge number of false positive results. The analysis of these results by
untrained or inexperienced personnel can lead to acceptance of false matches and
the faulty identification of the protein.
The wide adoption of proteomics approaches into biological research will require
several developments to combat data overload and ensure data quality. First, tools
must be readily available to de-select MS/MS spectra from search routines that are
unlikely to yield a match because of poor quality. Second, search algorithms require
further refinement to diminish the false positives and false negatives (merely setting
scores high to diminish false positives is counter to the aim of the experiment); this
problem is beginning to be addressed through the development of true probability-
based scores that are akin to the assignment of quality scores to each base in DNA
sequencing. Third, spectral matching algorithms for peptide MS/MS spectra need to
be made commercially available. And fourth, a database of truly nonredundant
transcripts of the organism under study is required, together with an extensive
relational database that can acquire data from the diverse range of instruments
involved in each stage of the proteomics experiments.
The ability to generate information has now outstripped the ability to analyse it. This is because the amount of information entering the databases is increasing in geometric progression, while the growth of computing power follows Moore's law.
The ability to collect large proteomic data sets already outstrips the ability to validate,
to interpret and to integrate such data for the purpose of creating biological
knowledge. Therefore, software tools will be developed to help manage, interpret,
integrate and understand proteomic data. The lack of suitable software tools currently
limits essentially all areas of proteomic data analysis, from database searching using
MS/MS spectra to the assembly of large data sets containing different types of data
in relational databases. To derive value from the data that goes beyond an initial
scan for 'interesting observations' and to make data portable and comparable, it will
be necessary to develop algorithms that assign a score to each data point that
estimates the probability that the observation is correct. Just as the assignment of
quality scores to each base in DNA sequencing using the algorithm Phred was
essential for the success of genome sequencing programs, it can be expected that
probability-based scores calculated for proteomic data will have a similar impact on
proteomics.
More recent initiatives have shown that quantification of information is the next hurdle. Every piece of data involved in a bioinformatics analysis has to be weighted with an appropriate number in order to give it the proper significance; these weights express the relative importance of each component of an entity or each event of a process.
Studies have also highlighted the limitations of shotgun proteomics, including the
difficulty of detecting and analyzing by collision-induced dissociation (CID) mass
spectrometry all of the peptides in a sample, the qualitative nature of data-dependent
experiments, and the challenge of processing the tens of thousands of CID spectra
generated in a typical experiment—one of the many informatics challenges that still
faces scientists in this field. On average, a protein digested with trypsin will generate
30−50 different peptides. A tryptic digest of the proteome of a typical human cell will
therefore generate a peptide mixture containing at least hundreds of thousands of
peptides. Even the most advanced LC-MS/MS systems cannot resolve and analyze
such complexity in a reasonable amount of time.
[26,27,28]
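The scale described above follows from simple arithmetic. Assuming on the order of 10,000 expressed proteins per cell (an illustrative figure, not from the text) and the 30-50 tryptic peptides per protein quoted above:

```python
# Back-of-envelope scale of a shotgun (LC-MS/MS) tryptic digest.
proteins_expressed = 10_000           # assumed proteins per typical cell
peptides_low, peptides_high = 30, 50  # tryptic peptides per protein (from text)

total_low = proteins_expressed * peptides_low    # 300,000 peptides
total_high = proteins_expressed * peptides_high  # 500,000 peptides
# Hundreds of thousands of peptides: beyond what current LC-MS/MS
# systems can fully resolve in a reasonable amount of time.
```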
Functional analysis
♦ Sequence comparison
Sequence comparison helps identify individual proteins and reveals
differences between normal and abnormal cell proteomes [7].
Evolutionary analysis
Biomedical applications
♦ Drug discovery
♦ Proteomic profiling
Proteomic profiling helps determine differences in protein-expression
patterns between cells, or within one cell at different times. [22]
Others
♦ Analysis of microarrays
Chips containing arrays can be automated and connected to software for
analysis.[19]
Useful learning sites, leading laboratory sites and resource sites include the following.
Databases
Swiss-Prot (http://www.expasy.com/swiss_prot)
PDB (http://www.rcsb.org/pdb/home/home.do)
CATH (http://www.cathdb.info/latest/index.html)
SCOP (http://scop.mrc-lmb.cam.ac.uk/scop/)
Bioinformatics tools
Various bioinformatics tools and what they are used for is as given below.
A. Predict the protein sequence from a given nucleotide sequence by finding the
most probable open reading frame.
GENSCAN (http://genes.mit.edu/GENSCAN.html)
Grail (http://compbio.ornl.gov/Grail-1.3/)
FGENEH (http://genomic.sanger.ac.uk/gf/gf.shtml)
GeneMark (http://genemark.biology.gatech.edu/GeneMark/webgenemark.html)
AACompIdent (http://www.expasy.ch/tools/aacomp/)
AACompSim (http://www.expasy.ch/tools/aacsim/)
TagIdent (http://www.expasy.ch/tools/tagident.html)
Compute pI/MW (http://www.expasy.ch/tools/pi_tool.html)
PeptideMass (http://www.expasy.ch/tools/peptide-mass.html)
SAPS (http://www.isrec.isb-sib.ch/software/SAPS_form.html)
BLAST
FASTA
ClustalW (http://www.ebi.ac.uk/clustalw/)
Pfam (http://www.sanger.ac.uk/Pfam/)
PROSITE (http://www.expasy.ch/prosite/)
fingerPRINTScan (http://bioinf.man.ac.uk/fingerPRINTScan/)
BLOCKS (http://www.blocks.fhcrc.org/)
PredictProtein Server (http://www.embl-heidelberg.de/Services/sander/predictprotein/)
Meta PP (http://dodo.cpmc.columbia.edu/predictprotein/submit_meta.html)
TMHMM (http://www.cbs.dtu.dk/services/TMHMM-1.0/)
TOPPRED (http://www.biokemi.su.se/~server/toppred2/)
MultiCoil (http://gaiberg.wi.mit.edu/cgi-bin/multicoil.pl)
The MultiCoil program predicts the location of coiled-coil regions in amino acid sequences and classifies the predictions as dimeric or trimeric. The method is based on the PairCoil algorithm.
Tertiary structure prediction by homology modeling
[12]
5.4.7 References
13. Patterson, S. D. & Aebersold, R. H. (2003) Proteomics: the first decade and beyond. Nature Genetics 33, 311-323.
14. He, Q. Y. & Chiu, J. F. (2003) Proteomics in biomarker discovery and drug development. J Cell Biochem 89(5), 868-886.
15. Dowsey, A. W., Dunn, M. J. & Yang, G. Z. (2004) ProteomeGRID: towards a high-throughput proteomics pipeline through opportunistic cluster image computing for two-dimensional gel electrophoresis. Proteomics 4(12), 3800-3812.
16. Chamrad, D. C., Korting, G., Stuhler, K., Meyer, H. E., Klose, J. & Bluggel, M. (2004) Evaluation of algorithms for protein identification from sequence databases using mass spectrometry data. Proteomics 4(3), 619-628.
18. Lundgren, D. H., Eng, J., Wright, M. E. & Han, D. K. (2003) PROTEOME-3D: an interactive bioinformatics tool for large-scale data exploration and knowledge discovery. Mol Cell Proteomics 2(11), 1164-1176.
20. Rai, A. J. & Chan, D. W. (2003) Cancer proteomics: serum diagnostics for tumor marker discovery. Nature 422(6928), 233-237.
21. Burbaum, J. & Tobal, G. M. (2002) Proteomics in drug discovery. Curr Opin Chem Biol 6(4), 427-433.
22. White, C. N., Chan, D. W. & Zhang, Z. (2004) Bioinformatics strategies for proteomic profiling. Clin Biochem 37(7), 636-641.
24. Cannataro, M., Cuda, G. & Veltri, P. (2005) Modeling and designing a proteomics application on PROTEUS. Methods Inf Med 44(2), 221-226.
25. Lester, P. J. & Hubbard, S. J. (2002) Comparative bioinformatic analysis of complete proteomes and protein parameters for cross-species identification in proteomics. Proteomics 2(10), 1392-1405.
Proteomics is the study of the structure, function and interactions of the total encoded proteins (the proteome), as well as their isoforms and modifications, in a particular cell line, tissue, or whole organism of interest. There are three main approaches in
proteomics: expression proteomics, cell-map proteomics, and structural proteomics.
Expression proteomics (also called “differential expression proteomics”) is the study
of the set of all protein isoforms and their cell to cell differences and modifications.
Cell-map proteomics deals with protein–protein interactions within the cells, tissues
and so on. Structural proteomics includes the determination of three-dimensional
protein structures as well as their higher order complexes on a genome wide scale
[1, 2, 3].
Underlying Principles
High-throughput proteomics has become attainable thanks to modern technological
support: highly sensitive instrumentation for proteome analysis, sophisticated
protein separation technologies, and high-precision computational data analysis
methods [6]. A number of automated technologies (robotics and intelligent
systems) have made it possible to capture a higher quality snapshot of the large
and complex proteome and to analyse its activities [3].
Traditionally 2-DE has been used for obtaining the global picture of the expression
levels of a proteome under various conditions. In recent years, MS technologies have
evolved as a versatile tool for examining the simultaneous expression of more than
1000 proteins and the identification and mapping of posttranslational modifications.
High-throughput methods performed in an array format have emerged enabling
large-scale projects for the characterization of protein localization, protein-protein
interactions, and the biochemical analysis of protein function [7].
While 2-DE is extremely useful, it has some technical limitations. These include
the manual handling of gels, which are cumbersome to run; limited sample
capacity; poor dynamic range (low sensitivity for very acidic or basic
proteins); and a bias toward abundant, soluble proteins (low-abundance proteins
cannot be detected without additional sample enrichment techniques). The
resolution of 2-DE is insufficient given the enormous diversity of cellular
proteins, and several proteins may co-migrate in the same spot. In addition,
2-DE is time consuming: running and analysing a single gel may take days [3, 7,
8, 9, 10].
The a2DEoptimizer by NextGen Sciences features automated gel casting: multiple
gels can be cast simultaneously under computer control and monitoring. It can
also create user-defined gradient gels, which are difficult to produce manually.
Large Scale Biology, through its subsidiary Predictive Diagnostics, has released
BAMF (Biomarker Amplification Filter), a computer platform combining 2-DE, NMR,
MS, and biomarkers to identify individual proteins [3].
Several features that are commonly offered by many of the newer automated gel
processing systems include ‘the ability to: (i) import and export gels into standard bit-
mapped graphics formats; (ii) manipulate, preprocess, filter, and organize gel
bitmaps; (iii) visualize and compare gels; (iv) create, queue, and monitor
computational analysis tasks; and (v) present results (e.g., peptide matches in an
excised, digested protein spot)’ [3].
Taking into account the limitations of 2-DE, a number of alternatives to it have
also emerged in recent years. Non-2-DE-based protein and peptide separation
methods, combined with new protein chemistries, enrichment methods and highly
automated MS instrumentation, have been developed that provide new tools for
analyzing the properties of proteomes with increased sensitivity and throughput
[10].
Using liquid chromatography with various columns and pressure conditions, or
bait molecules mounted on a fixed surface provided on small chips, proteins can
be sorted by their biochemical characteristics, such as hydrophobicity; anion-,
cation- or metal ion-binding capability; or specific protein–protein
interactions, e.g. receptor–ligand or antibody–antigen interactions. The
small-chip procedure has pushed proteomics into large-scale, high-throughput
application [11].
SELDI-TOF MS is most effective at profiling low molecular weight proteins
(<20 kDa). The application of small sample volumes (µl range) and the detection
of between 15,500 (low resolution SELDI-TOF) and >400,000 m/z ratios (high
resolution SELDI-TOF) make proteomic profiling of diverse biological samples
possible [11]. Figure 5.5.2 below illustrates a typical SELDI-TOF workflow.
Figure 5.5.2 Surface-Enhanced Laser Desorption and Ionization (SELDI) technology. Using a
robotic sample dispenser, 1 µL of serum is applied to the surface of a protein-binding chip. A
subset of the proteins in the sample binds to the surface of the chip. The bound proteins are
treated with a matrix-assisted laser desorption ionization matrix and are washed and dried. The
chip, which contains multiple patient samples, is inserted into a vacuum chamber where it is
irradiated with a laser. The laser desorbs the adherent proteins and causes them to be
launched as ions. The time of flight (TOF) of the ion before detection by an electrode is a
measure of the mass-to-charge (m/z) value of the ion. The ion spectra can be analysed by
computer-assisted tools that classify a subset of the spectra by characteristic patterns of
relative intensity [12].
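The time-of-flight measurement in the caption rests on simple physics: an ion accelerated through a potential V acquires kinetic energy zeV, so its flight time over a drift tube of length L scales with the square root of m/z. A minimal sketch of this relation, with purely illustrative instrument parameters (not any vendor's calibration):

```python
import math

# Linear TOF physics sketch: an ion of mass m (kg) and charge z*e accelerated
# through V volts satisfies z*e*V = 0.5*m*v^2, so flight time over a drift
# tube of length L is t = L * sqrt(m / (2*z*e*V)); hence m/z ~ t^2.

E_CHARGE = 1.602176634e-19   # elementary charge, C
DALTON = 1.66053906660e-27   # unified atomic mass unit, kg

def flight_time(mz, drift_length_m=1.0, accel_voltage=20000.0):
    """Flight time (s) for an ion of given m/z (Da per unit charge)."""
    m_over_q = mz * DALTON / E_CHARGE          # kg per coulomb
    return drift_length_m * math.sqrt(m_over_q / (2 * accel_voltage))

def mz_from_time(t, drift_length_m=1.0, accel_voltage=20000.0):
    """Invert the relation: recover m/z (Da) from an observed flight time."""
    m_over_q = 2 * accel_voltage * (t / drift_length_m) ** 2
    return m_over_q * E_CHARGE / DALTON

t = flight_time(15000.0)      # a ~15 kDa ion, in the typical SELDI range
print(f"flight time: {t*1e6:.1f} us, recovered m/z: {mz_from_time(t):.0f} Da")
```

In a real instrument the m/z scale is established by calibrating t² against standards of known mass rather than from first principles, but the quadratic form is the same.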
MALDI-TOF instruments usually perform peptide mass fingerprinting (PMF) [13].
Compared with MALDI, ESI has a significant advantage in that it can be easily
coupled to separation techniques such as liquid chromatography (LC) and HPLC,
allowing high-throughput, on-line analysis of peptide or protein mixtures.
Typically, proteins in a complex mixture are separated by ion-exchange or
reverse-phase column chromatography and then subjected to tandem MS (MS/MS)
analysis via online ESI [7, 10]. More recently, offline spotting of peptides
onto sample targets and the use of MALDI instruments capable of high-throughput
MS/MS analysis has also become an option [10].
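The idea behind peptide mass fingerprinting, matching observed peptide masses against masses predicted from in-silico tryptic digestion of database sequences, can be sketched as follows. The toy sequences, reduced residue-mass table and simple hit-count score are illustrative only; real search engines use far richer scoring models.

```python
# Minimal PMF sketch: in-silico tryptic digestion plus mass matching against
# a toy "database".  Residue masses are monoisotopic; sequences, tolerance
# and scoring are illustrative, not from any real search engine.

WATER = 18.010565  # H2O added to the residue-mass sum for a whole peptide
RESIDUE = {  # monoisotopic residue masses (Da), reduced table for the demo
    'G': 57.02146, 'A': 71.03711, 'S': 87.03203, 'P': 97.05276,
    'V': 99.06841, 'T': 101.04768, 'L': 113.08406, 'N': 114.04293,
    'D': 115.02694, 'K': 128.09496, 'E': 129.04259, 'R': 156.10111,
}

def tryptic_peptides(seq):
    """Cleave after K or R (ignoring the proline rule for brevity)."""
    peptides, start = [], 0
    for i, aa in enumerate(seq):
        if aa in 'KR':
            peptides.append(seq[start:i + 1])
            start = i + 1
    if start < len(seq):
        peptides.append(seq[start:])
    return peptides

def peptide_mass(pep):
    return sum(RESIDUE[aa] for aa in pep) + WATER

def pmf_score(observed_masses, seq, tol=0.2):
    """Count observed masses matching a predicted peptide within tol Da."""
    predicted = [peptide_mass(pep) for pep in tryptic_peptides(seq)]
    return sum(any(abs(m - pm) <= tol for pm in predicted)
               for m in observed_masses)

database = {'protA': 'GASPKVTLNR', 'protB': 'DDEEKGGGR'}
observed = [peptide_mass('GASPK'), peptide_mass('VTLNR')]  # an ideal spot
best = max(database, key=lambda p: pmf_score(observed, database[p]))
print(best)
```

The protein whose in-silico digest explains the most observed masses wins; here the masses were generated from protA's own peptides, so protA scores highest.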
Figure 5.5.3 Multi-Q quantitation system. Multi-Q package accepts two data inputs: raw data
from mass spectra and protein identification results from search engines such as MASCOT,
SEQUEST, and X!Tandem. Data then undergo data conversion, peptide ratio determination,
and protein ratio determination, and final protein relative abundance ratios are produced [8].
As shown in Figure 5.5.3 above, Multi-Q provides a data converter for handling
spectral raw data and can also accept search results from various search engines,
including SEQUEST, X!Tandem and MASCOT. After automatic filtering of non-
iTRAQ-labeled peptides, the ratio of all peptides with unambiguous identification is
determined [8].
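The peptide-ratio and protein-ratio steps can be sketched in outline. The intensity-weighted averaging below is a generic illustration of iTRAQ-style quantitation, not Multi-Q's actual algorithm, and all reporter-ion intensities are invented:

```python
# Outline of iTRAQ-style quantitation (a generic sketch, not Multi-Q's
# algorithm): each identified peptide carries reporter-ion intensities for two
# channels; peptide ratios are combined, weighted by total intensity, into a
# single protein abundance ratio.

from collections import defaultdict

# (protein, peptide, intensity_ch114, intensity_ch117) -- toy data
peptides = [
    ("P1", "LVNELTEFAK",  12000.0, 24500.0),
    ("P1", "YLYEIAR",      8000.0, 15800.0),
    ("P2", "HPDYSVVLLLR",  5000.0,  5100.0),
]

def protein_ratios(rows):
    grouped = defaultdict(list)
    for protein, _pep, i114, i117 in rows:
        grouped[protein].append((i114, i117))
    ratios = {}
    for protein, pairs in grouped.items():
        # intensity-weighted mean of the per-peptide 117/114 ratios
        weights = [i114 + i117 for i114, i117 in pairs]
        pep_ratios = [i117 / i114 for i114, i117 in pairs]
        ratios[protein] = (sum(w * r for w, r in zip(weights, pep_ratios))
                           / sum(weights))
    return ratios

print(protein_ratios(peptides))   # P1 roughly doubled, P2 unchanged
```

Intensity weighting is one common design choice: high-intensity reporter ions have better counting statistics, so their peptide ratios are trusted more.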
The determination of protein three-dimensional and quaternary structures has
greatly accelerated in the past few years. The most common techniques for
protein structural analysis are NMR, X-ray crystallography and structure
prediction methods, with MS also contributing to a certain degree.
VEGA ZZ, a molecular modeling software package, can analyze protein structures. It
has an extensive list of features including multiple file format support, atomic
potential attribution, 3-D molecular editor, and a protein–protein docking system [3].
Protein microarrays represent the first new technology that can profile the
state of a signaling pathway target even after the cell is lysed. The
reverse-phase protein microarray (see Figure 5.5.4 below) has a unique ability
to analyze signaling pathways using small numbers of human tissue cells
microdissected from biopsy specimens procured during clinical trials. These
arrays can be manufactured in a sectored format in which dozens of analytes can
be queried simultaneously on one slide, which increases throughput and makes
data analysis more facile [12].
Reverse protein microarrays do not require direct tagging of the protein as a readout
for the assay, which yields dramatic improvement in reproducibility, sensitivity and
robustness of the assay over other techniques [12].
Protein microarrays hold great promise for the identification of drugs and drug
targets, for basic proteomic research (such as determining protein-protein,
protein-lipid and enzyme-substrate interactions), and for clinical diagnosis.
The reduced sample consumption of the microarray format is crucial in proteome
analysis, since only minimal amounts of sample are available. Other promises of
protein microarrays include real-time patient monitoring during disease
treatment and therapy [7]. In future, blood tests may be performed by applying a
few drops of blood to a chip bearing specific protein markers, providing
valuable diagnostic and real-time prognostic information [3].
In the last few years, ProteinChip® technology has been applied in the detection of
cancers such as cancers of the head and neck, lung, breast, pancreas, kidney,
bladder, prostate and ovary through detection of biomarkers in serum, urine,
pancreatic juice, nipple aspirate fluid or tissue homogenates. Besides early detection
of cancer, systematic analysis of the serum, tissue, cellular, and subcellular
proteome may help to find novel biomarkers that uncover transplant rejection, drug
toxicity, and chronic inflammatory or cardiovascular diseases [11].
Virulent and attenuated strains of M. tuberculosis and Helicobacter pylori have
been studied by high-resolution 2-DE and MALDI mass fingerprinting to find
potential DNA vaccines and candidate antigens for serological detection. Marker
proteins were selected for the development of potential vaccines, demonstrating
the high success of proteomics studies [16].
MS/MS combined with ESI or MALDI has been developed as a potential tool for
identification of targeted microorganisms through analysis of peptides generated from
cellular proteins. Acid extraction of bacterial spore proteins followed by peptide
sequencing using MALDI MS/MS has been used to discriminate species from the
genus Bacillus in spore mixtures. LC-ESI MS/MS has been used in bacterial
classification based on the peptide sequence information generated from LC-ESI
MS/MS analysis of a bacterial protein digest. This method can be a strong
complement to the alternative approaches of comparing microbial genomes based on
DNA sequencing or microarray hybridization techniques [17].
The human protein atlas (created to show the expression and localization of proteins
in a large variety of normal human tissues and cancer cells) is available at
http://www.proteinatlas.org/
The journal ‘Drug Discovery Today’ publishes many articles focusing on the
application of proteomics technology (e.g. protein microarray) in the field of drug
development. The journal can be accessed at http://www.drugdiscoverytoday.com/.
The journal ‘Proteomics’ is a key resource for information on proteomics. The
journal is available at http://www3.interscience.wiley.com/cgi-bin/jhome/76510741
A comprehensive list of available current LIMS packages may be found at the
LIMSource (www.limsource.com).
The following table lists some Protein Interaction Resources and some other
resources of Interest [18]:
BIND    http://www.bind.ca
MIPS    http://mips.gsf.de
PSI     http://psidev.sf.net/
PEDRo   http://sourceforge.net/projects/pedro and http://pedro.man.ac.uk/
SBEAMS  http://www.sbeams.org/
The following table lists some proteomics software systems and their resources [6]:
Software | Focus | URL
Mascot | MS–MS search engine | www.matrixscience.com
Mascot Integra | Data management for proteomics | www.matrixscience.com
Sequest | MS–MS search engine | www.thermo.com
EPICenter | Data management and validation for peptide ID data | www.proxeon.com
Spectrum Mill | MS–MS search engine environment | www.agilent.com
Proteus LIMS | Proteomics data management LIMS | www.genologics.com
Peptide Prophet | High-throughput validation of peptide identifications | www.proteomecenter.org
Protein Prophet | Protein identification (statistical) | www.proteomecenter.org
X!Tandem | Open source search engine for MS–MS | www.thegpm.org
GPM | Public database of identified peptides | www.thegpm.org
XPRESS | Quantitative differential analysis for ICAT | www.proteomecenter.org
SBEAMS | Systems biology experiments analysis and management | www.proteomecenter.org
PRISM | High-throughput proteomics information management system | http://ncrr.pnl.gov/prism
Scaffold | Protein identification automation software | www.proteomesoftware.com
Phenyx | Protein identification and validation platform | www.phenyx-ms.com
DBParser | Protein identification and validation platform | http://proteome.nih.gov
MZmine | Differential LC-MS analysis of metabolomics data | http://mzmine.sourceforge.net
ProDB | Storage and analysis of proteomics experiments | http://www.cebitec.uni-bielefeld.de
PROTEIOS | Storage, analysis and annotation of proteomics experiments | www.proteios.org
ProteomIQ | Integrated proteomics data management platform | www.proteomesystems.com
Proteome Browser | Protein sequence annotation | http://genome.ucsc.edu
PepLine | Software pipeline for protein identification | http://www-helix.inrialpes.fr
Protein Expression System | Quantitative and qualitative proteomics analysis | www.waters.com
Xome | Quantitative and qualitative proteomics analysis | http://bio.mki.co.jp/product/xome
CellCarta | Integrated suite for quantitative proteomics analysis | www.caprion.com
mzXML | File format (standard) for mass spectra data | www.proteomecenter.org
Trans-Proteomic Pipeline | XML-based analysis pipeline for proteomics data | www.proteomecenter.org
ProICAT | Protein quantitation and identification for ICAT | www.appliedbiosystems.com
ProQuant | Protein quantitation and identification for iTRAQ | www.appliedbiosystems.com
DeCyder MS | Identification and quantitation analysis platform | www.amershambiosciences.com
MS peaks | De novo protein identification | www.bioinformaticssolutions.com
Expasy proteomics server | Protein sequence analysis tools and databases | http://ca.expasy.org
Ion Source | On-line resource of mass spectrometry methods | www.ionsource.com
5.5.7 References
2 Tyers, M. & Mann, M. (2003) From genomics to proteomics, Nature 422, 13 March
2003, 193-197
3 Alterovitz, G., Liu J., Chow J. and Ramoni, M.F. (2006) Automation, parallelism,
and robotics for proteomics, Proteomics 2006, 6, 4016-4022
5 Wilke, A., Rückert, C., Bartels, D., Dondrup, M., Goesmann, A., Hüser, A.T.,
Kespohl, S., Linke, B., Mahne, M., McHardy, A., Pühler A. and Meyer, F. (2003)
Bioinformatics support for high-throughput proteomics, Journal of Biotechnology
Volume 106, Issues 2-3, 19 December 2003, Pages 147-156
7 Zhu, H., Bilgin, M. and Snyder, M. (2003) Proteomics, Annual Review of
Biochemistry 2003, 72: 783-812
8 Lin, W.T., Hung, W.N., Yian, Y.H., Wu, K.P., Han, C.L., Chen, Y.R., Chen, Y.J.,
Sung, T.Y. and Hsu, W.L. (2006) Multi-Q: A Fully Automated Tool for Multiplexed
Protein Quantitation, Journal of Proteome Research 2006, 5(9), 2328-2338
9 Dowsey, A.W., Dunn, M.J. and Yang, G.Z. (2004) ProteomeGRID: towards a high-
throughput proteomics pipeline through opportunistic cluster image computing for
two-dimensional gel electrophoresis, Proteomics 2004, 4, 3800–3812
10 Roe, M.R. and Griffin, T.J. (2006) Gel-free mass spectrometry-based high
throughput proteomics: Tools for studying biological response of proteins and
proteomes, Proteomics 2006, 6, 4678–4687
13 Arrigoni, G., Fernandez, C., Holm, C., Scigelova, M. and James, P. (2006)
Comparison of MS/MS Methods for Protein Identification from 2D-PAGE, Journal of
Proteome Research 2006, 5(9), 2294-2300
14 Rauch, A., Bellew, M., Eng, J., Fitzgibbon, M., Holzman, T., Hussey, P., Igra, M.,
Maclean, B., Lin, C.W., Detter, A., Fang, R., Faca, V., Gafken, P., Zhang, H.,
Whitaker, J., States, D., Hanash, S., Paulovich, A. and McIntosh, M.W. (2006)
Computational Proteomics Analysis System (CPAS): An Extensible, Open-Source
Analytic System for Evaluating and Publishing Proteomic Data and High Throughput
Biological Experiments, Journal of Proteome Research 2006, 5(1), 112-121
15 Hogan, J. M., Higdon, R. and Kolker, E. (2006) Experimental Standards for High-
Throughput Proteomics, OMICS A Journal of Integrative Biology, Volume 10,
Number 2, 2006, 152-157
17 Dworzanski, J.P., Deshpande, S.V., Chen, R., Jabbour, R.E., Snyder, A.P., Wick,
C.H. and Li, L. (2006) Mass Spectrometry-Based Proteomics Combined with
Bioinformatic Tools for Bacterial Classification, Journal of Proteome Research 2006,
5(1), 76-87
6.2.1 Introduction
Firstly, two fusion proteins are created: the protein of interest (X), which is
constructed to have a DNA binding domain (BD) attached to its N-terminus, and its
potential binding partner (Y), which is fused to an activation domain (AD). If protein X
and protein Y interact, then their DNA-BD and AD will combine to form a functional
transcription activator (TA). This newly formed TA will then go on to transcribe a
reporter gene, which is simply a gene whose protein product can be easily detected
and measured. In this way, the amount of the reporter produced can be used as a
measure of interaction between the protein of interest and its potential partner[1, 2].
In addition to encoding the fusion proteins, these plasmids also contain
selection genes: genes encoding proteins that contribute to a cell’s survival in
a particular environment. An example of a selection gene is one encoding
antibiotic resistance; when antibiotics are introduced, only cells with the
antibiotic resistance gene survive. Yeast two-hybrid assays typically use
selection genes encoding proteins capable of synthesizing amino acids such as
histidine, leucine and tryptophan[2].
Once the plasmids have been constructed, they must be introduced into a host
yeast cell by a process called transfection. In this process, the outer membrane
of a yeast cell is disturbed by a physical method, such as sonication, or by
chemical disruption. This disruption produces holes large enough for a plasmid
to pass through, allowing the plasmids to cross the membrane and enter the
cell[3, 4].
Once the cells have been transfected, it is necessary to isolate colonies that
carry both ‘bait’ and ‘hunter’ plasmids, because not every cell will take up
both plasmids; some will take up only one, while others will take up none.
Isolating transfected cells involves identifying cells containing the plasmids
by virtue of their expressing the selection genes mentioned previously[4].
After the cells have been transfected and allowed to recover for several days, they
are then plated on minimal media, or media that is lacking one essential nutrient,
such as tryptophan. The cells used for transfection are called auxotrophic mutants;
these cells are deficient in producing nutrients required for their growth. By supplying
the gene for the deficient nutrient in the ‘bait’ or ‘hunter’ plasmid, cells containing the
plasmid are able to survive on the minimal media, whereas untransfected cells
cannot. Selection in this way occurs in two rounds: first on one minimal media plate,
to select for the ‘bait’ plasmid, and then on another minimal media plate, to select for
the ‘hunter’[3, 4].
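The two-round selection described above can be sketched as a simple filter over cells, each represented by the set of plasmids it took up. The marker assignment (bait supplies the tryptophan gene, hunter the leucine gene) is illustrative; actual markers vary between systems.

```python
# Toy model of two-round auxotrophic selection: each cell is the set of
# plasmids it took up, and a plate lacking a nutrient retains only cells
# carrying the plasmid whose marker gene supplies that nutrient.
# The bait->TRP, hunter->LEU assignment here is an illustrative assumption.

MARKERS = {"bait": "TRP", "hunter": "LEU"}

cells = [
    {"bait", "hunter"},   # took up both plasmids -- the cells we want
    {"bait"},             # bait only
    {"hunter"},           # hunter only
    set(),                # untransfected
]

def select(cells, missing_nutrient):
    """Keep cells carrying a plasmid whose marker supplies the nutrient."""
    return [c for c in cells
            if any(MARKERS[p] == missing_nutrient for p in c)]

# Round 1: a plate lacking tryptophan selects for the bait plasmid.
survivors = select(cells, "TRP")
# Round 2: a plate lacking leucine selects for the hunter plasmid.
survivors = select(survivors, "LEU")
print(survivors)   # only the doubly transfected cell remains
```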
Once inside the cell, if binding occurs between the hunter and the bait, transcriptional
activity will be restored and will produce normal Gal4 activity. The reporter gene most
commonly used in the Gal4 system is LacZ, an E. coli gene whose transcription
causes cells to turn blue. In this yeast system, the LacZ gene is inserted in the yeast
DNA immediately after the Gal4 promoter, so that if binding occurs, LacZ is
produced. Therefore, detecting interactions between bait and hunter simply requires
identifying blue versus non-blue[2, 4].
Three-hybrid System
In this yeast two-hybrid variation, a third protein (Z) is expressed along with
the DNA-BD and AD fusions, and expression of the reporter gene is used to select
for interactions that occur only in the presence of this protein. The
three-hybrid system was developed to detect and analyse RNA-protein
interactions, in which the binding of a bifunctional RNA molecule links the
DNA-BD and AD hybrid proteins and activates transcription of the reporter gene.
This version is known as the RNA three-hybrid system[5].
In recent years, there has been growing interest in the development of
prokaryotic two-hybrid systems. A large number of prokaryotic two-hybrid systems
have been described, but despite their many advantages they have not yet been
widely implemented for large-scale (proteome-scale) protein interaction mapping
in the way the yeast two-hybrid system has[6, 7].
The yeast two-hybrid system became one of the most popular technologies for the
detection of protein-protein interactions because it is fairly simple, rapid and
inexpensive (avoids the costly protein purification and antibody development needed
in the traditional biochemical methods). No previous knowledge about the interacting
proteins is necessary for a screen to be performed[5].
Limitations
Some classes of proteins are not amenable to analysis by the yeast two-hybrid
system. For example, transcriptional activators may activate transcription
without any interaction. Another troublesome class comprises proteins containing
hydrophobic transmembrane domains, which may prevent the proteins from reaching
the nucleus[13]. To overcome this limitation, one of the alternative
membrane-associated two-hybrid systems may be used. Other proteins may require
modification by cytoplasmic or membrane-associated enzymes in order to interact
with their binding partners[5, 13].
Drug discovery is a lengthy and costly process involving target identification
and validation, drug screening and safety assays, development of animal models
and, ultimately, testing of potential drug candidates in clinical trials[14].
The development of a novel drug may take 10-15 years, with cost estimates of
~800 million US$. It is a complex process that includes the identification of
biological targets as well as the identification of leads that aim to alter or
inhibit the function of a particular target[14].
The yeast Saccharomyces cerevisiae has long been recognised as a valuable model
organism for studies of eukaryotic cells. Auerbach et al. highlighted emerging
yeast-based functional genomic and proteomic technologies in their paper on
using yeast as a model organism in drug discovery. These approaches include
variations of the yeast two-hybrid system. With regard to screening for novel
drugs, the yeast two-hybrid system can be broadly applied in two areas: target
identification and validation, and screening for compounds that inhibit the
interaction between two therapeutic target proteins[14, 15].
As a genetic assay, the yeast two-hybrid system is well suited to
high-throughput studies. It has been used successfully to create so-called
“whole genome interaction networks”, in which all proteins of a given organism
are systematically tested against each other using high-throughput yeast
two-hybrid strategies[14, 16].
As many proteins that play important roles in human disease have orthologues in
lower eukaryotes such as yeast, D. melanogaster or C. elegans, a potential route
towards the identification of novel targets is to create interaction networks in
these model organisms and then try to transfer the insights gained to the human
situation. Comprehensive protein interaction maps have been created for yeast
and, recently, also for D. melanogaster and C. elegans. As an example, the
D. melanogaster interaction map identified a total of 4,780 high-confidence
interactions involving 4,679 proteins[14, 17]. The authors then
used the Homophila database, which lists all D. melanogaster proteins whose
orthologues are implicated in human disease pathways, to integrate their protein
interaction data with known human disease pathways[15, 17]. As an example of
this approach, they demonstrated that a transcription factor previously
implicated in human B cell non-Hodgkin’s lymphoma is connected to two
calcium-dependent phosphatases, a finding that suggests calcineurin phosphatases
may be valid drug targets for treating lymphomas and other cancer types[14].
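The integration step, mapping interaction partners of model-organism genes onto human disease genes via an orthologue table, is essentially a join. A minimal sketch, in which every record is a toy stand-in rather than a real Homophila or interaction-map entry:

```python
# Sketch of the integration step: join a model-organism interaction map with
# an orthologue/disease table to flag candidate drug targets.  All entries
# below are toy stand-ins, not actual Homophila or interaction-map records.

fly_interactions = [            # (fly gene A, fly gene B)
    ("gene_tf", "gene_ppX"),
    ("gene_tf", "gene_ppY"),
    ("gene_other", "gene_ppZ"),
]

# fly gene -> (human orthologue, associated disease)
homophila_like = {
    "gene_tf": ("HUMAN_TF", "B cell lymphoma"),
}

def candidate_targets(interactions, disease_map):
    """Return interaction partners of fly genes whose human orthologues are
    implicated in disease: candidate targets for follow-up study."""
    hits = []
    for a, b in interactions:
        for disease_gene, partner in ((a, b), (b, a)):
            if disease_gene in disease_map:
                human, disease = disease_map[disease_gene]
                hits.append((partner, human, disease))
    return hits

print(candidate_targets(fly_interactions, homophila_like))
```

Here both partners of the disease-linked transcription factor surface as candidates, mirroring how the phosphatases were flagged in the study described above.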
The two-hybrid system has also been adapted to study drug-protein
interactions[18]. This technique, termed the yeast “three-hybrid system”, uses a
synthetic heterodimer consisting of two different organic ligands to bring into
proximity a DNA-binding domain fused to the receptor of one ligand and an
activation domain fused to the receptor for the second ligand[5]. The
feasibility of this system was demonstrated using, as the hybrid ligand, a
heterodimer of covalently linked dexamethasone and FK506. The receptor for
dexamethasone was fused to the LexA DNA-binding domain, and a Jurkat cDNA
library fused to a transcriptional activation domain was screened; three
overlapping clones of FKBP12, the human FK506-binding protein, were
isolated[5, 18].
Reverse two-hybrid systems, which can be used to select small molecules that
inhibit protein-protein interactions, have also been demonstrated[18]. In one
such system, expression of the two interacting proteins is controlled by the GAL
promoter. Following galactose induction, the two interacting proteins are
synthesized and their association induces the expression of a toxic gene
product. Only cells in which a small molecule inhibits the protein-protein
interaction survive. Using this system, nanomolar concentrations of FK506 were
shown to disrupt the association of FKBP12 with R1 of the transforming growth
factor β receptor family[18].
6.2.5 Relevant Web Sites
http://www.promega.com/
http://www.stratagene.com/
http://www.clontech.com/
6.2.7 References
5. Vollert, C. & Uetz, P. (2003) The Two-Hybrid System, in Encyclopedic
Reference of Genomics and Proteomics in Molecular Medicine.
Source: http://itgmv1.fzk.de/www/itg/uetz/publications/Vollert2003-2H.pdf
6. Clarke, P., Cuív, P. Ó. & O'Connell, M. (2005) Novel mobilizable prokaryotic
two-hybrid system vectors for high-throughput protein interaction mapping in
Escherichia coli by bacterial conjugation, Nucleic Acids Research 33, e18.
7. Serebriiskii, I. G., Fang, R., Latypova, E., Hopkins, R., Vinson, C., Young,
J. K. & Golemis, E. A. (2005) A Combined Yeast/Bacteria Two-Hybrid System,
Molecular and Cellular Proteomics 4.6, 819-826.
8. Luo, Y., Batalao, A., Zhou, H. & Zhu, L. (1997) Mammalian two-hybrid system:
a complementary approach to the yeast two-hybrid system, Biotechniques 22,
350-352.
9. Schenborn, E., deBerg, L. & Brondyk, W. (1998) The CheckMate™ Mammalian
Two-Hybrid System, Promega Notes 66, 2-8.
10. Hall, R. A. (2004) Studying protein-protein interactions via blot overlay or
Far Western blot, Methods in Molecular Biology 261, 167-174.
12. Mahlknecht, U., Ottmann, O. & Hoelzer, D. (2001) Far-Western based
protein-protein interaction screening of high-density protein filter arrays,
Journal of Biotechnology 88, 89-94.
13. McAlister-Henn, L., Gibson, N. & Panisko, E. (1999) Applications of the
Yeast Two-Hybrid System, Methods 19, 330-337.
14. Auerbach, D., Arnoldo, A., Bogdan, B., Fetchko, M. & Stagljar, I. (2005)
Drug Discovery Using Yeast as a Model System: A Functional Genomic and Proteomic
View, Current Proteomics 2, 1-13.
16. Gwynne, P. & Heebner, G. (2003) Drug Discovery and Biotechnology Trends –
Proteomics I: In Pursuit of Proteins, Science.
17. Stanyon, C. A., Liu, G., Mangiola, B. A., Patel, N., Giot, L., Kuang, B.,
Zhang, H., Zhong, J. & Finley, R. L., Jr (2004) A Drosophila protein-interaction
map centered on cell-cycle regulators, Genome Biology 5, R96.
Source: http://genomebiology.com/2004/5/12/R96
18. Parsons, A. B., Geyer, R., Hughes, T. R. & Boone, C. (2003) Yeast genomics
and proteomics in drug discovery and target validation, Progress in Cell Cycle
Research 5, 159-166.
Chapter 6.3 Biosensor methods for study of protein-protein
interactions
6.3.1 Introduction
A mass spectrometer is a device that measures mass-to-charge ratios and
typically consists of three parts: an ion source, a mass analyser and a detector
system. Aebersold (2003) noted that the ionization process in MS may degrade the
characteristics of proteins and peptides[1]. Classic 2D gel electrophoresis
combined with mass spectrometry cannot accurately measure small differences in
concentration[2]. Despite technical advances in 2D PAGE, it is still regarded as
time consuming and unsuitable for examining large numbers of samples.
Matrix-assisted laser desorption/ionisation time-of-flight mass spectrometry
(MALDI-TOF MS) offers a more sensitive means of identifying small amounts of
protein, and the technique rapidly took over from earlier MS approaches
following its introduction. MALDI can quickly determine the masses of peptides
generated from a purified, fragmented protein, and it provides highly reliable
data. SELDI offers a similar means of analysing biological mixtures. The main
difference between MALDI and SELDI is that in MALDI the surface or beads are
passive probes that do not contribute to the reaction, whereas in SELDI proteins
are immobilized on one of a variety of chip surfaces, each with different
binding specificities (refer to chapters 4.1 and 5.3 for more details).
SELDI coupled with protein chips is an effective tool for the simultaneous
detection of the expression of many relevant proteins under different
conditions. Differences in protein expression level can then be used to identify
disease, to differentiate stages of a disease, or to distinguish time points
following toxicant treatment. Analysis of SELDI data therefore plays a pivotal
role in developing SELDI technology. Rigorous procedures have been developed to
detect and discard low quality spectra prior to data analysis in SELDI. Hong et
al. (2005) presented a novel method that represents sets of spectra as a
correlation matrix in order to measure and detect low quality data[3] (refer to
chapter 9.2 for more details).
6.3.2.3 SPR&MS
Campbell et al. (2004) developed a novel SPR biosensing array that can measure
120 interactions simultaneously, using a computer-interfaced video camera to
probe the interactions (Figure 6.3.2.2). Implementing an array format in SPR
offers at least two advantages: smaller amounts of protein can be measured, and
kinetics and concentrations can be determined in parallel[5].
6.3.2.5 SPR&AFM
A further advantage of SPR biosensors is that the method may be combined with
AFM to obtain additional information, such as the surface topology of bound
proteins and the thickness of protein layers immobilized on the surfaces of
protein arrays.
The fact that SPR generates real-time binding data makes it well suited to both
equilibrium and kinetic measurements. Equilibrium analysis is time-consuming,
requiring several injections at different concentrations. In general, the time
to equilibrium is determined primarily by the dissociation rate constant k-off.
Slow k-off values usually result from high-affinity interactions (KD < 10 nM).
In contrast, weak interactions (KD > 100 nM) have fast k-off values and tend to
be easy to study. SPR can generate highly reproducible data. For kinetic
measurements, SPR is easy to use and analysis software is readily available, so
generating kinetic data with SPR is common practice.
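The relationship between KD, k-off and the approach to equilibrium can be illustrated with the 1:1 Langmuir binding model commonly fitted to SPR sensorgrams. The rate constants and instrument responses below are illustrative values, not measurements:

```python
# 1:1 (Langmuir) binding model commonly fitted to SPR sensorgrams,
# integrated numerically.  Rate constants are illustrative, not measured.
#   dR/dt = k_on * C * (Rmax - R) - k_off * R
# At equilibrium, Req = Rmax * C / (C + KD) with KD = k_off / k_on, and the
# observed approach rate is k_obs = k_on*C + k_off (slow k_off -> slow
# equilibration, consistent with the discussion above).

def simulate_association(k_on, k_off, conc, r_max, t_end, dt=0.01):
    """Euler integration of the association phase; returns final response."""
    r = 0.0
    for _ in range(int(t_end / dt)):
        r += dt * (k_on * conc * (r_max - r) - k_off * r)
    return r

k_on, k_off = 1e5, 1e-3        # 1/(M*s), 1/s  -> KD = 10 nM (high affinity)
kd = k_off / k_on
conc, r_max = 1e-8, 100.0      # analyte at 10 nM, saturation response 100 RU

r_eq_theory = r_max * conc / (conc + kd)   # expected plateau (50 RU here)
r_sim = simulate_association(k_on, k_off, conc, r_max, t_end=5000.0)
print(f"KD = {kd:.1e} M, simulated plateau {r_sim:.1f} RU "
      f"(theory {r_eq_theory:.1f} RU)")
```

With these constants k_obs = 2e-3 per second, i.e. a time constant of about 500 s, which is why the simulation must run for thousands of seconds to reach the plateau; a weaker interaction with a faster k-off would equilibrate far sooner.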
Because labelling proteins or other detection probes may result in loss of
biological activity, label-free methods are preferred. Current label-free
methods include SELDI, AFM and SPR[6]. In addition to using unlabelled protein,
SPR offers the advantage that it operates in situ; in other words, there is no
requirement to rinse or dry the substrate before analysis. This feature is
extremely useful for quantifying low-affinity protein-protein interactions[7].
SPR is not well suited to high-throughput assays, because an average analysis takes approximately 10 minutes. Common problems include deterioration of the sensor surface over time and air bubbles blocking the microfluidic system. Much research has been conducted to overcome these disadvantages.
During the immobilization procedure, proteins may become denatured and show non-specific binding in the analysis of protein-protein interactions.
Two types of SPR sensors, angular and spectral SPR biosensors, are currently in wide use in proteome research. Angular SPR biosensors are based on angular interrogation and analyse protein interactions by scanning incidence angles at a constant wavelength. Spectral SPR biosensors, on the other hand, are based on wavelength interrogation and scan wavelengths at a constant incidence angle to analyse biomolecular interactions.
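In angular interrogation, resonance occurs at the incidence angle where the in-plane wavevector of the light matches the surface-plasmon wavevector at the metal/sample interface. A short sketch of this standard matching condition; the prism, gold and water optical constants are assumed, typical textbook values near 633 nm, not parameters of any instrument described here:

```python
import math

def spr_angle_deg(n_prism, eps_metal_real, n_dielectric):
    """Resonance angle from the matching condition
    n_prism * sin(theta) = sqrt(eps_m * eps_d / (eps_m + eps_d)),
    using only the real part of the metal permittivity."""
    eps_d = n_dielectric ** 2
    k_sp = math.sqrt(eps_metal_real * eps_d / (eps_metal_real + eps_d))
    return math.degrees(math.asin(k_sp / n_prism))

# Assumed values: BK7 prism (n ~ 1.515), gold (Re(eps) ~ -11.6 at 633 nm),
# water (n ~ 1.333); gives a resonance angle in the low 70s of degrees.
print(round(spr_angle_deg(1.515, -11.6, 1.333), 1))
```

A small change in the dielectric's refractive index (e.g. protein binding at the gold surface) shifts this angle, which is exactly what the angular biosensor tracks.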
SPR imaging measures the change in intensity caused by changes in refractive index at the gold surface. Jung's group (2004) applied this technology to monitoring protein expression by E. coli. In short, they immobilized three proteins (maltose-binding-protein-tagged human interleukin-6, hexahistidine-tagged human growth factors and glutathione S-transferase-tagged human interleukin-6) onto gold-coated slides pre-treated with cyclodextran, Ni-iminodiacetate and glutathione, respectively. They found that a problem with SPR imaging is that the SPR spectrum may not shift horizontally, which results in fluctuation of the SPR image [8].
The main difference between SPR spectroscopic imaging and SPR imaging is that the former detects the shift in SPR wavelength, whereas the latter detects the SPR intensity. SPR spectroscopic imaging combines position automation and SPR spectroscopy in a single instrument. There are two modes of SPR spectroscopic imaging: line-scanning and whole-scanning. In a typical antibody-antigen experiment, three G-proteins (rhoA, rhoA N19 and rac1) and C-reactive protein were immobilized onto a protein array modified with a mixed thiol layer of 11-mercaptoundecanoic acid and mercaptohexanol. The array was then incubated with two monoclonal antibodies, against rhoA and rac1, and analysed by SPR spectroscopic imaging in whole-scanning mode (Fig. 6.3.4.2) [9].
The whole-scanning method can provide detailed information on antigen-antibody binding and cross-reactivity. Figure 6.3.4.2 shows that anti-rac1 binds strongly to rac1, while anti-rhoA cross-reacts with rhoA N19. The drawback of whole-scanning is that it takes a long time to finish, because the entire protein array must be scanned by moving gradually across all spots. Yuk et al. (2005) demonstrated the analysis of antibody-antigen interactions in line-scanning mode. They scanned protein arrays along their central lines and presented the data as a color spectrum. In their experiment, seven different proteins (tissue transglutaminase (TGase), hemoglobin, haptoglobin, rhoA, rac1, GST and rhoA N19) were immobilized, and their interactions with three antibodies (anti-rhoA, anti-rac1 and anti-haptoglobin) were analysed in the line-scanning mode of the SPR biosensor. In summary, the sensitivity of an in situ spectral SPR biosensor was not as good as that of an ex situ spectral SPR biosensor [9].
Dr. Zhou’s research
http://www.calstatela.edu/dept/chem/Zhou/research.html
Wikipedia
http://en.wikipedia.org/wiki/Surface_plasmon_resonance
Introduction of SPR
http://www.uksaf.org/tech/spr.html
6.3.7 References:
7.1.1 Introduction
Proteins play a crucial role in many of the physical and chemical processes in the living cell [1]. These ubiquitous molecules have a highly organized three-dimensional structure in solution. Peptides are smaller versions of proteins and are generally less organized in the solution state [2]. There is no clear dividing line between proteins and peptides, the accepted distinction being that proteins are larger peptides. With the completion of the Human Genome Project the study of proteins has increased considerably, and the use of proteins as target molecules for drug synthesis has increased the need for artificial synthesis of peptides [3].
1] Bergmann and Zervas, 1932: the discovery of the carbobenzoxy group, which could be used as an easily removable protecting group for polyfunctional amino acids.
As we shall see further on, this last discovery was perhaps the most important one; it allowed for the increased use of synthetic peptides in experiments and has largely replaced solution-phase peptide synthesis. Bacteria and viruses were, and still are, used for the production of proteins by means of expression systems, but with the advent of solid-phase and solution-phase peptide synthesis it became possible to mass-produce peptide sequences.
7.1.2 Principle
The main principle behind solid-phase peptide synthesis can be summed up as follows: ‘if a peptide is bound to a solid, insoluble matrix, then the reagents left unreacted at any synthetic step can be washed away, decreasing the time required for chemical synthesis and reducing the chance of side reactions.’
The t-Boc synthesis developed by Merrifield uses an acid-labile temporary protecting group, while the Fmoc synthesis developed by Sheppard uses a base-labile temporary protecting group.
Both methods of solid-phase peptide synthesis proceed from the C terminus to the N terminus, with the N-terminal end protected by a removable (temporary) group; after deprotection, the next amino acid in line is attached there. This is the reverse of the process in nature, where the cell's ribosomes synthesize proteins from the N terminus to the C terminus.
Solution Phase Peptide Synthesis [4]- Although this method has largely been replaced in laboratories by solid-phase synthesis, it is still used for large-scale production of synthetic peptides and is often referred to as the classical method of peptide synthesis. The N-α-protected amino acids are added stepwise to a growing chain that starts from the C-terminal amino acid. Each coupling step is brought about by one of several coupling methods.
Solid Phase Peptide Synthesis [SPPS] [5]- This is the preferred method of peptide synthesis, especially for small laboratories that require high-quality synthetic peptides in small amounts. In this technique, a solid resin matrix is placed in a reaction vessel. The resin carries a reactive group that forms the base of the growing peptide chain and can bind the carboxyl end of an N-protected amino acid. A single cycle consists of removing the temporary protecting group, draining the deprotecting agent, and coupling the next amino acid to the growing chain. At the end of each cycle, the peptide is tested for purity. When the desired sequence has been achieved, the completed peptide is released by a cleavage agent that targets the bond between the C-terminal amino acid and the support. The peptide is then freeze-dried and characterized by high-performance liquid chromatography [HPLC] and mass spectrometry.
Two kinds of protecting groups are present on the amino acids. The ‘temporary’ protecting groups protect the amino end of the amino acids and are removed at the beginning of each cycle. These are the N-α-tert-butyloxycarbonyl [Boc] group and the N-α-fluorenylmethyloxycarbonyl [Fmoc] group. The differences between the two methods of SPPS are based on the chemistry of these two groups, the former being acid labile and the latter base labile; hence the Boc synthesis technique and the Fmoc synthesis technique. The ‘permanent’ groups protect the side chains from reacting with each other and are usually similar for both methods. These are usually ether, ester or urethane derivatives of benzyl alcohol. In Boc chemistry, the permanent groups are often modified by the addition of electron-donating methyl/methoxy groups or electron-withdrawing halogen groups. The permanent groups are removed in the final cleavage step.
Boc Chemistry [6]- Boc chemistry is the classical method of solid-phase peptide synthesis, developed by Merrifield. The temporary Boc group is introduced onto amino acids using either di-tert-butyl dicarbonate or 2-(tert-butoxycarbonyloxyimino)-2-phenylacetonitrile, in aqueous 1,4-dioxane containing NaOH [7]. The Boc group cannot be removed by alkali or nucleophiles, but it is removed rapidly by inorganic and organic acids; this is one of the key steps in Boc chemistry. The initial Boc-protected amino acid is attached to the resin by an HF-cleavable bond, and further Boc-protected amino acids are added in successive cycles. A single cycle can be outlined as follows-
Deprotection- The acid-labile protecting group is removed using neat trifluoroacetic acid [TFA].
Addition- Dimethylformamide [DMF] is used to neutralize and wash away the residual TFA, and the activated amino acid is added to the reaction vessel. Binding is confirmed by carrying out a ninhydrin test on a small sample of the resin and measuring the absorbance of the ninhydrin solution against the amount of resin in the sample. If the percentage of binding is relatively low, the entire cycle is repeated with the same amino acid; this process is referred to as double coupling. The result after double coupling is usually accepted as final, whatever it is. The cycle is then repeated with the next amino acid until the desired sequence has been achieved.
Removal- After the desired peptide sequence is complete, the peptide is cleaved from the resin using anhydrous hydrogen fluoride [HF]. Thiol compounds are often added to protect the peptide from the carbocations generated in the process.
Figure 7.1.4 - Summary of the Boc chemistry.
Reference: http://en.wikipedia.org/wiki/Peptide_synthesis
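The deprotection/addition/removal cycle described above can be sketched as a small simulation. The function names, the mock coupling percentage and the 95% double-coupling threshold are illustrative assumptions, not a published protocol:

```python
def couple(residue):
    """Stand-in for activation and coupling; returns a mock ninhydrin-based
    coupling percentage (a real run would measure this on a resin sample)."""
    return 97.0  # pretend the ninhydrin test reported 97% coupling

def boc_cycle(chain, residue, threshold=95.0):
    """One cycle: TFA deprotection, DMF wash, coupling, optional double coupling."""
    # 1. Deprotection: TFA removes the acid-labile Boc group.
    # 2. Wash/neutralise with DMF, then add the activated amino acid.
    efficiency = couple(residue)
    if efficiency < threshold:
        efficiency = couple(residue)  # double coupling; result accepted as final
    chain.append(residue)
    return chain

def synthesize(sequence_c_to_n):
    """Assemble the chain C- to N-terminal; HF cleavage is the final step."""
    chain = []
    for aa in sequence_c_to_n:
        boc_cycle(chain, aa)
    return "".join(chain)

print(synthesize(["G", "A", "F"]))  # residues given C-terminus first
```

The point of the sketch is the control flow: deprotect, couple, re-couple only if the ninhydrin check falls below the threshold, then move to the next residue.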
The use of fluorous tags in conjunction with solid-phase peptide synthesis has recently been introduced. Exploiting the unique phase properties of fluorous compounds, the growing peptide is tagged and then de-tagged at the end of the synthesis, as described in [8].
Fig 7.1.2 – Fluorous Peptide Synthesis
Reference: www.fluorous.com/peptides.html
The level of sophistication is set to increase with improvements in linker design, rapid on- and off-bead technology and automated synthesis equipment. Examples of automated systems from Advanced Automation Peptide Protein Technologies include the Apex 396 for multiple peptides, the Model 90 for small- and mid-scale production and the Model 400 for large-scale production. The Matrix 384 automated synthesizer can produce up to 384 peptides simultaneously. Although high-throughput tools such as automated synthesizers are used extensively, most organizations prefer their staff to have knowledge of manual peptide synthesis.
The transition from t-Boc to Fmoc was one of the most notable advancements in solid-phase peptide synthesis. The major change in Fmoc chemistry was the use of mildly basic conditions (piperidine) for removal of the temporary protecting group, with TFA in place of the HF cleavage of the classical method.
Boc chemistry uses an acid-labile temporary protecting group, whereas Fmoc chemistry uses a mild base-labile temporary protecting group. Although the chemistry and the reagents required are different, the cycle is essentially similar to that of Boc chemistry. Deprotection of the amino acids is achieved with 20% piperidine in DMF. It is much slower than deprotection by TFA in Boc chemistry, because the reaction kinetics are influenced by the formation of an anionic nitrogen species.
The main advantage of Fmoc chemistry is that it does not involve the use of HF to remove the permanent protecting groups or to cleave the peptide from the solid support; TFA is used instead. HF is dangerous, harsh on the peptide and difficult to handle.
When peptides are evaluated by HPLC, mass spectrometry and sequence analysis, synthesis by Fmoc chemistry has been found to be more reliable and accurate, because Fmoc uses milder conditions for cleavage of the peptide from the resin and for removal of the permanent protecting groups. G.B. Fields et al. compared the two techniques and concluded that Fmoc chemistry is more reliable and tends to give the desired peptides in better purity [9].
Advances in methods for the design and synthesis of peptides, pseudo-peptides and related compounds help in understanding protein structure, domain structure and fold, topology, and dynamics. Peptide therapy has enormous potential in areas as diverse as growth control, blood pressure, neurotransmission, hormone action, pain, digestion and reproduction.
Some of the medical and biological applications of synthetic peptides are:
Enzyme inhibitors - Inhibition of enzymatic systems is one of the prime targets for the regulation and control of disease. Current targets include cancer cells, bacteria, humans and many other systems; for example, 5-fluorouracil is used as a chemotherapeutic agent, penicillins are considered first-line antimicrobials, and lovastatin is used as a cholesterol-reducing agent. Recently, peptides have been used as therapeutic agents targeting specific enzymes or systems; for example, hirudin peptides have helped to elucidate the mechanism of the thrombin-hirudin coagulation pathway.
Receptor Studies - the use of peptides in the study of receptor types and subtypes, and in providing analogues for in vivo studies [4].
Antigenic and Immunogenic uses - synthetic peptides are used in the study of immune responses owing to their ability to mimic antigenic sequences in proteins. The ability to design novel analogues that mimic certain properties of a compound naturally present in the body has been used to study peptide hormones, neurotransmitters, enzyme inhibitors and so on [11].
GenScript Corporation
http://www.genscript.com/peptide.html
Sigma Aldrich
http://www.sigmaaldrich.com/Brands/Fluka___Riedel_Home/Organic___Synthetic/Peptide_Synthesis.html
7.1.8 References:
1. Ashely, M.J., V.J. Hruby, and J.-P. Meyer, Bioorganic Chemistry: Introduction to Peptides and Proteins, ed. S.M. Hecht. 1998: Oxford University Press.
2. Lloyd-Williams, P., F. Albericio, and E. Giralt, Chemical Approaches to the Synthesis of Peptides and Proteins. 1997: University of Barcelona, Barcelona, Spain.
3. AnaSpec. 2006 [cited 24 October 2006]; Available from: http://www.anaspec.com/.
4. Grant, G.A., Synthetic Peptides: Beginning the Twenty-first Century. 2nd ed. 2002: Oxford University Press.
5. Ermolat'ev, D.S. and E.V. Babaev, Solid-phase synthesis of N-(pyrimidin-2-yl)amino acid amides. Chemistry Department, Moscow State University, Moscow, Russia, 2005: pp. 172-178.
6. Merrifield, B., Solid Phase Peptide Synthesis. I. The Synthesis of a Tetrapeptide. Journal of the American Chemical Society, 1963. 85: p. 2149.
7. Bodanszky, M. and A. Bodanszky, The Practice of Peptide Synthesis. 1984, Berlin: Springer-Verlag.
8. Zhang, W., Fluorous tagging strategy for solution-phase synthesis of small molecules, peptides and oligosaccharides. Current Opinion in Drug Discovery & Development, 2004. 7(6): pp. 784-797.
9. Fields, G.B., et al., Evaluation of Peptide Synthesis as Practiced in 53 Different Laboratories.
10. Beeley, N. and A. Berger, A revolution in drug discovery. BMJ, 2000. 321: pp. 581-582.
11. Hruby, V.J., F. al-Obeidi, and W. Kazmierski, Emerging approaches in the molecular design of receptor-selective peptide ligands: conformational, topographical and dynamic considerations. Biochemical Journal, 1990. 268: pp. 249-262.
Chapter 7.4 Purification and characterization of synthetic
peptides
Sandip D Kamath
7.4.1 Introduction
Peptides can be defined as chains of amino acids linked together by peptide bonds. Peptides and proteins are complex biomolecular structures that form the basic building blocks of life; they are the final product, the expression, of the cell's vast genetic information. In the biological process, peptides are produced by the addition of amino acids one after another through the formation of peptide bonds, in a sequence dictated by the RNA, which in turn is determined by the DNA of the cell. This sequence is called the primary structure of the peptide. The structure then folds to give a final functional protein; the size, shape, structure and function of the protein are decided by its primary amino acid sequence. In nature, assembly starts from the amino terminus and proceeds towards the carboxy terminus of the last amino acid. Peptides can also be synthesized artificially in the laboratory; these are called synthetic peptides.[1] Synthetic peptides can be made by a process called “solid phase” chemical synthesis. In this process the synthesis of the peptide starts from the carboxy terminus and makes its way to the amino terminus, in contrast to the biosynthetic pathway. The amino acid that constitutes the carboxy terminus of the proposed peptide is attached to a solid surface, i.e. a resin. Each subsequent amino acid is then linked through its carboxyl group to the free amino group of the growing chain, forming a peptide bond. This is not a straightforward reaction: there are other reactive groups on the amino acid that can react with the forming chain and give unwanted side products. To avoid this, blocking agents are used to protect the other reactive groups and expose only the required group; t-Boc and Fmoc are two such agents widely used in the preparation of synthetic peptides.[2] After the linking of an amino acid is complete, the blocking agent is removed and the amino group is exposed for the next amino acid to react with. This cycle is continued until the required peptide is formed. Peptides containing up to 20 amino acids can easily be made with this technique. It is rather more difficult to synthesize peptides containing up to 100 amino acids, though they have been made successfully.[2]
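The growing difficulty with chain length follows directly from compounded per-cycle losses: even a high, fixed coupling efficiency decays geometrically with the number of couplings. A minimal sketch, assuming an illustrative 99% efficiency per coupling:

```python
def overall_yield(n_couplings, per_step=0.99):
    """Crude overall yield of full-length peptide after n couplings,
    assuming the same fractional efficiency at every step."""
    return per_step ** n_couplings

# A 20-mer needs 19 couplings; a 100-mer needs 99.
print(round(overall_yield(19) * 100, 1))   # ~83% full-length product
print(round(overall_yield(99) * 100, 1))   # ~37% full-length product
```

At 99% per step a 20-mer still comes out at roughly four-fifths full-length material, while a 100-mer drops to around a third, which is why long syntheses demand near-quantitative coupling at every cycle.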
After the synthesis of the required peptide, the next step is its purification. Different chromatographic techniques are employed in the purification step, such as preparative reversed-phase high-performance liquid chromatography (RP-HPLC), ion-exchange chromatography (IEC), size-exclusion chromatography (SEC) and affinity chromatography.[3] The first step in the purification of any peptide is to define the optimal conditions of the process. Specific information about the peptide and related topics should be gathered: the intended use of the final product, the availability of the starting material and its handling, the contaminants that have to be removed completely, the level of purity required in the final product, and the equipment and resources available.[4] The time frame for the work and the economic constraints should also be defined before finalizing the purification layout.[5] Information regarding purity and contaminants will help in designing the purification protocol. Purification of proteins is a complex procedure; most of the time it is not a one-step process. Depending on the level of purity required, the purification of a peptide is designed in a multi-step manner: the initial steps remove impurities that are easily detectable and separable, and the final step removes contaminants that are nearly identical to the target peptide in size and composition.[4] Each added step leads to some loss of the final product. Better purity might be achieved with more steps, but this results in lower yields because of the loss associated with each step. Hence it becomes necessary to use the fewest, optimal number of steps to get a good yield of the final product with an acceptable level of purity. The different chromatographic techniques must be arranged in a logical sequence so as to avoid the need for additional processing steps and to keep the number of steps to a minimum.
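The yield-versus-steps trade-off described above is multiplicative: the overall recovery is the product of the per-step recoveries, so every extra step costs yield. A small sketch with assumed, purely illustrative recoveries:

```python
def purification_yield(step_recoveries):
    """Overall recovery across a chromatographic sequence; each step's
    fractional recovery compounds multiplicatively."""
    total = 1.0
    for recovery in step_recoveries:
        total *= recovery
    return total

# Assumed recoveries for a capture -> intermediate -> polishing train,
# and the same train with one extra step added:
three_steps = purification_yield([0.90, 0.85, 0.80])
four_steps = purification_yield([0.90, 0.85, 0.80, 0.85])
print(round(three_steps, 3), round(four_steps, 3))
```

With these assumed numbers, the three-step train recovers about 61% of the product, and adding a fourth step at 85% recovery drops that to about 52%, which is the quantitative reason for keeping the number of steps minimal.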
The purification sequence can be broadly divided into three stages. The first stage is called capture; its main aim is to isolate, concentrate and stabilize the target peptide.[6] At this step one cannot expect to obtain a highly pure target protein as the outcome; the goal is to separate the target peptide from contaminants that are very dissimilar to it, and from impurities that might destabilize it [6]. RP-HPLC is a good technique for isolating the protein, but clogging may occur because of the large impurities present in the crude sample, so it is not recommended here. Techniques widely used at this stage are ion-exchange chromatography and hydrophobic interaction chromatography [6]. The second stage is the intermediate stage, in which the major bulk impurities are removed. The resolution of the separation must be somewhat higher, because the contaminants now more or less resemble the target peptide in structural properties; RP-HPLC is a suitable technique at this stage of purification. The final stage is the polishing stage [6]. Its aim is to remove trace impurities that are very similar to the target peptide in structure and are difficult to separate with a low-resolution technique, so as to obtain essentially 100% pure target peptide. Size-exclusion chromatography may be used for this stage, but for separating structural variants that differ only slightly from the target peptide, RP-HPLC is preferred owing to its excellent resolving power.[6]
Figure 7.4.2 Interaction of a solute with a typical reversed-phase medium. Water adjacent to hydrophobic regions is postulated to be more highly ordered than the bulk water. Part of this ‘structured’ water is displaced when the hydrophobic regions interact, leading to an increase in the overall entropy of the system.
(teachline.ls.huji.ac.il/72682/Booklets/AmershamRPCManual.pdf)
Large-scale preparative purification of synthetic peptides requires both high-resolution separation and a technique that can be scaled up. When RP-HPLC is used, the purification of the synthetic peptide is optimized on a small-particle reversed-phase medium and then scaled up by increasing the particle size while keeping the selectivity the same.
A recently proposed chromatographic technique is the novel mixed-mode reversed-phase/weak-anion-exchange type stationary phase. The separation material contains two distinct binding domains in a single chromatographic ligand: first, a lipophilic alkyl chain for hydrophobic interactions, similar to what we see in reversed-phase chromatography, and second, a cationic site for anion-exchange interaction with an oppositely charged solute.[7] The purified final product was analyzed by HPLC-UV and HPLC-ESI-MS, and it was found that all the major impurities had been removed in a single run employing the HPLC/WAX stationary phase. In comparison with RP-HPLC this technique has improved productivity: the yield of pure peptide per run using the HPLC/WAX method was found to be 15 times higher than with standard gradient-elution RP purification.[7]
Polymeric particles have long been used in the purification of biomolecules. These are generally semi-rigid porous particles with limited mechanical stability, used under low-pressure, low-flow conditions. This limits their use to the capture phase of purification and makes them unsuitable for the high-pressure polishing stage of the purification protocol. For large-scale preparative and process-scale purification of synthetic peptides, new polymers for the stationary phase have been developed, among them rigid copolymers of styrene and divinylbenzene.[8] These polymeric particles have high mechanical stability and can operate at high linear velocities. The pore size and morphology are optimized to allow unhindered solute diffusion and to provide maximum surface area for loading capacity; a pore size of 100 Angstrom has been developed for the purification of synthetic peptides. The stability of the polymers enables cleaning with sodium hydroxide without particle deterioration or loss of selectivity.[8]
One of the main problems encountered in the purification of synthetic peptides is the separation of contaminants that are similar in shape and structural properties to the target peptide. These impurities elute close to the target peptide, which makes them difficult to separate. This problem can be addressed with fluorous-based separation.[9] Two different approaches can be followed: either the final target peptide is tagged with an appropriate fluorous protecting group, or the tag is applied to the impurities that resemble the target peptide or to the intermediate unreacted products. With affinity chromatography, the fluorous-tagged entity can then be easily separated from the rest. It has been demonstrated that fluorous-tagged impurities can be readily separated from the target peptide using fluorous HPLC.[9]
RP-HPLC is the technique mainly used for the purification of synthetic peptides. Its main advantage is that it can be used to separate proteins available in microquantities and, at the same time, for large-scale purification of peptides. It is also generally coupled with other chromatographic techniques in a logical sequence to obtain a good yield of the final product within a suitable time frame. Biomolecules with hydrophobic character can be easily separated by reversed-phase chromatography with excellent recovery and resolution. There are, however, some problems associated with the use of RPC, one of the main ones being the build-up of contaminants in the reversed-phase columns. Contaminants with little retention are eluted in the void volume [10]; these undesired impurities may be interpreted by the detector as chromatographic peaks, baseline upsets or negative peaks. If contaminants are retained on the column and the mobile phase is not strong enough to elute them, they start to accumulate near the column head after many injections. If the contaminant build-up in an RP column is too great, the contaminants begin to act like a stationary phase, and analytes interact with this modified phase to give an altered separation pattern: retention times shift and tailing occurs [10]. With too much contamination in the column, the pump pressure rises to a high level and can cause the column bed to settle and create a void. The best way to prevent this is to apply specific washing procedures after a number of runs on the column.[10]
Perhaps the area most affected by this technology is the biopharmaceutical sector, in which it is of the utmost importance that peptides used as therapeutic agents are free of any sort of impurity, since impurities might lead to deterioration of the therapeutic agent. Apart from its applications in the medical field, the technology has wide-ranging uses in other fields as well.
Affinity chromatography has gained popularity in recent years because of its high specificity in separating the target peptide. Selection of the peptide ligand is the most important step and can enable separation of the target molecule in a single step; any alteration in the ligand can lead to unwanted results. It is therefore necessary to obtain a pure sample of the peptide ligand, which can be done using the chromatographic purification techniques described above. For example, PY574 alpha is an 18-amino-acid peptide constituting part of the intracellular domain of the PDGF α-receptor. This peptide was synthesized and used as a ligand for the subsequent affinity purification of signal transduction proteins from cell lysates. The purification was achieved in a single step using a reversed-phase chromatographic technique. [12]
Another application in which a synthetic peptide was used as a ligand was the one-step affinity purification of the recombinant urokinase-type plasminogen activator receptor. In this experiment the high-affinity synthetic peptide antagonist (AE152) was synthesized and purified for use as a ligand in affinity purification.[11]
7.4.5 Relevant web sites
Applied Biosystems.
http://products.appliedbiosystems.com
Viscotek
http://viscotek.com
7.4.7 References
In the field of molecular biology, the great discovery of the structure of DNA by Watson and Crick revealed many of the secrets of biological systems, up to and including the sequencing of genes. Gene sequencing is very helpful for deducing RNA and protein sequences, because the final product of gene expression is protein. The RNA formed in transcription does not translate directly into the amino acid sequence implied by the raw gene sequence: in post-transcriptional modification some nucleotides (introns) are spliced out, and only the useful segments are joined together to encode the protein. A large amount of sequence data is now available because of the Human Genome Project, but with imperfect decoding techniques we are still unable to identify gene sequences perfectly. It is very difficult to find a gene whose sequence, free of intron nucleotides, directly yields the protein. One idea for solving this is to find all the factors involved in post-transcriptional modification in vitro, but it is too hard to decode such a large number of searched sequences. Using DNA and RNA sequences directly to cure disease is a tedious and time-consuming approach.
Decoding proteins from their amino acid sequences and working with whole collections of small peptides is more convenient and easy. The use of different specific, known protein fragments gives us reliable results. “Peptide library is a systematic combination of different peptides in large number. It is a powerful tool for drug discovery, structural studies and other applications. Solid phase peptide synthesis, along with other methods, has been successfully used to prepare peptide libraries.”[1] The basic principle of the peptide library is the expression of libraries of peptides in mammalian cells to select for trans-dominant effects on intracellular signalling systems.[3] In other words, a peptide library uses peptides to find the signalling pathways of the cell: peptides designed to our expectations interfere with intracellular signalling, and the interference is read out by the reagents we use. Unravelling these signalling pathways helps in finding and developing drugs for the disease in question. When a peptide library is carried into the cell, the small size of the peptides allows us to screen for those that bind to intracellular or surface proteins. Because the peptides act as affinity reagents, we can isolate their target protein complexes and work out their mechanisms, creating a therapeutic model for drug discovery. [2]
Fig-1 general mechanism of peptide library
Source: http://www.stanford.edu/group/nolan/library_systems/peptide_systems.html
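The scale of a "systematic combination of different peptides" grows exponentially with peptide length, which is part of what makes such libraries powerful. A small sketch of the counting, assuming a fully randomised library built from the 20 proteinogenic amino acids at every position:

```python
def library_size(length, alphabet=20):
    """Number of distinct sequences in a fully randomised peptide
    library of the given length (default: 20 proteinogenic amino acids)."""
    return alphabet ** length

# A fully randomised hexapeptide library:
print(library_size(6))  # 64,000,000 distinct sequences
```

Even a modest hexapeptide library spans 64 million distinct sequences, so display and selection techniques, rather than exhaustive synthesis, are what make screening such diversity practical.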
These antibodies block the cocaine molecule, so cocaine is unable to bind agonistically to the nerve receptor and there is no psychological effect on the central nervous system. Three approaches have been used to develop such a system: “(1) those compounds that can be used in a substitution-based treatment as a cross-tolerant stimulant; (2) medications that serve as antagonists by blocking the binding of cocaine to its cognate receptors; and (3) compounds that function by acting at other sites distinct from the cocaine site of action but functionally antagonize the effects of cocaine.” Several biopsychological models have been proposed and evaluated to address addiction and relapse prevention. [5] “Unquestionably, an improved pharmacotherapy would increase the effectiveness of such programs and alternative strategies for treating cocaine addiction are needed if progress is to be made.”[4] Among these approaches, the second is the best suited to countering the effect of cocaine on the central nervous system.
(2) Epitope mapping using mRNA display:
To map an epitope, i.e. to discover the specific part of an antigen recognised by an
antibody, we depend on the antibody itself, in a protein-protein interaction process;
the antibody is used as a molecular reagent. The traditional method depends on the
expression of a series of overlapping polypeptides, whose reactivity with the antibody
probe pinpoints the correct sequence. That method is tedious, time consuming and
costly.
In the display technique, each expression vector carries multiple copies of a single
polypeptide sequence on its surface. Active peptides are recovered on the basis of
their affinity, either by fluorescence-activated cell sorting or by biopanning. In short,
the peptide is fused to its mRNA at the 3' end with the help of a tethered puromycin
moiety; the selected fusions are then amplified by RT-PCR for sequencing. [6]
Fig-2 Outline of the procedure for epitope mapping using mRNA display
Source: http://peds.oxfordjournals.org/cgi/content/full/18/7/309
(3) Molecular cloning and expression of an insect FMRFamide receptor:
FMRFamide and related compounds are found widely in the nervous systems of
invertebrates, where they serve important functions. Chinese hamster ovary (CHO)
cells were used to express a cloned Drosophila orphan receptor, and screening with
a peptide library disclosed that the receptor reacted with FMRFamide with very high
affinity.
The library, comprising Drosophila and other insect peptides and many peptide
hormones, was then tested on the receptor. "Addition of 10^-5 or 10^-6 M of these
peptides to the pre-treated CHO cells gave negative results for many of these
peptides, but peptides resembling FMRF amide at their C termini, and FMRF amide
itself, gave clear bioluminescence responses", FMRFamide being the most potent
peptide in inducing the bioluminescence response. [7] For the testing, eight peptides
related to the Drosophila FMRFamide peptide were prepared; these peptides are
assumed to occur in the preprohormone. After all the peptides were tested,
FMRFamide-6 was found to be the most potent intrinsic peptide. [7]
7.5.3 Advantages of the peptide library
There are many advantages of the peptide library technique, the main ones being
the following:
(1) Genetic engineering can identify the gene coding for a surface protein, but that
gene codes for the surface protein as a whole, so the peptide obtained is random.
The main advantage of a peptide library is the specific affinity of each peptide for its
target protein. [8]
(2) Display avoids the problems of cytotoxicity, soluble protein expression and
secretion bias in cell-based systems; it could be an ideal means by which to display
functional (single-chain) proteins for applications such as target discovery and
functional identification. [9]
(4) The library offers the advantage of multimeric binding (several hundred copies of
the peptide are displayed on each phage), and the peptide sequences identified from
the binding phages often reveal sequence themes that make it easier to narrow in on
the consensus sequences that bind your target. [11]
(5) “By delivering libraries of peptides to cells we can scan for rare peptides that bind
to and induce function of intracellular signalling proteins and surface molecules.
Using these peptides as affinity reagents we can isolate their target protein
complexes and discern mechanism. Finally, we are using them to create therapeutic
modalities from either their targets, the peptides themselves, or by some better
understanding of the biological process that is uncovered.” [2]
The main limitation of the peptide library technique arises in epitope discovery.
When searching for antigens for diagnostic purposes, the microbial sequence
databases are incomplete, so a similarity search will fail to notice any antigens not
represented in the available databases. [13] "The use of longer peptides cannot
overcome a primary limitation of peptide libraries, the inability to represent
conformational epitopes comprising amino acids from distant positions along the
polypeptide sequence."[14] In random peptide display, identifying the actual epitope
is time consuming, because three or more rounds of panning are required to enrich
specific phage clones, obtain a consensus amino acid sequence and find the epitope;
this can take 3-4 weeks, and the technique is costly and labour intensive.
"Additionally, a random peptide library, despite a theoretical size of
>10^12 peptides, may not include all possible amino acid combinations and therefore
may not contain specific binding-peptide motifs."[15]
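The coverage problem in that quotation can be made concrete with a simple calculation (illustrative, not from the source): the number of distinct peptides of length n built from the 20 standard amino acids is 20^n, so even a library of ~10^12 clones cannot represent every 10-mer or 12-mer.

```python
# Illustrative calculation: how quickly random-peptide sequence space
# outgrows a library of ~10^12 clones.
AMINO_ACIDS = 20  # standard proteinogenic amino acids

def sequence_space(length):
    """Number of distinct peptides of the given length."""
    return AMINO_ACIDS ** length

library_size = 10 ** 12
for n in (6, 8, 10, 12):
    total = sequence_space(n)
    covered = min(1.0, library_size / total)
    print(f"{n}-mers: {total:.2e} possible, "
          f"library covers at most {covered:.2%}")
```

For 6-mers and 8-mers full coverage is possible in principle, but a 10^12-clone library can hold under 10% of all 10-mers and well under 0.1% of all 12-mers, which is why specific binding motifs may simply be absent.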
Another limitation of the peptide library technique is the folding of engineered
proteins into their natural form. [16] Owing to poor sequence diversity, some shapes
may be incompatible with the target; this incompatibility arises from electrostatic
charge repulsion, so such peptides are never selected against their target protein
complex. [17] "One limitation of synthetic peptide libraries is the lack of specific
criteria for deciding whether a detected binding activity is genuine or is due to
non-specific interactions."[18] A further problem is the coupling of amino acid
mixtures during the preparation of a peptide library: the coupled di-, tri- and
tetrapeptides are not recovered in the proportions in which they were initially
formed. [19]
Phage presentation of a peptide library is a powerful tool for identifying compounds
that recognise and bind a given target. Phage systems are used to detect
protein-ligand interactions and to improve binding affinity. The variety of display
methodologies that exist offers a range of capabilities for panning for affinity
interactions. Display systems also provide an opportunity to address the issue of
expression bias and to explore the possibility of creating low-molecular-weight
compounds with drug-like properties. Phage display with peptide libraries may also
yield biomarkers for diseases such as cancer, cardiovascular disease and
angiogenesis. "The peptides are capable of homing to specific pathways and targets
within those pathways. By generating the proteins expressed from cDNA libraries,
display methodologies can provide a direct link between phenotype and
genotype."[21]
Protein-protein interaction is the major reaction in cellular signal transduction, and
we want to determine how proteins interact with signalling complexes to produce a
signal. Several peptide libraries have been developed for the study of
protein-protein interactions; these approaches all rely on the peptide, or the DNA
sequence encoding it, to identify binding motifs. One is the oriented peptide array
library (OPAL) approach, which facilitates high-throughput proteomic investigation.
"OPAL integrates the principles of both the oriented peptide libraries and array
technologies. Hundreds of pools of oriented peptide libraries are synthesized as
amino acid scan arrays. We demonstrate that these arrays can be used to map the
specificities of a variety of interactions, including antibodies, protein domains such
as Src homology 2 domains, and protein kinases."[22]
(4) Exploring biochemistry and cellular biology with protein libraries:
Protein libraries probe a broad network of essential enzyme and binding-protein
specificities. In addition to establishing rules for molecular-level recognition, the
binding preferences and tolerances revealed by such libraries can illuminate the
mechanisms of biochemical and cellular processes. Peptides obtained from protein
libraries are also useful as leads for pharmaceutical compounds and even as
reagents for further discoveries in cell biology. [23]
(1) http://www.stanford.edu/group/nolan/library_systems/library_systems.html
(2) http://www.ncbi.nlm.nih.gov
(3) http://peds.oxfordjournals.org/cgi/content/full/19/5/211
(4) http://www.healthtech.com/2003/pgd/index.asp
(5) http://www.uvminnovations.com/biomed.html
Website: http://www.auspep.com.au/products_services/custom%20services.htm
Website: http://www.albachem.co.uk
(3) Genzyme Pharmaceuticals, 675 West Kendall Street, Cambridge, MA 02142, USA Tel:
617-374-7248, Fax: 617-768-6433
Website: http://www.genzymepharmaceuticals.com
7.5.7 References
[1] Peptide library, Princeton Biomolecules Corporation.
http://www.pbcpeptide.com/Peptide%20Library.htm access date (23/09/06)
[2]http://www.stanford.edu/group/nolan/library_systems/library_systems.html access
date (23/09/06)
[3] Dominant effector genetics in mammalian cells, Rigel, Inc., San Francisco,
California, USA.
http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&list_uids=
11137994&dopt=Abstract
[4] Dickerson, T.J., Janda, K.D., Recent Advances for the Treatment of Cocaine
Abuse: Central Nervous System Immunopharmacotherapy. AAPS Journal.
2005; 07(03): E579-E586. DOI: 10.1208/aapsj070359
[5] Hoffman, J.A., Caudill, B.D., Koman, J.J. III, Luckey, J.W., Flynn, P.M.,
Hubbard, R.L. Comparative cocaine abuse treatment strategies: enhancing client
retention and treatment exposure. J Addict Dis. 1994; 13:115-128.
PubMed DOI: 10.1300/J069v13n04_01
[6]William, W.J., Olsen, B.N., Roberts, R.W., Epitope mapping using mRNA display
and a unidirectional nested deletion library
http://peds.oxfordjournals.org/cgi/content/full/18/7/309 access date(23/09/06)
[9] He, M., Taussig, M. J., Ribosome displays: Cell-free protein display technology,
http://www.discerna.co.uk/technique%20review.pdf access date (24/09/06)
[10] Murray, A., Smith, R.G., Brady, K., Williams, S., Badley, R.A., Price, M.R.
Cancer Research Laboratories, University of Nottingham, Nottingham, NG7 2RD,
United Kingdom. Generation and refinement of peptide mimetic ligands for
paratope-specific purification of monoclonal antibodies.
http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&list_uids=
11520027&dopt=Abstract , Access date (24/09/06)
[13] Hamby, C. V., Llibre, M., Utpat, S. Wormser, G. P., Use of Peptide Library
Screening To Detect a Previously Unknown Linear Diagnostic Epitope: Proof of
Principle by Use of Lyme Disease Sera, Department of Microbiology and
Immunology, Division of Infectious Diseases, Department of Medicine of New York
Medical College, Valhalla, New York, http://cvi.asm.org/cgi/content/full/12/7/801
Access date (25/09/06)
[16] Park, S., Xu, Y., Stowell, X. F., Gai, F., Saven, J. G. , Boder , E. T., Limitations of
yeast surface display in engineering proteins of high thermostability, Department of
Chemistry, University of Pennsylvania, Philadelphia, PA 19104, Department of
Chemical and Biomolecular Engineering, University of Pennsylvania, Philadelphia, PA
19104 and Laboratory for Research on the Structure of Matter, University of
Pennsylvania, Philadelphia, PA 19104, USA Present address: Department of
Biology, Massachusetts Institute of Technology, Cambridge, MA 02139, USA,
http://peds.oxfordjournals.org/cgi/content/full/19/5/211 Access date (25/09/06)
[17] Palmer, S. J., Redfern, M. R., Smith, G.C., Cox, J. P. L, Sticky Egyptians: a
technique for assembling genes encoding constrained peptides of variable length,
Department of Chemistry and Department of Mathematical Sciences, University of
Bath, Bath BA2 7AY, UK, http://nar.oxfordjournals.org/cgi/content/full/26/11/2560
Access date (25/09/06)
[19] Boutin, J.A., Gesson, Henlin, J.M., Bertin, S., Lambert, P.H., Volland, J.P.,
Fauchère, J.L., Limitations of the coupling of amino acid mixtures for the preparation
of equimolar peptide libraries, Department of Peptide and Combinatorial Chemistry
and Department of Analytical and Physical Chemistry, Institut de Recherches
SERVIER, 11 Rue des Moulineaux, F92150 Suresnes, France.
http://www.5z.com/moldiv/volume3/pp43- 60.pdf Access date (25/09/06)
[20] Liu, R, Enstrom, A.M., Lam, K.S. Combinatorial peptide library methods for
immunobiology research, UC Davis Cancer Centre, Division of
Haematology/Oncology, and Department of Internal Medicine, University of California
Davis, Sacramento, CA, USA.
http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&dopt=Abstract&list_uids=12543103 Access date
(25/09/06)
[23] Diaz, J.E., Howard, B.E., Neubauer, M.S., Olszewski, A., Weiss, G.A., Exploring
Biochemistry and Cellular Biology with Protein Libraries, Department of Chemistry
and Department of Molecular Biology and Biochemistry, University of California,
Irvine, CA 92697-2025, USA. http://chem.ps.uci.edu/~gweiss/Diaz-CIMB-03.pdf
Access date (25/09/06)
Chapter 7.6 Applications of Synthetic Peptides
Shruti Saptarshi
7.6.1 Introduction
Proteins are complex organic molecules made up of carbon, nitrogen,
oxygen, hydrogen and sulphur. These are abundantly found in nature in the form of
enzymes, hormones, antibodies, and thus play an important role in the overall
metabolism of living organisms. Proteins are made up of twenty different types of
basic structural units called the amino acids. The amino acids found in all the living
beings are the same. Every amino acid consists of a central (usually chiral) carbon
atom, an amino group, a carboxyl group and a variable, reactive R group. A
condensation reaction between the carboxyl group of one amino acid and the amino
group of another leads to the formation of a peptide bond; two amino acids linked by
a peptide bond form a dipeptide. Peptides are essentially small chains of up to about
twenty such α-amino acids linked by peptide bonds, and are in effect small, functional
fragments of a protein molecule. Under natural circumstances peptides are
synthesized by the process of mRNA translation.
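The condensation described above releases one water molecule per peptide bond formed, so a peptide's mass can be computed from the masses of its free amino acids. A minimal sketch (the mass table is a small illustrative subset, not a complete or authoritative dataset):

```python
# Mass bookkeeping behind peptide-bond condensation: each bond formed
# releases one water molecule (~18.02 Da). Average masses of a few free
# amino acids only, for illustration.
WATER = 18.02
FREE_AA_MASS = {  # average masses of the free amino acids, in daltons
    "G": 75.07, "A": 89.09, "F": 165.19, "M": 149.21, "R": 174.20,
}

def peptide_mass(sequence):
    """Average mass of a linear peptide: sum of the free amino acid
    masses minus one water per peptide bond formed."""
    bonds = len(sequence) - 1
    return sum(FREE_AA_MASS[aa] for aa in sequence) - bonds * WATER

# A glycine dipeptide: 75.07 + 75.07 - 18.02 = 132.12 Da
print(round(peptide_mass("GG"), 2))
```

The same subtraction generalises to chains of any length: an n-residue peptide loses (n - 1) waters relative to its free amino acids.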
As the name suggests, solid phase peptide synthesis (SPPS) involves synthesizing
a peptide while it is still attached to a solid phase: an insoluble polymer, usually a
resin, contained in a column. Unlike natural synthesis, which begins at the amino
terminal, artificial synthesis proceeds from the carboxy terminal.
Fig 7.6.1 Schematic representation of peptide synthesis.
Source: http://courses.cm.utexas.edu/emarcotte/ch339k/fall2005/Lecture-Ch3-
2/Slide22.JPG
The first step involves coupling of the first amino acid to the ligands present on the
resin, made possible by means of chloromethyl linkers. The main criterion of this
step is to achieve a stable bond between the amino acid and the polymer support.
An alternative resin now in common use is PAM (phenylacetamidomethyl) resin,
which forms a much more stable bond between the amino acid and the resin than
the resins traditionally used.
Once the amino acid is firmly attached to the polymer support, its reactive
functional group and its R group have to be protected so as to prevent the formation
of complex secondary structures. Typical labile groups used for protecting the
alpha-amino group include t-Boc (tert-butyloxycarbonyl) and Fmoc
(9-fluorenylmethyloxycarbonyl). Another important characteristic of these groups is
easy removal, so that a new amino acid can be added: t-Boc can be removed with
TFA (trifluoroacetic acid) in dichloromethane, whereas Fmoc is removed with
concentrated amine solutions such as piperidine in N-methylpyrrolidone. The labile
groups used to protect the R groups remain attached during the entire process of
synthesis.
For the formation of peptide bonds during elongation of the peptide, it is essential to
activate the carboxyl group of the incoming amino acid. This is achieved using
symmetric anhydrides, carbodiimides, or phosphonium and uronium salts. The
deprotection-coupling cycle is repeated until the desired peptide length is achieved.
Once ready, the peptide is cleaved from the polymer support to produce a free
peptide, which is then subjected to characterization and purification by techniques
such as reverse phase HPLC, mass spectrometry and ion exchange
chromatography. The newly synthesized peptide is then lyophilized.
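The deprotect-couple-repeat cycle described above can be summarised as a short sketch. This is schematic only: the function names are illustrative stand-ins, not a real chemistry API, and the key point encoded is that synthesis runs C to N, so the target sequence is assembled from its last residue first.

```python
# Schematic of the SPPS cycle (illustrative only; the helper names are
# hypothetical). Synthesis proceeds C -> N, so the chain is built from
# the C-terminal residue, which is anchored to the resin.

def synthesize(target_nc):
    """Assemble a peptide on resin; `target_nc` is written N->C as usual."""
    chain = []  # growing chain, anchored to the resin by its C-terminus
    for aa in reversed(target_nc):             # C-terminal residue goes on first
        deprotect_alpha_amino(chain)           # remove t-Boc/Fmoc from chain end
        chain.append(activate_and_couple(aa))  # activated carboxyl forms the bond
    remove_side_chain_protection(chain)        # R-group protection comes off last
    return cleave_from_resin(chain)

# Minimal stand-ins so the sketch runs:
def deprotect_alpha_amino(chain): pass
def activate_and_couple(aa): return aa
def remove_side_chain_protection(chain): pass
def cleave_from_resin(chain): return "".join(reversed(chain))  # report N->C

print(synthesize("FMRF"))  # -> "FMRF"
```

The loop body is exactly the repeated step of the protocol: deprotect the alpha-amino group at the end of the chain, then couple the next activated amino acid; cleavage and side-chain deprotection happen once, at the end.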
Peptide sequences are specific, and this makes each peptide unique with regard to
its chemical and physical properties. Peptides are manufactured for myriad reasons:
being specific in nature, they find wide use in therapeutics, protein engineering,
structure-function studies and immunodiagnostics. Novel molecules such as
oligonucleotide-peptides, PNAs (peptide nucleic acids), peptoids, mimetics, MAPs
and antibodies are all essentially derived from peptides.
7.6.4 Applications
PEPTOIDS
Yet another application being explored for its potential clinical benefit is the use of
peptide mimetics as vaccines for cancer. Cancer vaccines work by inducing an
immune response against a tumour antigen, by means of whole tumour cell
vaccines, dendritic cell vaccines, adjuvant vaccines, DNA vaccines, viral vector
vaccines and so on. Most tumour antigens exhibited by tumour cells are also
expressed by normal cells, so it is difficult to induce a strong humoral immune
response against these self antigens. Peptide mimetics are artificially designed,
specific peptide-like molecules; if such antigenic determinants mimicking the tumour
antigens are introduced into the body, it becomes possible to induce a stronger
immune response. However, the main ambiguity associated with this form of
immunotherapy is the risk of initiating an autoimmune response. A recent report
stated that immunizing mice with a peptide that mimics MUC1 generated a strong
cellular and protective response against MUC1 as well as lysis of human breast
cancer cell lines. [6]
This idea is still in its infancy, and many clinical trials will be needed to devise a
strategy useful in humans. Peptoids are also being characterized for their
antimicrobial properties: one research paper reported the antibiotic activity of the
peptoid CHIR29498 and some of its analogues against a host of gram-positive and
gram-negative bacteria, with destruction of the bacterial cell wall found to be the
peptoid's mode of action. [7]
Peptide nucleic acids are achiral DNA/RNA analogues. In place of the
sugar-phosphate backbone of the nucleic acids, they contain a pseudopeptide
backbone of N-(2-aminoethyl)glycine units; this skeleton is also uncharged. PNAs
have a unique ability to bind to DNA as well as RNA, forming hybrids that are much
more stable. They also exhibit complementarity like their natural counterparts,
obeying the Watson-Crick hydrogen bonding scheme, and they are synthesized by
solid phase peptide synthesis. Moreover, peptide nucleic acids are relatively
resistant to attack by proteases and possess high thermal stability over a range of
ionic strengths. Hence they find wide application in the fields of chemistry, medicine
and genetic diagnostics. PNAs are used extensively in antigene and antisense
studies, since they can inhibit transcription and translation of the genes towards
which they are targeted, and they can be used as markers in DNA mapping projects.
PNAs labelled with biotin, fluorescent dyes or reporter enzymes are commonly used
in hybridization experiments such as DNA arrays, Northern and Southern blots, and
detection of point mutations. PNAs are now also used as tools for biosensors that
detect specific DNA sequences in test samples: a single-stranded nucleic acid probe
is immobilized on an optical transducer over which the sample is passed, and any
mismatch is conveyed in the form of an electrical signal, which is then detected.
Thus PNAs can be used in place of synthetic DNA or RNA, but with the added
benefits mentioned above.
PEPTIDES AS VACCINES:
One of the most critical goals of vaccine development is to trim structurally complex
molecules down to smaller ones that offer high resolution, can be rapidly modified,
and exhibit specificity in their antigenic properties. Synthetic peptides are hence an
excellent option, and they are commonly used as immunizing agents; they are
among the simplest alternatives for vaccine development. The synthetic methods
described above have made it possible to synthesize peptides corresponding to
specific antigenic epitopes present on an infectious agent. Currently, much effort is
being put into finding peptide-based vaccines against cancer and AIDS. One report
described the synthesis of conformationally constrained peptides that bind tightly to
the 2F5 monoclonal antibody (specific for a recognition epitope on the HIV envelope
glycoprotein gp41). Such a peptide, conjugated with a carrier protein and used as a
vaccine, could elicit a neutralizing immune response against HIV, an idea with
potential therapeutic benefits. [8]
Synthetic peptides are also now used as reagents for immunodiagnosis, as
inhibitors, and for protein engineering purposes, and their application in
structure-function studies is of great importance.
• Peptide Synthesis
www.learner.ccf.org/services/molecbiotech/peptide.php
• Biomimetic oligomers
http://barronlab.chem-eng.northwestern.edu/Project1.html
• http://www.chem.qmul.ac.uk/iubmb/misc/phorm.html
7.6.6 Key Industry Suppliers
Key industrial suppliers of synthetic peptides in their various forms include the
following:
• United Biomedical Inc.
• GL Biochem (Shanghai) Ltd.
• Invitrogen Corporation
• CSBio Inc.
• Biopeptide Co., Inc.
• EZBiolab Inc.
7.6.7 References
7.7.1 Introduction
DNA and RNA are the two forms of genetic materials present in nature from long
ago. Scientists are trying to synthesis DNA and RNA in lab and have got a little
success too. But now they have discovered a new form of Nucleic acid called as
Peptide Nucleic Acid (PNA).
PNA, the peptide nucleic acid, was first synthesized by Nielsen, Egholm, Berg and
Buchardt in 1991. DNA and RNA consist of a sugar-phosphate backbone with
nitrogenous bases attached; PNA is an artificially synthesized analogue of DNA in
which the sugar-phosphate backbone is replaced by a pseudopeptide, a chain of
repeated N-(2-aminoethyl)glycine units linked by peptide bonds, to which the four
bases found in DNA (adenine, guanine, thymine or cytosine) are attached. Like a
protein, PNA has amino and carboxyl termini. By convention, PNA sequences are
written from the N terminus, which corresponds to the 3' end of DNA or RNA: PNA
binds complementary single-stranded DNA in the antiparallel orientation, with its N
terminus facing the 3' end of the DNA strand. A unique feature of PNA is its
resistance to nucleases and proteases. [4]
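The base-pairing orientation just described can be sketched in a few lines. This is an illustrative helper (not from the source), assuming the preferred antiparallel binding mode in which the PNA's N terminus faces the 3' end of the DNA strand:

```python
# Sketch: which single-stranded DNA a PNA oligomer targets, assuming the
# antiparallel orientation (PNA N-terminus facing the DNA 3' end).
# PNA sequences are written N -> C; DNA is reported 5' -> 3'.
COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def dna_target(pna_n_to_c):
    """Complementary DNA strand for a PNA oligomer, written 5' -> 3'."""
    paired_3_to_5 = [COMPLEMENT[b] for b in pna_n_to_c]  # N end pairs the 3' end
    return "".join(reversed(paired_3_to_5))              # flip to 5' -> 3'

print(dna_target("TGCAT"))  # -> "ATGCA"
```

Because the pairing is antiparallel and strictly Watson-Crick, the target works out to the familiar reverse complement, exactly as it would for a DNA probe.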
Fig: PNA with a peptide backbone (source: http://en.wikipedia.org/wiki/PNA).
Fig: Comparison of DNA and PNA (source: http://www.biosyn.com/new_popup.htm).
PNA has attracted major attention in chemistry and biology because of its attractive
chemical, physical and biological properties and its potential as an active component
of diagnostic and pharmaceutical applications. PNA behaves like DNA in that it binds
complementary nucleic acid strands; but because its backbone is not a charged
sugar-phosphate chain, binding between PNA and DNA is stronger than between
DNA and DNA, with better specificity. PNAs are also resistant to hydrolytic cleavage
and so are not easily degraded inside the living cell. PNA is capable of
sequence-specific recognition of DNA and RNA, obeying the Watson-Crick
hydrogen bonding scheme; the hybrids formed have very high thermal stability and
unique ionic-strength effects. PNA can also form a stable PNA-DNA-PNA triplex by
recognizing a homopurine duplex sequence in DNA and binding by strand invasion,
which loops out the displaced strand of the invaded duplex. [2] Experimentally, the
melting temperature (Tm) of a normal dA10-dT10 DNA hybrid is 23°C, while the Tm of
the corresponding DNA/PNA hybrid is 86°C. [1]
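Those Tm figures can be put in perspective with the Wallace rule, a standard rough estimate for short DNA duplexes (Tm ≈ 2°C per A/T pair + 4°C per G/C pair). The rule is blind to backbone chemistry, so while it lands near the measured DNA/DNA value, it says nothing about the PNA hybrid:

```python
# Wallace-rule estimate for short DNA oligonucleotide duplexes:
# Tm ~ 2*(A+T) + 4*(G+C), in degrees Celsius.
def wallace_tm(seq):
    seq = seq.upper()
    at = seq.count("A") + seq.count("T")
    gc = seq.count("G") + seq.count("C")
    return 2 * at + 4 * gc

print(wallace_tm("A" * 10))  # -> 20, close to the measured 23 C for dA10.dT10
```

The same dA10 strand hybridised to PNA melts at 86°C, some 60°C above any estimate a DNA-based rule can give; that gap is a direct measure of the extra stability contributed by the uncharged pseudopeptide backbone.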
PNA was originally designed to recognize double-stranded DNA. The aim was to
form oligonucleotide analogues that could base-pair with duplex DNA via Hoogsteen
base pairing in the major groove. PNA therefore retains the nitrogenous bases of
DNA, but the sugar backbone is replaced by a pseudopeptide; it was believed that if
this backbone were neutral in nature, it would improve the triplex-binding capability
of the ligand. A PNA designed in this way mimics a single-stranded nucleic acid.
Many structures were considered during the early stages, but after criteria such as
water solubility, rigidity and chemical accessibility were applied, the structure finally
accepted was that of PNA. [3]
It is thought that this novel molecule will be useful in many ways. PNA can be used
as a probe molecule because of its unique stability and other biochemical properties:
it has greater enzymatic and chemical stability than nucleic acids, yet still hybridizes
very well. Its unique molecular structure also offers good prospects for new methods
of detection. [5]
PNA is now being used as a gene-targeting drug: PNAs directed towards
double-stranded DNA exhibit antigene properties, whereas targeting of
single-stranded RNA leads to antisense effects. As a result, PNAs are used as
bactericidal antibiotics, for regulating splice-site selection, and as telomerase
inhibitors.
Because PNAs are neutral in nature, they cannot be delivered easily, as they carry
no charge. Many methods have been developed to overcome this problem:
incorporating positive charge into the PNA, using lysine and arginine residues, so
that it can be transported more easily; and attaching ligands, e.g. short peptide
sequences, antibodies or steroids, for delivery of the PNA. [14]
Antisense PNA targeting was used to identify a critical HIV gene segment within the
gag-pol encoding gene: translation of HIV gag-pol mRNA was disrupted by the
antisense targeting, stopping the production of virions from cells chronically infected
with HIV. [16]
Light-up probes have been developed using PNA: an asymmetric cyanine dye,
thiazole orange (TO), is attached to the PNA, and when the PNA binds its target
DNA the dye binds to the DNA and fluoresces. [17]
As mentioned earlier, the basic properties of PNA are much superior: specific
binding, a neutral backbone, protection against hydrolytic cleavage, and more. The
PNA-DNA duplex is more stable than the corresponding DNA-DNA or DNA-RNA
duplexes. [3] The D-loop formed during PNA strand invasion is used to study the
mechanism of transcription and to induce gene expression. PNAs are synthesized
in the laboratory, which is a big advantage, and are easily made by tBoc or Fmoc
chemistry. Hybrids of mRNA with PNA-DNA chimeras are recognized with high
efficiency by the enzyme RNase H, reflecting their ability to act as substrates for this
cellular enzyme, as DNA does. Further improvements are being made to PNA itself
to increase its efficiency. For example, researchers at an Oxford laboratory have
developed nucleo-amino acids derived from proline together with spacer amino
acids, which provide conformational constraint to the PNA. The spacer amino acid
at the N terminus provides selective binding to either RNA or DNA, and the chiral
PNA so formed can be used with quality control since it forms a single-enantiomer
product. The spacers also help to modify the properties of the PNA, for example by
adding charge to the backbone or increasing hydrophobicity. [15]
PNA has low cell-membrane permeability, which results in negligible intracellular
concentrations, and its poor water solubility reduces its bioavailability. Because they
are neutral, PNAs cannot be delivered by conventional cationic formulations such as
liposomes and microspheres. To increase uptake despite the low solubility, PNA is
used in covalently bound DNA-PNA hybrid form. PNAs are also expensive to
produce compared with other artificially synthesized peptides. [14]
PNA was originally designed as a ligand for the recognition of double-stranded
DNA. It was later found that peptide nucleic acids are very stable, specific in their
binding, and resistant to hydrolytic cleavage, and as a result they now have a broad
spectrum of uses. The specificity with which PNA binds a chosen target is of major
interest and is exploited in medical and biotechnological contexts. PNAs offer new
scope for the development of gene-therapeutic agents, diagnostic devices for
genetic analysis, and molecular tools for nucleic acid manipulation.
They are also used for antisense and antigene therapy in eukaryotic nerve cells,
even in the rat brain, and this activity has also been shown in E. coli. In the antigene
strategy, PNA binds complementary sequences in DNA and can thereby inhibit
transcription of the corresponding gene; in the antisense strategy, an analogue is
designed to recognize and bind complementary sequences in mRNA, thereby
inhibiting translation of that gene. [2]
Detection of SNPs: PNAs are now used to detect SNPs, which in turn are used in
the identification of neurodegenerative disease. In this detection method a
fluorescently labelled PNA probe is used together with the S1 nuclease enzyme and
an amplifying conjugated polymer (CP); the PNA hybridizes to DNA at the sequence
of interest and thus enables recognition of SNPs. [6]
Detection of transcription factors: a photo-functionalised PNA oligomer was
designed and used to detect transcription factors. [7]
Chromosomal identification: centromeric PNA probes specific to chromosomes 1, 4,
9, 16, X and Y are used to analyse oocytes, blastomeres and polar bodies, with
FISH as the identification method. [8]
PNA-directed genome rare cutting: an ordinary restriction enzyme can be converted
into an infrequent genome cutter using PARC, i.e. PNA-assisted rare cleavage,
based on the "Achilles' heel" cleavage strategy. A sequence-specific complex of
double-stranded genomic DNA and bis-PNA is treated with a DNA methyltransferase
(methylase); the few methylation sites masked by the bound bis-PNA are protected
from methylation. The bis-PNA is then removed and the sample is treated with a
restriction enzyme that recognizes the same sites as the methylase: it cannot cleave
the methylated sites, but it does recognize and cleave the sites that the bis-PNA
protected. Bis-PNAs in various combinations, with different methylation/restriction
enzyme pairs, generate a new class of genome rare cutters. [10]
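The protect-methylate-cut logic of PARC amounts to a set operation over recognition sites, and can be sketched as such. This is a toy model of the logic described above, not real enzymology; the EcoRI-like site and the example sequence are arbitrary illustrations.

```python
import re

# Toy model of PNA-assisted rare cleavage (PARC). An EcoRI-like site is
# chosen arbitrarily for illustration.
SITE = "GAATTC"

def parc_cut_positions(genome, pna_protected):
    """Positions of restriction sites that end up cut: every site is
    methylated (and so protected from the enzyme) EXCEPT those masked
    by bound bis-PNA during the methylation step."""
    all_sites = {m.start() for m in re.finditer(SITE, genome)}
    methylated = all_sites - pna_protected   # bis-PNA blocks the methylase here
    return sorted(all_sites - methylated)    # enzyme cuts only unmethylated sites

genome = "ccGAATTCttttGAATTCaaaaGAATTCgg".upper()
protected = {genome.find(SITE, 8)}  # bis-PNA sits over the middle site only
print(parc_cut_positions(genome, protected))  # -> [12]
```

Of three recognition sites, only the one the bis-PNA shielded from methylation is cleaved, which is exactly how a frequent cutter is turned into a rare one.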
Purification of nucleic acids: PNA hybridizes to nucleic acids in two different ways, a
sequence-specific method and a generic method. The sequence-specific method is
selective: it requires sequence information on the target and synthesis of a
dedicated PNA that binds the target DNA. The generic method does not require the
target DNA sequence and uses the triplex-forming capability of PNA; it is suitable
for bulk purification. [11]
-http://www.horizonpress.com/hsp/books/pna.html
-www.springerlink.com/index/D5078873017U6131.pdf
-http://www.eurogentec.com/module/FileLib/oligo_16.pdf
-http://www.fasebj.org/cgi/reprint/14/9/1041.pdf
- http://www.abrf.org/ABRFNews/1993/September1993/sep93impepna.html
7.7.7 References
[1] Peptide Nucleic Acids: Protocols and Applications. Horizon Printing Press. (1999)
http://www.horizonpress.com/hsp/books/pna.html Access Dt.:26/10/2006
[2] Ray, A., Nordén, B. Peptide nucleic acid (PNA): its medical and biotechnical
applications and promise for the future. Department of Physical Chemistry, Chalmers
University of Technology, S 412 96, Gothenburg, Sweden.
http://www.fasebj.org/cgi/content/full/14/9/1041 Access Dt.: 27/10/2006
[4] Matsudaira, P., Coull, J., Peptide Nucleic Acids: A New Nucleic Acid Analog.
Whitehead Institute for Biochemical Research, Dept. of Biology. Massachusetts
Institute of Technology, Cambridge and Millipore Corporation, Core R&D, Specialty
Chemistry Group, Bedford, MA 01730
http://www.abrf.org/ABRFNews/1993/September1993/sep93impepna.html
[5] Brandt, O., Hoheisel, J., Peptide nucleic acids on microarrays and other
http://www.moltools.org/pooled/articles/BF_SITEART/view.asp?Q=BF_SITEART_13
9244 Access Dt.: 27/10/2006
[6] Gaylord, B. S., Massie, M. R., Feinstein, S. C., Bazan, G. C., SNP detection using
peptide nucleic acid probes and conjugated polymers: Applications in
neurodegenerative disease identification.(2004). Materials Department and Institute
for Polymers and Organic Solids and Neuroscience Research Institute, University of
California, Santa Barbara, CA 93106.
http://www.pnas.org/cgi/content/abstract/102/1/34. Access Dt.: 27/10/2006
[7] Fujimori, F., Kitagawa, F., Abe, Y., Ohori, Y., Kiyota,R., Nakamura,Y., Ikeda, H.
and Murakami, Y., Application of peptide nucleic acid (PNA) for detection of
transcription factors binding probes. Department of Biological Science & Technology,
Faculty of Industrial Science & Technology, Tokyo University of Science, Yamazaki,
Noda-Shi, Chiba, 278-8510, Japan.
http://nass.oxfordjournals.org/cgi/content/abstract/2/1/69 . Access Dt.:27/10/2006
[8] Paulasova, P., Andréo, B., Diblik, J., Macek, M. and Pellestor, F. The peptide
nucleic acids as probes for chromosomal analysis: application to human oocytes,
polar bodies and preimplantation embryos.(2004) Laboratory of Assisted
Reproduction, Motol Hospital, Vuvalu 84, 150 06 Praha 5, Czech Republic and
CNRS UPR 1142, Institut de Génétique Humaine, 141 rue de la Cardonille, F-34396
Montpellier Cedex 5, France http://molehr.oxfordjournals.org/cgi/content/full/10/6/467
Access Dt.:27/10/2006
[9] Fiandaca, M. J., Hyldig-Nielsen, J.J. and Coull, J.M. PNA Blocker Probes
Enhance Specificity in Probe Assays.(2004)
http://www.horizonpress.com/hsp/abs/abspna.html Access Dt.:28/10/2006
[10] Demidov, V.V., Frank-Kamenetskii , M. D., PNA directed genome rare cutting.
http://www.horizonpress.com/hsp/abs/abspna.html Access Dt.:28/10/2006
[11] Orum, H., Purification of nucleic acids by hybridisation to affinity tagged PNA
probes. http://www.horizonpress.com/hsp/abs/abspna.html Access Dt.:28/10/2006
[12] Fujimori, F., Kitagawa, F., Abe, Y., Nakamura, Y., Ikeda, H. and Murakami, Y.,
Design of the photosensitized peptide nucleic acids for the analysis of geno-typing.
Department of Biological Science & Technology, Faculty of Industrial Science &
Technology, Tokyo University of Science, Yamazaki, Noda-Shi, Chiba, 278-8510,
Japan. http://nass.oxfordjournals.org/cgi/reprint/1/1/177.pdf Access Dt.:28/10/2006
[13] Marin V. L., Roy S, Armitage BA., Recent advances in the development of
peptide nucleic acid as a gene-targeted drug. Department of Chemistry, Carnegie
Mellon University, 4400 Fifth Avenue, Pittsburgh, PA 15213-3890, USA.
[14] Wang, G., Xu, X. S., Peptide nucleic acid (PNA) binding-mediated gene
regulation. Institute of Environmental Health Sciences, Wayne State University, 2727
Second Avenue, Detroit. http://www.nature.com/cr/journal/v14/n2/full/7290209a.html
Access Dt.: 29/10/2006
[15] New Peptide Nucleic Acids, Technology transfer from University of Oxford.
http://www.isis-innovation.com/licensing/1332.html . Access Dt.:30/10/2006.
[16] Peptide nucleic acids as epigenetic inhibitors of HIV-1. International journal of
Peptide Research and Therapeutics, September 21, 2006. Pages 269-286.
www.springerlink.com/index/D5078873017U6131.pdf Access Dt.: 30/10/2006
[17] Svanvik, N., Westman, G., Wang, D. and Kubista, M. Light-up probes: Thiazole
orange-conjugated peptide nucleic acid for detection of target nucleic acid in
homogeneous solution. Department of Molecular Biology, Lundberg Institute, S-413 90
Göteborg, Sweden; Department of Organic Chemistry, Chalmers University of
Technology, S-412 96, Gothenburg, Sweden.
http://www.molbiotech.chalmers.se/research/mk/lightup/Anal%20Biochem%20281,%
2026%20(2000).pdf Access Dt.:30/10/2006
Chapter 8.2 Antibody Engineering
Rushabh Gohil
8.2.1 Introduction
Fig.8.2.2 A Monoclonal Antibody showing the desired modification in the Fab region
to create specificity in its binding site to a particular antigen.
(Source: http://www.immuno-precise.com/images/idio-1.gif)
Antigens specific to the target are repeatedly injected into mice to drive
the production of specific antibodies, facilitated by proliferation of the desired B
cells. The mice then produce the desired sets of lymphocytes in response to the
antigens.
The spleen cells (rich in B cells and T cells) are then cultured separately;
these spleen cells produce antibodies specific to the antigens that were injected.
The myeloma cells to be fused with them are cultured separately as well. The
myeloma cell line used for generating a hybridoma should have two important
characteristics: first, it should have stopped synthesizing antibodies of its own; and
second, it should be a mutant that cannot synthesize the enzyme hypoxanthine
guanine phosphoribosyl transferase (HGPRT). The HGPRT deficiency allows
selection in HAT medium, in which unfused myeloma cells die while fused
hybridomas survive using the HGPRT supplied by the spleen-cell partner.
Considerable efforts during the last 10-15 years have been made to
improve the yield of monoclonal antibodies using hybridoma technology. Also,
consistent efforts are being made to develop newer methods for the production of
monoclonal antibodies using antibody engineering as a key tool. One such
commendable effort in the recent times has been the use of Phage Display to
produce monoclonal antibodies using recombinant phages.
Both conventional hybridoma and phage display antibody production
exploit the vast diversity of the mammalian antibody repertoire. The fundamental
difference is that with hybridoma antibody production, this diversity is harnessed by
the immortalization of antibody producing B-cells, while with phage display it is the
genes that encode antibody variable regions (V-genes) that are immortalized.
Thus, the sacrifice of animals such as mice is avoided by using the phage display
technique in place of hybridomas. A further major advantage of antibody
production by phage display is that in many cases the whole process can be
performed in vitro, thereby negating the requirement for target antigens to be
immunogenic[5].
There have also been quite a few advancements in the process of using
hybridomas to create monoclonal antibodies. One such advancement is that the cell
fusions are facilitated through the use of polyethylene glycol (PEG). This reagent is
used to assist the fusion of the myeloma and spleen cells and also helps in
preserving the hybridomas. A second advancement is the use of continuous cell lines
as fusion partners for the antibody-producing B cells. Feeder layers consisting of
extra cells to feed newly formed hybridomas are used for optimal growth and
hybridoma production. The most common feeder layers consist of murine peritoneal
cells, macrophages derived from mouse, rat or guinea pig, and extra non-
immunized spleen cells.
Computer graphic techniques are also being used to build specific antigen-
binding sites in antibodies. Using this approach, some designer antibodies of practical
value have already been produced. In this strategy, genes are not cloned from
lymphocytes but are instead designed from a repertoire of antibody genes available
in a collection.
Although touted as one of the most efficient and celebrated technologies, the
use of monoclonal antibodies obtained by antibody engineering has its
fair share of advantages and disadvantages. Continuous efforts are being made by
the protein technology industry to overcome the major shortcomings of this
engineering technology [8].
1. Because the antibody is monoclonal, it may not produce the desired biologic
response thus providing insufficient effector response.
2. Monoclonal antibodies against conformational epitopes on native proteins may
lose reactivity with antigens that have been minimally perturbed. This could
threaten their specificity for the antigens.
3. Antibodies sometimes display unexpected cross-reactions with unrelated
antigens.
4. The production of monoclonal antibodies through antibody engineering requires
a very large commitment in terms of time, effort and capital expenditure.
Diagnosis
Immunopurification
Immunotherapy
1. Abbott Laboratories
2. Alexion
Alexion Pharmaceuticals is a leading American biotechnology company
that is preparing to launch its first commercial product, eculizumab, in 2007, with
another product, pexelizumab, being studied in a large Phase III trial.
(http://www.alexionpharm.com/)
3. Biogen IDEC
5. Genentech
6. Genmab
7. ImClone Systems
8. Medarex
9. MedImmune
10. UCB
8.2.7 References
3. Barrett, J. T. (1976) Basic Immunology and its Medical Application. Saint Louis:
Mosby.
7. Baker, M. (2005) Upping the Ante on Antibodies, Nature Biotechnology. 23, 1065-
1072.
Krutika Wikhe
8.3.1 Introduction
Proteins, which are composed of amino acids, form the building blocks of the human
body. Their immense importance and role in biological pathways has led to an
explosion in protein studies. As researchers started to delve into proteins, their
structures and their functions, the need to produce them arose. This in turn led
to the search for a proper biological system to produce proteins such that their
integrity, quality and quantity could support such studies. It did not turn out to be
an easy task, and I am sure many protein scientists of that era would agree that just
finding such a system, and then optimizing it for a given protein, proved to be a
daunting task. And we, who now produce proteins routinely for studying their
effects, toxicity, functions, structures and many other uses, ought to be thankful
to those protein scientists who did the major chunk of the work by establishing such
biological systems for us.
Once scientists came up with the first biological expression system, viz. E.coli, they
thought that all the problems associated with protein production were history. But then
other problems cropped up, such as inclusion-body formation, intracellular retention of
proteins, and purification and separation of the desired protein from the rest of the
cellular mass. This led to a search for better, higher-throughput
expression systems, and from that time until now there has been continuous research for
more efficient, high-throughput protein expression systems. We have come a
long way from using only E.coli, although it is still the preferred system over others. Over
the course of time yeast, mammalian cells, Baculovirus and, most recently, cell-free
protein expression systems have been exploited.
• E.coli: The genetics and biochemistry of E.coli are probably the best
understood of any known organism. The knowledge gained in studying E.coli
biology has been applied to the development of many molecular cloning
techniques. Most cloning vectors and methods utilize E.coli as a preferred
host, primarily because of the ease with which it can be grown and
manipulated. It is also suitable for expressing proteins because of its rapid
doubling time and its ability to grow on a wide range of nutrient media. It also
provides numerous transcriptional and translational control elements that can
be applied to the expression of foreign genes. The steps involved in foreign
gene expression are:
1) Insertion of the gene into an expression vector, mostly plasmid.
2) Transforming a suitable E.coli host strain with the plasmid for example
by electroporation.
3) Evaluation of protein stability and expression.
4) Once small scale experiments have verified the expression and stability
then large scale production of the protein can be started in fermentation
system.
5) Production is followed by purification and characterization of the
protein.
Fig 8.3.1 A flow diagram for a typical E.coli based expression system.
Ref: http://silver-server.dur.ac.uk/Teaching/Expression_Systems/2_E_Coli/index.htm
Although E.coli is the first choice of expression system for any protein
researcher, you really can’t expect that organism to satisfy everyone’s
needs. What I mean by that is, for researchers interested in eukaryotic
proteins which need to be post-translationally modified, a eukaryotic
system must be used. Furthermore, if the protein is expressed in an
insoluble state in E.coli, one way to circumvent this problem is to express
the protein in eukaryotic system.
• Yeast: For more than a decade the yeast Saccharomyces cerevisiae has
been used extensively for the production of foreign proteins. One of
the reasons for choosing Saccharomyces as an expression system is the
vast knowledge base about the organism and the presence of eukaryotic post-
translational modification pathways. Another positive aspect of using
yeast as a system for production of proteins is that it has been approved as a
safe organism by the FDA, and hence it can be used for production of
biologically important proteins on a commercial scale [7]. The goals in
expressing foreign proteins in S.cerevisiae are achieving the desired
yield, producing proteins with the desired post-translational modifications, and
secretion to the extracellular medium. Through continuous research in this
field, there are now many vectors and host strains available to direct gene
expression in S.cerevisiae. A variety of choices are now available with
respect to specific elements used to direct expression and secretion like the
promoters used, the signal sequence for secretion, selectable marker and
even the mechanism of replication[7].
Early on in the 1970s, the methylotrophic yeast Pichia pastoris was developed
to convert methanol into high-quality protein. By the early 1980s the focus
shifted from Saccharomyces to Pichia as a eukaryotic microbial system to
produce large quantities of heterologous protein of interest to protein
researchers[7]. The transformation methods developed for Saccharomyces
work well even with Pichia. We can also produce either intracellular or
secreted protein with Pichia. Secretion requires the presence of a signal
peptide on the expressed protein to target it towards the secretory
pathway[8]. Currently Invitrogen has the exclusive rights for the distribution of
Pichia expression technology[9].
A principal reason for using Pichia is that it gives a much higher yield of the
desired protein than Saccharomyces. Another reason for expression in Pichia
is that it secretes very low levels of its native proteins, so the expressed
protein can be easily purified from the medium.
The major producers of commercial yeast systems are Invitrogen and BD
Biosciences. BD Biosciences Clontech offers the YEASTMAKER™ Yeast
Transformation System and the YEASTMAKER™ Yeast Plasmid Isolation Kit,
as well as many types of yeast media and MATCHMAKER Yeast Two-Hybrid
System [10].
But as with all other systems, there are some problematic issues concerning
protein production in yeast too. Briefly the following problems arise:
1. Genetic instability of the transformed yeast particularly during scale-up.
2. Inability to produce toxic proteins.
3. Inefficient secretion of larger (>30kDa) proteins.
4. Proteolysis of secreted proteins.
• Baculovirus:
Baculovirus has emerged as a popular system for overproducing
recombinant proteins in eukaryotic cells. The main difference between the Baculovirus
expression system and yeast is that Baculovirus is a
helper-independent virus that can be propagated to high titers in insect
cells. This in turn results in high protein production, which after all is one
of the aims of protein production. Another positive aspect of using
Baculovirus is that it has a large genome and hence can accept large inserts of
foreign DNA. Finally, because baculoviruses are non-infectious to humans, they offer a
possible advantage when expressing oncogenes or toxic proteins [11].
Currently the most widely used system employs the lytic Autographa
californica nuclear polyhedrosis virus (AcNPV). The basic methodology of
expressing foreign proteins in this
system involves the following steps:
1. Gene is first cloned into a transfer vector
2. The recombinant vector is transfected into insect cells.
3. In a homologous recombination event, the foreign gene is inserted into
the viral genome.
4. Recombinant viruses can be identified by DNA hybridization or PCR
technique.
• Mammalian cells:
In recent years, mammalian cells have been used extensively for the production of
recombinant proteins, mainly those requiring post-translational modifications.
They also serve as a means for examining aspects of gene replication,
transcription and translation. Although there is a wide range of mammalian
cells available for expression, only a few have emerged as systems of choice for
protein production. The desirable features of a mammalian cell line are that it
should be capable of continuous growth and can be grown in
suspension, should have a low risk of infection by viruses, and can be
characterized easily with respect to morphology and gene copy number [14].
The general procedure for mammalian cell expression is as follows [8]:
Mammalian cells generally are preferred when expressing proteins for human
applications. The following are the typical uses of mammalian expression
systems:
1. Verification of cloned gene product.
2. Production and isolation of genes from cDNA libraries.
3. Production of correctly folded and glycosylated proteins
4. Production of clinically active viral surface antigens and monoclonal
antibodies.
Mammalian-produced proteins are quality controlled through a process whereby
entry of incompletely folded and unassembled proteins into the secretory pathway is
selectively inhibited.
Even the use of human cell lines is not perfect, since the transformation required
to produce a stable cell line might in turn result in altered glycosylation.
Moreover, mammalian expression techniques are time consuming, difficult on
a larger scale and costly. Optimal techniques are mostly a compromise
between transfection efficiency and post-treatment viability of cells, and
regulating this balance is generally troublesome. Complex nutrient requirements
and low product concentrations also mean that the end product must
warrant this approach being commercially viable.
Major commercial sources of mammalian expression
vectors include Stratagene [15] and BioLabs [16].
We have seen that each system has some unique features and qualities
which allow it to be used for certain kinds of proteins. Most protein
researchers base their choice of expression system on the kind of protein
they have to express and the extent of purification, yield and structural and
functional conformation they desire. To give an overall comparison of all the
above mentioned expression systems the following diagrammatic
representation would prove helpful:
Fig 8.3.4 Comparison of protein expression systems.
Ref: http://www.proteinsciences.com/technology/technology_why.htm
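The trade-offs behind such a comparison can be caricatured as a small lookup table. The properties and scores below are illustrative assumptions for the sketch, not data taken from the figure; a real decision weighs many more factors (cost, scale, regulatory approval, glycosylation pattern):

```python
# Toy decision helper for picking an expression system. The property table is
# an assumed simplification: higher "speed"/"cost" scores mean faster growth
# and cheaper operation respectively.
SYSTEMS = {
    "E. coli":     {"glycosylation": False, "speed": 3, "cost": 3},
    "yeast":       {"glycosylation": True,  "speed": 2, "cost": 3},
    "baculovirus": {"glycosylation": True,  "speed": 2, "cost": 2},
    "mammalian":   {"glycosylation": True,  "speed": 1, "cost": 1},
}

def choose(needs_glycosylation):
    """Pick the fastest/cheapest system that satisfies the requirement."""
    candidates = {name: p for name, p in SYSTEMS.items()
                  if p["glycosylation"] or not needs_glycosylation}
    return max(candidates, key=lambda n: candidates[n]["speed"] + candidates[n]["cost"])

print(choose(needs_glycosylation=False))  # 'E. coli'
print(choose(needs_glycosylation=True))   # 'yeast'
```

The point of the sketch is only that the choice is driven first by a hard requirement (post-translational modification) and then by softer preferences, which mirrors how the chapter describes researchers selecting a system.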
8.3.5 References:
Introduction:
Proteins are composed of 20 basic units called amino acids which consist of a central
carbon atom (the alpha-carbon) bound to an amino group (NH2), a carboxyl group
(COOH), a hydrogen atom, and one of 20 different R groups. The alpha-carboxyl
group of one amino acid is joined to the alpha-amino group of the next by a peptide
bond to form chains of amino acid residues (polypeptide chains). Proteins are
functional polypeptide chains. The unbranched chain of amino acid residues has
direction, beginning at the amino end, and the chain of regularly repeating peptide
bonds is called the backbone, while the R groups projecting from the backbone are
known as side chains. The peptide bonds of the backbone are rigid and planar due to
their partial double bond character.
There are an unlimited number of conformations that a protein could adopt, but most
proteins fold spontaneously into one particular stable shape. This particular shape occurs
because backbone groups and side chains interact with each other, and with water, so that
particular conformations have more stabilising interactions than others. A randomly
coiled protein is devoid of its activity, yet isolated proteins in solution can revert to
their original active conformation after denaturing conditions are removed. It was
concluded, therefore, that the information needed to refold a protein into its native
form must be inherent in the amino acid sequence and that a protein’s sequence
specifies its conformation.
Globular proteins fold into a compact globular shape. In contrast, fibrous proteins do
not fold into a compact shape as their function lies in their fibrous nature. The
hydrophobic nature of certain amino acids is the main driving force in the adoption of
these compact structures. The side chains of amino acids can be polar or non-polar.
Non-polar residues are hydrophobic and pack together in the interior of the globular
protein structure to avoid contact with water, whereas polar residues, and the polar
groups in the backbone, are hydrophilic and form hydrogen bonds with each other and
water. These hydrogen bonds form a major part of protein structure stability, and are
formed when a hydrogen atom is shared between a hydrogen donor and a hydrogen
acceptor. When hydrogen bonds form between backbone groups in proteins the alpha-
amide group is the donor and the alpha-carbonyl group the acceptor.
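As a toy illustration of the packing principle described above, residues can be classified as hydrophobic (likely buried in the core) or polar (likely solvent-exposed) using the Kyte-Doolittle hydropathy scale; the example sequence is arbitrary:

```python
# Classify residues of a peptide as hydrophobic or polar using the
# Kyte-Doolittle hydropathy scale (positive = hydrophobic).
KYTE_DOOLITTLE = {
    "A": 1.8, "R": -4.5, "N": -3.5, "D": -3.5, "C": 2.5, "Q": -3.5,
    "E": -3.5, "G": -0.4, "H": -3.2, "I": 4.5, "L": 3.8, "K": -3.9,
    "M": 1.9, "F": 2.8, "P": -1.6, "S": -0.8, "T": -0.7, "W": -0.9,
    "Y": -1.3, "V": 4.2,
}

def classify(seq):
    """Return (hydrophobic, polar) residue lists for a one-letter sequence."""
    hydrophobic = [aa for aa in seq if KYTE_DOOLITTLE[aa] > 0]
    polar = [aa for aa in seq if KYTE_DOOLITTLE[aa] <= 0]
    return hydrophobic, polar

hydro, polar = classify("MKVLADE")
print(hydro)  # ['M', 'V', 'L', 'A'] -> candidates for the buried core
print(polar)  # ['K', 'D', 'E']      -> likely surface, hydrogen-bonding residues
```

A real structure prediction obviously needs far more than per-residue hydropathy, but this split is exactly the driving force the paragraph describes: non-polar residues cluster away from water while polar residues stay hydrated.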
The folding of a protein cannot be a random search through all possible structures, as
this would take far longer than the observed time, of approximately one second, for
typical proteins to fold from random to their folded state. Among the millions of
possible folding patterns, proteins take up one working, native, structure. Proteins are
thought to initially fold rapidly into a structure in which most of the final secondary
structure elements have formed and are aligned in roughly the correct way. This is an
open and flexible conformation, called a molten globule, and is the starting point for a
relatively slow process in which the side chains are repeatedly adjusted to form the
correct tertiary structure. This second stage is thought to have a variety of correct
pathways to the final conformation. This process can be summarised as local folding,
formation of long range interactions, then local rearrangements to give the final most
stable folded state. This model assumes that hydrophobic residues direct not only
the initial folding but also the slower tertiary folding, while allowing
rearrangements to be made.
The forces and interactions described above fold a protein into a particular 3D
conformation, and this apparently complex structure of proteins is in fact governed by
a set of relatively simple principles. These principles are explained by splitting the
conformation into various levels which build on each other to produce the entire
protein shape. To ensure proper folding, cells have evolved a sophisticated and
essential machinery of proteins called molecular chaperones that assist the folding of
newly made polypeptides. The importance of proper protein folding is underscored by
the fact that a number of diseases, including Alzheimer's and those involving
infectious proteins (prions), result from protein-misfolding events.
Molecular chaperones have an essential role in the regulation of protein conformation
states -- the process during which transient or stable interactions with client proteins
affects their conformation and activity. Chaperones capture unfolded polypeptides,
stabilize intermediates, and prevent misfolded species from accumulating in stressed
cells. The capacity of the Hsp70 and Hsp90 chaperones to regulate these processes
involves a constellation of positive and negative co-chaperones that function in
various combinations to interact with chaperones to release folded proteins, to
facilitate the assembly or disassembly of chaperone-containing heteromeric
complexes, to confer substrate specificities, and to affect subcellular trafficking.
All the information needed to specify the three-dimensional conformation of a protein is
encoded in its amino acid sequence. In vitro, protein folding is studied primarily
using small model proteins consisting of relatively few amino acids. These small model
proteins can be unfolded and will spontaneously fold back into the native structure upon
removal of denaturant. Several features define the basic differences between folding
of proteins in vivo and in vitro. First, the cytosol is an extremely crowded
environment with high macromolecular concentrations. Such macromolecular
crowding leads to excluded-volume effects, which strongly affect biochemical rates by
increasing protein association constants and thereby increasing intermolecular
interactions, including aggregation. Second, about one third of newly synthesized
proteins must be targeted to an organelle or are secreted to an extracellular
compartment where their functions are fulfilled. These proteins are targeted as nascent
or loosely folded polypeptides to their translocation machinery, where they fold to
their native states. Third, all proteins are synthesized by the ribosome from N- to C-
terminus, implying that as long as polypeptide synthesis proceeds, the folding
information is incomplete. Three different models explain how and when folding of
a polypeptide chain into its structure is achieved in living cells. The first model suggests
that folding of a growing polypeptide chain is postponed by chaperone binding until
its synthesis is completed; here folding is initiated only upon release of the protein
from the ribosome. The second model suggests that formation of secondary and tertiary
structure begins as soon as the polypeptide chain emerges from the ribosomal exit
tunnel. The third model proposes a stepwise folding in which folding is initially
delayed and is allowed to proceed only when sufficient sequence information is
available for the generation of a folded domain.
Recent Advances:
Feeling the molecular forces through a haptic device (i.e. force reflecting robotic
device) while visualizing and manipulating them in 3D virtual environments will be
an invaluable tool for engineers and scientists in almost every field. The haptic
feedback will allow finer control of atoms during manipulation and provide a gateway
between our world and the “nano” world. The role of force feedback in molecular
simulations with applications to protein-ligand docking are investigated. Protein-
ligand interactions in biochemical applications determine phenomena ranging from
sensory perception to enzyme catalysis. Computationally fast models are developed
for simulating molecular interactions, and the haptic device is then used during the
simulations to guide a ligand into a receptor site while reflecting the forces acting on
the ligand to the user in real time. The presence of a haptic interface will accelerate the
binding process and reduce the development time involved in scientific analysis.
Proteins need to be flexible in order to perform their cellular functions. For example,
most, if not all, biological processes are regulated through association and
dissociation of protein molecules. Thus, the elucidation of the mechanism governing
flexibility is critically important in biology and health sciences for the ultimate goal of
controlling the functions of proteins and designing new proteins (protein engineering)
and new drugs. In many proteins, large conformational transitions involve relative
movements of almost rigid structural units. The vibrational motions are indicative of
the large amplitude motions. The largest amplitude motions obtained by normal mode
analysis will be compared with the conformational changes of the proteins observed
by the crystallographers upon a substrate (such as a drug) binding. In this study a set
of proteins with both open and closed (with and without a ligand) conformations will
be extracted from the protein data bank. These conformations will be analyzed and a
computational tool will be developed to model these proteins' motions. An
optimization algorithm will be used to find the combinations of these modes in order
to have a trajectory between the open and closed conformations.
Understanding how proteins fold not only is one of the most interesting theoretical
problems in molecular biophysics but also has far-reaching medical and
biotechnological consequences. The great advantage of a really simple model is that
you can solve it exactly, at least for short chains of amino acids. You can examine
every possible folding of every possible sequence, picking out the ones of interest.
You can know with certainty which configurations have the most favorable
properties. Another advantage of a simple model is that you don't have to be an expert
in protein chemistry or molecular dynamics to play with it. Determining the process
by which proteins fold into particular shapes, characteristic of their amino acid
sequence, is commonly called the protein folding problem. The protein folding
problem refers to the combinatorial problem involved in enumerating the
conformations of a given protein molecule. If each amino-acid residue in a 100-
residue protein has 6 possible conformations, this leads to 6^100 possible
conformations for the protein; this calculation does not include side-chain
conformations, which would increase the number of degrees of freedom further. The
question now is: how does the protein fold, given this large number of possible
conformations? These simple calculations urge the development of new, efficient and
accurate search methods. Solving the folding problem has enormous implications:
exact drugs can be designed theoretically on a computer without a great deal of
experimentation. Genetic engineering experiments to improve the function of
particular proteins will be possible. Simulating protein folding can allow us to go
forward with the modelling of the cell. Protein folding can go wrong for many
reasons. When an egg is boiled, the proteins in the white unfold and misfold into a
solid mass of protein that will not refold or redissolve. In a similar way, irreversibly
misfolded proteins form insoluble protein aggregates found in certain tissues that are
characteristic of some diseases, such as Alzheimer's Disease.
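The 6^100 arithmetic above is easy to check directly. The sampling rate of 10^13 conformations per second used below is an assumed, deliberately optimistic figure, chosen only to show that even then an exhaustive search dwarfs the observed ~1 s folding time:

```python
# Back-of-the-envelope Levinthal-style estimate: how long would an exhaustive
# search of 6^100 conformations take at an (assumed) 10^13 samples per second?
import math

conformations = 6 ** 100
rate = 10 ** 13                       # conformations sampled per second (assumption)
seconds = conformations // rate
years = seconds // (365 * 24 * 3600)

print(f"conformations ~ 10^{math.log10(conformations):.0f}")   # ~ 10^78
print(f"exhaustive search ~ 10^{math.log10(years):.0f} years")  # ~ 10^57 years
```

Python's arbitrary-precision integers make the exact count trivial to compute; the conclusion is the usual one, that proteins cannot be folding by random search and efficient pathways or search heuristics are required.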
Applications:
The main application of protein design is to understand why proteins fold and why they
misfold and aggregate. Understanding protein folding is important due to its
applications in the fields of biomedicine (drug design, the cause of Mad Cow disease, etc.)
and nanotechnology (self-assembly of nanomachines). The shape of a protein is the
principal determinant of its function. Arbitrary strings of amino acids do not, in
general, fold into a well-defined three-dimensional structure, but evolution has
selected out the proteins used in biological processes for their ability to fold
reproducibly into a particular three-dimensional structure within a relatively short
time. Some diseases are actually caused by slight misfoldings of a particular protein.
Understanding the mechanisms that cause a string of amino acids to fold into a
specific three-dimensional structure is an outstanding scientific challenge.
Appropriate use of large scale biomolecular simulation to study protein folding is
expected to shed significant light into this process. The level of performance
provided by Blue Gene (sufficient to simulate the folding of a small protein in a year
of running time) is expected to enable a tremendous increase in the scale of
simulations that can be carried out as compared with existing supercomputers.
Protein production can be regulated by transcription factors that bind to specific DNA
sites, thus regulating the transcription rate of proximal genes. Finding these sites,
known as transcription factor binding sites, is fundamental to understanding the
regulation of gene expression. Professor Uri Keich in computer science works on this
motif-finding problem: finding the most pronounced motifs in input sequences and
analysing their significance to determine whether they are artefacts of the size of
the data. This work led to innovative significance evaluation of statistical tests,
which is especially important for analysing large datasets.
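The simplest version of this motif-finding problem can be sketched as an exhaustive k-mer count. The sequences and the "planted" motif below are invented toy data for illustration, not drawn from Keich's work:

```python
from collections import Counter
from itertools import chain

def most_common_kmers(seqs, k, top=3):
    """Naive motif search: count every k-mer across all input
    sequences and return the `top` most frequent ones."""
    counts = Counter(chain.from_iterable(
        (s[i:i + k] for i in range(len(s) - k + 1)) for s in seqs))
    return counts.most_common(top)

# Toy promoter fragments sharing the planted motif "TTGACA"
seqs = ["ACGTACGTTGACA", "TTGACAGGCT", "GGTTGACATT"]
print(most_common_kmers(seqs, 6))
```

Real motif finders additionally score candidates against a background model; that is the significance question the text describes, i.e. deciding whether an apparently frequent motif is merely an artefact of the data size.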
Proteins carry out the cell's work, acting as catalysts and controllers for numerous
chemical reactions and helping to give organs and tissues their shape. Studies of a
protein molecule's intricately folded 3D structure are important for medical research,
drug development, chemical industrial processes and basic research, because a
protein's structure determines its function. Crystals grown from dissolved protein are
used to determine protein structure and are therefore valuable in research. A newer
technology, the rapid assay of protein folding, uses a green fluorescent protein to
monitor the folding and solubility of a test protein. The two proteins are linked in a
hybrid molecule; when the hybrid is synthesized in the host cell, the green fluorescent
protein becomes fluorescent only if the test protein has folded properly. The folding
assay may aid research into currently incurable diseases such as Alzheimer's and
Huntington's.
References:
http://www.crbmb.com/cgi/reprint/39/5-6/261.pdf (role of chaperones in the folding of
proteins), access date 15/09/06.
Cyrus Levinthal, Are there pathways in protein folding? J. Chim. Phys. (1968), vol. 65,
pp. 44-45, access date 10/10/06.
http://www.lanl.gov/news/dateline/Dateline0999.pdf
Huong Nguyen
8.5.1 Introduction
The transition from genomics to proteomics for studying protein interactions has
brought the field of protein arrays into the limelight [1,2]. By allowing the fast and
miniaturised parallel analysis of massive numbers of diagnostic markers in complex
samples within a single experiment, high-throughput (HT) protein arrays are
promising tools for discovery and validation of biomarkers, drug screening,
diagnostics and clinical assays [3-8]. The technological feasibility of protein arrays
depends on the different factors that enable the arrayed proteins to recognise
molecular partners and on the specificity of the interactions involved [1].
There are two general types of protein arrays. Firstly, analytical arrays utilise
antibodies, antibody mimics or other proteins to measure the presence and
concentrations of proteins in complex mixtures. Analytical arrays can be subdivided
into forward- and reverse-phase protein arrays [9]. Forward-phase arrays immobilise
capture antibodies on the substratum and incubate them with a single test sample
containing several different target analytes. Reverse-phase arrays consist of multiple, different samples that are
immobilised on a chip. Each spot represents an individual test sample [10]. This
format allows multiple samples to be analysed under the same experimental
conditions for any given analyte [10,11]. Secondly, functional protein arrays assess a
collection of target proteins or even an entire proteome for a wide range of
interactions and biochemical activities [8,12,13].
Underlying Principles
Proteins can be arrayed either on flat solid phases or in capillary systems. Preferred
solid phases are modified glass or filter membranes because of their low-
fluorescence background. Binding can be covalent or non-covalent. Parameters such
as charge, viscosity, membrane pore size, pH, binding capacity and non-specific
binding play important roles in the generation of protein arrays [5].
Protein arrays are usually printed (gridded/spotted) and imaged using the same
arrayers and scanners as for DNA arrays. Arraying devices are largely pin-based
systems that transfer nanoliter amounts of liquid either on the outside of solid pins or
inside a split- or ring-shaped reservoir. Current detection strategies are classified as
label-free methods and labelled probe methods. Label-free methods include mass
spectrometry (utilises a protein-selective surface for immobilisation of a complex
protein solution) [5] and surface plasmon resonance (SPR) (optical biosensors for
monitoring biomolecular interactions) [4]. Imaging has also been based on either
direct fluorescent labelling of antigens, indirect labelling of antibodies or sandwich
assays using secondary antibodies or specific antibody-binding reagents [5,11] (Fig
2). Different fluorophores allow multiplexing and differential protein expression
profiling [5].
Fig. 2 Representation of labelled probe methods used in protein array detection [11]
Surface Chemistry
Considerable effort has been made in recent years to improve the immobilisation of
proteins on modified glass surfaces.
The selection and production of the capture agents are the most critical points in
protein-detecting arrays. Capture agents must be highly specific for the protein of
interest and must have sufficient affinity to capture proteins even at very low
concentrations [11].
Two array technologies quickly being adopted by researchers are 2D arrays and
bead arrays. In 2D arrays the capture agents are arrayed into planar substrates such
as polystyrene, glass or silicon. These arrays utilise fluorescently labelled reporter
molecules and are analysed using microarray scanners. In contrast, bead arrays
immobilise capture agents onto beads containing an integrated reporter dye. The dye
encodes for the identity of the capture agent on the bead. These arrays are read
using a fluorescent particle counter [12].
Atomic force microscopy (AFM) has been developed for detection at the single-molecule
level in protein nanoarrays, which exhibit almost no detectable non-specific protein
binding [10]. It is also currently widely used for surface
characterisation in protein microarrays. AFM reveals the change in height of an
immobilised protein upon binding with its cognate molecule [11].
HT Protein Production
The production of proteins using cDNA libraries in E. coli with subsequent purification
remains the gold standard. The procedures were adapted to HT expression in fully
automated systems. Purification is mainly based on short affinity tags fused to either
the N or C terminus of the recombinant protein and involves immobilised metal affinity
chromatography. Alternative hosts such as Pichia pastoris and
Saccharomyces cerevisiae have been tested for HT protein expression [16].
Advantages
The key advantage of protein arrays over other techniques is their capacity to
characterise a huge number of ordered protein spots simultaneously, thus replacing
numerous individual binding measurements with parallel assays against different
probes [1].
Another advantage is that protein arrays, derived from DNA arrays, can be
manufactured using technologies adapted from DNA microarray production [17,18].
DNA arrays have been successful in gene expression profiling and mutation
mapping. As the focus shifts from genomics to proteomics, protocols are needed to
study the activity of encoded proteins that directly manifest gene function [13].
Protein arrays can accomplish this, provided the proteins’ natural shape and
functionality are maintained [5].
Protein array technologies improve on current techniques, which require larger
volumes of capture reagents and are less sensitive. For example, ELISA experiments
require nanogram or microlitre amounts of capture reagents and display picomolar
sensitivity. In contrast, protein microarrays require picogram or picolitre amounts of
capture agents and display femtomolar sensitivity. Protein microarrays also
accommodate up to 20,000 spots per substrate.
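As a rough check on these figures, the fold-differences implied by the text can be worked out directly. The representative values (1 ng vs 1 pg of reagent, 1 pM vs 1 fM detection limits) are assumptions chosen to match the units quoted above:

```python
# Representative values (assumptions, matching the units in the text)
elisa_reagent_g = 1e-9    # ~nanogram of capture reagent (ELISA)
array_reagent_g = 1e-12   # ~picogram of capture agent (microarray)
elisa_limit_M = 1e-12     # ~picomolar detection limit (ELISA)
array_limit_M = 1e-15     # ~femtomolar detection limit (microarray)

reagent_saving = elisa_reagent_g / array_reagent_g
sensitivity_gain = elisa_limit_M / array_limit_M
print(f"{reagent_saving:.0f}x less reagent, {sensitivity_gain:.0f}x more sensitive")
```

Both ratios come out to roughly a thousand-fold, which is why the microarray format is attractive when capture reagents are scarce or analytes are dilute.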
Disadvantages
Protein array technology is not as straightforward as DNA array technology because of
the complex structure of proteins [4]. The complexity of proteins has proven to be a bottleneck in
the progression of protein arrays but the disadvantages will lessen as scientists work
to gain a better understanding of proteins.
• Expression and purification of proteins is a tedious task and does not guarantee
the functional integrity of the protein [19,20]. Preserving the native characteristics
of proteins is essential for downstream analysis.
• Many proteins are notoriously unstable, which raises concerns about microarray
shelf life [12,14].
Autoimmune Profiling
The diversity of the autoimmune response is a great challenge for the development
of antigen-specific tolerising therapies. In the area of autoimmune profiling,
researchers fabricated arrays containing 196 distinct biomolecules, comprising
proteins, peptides, enzyme complexes, ribonucleoprotein complexes, DNA and post-
translationally modified antigens. The arrays were probed with sera from patients
with eight human autoimmune diseases, including systemic lupus erythematosus and
rheumatoid arthritis [16]. To identify new autoantigens that may act as tolerising
vaccines, an antigen microarray consisting of 232 proteins was used; several proteins
were identified as potential targets of the autoimmune response in chronic
experimental autoimmune encephalomyelitis, a mouse model of multiple sclerosis.
Such analysis of the immune response can also be applied to other diseases [6].
Protein arrays have also been applied in identifying new kinases and their substrates.
Phosphorylation of proteins by protein kinases plays a central role in regulating
cellular processes and may contribute to many diseases, including diabetes,
inflammation, and cancer. Therefore, kinases are an important class of drug targets.
Different enzyme activities including phosphatases, peroxidases, galactosidases,
restriction enzymes and protein kinases have been analysed on protein, peptide, and
nanowell microarrays. In one study, a total of 119 known or predicted protein kinases
were expressed, purified as GST fusion proteins, arrayed and cross-linked on a
protein chip and assayed with 17 different substrates for auto-phosphorylation by
treatment with radio-labelled ATP. New activities were found: for example, 27
kinases showed phosphorylation activity towards poly-Glu-Tyr, indicating that many
kinases are capable of phosphorylating tyrosine even though sequence comparison
places them in the serine/threonine family [6].
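The final filtering step in such a chip assay reduces to scanning a kinase-by-substrate signal table for entries above background. The kinase names, counts and background threshold below are hypothetical, invented purely to illustrate the analysis:

```python
# Hypothetical radio-label counts for a kinase-substrate chip assay;
# names, numbers and the background threshold are invented.
signals = {
    "KinA": {"poly-Glu-Tyr": 950.0, "histone H1": 40.0},
    "KinB": {"poly-Glu-Tyr": 35.0,  "histone H1": 880.0},
    "KinC": {"poly-Glu-Tyr": 700.0, "histone H1": 20.0},
}
BACKGROUND = 100.0  # assumed background count

def active_on(substrate, data, background=BACKGROUND):
    """Return kinases whose signal on `substrate` exceeds background."""
    return sorted(k for k, subs in data.items()
                  if subs.get(substrate, 0.0) > background)

print(active_on("poly-Glu-Tyr", signals))  # ['KinA', 'KinC']
```

In the real study the same scan, run over 119 kinases and 17 substrates, is what surfaced the 27 unexpected tyrosine-phosphorylating kinases.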
Cancer research is currently one of the largest areas of application for protein arrays.
Cancer proteomics encompasses the identification and quantitative analysis of
differentially expressed proteins from normal tissue, premalignant and malignant
tissues [10,23]. Serum screening was performed in several studies to characterise
the serum and plasma of patients suffering from diverse cancers, such as colon, lung
or nasopharyngeal cancer. All studies demonstrated the applicability of arrays to this
field and led to the identification of known or new potential biomarkers [16].
Researchers have also immobilised anti vascular endothelial growth factor (VEGF)
antibodies to ProteinChip arrays to analyse the expression of VEGF protein isoforms
in lung tumors and normal lung tissue. VEGF plays an important role in the
development and metastasis of tumors and is therefore an important target in novel
anti-cancer treatments. The lung tumors that were analysed expressed a wide variety
of VEGF isoforms, while normal lung tissues only expressed low amounts of the
smallest VEGF isoform [10].
Proteomics
Proteomics is promising for the early detection of disease using proteomic patterns of
body fluid samples. Proteome analysis may also be important to make individualised
selection of therapeutic combinations that best target the entire disease-specific
protein network. Investigation of the proteome may also give a real-time assessment
of therapeutic efficacy and toxicity. Identifying changes in the diseased protein
network associated with drug resistance will make it possible to adjust the therapy
[10].
For example, reverse-phase protein arrays have been used to study the fluctuating
state of the proteome in minute cell quantities. The activation status of cell signalling
pathways controls cellular fate. Deregulation of these pathways can lead to
carcinogenesis. Reverse-phase protein arrays have used to analyse the status of key
points in cell signalling involved in prosurvival, mitogenic, apoptotic and growth
regulation pathways in the progression from normal prostate epithelium to invasive
prostate cancer. Focused analysis of phosphospecific target proteins revealed
changes in cellular signalling events through disease progression and between
patients. Gene expression alone cannot determine the activation (i.e.
phosphorylation) state of in vivo signalling pathway checkpoints [9].
Food Safety
Food safety is a top priority for many countries. As a result, great effort has been put
into developing technologies to ensure food products meet safety standards. A
particularly interesting application is the xMAP (www.luminexcorp.com) system.
This assay uses colour-coded microspheres to which capture molecules are attached.
The beads are sorted by flow cytometry: each bead type is identified by its
fluorescence label, and the quantity of captured target on each bead is measured. This
approach has been shown capable of performing 100 different assay types
simultaneously and is used for the detection of bacterial pathogens in food [17].
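The bead-decoding logic can be sketched as grouping flow-cytometry events by dye code and averaging the reporter signal per analyte. The dye-to-analyte map, pathogen names and intensities below are invented for illustration and are not Luminex's actual encoding:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical mapping from bead dye code to the capture molecule it carries.
BEAD_MAP = {1: "Salmonella antigen", 2: "E. coli O157 antigen", 3: "Listeria antigen"}

def decode_events(events):
    """Group bead events (dye_code, reporter_intensity) by dye code and
    average the reporter signal, mimicking a bead-array readout."""
    by_code = defaultdict(list)
    for code, intensity in events:
        by_code[code].append(intensity)
    return {BEAD_MAP[c]: mean(v) for c, v in by_code.items()}

# Invented flow-cytometry events: (dye_code, reporter_intensity)
events = [(1, 520.0), (1, 480.0), (2, 30.0), (3, 1500.0), (3, 1450.0)]
print(decode_events(events))
```

Because identity lives in the dye code rather than in a spatial position, the same cytometer run can score up to 100 assay types from one tube, which is the multiplexing advantage the text describes.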
The following websites are useful resources for learning more about protein arrays:
www.lab-on-a-chip.com
News resource for microarray and microfluidic applications
www.functionalgenomics.org.uk
Extensive resource for both genomic and proteomics-based applications
www.bioarraynews.com
Weekly news resource of developments in the microarrays sector
http://arrayit.com/
Commercial supplier (Telechem) with useful web resources
3. Arenkov, P., Kukhtin, A., Gemmell, A., Voloshchuk, S., Chupeeva, V. &
Mirzabekov, A. (2000) Protein microchips: use for immunoassay and
enzymatic reactions, Anal Biochem, 278, 123-131.
4. Espina, V., Woodhouse, E., Wulfkuhle, J., Asmussen, H., Petricoin, E. &
Liotta, L. (2004) Protein microarray detection strategies: focus on direct
detection technologies, J Immunol Methods, 290, 121-133.
6. Lueking, A., Cahill, D. & Mullner, S. (2005) Protein biochips: a new and
versatile platform technology for molecular medicine, Drug Discov Today:
Targets, 10, 789-794.
9. Poetz, O., Schwenk, J., Kramer, S., Stoll, D., Templin, M. & Joos, T. (2004)
Protein microarrays: catching the proteome, Mech Ageing and Dev, 126, 161-
170.
10. Hoeben, A., Landuyt, B., Botrus, G., De Boeck, G., Guetens, G., Highly, M.,
van Oosterom, A. & de Bruijn, E. (2006) Proteomics in cancer research:
methods and application of array-based protein profiling technologies, Anal
Chimica Acta, 564, 19-33.
11. Cretich, M., Damin, F., Pirri, G. & Chiari, M. (2006) Protein and peptide
arrays: recent trends and new directions, Biomol Eng 23, 77-88.
13. Zhu, H. & Snyder, M. (2003) Protein chip technology, Current Opinion in
Chemical Biology, 7, 55-63.
14. Stoll, D., Bachmann, J., Templin, M. & Joos, T. (2004) Microarray technology:
an increasing variety of screening tools for proteomic research, Drug Discov
Today: Targets, 3, 24-31.
15. Lopez, MF & Pluskal, MG (2003) Protein micro- and macroarrays: digitising
the proteome, J Chromatogr B, 787, 19-27.
18. Lal, SP, Christopherson, RI & dos Remedios, CG (2002) Antibody arrays: an
embryonic but rapidly growing technology, Drug Discov Today, 7, 143-149.
19. Binder, S., Hixson, C. & Glossenger, J. (2006) Protein arrays and pattern
recognition: new tools to assist in the identification and management of
autoimmune disease, Autoimm Reviews, 5, 234-241.
21. Cahill, DJ. (2003) Protein arrays and their role in proteomics, Adv Biochem
Eng Biotechnol, 83, 177-187.
9.1.1 Introduction
Biomarkers have been classified by Perera and Weinstein based on the sequence
of events from exposure to disease. Though biomarkers readily lend themselves to
epidemiological investigations, they are also useful in the investigation of the natural
history and prognosis of a disease. In addition to delineating the events between
exposure and disease, biomarkers have the potential to identify the earliest events in
the natural history, to reduce the misclassification of both disease and exposure, to
open a window onto potential mechanisms of disease pathogenesis, and to account
for some of the variability and effect modification in risk prediction. The recent
interest in biomarker discovery stems from new molecular biologic techniques that
promise to find relevant markers rapidly, without detailed insight into mechanisms of
disease [4]. Screening many candidate biomolecules at a time allows a parallel
approach. Genomics and proteomics are some
technologies that are used in this process. There is considerable interest in
biomarker discovery from the pharmaceutical industry. Blood tests or other
biomarkers could serve as intermediate markers of disease in clinical trials, and also
as possible drug targets.
Fig. 9.1.1 Disease pathway and potential impact of biomarkers (Journal of the
American Society for Experimental NeuroTherapeutics, Inc.)
During the Clinical Biomarker Summit in March 2006, it was observed that, as the
field matures, biomarkers are making their way into clinical trials. Faced with a
relative lack of experience in implementing biomarkers in clinical trials, many
researchers and clinicians face similar challenges: modifying trial design and
defining the right control population, validating biomarker assays from the biological
and analytical perspectives, and using biomarker data to guide decision making. The
Summit also addressed biomarker translation from pre-clinical to clinical studies and
a variety of biomarker applications in clinical trials, including patient selection,
monitoring clinical efficacy and safety, and clinical pharmacology. It further took note
of the bridging of the gap between the pharmaceutical and diagnostics industries and
the potential of companion diagnostics, with specific case studies of leveraging
biomarkers to accelerate and streamline clinical trials offering a realistic status
report. The Clinical Biomarkers Summit, built on the solid three-year track record of
Cambridge Healthtech Institute’s Biomarker Series, was the first meeting in the series
to focus exclusively on clinical applications of biomarkers.
At the Biomarker World Congress 2005 in Pennsylvania, over 500 thought leaders
from more than 260 organizations, representing 20 countries, gathered to discuss
biomarker implementation in drug and diagnostic development. A year later, the
largest meeting of its kind, the Biomarker World Congress 2006, was dedicated to all
areas of biomarker research spanning the pharmaceutical and diagnostic pipeline.
The meeting brought together a unique and international mix of large and medium
pharmaceutical, biotech and diagnostics companies, leading universities and clinical
research institutions, government and national labs, CROs, emerging companies and
tool providers, making the Congress an ideal meeting place to share experience,
foster collaborations across industry and academia, and evaluate emerging
technologies. The Congress also offered a balance of scientific sessions covering the
latest research, strategic presentations and brainstorming sessions for decision
makers.
Since the start of the 21st century there has been a great deal of biomarker research
worldwide; three prominent examples are noted below.
Another biomarker is the p21 gene: studies show that patients with p21-positive
tumors have a decreased probability of tumor recurrence. An article in the Journal of
the National Cancer Institute (July 15, 1998) discusses a multi-centre, randomized
clinical trial, one of the first of its kind, using the p53 status of tumor cells and other
molecular markers such as p21 to guide treatment decisions in bladder cancer
patients. [6]
The research team from the Norris Comprehensive Cancer Center conducted a study
of 242 patients with locally confined bladder tumors who were followed for an
average of 8.5 years. The p21 protein and its interaction with the p53 protein were
analysed. Results of the study indicated that patients with p21-positive tumors
survived disease-free significantly longer than patients with p21-negative tumors.
Furthermore, it was shown that the way the p21 and p53 proteins interact with each
other can give a very good indication of which patients must be considered at high
risk for recurrence. The article also stated that p53 is known to be a primary
regulator of p21, so genetic changes in p53 may lead to loss of p21 expression and
function. This in turn leads to unregulated cell growth and is thought to contribute to
the aggressive behavior of some tumors. The scientists stated that the study
confirmed their hypothesis: patients with p53-altered/p21-negative tumors
demonstrated a higher rate of recurrence and worse survival compared with those
with p53-altered/p21-positive tumors, while patients with p53-altered/p21-positive
tumors demonstrated a similar rate of recurrence and survival to those with p53-wild-
type tumors.
At the Centre for Translational Cancer Research, efforts focus on three types of
biomarker discovery: protein biomarkers in tissues and cells, protein biomarkers in
bodily fluids (blood, urine and other fluids), and transcriptome (RNA) biomarkers.
Together, these approaches facilitate translational cancer research by providing the
clinician with new tools for the enhanced diagnosis, follow-up and screening of
cancer patients.
Tissue Proteins:
Fluid Proteins:
The emerging field of proteomics provides new tools for the early detection of cancer
from human serum, cerebral spinal fluid, urine and other complex samples.
Proteomic research provides information regarding the proteome’s dynamic and
rapid changes which result from exogenous exposure or endogenous factors. The
CTCR Core offers a proteomic analysis based on the patented Surface Enhanced
Laser Desorption/Ionization (SELDI) technology. Assays using SELDI time-of-flight
mass spectrometry (TOF-MS) provide a means to identify new candidate biomarker
proteins because of their ability to detect and quantify multiple post-translationally
modified and processed protein forms in a single assay.
RNA Molecules:
Various production approaches are used for biomarkers; it is essential to select the
most rewarding and least time-consuming procedure, yielding the best-suited
biomarker with the broadest capabilities.
Capabilities of Biomarkers (Table 9.1.3):

Advantages                            | Disadvantages
Objective assessment                  | Timing is critical
Precision of measurement              | Expensive (costs for analyses)
Reliable; validity can be established | Storage (longevity of samples)
Less biased than questionnaires       | Laboratory errors
Disease mechanisms often studied      | Normal range difficult to establish
Homogeneity of risk or disease        | Ethical responsibility
Advances in biomarker discovery have led to the use of markers in many therapeutic
treatments and clinical trials; their applications are numerous, and a few are listed
below according to their success rate. The uses of biomarkers include:
As Cancer Markers
The increase in the use of tumor markers as screening and diagnostic tools has
generated much hope for the identification of broadly reacting or pan-cancer
antibodies. Upstate/Chemicon provides a large variety of many well characterized
cancer antibodies for breast cancer detection, chronic lymphatic leukemia (CCL), and
other cancers.
As Neurodegenerative Markers
Chemicon’s line of neurological antibodies includes well-characterized
neurodegenerative antibodies and assays for researching Alzheimer’s, Fragile X
Syndrome, Parkinson’s, Huntington’s and other related biomarkers.
As Metabolic Markers
Inflammation, oxidative stress and reactive species are continuously monitored as
drug-response markers. Chemicon now offers a monoclonal antibody to RAGE, a
member of the immunoglobulin superfamily of cell surface molecules that bind
molecules irreversibly modified by non-enzymatic glycation and oxidation.
• Immunohistochemistry
• Western blotting
• Immunoprecipitation
• Flow cytometric analysis
• ELISA
1. http://www.healthsystem.virginia.edu/internet/biomolec/biomarkers.cfm
(University of Virginia Health System)
2. http://www.udel.edu/ctcr/research/biomarkers_discovery.htm
(Centre for Translational Cancer Research)
3. http://www.frost.com/prod/servlet/report-brochure.pag?id=D301-01-00-00
(Frost and Sullivan Research Services)
4.
http://www.mdsps.com/over/MarketingMaterials/BiomarkersDiscovery.pdf
(MDS Pharma Services)
5. http://innovationwell.net/COMTY_biomarkers?h2ls
9.1.6 Key Industry Suppliers
Millipore Serological Corporation, Life Sciences, is the parent company of most
biomarker-producing companies, having acquisitions in many of the companies
mentioned below, which mostly work in collaboration with Millipore. They are:
1. LINCO (http://www.lincoresearch.com)
2. Celliance (www.celliancecorp.com)
3. MDS Pharma Services
4. Frost & Sullivan Research and Partnership Services (GPS)
5. Biomarker Pharmaceuticals
Many biomarker studies are under way in the industries mentioned above, but
biomarkers often fail to achieve their full potential because of failure to adhere to the
same rules that would apply to non-biological variables. The development of
any biomarker should precede or go in parallel with the standard design of any
epidemiological project or clinical trial. In forming the laboratory component, pilot
studies must be completed to determine accuracy, reliability, interpretability, and
feasibility. The investigator must establish “normal” distributions by important
variables such as age and gender. The investigator will also want to establish the
extent of intra-individual variation, tissue localization, and persistence of the
biomarker. Moreover, he or she will need to determine the extent of inter-individual
variation attributable to acquired or genetic susceptibility. Most, if not all, of these
issues can be resolved in pilot studies preceding the formal investigation.
9.1.7 References
Bharathirajan Panneerselvam
9.2.1 Introduction:
“SELDI employs aluminum chips 1-2 mm in diameter, spotted with protein capture
bait such as a chemical affinity resin, small molecule, antibody, DNA or enzyme.
Users apply a crude sample to the chip, allow proteins with affinities for the capture
molecules to bind to the surfaces, and then wash away any impurities or loosely-
bound proteins. Analytes are laser-desorbed directly from the chip, ionized and
analysed by mass spectrometry. A SELDI experiment produces a mass spectral
fingerprint that can distinguish differences in protein expression levels between
diseased and normal samples.” Giannoula Klemant, Pediatric Oncologist, Dana
Farber Cancer Institute and Children’s Hospital, Boston, says: “The main reason why
I became really interested in this technology is that for a biotechnologist, it is often
quite important to analyze groups of treated and untreated animals or people.
Traditional mass spectrometry analysis only permits the analysis of sample pairs, i.e.
a comparison of a single treated and untreated sample, which not only takes really
long hours but also prevents any statistical analysis.” [1, 3]
The above diagram shows the Principles of SELDI-TOF MS using the ProteinChip
System.
(a) Protein profiling.
(i) Microliters of sample, for example from diseased and healthy persons, are
applied to an eight-spot array with a hydrophilic, hydrophobic, cationic, anionic or
immobilized-metal affinity capture chromatography surface. [4] (Figures 3, 4 & 5)
Source: http://www.proteomicsnijmegen.nl/Seldi_pages/selditof.htm
Figure: 5 Bioprocessor for liquid handling procedures. The arrays are put into the
processor
Source: http://www.proteomicsnijmegen.nl/Seldi_pages/selditof.htm
(b) (The figure shows a comparative study of the SELDI and gel results.)
“SELDI-TOF mass spectrum and the spectrum depicted in gel view. The protein m/z
is displayed on the x-axis and the protein abundance is depicted on the y-axis.
Spectra are searched for differentially expressed protein m/z values using the
ProteinChip bioinformatics software or other suitable statistics or bioinformatics.
Computational algorithms are used to build models for the classification of diseased
and healthy samples with the discriminating m/z values (e.g. a classification tree).
Here, expression differences are visible between ovarian cancer and control
samples. (i) A peak at 9.2 kDa (haptoglobin fragment) is over expressed in ovarian
cancer. Other up-regulated peaks are visible at (ii) 4.1 kDa and (iii) 4.5 kDa, and (iv) a
down regulated peak is seen at 2.7 kDa.”[4, 6]
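The classification tree over such discriminating m/z values reduces to nested threshold tests on peak intensities. The thresholds and intensities below are invented for illustration and are not taken from the cited ovarian cancer study:

```python
def classify_spectrum(peaks):
    """Toy two-node classification tree over SELDI peak intensities
    (keys are m/z in Da); thresholds are illustrative assumptions."""
    if peaks.get(9200, 0.0) > 50.0:   # 9.2 kDa peak over-expressed?
        return "cancer-like"
    if peaks.get(2700, 0.0) < 10.0:   # 2.7 kDa peak down-regulated?
        return "cancer-like"
    return "control-like"

print(classify_spectrum({9200: 72.0, 4100: 15.0, 2700: 30.0}))  # cancer-like
print(classify_spectrum({9200: 8.0, 2700: 35.0}))               # control-like
```

Real tools such as the ProteinChip bioinformatics software learn the split points and their order from labelled training spectra rather than fixing them by hand.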
The most significant feature of SELDI is its ability to provide a rapid protein
expression profile from different types of clinical and biological samples.
Researchers have proved its effectiveness in biomarker discovery and identification.
It also plays a vital role in the study of protein-protein and protein-DNA interactions.
SELDI has proved highly versatile, with successes ranging from the identification of
potential diagnostic markers for breast, bladder, prostate and ovarian cancers and
Alzheimer’s disease to the study of biomolecular interactions and the
characterization of post-translational modifications. In fact, this technique was first
used in early-stage ovarian cancer detection, and it can be applied to easily
accessible body fluids such as serum [4, 5]. In this minireview, SELDI’s advantages
and disadvantages, key industry suppliers and its various applications, along with a
few learning resources, are discussed in detail.
This technology was introduced towards the end of 1993. Its high potential and
versatility attracted researchers to further studies of SELDI, which led to its
commercial development at Ciphergen Biosystems, Palo Alto, CA, USA; the
organisation encouraged further research that solved a number of medical and basic
research problems. [6-10]
Merchant and Weinberger’s review, published in 2000, appears to be the first to give
a brief and precise description of SELDI technology and its advantages in biomedical
research. Within the next four years, the diversity of ProteinChip arrays with
improved surface characteristics increased with the availability of SEND technology.
Continuing publications report enhancements in SELDI instrument performance,
added automation tools, improved experimental protocols and fine-tuned software
for biomarker pattern data analysis. Today it is straightforward to investigate
biomarker candidates characterized by post-translational modifications (PTMs). [10]
1. “The new ProteinChip System Series 4000 for the rapid translation of SELDI
biomarker discoveries into assays”. [1]
Validating markers up-front saves researchers valuable time and effort compared
with methods that discover only after assay development whether the markers stand
up in larger sample populations. [1]
Using SELDI instrumentation manufactured by Ciphergen, the Center for
Orthopaedic Research, in collaboration with the Arkansas Breast Cancer Research
Program, has developed a core facility for protein biomarker identification. The focus
of the UAMS SELDI facility is biomarker identification in cancer. [22]
“The application of polypyrrole (PPY) solid-phase microextraction (SPME) coatings
as both an extraction phase and a surface to enhance laser desorption and ionization
(SELDI) of analytes is introduced here. This SPME/SELDI fiber integrates sample
preparation and sample introduction on the tip of a coated optical fiber, as well as the
transmission media for the UV laser light. Using ion mobility spectrometry (IMS)
detection, the signal intensity was examined as a function of extraction surface area
and concentration of analyte. The linear relationship between concentration and
signal intensity shows potential applicability of this detection method for quantitative
analysis. Extraction time profiles for the fiber, using tetraoctylammonium bromide
(TOAB), illustrated that equilibrium can be reached in less than one minute. To
investigate the performance of the PPY coating, the laser desorption profile was
studied. The fiber was also tested using a Q-TOF MS instrument with leucine
enkephalin as a test analyte. Since no matrix was used, mass spectra free from
matrix background were obtained. This novel SPME/SELDI fiber is easy to
manufacture, and is suitable for studying low mass analytes because of the inherent
low background. These findings suggest that other types of conductive polymers
could also be used as an extraction phase and surface to enhance laser
desorption/ionization in mass spectrometry”. [11]
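The linear concentration-signal relationship reported above is what underpins quantitative use of the fiber. A minimal sketch of such a calibration, fitting a line by ordinary least squares; the concentration/intensity pairs are hypothetical illustrative values, not data from the study:

```python
# Least-squares calibration of signal intensity vs. analyte concentration,
# as used to assess quantitative applicability of the SPME/SELDI fiber.
# The concentration/intensity pairs below are hypothetical.

def fit_line(x, y):
    """Ordinary least squares for y = slope * x + intercept."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = my - slope * mx
    return slope, intercept

# Hypothetical calibration points: concentration (uM) vs. IMS signal intensity.
conc = [1.0, 2.0, 5.0, 10.0]
signal = [120.0, 235.0, 610.0, 1190.0]

slope, intercept = fit_line(conc, signal)

def estimate_conc(s):
    """Invert the calibration line to estimate an unknown concentration."""
    return (s - intercept) / slope
```

Once the line is fitted, the signal of an unknown sample is mapped back to a concentration estimate, which is the basis of the quantitative analysis the authors describe.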
Advantages:
Disadvantages:
9.2.4 Applications
Biomarker discovery
Interaction Difference Mapping
Expression Difference Mapping
DNA-protein interaction
Protein-protein interaction
Receptor-ligand interaction
Antibody-antigen interaction
Protein purification
Clinical trials stratification
Phosphorylation/signal transduction
Glycosylation analysis
Epitope mapping
Protein ID/peptide mapping
Toxicity markers
Source: http://ciphergen.com/doclib/docFiles/262.pdf
PATTERN RECOGNITION
“Single protein biomarkers with clinically relevant predictive power are quite rare.
Multiple marker panels generated by SELDI are beginning to deliver on the promise
of high-confidence clinical stratification. Ciphergen’s Biomarker Patterns Software
enables rapid sample stratification with an easy-to-interpret, tree-based classification
output, as shown in Figure 8”. [17]
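The tree-based classification such software produces can be illustrated with a toy example. The sketch below is a single-split "decision stump" over peak intensities, assuming hypothetical peak values and labels; it is not Ciphergen's actual algorithm, only an illustration of threshold-based stratification:

```python
# Minimal sketch of tree-style sample stratification on SELDI peak
# intensities: find the single (peak, threshold) rule that best separates
# disease from control samples. All values are hypothetical.

def best_stump(samples, labels):
    """Find the (peak_index, threshold, correct_count) split that best
    separates the labels, scanning observed intensities as thresholds."""
    best = (0, 0.0, -1)
    n_peaks = len(samples[0])
    for p in range(n_peaks):
        for s in samples:
            t = s[p]
            # rule: intensity at peak p greater than t -> class 1 ("disease")
            correct = sum(
                1 for x, y in zip(samples, labels) if (x[p] > t) == (y == 1)
            )
            if correct > best[2]:
                best = (p, t, correct)
    return best

# Hypothetical peak-intensity vectors (one value per m/z peak) and labels.
samples = [[0.2, 5.1], [0.3, 4.8], [0.9, 1.2], [1.1, 0.9]]
labels = [0, 0, 1, 1]  # 0 = control, 1 = disease

peak, threshold, correct = best_stump(samples, labels)
```

A real classification tree repeats this splitting recursively on each branch, which is what yields the easy-to-interpret tree output described above.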
VALIDATION
To verify the reliability of potential biomarkers, large numbers of samples need to be
comparatively analyzed to further substantiate their usefulness. A single user can
manually prepare and analyze up to 200 samples per day. [17]
Source: http://ciphergen.com/doclib/docFiles/262.pdf
PROTEOMICS APPLICATIONS
POST-TRANSLATIONAL MODIFICATIONS
“Because the SELDI ProteinChip platform delivers the exact molecular weights of the
molecules present in a given sample, a wide variety of covalent modifications
(phosphorylation, acetylation, oxidation, glycosylation and others) can be
analyzed.
For the detection of phosphorylation, an IMAC array loaded with gallium enables
specific enrichment and detection of phospho-peptides. This also provides a powerful
approach to the quantitation of kinase activity”. [17, 14]
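The phospho-peptide detection described here rests on the characteristic mass shift of roughly +80 Da that each phosphate group adds to a peptide. A small sketch of that calculation, using standard monoisotopic residue masses; the peptide sequence itself is a hypothetical example:

```python
# Monoisotopic peptide mass and the ~+80 Da phosphorylation shift that
# SELDI detects after IMAC-Ga enrichment. Residue masses are standard
# monoisotopic values; the peptide "SAYGK" is a hypothetical example.

RESIDUE_MASS = {  # monoisotopic residue masses, Da (subset)
    "G": 57.02146, "A": 71.03711, "S": 87.03203, "T": 101.04768,
    "Y": 163.06333, "V": 99.06841, "L": 113.08406, "K": 128.09496,
}
WATER = 18.01056     # H2O added for the free peptide termini
PHOSPHO = 79.96633   # HPO3 added by each phosphorylation

def peptide_mass(seq, n_phospho=0):
    """Neutral monoisotopic mass of a peptide carrying n phosphate groups."""
    return sum(RESIDUE_MASS[aa] for aa in seq) + WATER + n_phospho * PHOSPHO

unmodified = peptide_mass("SAYGK")
phosphorylated = peptide_mass("SAYGK", n_phospho=1)
shift = phosphorylated - unmodified  # ~79.97 Da, the phospho signature
```

Seeing a peak pair separated by this shift (before and after phosphatase treatment, for instance) is how a phosphorylation site is flagged on the array.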
12. prowl.rockefeller.edu: this website contains various tools for protein studies.
www.advion.com
www.agilent.com
www.appliedbiosystems.com
www.bdal.com
www.ionics.ca
www.mds-sciex.com
www.packardinst.com
www.perkinelmer.com
www.shimadzu-biotech.net
www.thermo.com
www.waters.com
www.affibody.com
www.affymetrix.com
www.bdbiosciences.com
www.biochem.roche.com
www.cambridgeantibody.com/
www.ciphergen.com
www.clontech.com
www.geneprot.com
www.morphosys.com
www.procognia.com
www.prolinx.com
www.sigmaaldrich.com
www.somalogic.com
www.zeptosens.com
www.zyomyx.com
www.abbottdiagnostics.com
www.astrazeneca.com
www.atheris.ch
www.bayerdiag.com
www.cellzome.com
www.chiron.com
www.Digene.com
www.incyte.com
www.amershambioscience.com
www.genomicsolutions.com
www.probes.com
www.proxeon.com
Conclusion:
9.2.7 References.
1. Biocompare - a guide for Life Scientist web site (15/10/06) (update Feb 16
‘05) http://www.biocompare.com/spotlight.asp?id=316
6. Frears E.R., Stephens D.J., Walters C.E., Davies H., Austen B.M., (1999) The
role of cholesterol in the biosynthesis of β-amyloid, NeuroReport. 10: 1699-1705.
7. Shane Beck, (1998) Ciphergen's ProteinChip Arrays, The Scientist. 12: 15-17.
9. Glaser V., Rosetta, (1998) Polymeric arrays and methods for their use in
binding assays, Nature Biotechnology. 15: 937-938.
10. Strauss E., (1998) After the genome IV meeting: New ways to probe the
molecules of life, Science. 282: 1406-1407.
16. Judith Y.M.N. Engwegen, Marie-Christine W. Gast, Jan H.M. Schellens and
Jos H. Beijnen, (2006) Clinical proteomics: searching for better tumour markers
with SELDI-TOF mass spectrometry, Trends in Pharmacological Sciences. 27: 251-259.
18. Harri Siitari and Heini Koivistoinen, (2004) Proteomics – challenges and
possibilities in Finland. http://www.tekes.fi/eng/publications/proteomics.pdf
21. Sonja V., Steve C., Scot R.W. & Andreas W., (2005) Protein quantification by the
SELDI-TOF-MS based ProteinChip system, Nature Methods. 2: 393-395.
22. Judith Y.M.N. Engwegen, Marie-Christine W. Gast, Jan H.M. Schellens and
Jos H. Beijnen, (2006) Clinical proteomics: searching for better tumour markers
with SELDI-TOF mass spectrometry, Trends in Pharmacological Sciences. 27: 251-259.
23. Yue Hu, Suzhan Zhang, Jiekai Yu, Jian Liu, Shu Zheng, (2005) SELDI-TOF-
MS: the proteomics and bioinformatics approaches in the diagnosis of breast
cancer, The Breast. 14: 250-255.
26. Dan Agranoff, August Stich, Paulo Abel and Sanjeev Krishna, (2005)
Proteomic fingerprinting for the diagnosis of human African trypanosomiasis,
Trends in Parasitology. 21:154-157.
27. University of Arkansas for Medical Sciences, Center for Orthopaedic Research,
Orthopaedic Surgery Department. www.cor.uams.edu/services.asp
29. Junjun Wang, Defa Li, Lawrence J. Dangott, and Guoyao Wu, (2006)
Proteomics and Its Role in Nutrition Research, Journal of Nutrition. 1759-1762.
By Payal Patel
S3120173
10.1 Introduction
The branch of engineering that deals with manipulating or constructing things smaller
than 100 nanometres is called nanotechnology. In healthcare, diagnosis plays an
important part in treating disease, and the identification and quantification of
proteins and their folding mechanisms are very important in the diagnosis of
diseases. Small quantities of proteins, which generally escape detection and are
responsible for disease, can now be quantified by protein nanotechniques. Proteins
are long stringy molecules that fold up into complex and useful shapes due to very
subtle interactions between their component parts. Proteins do most of the molecular
manipulation work in our bodies, joining and splitting molecules, moving things as
small as atoms and as large as cellular organelles from one place to another, and
making cellular metabolism work.
The worldwide emergence of nanoscale science and engineering was marked by the
announcement of the National Nanotechnology Initiative (NNI) in January 2000.
Recent research on biosystems at the nanoscale has created one of the most
dynamic science and technology domains at the confluence of physical sciences,
molecular engineering, biology,
biotechnology and medicine. Nanotechnology is beginning to allow scientists,
engineers, and physicians to work at the cellular and molecular levels to produce
major benefits to life sciences and healthcare. In the next century, the emerging field
of nanotechnology will lead to new biotechnology based industries and novel
approaches in medicine.
The atomic force microscope, for example, can locate and measure the
extraordinarily small forces associated with receptor-ligand binding on cell surfaces.
Microscopic electrical probes can detect a living cell’s exchange of ions with its
environment or the propagation of electrical signals in nerves. Optical instruments
combined with chemically selective light-emitting fluorescent probes can follow
chemical processes on the surfaces of and inside a living cell. This capability allows
observation of the biochemical processes and interactions of cells in living systems.
Cells in the human body contain naturally occurring molecular motors, for example
F1-ATPase, which is part of the large, membrane-embedded complex that
synthesizes ATP within mitochondria (Figure). This structure, only about 10 nm in
size, is a robust, fully functional rotating motor that is powered by natural
biochemical processes.
Figure: The molecular motor protein F1-ATPase. An actin filament is attached to a
motor protein to provide load to and allow visualization of the motor rotation
www.wtec.org
During the last few years, scientists have developed the technology for rapidly
mapping the genetic information in DNA and RNA molecules, including detection of
mutations and measurement of expression levels. This technology uses DNA
microchip arrays that adapt some of the lithographic patterning technologies of the
integrated circuit industry. Miniaturization of allied analytical processes such as
electrophoresis will lead to increases in throughput and reduced cost for other
important methods of analysis such as DNA sequencing and fingerprinting. For
example, new research is aimed at replacing the tedious, slow, and expensive
process of DNA sequencing in slab gels with miniaturized, integrated, microfabricated
analytical systems (Figure).
Figure: Photo mosaic of a DNA separation chip.
www.wtec.org
The image is pieced together from twelve optical micrographs. The inset shows a
small region 0.8 mm long containing dense pillars that act as a molecular sieve to
separate DNA molecules according to size. Conventional gel electrophoresis works
essentially the same way, and for this reason these nanofabricated structures are
called “artificial gels.” This technology has the potential to revolutionize DNA
separation techniques by providing an inexpensive, durable, and reproducible
medium for DNA electrophoresis.
Nanoparticles considerably smaller than one micron in diameter have been used to
deliver drugs and genes into cells. The particles can be combined with chemical
compounds that are ordinarily insoluble and difficult for cells to internalize. These
particles can then be introduced into the bloodstream with little possibility of clogging
the capillaries and other small blood vessels. The efficacy and speed of drug action
in the human body can thereby be dramatically enhanced. As part of this new
technology, nanoparticles can be used in similar ways: by carrying DNA fragments,
they can be used to incorporate specific genes into target cells (Figure).
Figure: The “Gene Gun,” a system that uses nanoparticles to deliver genetic material
to transfect plant and animal cells.
www.wtec.org
In this system, submicron gold particles coated with DNA are accelerated with a
supersonic expansion of helium gas. The particles leave the front of the device at
high velocity and penetrate the cell membrane and nuclear membrane, thus
delivering the genetic material to the nucleus.
Over the last five years, NASA Ames has been a leader in nanotechnology, carrying
out many projects that apply this useful technique in bioscience and medicine. It is
therefore an important contributor to advances in protein nanotechnology and their
applications in the field.
Like all other technologies, protein nanotechnology has some disadvantages along
with its many advantages, and a few of them make scientists think carefully before
using it. It requires infrastructure for nanobiology similar to that of other fields:
multi-user facilities to provide access to specialized technologies; funding
mechanisms and organizational structures that encourage and support
multidisciplinary teams and are responsive to rapid technological change; and
training of a new generation of scientists and engineers who are prepared to exploit
this new knowledge fully. Its high cost is another significant disadvantage.
Protein nanotechnology has applications in different fields, including life science and
medicine, engineering, and physics.
DNA detector arrays that today operate in the micron size range provide the potential
to
do thousands of experiments simultaneously with very small amounts of material.
Figure shows an image of a chip with 6,400 microdots, each containing a small
amount of a different gene in the yeast genome and capable of determining how
active that gene is in yeast. Yeast cells were grown under various conditions; the
amount of red or yellow light represents the level of RNA produced from the DNA in
that gene, under those conditions. Similar experiments using this or related
technologies can now be performed with tens or hundreds of thousands of human
genes. By comparing the pattern of gene expression of normal tissue with cancerous
tissues, scientists can discover which few genes are being activated or inhibited
during a specific disease. This information is critical to both the scientific and clinical
communities in helping to discover new drugs that inhibit cancer-causing genes. The
important point is that these technologies allow physiological changes in yeast or
humans to be characterized, molecule by molecule, in just a few hours. Five years
ago, an experiment like this would have taken dozens of scientists months to
complete.
Figure: The full yeast genome in a microarray chip.
www.wtec.org
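The normal-versus-tumour comparison described above is commonly expressed as a log2 ratio of intensities per gene. A minimal sketch with hypothetical gene names and intensities, flagging a gene as activated or inhibited at a two-fold change:

```python
# Comparing per-gene expression of normal vs. tumour tissue, as in the
# microarray experiment described above. Gene names and intensities are
# hypothetical; |log2 ratio| >= 1 (a two-fold change) flags a gene.
import math

normal = {"GENE_A": 100.0, "GENE_B": 400.0, "GENE_C": 150.0}
tumour = {"GENE_A": 420.0, "GENE_B": 390.0, "GENE_C": 30.0}

def classify(normal, tumour, fold=1.0):
    """Return {gene: 'activated' | 'inhibited' | 'unchanged'} based on
    the log2 ratio of tumour to normal intensity."""
    out = {}
    for gene in normal:
        ratio = math.log2(tumour[gene] / normal[gene])
        if ratio >= fold:
            out[gene] = "activated"
        elif ratio <= -fold:
            out[gene] = "inhibited"
        else:
            out[gene] = "unchanged"
    return out

calls = classify(normal, tumour)
```

Run across thousands of spots at once, this simple per-gene comparison is what lets a single chip reveal which few genes change during a specific disease.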
For more than a decade there has been an intensive effort to prepare high-quality
nanometer-size colloidal crystals of many common semiconductors. At the onset, this
effort had a strong focus on fundamental studies of scaling laws, in this case,
quantum confinement of electrons and holes. Over this decade, tremendous
advances occurred in both the spectroscopy and the fabrication methods. This
yielded a new class of very robust macromolecules with readily tunable emission
energy. To the extent that applications of this technology were envisioned at the
onset, they were focused in the domain of optoelectronics. Yet quite unexpectedly, it
turns out that these colloidal nanocrystals can be used as fluorescent labels for
biological tagging experiments. Biological tagging is one of the most widely employed
techniques for diagnostics and visualization. As shown in Figure it appears as though
for many applications, the colloidal nanocrystals are advantageous as labels. This
has led to rapid commercialization of the new nanotechnology.
One example is a colorimetric sensor that can selectively detect DNA from biological
agents; it is in commercial development, with successful tests against anthrax and
tuberculosis (Mirkin 1999). Compared to present technology, the sensor is simpler,
less expensive (by about a factor of 10), and more selective: it can differentiate a
single nucleotide mismatch in a sequence of 24, where 17 nucleotides constitute a
statistically unique identification.
Figure: Anthrax detection: when the anthrax target is present, pairs of nanoparticles
assemble together via the DNA filaments and change the color of the respective
suspension
www.wtec.org
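The claim that 17 nucleotides constitute a statistically unique identification can be checked with a back-of-envelope count: the number of possible 17-mers exceeds the number of positions in a human-scale genome. A sketch assuming a simple random-genome model:

```python
# Why a 17-nucleotide match is "statistically unique": the number of
# possible 17-mers (4^17) exceeds the number of positions in a
# human-scale genome (~3 x 10^9), so a specific 17-mer is unlikely
# to occur anywhere by chance. This assumes a uniform random genome,
# a simplification real genomes only approximate.

ALPHABET = 4                      # A, C, G, T
k = 17
possible_kmers = ALPHABET ** k    # distinct 17-mers: about 1.7 x 10^10

genome_positions = 3_000_000_000  # approximate human genome length

# Expected chance occurrences of one specific 17-mer in a random genome:
expected_hits = genome_positions / possible_kmers
```

Since the expected number of chance hits is well below one, matching 17 consecutive nucleotides effectively pins down a unique sequence, which is why a mismatch within a 24-mer probe is diagnostically meaningful.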
10.5 References
1. Baselt, D.R., G.U. Lee, and R.J. Colton. 1996. Biosensor based on force
microscope technology. Journal of Vacuum Science and Technology B 14(2):789-793.
4. Chan, W.C.W., and S.M. Nie. 1998. Quantum dot bioconjugates for ultrasensitive
nonisotopic detection. Science 281:2016-2018.
10. Noji, H. 1998. The rotary enzyme of the cell: The rotation of F1-ATPase. Science
282: 1844-1845.
11. Odde, D.J. and M.J. Renn. 1998. Laser-based direct-write lithography of cells.
Ann. Biomed. Eng. 26:S-141.
12. Renn, M.J., et al. 1999. Laser guidance and trapping of mesoscale particles in
hollow-core optical fibers. Phys. Rev. Lett. 82:1574-1577.
13. Santini, J.T. Jr., M.J. Cima and R. Langer. 1999. A controlled-release chip.
Nature 397:335-338.
14. Shi, H., et al. 1999. Template-imprinted nanostructured surfaces for protein
recognition. Nature 398:593-597.
17. U.R. Muller, D.V. Nicolau (Eds), Microarray Technology and Its Applications
19. www.comspacewatch.com/news
21. www.pubmed.com