
www.jaxenter.com
#38
Issue June 2014 | presented by
Plugging the desktop/cloudy IDE holes
Project Flux is on the case
Don't get locked in
Long-term support, Eclipse style
What's hot, what's not
The burning issues of the month
Game changing stuff
Why Open API and API management is on target
Editorial
With two and a half years' slow and measured work behind it, it's not an exaggeration to call Java's development conservative when compared to languages like Scala, which are driven by thousands of committers in gazelle-esque leaps. And whilst Java gently ticks along, the ecosystem around it shoots up at a rate of knots.
But it's not just language developments that matter to us
at JAX. It's also the amazing innovations and products that
support you, whatever your code of choice. On this front, RebelLabs' recent look into the most popular tools and
technologies on the scene offered plenty of food for thought.
Continuous Integration is clearly becoming more of a norm
across the board, with a 10 percent uplift in respondents from
the previous year.
Another growing priority in the sector is that of API man-
agement. In this issue, Kai Wähner examines the ways in
which Open API represents the cutting edge of a new busi-
ness model, presenting new routes for expanding branding
value and paths to market, as well as modelling new value
chains for intellectual property.
Thinking long-term
In this article, Kai also discusses the components and archi-
tecture of API Management solutions and compares different
API management products, and how these products comple-
ment other middleware software.
Elsewhere, we've got an interview with Eclipse project Flux
contributor Martin Lippert, who will be giving us the inside
story on this innovative new effort to bridge the gap between
the desktop-bound and free-floating cloud IDE. Why are we
stuck in a creative rut with this technology, and how are the
lunar-crew attempting to mix things up?
Continuing the Eclipse theme, we've also got insights on the foundation's approach to long-term support, courtesy of
Dr. Jutta Bindewald and Pat Huff. Find out how their open
source principles inform their approach, and how they go
about avoiding the shackles of vendor lock-in.
Lucy Carey, Editor
Index

Long term support: The Eclipse way
In it for the long haul
Dr. Jutta Bindewald and Pat Huff

In Flux: Martin Lippert on the mission to marry the desktop and cloud IDE

Open API & API management: Redefining SOA
Conceptual curve
Kai Wähner

Cracking Microservices practices
Masterclass
Bilgin Ibryam
Hot or Not
Oracle in-memory fronting
Larry Ellison ruffled a few feathers with his latest announcement about Oracle's In-Memory Computing strategy. Upstarts Hazelcast picked the same date to launch Hazelcast
Enterprise, a commercial offering which will go head to head with the Java overlords. Both
systems have scalability, high availability, reliability, cloud readiness, convergence of the
OLAP and OLTP approaches, and compatibility with SQL in common. Where the paths
diverge is that Hazelcast are all about low-cost, open source software, whilst Oracle oper-
ate on a path geared towards vendor lock-in, leading Hazelcast to label the San Francisco
behemoth an Apple wannabe. We do love a bit of sass on the side of any product launch.
Funky old Java
We've banged on about this before: Java is best when fresh. This advice has been reiterated by Microsoft's latest Security Intelligence Report, which counsels that keeping Java bang up to date provides maximum ROI for the security of your IT systems.
According to the report, old Java plug-ins especially are at risk of attacks from exploit toolkits, and it's a snap to hack into web pages using outdated software. Although it's easier said than done in some cases, if you can, it pays to use the most recent versions of Java for the best security ROI.
Groovy is Swift is Groovy is Swift
Apple's reveal of glossy proprietary language Swift was perhaps the biggest shock of the month. Cobbled together from a grab-bag of modern language features, Swift saw everyone from Rust users to die-hard JavaScript adherents lay claim to different aspects of it. One particularly compelling argument came from team Groovy, who see their language as Android's Swift counterpart. Lest anyone accuse Groovy's creators of merely jumping on the we-did-it-first bandwagon, Apple did in fact make a direct reference to Groovy during Swift's unveiling ceremony.
It's all coming up Gradle
Don't get us wrong, Groovy's revolutionary build automation tool, Gradle, is top notch. Still, we were surprised to see that it topped a recent poll by ZeroTurnaround as the tool people most want to better understand. Geared at resolving sticking points like maintainability, performance, usability, extensibility, and weak automation in other build systems, Gradle is overseen by warders Gradleware, who have worked hard to build up a solid reputation for the software against arch-rivals Maven and Ant, and it's likely this solid leadership that is drawing people's interest from around the JVM.
Red letter month for Linux
Java-ites were disgruntled at the thought of waiting over two years for their update, so spare a thought for Red Hat Enterprise Linux (RHEL) users, who waited more than three years before the launch of RHEL 7. First launched in 1994, Red Hat Linux forms a
central pillar for Red Hat solutions across cloud and virtualization, storage and middle-
ware. It's hoped that the operating system will further consolidate its grip in the space,
moving beyond its position as a commodity platform. In other Linux-related celebra-
tions, container technology Docker, which RHEL 7 will help push into the enterprise,
reached 1.0 status this month. Party on, Linux people!
Eclipse

Long term support: The Eclipse way

In it for the long haul

As Open Source moves into mainstream applications, it becomes imperative that support issues in that code can be addressed throughout the lifecycle of those applications. This is often difficult, as the Open Source communities tend to focus on new function and not on maintaining older releases.

by Dr. Jutta Bindewald and Pat Huff
What do many commercial applications have in common
with design tools for aircraft? They are both mission-critical,
they both contain open source (often Eclipse components),
and they both have to be functional for many, many years.
Keeping these long term solutions up and running is crucial
and at the same time a major challenge. Changes in the en-
vironment or in the data structure may awaken bugs that
have slept peacefully for years. Changes in the upper software
layers may lead to undetected problems in the lower layers.
The reality is that an application may crash even after having
worked just fine for a long time.
If your business is to build, sell, and run professional soft-
ware, you are faced with the task of supporting your offering
over its complete life cycle. If the problem is in your own
code, you may be able to analyze and fix the bug. There is still
know-how in your company, you own the source code, you
still have access to the build scripts, and you still have a test
environment, so you have a way forward.
However, what if it turns out that the prob-
lem is caused by a bug in an open source com-
ponent that you used in your solution? Eclipse
is well known as an open source community
that produces exceptional developer tools and
runtimes that also embraces and encourages
commercial adoption of their technology. In-
deed, the Eclipse Foundation is unique among
the open source communities in addressing is-
sues important to commercial adopters, such
as intellectual property and legal clearance.
Now, Eclipse is extending that to the support
issues of its consumers. As these consumers
come to depend on products built on Eclipse
technology, the ability to maintain these prod-
ucts throughout the complete life cycle of a
commercial offering becomes another impor-
tant part of the value proposition to them.
The Lifespan Problem
Official support for a version of an Eclipse project ends after nine months with the release of Service Release 2. So if you discover a bug in an ancient (from the open source project committer perspective) version, it will either already be fixed in the current version under development at Eclipse, or you will be told that it will be fixed in the next support release or in the next
major version. Moving your product to the latest version is
often not a viable option. The risk and the cost of exchang-
ing components inside a productive solution are high, and
many customers are not willing to accept that risk without
seeing a clear benefit. In addition, there are often regulatory
requirements for multiple years of stability which prohibit
a migration to newer releases at the same rate and pace of a
vibrant open source community like Eclipse. For example,
aerospace companies have to ensure software maintenance
of their offerings for decades.
The Options
So how do you fix bugs in (possibly very) old versions of
Eclipse projects? There are basically two options: You do it
yourself or you look for professional help.
The do-it-yourself option has a number of challenges
you must surmount. Yes, you will find the source code in the Eclipse repositories; however, do you understand it well
enough to apply critical changes? Can you re-create the build
and the appropriate test environment? Can you sign the bi-
naries? There are also legal considerations. For example, the
Eclipse Public License requires that you make all your source
code changes publicly available.
The professional-help option is also not as easy as it
sounds. You need to quickly find the right maintenance provider for your specific problem. In the best case, you want to
choose between several providers, but how do you compare
them? Once you have decided, you have to make sure that
you are not locked into that one provider; what happens
to the source code, binaries, etc. if you want to switch to
another provider?
The Eclipse LTS Answer
This is where the Eclipse Long Term Support (LTS) Work-
ing Group comes into the picture. The LTS Working Group
was created to address the long term maintenance for Eclipse
technologies that are commercially adopted. LTS provides
the infrastructure and a marketplace of needed skills to allow
you to provide support for your customers over the many
years of your product's life cycle.
At the same time LTS is designed to help those companies
or individuals who are in the business of providing profes-
sional support for Eclipse projects (the Maintenance Provid-
ers). Here are some of the building blocks of LTS:
1. The Marketplace: This is where maintenance providers
advertise their offerings and where a potential consumer
of these services can find the professionals to help them with their problems.
2. The Technical Infrastructure: This shared infrastructure
offers all necessary components to the maintenance pro-
viders: from source repository to signed binaries. We will
describe it in more detail below.
3. The Organization: LTS is organized as an Eclipse Work-
ing Group. Maintenance providers must join the Working
Group in order to leverage the LTS infrastructure.
How Does It Work?
If you are a potential consumer of LTS-based support servic-
es, you do not have to be a member of the LTS organization.
You simply visit the LTS Marketplace to fnd and contact
an appropriate maintenance provider. The maintenance pro-
vider will then discuss the terms, costs, and details with you.
In general, you will get the fixes as signed binaries. The source
code is available as open source under the license of the origi-
nal project (usually the EPL).
If you are a maintenance provider, you join the LTS or-
ganization to beneft from the shared services and infra-
Figure 1: Eclipse LTS Process Overview
structure. Let us take a closer look at this infrastructure
and how a maintenance provider would use it. It consists
mainly of a Git repository, a Gerrit code review system, the
Maven/Tycho build infrastructure, and a Hudson server for
continuous integration.
When a bug in an older version of an Eclipse project is
reported, you can fetch the source from the LTS Git re-
pository. After applying the necessary source code modifications, you push the changes to Gerrit for code review,
thus also triggering the rebuild of your component. If the
build succeeds, you will get the first positive vote from the
Hudson voter.
But this vote is not enough; other maintenance providers
are asked to review your changes and give their vote. If at
least one other provider accepts your changes and no one re-
jects them, you are done, and the source code changes can
be pushed back to the central LTS repository. You can now
build your component with Maven/Tycho and deliver the
signed binaries to your customer.
This is quite easy and straightforward, isn't it? But never-
theless, some questions may come to your mind.
Is Maven/Tycho mandatory to participate in LTS? No, all projects can participate in LTS, but if you use a build infrastructure other than Maven/Tycho, you are obliged to check in your build scripts to ensure a reproducible build.
Who has access to the bug fixes I made? The source code
is available (usually under the EPL) as open source to any-
one, thus complying with the Open Source license terms.
Access to the binaries and to the signing service is limited
to LTS members.
How can I create private bug fixes that should only be available to the public after a certain point in time (e.g. security fixes)? LTS maintenance providers are granted
a private Git forge. Only the provider himself has access
and can grant rights to this branch. Code changes in this
branch are not subject to a public Gerrit code review.
The LTS Marketplace
As you can see, Eclipse LTS offers an easy way for maintenance providers to fix bugs in older releases. But how can they reach potential customers? An important benefit of joining LTS is the newly-created Eclipse Marketplace for LTS providers, which offers an easy entry point for maintenance providers and their customers. Here maintenance providers can
offer their services on the marketplace, listing all the projects
they can provide support for. If you are a potential consumer
of these services, the LTS marketplace will be an ideal way to
find and choose between service providers that can help you
with your problems.
Go to http://marketplace.eclipse.org/ and select Long Term
Support under the list of Markets to see the current list of LTS
maintenance providers.
Collaborative and Open
Let us emphasize one aspect: Eclipse LTS is a collaborative
approach based on Open Source principles. LTS members
and maintenance providers work in a shared infrastructure
hosted by Eclipse. Fixes in LTS and associated binaries will
be available to all LTS members. In particular, that means
maintenance providers can offer not only their own bug fixes to their customers but also fixes done by other maintenance
providers.
From a customer point of view this minimizes the risk of
vendor lock-in. If you are no longer satisfied with your main-
tenance provider, or they stop supporting the project, you
may choose another maintenance provider. Thanks to the
shared infrastructure and the collaborative approach, none
of the fixes from the original provider will be lost.
Dr. Jutta Bindewald works as a Development Manager at SAP. She is a member of the Eclipse Board of Directors and Co-Chair of the LTS Working Group.
Pat Huff is a Program Director in IBM Rational, responsible for Open
Source and Eclipse. Pat manages the relationship between the IBM prod-
uct development community and Eclipse. He is a member of the Eclipse
Board of Directors and is Co-Chair of the LTS Working Group.
References
[1] LTS homepage: https://lts.eclipse.org/
Eclipse LTS offers an easier way to fix bugs in older releases.
Participants
Steering Committee Members: IBM, SAP, Innoopract, PolarSys and CA Technologies
Premium Members: Atos, Codetrails, itemis, Boundless, Ericsson, Obeo, Airbus, CEA LIST
Interested Parties: Bosch
www.jaxlondon.com
LONDON 2014
featuring
Business Design Centre, London
October 13th–15th, 2014
In London since 2010
Follow us: @JAXLondon
Sign Up Now and Save Big! Very Early Bird Discounts until July 31st
The Enterprise Conference on Java,
Web & Mobile, Developer Practices,
Agility and Big Data!
jaxlondon.com
October 13th–15th, 2014
Business Design Centre, London
JAX London is returning from 13th–15th October at the
Business Design Centre, Islington, bringing Java, JVM and
Enterprise Professionals together for our technology and
methodology focused event. Known as the place where
the most respected thinkers and the brightest minds of the
tech community meet, JAX London offers a deep dive for
the modern developer and architect aiming to transform
open technologies into valuable business solutions. We
pride ourselves on always focusing on the big picture: Java
core, architecture, and web technologies, as well as expert
professional insight into the very latest methodologies and
best-practices.
JAX London offers a unique experience featuring some of
the most highly regarded and sought after speakers, pre-
senting the latest trends and developments, in a profes-
sional and relaxed atmosphere.
Learn how to increase your productivity, identify which tech-
nologies and practices suit your specific requirements and
learn about new approaches. Network and exchange ideas
with other developers and experts about Java, Agility and
Big Data and many other important topics.
The conference is divided into workshop and session days:
Monday 13th October hosts a pre-conference workshop
and tutorial day, allowing developers to get hands-on in half-
day and full-day workshops overseen by experts.
Tuesday 14th & Wednesday 15th October see the confer-
ence proper taking place with more than 60 technical
sessions, keynote presentations, the JAX Expo, community
events and more.
Big Data Con will also be joining JAX London. A perfect op-
portunity to find out more about the latest knowledge on
modern data stores, big data architectures based on Hadoop, plus advanced data processing techniques.
Register now and save on the Very Early Bird offers until
31st July!
Standard ticket holders also have access to Big Data Con.
For more information and the latest news about the confer-
ence and our speakers check out www.jaxlondon.com.
Join us for JAX London 2014
Some highlights of the JAX London sessions
Richard Warburton (Monotonic Ltd)
Richard is an empirical technologist and solver of deep-dive techni-
cal problems. Recently he has been working on data analytics for high
performance computing and has written a book on Java 8 Lambdas for
O'Reilly.
Performance from Predictability
These days fast code needs to operate in harmony with its environ-
ment. At the deepest level this means working well with hardware:
RAM, disks and SSDs. A unifying theme is treating memory access
patterns in a uniform and predictable way that is sympathetic to the
underlying hardware. For example, writing to and reading from RAM and hard disks can be significantly sped up by operating sequentially on the device, rather than randomly accessing the data. In this talk he'll cover why access patterns are important, what kind of speed gain you can get, and how you can write simple high-level code which works well with these kinds of patterns.
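As a rough illustration of the point (an editor's example, not material from the talk), the following snippet sums the same array once sequentially and once in a shuffled order; on most hardware the sequential pass wins comfortably because it plays nicely with caches and prefetching. Treat the timings as indicative only; a proper comparison would use a harness such as JMH.

import java.util.Random;

public class AccessPatterns {

    public static void main(String[] args) {
        int size = 20_000_000;                 // adjust to your heap
        int[] data = new int[size];
        int[] shuffledIndexes = new int[size];
        Random random = new Random(42);
        for (int i = 0; i < size; i++) {
            data[i] = i;
            shuffledIndexes[i] = random.nextInt(size);
        }

        // Sequential access: memory is walked in order, so caches and prefetchers help.
        long start = System.nanoTime();
        long sum = 0;
        for (int i = 0; i < size; i++) {
            sum += data[i];
        }
        System.out.println("sequential: " + (System.nanoTime() - start) / 1_000_000 + " ms (sum=" + sum + ")");

        // Random access: the same amount of work, but in an unpredictable order.
        start = System.nanoTime();
        sum = 0;
        for (int i = 0; i < size; i++) {
            sum += data[shuffledIndexes[i]];
        }
        System.out.println("random:     " + (System.nanoTime() - start) / 1_000_000 + " ms (sum=" + sum + ")");
    }
}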
Simon Ritter (Oracle)
Simon Ritter manages the Java Technology Evangelist team at Oracle
Corporation.
Lambdas and Streams in Java SE 8: Making Bulk
Operations Simple
The significant new language feature in Java SE 8 is the introduc-
tion of Lambda expressions, a way of defining and using anonymous
functions. On its own this provides a great way to simplify situations
where we would typically use an anonymous inner class today. How-
ever, Java SE 8 also introduces a range of new classes in the standard
libraries that are designed specifically to take advantage of Lambdas.
These are primarily included in two new packages: java.util.stream
and java.util.function.
After a brief discussion of the syntax and use of Lambda expressions
this session will focus on how to use Streams to greatly simplify the
way bulk and aggregate operations are handled in Java. He will show
examples of how a more functional approach can be taken in Java
using sources, intermediate operations and terminators. He will also
discuss how this can lead to improvements in performance for many
operations through the lazy evaluation of Streams and how code can
easily be made parallel by changing the way the Stream is created.
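As a small taster of the style the session covers (our example, not Simon's), the snippet below uses a Lambda and the Streams API to filter, transform and collect a list, and shows how the same kind of pipeline can be run in parallel just by switching the stream source:

import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class StreamsTaster {

    public static void main(String[] args) {
        List<String> words = Arrays.asList("lambda", "stream", "collector", "spliterator", "optional");

        // Source -> intermediate operations -> terminal operation; nothing runs until collect() is called.
        List<String> longWords = words.stream()
                                      .filter(w -> w.length() > 6)      // Lambda expression as a Predicate
                                      .map(String::toUpperCase)         // method reference as a Function
                                      .collect(Collectors.toList());
        System.out.println(longWords);                                  // [COLLECTOR, SPLITERATOR, OPTIONAL]

        // The same bulk operation evaluated in parallel.
        int totalLength = words.parallelStream()
                               .mapToInt(String::length)
                               .sum();
        System.out.println("total length: " + totalLength);
    }
}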
Some highlights of the JAX London sessions
Josh Long (Pivotal)
Josh Long is the Spring Developer Advocate. Josh is the lead author on
Apress Spring Recipes, 2nd Edition, the O'Reilly "Pro Spring Roo" book,
the Pearson "Livelessons for Spring" and a committer on several Spring
projects and the Activiti BPMN framework.
'Bootiful' Code with Spring Boot
Spring Boot, the new convention-over-configuration centric framework
from the Spring team at Pivotal, marries Spring's flexibility with conven-
tional, common sense defaults to make application development not just
fly, but pleasant! Join Spring developer advocate Josh Long for a look
at what Spring Boot is, why it's turning heads, why you should consider
it for your next application (REST, web, batch, big-data, integration,
whatever!) and how to get started. Let's take advantage of the dynamic
nature of a virtual JUG: Josh Long will be live coding and (attempting
to) answer your questions on all things Spring and Spring Boot as he
introduces the technology.
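For readers who have not tried Spring Boot yet, a complete (if minimal) application looks roughly like the sketch below; this is our own example rather than Josh's demo, and it assumes Spring Boot 1.x with the web starter on the classpath. SpringApplication.run() boots an embedded servlet container with sensible defaults, so there is no web.xml and no explicit container setup.

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
@EnableAutoConfiguration
public class HelloApplication {

    // Auto-configuration detects Spring MVC on the classpath and wires it up automatically.
    @RequestMapping("/")
    String home() {
        return "Hello, JAX London!";
    }

    public static void main(String[] args) {
        SpringApplication.run(HelloApplication.class, args);
    }
}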
The Full Stack Java Developer
Today's Java developer is a rare bird indeed. SQL and JPA on the back-
end, or MongoDB or Hadoop? HTTP, REST and websockets on the web
tier? What about security? JavaScript, HTML, CSS, (not to mention LESS,
SASS, and CoffeeScript!) on the client? Today's Java developer is a full
stack developer, and has enough problems to deal with already. Join
Josh Long for a look at how Spring Boot reins in the complexity and empowers you, today's Java developer, to build applications quickly. He intends this talk to introduce the Spring tool system by live-coding an application to demonstrate websockets, REST, JPA and SQL, unit testing,
web security and HTML5 and Angular.js-powered client applications
rendered through Thymeleaf. This talk will also include a discussion on
modern web application concerns like resource processing (LESS, SASS,
CoffeeScript) and client-side dependency management options for the
Java developer.
David Delabassee (Oracle)
David Delabassee is a software evangelist working for the Java EE Team
at Oracle.
Server-Side JavaScript on the Java Platform
Project Avatar is an open source platform for server-side JavaScript
applications on the JVM. Project Avatar uses not only the Node.js pro-
gramming model, but also its ecosystem. These Java/JavaScript hybrid
applications can leverage capabilities of both environments, accessing the latest Node frameworks while taking advantage of the Java platform's scalability, manageability, tools, and extensive collection of Java libraries.
In this session, you will learn how to write hybrid applications that take
advantage of both ecosystems.
Pushing Java EE outside of the Enterprise: Home Automation & IoT
Java EE is a rich platform widely used in the Enterprise. This session will
look at the relevance of Java EE outside of this traditional domain, i.e. how Java EE is also relevant for a connected home and, more generally, for IoT. David Delabassee will show the various extension points and
technologies available in Java EE to integrate external systems such as
using JCA to connect to a KNX Bus, the most widely used protocol in
home automation, etc.
Home Automation is by nature highly event-driven, so we will look at how Message Driven Beans and CDI can be leveraged in this context.
Finally, he will also have a look at additional technologies that are rel-
evant in the IoT space such as JAX-RS, WebSocket, etc.
Peter Friese (Google)
Peter works as a Developer Advocate at Google in the Developer Rela-
tions Team in London, UK. He is a regular open source contributor,
blogger, and public speaker.
Build iOS Apps with Java
In this session, Peter Friese will show how to use tools like RoboVM to
build fully native iOS apps based on the Java language ecosystem. You
can see how easy it is to create native iOS UIs using just Java code.
He will also demonstrate how to use 3rd party Java or Objective-C
libraries to make your life easier. Finally, he will show how alternative
JVM languages like Xtend, Kotlin or Scala can be used to create apps
based on RoboVM. Time permitting, you will have a look at how to use
RoboVM to create cross-platform games based on the Java platform.
Introduction to Android Wear
In this session, Peter Friese will give an overview of Android Wear and
how to integrate it in your product strategy. He will look at the underly-
ing design principles and discuss a number of use cases for apps that
connect to wearable devices. After that, he will show some code exam-
ples and teach how to use the Android Wear SDK.
Agile knowledge exchange through acting
Pawel Badenski (Bottega IT Solutions) and
Johannes Thönes (ThoughtWorks)
Knowledge exchange is one of the primary
reasons people go to a conference or attend a
workshop. But talking about experience is not
always enough. Talking about it and then experiencing it through doing
is what makes the difference. That's why we've combined knowledge sharing and improvisation theatre in our workshop Agile Knowledge
Exchange through Acting.
In this workshop, participants act out improvised scenes from everyday
lean and agile life, including improvised retrospectives, standups, pair-
programming and other typical activities. We use improvisation games
to create interesting and challenging situations, allowing participants
to demonstrate how they might react and learn from the examples
provided by others.
In addition, participants will:
learn more about their emotions
gain experience using different communication styles
experiment with new behaviours
Dataflow, the forgotten way
Russel Winder (Freelancer)
Java 8 brings Streams to the JVM, an easy way of dealing with stream-based data evolution either sequentially or in parallel. So the JVM now has data parallelism covered. There has been a lot of marketing of actors as a tool for handling concurrency and parallelism; systems such as GPars and Akka bring this to Java, Scala and Groovy programming. However, there are other architectures: communicating sequential processes (CSP) and dataflow. Go has brought these models of concurrency and parallelism to the world of native code systems, but what about the JVM? GPars has dataflow capability.
This session will focus on the dataflow and CSP models of concurrency
and parallelism and will present some small practical examples as a
guide.
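The core dataflow idea is that a value's consumers run automatically once its producers have bound it, with the dependency graph rather than locks driving execution. The session uses GPars; purely as an illustration of the concept on the plain JVM, here is a sketch with JDK 8's CompletableFuture instead:

import java.util.concurrent.CompletableFuture;

public class DataflowSketch {

    public static void main(String[] args) {
        // Two independent "dataflow variables", each bound asynchronously.
        CompletableFuture<Integer> x = CompletableFuture.supplyAsync(() -> 20);
        CompletableFuture<Integer> y = CompletableFuture.supplyAsync(() -> 22);

        // z depends on x and y and is computed as soon as both are available.
        CompletableFuture<Integer> z = x.thenCombine(y, Integer::sum);

        System.out.println("z = " + z.join());  // blocks only here; prints z = 42
    }
}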
Hands on with Data Grids
Stephen Millidge (C2B2)
This workshop will give you a rapid introduction to working
with Data Grid technologies like Hazelcast, Infinispan,
Coherence and others. It will be a mixture of brief slides on
the concepts of Data Grids followed by a large amount of
hands-on Java code. In the workshop we will configure a data grid on Amazon EC2 with a node on each attendee's machine. We will also create a simple chat application using Data Grid Events, and finally we will hunt the POJO across the grid using Entry Processors for on-grid parallel
processing.
If you've never played with a Data Grid and therefore never had a chance
to appreciate the power of this technology, then we will show you how
simple and easy it is to get started and use it. You will go away inspired
to create your own on-grid applications. Or if not inspired, at least mildly
enthusiastic.
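To give a feel for how little code a data grid needs, here is a minimal Hazelcast sketch (our own, not the workshop material): every JVM that runs it joins the same cluster and shares a partitioned map.

import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import java.util.Map;

public class GridHello {

    public static void main(String[] args) {
        // Starts a cluster member; by default, members discover each other via multicast.
        HazelcastInstance node = Hazelcast.newHazelcastInstance();

        // A distributed map: entries are partitioned across members and backed up automatically.
        Map<String, String> chat = node.getMap("chat");
        chat.put(node.getCluster().getLocalMember().toString(), "hello from this node");

        System.out.println("cluster size: " + node.getCluster().getMembers().size());
        System.out.println("entries visible here: " + chat.size());
    }
}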
Hands-on performance tuning
Matt Braiser (C2B2 Consulting)
This hands on lab will present a number of applications
which have been built to include some common significant
performance bottlenecks. Attendees will be able to work
through the applications using a variety of free and open
source tooling to get practical experience of identifying and resolving
the bottlenecks. The techniques required will be covered at the start of
the labs, and the exercises will give hands-on experience of using the
various tools and techniques.
Performance tuning is seen as a black art by many, when really it is a
scientific process of measurement and elimination. This lab aims to give
attendees practical experience of identifying and resolving some of the
key performance problems that are seen in enterprise Java applications, although the principles will extend to applications in any language. Hope-
fully once attendees have identified and resolved a performance problem
in a lab environment, they will be able to take this expertise and apply it
to their own applications.
The economies of scaling software
Abdelmonaim Remani (PolymathicCoder Inc)
You spend your precious time building the perfect applica-
tion. You do everything right. You carefully craft every piece
of code and rigorously follow the best practices and design
patterns, you apply the most successful methodologies
software engineering has to offer with discipline, and you pay attention
to the most minuscule of details to produce the best user experience
possible. It all pays off eventually, and you end up with a beautiful code
base that is not only reliable but also performs well. You proudly watch
your baby grow, as new users come in bringing more traffic your way
and craving new features. You keep them happy and they keep coming
back. One morning, you wake up to servers crashing under load, and
data stores failing to keep up with all the demand. You panic. You throw
in more hardware and try to optimize, but the hungry crowd that was
once your happy user base catches up to you. Your success is slipping
through your fingers. You find yourself stuck between having to rewrite
the whole application and a hard place. It's frustrating, dreadful, and
painful to say the least. Don't be that guy! Save your soul before it's too
late, and come to learn how to build, deploy, and maintain enterprise-
grade Java applications that scale from day one.
Topics covered include: parallelism, load distribution, state management,
caching, big data, asynchronous processing, and static content delivery.
Leveraging cloud computing, scaling teams and DevOps will also be
discussed.
P.S. This session is more technical than you might think.
JAX London Workshop Day
Eclipse Interview

In Flux: Martin Lippert on the mission to marry the desktop and cloud IDE

Breaking a tired old paradigm: Pivotal's Martin Lippert outlines this groundbreaking new server-facing project to JAX Magazine.
JAX Magazine: Can you outline the project for our readers?
Martin Lippert: Project Flux is a new project at Eclipse. It's about moving towards the cloud-based IDE. Project Flux tries to solve two different problems in cloud-based architecture: first, the idea that future developer tooling will in some ways be based on the cloud, or run in the browser instead of the classical desktop idea. But we don't really know what this is going to look like. There are certain ideas about just implementing exactly the same IDE that's running on the desktop at the moment, just running in the browser. But I don't think that's the right way. So Flux is about trying to find out what this tool could look like, and how it could be implemented.
The second part is that, if you take a look at approaches to cloud-based developer tooling today, people are building something that's running inside the cloud, you can access it with a browser and that's it. You have to jump over the wall into this cloud-based world, and then you are caught there. You have to check out your stuff in the cloud, and leave everything else you were working on behind on your desktop IDE. It's a totally separate world. It's quite difficult for people to jump over this wall because of the either-or decision they have to make. So this step is quite huge.
Flux is trying to build a bridge between those two worlds, so toolmakers like myself can move towards cloud-based development tooling in a very smooth and slow way. People can start using cloud-based developer tools at the same time, while they are continuing to use their IDE. Cloud tooling is quite feature-rich now, but you don't want to leave everything behind. That's the basic thing we're trying to solve with Flux.
JAXmag: Originally this was known as Project Flight right?
Lippert: Yes, Flight was our first name. But right before we made it public and presented it to Eclipse, we saw that there was this project from Twitter that was also called Flight, so we decided it wasn't such a good idea to have two projects with this name. Anyway, I totally dig the name Flux too, like the Flux capacitor thing in the Back to the Future movies.
JAXmag: Going back to what you were saying before, there
is a cognitive dissonance between developing for the cloud
and IDE. Why do you think this is?
Lippert: I'm not sure if you can call this a cognitive dissonance. But I think many of the elements that you have in your desktop IDE, they are the usual elements; they're very familiar and you're used to them. You have your navigator and the open tabs and so on. Whereas on the browser, if you are trying to rebuild exactly this, that feels a little strange. It's not really adopting the idea of web applications. Interestingly, in the Orion project at Eclipse, they are trying to move away from the traditional desktop IDE paradigm, but are finding that people actually want some of these elements included (like context-menus, the navigator tree, etc.). Maybe it's just the natural way of doing things, and maybe the traditional way is actually better. But I like to step away, and step back from these traditional views, to trigger new thoughts about these things. Maybe you'll end up building similar things that are there in existing desktop IDEs if it's a good solution; maybe not. I'm not totally sure what the future cloud-based developer tool will look like exactly; we have a lot of ideas, and have built prototypes and things like that, but we don't have a perfect answer yet. I think it will evolve over time.
Portrait
Martin Lippert is a Principal Software Engineer at Pivotal and
works on development tools for the Spring Framework, Cloud
Foundry and JavaScript. Together with his team, he is responsible
for the Spring IDE, Spring Tool Suite and the Cloud Foundry In-
tegration for Eclipse. Martin is also co-founder of it-agile GmbH,
one of the leading agile companies in Germany. He is interested
in Agile and Lean Software Development, refactoring techniques,
innovation, and the delivery of high-quality open-source software.
JAXmag: Do you have an idea for the basic blueprints for the
architecture of Flux?
Lippert: Yes, we have a pretty clear vision for the architecture of Flux, and it's totally different to what we have in current desktop IDEs. The basic idea is that you have very loosely coupled components that are running in a cloud-based environment. Those very small components are some kind of micro services, and are connected with asynchronous messaging. There's no RESTful API; there's no server that you call to do some stuff. They are communicating with messages only. Everything else is implemented on top of this messaging backbone. For example, the synchronization protocol that Flux uses to sync projects and files across services is implemented as a set of messages that are being sent around. The same is true for the desktop IDE. It is connected to this message channel and can communicate with other Flux services that way. The real-time sync while you are typing is implemented on top of this asynchronous messaging as well. It creates a whole new vision for building a distributed developer tooling platform.
The project and file synchronization in Flux is a bit like Dropbox for code. You can connect your projects in your favorite desktop IDE to Flux and from there on, all the files are synced with Flux; it feels very seamless, and it's all being backed up automatically in the cloud. That's a very fundamental service of Flux. On top of that, other service components that are running inside the cloud can participate in this syncing mechanism as well, and that's very interesting, because it opens up a whole load of new possibilities for building all kinds of microservices inside the cloud. It opens up a whole new world of possibilities, and to me it's an awesome foundation for the future of cloud-based developer tooling.
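As a rough, self-contained illustration of that message-only style (a toy in-process bus, not the actual Flux protocol, which runs over a shared channel between the desktop and the cloud), a service simply subscribes to event types and reacts; nothing calls a server directly:

import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.function.Consumer;

public class MessageBusSketch {

    private static final Map<String, List<Consumer<Map<String, String>>>> subscribers = new ConcurrentHashMap<>();
    private static final ExecutorService executor = Executors.newCachedThreadPool();

    static void subscribe(String type, Consumer<Map<String, String>> handler) {
        subscribers.computeIfAbsent(type, t -> new CopyOnWriteArrayList<>()).add(handler);
    }

    static void publish(String type, Map<String, String> message) {
        // Asynchronous, fire-and-forget delivery to every interested service.
        subscribers.getOrDefault(type, new CopyOnWriteArrayList<>())
                   .forEach(handler -> executor.submit(() -> handler.accept(message)));
    }

    public static void main(String[] args) throws InterruptedException {
        // A "cloud-side" service reacting to file changes broadcast by the sync protocol.
        subscribe("resourceChanged", msg ->
                System.out.println("re-validating " + msg.get("resource") + " in project " + msg.get("project")));

        // The "desktop IDE" announcing an edit: no REST call, just a message on the channel.
        Map<String, String> event = new HashMap<>();
        event.put("project", "demo");
        event.put("resource", "src/Main.java");
        publish("resourceChanged", event);

        executor.shutdown();
        executor.awaitTermination(1, TimeUnit.SECONDS);
    }
}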
Advert
Also available to buy on:
Clojure Made
Simple
Introduction to Clojure
John Stevenson
HTML5
Security
Carsten Eilers
Only $3.99 each
Take a shortcut.
Searching for the code to success?
Buy now at: www.developerpress.com
API

Conceptual curve

Open API & API management: Redefining SOA

A deep dive into the concepts of Open API and the obstacles and opportunities it presents.

by Kai Wähner
Open API represents the leading edge of a new business mod-
el, providing innovative ways for companies to expand brand
value and routes to market, and create new value chains for
intellectual property. In the past, SOA strategies mostly tar-
geted internal users. Open API targets mostly external part-
ners. So, API management requires developer portals, key
management, and metering/billing facilities that SOA man-
agement never provided.
This article introduces the concepts of Open API, its challenges and opportunities. API Management will become important in many areas, whether for business-to-business (B2B) or business-to-consumer (B2C) communication. Several real-world use cases will show how to gain leverage from API Management. The second part of the article discusses the components and architecture of API Management solutions and compares different API management products, and how these products complement other middleware software.
The Open API Business Model
Service-oriented architecture (SOA) is already established
within most enterprises. It is a commodity. As discussed by Anne Thomas Manes as long ago as 2009, the next software evolution can be built on top of SOA; see "SOA is Dead; Long Live Services" for more details: http://apsblog.burtongroup.com/2009/01/soa-is-dead-long-live-services.html.
Welcome to the Open API Business Model, which finally represents this evolution, building on top of SOA and services. The modern buzzword for this topic is Open API. Analysts and software vendors classify the products around Open API as "API Management".
What is Open API?
Let's take a look at the definition of Open API from Wikipedia: "Open API is a word used to describe sets of technologies that enable websites to interact with each other by using REST, SOAP, JavaScript and other web technologies. While its possibilities aren't limited to web-based applications, it's becoming an increasing trend in so-called Web 2.0 applications." This definition should be extended by clarifying that Open API is also used more and more for automated machine-to-machine (M2M) communication. It is not just used for communicating with web browsers and mobile apps.
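In practice, an Open API is typically exposed as a plain HTTP/REST endpoint that browsers, mobile apps and machines can all call. As a minimal, illustrative sketch (not tied to any particular API Management product), a Java service published with JAX-RS 2.0 might look like this, assuming a JAX-RS runtime such as Jersey is available to deploy it:

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

// GET /orders/42 returns a small JSON document that any partner, app or device can consume.
@Path("/orders")
public class OrderResource {

    @GET
    @Path("/{id}")
    @Produces(MediaType.APPLICATION_JSON)
    public String order(@PathParam("id") String id) {
        // Hand-built JSON keeps the example dependency-free; a real service would use JAXB or Jackson.
        return "{\"id\": \"" + id + "\", \"status\": \"SHIPPED\"}";
    }
}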
Drivers for Open API
The introduction of Open API establishes several benefits:
Enable New Business Models: Increase revenue from
existing services through partner ecosystems, and extend
presence to social and mobile platforms serving digital
customers.
Deliver High Performance: Accelerate edge services per-
formance through load balancing, caching, and a high-
performance event-driven architecture.
Secure Internal Services for External Exposure: Stand-
ardize authentication and authorization across the enter-
prise and through to partners, and protect services from
attack through security policies, message verification, and adaptive throttling.
Map Business Agreements to Enforceable Policies: Use
throttles to enforce SLAs for service consumers, and
monetize by metering or charging for systems integra-
tion.
Federate Disparate Enterprise Applications: Unify cloud
and mobile platforms through service aggregation,
content-based routing, protocol bridging, mediation, and
lightweight orchestration.
Rapidly On-Board Partners: Create new channels with
hot deployment of partners, services, and policies.
The benefits generate several opportunities for business. Without IT, Open API would not be possible. However, Open API initiatives are usually driven by the line-of-business, not IT (see Figure 1, taken from http://www.tibco.com/products/automation/api-management/default.jsp).

Figure 1: Drivers for Open API

Business can increase revenue, reduce costs and improve efficiency by introducing Open API. Revenue growth:
New revenue streams via repurposed intellectual prop-
erty
Expand channel partners and customers
Extend brand value and market reach
Cost reduction/improved efficiency:
Reduce costs through partner self-service
Increase supply chain and B2B flexibility
Enhance R&D through crowd-sourced innovation
Due to these opportunities, "[many companies want to] expose APIs directly to third-party development organizations. These inquiries come from companies across multiple industries, from entertainment and media to retail, financial services, telecommunications, and even government" (see "The Forrester Wave: API Management Platforms, Q1 2013" by Eve Maler and Jeffrey S. Hammond, February 5, 2013, http://www.forrester.com/The+Forrester+Wave+API+Management+Platforms+Q1+2013/fulltext/-/E-RES81441).
Real world use cases for Open API
Open API is not just a buzzword. Many success stories exist already. The following shows different use cases where companies increased their revenue significantly by making their internal APIs public. These companies monetize with every service call:
PayPal is an international e-commerce business allowing payments and money transfers to be made through the Internet. It was acquired by eBay in 2002, and was used mainly for payment of eBay auctions for several years. In the meantime, PayPal has been integrated into thousands of other online shops which sell anything from books to computers to coupons. The payment process is easy to implement for the online shop, and also easy to use and secure for the customer. PayPal earns money with every payment of any online shop. PayPal offers its payment service through several different APIs and applications or gadgets on top of it. See the PayPal community PayPal Forward for numerous success stories: https://www.paypal-community.com/t5/PayPal-Forward/bg-p/PPFWD.
Amazon started as an international electronic commerce
company selling books, DVDs, and other goods. Today,
it is the largest online retailer worldwide. Therefore,
Amazon needs thousands of servers and a good, stable
infrastructure. In 2006, Amazon started renting parts of
this infrastructure as a cloud service (infrastructure as a service) via Open API. Today, Amazon is the worldwide leader in this segment. Revenue is growing quarter by quarter. See "The Secret to Amazon's Success: Internal APIs" (http://apievangelist.com/2012/01/12/the-secret-to-amazons-success-internal-apis/) to understand how
Amazon realizes this Open API approach.
Mobile enablement is one of the hottest topics for API
Management as most companies enable their workforce
to become more mobile, and also customers want to use
their mobile phone for communicating with the com-
panies. The easiest way to solve this is offering an open
API to push the implementation of new mobile applications using these APIs. A funny success story for mobile enablement is Domino's pizza service, where a hacker found out that he could order pizzas via a command-line API; see "Track Your Domino's Pizza Order from a Terminal" (http://lifehacker.com/388708/track-your-dominos-pizza-order-from-a-terminal). This is a post from 2008. Think about yourself and your friends: how many of you order a pizza or a taxi via mobile apps on your smartphone today? In a few years, you will use Google Glass or the panel on your refrigerator to do these things.
These use cases make internal interfaces public as Open APIs to be used by consumers. Besides these, many other enterprise scenarios exist where API Management makes a lot of sense, for example:
Partner Gateway: Access control for well-known external parties
Mobile App Gateway: Access control for apps deployed
externally
Cloud Integration Gateway: Governance and mediation
control for SaaS
Internal Governance: Manage internal SOA
Technical Overview of Open API
The drivers for Open API were explained in the last section. The most important thing is to be aware of these opportunities. The upcoming section discusses the technical concepts
of Open API and its corresponding API Management prod-
ucts. The section takes a look at the basic lifecycle, different
components, technical architecture and relevant roles for
implementing the Open API concept in your enterprise.
Open API Lifecycle
Figure 2: Lifecycle of Open API

The lifecycle of Open API is illustrated in Figure 2, taken from http://www.tibco.com/products/automation/api-management/default.jsp. It can be described in four steps:
1. API Providers enable access to their data or business
functionality using public APIs.
2. API Consumers (e.g. external developers) embed the
API functionality in their applications.
3. API usage is monetized with a pay-for-use model.
4. Focus is on leveraging existing services in new ways.
Components of an Open API Solution
Before API consumers can use APIs, some work has to be done by the API provider. Offering an Open API solution involves the following steps:
Create an API engine on premise (in your datacenter or DMZ) or use a (private or public) cloud solution.
Open enterprise services as APIs.
Make it easy for others to use the APIs.
Act on feedback to improve your offering (e.g. regarding
SLAs, customer satisfaction) and to increase revenue.
To realize this solution, you need three components: API
Gateway, API Portal Platform and API Analytics. These
components are described in the following sections.
API Gateway
The API Gateway is a centralized access point for managing enterprise APIs, providing mediation between internal and external services, systems, and devices. You create policies to secure and control services, and to ensure performance and availability at web scale, through security and access control, event-based runtime policy management, and federated web-scale performance.
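To make "centralized policy enforcement" concrete, here is a generic servlet filter sketch (our illustration, not the API of any particular gateway product): the gateway sits in front of the real services, checks each consumer's API key, and only then forwards the call.

import java.io.IOException;
import java.util.HashSet;
import java.util.Set;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// A toy gateway policy: reject requests without a known API key before they reach any backend.
public class ApiKeyFilter implements Filter {

    private final Set<String> knownKeys = new HashSet<>();

    @Override
    public void init(FilterConfig config) {
        // In a real gateway these keys come from the portal's key management, not from code.
        knownKeys.add("partner-a-key");
        knownKeys.add("partner-b-key");
    }

    @Override
    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest http = (HttpServletRequest) request;
        String apiKey = http.getHeader("X-Api-Key");

        if (apiKey == null || !knownKeys.contains(apiKey)) {
            ((HttpServletResponse) response).sendError(HttpServletResponse.SC_UNAUTHORIZED, "missing or unknown API key");
            return;
        }
        chain.doFilter(request, response);  // policy passed: hand the call on to the protected service
    }

    @Override
    public void destroy() {
    }
}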
API Portal Platform
API Providers and API Consumers use an API Portal Plat-
form. It is a complete software environment for building,
deploying, and managing a customized developer portal for
API exchange.
API providers use the portal for API portfolio and life-
cycle management, product creation and pricing, partner
management, and monetization.
API consumers use the portal to browse the product catalog (i.e. combinations of APIs with different prices/usage quotas),
explore and test APIs, and subscribe to services.
API Analytics
API Analytics is an integrated analytics server, which offers
reporting and monitoring capabilities including:
Reporting dashboard (out of the box or customized re-
ports)
Monitor KPIs and SLA compliance
Drill-down into specific user behaviour
Understand API usage patterns
Full execution auditing and debugging
Isolate problems through data discovery
Trend analysis to identify future capacity needs
Look for opportunities to grow the API ecosystem
Architecture of an API Management Solution
After describing the different components needed to build an Open API solution, let's take a look at the basic deployment architecture (see Figure 3, taken from http://www.tibco.com/products/automation/api-management/default.jsp). Several challenges have to be solved. The major architectural challenges for API Management are scalability and policy-based service delivery. But first, let's discuss how API Management fits into an enterprise architecture at all.
API Management in an Enterprise Architecture
The first question is how API Management fits into the existing enterprise architecture. The short answer: it is a perfect complementary add-on. It can be placed between existing services and API consumers. Usually, it is deployed on premise in the DMZ of the enterprise, or deployed as a cloud service.
The architecture behind the DMZ does not have to
change. Existing infrastructure for realizing integration
(ESB), business processes (BPM), event processing (CEP),
enterprise software (ERP, CRM), etc. can still be leveraged
as before. It just gets a new gateway for publishing and monetization.
Scalability
API Management solutions have to scale. Unlike internal service calls in a SOA, public APIs have other requirements: service consumers are not known, SLAs differ, and the number of service calls depends on the success of external partners and can increase quickly and significantly. Therefore, an API Management solution has to ensure different scalability aspects:
No single-point-of-failure
Multiple instances of a gateway can be deployed across
multiple hosts
Can be scaled dynamically to address peak demand
Can be deployed across data centers for geo-resilience
Allow caching to improve performance
Gateways report availability to load-balancer
Centralized logging of distributed deployments
Infrastructure can be monitored by existing agents and tools
Policy-based Service Delivery
The service delivery of open APIs has to be policy-based. Depending on the use case, many of the following policy aspects have to be managed in a flexible, configurable way:
Figure 3: Architecture of an API Management Solution
Authentication and Authorization
Access control granularity down to service endpoint or
operation
Single-edit configuration changes through a web user interface
Throttle Capabilities (illustrated below)
Rate and high-water mark: technical throttles designed to protect particular service implementations
Quota: commercial throttle designed to prevent commercial over-use of services (e.g. wholesale usage)
Time-of-day: throttle behavior down during known busy periods or maintenance
Error-rate/payload-size: technical throttles designed to minimize the impact of external parties
Group: logical throttle to help manage large partners and large service deployments
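To make the rate/quota idea concrete, here is an illustrative fixed-window throttle of the kind a gateway applies per consumer or API key; it is a simplified sketch, not the mechanism of any specific product:

public class RateThrottle {

    private final long limitPerWindow;
    private final long windowMillis;
    private long windowStart = System.currentTimeMillis();
    private long counter = 0;

    public RateThrottle(long limitPerWindow, long windowMillis) {
        this.limitPerWindow = limitPerWindow;
        this.windowMillis = windowMillis;
    }

    // Returns true if the call is within the consumer's allowance for the current time window.
    public synchronized boolean tryAcquire() {
        long now = System.currentTimeMillis();
        if (now - windowStart >= windowMillis) {
            windowStart = now;   // a new window starts, so the counter is reset
            counter = 0;
        }
        return ++counter <= limitPerWindow;
    }

    public static void main(String[] args) {
        RateThrottle throttle = new RateThrottle(5, 1000);   // e.g. 5 calls per second per API key
        for (int call = 1; call <= 7; call++) {
            System.out.println("call " + call + (throttle.tryAcquire() ? " accepted" : " throttled"));
        }
    }
}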
Routing Capabilities (illustrated below)
By operation: Routing based on the requested service, such as a SOAPAction or URL string
By requestor: Routing based on the name, type or class of the requesting device
By version: Version routing can be used to support multiple concurrent versions of a service or a service implementation
By protocol: Routing can be used to safely abstract requests bi-directionally
By time of day: Routing to different services (explicitly, or via orchestrations)
By identity: Different partners and consumers can be routed to different services
By size: Some requests (e.g. with attachments) can be routed by size to appropriate service handlers
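A content-based router of this kind boils down to mapping request attributes onto backend endpoints. The sketch below is illustrative only; the endpoint URLs and the operation/version keys are invented, and real gateways express the same rules declaratively as policies rather than in code:

import java.util.HashMap;
import java.util.Map;

public class ApiRouter {

    // Route table keyed by operation and version; a real gateway would also consider
    // identity, payload size, time of day, and so on, as listed above.
    private final Map<String, String> routes = new HashMap<>();

    public ApiRouter() {
        routes.put("orders:v1", "http://legacy-backend:8080/orderService");
        routes.put("orders:v2", "http://new-backend:8080/orders");
        routes.put("payments:v1", "http://payments:8080/paymentService");
    }

    public String route(String operation, String version) {
        String target = routes.get(operation + ":" + version);
        if (target == null) {
            throw new IllegalArgumentException("no route for " + operation + " " + version);
        }
        return target;
    }

    public static void main(String[] args) {
        ApiRouter router = new ApiRouter();
        // Two consumers asking for the same operation in different versions land on different backends.
        System.out.println(router.route("orders", "v1"));
        System.out.println(router.route("orders", "v2"));
    }
}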
Roles
Different roles have to be defined for Open API. API consumers represent partners and external developers, while API providers can be developers, administrators, or partner managers. Let's take a look at the functions each role has to accomplish:
Developers
Provide a catalog for organizing and publishing APIs
Supply a repository for documentation, sample code,
usage tips
Create product options and support plans
Provide REST and SOAP service interoperability
Monitor and report on API usage and performance
Administrators
Manage environments and developer accounts
Set access rights by user or organization
Configure deployment policies for APIs
Partner Managers
Partner onboarding
Community management
API Consumers
Discover and use APIs
Use a self-service portal for enrollment, key requests, and
API testing
Select product options and support plans
Monitor and report on API consumption
Products
Different kinds of API Management solutions are available
on the market. More are arising year by year. Analysts such
as Gartner or Forrester see Open API as a hot topic for fu-
ture IT investments and growing revenue. Let's recap from above: to realize an Open API solution, you need three components:
A portal (used by API providers to offer API products
and by API consumers to use them)
A gateway (configured by API providers)
An analytics tool (used by providers and consumers) to
react to feedback, usage and other events
Some vendors offer all these components. Others just offer some of these features. On one side, there are vendors offering just a portal to publish existing APIs and manage payment/billing. On the other side, there are vendors which offer a total solution for building services, making them public (including billing), and analyzing them to improve your services. And some vendors offer something in between.
To make the right selection of an API Management product, you have to think about your requirements and then evaluate corresponding products available on the market. An extensive comparison of different vendors would go beyond the scope of this article. However, the next two sections give a short overview of questions you should consider before making a selection, and a short overview of some available API Management products.
Selection Criteria
Vendors differ a lot regarding their API Management offer-
ings. Ask yourself the following questions, decide what you need, and evaluate the corresponding vendors accordingly:
Do you just want to build a directory for your existing
service, or do you want a real infrastructure for building,
governing, deploying, and managing your services?
Do you just want to publish REST services, or do you
also want/have to make other service protocols such as
SOAP or JMS public?
Do you need flexible configuration and routing options
using different security standards (e.g. LDAP, SAML,
Kerberos, OAuth, WS-*, XACML, etc.)?
Do you need an elastic, highly scalable architecture for
millions of messages (based on event driven architecture
instead of synchronous HTTP calls)?
Do you need to extend the portal to your needs (regard-
ing topics such as service management, developer portal,
analytics)?
Do you want to leverage other products of the same
vendor (e.g. products for integration, mapping, trans-
formation, routing, business processes, complex event
processing, etc.)?
Do you want to deploy your API Management solution
on premise or in the cloud? If in the cloud, is virtualiza-
tion through VMs fine for you, or do you want a real, i.e. elastic, cloud solution?
Do you want a hardware appliance or just software? Is
it required to configure your API engine for running in
your DMZ on existing servers?
Products
Available products differ a lot in functionality and matu-
rity. A more detailed evaluation is required to make the
right decision for your use case. The following is just a short
overview of different vendors which offer API Management solutions (not a complete list):
Apigee (http://apigee.com) offers a complete API Plat-
form. As the companys name states, Apigee is focused
especially on API Management. The solution is designed
to meet the challenges of the new mobile, social, cloud
marketplace head-on. Users can start with a (very lim-
ited) free version.
Mashery (http://www.mashery.com) is another solution
focused especially on API Management. It was born out
of the Web mashup movement of the 2000s, hence its
name. However, Intel Corporation acquired it in 2013.
Mashery offers a very affordable and easy to use cloud
solution to publish existing APIs. Thus, it is good for
simple scenarios. Users can start with a (limited) free
version.
Layer 7 (http://www.layer7tech.com) has deep roots in
the XML gateway market and offers many advanced
routing and security features. It has extended its product portfolio to API Management. The solution is very powerful, but it requires solid technical knowledge.
TIBCO (http://www.tibco.com/products/automation/
api-management/default.jsp) provides a comprehensive
operating platform called API Exchange, which lets you
build and test APIs, define runtime governance policies,
migrate APIs between environments, and monitor and
report on API usage. TIBCO API Exchange leverages
other TIBCO products to combine ESB, BPM, CEP, etc.
with its API Management solution. TIBCOs products
focus on complex enterprise scenarios.
IBM is also focused on complex enterprise scenarios and has different powerful API Management solutions in its portfolio (www.ibm.com/software/products/en/api-management-family/), for example IBM API Management, DataPower XML gateway, Cast Iron Live Web API Services, and others.
Several further vendors offer API Management solutions. Some of the most visible ones are 3scale (http://www.3scale.net/), Vordel (http://www.vordel.com/) and Apiphany (http://apiphany.com/, recently acquired by Microsoft), which focus especially on API Management, while Software AG (http://www.softwareag.com/corporate/products/wm/integration/products/api/overview/default.asp) and Oracle (http://www.oracle.com/us/products/middleware/soa/api-management/overview/index.html?ssSourceSiteId=opn) are examples of big software vendors that offer not just API Management, but solutions for the whole integration portfolio. MuleSoft (http://www.mulesoft.com/platform/api/manager) and WSO2 (http://wso2.com/products/api-manager/) are two open source Enterprise Service Bus vendors which have also included API Management solutions in their portfolios.
Conclusion
Open API and API Management represent the leading edge of a new business model. Enterprises have very good opportunities to increase revenue, reduce costs and improve efficiency by publishing and monetizing internal services to external consumers via API Management solutions. Many API Management solutions are available on the market, and their functionality differs a lot. The products are still young and will have to mature over the coming months and years, but they are already good enough to get started with creating new business models and increasing revenue. Open API and API Management have a great future.
Kai Wähner works as Technical Lead at TIBCO. All published opinions are his own and do not necessarily represent his employer. Kai's main areas of expertise lie within the fields of Application Integration, Big Data, SOA, BPM, Cloud Computing and Enterprise Architecture Management. He is a speaker at international IT conferences such as JavaOne, ApacheCon, JAX or OOP, writes articles for professional journals, and shares his experiences with new technologies on his blog www.kai-waehner.de/blog. Find more details and references (presentations, articles, blog posts) on his website www.kai-waehner.de.
kontakt@kai-waehner.de | @KaiWaehner
Masterclass
Cracking Microservices practices
Learn how the Apache Camel framework can assist you in creating microservice-style applications in Java with just a few lines of code.
by Bilgin Ibryam
I've been using microservice architectures since before I knew they were called that. I used to create pipeline applications made up of isolated modules which interacted with each other through queues. Since then, a number of (ex-)ThoughtWorks gurus have talked about microservices: Fred George[1], then James Lewis[2] and finally Martin Fowler[3] have all blogged about the topic, all of them helping to make it the next buzzword, and now every company wants to have a few microservices.
Nowadays there are #hashtags, endorsements, likes, trainings, even two-day conferences[4] about it. The more I read and hear about microservice architectures, the more I realize how Apache Camel (and the accompanying projects around it) fits perfectly with this style of application. In this article we will see how the Apache Camel framework can help us create microservice-style applications in Java with just a few lines of code.
Microservices Practices
There is nothing new in microservices. Many similar applications have been designed and implemented as such for a long time. Microservices is just a new term that describes a style of software systems that have certain characteristics and follow certain principles. It is an architectural style where an application or software system is composed of individual standalone services communicating via lightweight protocols in an event-based manner. In the same way that TDD helps us create decoupled, single-responsibility classes, microservices principles guide us to create simple applications at the system level. Here we will not discuss the principles and characteristics of such architectures, or argue whether it is a way of implementing SOA or a totally new approach to application design, but rather look at the most common practices used for implementing microservices and how Apache Camel can
help us accomplish that in practice.
There is no definitive list (yet), but if you read around or watch the videos by Fred George[1] and James Lewis[2], you will notice that the following are quite common practices for creating microservices:
1. Small in size. The very fundamental principle of microservices says that each application is small in size and only does one thing and does it well. It is debatable what constitutes small or large; the number varies from 10 LOC to 1000, but personally I like the idea that it should be small enough to fit in your head. There are people with big heads, so even that is debatable, but I think that as long as an application does one thing, does it well, and is not considered a nanoservice[5], that is a good size. Camel applications are inherently small in size. A CamelContext with a couple of routes, error handling and helper beans is approximately 100 LOC. Thanks to Camel DSLs and URIs for abstracting endpoints, receiving an event either through HTTP or JMS, unmarshaling it, persisting it and sending a response back is around 50 LOC. That is small enough to be understood end-to-end easily, rewritten, or even thrown away without feeling any remorse.
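To make that concrete, here is a minimal sketch of such a route. It is only an illustration: the Jetty URI, the orderRepository bean and the response payloads are invented, and the JSON body is simply unmarshaled into a Map using the Jackson data format (camel-jetty and camel-jackson are assumed to be on the classpath).

import java.util.Map;

import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.model.dataformat.JsonLibrary;

public class OrderServiceRoute extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        // turn any failure into a simple error response instead of a stack trace
        onException(Exception.class)
            .handled(true)
            .transform(constant("{\"status\":\"error\"}"));

        // receive over HTTP, unmarshal, persist via a bean, send a response back
        from("jetty:http://0.0.0.0:8080/orders")
            .unmarshal().json(JsonLibrary.Jackson, Map.class)
            .to("bean:orderRepository?method=save")
            .transform(constant("{\"status\":\"accepted\"}"));
    }
}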
2. Having transaction boundaries. An application consisting of multiple microservices forms an eventually consistent system of systems where the state of the whole system is not known at any given time. This on its own creates a barrier to understanding and adopting microservices in teams who are not used to working with this kind of distributed application. Even though the state of the whole system is not fixed, it is important to have transaction boundaries that define where a message currently belongs. Ensuring transactional behaviour across heterogeneous systems is not an easy task, but Camel has great transactional capabilities. Camel has endpoints that can participate in transactions, transacted routes and error handlers, idempotent consumers and compensating actions, all of which help developers easily create services with transactional behaviour.
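As a small illustration, the sketch below combines a transacted JMS consumer with an idempotent consumer, so that redeliveries of the same order do not cross the transaction boundary twice. The queue names, header and bean are invented, and a configured ActiveMQ broker (with local JMS transactions for the consumer) is assumed.

import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.processor.idempotent.MemoryIdempotentRepository;

public class IdempotentOrderRoute extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        // consume inside a local JMS transaction; a failure later in the route
        // rolls the message back onto the queue for redelivery
        from("activemq:queue:incomingOrders?transacted=true")
            // skip redeliveries we have already processed, keyed by a business id
            .idempotentConsumer(header("orderId"),
                    MemoryIdempotentRepository.memoryIdempotentRepository(200))
            .to("bean:orderService?method=process")
            .to("activemq:queue:processedOrders");
    }
}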
3. Self monitoring. This is one of my favorite areas of microservices. Services should expose information that describes the state of the various resources they depend on and of the service itself: statistics such as the average, minimum and maximum time to process a message, the number of successful and failed messages, the ability to track a message, memory usage and so forth. This is something you get out of the box with Camel without any effort. Each Camel application gathers JMX statistics by default for the whole application, individual routes, endpoints and custom beans. It will tell you how many messages have completed successfully, how many failed, where they failed, and so on. Nor is this a read-only API: JMX also allows updating and tuning the application at run time, so based on these statistics you can tune the application accordingly using the same API. The information can also be accessed with tools such as jConsole, VisualVM and Hyperic HQ, exposed over HTTP using Jolokia, or fed into a great web UI called Hawtio[6] (Figure 1).
If the functionality available out of the box doesn't fit your custom requirements, there are multiple extension points such as the Nagios, JMX and Amazon CloudWatch components and interceptors for custom events. Logging in messaging applications is another challenge, but Camel's MDC logging combined with the throughput logger makes it easy to track individual messages or get aggregated statistics as part of the logging output.
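Switching these hooks on takes very little code. The following sketch enables MDC and breadcrumb logging and adds a throughput logger that reports aggregated statistics for every 100 completed messages; the queue, bean and log category are invented.

import org.apache.camel.builder.RouteBuilder;

public class MonitoredRoute extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        // put route id and breadcrumb id into the logging MDC for every log line
        getContext().setUseMDCLogging(true);
        getContext().setUseBreadcrumb(true);

        from("activemq:queue:payments").routeId("payments")
            .to("bean:paymentService?method=process")
            // the log component with groupSize uses the throughput logger:
            // one aggregated line per 100 completed exchanges
            .to("log:payments.stats?groupSize=100");
    }
}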
5. Designed for failure. Each of the microservices can be down or unresponsive for some time, but that should not bring the whole system down. Thus microservices should be fault tolerant and able to recover when that is possible. Camel has lots of helpful tools and patterns to cope with these scenarios too. A Dead Letter Channel can make sure messages are not lost in case of failure; its retry policy can retry sending a message a couple of times for certain error conditions, using a custom back-off method and collision avoidance. Patterns such as the Load Balancer (which supports Circuit Breaker[7], Failover and other policies), the Throttler to make sure certain endpoints do not get overloaded, Detour and Sampler are all needed in various failure scenarios. So why not use them rather than reinventing the wheel in each service?
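For example, a dead letter channel with exponential back-off and collision avoidance takes only a few lines; the queue names and the downstream HTTP endpoint below are invented for illustration.

import org.apache.camel.builder.RouteBuilder;

public class ResilientOrderRoute extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        // retry failed deliveries a few times with exponential back-off,
        // then park the message on a dead letter queue instead of losing it
        errorHandler(deadLetterChannel("activemq:queue:orders.dead")
            .maximumRedeliveries(5)
            .redeliveryDelay(1000)
            .useExponentialBackOff()
            .backOffMultiplier(2)
            .useCollisionAvoidance());

        from("activemq:queue:orders")
            // the downstream service may be down; redelivery and the dead
            // letter queue make sure the order is not lost
            .to("http://inventory-service/reserve");
    }
}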
6. Highly Configurable. It should be easy to configure the same application for high availability, scale it for reliability or throughput, or, said another way, to have different degrees of freedom through configuration. When creating a Camel application using the DSLs, all we do is define the message flow and configure the various endpoints and other characteristics of the application. So Camel applications are highly configurable by design. When all the various options are externalized using the properties component, it is possible to configure an application for different expectations and redeploy it without touching the actual source code at all. Camel is so configurable that you can replace one endpoint with another (for example replace an HTTP endpoint with JMS) without changing the application code at all, which we will cover next.
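As a small sketch of such externalized configuration, the endpoints of a route can be resolved from a properties file via the properties component; the file location and property names here are assumptions.

import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.component.properties.PropertiesComponent;

public class ConfigurableRoute extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        // resolve {{..}} placeholders from an external file, so the same code
        // can be redeployed with different endpoints and options
        PropertiesComponent props = new PropertiesComponent("classpath:service.properties");
        getContext().addComponent("properties", props);

        from("{{order.input.endpoint}}")       // e.g. jetty:http://0.0.0.0:8080/orders
            .to("{{order.output.endpoint}}");  // e.g. activemq:queue:orders
    }
}

A different service.properties per environment is then all that changes between deployments.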
Figure 1: Hawtio
7. With smart endpoints. Microservices favour RESTish protocols and lightweight messaging rather than Web Services. Camel favours anything: it has HTTP support like no other framework, with components for asynchronous HTTP, the GAE URL fetch service, Apache HTTP Client, Jetty, Netty, Servlet, Restlet and CXF, and multiple data formats for serializing and deserializing messages. As for queuing support, out of the box there are connectors for JMS, ActiveMQ, ZeroMQ, Amazon SQS, Amazon SNS, AMQP, Kestrel, Kafka, Stomp, you name it.
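To illustrate how little the protocol matters, the sketch below exposes the same processing logic over HTTP and over a message queue, each entry point being a single line; all names are invented.

import org.apache.camel.builder.RouteBuilder;

public class QuoteEndpoints extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        // two thin protocol adapters in front of the same logic
        from("jetty:http://0.0.0.0:8181/quote").to("direct:quote");   // synchronous HTTP entry point
        from("activemq:queue:quote.requests").to("direct:quote");     // asynchronous messaging entry point

        from("direct:quote")
            .to("bean:quoteService?method=calculate");
    }
}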
8. Testable. There is no common view on this characteristic. Some favour no testing at all and rely on business metrics; some cannot afford bad business metrics at all. I like TDD, and for me, having the ability to test my business POJOs in isolation from the actual message flow, and then to test the flow separately by mocking some of the external endpoints, is invaluable. Camel's testing support can intercept and mock endpoints, simulate events and verify expectations with ease. Having well-tested individual microservices is the only guarantee that the whole system works as expected.
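A minimal sketch of that testing style, using camel-test with an invented route and a mock endpoint:

import org.apache.camel.EndpointInject;
import org.apache.camel.Produce;
import org.apache.camel.ProducerTemplate;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.component.mock.MockEndpoint;
import org.apache.camel.test.junit4.CamelTestSupport;
import org.junit.Test;

public class OrderRouteTest extends CamelTestSupport {

    @Produce(uri = "direct:orders")
    private ProducerTemplate producer;

    @EndpointInject(uri = "mock:audit")
    private MockEndpoint audit;

    @Override
    protected RouteBuilder createRouteBuilder() {
        return new RouteBuilder() {
            @Override
            public void configure() {
                // the route under test: only VIP orders are audited
                from("direct:orders")
                    .filter(body().contains("VIP"))
                    .to("mock:audit");
            }
        };
    }

    @Test
    public void onlyVipOrdersAreAudited() throws Exception {
        audit.expectedMessageCount(1);
        producer.sendBody("VIP order 42");
        producer.sendBody("regular order 43");
        audit.assertIsSatisfied();
    }
}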
9. Provisioned individually. The most important characteristic of microservices is that they run in isolation from other services, most commonly as standalone Java applications. Camel can be embedded in Spring, OSGi or web containers, and it can also easily run as a standalone Java application with embedded Jetty endpoints. But managing multiple processes, all running in isolation, without a centralized tool is a hard job. This is what Fabric8[8] is made for. Fabric8 is developed by the same people who developed Camel and is supported by Red Hat JBoss. It is an application provisioning and management tool that can deploy and manage a variety of Java containers and standalone processes. To find out more about Fabric8, there is a nice post by Christian Posta[9].
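Running such a service as a plain standalone process needs very little code; the following sketch uses Camel's Main class with an embedded Jetty consumer (the port and path are arbitrary examples).

import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.main.Main;

public class StandaloneService {
    public static void main(String[] args) throws Exception {
        Main main = new Main();
        main.enableHangupSupport();   // shut the context down gracefully on Ctrl+C
        main.addRouteBuilder(new RouteBuilder() {
            @Override
            public void configure() {
                // a trivial health endpoint served by embedded Jetty
                from("jetty:http://0.0.0.0:8080/health")
                    .transform(constant("OK"));
            }
        });
        main.run();   // blocks until the process is stopped
    }
}

Packaged with its dependencies, such a process can then be provisioned and supervised by a tool like Fabric8.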
10. Language neutral. Having small and independently deployed applications allows developers to choose the best-suited language for the given task. Camel has XML, Java, Scala, Groovy and a few other DSLs with similar syntax and capabilities. But if you don't want to use Camel at all for a specific microservice, you can still use Fabric8 to deploy and manage applications written in other languages and run them as native processes[10].
In summary: microservices are not strictly defined, and that's the beauty of it. It is a lightweight style of implementing SOA that works. So is Apache Camel: it is not a full-featured ESB, but it can be one as part of JBoss Fuse. It is not a strictly defined, spec-driven project, but a lightweight tool that works and that developers love.
Bilgin Ibryam is a JBoss Fuse consultant currently working for Red Hat in London. He is interested in message-oriented middleware and distributed applications. He is the author of the Instant Apache Camel Message Routing book, an open source enthusiast, and an Apache OFBiz and Apache Camel committer. In his spare time, he enjoys contributing to open source projects and blogging at http://ofbizian.com.
Twitter: @bibryam
References
[1] Micro-Service Architecture, by Fred George (video): https://www.youtube.com/watch?v=2rKEveL55TY
[2] Micro-Services: Java, the UNIX way, by James Lewis (video): http://jz13.java.no/presentation.html?id=2a7b489a
[3] Microservices, by Martin Fowler: http://martinfowler.com/articles/microservices.html
[4] µCon: The Microservices Conference: https://skillsmatter.com/conferences/6312-mucon
[5] Nanoservices: http://arnon.me/wp-content/uploads/2010/10/Nanoservices.pdf
[6] Hawtio: http://hawt.io/
[7] Circuit Breaker Pattern in Apache Camel, by Bilgin Ibryam: http://www.ofbizian.com/2014/04/circuit-breaker-pattern-in-apache-camel.html
[8] Fabric8: http://fabric8.io/
[9] Meet Fabric8: An open-source integration platform, by Christian Posta: http://www.christianposta.com/blog/?p=376
[10] Micro Services the easy way with Fabric8, by James Strachan: http://macstrac.blogspot.co.uk/2014/05/micro-services-with-fabric8.html
Imprint
Publisher
Software & Support Media GmbH
Editorial Office Address
Software & Support Media Limited
86-88 Great Suffolk Street
London SE1 0BE
United Kingdom
www.jaxenter.com
Editor in Chief: Sebastian Meyen
Editors: Lucy Carey
Authors: Dr. Jutta Bindewald, Pat Huff, Bilgin Ibryam,
Martin Lippert, Kai Wähner
Copy Editor: Jennifer Diener
Creative Director: Jens Mainz
Layout: Flora Feher
Sales:
Anika Stock
+49 (0) 69 630089-22
astock@sandsmedia.com
Entire contents copyright 2014 Software & Support Media GmbH. All rights reserved. No
part of this publication may be reproduced, redistributed, posted online, or reused by any
means in any form, including print, electronic, photocopy, internal network, Web or any other
method, without prior written permission of Software & Support Media GmbH.
The views expressed are solely those of the authors and do not reflect the views or position of their firm, any of their clients, or Publisher. Regarding the information, Publisher disclaims all warranties as to the accuracy, completeness, or adequacy of any information, and is not responsible for any errors, omissions, inadequacies, misuse, or the consequences of using any information provided by Publisher. Rights of disposal of rewarded articles belong to Publisher. All mentioned trademarks and service marks are copyrighted by their respective owners.