
Java - Multimedia

Hands on Lab

September 2011
For the latest information, please see bluejack.binus.ac.id

Information in this document, including URL and other Internet Web site references, is
subject to change without notice. This document supports a preliminary release of software
that may be changed substantially prior to final commercial release, and is the proprietary
information of Binus University.
This document is for informational purposes only. BINUS UNIVERSITY MAKES NO
WARRANTIES, EITHER EXPRESS OR IMPLIED, AS TO THE INFORMATION IN THIS
DOCUMENT.
The entire risk of the use or the results from the use of this document remains with the
user. Complying with all applicable copyright laws is the responsibility of the user. Without
limiting the rights under copyright, no part of this document may be reproduced, stored in
or introduced into a retrieval system, or transmitted in any form or by any means
(electronic, mechanical, photocopying, recording, or otherwise), or for any purpose, without
the express written permission of Binus University.
Binus University may have patents, patent applications, trademarks, copyrights, or other
intellectual property rights covering subject matter in this document. Except as expressly
provided in any written license agreement from Binus University, the furnishing of this
document does not give you any license to these patents, trademarks, copyrights, or other
intellectual property.
Unless otherwise noted, the example companies, organizations, products, domain names, email addresses, logos, people, places and events depicted herein are fictitious, and no
association with any real company, organization, product, domain name, email address,
logo, person, place or event is intended or should be inferred.
© 2011 Binus University. All rights reserved.
The names of actual companies and products mentioned herein may be the trademarks of
their respective owners.


Table of Contents
OVERVIEW ..................................................................................................... iii
Chapter 01 Digital Image .................................................................................. 1
Chapter 02 Digital Audio ................................................................................. 33
Chapter 03 Digital Video ................................................................................. 58
Chapter 04 Graphic 2D ................................................................................... 71
Chapter 05 Object 3D ................................................................................... 102
Chapter 06 Multimedia Network Communication............................................... 114


OVERVIEW
Chapter 01  Digital Image
Chapter 02  Digital Audio
Chapter 03  Digital Video
Chapter 04  Graphic 2D
Chapter 05  Object 3D
Chapter 06  Multimedia Network Communication


Chapter 01
Digital Image

Objectives
1. Color
2. Color Space
3. Digital Imaging
4. Image Transformation
5. Image Enhancement
6. Java2D API


1.1. Color
The Color class is used to encapsulate colors in the default sRGB color space, or colors in an arbitrary color space identified by a ColorSpace class. (sRGB is a standard RGB color space created cooperatively by HP and Microsoft in 1996 for use on monitors, printers, and the Internet; a color space is the set of all colors that can be portrayed by a single color system.) In short, the Color class represents colors in the Java programming language.

1.2. Color Space
The ColorSpace abstract class serves as a color space tag to identify the specific color space of a Color object or, via a ColorModel object, of an Image, a BufferedImage, or a GraphicsDevice. It represents a system for measuring colors, typically using three separate values or components. The ColorSpace class contains methods for converting between the original color space and one of two standard color spaces, CIEXYZ and RGB. ColorSpace is defined in the java.awt.color package.
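For example, a minimal sketch that converts a pure red sRGB value to CIEXYZ (the class name is illustrative):

import java.awt.color.ColorSpace;

public class ColorSpaceDemo {
    public static void main(String[] args) {
        // Obtain the standard sRGB color space
        ColorSpace srgb = ColorSpace.getInstance(ColorSpace.CS_sRGB);

        // Convert a pure red sRGB value (components in the range 0.0 - 1.0) to CIEXYZ
        float[] xyz = srgb.toCIEXYZ(new float[] { 1.0f, 0.0f, 0.0f });

        System.out.printf("X=%.3f Y=%.3f Z=%.3f%n", xyz[0], xyz[1], xyz[2]);
    }
}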
Digital images, specifically digital color images, come in several different forms. The
form is often dictated by the means by which the image was acquired or by the image's
intended use.
One of the more basic types of color image is RGB, for the three primary colors (red,
green, and blue). RGB images are sometimes acquired by a color scanner or video
camera. These devices incorporate three sensors that are spectrally sensitive to light in the
red, green, and blue portions of the spectrum. The three separate red, green, and blue
values can be made to directly drive red, green, and blue light guns in a CRT. This type
of color system is called an additive linear RGB color system, as the sum of the three full
color values produces white.
Printed color images are based on a subtractive color process in which cyan,
magenta, and yellow (CMY) dyes are deposited onto paper. The amount of dye deposited
is subtractively proportional to the amount of each red, blue, and green color value. The
sum of the three CMY color values produces black.
The black produced by a CMY color system often falls short of being a true black.
To produce a more accurate black in printed images, black is often added as a fourth color component. This is known as the CMYK color system and is commonly used in the printing industry.
The amount of light generated by the red, blue, and green phosphors of a CRT is not
linear. To achieve good display quality, the red, blue, and green values must be adjusted, a process known as gamma correction. In computer systems, gamma correction often
takes place in the frame buffer, where the RGB values are passed through lookup tables
that are set with the necessary compensation values.
In television transmission systems, the red, blue, and green gamma-corrected color
video signals are not transmitted directly. Instead, a linear transformation between the
RGB components is performed to produce a luminance signal and a pair of chrominance
signals. The luminance signal conveys color brightness levels. The two chrominance
signals convey the color hue and saturation. This color system is called YCC (or, more
specifically, YCbCr).
Another significant color space standard for Java is CIEXYZ. This is a widely-used,
device-independent color standard developed by the Commission Internationale de
l'Éclairage (CIE). The CIEXYZ standard is based on color-matching experiments on
human observers.

1.3. Digital Imaging
Imaging is shorthand for image acquisition, the process of sensing our surroundings and then representing the measurements that are made in the form of an image. The sensing phase distinguishes image acquisition from image creation; the latter can be accomplished using an existing set of data, and does not require a sensor. (Efford, 2000)
Java supports image manipulation via the Image class and a small number of related classes. The Image class is part of the java.awt package, and its helpers are part of java.awt.image. Loading image data into a program is accomplished by the getImage() method, which is directly available to Java applets; the method takes a URL specifying the location of the image as its parameter. Applications can obtain a java.awt.Toolkit object and call its getImage() method.
Note that Image is an abstract class; when you manipulate an Image object, you are actually working with an instance of a platform-specific subclass.


Sample Code
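A minimal sketch of this idea (the class name and the file name troll.jpg are assumptions of this sketch):

import java.awt.Image;
import java.awt.Toolkit;

public class LoadImageDemo {
    public static void main(String[] args) {
        // Applications obtain a Toolkit and ask it to load the image.
        // getImage() loads asynchronously; the pixel data is fetched when first needed.
        Toolkit toolkit = Toolkit.getDefaultToolkit();
        Image image = toolkit.getImage("troll.jpg");
        System.out.println("Image object created: " + image);
    }
}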

Another way to support image manipulation is the BufferedImage class. A buffered image is a type of image whose pixels can be modified. For example, you can draw on a buffered image and then draw the resulting buffered image on the screen or save it to a file. A buffered image supports many formats for storing pixels.
An Image object cannot be converted to a BufferedImage object directly. The closest equivalent is to create a buffered image and then draw the image onto it. The following example defines a method that does this.
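A minimal sketch of such a conversion method (class and method names are illustrative, and the source image is assumed to have finished loading):

import java.awt.Graphics2D;
import java.awt.Image;
import java.awt.image.BufferedImage;

public class ImageUtil {
    // Creates a BufferedImage of the given size and draws the Image onto it.
    public static BufferedImage toBufferedImage(Image image, int width, int height) {
        BufferedImage buffered =
                new BufferedImage(width, height, BufferedImage.TYPE_INT_RGB);
        Graphics2D g2d = buffered.createGraphics();
        g2d.drawImage(image, 0, 0, null);   // draw the source image onto the buffer
        g2d.dispose();
        return buffered;
    }
}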


1.4. Image Transformation
Geometric operations change image geometry by moving pixels around in a carefully
constrained way. We might do this to remove distortions inherent in the imaging process,
or to introduce a deliberate distortion that matches one image with another. There are
three elements common to most geometric operations: transformation equations that
move a pixel to a new location, a procedure for applying these equations to an image, and
some way of computing a value for the transformed pixel.
An arbitrary geometric transformation will move a pixel at coordinates (x, y) to a new position (x', y'), given by a pair of transformation equations,

x' = Tx(x, y),
y' = Ty(x, y).

Tx and Ty are typically expressed as polynomials in x and y. In their simplest form, they are linear in x and y, giving us an affine transformation,

x' = a0 + a1 x + a2 y,
y' = b0 + b1 x + b2 y.

Transformation coefficients for some simple affine transformations:

Transformation                          a0    a1       a2       b0    b1        b2
Translation by tx, ty                   tx    1        0        ty    0         1
Scaling by factor s                     0     s        0        0     0         s
Clockwise rotation through angle θ      0     cos θ    sin θ    0     -sin θ    cos θ
Horizontal shear by factor s            0     1        s        0     0         1

Figure 1. Translation Transformation

Figure 2. Rotation Transformation

Figure 3. Scaling Transformation

Figure 4. Shear Transformation

The Java2D API supports affine transformations of images and other graphic objects. A
transformation is specified by an instance of the class java.awt.geom.AffineTransform.
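For example, the following sketch (class and method names are illustrative) rotates a BufferedImage by 45 degrees while drawing it with a Graphics2D:

import java.awt.Graphics2D;
import java.awt.geom.AffineTransform;
import java.awt.image.BufferedImage;

public class TransformSketch {
    // Draws the image rotated by 45 degrees around the origin.
    static void drawRotated(Graphics2D g2d, BufferedImage image) {
        AffineTransform at = new AffineTransform();
        at.rotate(Math.toRadians(45));     // rotate() expects radians
        g2d.drawImage(image, at, null);    // apply the transform while drawing
    }
}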

1.5. Image Enhancement
This chapter describes the basics of improving the visual appearance of images through
enhancement operations.
A single pixel considered in isolation conveys information on the intensity and
possibly the colour at a single location in an image, but it can tell us nothing about the
way in which these properties vary spatially. It follows that point processes, which change a pixel's value independently of all other pixels,
cannot be used to investigate or control spatial variations in image intensity or colour. For
this, we need to perform calculations over areas of an image; in other words, a pixel's
new value must be computed from its old value and the values of pixels in its vicinity.
These neighbourhood operations are invariably more costly than simple point processes,
but they allow us to achieve a whole range of interesting and useful effects.


Convolution and correlation are the fundamental neighbourhood operations of image processing. They are linear operations. In practice, this means that, for an image f and a scale factor k,

C[k f(x, y)] = k C[f(x, y)],

where C denotes either convolution or correlation. It also means that, for any two images f1 and f2,

C[f1(x, y) + f2(x, y)] = C[f1(x, y)] + C[f2(x, y)].

The calculations performed in convolution are almost identical to those done for
correlation.
In convolution, the calculation performed at a pixel is a weighted sum of grey levels
from a neighbourhood surrounding a pixel. The neighbourhood includes the pixel under
consideration, and it is customary for it to be disposed symmetrically about that pixel. We
shall assume this to be the case in our discussion, although we note that it is not a
requirement of the technique. Clearly, if a neighbourhood is centred on a pixel, then it
must have odd dimensions, e.g., 3 x 3, 5 x 5, etc. The neighbourhood need not be square,
but this is usually the case, since there is rarely any reason to bias the calculations in the x or y direction. Grey levels taken from the neighbourhood are weighted by coefficients
that come from a matrix or convolution kernel. In effect, the kernel's dimensions define
the size of the neighbourhood in which calculations take place. Usually, the kernel is
fairly small relative to the image; dimensions of 3 x 3 are the most common. Figure 7
shows a 3 x 3 kernel and the corresponding 3 x 3 neighbourhood of pixels from an image.

Figure 7. A 3 x 3 convolution kernel and the corresponding image neighbourhood

The kernel is centered on the shaded pixel. The result of convolution will be a new
value for this pixel. During convolution, we take each kernel coefficient in turn and


multiply it by a value from the neighbourhood of the image lying under the kernel. We
apply the kernel to the image in such a way that the value at the top-left corner of the
kernel is multiplied by the value at the bottom-right corner of the neighbourhood.
Denoting the kernel by h and the image by f, the entire calculation is

g(x, y) = h(-1, -1) f(x + 1, y + 1) + h(0, -1) f(x, y + 1) + h(1, -1) f(x - 1, y + 1)
        + h(-1, 0) f(x + 1, y)     + h(0, 0) f(x, y)       + h(1, 0) f(x - 1, y)
        + h(-1, 1) f(x + 1, y - 1) + h(0, 1) f(x, y - 1)   + h(1, 1) f(x - 1, y - 1).
This summation can be expressed more succinctly as

g(x, y) = Σj Σk h(j, k) f(x - j, y - k),   where j and k each run from -1 to 1.

For the kernel and neighbourhood illustrated in Figure 7, the result of convolution is

g(x, y) = (-1)(82) + (1)(88) + (-2)(65) + (2)(76) + (-1)(60) + (1)(72) = 40.

Note that a new image (denoted g above) has to be created to store the results of convolution. We cannot perform the operation in place, because application of a kernel to
any pixel but the first would make use of values already altered by a prior convolution
operation.
Java2D provides two classes to support image convolution: Kernel and ConvolveOp.
The Kernel class represents convolution kernels. A Kernel object is constructed by
providing kernel dimensions and a one-dimensional float array of coefficients.

This example creates a 3 x 3 kernel whose coefficients are all equal. Note that each
coefficient is normalised, such that the sum of coefficients equals 1.
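A minimal sketch of that example (the class name is illustrative):

import java.awt.image.Kernel;

public class KernelDemo {
    public static void main(String[] args) {
        float ninth = 1.0f / 9.0f;
        float[] coefficients = {
            ninth, ninth, ninth,
            ninth, ninth, ninth,
            ninth, ninth, ninth
        };
        // 3 x 3 kernel whose coefficients are all equal and sum to 1
        Kernel kernel = new Kernel(3, 3, coefficients);
        System.out.println("Kernel size: " + kernel.getWidth() + " x " + kernel.getHeight());
    }
}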


A convolution kernel is a 2D matrix of numbers that can be used as coefficients for


numerical operations on pixels. Suppose you have a 3 x 3 kernel that looks like this:

1 2 1
2 0 2
1 2 1
As you loop over all pixels in the image, you would, for any given pixel, multiply the
pixel itself by zero; multiply the pixel directly above the given pixel by 2; also multiply
by 2 the pixels to the left, right, and below the pixel in question; multiply by one the
pixels at 2 o'clock, 4 o'clock, 8 o'clock, and 10 o'clock to the pixel in question; add all
these numeric values together; and divide by 9 (the kernel size). The result is the new
pixel value for the given pixel. Repeat for each pixel in the image.
In the example just cited, the kernel (1 2 1 etc.) would end up smoothing or blurring
the image, because in essence we are replacing a given pixel's value with a weighted
average of surrounding pixel values. To sharpen an image, you'd want to use a kernel that
takes the differences of pixels. One such kernel is:

 0 -1  0
-1  5 -1
 0 -1  0
This kernel would achieve a differencing between the center pixel and pixels
immediately to the north, south, east, and west. It would cause a fairly harsh, small-radius
(high frequency) sharpening-up of image features
Sample code for ConvolveOp:
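A minimal sketch (class and method names are illustrative) that applies a sharpening kernel of the kind just described using ConvolveOp:

import java.awt.image.BufferedImage;
import java.awt.image.ConvolveOp;
import java.awt.image.Kernel;

public class ConvolveDemo {
    // Returns a sharpened copy of the source image.
    public static BufferedImage sharpen(BufferedImage source) {
        float[] sharpenKernel = {
             0f, -1f,  0f,
            -1f,  5f, -1f,
             0f, -1f,  0f
        };
        ConvolveOp op = new ConvolveOp(new Kernel(3, 3, sharpenKernel),
                                       ConvolveOp.EDGE_NO_OP, null);
        return op.filter(source, null);   // null destination: a new compatible image is created
    }
}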


1.6. Java2D API
The Java 2D API is a set of classes for advanced 2D graphics and imaging, encompassing
line art, text, and images in a single comprehensive model. The API provides extensive
support for image compositing and alpha channel images, a set of classes to provide
accurate color space definition and conversion, and a rich set of display-oriented imaging
operators.
Java 2D is part of the core classes of the Java 2 platform (formerly JDK 1.2). The 2D
API introduces new classes in the following packages:

java.awt
java.awt.image

In addition, the 2D API encompasses six entirely new packages:

java.awt.color
java.awt.font
java.awt.geom
java.awt.print
java.awt.image.renderable
com.sun.image.codec.jpeg

Java 2D is designed to do anything you want it to do (with computer graphics, at least). Prior to Java 2D, AWT's graphics toolkit had some serious limitations:

All lines were drawn with a single-pixel thickness.

Only a handful of fonts were available.

AWT didn't offer much control over drawing. For example, you couldn't
manipulate the individual shapes of characters.

If you wanted to rotate or scale anything, you had to do it yourself.

If you wanted special fills, like gradients or patterns, you had to make them
yourself.

Image support was rudimentary.

Control of transparency was awkward.

The 2D API remedies these shortcomings and does a lot more, too. To appreciate
what the 2D API can offer, you need to see it in action. Java 2 includes a sample program
that demonstrates many of the features of the API. To run it, navigate to the
demo/jfc/Java2D directory in the JDK installation directory. Then run the Java2Demo
class. Here are some of the things the 2D API can do:


Shapes
Arbitrary geometric shapes can be represented by combinations of straight lines
and curves. The 2D API also provides a useful toolbox of standard shapes, like
rectangles, arcs, and ellipses.

Stroking
Lines and shape outlines can be drawn as a solid or dotted line of any width, a
process called stroking. You can define any dotted-line pattern and specify how
shape corners and line ends should be drawn.

Filling
Shapes can be filled using a solid color, a pattern, a color gradient, or anything
else you can imagine.

Transformations
Everything that's drawn in the 2D API can be stretched, squished, and rotated.
This applies to shapes, text, and images. You tell 2D what transformation you
want and it takes care of everything.

Alpha compositing
Compositing is the process of adding new elements to an existing drawing. The
2D API gives you considerable flexibility by using the Porter-Duff compositing
rules.

Clipping
Clipping is the process of limiting the extent of drawing operations. For example,
drawing in a window is normally clipped to the window's bounds. In the 2D API,
however, you can use any shape for clipping.

Antialiasing
Antialiasing is a technique that reduces jagged edges in drawings. The 2D API
takes care of the details of producing antialiased drawing.

Text
The 2D API can use any TrueType or Type 1 font installed on your system.
You can render strings, retrieve the shapes of individual strings or letters, and
manipulate text in the same ways that shapes are manipulated.


Color
It's hard to show colors correctly. The 2D API includes classes and methods that
support representing colors in ways that don't depend on any particular hardware
or viewing conditions.

Images
The 2D API supports doing the same neat stuff with images that you can do with
shapes and text. Specifically, you can transform images, use clipping shapes, and
use alpha compositing with images. Java 2 also includes a set of classes for
loading and saving images in the JPEG format.

Image processing
The 2D API also includes a set of classes for processing images. Image
processing is used to highlight certain aspects of pictures, to achieve aesthetic
effects, or to clean up messy scans.

Printing
Finally, Java developers have a decent way to print. The Printing API is part of
the 2D API and provides a compact, clean solution to the problem of producing
output on a printer.


1.7. Exercises
1.7.1. Exercise 1 - Color
For this exercise, this module will explain the usage of the Color class using a JPanel object which resides in a JFrame object. In this exercise we will change the JPanel color using the Color class.
1. Let's make a JFrame reference named myFrame, then set its size and visibility.
2. Then make a JPanel reference named myPanel, and add it to myFrame.
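A sketch of these two steps (the frame title and size are arbitrary choices):

import javax.swing.JFrame;
import javax.swing.JPanel;

public class ColorExercise {
    public static void main(String[] args) {
        JFrame myFrame = new JFrame("Color Exercise");
        myFrame.setSize(400, 300);                          // step 1: size the frame
        myFrame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);

        JPanel myPanel = new JPanel();                      // step 2: create the panel
        myFrame.add(myPanel);                               // and add it to the frame

        myFrame.setVisible(true);                           // step 1: show the frame
    }
}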


Output:

Figure 5. The Resulting JFrame and JPanel


3. Let's examine the setBackground() method which a JPanel has.

Figure 6. The setBackground() method in JDK Documentation

4. The setBackground() method has a single parameter of type Color. This method will set the JPanel object (in our case, myPanel) to the color that we define in the setBackground() method parameter.
5. We can use the Color class to set the myPanel color in two ways:
a. Use one of the available static constant members of the Color class; these members represent general colors that are common to us.
b. Create a new Color object, which allows us to define a specific color.
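For example, using one of the predefined constants (this assumes java.awt.Color is imported):

myPanel.setBackground(Color.RED);   // RED is one of the predefined Color constants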


Output:

6. Create a new Color object to define the myPanel color.
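For example (the RGB values used here are an arbitrary choice):

myPanel.setBackground(new Color(30, 144, 255));   // red, green, blue in the range 0-255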

Output:


1.7.2. Exercise 2 - Color Space
For this exercise, this module will explain the ColorSpace class usage. First, we learn how to make a ColorSpace object named myColorSpace. (Note that ColorSpace is an abstract class; when you manipulate a ColorSpace object, you are actually working with an instance of a platform-specific subclass.) To instantiate the ColorSpace class, we use its getInstance() method.
The getInstance() method has one parameter of int type which defines a specific color space, identified by one of the predefined class constants (e.g. CS_sRGB, CS_LINEAR_RGB, CS_CIEXYZ, CS_GRAY, or CS_PYCC).
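A one-line sketch (assuming java.awt.color.ColorSpace is imported):

ColorSpace myColorSpace = ColorSpace.getInstance(ColorSpace.CS_LINEAR_RGB);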

In this code, myColorSpace holds an ICC_ColorSpace object representing the linear RGB color space.
1.7.3. Exercise 3 - Digital Imaging
For this exercise, we will try to load an image file into your program. I will use troll.jpg for this exercise. We will load this image, and then draw it on the window of your program.
1. Let's make a JFrame object first, named myFrame, then set its size and visibility.
2. Then we define a new class named MyPanelClass which will draw our Image object.


3. Override the paintComponent() method in MyPanelClass.
4. Inspect the drawImage() methods that the Graphics and Graphics2D classes have. Any of these overloaded drawImage() methods will draw an image using the Java2D technique; choose the one that you need.
5. For this exercise I will use the overload drawImage(Image, int, int, ImageObserver). This method asks for the image object to be drawn, the x coordinate, the y coordinate, and the observer.


6. Load the image file, and use the drawImage() method to draw it on the MyPanelClass object.
7. Construct a new MyPanelClass object, and add it to myFrame.
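Putting steps 1 to 7 together, one possible sketch (the file name troll.jpg, the frame size, and the main class name are assumptions):

import java.awt.Graphics;
import java.awt.Image;
import java.awt.Toolkit;
import javax.swing.JFrame;
import javax.swing.JPanel;

class MyPanelClass extends JPanel {
    private Image image = Toolkit.getDefaultToolkit().getImage("troll.jpg");

    @Override
    protected void paintComponent(Graphics g) {
        super.paintComponent(g);
        // drawImage(image, x, y, observer) - the overload chosen in step 5
        g.drawImage(image, 0, 0, this);
    }
}

public class DigitalImagingExercise {
    public static void main(String[] args) {
        JFrame myFrame = new JFrame("Digital Imaging");
        myFrame.setSize(400, 300);
        myFrame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        myFrame.add(new MyPanelClass());   // step 7: add the custom panel
        myFrame.setVisible(true);
    }
}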

8. Output
9. To draw another image, repeat step 6.


1.7.4. Exercise 4 - Image Transformation
For this exercise, we will reuse our last exercise and try to transform the image file in your program using the AffineTransform class.
1. These are some of the methods that define geometric transformations in the AffineTransform class:
a. rotate() method
b. translate() method
c. shear() method
d. scale() method


2. Insert these three new lines of code into your previous code: we will construct a new AffineTransform object, then use the rotate() method. The rotate() method has a double parameter which sets the amount of rotation (in radians).
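A sketch of those lines inside paintComponent() (assuming the Graphics parameter has already been cast to a Graphics2D named g2d, that java.awt.geom.AffineTransform is imported, and that the rotation amount is an arbitrary choice):

AffineTransform myTransform = new AffineTransform();
myTransform.rotate(0.3);                      // rotation amount in radians
g2d.drawImage(image, myTransform, this);      // draw the image with the transform applied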

Output:


3. Let's do another transformation using the scale() method. The scale() method has two double parameters which set the amount of scaling along the x and y axes.
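A sketch, with an arbitrary scale factor of 0.5 along both axes:

AffineTransform myTransform = new AffineTransform();
myTransform.scale(0.5, 0.5);                  // scale factors along x and y
g2d.drawImage(image, myTransform, this);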

Output:


1.7.5. Exercise 5 - Image Enhancement
For this chapter's exercise we will make an application which opens a file and sharpens or blurs that image using the ConvolveOp class.
1. First, we will construct a GUI like this one. The red area is our custom JPanel object, the blue area is a regular JPanel object, the Open button is a JButton object, and Sharpen, Normal, and Blur are JToggleButton objects. The following code constructs the blue area; the red area comes in the next piece of code.


2. The following is part of the custom JPanel class code that will enhance our image.


3. Create a MyCustomPanel object, then add it to the northern portion of myFrame.


4. ConvolveOp needs a BufferedImage object to enhance the image. The following code converts an Image object into a BufferedImage object; this was already explained in the Digital Imaging section.


5. Next we give the Sharpen, Blur, and Normal buttons an event handler; add the following code.


6. Finally we write the code that shows the sharpened, blurred, or normal image in the MyCustomPanel class; add the following code.
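A sketch of the enhancement logic inside the custom panel; the kernels, field names, and method names here are illustrative choices, not the module's original code:

import java.awt.Graphics;
import java.awt.image.BufferedImage;
import java.awt.image.ConvolveOp;
import java.awt.image.Kernel;
import javax.swing.JPanel;

class MyCustomPanel extends JPanel {
    static final int NORMAL = 0, SHARPEN = 1, BLUR = 2;

    private BufferedImage image;   // set after the Open button loads a file
    private int mode = NORMAL;

    void setImage(BufferedImage image) { this.image = image; repaint(); }
    void setMode(int mode) { this.mode = mode; repaint(); }

    @Override
    protected void paintComponent(Graphics g) {
        super.paintComponent(g);
        if (image == null) return;

        BufferedImage shown = image;
        if (mode == BLUR) {
            float n = 1.0f / 9.0f;
            float[] blur = { n, n, n, n, n, n, n, n, n };
            shown = new ConvolveOp(new Kernel(3, 3, blur)).filter(image, null);
        } else if (mode == SHARPEN) {
            float[] sharpen = { 0, -1, 0, -1, 5, -1, 0, -1, 0 };
            shown = new ConvolveOp(new Kernel(3, 3, sharpen)).filter(image, null);
        }
        g.drawImage(shown, 0, 0, this);   // draw the selected version of the image
    }
}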


a. Output: Normal Mode
b. Output: Sharpen Mode


c. Output: Blur Mode


Chapter 02
Digital Audio

Objectives
1. Audio Digitization
2. MIDI
3. Audio Compression
4. Java Audio API


2.1. Audio Digitization
This chapter is going to discuss the conversion process between digital and analog signals. Nearly every piece of electronics in use today makes some use of an analog-to-digital or digital-to-analog converter. Because of the prevalence of these types of data conversions, it is important to understand the limitations and drawbacks of these processes. This chapter will discuss the conversion process (analog-to-digital and digital-to-analog), the mathematical models of signal conversion, some topics related specifically to these processes (quantization, companding, delta modulation, etc.), and the physical electrical hardware that makes the conversions happen.

Signal
What is a "signal", exactly? A signal, in the sense that we will be considering here, is a changing value of electric voltage or current through a transmission medium. There are
two general types of signals: periodic and aperiodic. Periodic signals repeat themselves
after a certain period of time -- after they have cycled through one period, following
periods don't contain any new information. Aperiodic signals, on the other hand, don't
repeat themselves, and therefore can contain information. Signals also can be analog or
digital signals, and we will discuss them both below.

Analog
Analog signals equate levels of electric voltage or current to amounts of information by
applying some rule. Consider for instance, an analog clock, where the passage of time is
displayed as the motion of the clock hands. In electric signals, a certain amount of
voltage corresponds directly to a measured physical phenomenon. For instance, on an
accelerometer, the amount of acceleration, measured in g's will correspond directly to
volts. So at one g, we have 1 volt output, at 2 g's we have 2 volts output, etc. Analog
signals have an advantage that they can represent any fractional quantity, by outputting
an equivalent fractional quantity of voltage or current.
Put in a different manner, an analog signal is continuous in time and also continuous in value.


Uses of Analog
Analog signals have a number of uses. AM and FM radio, for instance, are signals that
are transmitted in analog. Telephones (at least simple, older telephones) use analog
signals to transmit voice data to the phone central office. Many electrical components,
such as sensors, will output analog data, because of the accuracy that can be obtained
from analog signals.

Digital
Digital signals are different from analog signals in that there are generally only two levels
of voltage: high and low. These different voltage levels are put into a sequence to
describe the value being transmitted. For convenience, regardless of the actual voltage
levels used, a "high" is called a 1, and a "low" is called a 0. Each signal level must be
transmitted for at least a certain period of time called the Bit Time. A single signal level
for a single bit time is called a Bit.
From bits, we have Binary Numbers, a collection of bits that can be arranged to
form larger quantities than the simple numbers 0 and 1.

Uses of Digital
Because bits can only be a 0 or a 1, digital transmissions don't have the same amount of
accuracy as analog signals. Also, digital systems need to have complicated digital
circuitry to read and understand the signals, which can cost more money than analog
hardware does. However, the benefit is that digital signals can be manipulated, created,
and read by computers and computer hardware.
One of the best examples of digital signals are the control signals and data that are
in use on your computer. Computers are almost completely digital, except for the sound
card (which produces analog sound signals), and maybe a few other peripherals.
Cellphones now are mostly digital, and the internet is a digital network.


2.2. MIDI
The Musical Instrument Digital Interface (MIDI) standard defines a communication
protocol for electronic music devices, such as electronic keyboard instruments and
personal computers. MIDI data can be transmitted over special cables during a live
performance, and can also be stored in a standard type of file for later playback or
editing.
MIDI is both a hardware specification and a software specification. To understand
MIDI's design, it helps to understand its history. MIDI was originally designed for
passing musical events, such as key depressions, between electronic keyboard
instruments such as synthesizers. Hardware devices known as sequencers stored
sequences of notes that could control a synthesizer, allowing musical performances to be
recorded and subsequently played back. Later, hardware interfaces were developed that
connected MIDI instruments to a computer's serial port, allowing sequencers to be
implemented in software. More recently, computer sound cards have incorporated
hardware for MIDI I/O and for synthesizing musical sound. Today, many users of MIDI
deal only with sound cards, never connecting to external MIDI devices. CPUs have
become fast enough that synthesizers, too, can be implemented in software. A sound card
is needed only for audio I/O and, in some applications, for communicating with external
MIDI devices.
Most programs that avail themselves of the Java Sound API's MIDI package do so
to synthesize sound. The entire apparatus of MIDI files, events, sequences, and
sequencers, which was previously discussed, nearly always has the goal of eventually
sending musical data to a synthesizer to convert into audio. (Possible exceptions include
programs that convert MIDI into musical notation that can be read by a musician, and
programs that send messages to external MIDI-controlled devices such as mixing
consoles.)
The Synthesizer interface is therefore fundamental to the MIDI package. This
section shows how to manipulate a synthesizer to play sound. Many programs will simply
use a sequencer to send MIDI file data to the synthesizer, and won't need to invoke many
Synthesizer methods directly. However, it's possible to control a synthesizer directly,


without using sequencers or even MidiMessage objects, as explained near the end of this
section.
The synthesis architecture might seem complex for readers who are unfamiliar with
MIDI. Its API includes three interfaces:

Synthesizer
MidiChannel
Soundbank

and four classes:

Instrument
Patch
SoundbankResource
VoiceStatus

As orientation for all this API, the next section explains some of the basics of MIDI
synthesis and how they're reflected in the Java Sound API. Subsequent sections give a
more detailed look at the API.

2.3. Audio Compression
Digital audio compression allows the efficient storage and transmission of audio data. The various audio compression techniques offer different levels of complexity, compressed audio quality, and amount of data compression.
This chapter is a survey of techniques used to compress digital audio signals. The next section presents a detailed description of a relatively simple approach to audio compression: µ-law.

µ-law Audio Compression
The µ-law transformation is a basic audio compression technique specified by the Comité Consultatif International Télégraphique et Téléphonique (CCITT)
Recommendation G.711.[5] The transformation is essentially logarithmic in nature and
allows the 8 bits per sample output codes to cover a dynamic range equivalent to 14 bits
of linearly quantized values. This transformation offers a compression ratio of (number of
bits per source sample)/8 to 1. Unlike linear quantization, the logarithmic step spacings
represent low-amplitude audio samples with greater accuracy than higher-amplitude
values. Thus the signal-to-noise ratio of the transformed output is more uniform over the
range of amplitudes of the input signal. The µ-law transformation is

          127 ln(1 + µx) / ln(1 + µ)      for x >= 0
  F(x) =
         -127 ln(1 - µx) / ln(1 + µ)      for x < 0
where µ = 255, and x is the value of the input signal normalized to have a maximum value of 1. The CCITT Recommendation G.711 also specifies a similar A-law transformation. The µ-law transformation is in common use in North America and Japan for the Integrated Services Digital Network (ISDN) 8-kHz-sampled, voice-grade, digital telephony service, and the A-law transformation is used elsewhere for ISDN telephony.
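As a concrete illustration of the µ-law equation above, the following sketch maps a normalized sample to a signed value in the range -127..127 using µ = 255; it is a direct transcription of the formula, not a full G.711 codec, and the class name is illustrative:

public class MuLaw {
    // Maps a sample in the range -1.0 .. 1.0 to a value in -127 .. 127 using mu = 255.
    static int compress(double x) {
        double mu = 255.0;
        double magnitude = 127.0 * Math.log(1.0 + mu * Math.abs(x)) / Math.log(1.0 + mu);
        return (int) Math.round(x >= 0 ? magnitude : -magnitude);
    }

    public static void main(String[] args) {
        System.out.println(compress(0.01));   // small inputs keep relatively many output levels
        System.out.println(compress(0.5));
        System.out.println(compress(1.0));    // full scale maps to 127
    }
}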

2.4. Java Audio API


The Java Sound API is a low-level API for effecting and controlling the input and output
of sound media, including both audio and Musical Instrument Digital Interface (MIDI)
data. The Java Sound API provides explicit control over the capabilities normally
required for sound input and output, in a framework that promotes extensibility and
flexibility.
The Java Sound API fulfills the needs of a wide range of application developers.
Potential application areas include:

Communication frameworks, such as conferencing and telephony

End-user content delivery systems, such as media players and music using
streamed content

Interactive application programs, such as games and Web sites that use dynamic
content

Content creation and editing

Tools, toolkits, and utilities

The Java Sound API provides the lowest level of sound support on the Java platform.
It provides application programs with a great amount of control over sound operations,
and it is extensible. For example, the Java Sound API supplies mechanisms for installing,
accessing, and manipulating system resources such as audio mixers, MIDI synthesizers,
other audio or MIDI devices, file readers and writers, and sound format converters. The
Java Sound API does not include sophisticated sound editors or graphical tools, but it


provides capabilities upon which such programs can be built. It emphasizes low-level
control beyond that commonly expected by the end user.
The Java Sound API includes support for both digital audio and MIDI data. These
two major modules of functionality are provided in separate packages:

javax.sound.sampled

This package specifies interfaces for capture, mixing, and playback of digital
(sampled) audio.

javax.sound.midi

This package provides interfaces for MIDI synthesis, sequencing, and event
transport.
Two other packages permit service providers (as opposed to application developers)
to create custom software components that extend the capabilities of an implementation
of the Java Sound API:

javax.sound.sampled.spi
javax.sound.midi.spi

Sampled Sound
The javax.sound.sampled package handles digital audio data, which the Java Sound
API refers to as sampled audio. Samples are successive snapshots of a signal. In the case
of audio, the signal is a sound wave. A microphone converts the acoustic signal into a
corresponding analog electrical signal, and an analog-to-digital converter transforms that
analog signal into a sampled digital form. The following figure shows a brief moment in
a sound recording.

A sampled sound wave

This graph plots sound pressure (amplitude) on the vertical axis, and time on the
horizontal axis. The amplitude of the analog sound wave is measured periodically at a
certain rate, resulting in the discrete samples (the red data points in the figure) that


comprise the digital audio signal. The center horizontal line indicates zero amplitude;
points above the line are positive-valued samples, and points below are negative. The
accuracy of the digital approximation of the analog signal depends on its resolution in
time (the sampling rate) and its quantization, or resolution in amplitude (the number of
bits used to represent each sample). As a point of reference, the audio recorded for
storage on compact discs is sampled 44,100 times per second and represented with 16
bits per sample.
The term "sampled audio" is used here slightly loosely. A sound wave could be
sampled at discrete intervals while being left in an analog form. For purposes of the Java
Sound API, however, "sampled audio" is equivalent to "digital audio."
Typically, sampled audio on a computer comes from a sound recording, but the
sound could instead be synthetically generated (for example, to create the sounds of a
touch-tone telephone). The term "sampled audio" refers to the type of data, not its origin.
The Java Sound API does not assume a specific audio hardware configuration; it is
designed to allow different sorts of audio components to be installed on a system and
accessed by the API. The Java Sound API supports common functionality such as input
and output from a sound card (for example, for recording and playback of sound files) as
well as mixing of multiple streams of audio. Here is one example of a typical audio
architecture:

A Typical Audio Architecture

In this example, a device such as a sound card has various input and output ports, and
mixing is provided in the software. The mixer might receive data that has been read from
a file, streamed from a network, generated on the fly by an application program, or


produced by a MIDI synthesizer. The mixer combines all its audio inputs into a single
stream, which can be sent to an output device for rendering.

MIDI
The javax.sound.midi package contains APIs for transporting and sequencing MIDI events, and for synthesizing sound from those events.
Whereas sampled audio is a direct representation of a sound itself, MIDI data can be
thought of as a recipe for creating a sound, especially a musical sound. MIDI data, unlike
audio data, does not describe sound directly. Instead, it describes events that affect the
sounds (or actions) performed by a MIDI-enabled device or instrument, such as a
synthesizer. MIDI data is analogous to a graphical user interface's keyboard and mouse
events. In the case of MIDI, the events can be thought of as actions upon a musical
keyboard, along with actions on various pedals, sliders, switches, and knobs on that
musical instrument. These events need not actually originate with a hardware musical
instrument; they can be simulated in software, and they can be stored in MIDI files. A
program that can create, edit, and perform these files is called a sequencer. Many
computer sound cards include MIDI-controllable music synthesizer chips to which
sequencers can send their MIDI events. Synthesizers can also be implemented entirely in
software. The synthesizers interpret the MIDI events that they receive and produce audio
output. Usually the sound synthesized from MIDI data is musical sound (as opposed to
speech, for example). MIDI synthesizers are also capable of generating various kinds of
sound effects.
Some sound cards include MIDI input and output ports to which external MIDI
hardware devices (such as keyboard synthesizers or other instruments) can be connected.
From a MIDI input port, an application program can receive events generated by an
external MIDI-equipped musical instrument. The program might play the musical
performance using the computer's internal synthesizer, save it to disk as a MIDI file, or
render it into musical notation. A program might use a MIDI output port to play an
external instrument, or to control other external devices such as recording equipment.
The following diagram illustrates the functional relationships between the major
components in a possible MIDI configuration based on the Java Sound API. (As with


audio, the Java Sound API permits a variety of MIDI software devices to be installed and
interconnected. The system shown here is just one potential scenario.) The flow of data
between components is indicated by arrows. The data can be in a standard file format, or
(as indicated by the key in the lower right corner of the diagram), it can be audio, raw
MIDI bytes, or time-tagged MIDI messages.

A Possible MIDI Configuration

In this example, the application program prepares a musical performance by loading


a musical score that's stored as a standard MIDI file on a disk (left side of the diagram).
Standard MIDI files contain tracks, each of which is a list of time-tagged MIDI events.
Most of the events represent musical notes (pitches and rhythms). This MIDI file is read
and then "performed" by a software sequencer. A sequencer performs its music by
sending MIDI messages to some other device, such as an internal or external synthesizer.
The synthesizer itself may read a soundbank file containing instructions for emulating the
sounds of certain musical instruments. If not, the synthesizer will play the notes stored in
the MIDI file using whatever instrument sounds are already loaded into it.
As illustrated, the MIDI events must be translated into raw (non-time-tagged) MIDI
before being sent through a MIDI output port to an external MIDI instrument. Similarly,
raw MIDI data coming into the computer from an external MIDI source (a keyboard
instrument, in the diagram) is translated into time-tagged MIDI messages that can control
a synthesizer, or that a sequencer can store for later use.


Service Provider Interfaces


The javax.sound.sampled.spi and javax.sound.midi.spi packages contain APIs
that let software developers create new audio or MIDI resources that can be provided
separately to the user and "plugged in" to an existing implementation of the Java Sound
API. Here are some examples of services (resources) that can be added in this way:

An audio mixer

A MIDI synthesizer

A file parser that can read or write a new type of audio or MIDI file

A converter that translates between different sound data formats

In some cases, services are software interfaces to the capabilities of hardware devices,
such as sound cards, and the service provider might be the same as the vendor of the
hardware. In other cases, the services exist purely in software. For example, a synthesizer
or a mixer could be an interface to a chip on a sound card, or it could be implemented
without any hardware support at all.
An implementation of the Java Sound API contains a basic set of services, but the
service provider interface (SPI) packages allow third parties to create new services. These
third-party services are integrated into the system in the same way as the built-in services.
The AudioSystem class and the MidiSystem class act as coordinators that let application
programs access the services explicitly or implicitly. Often the existence of a service is
completely transparent to an application program that uses it. The service-provider
mechanism benefits users of application programs based on the Java Sound API, because
new sound features can be added to a program without requiring a new release of the
JDK or runtime environment, and, in many cases, without even requiring a new release of
the application program itself.

2.5. Exercises
2.5.1. Exercise 1 - Audio Digitization
For this exercise, we will try to make a Java program which records your voice using the microphone or line-in audio jack on your computer.


The javax.sound.sampled package consists of eight interfaces, twelve top-level classes, twelve inner classes, and two exceptions. To record and play audio,
you only need to deal with a total of seven parts of the package.
1. Describe the audio format in which you want to record the data. This includes specifying the sampling rate and the number of channels (mono versus stereo) for the audio. You specify these properties using the aptly named AudioFormat class. There are two constructors for creating an AudioFormat object. The first constructor lets you explicitly set the audio format encoding, while the latter uses a default. The available encodings are ALAW, PCM_SIGNED, PCM_UNSIGNED, and ULAW. The default encoding used for the second constructor is PCM. Here is an example that uses the second constructor to create an AudioFormat object for single-channel recording in 8 kHz format:
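A sketch of that call, using the five-argument constructor (sample rate, sample size in bits, channels, signed, big-endian); javax.sound.sampled.AudioFormat is assumed to be imported:

float sampleRate = 8000.0f;      // 8 kHz
int sampleSizeInBits = 8;
int channels = 1;                // mono
boolean signed = true;
boolean bigEndian = true;
AudioFormat format = new AudioFormat(sampleRate, sampleSizeInBits, channels, signed, bigEndian);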

2. After you describe the audio format, you need to get a DataLine. This interface represents an audio feed from which you can capture the audio. You use a subinterface of DataLine to do the actual capturing, called TargetDataLine. To get the TargetDataLine, you ask the AudioSystem. However, when you do that, you need to specify information about the line. You make the specification in the form of a DataLine.Info object; in particular, you need to create a DataLine.Info object that is specific to the DataLine type and audio format. Here are some lines of source that get the TargetDataLine:
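A sketch of those lines (the format variable is the AudioFormat created in step 1, and the checked exception mentioned below must be caught or declared):

DataLine.Info info = new DataLine.Info(TargetDataLine.class, format);
TargetDataLine targetDataLine = (TargetDataLine) AudioSystem.getLine(info);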


If the TargetDataLine is unavailable, a LineUnavailableException is thrown.
3. At this point you have your input source. You can think of the TargetDataLine like an input stream. However, it requires some setup before you can read from it. Setup in this case means first opening the line using the open() method, and then starting the line using the start() method.

4. Your data line is ready, so you can start recording from it as shown in the following lines of code. Here you save the captured audio stream to a byte array for later playing. You could also save the audio stream to a file. Notice that you have to manage when to stop from outside the read-loop construct.
5. Summed up, the whole recording code in a record() method will look like this:
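One possible shape for the whole thing, as a sketch (the field names, buffer size, and the use of a running flag are assumptions of this sketch, not the module's original code; javax.sound.sampled.* and java.io.ByteArrayOutputStream are assumed to be imported):

// Fields of the enclosing class, shared with play() and stop() below
AudioFormat format;
TargetDataLine targetDataLine;
ByteArrayOutputStream out;
volatile boolean running;

public void record() {
    try {
        format = new AudioFormat(8000.0f, 8, 1, true, true);               // step 1: 8 kHz, 8-bit, mono
        DataLine.Info info = new DataLine.Info(TargetDataLine.class, format);
        targetDataLine = (TargetDataLine) AudioSystem.getLine(info);       // step 2

        targetDataLine.open(format);   // step 3: open ...
        targetDataLine.start();        // ... and start the line

        out = new ByteArrayOutputStream();
        byte[] buffer = new byte[4096];
        running = true;
        while (running) {              // step 4: stopping is decided outside this loop
            int count = targetDataLine.read(buffer, 0, buffer.length);
            if (count > 0) {
                out.write(buffer, 0, count);
            }
        }
    } catch (LineUnavailableException e) {
        e.printStackTrace();
    }
}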


6. Now let's examine playing audio. There are two key differences in playing audio as compared to recording audio. First, when you play audio, the bytes come from an AudioInputStream instead of a TargetDataLine. Second, you write to a SourceDataLine instead of into a ByteArrayOutputStream. Besides that, the process is the same. To get the AudioInputStream, you need to convert the ByteArrayOutputStream into the source of the AudioInputStream. The AudioInputStream constructor requires the bytes from the output stream, the audio format used, and the number of sample frames:

7. Getting the DataLine is similar to the way you get it for audio recording, but for playing audio, you need to fetch a SourceDataLine instead of a TargetDataLine.

8. Setup for the line is identical to the setup for audio recording: open the line, then start it.
9. The last step is to play the audio as shown below. Notice that this step is similar to the last step in recording. However, here you read from the buffer and write to the data line. There is also an added drain operation that works like a flush on an output stream.


10. Summed up, the whole playing code in a play() method will look like this:
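A sketch of the corresponding play() method, reusing the format and out fields from the record() sketch above (java.io.ByteArrayInputStream is also assumed to be imported):

public void play() {
    try {
        byte[] audio = out.toByteArray();
        AudioInputStream ais = new AudioInputStream(
                new ByteArrayInputStream(audio), format,
                audio.length / format.getFrameSize());          // length in sample frames

        DataLine.Info info = new DataLine.Info(SourceDataLine.class, format);
        SourceDataLine sourceDataLine = (SourceDataLine) AudioSystem.getLine(info);
        sourceDataLine.open(format);
        sourceDataLine.start();

        byte[] buffer = new byte[4096];
        int count;
        while ((count = ais.read(buffer, 0, buffer.length)) != -1) {
            if (count > 0) {
                sourceDataLine.write(buffer, 0, count);          // read from the stream, write to the line
            }
        }
        sourceDataLine.drain();   // like a flush: wait until everything buffered has been played
        sourceDataLine.close();
    } catch (Exception e) {
        e.printStackTrace();
    }
}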

11. Code to stop recording/playing
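A sketch, again using the fields from the record() sketch above:

public void stop() {
    running = false;                  // ends the read loop in record()
    if (targetDataLine != null) {
        targetDataLine.stop();
        targetDataLine.close();
    }
}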


12. Code to construct the GUI.
Put all the code under the Main class, and you're done.
Output:


2.5.2. Exercise 2 - MIDI
For this exercise we will try to make a simple synthesizer. But first, we will try to understand how the Java Synthesizer works, along with its companion classes.
How do we get a Synthesizer object? Since Synthesizer is an interface, we can't instantiate it directly. This is how we get a Synthesizer object:
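A minimal sketch, which also plays the short <ding> mentioned further below (middle C on channel 0 for one second; the class name and note values are arbitrary):

import javax.sound.midi.MidiChannel;
import javax.sound.midi.MidiSystem;
import javax.sound.midi.Synthesizer;

public class SynthDemo {
    public static void main(String[] args) throws Exception {
        // Synthesizer is an interface; MidiSystem hands us the default implementation
        Synthesizer synthesizer = MidiSystem.getSynthesizer();
        synthesizer.open();

        MidiChannel[] channels = synthesizer.getChannels();
        channels[0].noteOn(60, 93);    // note number 60 = middle C, velocity 93
        Thread.sleep(1000);
        channels[0].noteOff(60);

        synthesizer.close();
    }
}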


The Synthesizer interface includes methods for loading and unloading
instruments from soundbanks. An instrument is a specification for synthesizing a
certain type of sound, whether that sound emulates a traditional instrument or is
some kind of sound effect or other imaginary sound. A soundbank is a collection
of instruments, organized by bank and program number (via the instrument's
Patch object).


What all synthesis techniques have in common is the ability to create


many sorts of sounds. Different algorithms, or different settings of parameters
within the same algorithm, create different-sounding results. An instrument is a
specification for synthesizing a certain type of sound. That sound may emulate a
traditional musical instrument, such as a piano or violin; it may emulate some
other kind of sound source, for instance, a telephone or helicopter; or it may
emulate no "real-world" sound at all. A specification called General MIDI defines
a standard list of 128 instruments, but most synthesizers allow other instruments
as well. Many synthesizers provide a collection of built-in instruments that are
always available for use; some synthesizers also support mechanisms for loading
additional instruments.
An instrument may be vendor-specific, in other words, applicable to only
one synthesizer or several models from the same vendor. This incompatibility
results when two different synthesizers use different sound-synthesis techniques,
or different internal algorithms and parameters even if the fundamental technique
is the same. Because the details of the synthesis technique are often proprietary,


incompatibility is common. The Java Sound API includes ways to detect whether
a given synthesizer supports a given instrument.
An instrument can usually be considered a preset; you don't have to know
anything about the details of the synthesis technique that produces its sound.
However, you can still vary aspects of its sound. Each Note On message specifies
the pitch and volume of an individual note. You can also alter the sound through
other MIDI commands such as controller messages or system-exclusive messages.

Many synthesizers are multimbral (sometimes called polytimbral),


meaning that they can play the notes of different instruments simultaneously.
(Timbre is the characteristic sound quality that enables a listener to distinguish
one kind of musical instrument from other kinds.) Multimbral synthesizers can
emulate an entire ensemble of real-world instruments, instead of only one
instrument at a time. MIDI synthesizers normally implement this feature by taking
advantage of the different MIDI channels on which the MIDI specification allows
data to be transmitted. In this case, the synthesizer is actually a collection of
sound-generating units, each emulating a different instrument and responding


independently to messages that are received on a different MIDI channel. Since


the MIDI specification provides only 16 channels, a typical MIDI synthesizer can
play up to 16 different instruments at once. The synthesizer receives a stream of
MIDI commands, many of which are channel commands. (Channel commands are
targeted to a particular MIDI channel; for more information, see the MIDI
specification.) If the synthesizer is multitimbral, it routes each channel command
to the correct sound-generating unit, according to the channel number indicated in
the command.
In the Java Sound API, these sound-generating units are instances of
classes that implement the MidiChannel interface. A synthesizer object has at
least one MidiChannel object. If the synthesizer is multimbral, it has more than
one, normally 16. Each MidiChannel represents an independent sound-generating
unit.
Because a synthesizer's MidiChannel objects are more or less independent,
the assignment of instruments to channels doesn't have to be unique. For example,
all 16 channels could be playing a piano timbre, as though there were an ensemble
of 16 pianos. Any grouping is possible; for instance, channels 1, 5, and 8 could
be playing guitar sounds, while channels 2 and 3 play percussion and channel 12
has a bass timbre. The instrument being played on a given MIDI channel can be
changed dynamically; this is known as a program change.
Even though most synthesizers allow only 16 or fewer instruments to be
active at a given time, these instruments can generally be chosen from a much
larger selection and assigned to particular channels as required.
That's the introduction part of this MIDI chapter. You may run the code; it will produce a <ding> sound. The noteOn() method plays the selected instrument and tone. Let's continue to make our simple synthesizer; it will look like this:


The list on the left-hand side shows all the instruments that Java currently has on your system, and the buttons on the right-hand side play the tones.


1. Construct a JList object that will contain the instrument list, and add it to a JScrollPane, so it will have a scrollbar on its side.


2. Create the methods that will be called when the buttons or the list are clicked later on.
3. Create and instantiate the JButton array, add their event handlers, add the buttons to the JPanel, and add the panel to the JFrame.


4. Create an event handler for the instrumentList, so that when it is clicked it changes the loaded instrument (a consolidated sketch of steps 1 to 4 follows below).
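Pulling steps 1 to 4 together, one possible sketch (layout, note numbers, and velocities are arbitrary choices, and the class name is illustrative):

import java.awt.BorderLayout;
import java.awt.GridLayout;
import javax.sound.midi.Instrument;
import javax.sound.midi.MidiChannel;
import javax.sound.midi.MidiSystem;
import javax.sound.midi.Synthesizer;
import javax.swing.JButton;
import javax.swing.JFrame;
import javax.swing.JList;
import javax.swing.JPanel;
import javax.swing.JScrollPane;

public class SimpleSynthesizer {
    public static void main(String[] args) throws Exception {
        Synthesizer synthesizer = MidiSystem.getSynthesizer();
        synthesizer.open();
        MidiChannel channel = synthesizer.getChannels()[0];
        Instrument[] instruments = synthesizer.getAvailableInstruments();

        // Step 1: the instrument list inside a scroll pane
        String[] names = new String[instruments.length];
        for (int i = 0; i < instruments.length; i++) {
            names[i] = instruments[i].getName();
        }
        JList instrumentList = new JList(names);

        // Step 4: clicking a list entry loads that instrument and switches to it
        instrumentList.addListSelectionListener(e -> {
            int index = instrumentList.getSelectedIndex();
            if (index >= 0) {
                synthesizer.loadInstrument(instruments[index]);
                channel.programChange(instruments[index].getPatch().getProgram());
            }
        });

        // Steps 2 and 3: one button per tone, each with its own handler
        JPanel buttons = new JPanel(new GridLayout(0, 1));
        int[] notes = {60, 62, 64, 65, 67, 69, 71, 72};   // one octave of a C major scale
        for (int note : notes) {
            JButton button = new JButton("Note " + note);
            button.addActionListener(e -> {
                channel.allNotesOff();       // stop whatever was playing
                channel.noteOn(note, 93);    // play the chosen tone
            });
            buttons.add(button);
        }

        JFrame frame = new JFrame("Simple Synthesizer");
        frame.add(new JScrollPane(instrumentList), BorderLayout.CENTER);
        frame.add(buttons, BorderLayout.EAST);
        frame.setSize(400, 400);
        frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        frame.setVisible(true);
    }
}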

5. Then the simple synthesizer is finished; you may build a more sophisticated synthesizer, like this one:


Chapter 03
Digital Video

Objectives
1. Video Digitization and Video Compression
2. Java Media Framework API
3. Quick Time
4. Java Quick Time API


3.1. Video Digitization and Video Compression
Alternatively referred to as a video digitiser, a video digitizer is software that takes an analog video still frame and converts it to a digital still image. This is generally accomplished with the aid of computer hardware.

3.2. Java Media Framework API


The Java Media Framework (JMF) is a recent API for Java dealing with real-time
multimedia presentation and effects processing. JMF handles time-based media, media
which changes with respect to time. Examples of this are video from a television source,
audio from a raw-audio format file and animations.
Stages
The JMF architecture is organized into three stages:

During the input stage, data is read from a source and passed in buffers to the
processing stage. The input stage may consist of reading data from a local capture device
(such as a webcam or TV capture card), a file on disk or stream from the network.
The processing stage consists of a number of codecs and effects designed to modify
the data stream to one suitable for output. These codecs may perform functions such as
compressing or decompressing the audio to a different format, adding a watermark of
some kind, cleaning up noise or applying an effect to the stream (such as echo to the
audio).
Once the processing stage has applied its transformations to the stream, it passes the
information to the output stage. The output stage may take the stream and pass it to a file
on disk, output it to the local video display or transmit it over the network.
For example, a JMF system may read input from a TV capture card from the local
system capturing input from a VCR in the input stage. It may then pass it to the


processing stage to add a watermark in the corner of each frame and finally broadcast it
over the local Intranet in the output stage.

Component Architecture
JMF is built around a component architecture. The components are organized into a
number of main categories:

Media handlers
MediaHandlers are registered for each type of file that JMF must be able to
handle. To support new file formats, a new MediaHandler can be created

Data Sources
A DataSource handler manages source streams from various inputs. These
can be for network protocols, such as http or ftp, or for simple input from disk.

Codec/effects
Codecs and Effects are components that take an input stream, apply a
transformation to it and output it. Codecs may have different input and output
formats, while Effects are simple transformations of a single input format to an
output stream of the same format.

Renderers
A renderer is similar to a Codec, but the final output is somewhere other than
another stream. A VideoRenderer outputs the final data to the screen, but another
kind of renderer could output to different hardware, such as a TV out card.

Mux/Demuxes
Multiplexers and Demultiplexers are used to combine multiple streams into a
single stream or vice-versa, respectively. They are useful for creating and reading
a package of audio and video for saving to disk as a single file, or transmitting
over a network.
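As a small taste of the API, the following sketch (the file path is an assumption, and JMF must be installed separately) creates and starts a Player for a local media file:

import javax.media.Manager;
import javax.media.MediaLocator;
import javax.media.Player;

public class JmfSketch {
    public static void main(String[] args) throws Exception {
        // A MediaHandler (here a Player) is chosen based on the content type
        MediaLocator locator = new MediaLocator("file:///C:/media/sample.mpg");
        Player player = Manager.createPlayer(locator);
        player.start();   // realizes, prefetches, and starts playback asynchronously
    }
}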

3.3. Quick Time

QuickTime provides some powerful constructions that can simplify the production of applications or applets that contain or require multimedia content.
A multimedia presentation can be very complex.
The QuickTime architecture and its API (C or Java) can help to simplify this process.

3.4. Java Quick Time API


If you're a Java or QuickTime programmer and want to harness the power of
QuickTime's multimedia engine, you'll find a number of important advantages to using
the QuickTime for Java API. A C Quicktime API also exists. But we focus only on the
Java API in this course.
For one thing, the API lets you access QuickTime's native runtime libraries and,
additionally, provides you with a feature rich Application Framework that enables you to
integrate QuickTime capabilities into Java software.
Aside from representation in Java, QuickTime for Java also provides a set of
packages that forms the basis for the Application Framework found in the
quicktime.app group. The focus of these packages is to present different kinds of

media. The framework uses the interfaces in the quicktime.app packages to abstract and
express common functionality that exists between different QuickTime objects.
As such, the services that the QuickTime for Java Application Framework renders to
the developer can be viewed as belonging to the following categories:

creation of objects that present different forms of media, using QTFactory.makeDrawable() methods

various utilities (classes and methods) for dealing with single images as well as
groups of related images

spaces and controllers architecture, which enables you to deal with complex data
generation or presentation requirements

composition services that allow the complex layering and blending of different
image sources

timing services that enable you to schedule and control time-related activities

video and audio media capture from external sources

exposure of the QuickTime visual effects architecture

All of these are built on top of the services that QuickTime provides. The provided interfaces and classes in the quicktime.app packages can be used as a basis for developers to build on and extend in new ways, not just as a set of utility classes you can use.
The media requirements for such presentations can also be complex. They may
include "standard" digital video, animated characters, and customized musical
instruments. QuickTime's ability to reference movies that exist on local and remote
servers provides a great deal of flexibility in the delivery of digital content.
A movie can also be used to contain the media for animated characters and/or
customized musical instruments. For example, a cell-based sprite animation can be built
where the images that make up the character are retrieved from a movie that is built
specifically for that purpose. In another scenario, a movie can be constructed that
contains both custom instruments and a description of instruments to be used from
QuickTime's built-in Software Synthesizer to play a tune.
In both cases we see a QuickTime movie used to contain media and transport this
media around. Your application then uses this media to recreate its presentation. The
movie in these cases is not meant to be played but is used solely as a media container.
This movie can be stored locally or remotely and retrieved by the application when it is
actually viewed. Of course, the same technique can be applied to any of the media types
that QuickTime supports. The sprite images and custom instruments are only two
possible applications of this technique.
A further interesting use of QuickTime in this production space is the ability of a
QuickTime movie to contain the media data that it presents as well as to hold a reference
to external media data. For example, this enables both an artist to be working on the
images for an animated character and a programmer to be building the animation using
these same images. This can save time, as the production house does not need to keep
importing the character images, building intermediate data containers, and so on. As the
artist enhances the characters, the programmer can immediately see these in his or her
animation, because the animation references the same images.
Following the 2003 release of QTJ 6.1, Apple has made few updates to QTJ, mostly
fixing bugs. Notably, QuickTime 7 was the first version of QuickTime not to be
accompanied or followed by a QTJ release that wrapped the new native APIs.
QuickTime 7's new APIs, such as those for working with metadata and with frame-reordering codecs, are not available to QTJ programmers. Apple has also not offered new classes to provide the capture preview functionality that was present in versions of QTJ prior to 6.1. Indeed, QTJ is dependent on some native APIs that Apple no longer
recommends, most notably QuickDraw. In short, QTJ has been deprecated by Apple.

3.5. Exercises
3.5.1. Exercise 1 - Video Digitization and Video Compression
For this exercise we will try to make an image capturer from a webcam, so we will need a webcam. The application will look like this:

Before we start, note that the Java Media Framework only recognizes VfW (Video for Windows) / WDM (Windows Driver Model) supported webcams. To check whether your webcam is supported by Java, open Start -> All Programs -> Java Media Framework 2.1.1e -> JMFRegistry, then go to the Capture Devices tab. There should be a vfw:<insert webcam brand here> WDM Image Capture (Win32):0 entry. If your webcam was installed after installing the Java Media Framework, JMFRegistry won't recognize it yet. To register your webcam, press the Detect Capture Device button. If your webcam is still not listed, then it is probably not supported.
1. Code the GUI part.


2. Detect the capture device using the CaptureDeviceManager class, and search for the webcam among the listed capture devices, as sketched below.
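The original listing is a screenshot, so here is a minimal sketch of the detection step. The helper name findWebcamDevice is an assumption, not the lab's exact code:

    import java.util.Vector;
    import javax.media.CaptureDeviceInfo;
    import javax.media.CaptureDeviceManager;

    // Ask JMF for every registered capture device and pick the first
    // vfw: entry, which is how VfW/WDM webcams are registered.
    private CaptureDeviceInfo findWebcamDevice() {
        Vector devices = CaptureDeviceManager.getDeviceList(null);
        for (int i = 0; i < devices.size(); i++) {
            CaptureDeviceInfo info = (CaptureDeviceInfo) devices.elementAt(i);
            System.out.println("Found capture device: " + info.getName());
            if (info.getName().toLowerCase().startsWith("vfw:")) {
                return info;
            }
        }
        return null;
    }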


3. Load the captured image from the webcam into the GUI.


4. Code the image capture, and use a JFileChooser to choose the directory where the file is saved, as sketched below.
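Again the original code is only shown as a screenshot; this sketch assumes a Player field named player that was created from the webcam's MediaLocator in the previous steps, and the helper name saveSnapshot is illustrative:

    import java.awt.Image;
    import java.awt.image.BufferedImage;
    import javax.imageio.ImageIO;
    import javax.media.Buffer;
    import javax.media.control.FrameGrabbingControl;
    import javax.media.format.VideoFormat;
    import javax.media.util.BufferToImage;
    import javax.swing.JFileChooser;

    private void saveSnapshot() throws Exception {
        // Grab the current frame from the running player.
        FrameGrabbingControl fgc = (FrameGrabbingControl)
                player.getControl("javax.media.control.FrameGrabbingControl");
        Buffer frame = fgc.grabFrame();
        Image img = new BufferToImage((VideoFormat) frame.getFormat()).createImage(frame);

        // Copy the frame into a BufferedImage so that ImageIO can write it.
        BufferedImage snapshot = new BufferedImage(
                img.getWidth(null), img.getHeight(null), BufferedImage.TYPE_INT_RGB);
        snapshot.getGraphics().drawImage(img, 0, 0, null);

        // Let the user pick where the file is saved.
        JFileChooser chooser = new JFileChooser();
        if (chooser.showSaveDialog(null) == JFileChooser.APPROVE_OPTION) {
            ImageIO.write(snapshot, "png", chooser.getSelectedFile());
        }
    }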

The needed import statements appear at the top of the sketches above.


3.5.2. Exercise 2 - Quick Time and Java Quick Time API


For this exercise we will try to make a simple video player application using QuickTime for Java.
1. Build the GUI.

2. Create the QuickTime component.


3. Write a method to load the movie.

4. Add an event handler to the Open button.
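The listings for these steps are screenshots in the original. The sketch below shows one plausible way to wire them together with QuickTime for Java; the method name playMovie and the Swing layout are assumptions:

    import java.awt.BorderLayout;
    import java.io.File;
    import javax.swing.JFrame;
    import quicktime.QTSession;
    import quicktime.app.view.QTFactory;
    import quicktime.io.OpenMovieFile;
    import quicktime.io.QTFile;
    import quicktime.std.movies.Movie;

    public void playMovie(File file) throws Exception {
        QTSession.open();   // must be called before any other QuickTime call
        OpenMovieFile movieFile = OpenMovieFile.asRead(new QTFile(file.getAbsolutePath()));
        Movie movie = Movie.fromFile(movieFile);

        JFrame frame = new JFrame("QuickTime Player");
        frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        frame.add(QTFactory.makeQTComponent(movie).asComponent(), BorderLayout.CENTER);
        frame.pack();
        frame.setVisible(true);
        movie.start();
    }

The Open button's event handler would typically show a JFileChooser, pass the selected file to playMovie(), and catch any exception thrown.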


Output:


Chapter 04
Graphic 2D

Objectives
1. What is Java2D?
2. What Can Java 2D Do?
3. Drawing on Components
4. Drawing on Images
5. Graphics2D
6. Line2D
7. Rectangle2D
8. Ellipse2D and Arc2D
9. Text
10. Animation


4.1. What is Java2D?
The Java 2D Application Programming Interface (the 2D API) is a set of classes that can
be used to create high quality graphics. It includes features like geometric transformation,
antialiasing, alpha compositing, and image processing.
Java 2D is part of the core classes of the Java 2 platform (formerly JDK 1.2). The
2D API introduces new classes in the following packages:

java.awt
java.awt.image

In addition, the 2D API encompasses new packages:

java.awt.color
java.awt.font
java.awt.geom
java.awt.print
java.awt.image.renderable

4.2. What Can Java 2D Do?


Java 2D is designed to do anything you want it to do (with computer graphics, at
least). Prior to Java 2D, AWT's graphics toolkit had some serious limitations:

All lines were drawn with a single-pixel thickness.

Only a handful of fonts were available.

AWT didn't offer much control over drawing. For example, you couldn't
manipulate the individual shapes of characters.

If you wanted to rotate or scale anything, you had to do it yourself.

If you wanted special fills, like gradients or patterns, you had to make them
yourself.

Image support was rudimentary.

Control of transparency was awkward.


And this is what you can do with Java2D

You can view the Java2D demo at C:\Program Files\Java\[jdk folder]\demo\jfc\Java2D


This list will explain generally what Java2D can do:

Shapes
Arbitrary geometric shapes can be represented by combinations of straight lines
and curves. The 2D API also provides a useful toolbox of standard shapes, like
rectangles, arcs, and ellipses.

Stroking
Lines and shape outlines can be drawn as a solid or dotted line of any width, a process called stroking. You can define any dotted-line pattern and specify how shape corners and line ends should be drawn.

Filling
Shapes can be filled using a solid color, a pattern, a color gradient, or anything
else you can imagine.


Transformations
Everything that's drawn in the 2D API can be stretched, squished, and rotated.
This applies to shapes, text, and images. You tell 2D what transformation you want
and it takes care of everything.

Alpha compositing
Compositing is the process of adding new elements to an existing drawing. The
2D API gives you considerable flexibility by using the Porter-Duff compositing
rules.

Clipping
Clipping is the process of limiting the extent of drawing operations. For example,
drawing in a window is normally clipped to the window's bounds. In the 2D API,
however, you can use any shape for clipping.

Antialiasing
Antialiasing is a technique that reduces jagged edges in drawings. The 2D API
takes care of the details of producing antialiased drawing.

Text
The 2D API can use any TrueType or Type 1 font installed on your system. You
can render strings, retrieve the shapes of individual strings or letters, and manipulate
text in the same ways that shapes are manipulated.

Color
The 2D API includes classes and methods that support representing colors in
ways that don't depend on any particular hardware or viewing conditions.

Images
The 2D API supports doing the same neat stuff with images that you can do with
shapes and text. Specifically, you can transform images, use clipping shapes, and use
alpha compositing with images. Java 2 also includes a set of classes for loading and
saving images in the JPEG format.

Image processing
The 2D API also includes a set of classes for processing images. Image
processing is used to highlight certain aspects of pictures, to achieve aesthetic effects,
or to clean up messy scans.


Printing
Java developers have a decent way to print. The Printing API is part of the 2D
API and provides a compact, clean solution to the problem of producing output on a
printer.

4.3. Drawing on Components
Every GUI component (we only use JPanel in our examples) shown on the screen has a paint() method. The system passes a Graphics to this method. In JDK 1.1 and earlier, you could draw on Components by overriding the paint() method and using the Graphics to draw things. It works exactly the same way in Java 2, except that it's a Graphics2D that is passed to paint(). To take advantage of all the spiffy 2D features, you'll have to perform a cast in your paint() method, like this:
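A minimal version looks like this:

    public void paint(Graphics g) {
        Graphics2D g2 = (Graphics2D) g;
        // all Java 2D calls go through g2 from here on
    }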

Note that your component may not necessarily be drawn on the screen. The
Graphics2D that gets passed to paint() might actually represent a printer or any other
output device.
Swing components work almost the same way. Strictly speaking, however, you should
implement the paintComponent() method instead of paint(). Swing uses the paint()
method to draw child components. Swing's implementation of paint() calls
paintComponent() to draw the component itself. You may be able to get away with

implementing paint() instead of paintComponent(), but then don't be surprised if the


component is not drawn correctly.

4.4. Drawing on Images
You can use a Graphics or Graphics2D to draw on images, as well. If you have an Image that you have created yourself, you can get a corresponding Graphics2D from it (this works only for an Image you've created yourself, not for an Image loaded from a file). If you have a BufferedImage (Java 2D's new image class), you can obtain a Graphics2D by calling createGraphics(). Both cases are shown below:
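Both snippets are screenshots in the original; this combined sketch covers the two cases (the sizes are arbitrary):

    // Inside a Component subclass: an off-screen Image and a Graphics2D for it.
    Image image = createImage(getWidth(), getHeight());
    Graphics2D g2 = (Graphics2D) image.getGraphics();

    // A BufferedImage: createGraphics() already returns a Graphics2D.
    BufferedImage bufferedImage =
            new BufferedImage(200, 100, BufferedImage.TYPE_INT_ARGB);
    Graphics2D big = bufferedImage.createGraphics();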

Starter: Code Template


To maintain simplicity, each of our examples will use this template to manage the GUI window. If you are still having difficulties using the GUI, it is recommended that you use this template.
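The template itself is a screenshot in the original; a minimal equivalent (the class name DrawingTemplate is arbitrary) is:

    import java.awt.Graphics;
    import java.awt.Graphics2D;
    import javax.swing.JFrame;
    import javax.swing.JPanel;

    public class DrawingTemplate extends JPanel {

        @Override
        public void paintComponent(Graphics g) {
            super.paintComponent(g);
            Graphics2D g2 = (Graphics2D) g;
            // the drawing code of each example goes here
        }

        public static void main(String[] args) {
            JFrame frame = new JFrame("Java 2D Example");
            frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
            frame.add(new DrawingTemplate());
            frame.setSize(400, 300);
            frame.setVisible(true);
        }
    }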

4.5. Graphics2D
Rendering is the process of taking a collection of shapes, text, and images and
figuring out what colors the pixels should be on a screen or printer. Shapes, text, and
images are called graphics primitives; screens and printers are called output devices. If I
wanted to be pompous, I'd tell you that rendering is the process of displaying graphics


primitives on output devices. A rendering engine performs this work; in the 2D API, the
Graphics2D class is the rendering engine. The 2D rendering engine takes care of the
details of underlying devices and can accurately reproduce the geometry and color of a
drawing, regardless of the device that displays it.

Coordinate Space
The Java coordinate space is different from what we learn at school. The X value increases as we go to the right, and the Y value increases as we go down (the opposite of the regular Cartesian coordinate system). Please keep this in mind, as this is essential to your drawings.


Start Drawing
Take a look at this sample code; it is useful for getting the big picture of what we will learn.
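The listing is a screenshot in the original; the sketch below reproduces what the text describes (a rectangle, an ellipse, colors, and strings in two fonts). Drop an instance of this panel into the JFrame from the template above:

    import java.awt.Color;
    import java.awt.Font;
    import java.awt.Graphics;
    import java.awt.Graphics2D;
    import java.awt.geom.Ellipse2D;
    import java.awt.geom.Rectangle2D;
    import javax.swing.JPanel;

    public class StartDrawing extends JPanel {

        @Override
        public void paintComponent(Graphics g) {
            super.paintComponent(g);
            Graphics2D g2 = (Graphics2D) g;

            Rectangle2D rect = new Rectangle2D.Double(20, 20, 120, 80);
            Ellipse2D ellipse = new Ellipse2D.Double(160, 20, 120, 80);

            g2.setColor(Color.BLUE);
            g2.fill(rect);                                   // filled blue rectangle
            g2.setColor(Color.RED);
            g2.draw(ellipse);                                // red ellipse outline

            g2.setFont(new Font("Serif", Font.BOLD, 24));
            g2.drawString("Hello Java2D", 20, 140);
            g2.setFont(new Font("SansSerif", Font.ITALIC, 18));
            g2.drawString("Different fonts, too", 20, 170);
        }
    }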


Output:

What we do here is create a rectangle and an ellipse, set colors, draw the shapes, and draw strings using different fonts. It's not really hard if you understand the concepts.

Geometry
Java 2D allows you to represent any shape as a combination of straight and curved line
segments. The rectangle and ellipse above are two of them.
Drawable objects in Java2D implement the java.awt.Shape interface. You can refer to the JDK documentation to look for the available shapes.


Points
The java.awt.geom.Point2D class encapsulates a single point (an x and a y) in User
Space. It is the most basic of the Java 2D classes and is used throughout the API. Note
that a point is not the same as a pixel. A pixel is a tiny square (ideally) on a screen or
printer that contains some color. A point, by contrast, has no area, so it can't be rendered.
Points are used to build rectangles or other shapes that have area and can be rendered.
Point2D demonstrates an inheritance pattern that is used throughout java.awt.geom. In particular, Point2D is an abstract class with inner child classes that provide concrete implementations.

The subclasses provide different levels of precision for storing the coordinates of
the point. The original java.awt.Point, which dates back to JDK 1.0, stores the
coordinates as integers. Java 2D provides Point2D.Float and Point2D.Double for
higher precision. You can either set a point's location or find out where it is:

This method sets the position of the point. Although it accepts double values, be aware
that the underlying implementation may not store the coordinates as double values.

This method sets the position of the point using the coordinates of another Point2D.

This method returns the x (horizontal) coordinate of the point as a double.


This method returns the y (vertical) coordinate of the point as a double.


Point2D also includes a handy method for calculating the distance between two points:

Use this method to calculate the distance between this Point2D and the point specified by
px and py.

This method calculates the distance between this Point2D and pt.
The inner child class Point2D.Double has two constructors:

This constructor creates a Point2D.Double at the coordinates 0, 0.

This constructor creates a Point2D.Double at the given coordinates.


Point2D.Float has a similar pair of constructors, based around floats instead of doubles:

Furthermore, Point2D.Float provides an additional setLocation() method that


accepts floats instead of doubles:

This method sets the location of the point using the given coordinates. Why use floats
instead of doubles? If you have special concerns about the speed of your application or
interfacing with an existing body of code, you might want to use Point2D.Float.
Otherwise, I suggest using Point2D.Double, since it provides the highest level of
precision.
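For example (with java.awt.geom.Point2D imported):

    Point2D.Double origin = new Point2D.Double();        // defaults to (0, 0)
    Point2D.Double p = new Point2D.Double(3.0, 4.0);
    double d = origin.distance(p);                       // 5.0
    p.setLocation(10.0, 20.0);
    System.out.println(p.getX() + ", " + p.getY());      // 10.0, 20.0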

Shapes and Paths


Two of Graphics2D's basic operations are filling shapes and drawing their outlines. But
Graphics2D doesn't know much about geometry, as the song says. In fact, Graphics2D


only knows how to draw one thing: a java.awt.Shape. The Shape interface represents a
geometric shape, something that has an outline and an interior. With Graphics2D, you
can draw the border of the shape using draw(), and you can fill the inside of a shape
using fill().
The java.awt.geom package is a toolbox of useful classes that implement the Shape
interface. There are classes that represent ellipses, arcs, rectangles, and lines. First, I'll
talk about the Shape interface, and then briefly discuss the java.awt.geom package.

Lines
The 2D API includes shape classes that represent straight and curved line segments.
These classes all implement the Shape interface, so they can be rendered and manipulated
like any other Shape. Although you could create a single straight or curved line segment
yourself using GeneralPath, it's easier to use these canned shape classes. It's interesting
that these classes are Shapes, even though they represent the basic segment types that
make up a Shape's path.

4.6. Line2D
The java.awt.geom.Line2D class represents a line whose coordinates can be retrieved
as doubles. Like Point2D, Line2D is abstract. Subclasses can store coordinates in any
way they wish.

Line2D includes several setLine() methods you can use to set a line's endpoints:


This method sets the endpoints of the line to x1, y1, and x2, y2.

This method sets the endpoints of the line to p1 and p2.

This method sets the endpoints of the line to be the same as the endpoints of the given
line. Here are the constructors for the Line2D.Float class. Two of them allow you to
specify the endpoints of the line, which saves you the trouble of calling setLine().

This constructor creates a line whose endpoints are 0, 0 and 0, 0.

This constructor creates a line whose endpoints are x1, y1 and x2, y2.


This constructor creates a line whose endpoints are p1 and p2.


The Line2D.Double class has a corresponding set of constructors:

The following code shows how to use Line2D and Point2D:
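The listing itself is not reproduced here, so the following is a sketch (again a JPanel that can be dropped into the template):

    import java.awt.Graphics;
    import java.awt.Graphics2D;
    import java.awt.geom.Line2D;
    import java.awt.geom.Point2D;
    import javax.swing.JPanel;

    public class LinePanel extends JPanel {

        @Override
        public void paintComponent(Graphics g) {
            super.paintComponent(g);
            Graphics2D g2 = (Graphics2D) g;

            Point2D.Double start = new Point2D.Double(30, 30);
            Point2D.Double end = new Point2D.Double(200, 120);

            g2.draw(new Line2D.Double(start, end));

            // A second line built with setLine(), sharing one endpoint.
            Line2D.Double other = new Line2D.Double();
            other.setLine(start.getX(), start.getY(), 200, 30);
            g2.draw(other);
        }
    }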


Output:

4.7. Rectangle2D
Like the Point2D class, java.awt.geom.Rectangle2D is abstract. Two inner subclasses,
Rectangle2D.Double and Rectangle2D.Float, provide concrete representations.


Because so much functionality is already in Rectangle2D's parent class, RectangularShape, there are only a few new methods defined in Rectangle2D. First, you can set a Rectangle2D's position and size using setRect():

These methods work just like the setFrame() methods with the same argument types.
Two methods are provided to test if a line intersects a rectangle:

If the line described by x1, y1, x2, and y2 intersects this Rectangle2D, this method returns true.

If the line represented by l intersects this Rectangle2D, this method returns true.
Two other methods will tell where a point is, with respect to a rectangle. Rectangle2D
includes some constants that describe the position of a point outside a rectangle. A
combination of constants is returned as appropriate.

These methods return some combination of OUT_TOP, OUT_BOTTOM, OUT_LEFT, and OUT_RIGHT, indicating where the given point lies with respect to this Rectangle2D. For points inside the rectangle, this method returns 0.


RoundRectangle2D
A round rectangle is a rectangle with curved corners, represented by instances of
java.awt.geom.RoundRectangle2D.

Round rectangles are specified with a location, a width, a height, and the height and
width of the curved corners.

To set the location, size, and corner arcs of a RoundRectangle2D, use the following method:


This method sets the location of this round rectangle to x and y. The width and height
are provided by w and h, and the corner arc lengths are arcWidth and arcHeight. Like
the other geometry classes, RoundRectangle2D is abstract. Two concrete subclasses are
provided. RoundRectangle2D.Float uses floats to store its coordinates. One of its
constructors allows you to completely specify the rounded rectangle:

This constructor creates a RoundRectangle2D.Float using the specified location, width, height, arc width, and arc height.
RoundRectangle2D.Double has a corresponding constructor:


Sample Code:
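The sample is a screenshot in the original; a sketch with arbitrary sizes:

    import java.awt.Graphics;
    import java.awt.Graphics2D;
    import java.awt.geom.RoundRectangle2D;
    import javax.swing.JPanel;

    public class RoundRectPanel extends JPanel {

        @Override
        public void paintComponent(Graphics g) {
            super.paintComponent(g);
            Graphics2D g2 = (Graphics2D) g;

            // A 160x90 round rectangle at (20, 20) with 30x30 corner arcs.
            RoundRectangle2D rr = new RoundRectangle2D.Double(20, 20, 160, 90, 30, 30);
            g2.draw(rr);

            // Reshape the same object with setRoundRect() and fill it.
            rr.setRoundRect(200, 20, 120, 90, 50, 20);
            g2.fill(rr);
        }
    }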

Output:


4.8. Ellipse2D and Arc2D


Ellipse2D
An ellipse, like a rectangle, is fully defined by a location, a width, and a height. Like the other geometry classes, java.awt.geom.Ellipse2D is abstract. A
concrete inner subclass, Ellipse2D.Float, stores its coordinates as floats:

This constructor creates an Ellipse2D.Float using the specified location, width, and height.
Another inner subclass, Ellipse2D.Double, offers a corresponding constructor:

Arc2D
The 2D API includes java.awt.geom.Arc2D for drawing pieces of an ellipse. Arc2D
defines three different kinds of arcs.

This constant represents an open arc. This simply defines a curved line that is a
portion of an ellipse's outline.

This constant represents an arc in the shape of a slice of pie. This outline is produced
by drawing the curved arc as well as straight lines from the arc's endpoints to the center
of the ellipse that defines the arc.

In this arc type, a straight line is drawn to connect the two endpoints of the arc.


Arc2D provides quite a few different ways to set the size and parameters of the arc:

This method sets the shape of the arc using the given parameters. The arc will be part
of the ellipse defined by x, y, w, and h. x and y are the top left corner of the bounding
box of the ellipse, while w and h are the width and height of the bounding box. The given
angles determine the start angle and angular extent of the arc. Finally, the new arc has the
type given by closure.

This method is the same as the previous method, except the supplied Point2D and
Dimension2D are used to specify the location and size of the whole ellipse.

This method is the same as the previous method, except the given rectangle gives the
location and size of the arc's ellipse.


This method makes this arc have the same shape as the supplied Arc2D.

This method specifies an arc from a center point, given by x and y, and a radius. The
angle start, angle extent, and closure parameters are the same as before. Note that this
method will only produce arcs that are part of a circle. Like the other geometry classes,
Arc2D is abstract, with Arc2D.Float and Arc2D.Double as concrete subclasses.
Arc2D.Float has four useful constructors:

This constructor creates a new OPEN arc.

This constructor creates a new arc of the given type, which should be OPEN, PIE, or
CHORD.

This constructor is the same as the previous constructor, but it uses the supplied
rectangle as the ellipse's bounding box. The other inner subclass, Arc2D.Double, has four
corresponding constructors:


Sample:
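The sample is a screenshot in the original; a sketch showing an ellipse and the three arc closure types:

    import java.awt.Graphics;
    import java.awt.Graphics2D;
    import java.awt.geom.Arc2D;
    import java.awt.geom.Ellipse2D;
    import javax.swing.JPanel;

    public class ArcPanel extends JPanel {

        @Override
        public void paintComponent(Graphics g) {
            super.paintComponent(g);
            Graphics2D g2 = (Graphics2D) g;

            // A plain ellipse.
            g2.draw(new Ellipse2D.Double(20, 20, 120, 80));

            // Three arcs with the same start angle and extent, one per closure type.
            g2.draw(new Arc2D.Double(160, 20, 120, 80, 30, 120, Arc2D.OPEN));
            g2.draw(new Arc2D.Double(160, 120, 120, 80, 30, 120, Arc2D.PIE));
            g2.draw(new Arc2D.Double(20, 120, 120, 80, 30, 120, Arc2D.CHORD));
        }
    }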


Output:

There are two things you need to know about arc angles:
a. They're measured in degrees.
b. The arc angles aren't really genuine angles. Instead, they're defined relative to the arc's ellipse, such that a 45-degree arc angle is always defined as the line from the center of the ellipse through a corner of the bounding box of the ellipse.

4.9. Text
Drawing strings is simple. You can use one of these methods:

This method draws the given string, using the current font, at the location specified
by x and y. If you want to change the font, you can use the Font class.


This Font constructor creates a new font with the specified name (e.g. Times New Roman, Comic Sans MS), the specified font size, and a style. The style parameter is filled with one of these constants:

Font.BOLD gives the bold style to the font.

Font.ITALIC gives the italic style to the font.

Font.PLAIN gives the standard plain style to the font.
Sample:
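A sketch of the sample (font availability depends on the system):

    import java.awt.Font;
    import java.awt.Graphics;
    import java.awt.Graphics2D;
    import javax.swing.JPanel;

    public class TextPanel extends JPanel {

        @Override
        public void paintComponent(Graphics g) {
            super.paintComponent(g);
            Graphics2D g2 = (Graphics2D) g;

            g2.setFont(new Font("Times New Roman", Font.BOLD, 28));
            g2.drawString("Bold Times", 20, 40);

            g2.setFont(new Font("Comic Sans MS", Font.ITALIC, 22));
            g2.drawString("Italic Comic Sans", 20, 80);

            g2.setFont(new Font("SansSerif", Font.PLAIN, 16));
            g2.drawString("Plain sans-serif", 20, 120);
        }
    }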


Output:

4.10. Animation
Animation is the rapid display of a sequence of images. The rate at which one image changes to the next is measured in fps (frames per second). An animation usually uses 24 fps, which means that every second there are 24 image changes. To make an animation in Java, we just need to update the image at every time step. Take a look at this example:
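The example itself is a screenshot; the sketch below is a reconstruction that matches the explanation that follows (a Swing Timer firing roughly 24 times per second and an off-screen buffer drawn through a Graphics2D called temp):

    import java.awt.Color;
    import java.awt.Graphics;
    import java.awt.Graphics2D;
    import java.awt.event.ActionEvent;
    import java.awt.event.ActionListener;
    import java.awt.image.BufferedImage;
    import javax.swing.JPanel;
    import javax.swing.Timer;

    public class AnimationPanel extends JPanel implements ActionListener {

        private int x = 0;                                    // animated position

        public AnimationPanel() {
            new Timer(1000 / 24, this).start();               // ~24 fps
        }

        public void actionPerformed(ActionEvent e) {
            x = (x + 5) % 300;                                // advance the animation
            repaint();
        }

        @Override
        public void paintComponent(Graphics g) {
            super.paintComponent(g);
            if (getWidth() <= 0 || getHeight() <= 0) return;

            // Draw the next frame into an off-screen buffer first.
            BufferedImage buffer =
                    new BufferedImage(getWidth(), getHeight(), BufferedImage.TYPE_INT_RGB);
            Graphics2D temp = buffer.createGraphics();
            temp.setColor(Color.WHITE);
            temp.fillRect(0, 0, getWidth(), getHeight());
            temp.setColor(Color.RED);
            temp.fillOval(x, 50, 30, 30);

            // Then copy the finished frame to the screen in one step.
            g.drawImage(buffer, 0, 0, null);
        }
    }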


The timer is used to control the animation timing. Every 41.66 (1000/24) milliseconds, the actionPerformed method is called to render the animation. Notice that we use two Graphics objects. We create a BufferedImage and use its Graphics2D object, called temp, to store the next frame image. After the drawing is completed on temp, the content of temp is drawn on g. This is called double buffering.

4.11. Exercise
Create a bouncing ball animation using Graphics2D. You can choose the ball's starting location anywhere. If the ball hits the frame's boundary, change its direction.



What we need to do:


1. Implement ActionListener in the panel.
2. Create a Timer with the panel as its listener, to respond to every animation frame.

3. Set the speed of the ball. In this example, we set it to 10.

4. Create the ball

5. Handle how the panel draws the ball

6. Handle what to do in every frame


In this function we need to do 2 things:

Draw the graphics from the temporary buffer to the panel.

Handle the ball's movement.

Full source code:
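The original full listing is a screenshot; one possible version that follows the steps above (starting at (50, 50), speed 10, 24 fps) is:

    import java.awt.Color;
    import java.awt.Graphics;
    import java.awt.Graphics2D;
    import java.awt.event.ActionEvent;
    import java.awt.event.ActionListener;
    import java.awt.image.BufferedImage;
    import javax.swing.JFrame;
    import javax.swing.JPanel;
    import javax.swing.Timer;

    public class BouncingBall extends JPanel implements ActionListener {

        private static final int SIZE = 30;     // ball diameter
        private int x = 50, y = 50;             // starting location
        private int dx = 10, dy = 10;           // speed per frame

        public BouncingBall() {
            new Timer(1000 / 24, this).start(); // one tick per animation frame
        }

        public void actionPerformed(ActionEvent e) {
            // Reverse direction when the ball hits the panel boundary.
            if (x + dx < 0 || x + dx + SIZE > getWidth())  dx = -dx;
            if (y + dy < 0 || y + dy + SIZE > getHeight()) dy = -dy;
            x += dx;
            y += dy;
            repaint();
        }

        @Override
        public void paintComponent(Graphics g) {
            super.paintComponent(g);
            if (getWidth() <= 0 || getHeight() <= 0) return;

            // Draw the ball into a temporary buffer, then copy it to the panel.
            BufferedImage buffer =
                    new BufferedImage(getWidth(), getHeight(), BufferedImage.TYPE_INT_RGB);
            Graphics2D temp = buffer.createGraphics();
            temp.setColor(Color.WHITE);
            temp.fillRect(0, 0, getWidth(), getHeight());
            temp.setColor(Color.ORANGE);
            temp.fillOval(x, y, SIZE, SIZE);
            g.drawImage(buffer, 0, 0, null);
        }

        public static void main(String[] args) {
            JFrame frame = new JFrame("Bouncing Ball");
            frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
            frame.add(new BouncingBall());
            frame.setSize(400, 300);
            frame.setVisible(true);
        }
    }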



Chapter 05
Object 3D

Objectives
1. What is jMonkeyEngine?
2. jMonkey Platform
3. Code
4. Understanding the Code


5.1. What is jMonkeyEngine?
It's a game engine, made especially for game developers who want to create 3D games with modern technology standards. The software is programmed entirely in Java, intended for wide accessibility and quick deployment.
Key Features

Free, open source software, as covered by the New BSD license. Meaning, you
are free to do with it as you would like.

Minimal adaptations for cross-compatibility: runs on any OpenGL 2-ready device with the Java Virtual Machine.

Built around a shader-based architecture. This ensures excellent compliance with current and next generation graphics standards.

jMonkeyPlatform, a complete IDE with graphical editors and plugin capabilities, a.k.a. the jMonkeyEngine 3 SDK.

Complete modularity keeps the end-developer in power, while getting all game
development essentials straight out of the box.

The jMonkeyEngine framework (jME) is a high-performance, 3D scenegraph-based graphics API with state-of-the-art features. The engine is written in Java and uses LWJGL for OpenGL access. jME3 is completely open-source under the BSD license and it is free to use in any way you see fit, be it hobby, educational, or commercial. At the time of writing, the third release of jMonkeyEngine is in active development (Alpha 4).

5.2. jMonkey Platform
The jMonkeyPlatform is a game development environment for the jMonkeyEngine 3.
It is the main component of the jMonkeyEngine SDK (software development kit). You
can download the platform at www.jmonkeyengine.org
What does the jMonkeyPlatform deliver?

A fully integrated SDK for jMonkeyEngine 3 game development.

Easy update to current plugin and jme3 versions (stable + nightly SVN) via the
update center.


One unifying development platform that prevents scattered content design projects (like in jme2).

A platform for plugins: jMonkeyPlatform is a stand-alone download that can be extended with plugins.

Independent development possibilities by using NetBeans platform plugin API.

What Does This Mean for a Non-Programmer?


Game Designers get to

Easily install the game engine and everything they need to test and preview WIP
material.

Use tools user-friendly enough to construct multimedia game content without a programming background: terrain and scene editing, lightweight scripting, and more.

Effectively be much more integrated in the technical course of game development.

Documentation
To get help right in jMonkeyPlatform, press F1. The help lets you browse a local
copy of the documentation pulled from this wiki, and also contains info about installed
plugins. If you come from another IDE and want to try jMonkeyPlatform, you can easily
set up the keyboard shortcut mappings to resemble the configuration you are used to.
Profiles exist for Eclipse, IntelliJ, and others. Just go to Settings -> Keymap and select one of the existing profiles.

5.3. Code
Writing a simple Application
Now we will begin writing some jME code. Follow these steps.
1. Create a new project. Select file -> new project at the menu bar.


2. Select JME3 -> BasicGame and then click next.

3. Change the project name and location, and then click finish.


4. At the left panel, you will see your created project. Open the source folder and you
will see the main.java

5. Delete the current content of main.java, and replace it with this code.
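The code is a screenshot in the original. The sketch below is the standard "blue cube" starter for jME3 and should be close to what the lab intends; note that the material definition path and color parameter name vary slightly between jME3 alpha builds:

    package mygame;

    import com.jme3.app.SimpleApplication;
    import com.jme3.material.Material;
    import com.jme3.math.ColorRGBA;
    import com.jme3.math.Vector3f;
    import com.jme3.scene.Geometry;
    import com.jme3.scene.shape.Box;

    public class Main extends SimpleApplication {

        public static void main(String[] args) {
            Main app = new Main();
            app.start();
        }

        @Override
        public void simpleInitApp() {
            // A 2x2x2 box mesh centered at the origin.
            Box mesh = new Box(Vector3f.ZERO, 1, 1, 1);
            Geometry geom = new Geometry("Box", mesh);

            // A solid blue, unshaded material.
            Material mat = new Material(assetManager, "Common/MatDefs/Misc/Unshaded.j3md");
            mat.setColor("Color", ColorRGBA.Blue);
            geom.setMaterial(mat);

            // Nothing is visible until it is attached to the rootNode.
            rootNode.attachChild(geom);
        }
    }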


6. Now press F6 to run the application. You will see this configuration GUI. Follow the settings below and select OK.

7. Use the W, A, S, D keys to move around the world and the mouse to rotate the camera. Press Escape to exit.

Now you have created your first game.


5.4. Understanding the Code


Here are some basic rules that are valid for all JME3 games:
Starting the Game
Note that the HelloJME3.java class extends com.jme3.app.SimpleApplication,
which is a subclass of com.jme3.app.Application. Every JME3 game is an instance of
com.jme3.app.Application (directly, or indirectly).

To run a JME3 game, you first instantiate your Application-based class, and then
call its start() method:
Main app = new Main();
app.start();

Usually, you do that from your Java application's main method.


The simpleInitApp() method is automatically called once at the beginning of every
JME3 game. In this method you create or load game objects before the game starts! Here
is the usual process:
1. Initialize game objects:

Create or load all objects, and position them.

To make geometry (like the box) appear in the scene, attach it to the rootNode.

Examples: Load player, terrain, sky, enemies, obstacles, and place them in their
start positions.

2. Initialize game variables

Game variables track the game state. Set them to their start values.

Examples: Here you set the score to 0, and health to 100%, and so on.

3. Initialize navigation

The following key bindings are pre-configured by default:

W, A, S, D keys - Move around

Mouse and arrow keys - Turn the camera

Escape key - Quit game

The important part is: The JME3 Application has a rootNode object. Your game
automatically inherits the rootNode. Everything attached to the rootNode appears in the
scene. Or in other words: An object that has been created, but is not attached to the
rootNode, remains invisible.


Conclusion
These few lines of code do nothing but display a static object in 3-D, but they already
allow you to navigate around in 3D. You have learned that a SimpleApplication is a good
starting point because it provides you with:

a simpleInitApp() method to initialize the game objects

a rootNode where you attach geometries to make them appear in the scene

useful default navigation settings

In a real game, you will want to:


1. Initialize the game world,
2. Trigger actions in the event loop,
3. Respond to user input.

Node
When creating a 3D game, you start out with creating a scene and some objects. You
place the objects (player tokens, obstacles, etc) in the scene, and move, resize, rotate,
color, and animate them.
In this Tutorial we will have a look at a simple 3D scene. You will learn that the 3D
world is represented in a scene graph, and why the rootNode is important. You will learn
how to create simple objects and how to transform them move, scale, rotate. You will
understand the difference between the two types of Spatials in the scene graph, Node and
Geometry.


Understanding the Terminology


In this tutorial, you will learn some new terms:
1. The scene graph represents your 3D world.
2. Objects in the scene graph (such as the boxes in this example) are called Spatials.

A Spatial is a collection of information about an object: its location, rotation, and scale.


A Spatial can be loaded, transformed, and saved.

3. There are two types of Spatials, Nodes and Geometries.


4. To add a Spatial to the scene graph, you attach the Spatial to the rootNode.
5. Everything attached to the rootNode is part of the scene graph.

Understanding the Code


So what exactly happens in this code snippet? Note that we are using the
simpleInitApp() method that was introduced in the first tutorial.

1. We create a box Geometry.

The box Geometry's extents are (1,1,1), which makes it 2x2x2 world units big.

We place the box at (1,-1,1)

We give it a solid blue material.

2. We create a second box Geometry.

This box Geometry is also 2x2x2 world units big.

We place the second box at (1,3,1). This is straight above the blue box, with a gap
of 2 world units in between.

We give it a solid red material

3. We create a Node.

By default the Node is placed at (0,0,0).

We attach the Node to the rootNode.

An attached Node has no visible appearance in the scene.


4. Note that we have not attached the two boxes to anything yet!
If we ran the application now, the scenegraph would appear empty.
5. We attach the two boxes to the node.
If we ran the app now, we would see two boxes: one straight above the other.

6. Now, we rotate the node.


When we run the application now, we see two boxes on top of each other but both
are tilted at the same angle.
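Put together as code, the snippet looks roughly like this (a sketch: it replaces simpleInitApp() in the Main class shown earlier, additionally needs import com.jme3.scene.Node, and reuses the same material-path assumption as before):

    @Override
    public void simpleInitApp() {
        // 1. Blue box, 2x2x2 world units, placed at (1, -1, 1).
        Box mesh = new Box(Vector3f.ZERO, 1, 1, 1);
        Geometry blue = new Geometry("blue box", mesh);
        blue.setLocalTranslation(new Vector3f(1, -1, 1));
        Material blueMat = new Material(assetManager, "Common/MatDefs/Misc/Unshaded.j3md");
        blueMat.setColor("Color", ColorRGBA.Blue);
        blue.setMaterial(blueMat);

        // 2. Red box of the same size, straight above the blue one at (1, 3, 1).
        Geometry red = new Geometry("red box", mesh);
        red.setLocalTranslation(new Vector3f(1, 3, 1));
        Material redMat = new Material(assetManager, "Common/MatDefs/Misc/Unshaded.j3md");
        redMat.setColor("Color", ColorRGBA.Red);
        red.setMaterial(redMat);

        // 3-5. A pivot node at the origin; attach it to the rootNode, then attach the boxes.
        Node pivot = new Node("pivot");
        rootNode.attachChild(pivot);
        pivot.attachChild(blue);
        pivot.attachChild(red);

        // 6. Rotating the node rotates both boxes around the pivot.
        pivot.rotate(0.4f, 0.4f, 0.0f);
    }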

5.5. Exercise
Create a sphere on top of a box.
What we need to do is:
1. Create a ball

2. Assign a green solid color to the ball

3. Create a blue box

4. Assign a blue solid color to the box


5. Place the ball on top of the box

6. Add both objects to the root node

7. Full Code:
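The full listing is a screenshot in the original; one possible version (a sketch: sphere radius 1, positions chosen so the ball sits exactly on the box top) is:

    package mygame;

    import com.jme3.app.SimpleApplication;
    import com.jme3.material.Material;
    import com.jme3.math.ColorRGBA;
    import com.jme3.math.Vector3f;
    import com.jme3.scene.Geometry;
    import com.jme3.scene.shape.Box;
    import com.jme3.scene.shape.Sphere;

    public class Main extends SimpleApplication {

        public static void main(String[] args) {
            new Main().start();
        }

        @Override
        public void simpleInitApp() {
            // 1-2: a green ball with radius 1.
            Geometry ball = new Geometry("ball", new Sphere(32, 32, 1f));
            Material green = new Material(assetManager, "Common/MatDefs/Misc/Unshaded.j3md");
            green.setColor("Color", ColorRGBA.Green);
            ball.setMaterial(green);

            // 3-4: a blue 2x2x2 box.
            Geometry box = new Geometry("box", new Box(Vector3f.ZERO, 1, 1, 1));
            Material blue = new Material(assetManager, "Common/MatDefs/Misc/Unshaded.j3md");
            blue.setColor("Color", ColorRGBA.Blue);
            box.setMaterial(blue);

            // 5: the box top is at y = 1, so a ball of radius 1 sits at y = 2.
            ball.setLocalTranslation(0, 2, 0);

            // 6: attach both objects to the rootNode.
            rootNode.attachChild(box);
            rootNode.attachChild(ball);
        }
    }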

Output:


Chapter 06
Multimedia Network Communication

Objectives
1. What is J2ME?
2. Configurations
3. Profile
4. High Level User Interface


6.1. What is J2ME?
In the early 1990s, Sun Microsystems created a new programming language called Oak
as part of a research project to build consumer electronics products that relied heavily on
software. The first prototype for Oak was a portable home controller called Star7, a small
handheld device with an LCD touchscreen and built-in wireless networking and infrared
communications. It could be used as remote control for a television or VCR and as an
electronic program guide, and it also had some of the functions that are now associated
with PDAs, such as appointment scheduling. Software for this type of device needs to be
extremely reliable and must not make excessive demands on memory or require an
extremely powerful (and therefore expensive) processor. Oak was developed as a result
of the development team's experiences with C++, which, despite having many powerful
features, proved to be prone to programmer errors that affected software reliability.
Oak was designed to remove or reduce the ability for programmers to create
problems for themselves by detecting more errors at compile time and by removing some
of the features of the C++ language (such as pointers and programmer-controlled
memory management) that seemed to be most closely associated with the reliability
problems. Unfortunately, the market for the type of devices that the new language was
intended for did not develop as Sun hoped, and no Oak-based devices were ever sold to
consumers. However, at around the same time, the beginnings of public awareness of the
Internet created a market for Internet browsing software. In response to this, Sun renamed
the Oak programming language Java and used it to build a cross-platform browser called
HotJava. It also licensed Java to Netscape, which incorporated it into its own popular
browser, at the time the undisputed market leader. Thus, the world was introduced to Java
applets.
Within a couple of years, the cross-platform capabilities of the Java programming
language and its potential as a development platform for free-standing applications that
could be written once and then run on both Windows and Unix-based systems had
sparked the interest of commercial end users as a way of reducing software development
costs. In order to meet the needs of seasoned Windows and Motif/X-Windows developers
working to create applications for sophisticated end users accustomed to using rich user
interfaces, Sun rapidly expanded the scope (and size) of the Java platform. This expanded


platform included a much more complex set of user interface libraries than those used to
build the original applets, together with an array of features for distributed computing and
improved security.
By the time Sun released the first customer shipment of the Java 2 platform, it had
become necessary to split it into several pieces. The core functionality, regarded as the
minimum support required for any Java environment, is packaged as the Java 2 Standard
Edition(J2SE).
Several optional packages can be added to J2SE to satisfy specific requirements for
particular application domains, such as a secure sockets extension to enable electronic
commerce. Sun also responded to an increasing interest in using Java for enterprise-level
development and in application server environments with the Java 2 Enterprise Edition
(J2EE), which incorporates new technology such as servlets, Enterprise JavaBeans, and
JavaServer pages. As with most software, Java's resource requirements have increased
with each release. Although it has its roots in software for consumer electronics products,
J2SE requires far too much memory and processor power to be a viable solution in that
marketplace. Ironically, while Sun was developing Java for the Internet and commercial
programming, demand began to grow for Java on smaller devices and even on smart
cards, thus returning Java to its roots. Sun responded by creating several reduced-functionality Java platforms, each tailored to a specific vertical market segment, some of
which will be covered briefly at the end of this chapter. These platforms are all based on
JDK 1.1, the predecessor of the Java 2 platform, and they take different approaches to the
problem of reducing the platform to fit the available resources. In a sense, therefore, each
of these reduced-functionality platforms represents an ad-hoc solution to this problem, a
solution that has evolved over time to meet the needs of its own particular markets.
J2ME is a platform for small devices that is intended eventually to replace the
various JDK 1.1-based products with a more unified solution based on Java 2. Unlike the
desktop and server worlds targeted by J2SE and J2EE, the micro-world includes such a
wide range of devices with vastly different capabilities that it is not possible to create a
single software product to suit all of them. Instead of being a single entity, therefore,
J2ME is a collection of specifications that define a set of platforms, each of which is suitable for a subset of the total collection of consumer devices that fall within its


scope. The subset of the full Java programming environment for a particular device is
defined by one or more profiles, which extend the basic capabilities of a configuration.
The configuration and profile or profiles that are appropriate for a device depend both on
the nature of its hardware and the market to which it is targeted.

6.2. Configurations
A configuration is a specification that defines the software environment for a range
of devices defined by a set of characteristics that the specification relies on, usually such
things as:

The types and amount of memory available

The processor type and speed

The type of network connection available to the device

A configuration is supposed to represent the minimum platform for its target device
and is not permitted to define optional features. Vendors are required to implement the
specification fully so that developers can rely on a consistent programming environment
and, therefore, create applications that are as device-independent as possible.
J2ME currently defines two configurations:

Connected Limited Device Configuration (CLDC)


CLDC is aimed at the low end of the consumer electronics range. A typical CLDC
platform is a cell phone or PDA with around 512 KB of available memory. For
this reason, CLDC is closely associated with wireless Java, which is concerned
with allowing cell phone users to purchase and download small Java applications
known as MIDlets to their handsets.

Connected Device Configuration (CDC)


CDC addresses the needs of devices that lie between those addressed by CLDC
and the full desktop systems running J2SE. These devices have more memory
(typically 2 MB or more) and more capable processors, and they can, therefore,
support a much more complete Java software environment. CDC might be found
on high-end PDAs and in smart phones, web telephones, residential gateways, and
set-top boxes.


Each configuration consists of a Java virtual machine and a core collection of Java
classes that provide the programming environment for application software. Processor
and memory limitations, particularly in low-end devices, can make it impossible for a
J2ME virtual machine to support all of the Java language features or instruction byte
codes and software optimizations provided by a J2SE VM. Therefore, J2ME VMs are
usually defined in terms of those parts of the Java Virtual Machine Specification and the
Java Language Specification that they are not obliged to implement. As an example of
this, devices targeted by CLDC often do not have floating point hardware, and a CLDC
VM is therefore not required to support the Java language types float and double or any
of the classes and methods that require these types or involve floating-point operations.

6.3. Profile
A profile complements a configuration by adding additional classes that provide
features appropriate to a particular type of device or to a specific vertical market segment.
Both J2ME configurations have one or more associated profiles, some of which may
themselves rely on other profiles. These profiles are described in the following list:

Mobile Information Device Profile (MIDP)


This profile adds networking, user interface components, and local storage to
CLDC. This profile is primarily aimed at the limited display and storage facilities
of mobile phones, and it therefore provides a relatively simple user interface and
basic networking based on HTTP 1.1. MIDP is the best known of the J2ME
profiles because it is the basis for Wireless Java and is currently the only profile
available for PalmOS-based handhelds.

PDA Profile (PDAP)


The PDA Profile is similar to MIDP, but it is aimed at PDAs that have better
screens and more memory than cell phones. The PDA profile, which is not
complete at the time of writing, will offer a more sophisticated user interface
library and a Java-based API for accessing useful features of the host operating
system. When this profile becomes available, it is likely to take over from MIDP


as the J2ME platform for small handheld computers such as those from Palm and
Handspring.

Foundation Profile
The Foundation Profile extends the CDC to include almost all of the core Java 2
Version 1.3 core libraries. As its name suggests, it is intended to be used as the
basis for most of the other CDC profiles.

Personal Basis and Personal Profiles


The Personal Basis Profile adds basic user interface functionality to the
Foundation Profile. It is intended to be used on devices that have an
unsophisticated user interface capability, and it therefore does not allow more
than one window to be active at any time. Platforms that can support a more
complex user interface will use the Personal Profile instead.

RMI Profile
The RMI Profile adds the J2SE Remote Method Invocation libraries to the
Foundation Profile. Only the client side of this API is supported.

Game Profile
The Game Profile, which is still in the process of being defined, will provide a
platform for writing games software on CDC devices.


6.4. High Level User Interface


Now we will learn how to make a simple J2ME user interface using NetBeans. We will learn how to make a simple form, change the display, and use other basic user interface components.
Setting Up The Project
Before we start coding, we must prepare a J2ME project in NetBeans. These are the simple steps:
1. Start NetBeans.
2. Create A Project: File -> New Project.

3. Select the Java ME folder -> Mobile Application.


4. Click Next, and you will need to fill the project name and choose the project location,
and then click finish.

5. Open the source package folder at the left panel. Right click and choose New -> Visual MIDlet.

6. Rename the MIDlet and then click Finish.

Now you are ready to develop your J2ME applications.

Creating a simple application


Now we will make a simple application to give you the big picture of making a J2ME
application.
1. Drag a Form from the palette at the right panel.


2. Right click the form and choose properties.

3. Look at the code properties. Instance Name is used when you code the form. The text filled in Instance Name will be the component's variable name. For example, change the Instance Name to welcomeScreen.


4. Pick the started menu, drag it to the welcomeScreen

Now the welcome screen is the first screen when the application is started.
5. Click on the welcomeScreen and go to the screen tab, now the screen will look like
this.

This is the form content editor. You can freely add and remove components in this
screen.
6. Drag a TextField. Double click the bold text and change it to Message.


7. Drag an Exit Command to the form.

8. Go back to the Flow tab, click the exitCommand, and drag it to the Mobile Device component.

9. Now let's run the application by pressing F6.

Now you have developed your first application. You can test it by clicking the button below the launch text. The exit button on the left will tell the application to stop.


It stops after we drag the flow from exit command to the starting component.

Displayable
The Display class represents a logical device screen on which a MIDlet can display its
user interface. Each MIDlet has access to a single instance of this class; you can obtain a
reference to it by using the static getDisplay( ) method:
public static Display getDisplay(MIDlet midlet);

A MIDlet usually invokes getDisplay( ) when its startApp( ) method is called


for the first time and then uses the returned Display object to display the first screen of its
user interface. You can safely call this method at any time from the start of the initial
invocation of the startApp( ) method, up to the time when the MIDlet returns from


destroyApp( ) or notifyDestroyed( ), whichever happens first. Each MIDlet has its

own, unique and dedicated instance of Display, so getDisplay( ) returns the same value
every time it is called. A MIDlet will, therefore, usually save a reference to its Display
object in an instance variable rather than repeatedly call getDisplay( ).
Every screen that a MIDlet needs to display is constructed by mounting user
interface components (which are called items in MIDP terminology) or drawing shapes
onto a top-level window derived from the abstract class Displayable, which will be
discussed later. A Displayable is not visible to the user until it is associated with the
MIDlet's Display object using the Display's setCurrent( ) method:
public void setCurrent(Displayable displayable)

Similarly, the Displayable currently associated with a Display can be retrieved by calling getCurrent( ).
Forms and Items
Form is a subclass of Screen that can be used to construct a user interface from simpler

elements such as text fields, strings, and labels. Like TextBox, Form covers the entire
screen and inherits from its superclasses the ability to have a title, display a Ticker, and
be associated with Commands.
Item provides only the ability to store and retrieve a text label, but because each

component that can be added to a Form is derived from Item, it follows that all of them
can have an associated label. The implementation displays this somewhere near the
component in such a way as to make the association between the label and the component
clear. The components that MIDP provides are described briefly.
Item - Description

StringItem - An item that allows a text string to be placed in the user interface

TextField - A single-line input field much like the full-screen TextBox

DateField - A version of TextField that is specialized for the input of dates; it includes a visual helper that simplifies the process of choosing a date

Gauge - A component that can be used to show the progress of an ongoing operation or allow selection of a value from a contiguous range of values

ChoiceGroup - A component that provides a set of choices that may or may not be mutually exclusive and therefore may operate either as a collection of checkboxes or radio buttons

ImageItem - A holder that allows graphic images to be placed in the user interface

Some examples:
1. Create a new Visual MIDlet.
2. Create a new Form with regisForm as its instance name.
3. Drag a TextField, a DateField, and a ChoiceGroup so that the layout looks like the one below.


4. Now we will edit the data in the ChoiceGroup. Change the text in the ChoiceGroup to Gender.
5. Drag two Choice Elements to the ChoiceGroup, and rename them by double clicking the components.


6. Change the choice group type to EXCLUSIVE from the property panel.

7. Add an OK command, and change the text to Submit

8. Drag the flow from the start component to the regisForm

9. Create a new Form, rename it to resultForm, and create a flow from regisForm's Submit button to the resultForm.


10. Fill the resultForm with 3 StringItems. Change their instance names respectively to itemName, itemDate, itemGender.
11. Add an Exit Command to the form, and connect the command's flow to the starting component.
12. Go to the Source tab and find the commandAction method.

13. Add this code:
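The code is a screenshot in the original; the sketch below shows the relevant part of commandAction(). The input component names (nameField, birthField, genderGroup) and the command names are assumptions; use the instance names you chose in the designer, and keep the switchDisplayable() and exitMIDlet() helpers that NetBeans generates:

    public void commandAction(Command command, Displayable displayable) {
        if (displayable == regisForm) {
            if (command == okCommand) {                      // the Submit command
                // Copy the entered data into the result form's string items.
                itemName.setText(nameField.getString());
                itemDate.setText(String.valueOf(birthField.getDate()));
                if (genderGroup.getSelectedIndex() == 0) {
                    itemGender.setText("Male");
                } else {
                    itemGender.setText("Female");
                }
                switchDisplayable(null, resultForm);         // show the second form
            }
        } else if (displayable == resultForm) {
            if (command == exitCommand) {
                exitMIDlet();
            }
        }
    }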


Now let's test the application.

This first form lets you fill in your data.

This second form displays the data entered at the first form.

Now for the explanation:


Every time a command button is pressed, the commandAction method handles its action.

This piece of code checks whether the form is the regisForm and the command is Submit, and then does something.

This code sets the itemName string item's content from the TextField's text.

This code sets the itemDate string item's content from the DateField.


This code changes itemGender's text. If the index selected in the ChoiceGroup is the first (index 0), the gender is set to Male; otherwise, it is set to Female.

This is the default code that switches which form is displayed. There is no need to edit it.
For complete examples, you can create a new project from the UI Demo in the NetBeans samples.

6.5. Exercise
Create a simple arithmetic calculator that calculates the sum and difference of two numbers.


What to do:
1. Create a form that inputs two numbers and calculates them.
2. Add two TextFields and a StringItem.

3. Change both TextFields' input constraints to NUMERIC. This ensures that no characters other than 0-9 can be entered in the TextFields.
4. Add 3 commands: 1 exit command and two ok commands. The exit command quits the application, and the other two select which operation to perform (addition, subtraction).

5. Change the first okCommand's label to Add, and its instance name to commandAdd. This command will be used to do addition.

6. Change the second okCommand's label to Subtract, and its instance name to commandSubstract.


7. Now we need to create the application's flow. Create the flow so it looks like this.

This ensures that after you start the application, you go to the start form, and after you choose the exit command, the application stops.
8. Now for the application's logic.

For every operation, we need to convert both TextFields' data from String to int.

After the conversion, we calculate the value based on the chosen operation, and then store the result in the StringItem component.

Note that the StringItem only accepts a String as its input, so we need to convert the result variable from int to String by concatenating an empty string.
9. Run the application and see the result.


Full application logic code:
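The listing is a screenshot in the original; a sketch of the command handling, where the instance names calcForm, numberField1, numberField2, and resultItem are assumptions:

    public void commandAction(Command command, Displayable displayable) {
        if (displayable == calcForm) {
            if (command == exitCommand) {
                exitMIDlet();                                 // generated by NetBeans
            } else if (command == commandAdd || command == commandSubstract) {
                // Convert both text fields from String to int.
                int a = Integer.parseInt(numberField1.getString());
                int b = Integer.parseInt(numberField2.getString());

                int result = (command == commandAdd) ? a + b : a - b;

                // The StringItem only accepts a String, so convert the int back.
                resultItem.setText("" + result);
            }
        }
    }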
