3.1 Mobile Media API (JSR-135)

The Mobile Media API (MMAPI) extends the functionality of the J2ME platform by providing audio, video and other time-based multimedia support to resource-constrained devices [7] [9].

3.1.1 Getting a Video Capture Player. The first step in taking pictures (officially called video capture) in a MIDlet is obtaining a Player from the Manager.

Player mPlayer = Manager.createPlayer("capture://video");

The Player needs to be realized to obtain the resources that are needed to take pictures.

mPlayer.realize();

3.1.2 Showing the Camera Video. The video coming from the camera can be displayed on the screen either as an Item in a Form or as part of a Canvas. A VideoControl makes this possible. To get a VideoControl, just ask the Player for it:

VideoControl mVideoControl =
    (VideoControl) mPlayer.getControl("VideoControl");

... already be set by the implementation (the address is taken from the URL that was passed when the client connection was created). Before sending the text message, the method populates the outgoing message by calling setPayloadText().

TextMessage tmsg =
    (TextMessage) mc.newMessage(MessageConnection.TEXT_MESSAGE);
tmsg.setPayloadText(msg);
mc.send(tmsg);

4. Prototype

The system architecture is shown in Figure 2.
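The prototype combines exactly the MMAPI and WMA fragments quoted in Section 3. As an illustration only, the two can be stitched together in one hedged sketch; the helper name sendSnapshot, the Form-based viewfinder, the use of the default snapshot encoding, and the choice of a BinaryMessage for the image payload are assumptions for this sketch, not code from the paper:

```java
import javax.microedition.lcdui.Form;
import javax.microedition.lcdui.Item;
import javax.microedition.media.Manager;
import javax.microedition.media.Player;
import javax.microedition.media.control.VideoControl;
import javax.wireless.messaging.BinaryMessage;
import javax.wireless.messaging.MessageConnection;

// Inside the MIDlet. Illustrative helper, not from the paper:
// capture one frame and send it over an already-open MessageConnection.
byte[] sendSnapshot(Form form, MessageConnection mc) throws Exception {
    Player mPlayer = Manager.createPlayer("capture://video");
    mPlayer.realize();                        // acquire the camera resources

    VideoControl mVideoControl =
        (VideoControl) mPlayer.getControl("VideoControl");
    // Show the camera video as an Item appended to the Form.
    Item viewfinder = (Item) mVideoControl.initDisplayMode(
            VideoControl.USE_GUI_PRIMITIVE, null);
    form.append(viewfinder);

    mPlayer.start();
    byte[] image = mVideoControl.getSnapshot(null); // null = default encoding

    BinaryMessage bmsg = (BinaryMessage)
        mc.newMessage(MessageConnection.BINARY_MESSAGE);
    bmsg.setPayloadData(image);
    mc.send(bmsg);
    return image;
}
```

On a real device the MIDlet needs the corresponding capture and messaging permissions, and getSnapshot may block until the user grants them.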
Data Service | GPRS Class 10, 32-48kbps | CDMA1X, 400-700kbps

Figure 3 is the UI (User Interface) of the prototype application. The first picture in the form is the real-time frame, obtained directly from the camera. The second image is the template image. If any moving objects are detected, the third picture is displayed on the form, and some real-time information is displayed below the pictures.

... of several frame times. The "Template time" is the total time to construct the background template. In this instance, the "Snapshot time", "DIP time" and "Frame time" of the 100th frame are presented, and the "Frame time Average" of the 100 frames and the "Template time" are also given.

Table 2: Performance ("ms" is millisecond)

                                                E680       IC902
Image size                                      192*192    160*120
Snapshot time (100th frame)                     1213 ms    1565 ms
DIP (digital image process) time (100th frame)  278 ms     296 ms
Frame time (100th frame)                        1491 ms    1861 ms
Frame time Average (100 frames)                 1542 ms    1792 ms
Template time (first 10 frames)                 20422 ms   25639 ms
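The "Template time" and "DIP time" rows measure the two image-processing steps: building the background template from the first frames and subtracting it from each new frame. The paper does not list this code, so the following plain-Java sketch is only an illustration of the general technique surveyed in [2]-[6]; the per-pixel averaging template, the 0-255 luminance representation, and the THRESHOLD constant are assumptions, not the prototype's actual algorithm.

```java
// Sketch of background-template construction and subtraction.
// Frames are flattened arrays of 0-255 luminance values (an assumption).
public class BackgroundSubtraction {
    static final int THRESHOLD = 30; // assumed per-pixel difference threshold

    // Build the background template as the per-pixel average of the first
    // frames, mirroring the "Template time (first 10 frames)" measurement.
    static int[] buildTemplate(int[][] frames) {
        int n = frames.length, len = frames[0].length;
        int[] template = new int[len];
        for (int[] frame : frames)
            for (int i = 0; i < len; i++) template[i] += frame[i];
        for (int i = 0; i < len; i++) template[i] /= n;
        return template;
    }

    // Mark as foreground every pixel whose difference from the template
    // exceeds the threshold; this is the per-frame "DIP" step.
    static boolean[] subtract(int[] frame, int[] template) {
        boolean[] foreground = new boolean[frame.length];
        for (int i = 0; i < frame.length; i++)
            foreground[i] = Math.abs(frame[i] - template[i]) > THRESHOLD;
        return foreground;
    }
}
```

A fixed averaged template keeps the per-frame cost at a single pass over the image, which matters on handsets where, as Table 2 shows, even this step takes hundreds of milliseconds.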
Figure 5: Real Time Frame

Figure 6: Foregrounds (Moving Object)

6. Conclusion

The moving object recognition technology leads to the development of autonomous systems, which also minimize the network traffic.

With good mobility, the system can be deployed rapidly in an emergency and can be a useful supplement to a traditional monitoring system.

With the help of J2ME technology, the differences between various hardware platforms are minimized. All embedded platforms equipped with a camera and supporting JSR135/JSR120 can install this system without any changes to the application.

Also, the system can be extended to a distributed wireless network system. Many terminals work together, reporting to a control center and receiving commands from the center. Thus, a low-cost wide-area intelligent video surveillance system can be built. Furthermore, with the development of embedded hardware, more complex digital image processing algorithms can be used to provide more kinds of applications in the future.

7. Acknowledgements

This work was supported by the National High Technology Research and Development Program of China (863 Program) (2007AA11Z227).

8. References

[1] M. Valera, S.A. Velastin, Intelligent distributed surveillance systems: a review, IEE Proceedings - Vision, Image and Signal Processing, Apr. 2005, vol. 152, no. 2, pp. 192-204.
[2] M. Piccardi, Background subtraction techniques: a review, IEEE International Conference on Systems, Man and Cybernetics, Oct. 2004, vol. 4, pp. 3099-3104.
[3] T. Horprasert, D. Harwood and L.S. Davis, A Robust Background Subtraction and Shadow Detection, Proceedings of the Fourth Asian Conference on Computer Vision, Jan. 2000, vol. 1, pp. 983-988.
[4] Y. Ivanov, A. Bobick, J. Liu, Fast Lighting Independent Background Subtraction, International Journal of Computer Vision, Jun. 2000, vol. 37, no. 2, pp. 199-207.
[5] A. Elgammal, D. Harwood, L. Davis, Non-parametric Model for Background Subtraction, Proceedings of the 6th European Conference on Computer Vision - Part II, 2000, pp. 751-767.
[6] A. M. McIvor, Background Subtraction Techniques, Proceedings of Image & Vision Computing New Zealand 2000 (IVCNZ'00), Reveal Limited, Auckland, New Zealand, 2000.
[7] C. Enrique Ortiz, The Wireless Messaging API, developers.sun.com, 2002.
[8] Jonathan Knudsen, Taking Pictures with MMAPI, developers.sun.com, 2003.
[9] Sun Microsystems, Inc., Nokia Corporation, JSR 135 MMAPI 1.2 (Final Version) Specification, 2006.
[10] Sun Microsystems, Inc., JSR 120 Wireless Messaging API (WMA) Specification, 2002.